http://math.stackexchange.com/questions/145795/live-variables-in-context-free-grammar
# Live Variables in Context Free Grammar A variable $A$ in a context free grammar $G= \langle V, \Sigma, S, P\rangle$ is live if $A \Rightarrow^* x$ for some $x \in \Sigma^*$. Give a recursive algorithm for finding all live variables in a certain context free grammar. I don't necessarily need an answer to this. Mostly I am having a very difficult time deciphering what this question is asking, more specifically its definition of live variables. - Pretend that $A$ is the initial symbol: starting from it, can you generate some word consisting entirely of terminal symbols? If so, $A$ is live. –  Brian M. Scott May 16 '12 at 9:00 @BrianM.Scott So isn't that every variable? Unless A→C and C→xC, in which case it would be infinite? –  canton May 16 '12 at 9:08 It's possible that every non-terminal is live, but it's also possible to have non-terminals that aren't live. The exercise is to find a 'nice' way to find the live ones, given only the grammar. It isn't always obvious whether a non-terminal is live or not. –  Brian M. Scott May 16 '12 at 9:10 Okay, now I understand the definition of 'live' and am still stumped. –  canton May 16 '12 at 9:20 Hint: A non-terminal can be shown to be live if (a) it is the LHS of a rule whose RHS consists only of terminals, or (b) it is the LHS of a rule whose RHS consists only of terminals and non-terminals already known to be live. This yields a simple iterative algorithm: start by finding a set of non-terminals satisfying (a), and enlarge it using (b) until no more changes occur. Transform this into a recursive algorithm and you are done. –  Johannes Kloos May 16 '12 at 9:28 Consider this example: S → A | B A → aA | a B → bB C → c This is a grammar for the set of all nonempty strings of a's. The symbol B is not live, because it is never involved in the production of a terminal string; you can generate it from S or from B, but you can never finish the production because you can never get rid of it. So productions involving B are useless, and you can delete them from the grammar without changing the language that is generated: S → A A → aA | a C → c A symbol can also be useless in a different way: there may be no way to produce it from the start symbol. C is an example here (note that C is still live by the definition above, since C ⇒ c, but it is unreachable), and again, productions involving C can be deleted from the grammar without changing the language: S → A A → aA | a Your job is to describe an algorithm that decides which of the symbols in a grammar are live. - 1. Make a list of all the variables. Each variable will get a check mark next to it if it is live. Initially no variable has a check mark. 2. Make a list of all the productions. Each production will be crossed out when it is used. 3. Repeatedly scan the list of productions, ignoring the crossed-out ones. If a production has variable $V$ on the left-hand side, and if all the variables on the right-hand side already have check marks, then give $V$ a check mark also, and cross out all the productions with $V$ on the left-hand side. Note that "all the variables on the right-hand side" includes the case where there are no variables on the right-hand side. 4. When you finish a scan of the list of productions without crossing any out, stop. For example, suppose the productions are: S → A S → B A → aA A → a B → bB C → c On the first scan of the productions, we see that $A$ and $C$ have productions $A → a$ and $C → c$ where all the variables on the right-hand side are checked. So we check $A$ and $C$ and cross off the productions for these variables.
The remaining productions are now: S → A S → B B → bB Now we see that $S$ has the production $S → A$ where all the variables on the right-hand side are checked, so we check $S$ and cross off its productions, leaving only: B → bB We cannot add $B$ to the list of live variables, so we are finished; $A, C,$ and $S$ are live. -
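A minimal sketch of the marking algorithm above in Python (my illustration, using a simple list-of-productions grammar encoding; any symbol that never appears as a left-hand side is treated as a terminal):

```python
def live_variables(productions):
    """Iteratively mark variables live, exactly as in steps 1-4 above."""
    variables = {lhs for lhs, _ in productions}
    live = set()
    changed = True
    while changed:                     # rescan until a full pass adds nothing
        changed = False
        for lhs, rhs in productions:
            if lhs in live:
                continue               # its productions are already "crossed out"
            # all variables on the RHS must be checked (terminals are ignored,
            # so an all-terminal RHS qualifies immediately)
            if all(sym in live for sym in rhs if sym in variables):
                live.add(lhs)
                changed = True
    return live

# The example grammar from the answer:
G = [("S", ["A"]), ("S", ["B"]),
     ("A", ["a", "A"]), ("A", ["a"]),
     ("B", ["b", "B"]),
     ("C", ["c"])]
print(sorted(live_variables(G)))       # ['A', 'C', 'S']
```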
https://socratic.org/questions/how-do-you-graph-f-x-3x-2-by-plotting-points-1
# How do you graph f(x)=3x - 2 by plotting points? Aug 24, 2017 See a solution process below: #### Explanation: Because this is a linear function, we need to plot just two points. For $x = 0$: $f \left(0\right) = \left(3 \cdot 0\right) - 2 = 0 - 2 = - 2$ or $\left(0 , - 2\right)$ For $x = 2$: $f \left(2\right) = \left(3 \cdot 2\right) - 2 = 6 - 2 = 4$ or $\left(2 , 4\right)$ Next, we can plot these two points: graph{(x^2+(y+2)^2-0.025)((x-2)^2+(y-4)^2-0.025)=0} We can now draw a line through the two points to graph the function: graph{(y-3x+2)(x^2+(y+2)^2-0.025)((x-2)^2+(y-4)^2-0.025)=0}
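As a quick sketch (my addition, not part of the original answer), the same two points and line can be reproduced with matplotlib:

```python
import numpy as np
import matplotlib.pyplot as plt

f = lambda x: 3 * x - 2

pts_x = [0, 2]
pts_y = [f(x) for x in pts_x]     # the points (0, -2) and (2, 4)

xs = np.linspace(-1, 3, 100)
plt.plot(xs, f(xs))               # the line y = 3x - 2 through both points
plt.scatter(pts_x, pts_y)         # the two plotted points
plt.show()
```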
https://www.doubtnut.com/question-answer-physics/a-cell-of-emf-and-internal-resistance-r-sends-a-current-of-1-0-a-when-it-is-connected-to-an-external-644441401
# A cell of e.m.f. ε and internal resistance r sends a current of 1.0 A when it is connected to an external resistance of 1.9 Ω. But it sends a current of 0.5 A when it is connected to a resistance of 3.9 Ω. Calculate the values of ε and r. Answer: ε = 2.0 V, r = 0.1 Ω
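The page gives only the final values; for completeness, the standard derivation (my addition) applies $\varepsilon = I(R + r)$ to both measurements:

$$\varepsilon = I_1(R_1 + r) = 1.0\,(1.9 + r), \qquad \varepsilon = I_2(R_2 + r) = 0.5\,(3.9 + r)$$

$$\Rightarrow\; 1.9 + r = 1.95 + 0.5r \;\Rightarrow\; r = 0.1\ \Omega, \qquad \varepsilon = 1.0 \times (1.9 + 0.1) = 2.0\ \text{V}$$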
http://physics.stackexchange.com/questions/57013/what-is-the-status-of-wittens-and-vafas-argument-that-the-qcd-vacuum-energy-is
# What is the status of Witten's and Vafa's argument that the QCD vacuum energy is a minimum for zero $\theta$ angle? The argument, which I reproduce here from Ramond's 'Journeys BSM', is originally by Witten and Vafa (Phys. Rev. Lett. 53, 535 (1984)). The argument is that for $\theta = 0$ (mod $2\pi$) the QCD vacuum energy $E(\theta)$ is minimized. Starting from the Euclidean path integral for QCD (with just massive fermions charged under QCD, nothing else) in a volume $V$, $e^{- V E (\theta)} = \int \mathcal{D}A\, \mathcal{D} q\, \mathcal{D} \bar{q}\, \exp \left( - \int d^4 x\, \mathcal{L}\right)$ where $\mathcal{L} = - \frac{1}{4 g^2} \mathrm{Tr} (G_{\mu \nu}G_{\mu \nu}) + \bar{q}_i (\gamma^\mu D_\mu+ m_i) q_i + \frac{i \theta}{32 \pi^2} \mathrm{Tr}(G_{\mu \nu} \tilde{G}_{\mu \nu} ).$ Integrating out the quarks we obtain: $e^{- V E (\theta)} = \int \mathcal{D}A\, \det{(\gamma^\mu D_\mu+ m_i)}\, \exp \int d^4 x\left( \frac{1}{4 g^2} \mathrm{Tr} (G_{\mu \nu}G_{\mu \nu}) - \frac{i \theta}{32 \pi^2} \mathrm{Tr}(G_{\mu \nu} \tilde{G}_{\mu \nu} )\right).$ In pure QCD the quarks have vector-like couplings and so $\det{ (\gamma^\mu D_\mu+ m_i) }$ is positive and real. For each eigenvalue $\lambda$ of $\gamma^\mu D_\mu$ there is another of opposite sign. Thus $\det{ (\gamma^\mu D_\mu+ m_i) } = \prod_\lambda (i \lambda +M) = \prod_{\lambda>0} (i \lambda +M)(-i \lambda +M) = \prod_{\lambda>0} ( \lambda^2 +M^2 ) >0$ Thus if $\theta$ were zero, the integrand would be made up of purely real and positive quantities. Now, the inclusion of the $\theta$ term, with its $i$, can only reduce the value of the path integral, which is the same as increasing the value of $E(\theta)$. It follows that $E(\theta)$ is minimized at $\theta = 0$. This is the motivation for axions, where $\theta$ is promoted to a dynamical field which then relaxes to a vev of 0. Ramond adds there is the 'slight caveat' that with Yukawa couplings, the fermion determinant may no longer be positive or real. Witten and Vafa don't seem to make any similar caveat. Ramond's note about Yukawas seems to invalidate the whole argument. Where does this leave axions as a candidate to solve the Strong CP problem?
https://www.shaalaa.com/question-bank-solutions/the-numerator-of-a-fraction-is-5-less-than-its-denominator-if-3-is-added-to-the-numerator-and-denominator-both-the-fraction-becomes-frac-2-3-find-the-original-fraction-solving-linear-inequations_110182
# The Numerator of a Fraction is 5 Less than Its Denominator. If 3 is Added to Both the Numerator and Denominator, the Fraction Becomes $\frac{2}{3}$. Find the Original Fraction. - Mathematics Sum The numerator of a fraction is 5 less than its denominator. If 3 is added to both the numerator and denominator, the fraction becomes $\frac{2}{3}$. Find the original fraction. #### Solution Let the denominator of the original fraction = x Then numerator = x – 5 and fraction = (x - 5)/x According to the condition, (x - 5 + 3)/(x + 3) = 2/3 ⇒ (x - 2)/(x + 3) = 2/3 ⇒ 3(x - 2) = 2(x + 3) ⇒ 3x - 6 = 2x + 6 ⇒ x = 12 ∴ Original fraction = (x - 5)/x = (12 - 5)/12 = 7/12 Check: (7 + 3)/(12 + 3) = 10/15 = 2/3 Concept: Linear Equations in One Variable #### APPEARS IN Selina Concise Mathematics Class 8 ICSE Chapter 14 Linear Equations in one Variable Exercise 14 (C) | Q 10 | Page 170
http://orishirishitakeaway.com/gloomhaven-take-mykh/be71fd-convergence-in-law
Convergence in Law

In the lecture entitled "Sequences of random variables and their convergence" we explained that different concepts of convergence are based on different ways of measuring the distance between two random variables (how "close to each other" two random variables are). We shall follow mainly the first two chapters of [Billingsley, 1999]. For the qualitative approach, we shall be interested in the connections between the various possible notions of convergence in a probabilistic setting, and in the underlying topology of the space of probability measures on a topological space.

There is another version of the law of large numbers, called the strong law of large numbers (SLLN); the weak law is so called because it refers to convergence in probability. Let us consider again the game that consists of tossing a coin. The probability that the outcome will be tails is equal to 1/2. It means that if we toss the coin n times (for large n), we get tails about n/2 times; the relative frequency associated with the event "tails" tends to 1/2. We will discuss the SLLN in Section 7.2.7. Related questions: conditions for convergence of moments given uniform convergence of distribution functions, and the fact that convergence in law together with continuity of the limit implies uniform convergence of distribution functions.

Let $X$ be a centered Gaussian process defined on, say, the time interval $[0,1]$. Consider the sequence of its quadratic variations
$$V_n = \sum_{k=0}^{n-1} \left( X_{(k+1)/n} - X_{k/n} \right)^2, \qquad n \geq 1.$$
By means of Malliavin calculus, we prove the convergence in law for certain weighted quadratic variations of a fractional Brownian motion $B$ with Hurst index $H$ between 1/4 and 1/2. Keywords: convergence in law; second Wiener chaos; second Wigner chaos; quadratic form; free probability. 2000 Mathematics Subject Classification: 46L54; 60F05, 60G15, 60H05. In Sec. 4 an extended Wichura's theorem is proved: given such convergence in law, there exist almost surely convergent realizations.

Convergence, in mathematics, is the property (exhibited by certain infinite series and functions) of approaching a limit more and more closely as an argument (variable) of the function increases or decreases or as the number of terms of the series increases. For example, the function y = 1/x converges to zero as x increases.

Referenced works and talks: Bramson, M., Ding, J. and Zeitouni, O. (2016), "Convergence in law of the maximum of nonlattice branching random walk" (abstract: let η*_n denote the maximum, at time n, of a nonlattice one-dimensional branching random walk η_n possessing (enough) exponential moments); Benaïm, M. and Raimond, O., "Self-interacting diffusions II: convergence in law", Annales de l'I.H.P. Probabilités et statistiques, Tome 39 (2003) no. 6, p. 1043-1055; A. Rackauskas (Vilnius), "Convergence in law of partial sum processes in p-variation norm", seminar talk, Wednesday 25 March 2009, 10:30-11:30, seminar room M3-324; Lehmann, E.L. (1999), Elements of Large-Sample Theory, Springer Texts in Statistics (chapter "Convergence in Probability and in Law").

The objective of a convergence project is to achieve deeper relationships with a smaller number of firms. With a smaller, more manageable panel of firms, clients are better positioned for positive collaboration with their firms. Law firm panel convergence helps to narrow the list of panels for effective management, but it doesn't complete the job. Our Law Firm Convergence initiatives have driven meaningful change in some of the world's largest legal departments.

Convergence between the common law and the civil law tradition is a well-established topic of the academic discipline known as comparative law. In order to analyze analogies and differences between the common law and the civil law systems, comparative lawyers have developed a number of tools, among which convergence is quite an important one. In an article published in 1996, La Porta, Lopez-de-Silanes, Shleifer and Vishny advanced an alluring thesis with regard to legal origin and the development of dispersed shareholder ownership. Company law harmonization efforts mirror prevailing fashions about what is considered good corporate law; see Mathias M. Siems, Convergence in Shareholder Law, Cambridge: Cambridge University Press, 2008, xlix + 400 pp. (£75 hardback), ISBN 978-0-521-87675-9, a book clearly structured in a way that allows the overall argument to develop. In 2010, at Utrecht University, Anna Gerbrandy, the first of the candidates I (co)supervised to complete her PhD thesis, defended "Convergence in Competition Law: The influence of EC law on substantive application, access, evidence and review by the Dutch administrative competition court, viewed in the light of effective judicial protection".

As convergence is an all-encompassing phenomenon, it is important for us to analyze in detail the newly proposed Indian Convergence Law, which aims to promote, facilitate and develop in an orderly manner the carriage and content of communications, including broadcasting, telecommunications and multimedia, and further aims to establish an autonomous commission.

Convergence theory also allows that the economies of developing nations will grow more rapidly than those of industrialized countries under these circumstances, so all should reach an equal footing eventually; some examples of convergence theory include Russia and Vietnam, formerly purely communist countries (convergence theory would not apply to countries outside these conditions). Media convergence is the joining, or "converging," of distinct technologies into one: it takes completely separate ideas and smashes them together, so that we're left with one big idea.
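As a quick illustration of the weak-law statement above (my addition, not from the source page), a simulation shows the relative frequency of tails approaching 1/2:

```python
import random

# Relative frequency of "tails" in n fair coin tosses, for growing n.
for n in (100, 10_000, 1_000_000):
    tails = sum(random.random() < 0.5 for _ in range(n))
    print(n, tails / n)   # tends to 0.5 as n grows
```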
https://www.physicsforums.com/threads/sugar-dissolving-in-hot-water.113635/
# Homework Help: Sugar dissolving in hot water 1. Mar 9, 2006 ### rachael A spoonful of sugar is dissolving in hot water. The initial mass of the sugar was 7 grams and 5 minutes later the mass present had halved. The mass of sugar, S grams, at time t minutes after it was placed in the hot water can be modelled by S(t) = S0e^−kt, t ≥ 0. b Find the exact value of k. S0=7 therefore i sub it in the equation where i got: S(t) = 7e^-5k what does S(t) equal? does it equal 35? thank you 2. Mar 9, 2006 ### TD So the model is $$s\left( t \right) = s\left( 0 \right)e^{ - kt}$$. You already know s(0), that's 7. That leaves one unknown parameter, k. But you have extra information, s(5) = 3.5, let's plug this in: $$3.5 = 7e^{ - 5k}$$ Can you solve for k? 3. Mar 9, 2006 ### rachael oh okay i did not read the question properly. i did not see the halving part. thank you anyway 4. Mar 9, 2006 ### TD You're welcome
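For reference, the step TD leaves to the reader works out as follows (my addition):

$$3.5 = 7e^{-5k} \;\Rightarrow\; e^{-5k} = \tfrac{1}{2} \;\Rightarrow\; -5k = -\ln 2 \;\Rightarrow\; k = \frac{\ln 2}{5}$$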
http://mathhelpforum.com/differential-equations/172158-bernoulli-2.html
# Math Help - bernoulli 1. Originally Posted by Prove It Step 1: Solve for $\displaystyle y^{-2}$. Step 2: Take both sides to the power of $\displaystyle -\frac{1}{2}$. i have never seen this before... i would only have thought of multiplying by y^3 but that wouldnt help. what do you do with a -1/2 power? 1/2 is a root but what do you do with minus half? and do you apply it to the WHOLE rhs or just the individual terms? ah, is it (1/y^2)^(-1/2) = 1/(1/sqrt(y^2)) = 1/(1/y) = y. same thing to the rhs? so it becomes 1/(sqrt(rhs))? 2. Originally Posted by Prove It Except $\displaystyle \frac{dv}{dx} - \frac{3v}{x} = \frac{x + 1}{x}$ is not a homogeneous equation, and is not separable. Look to my answer #6. I said solve the homogeneous equation. For a linear equation, the homogeneous equation is always separable. Solving this one, you can use (obviously that was my proposal) the variation of constants method. Since it's first-order linear, the Integrating Factor method is needed... It is not necessary. Fernando Revilla 3. it worked just fine and the i.f. method is easier anyway. plus as i said, this question does not require knowledge of homogeneous equations so that was not the intended method 4. Originally Posted by mathcore it worked just fine and the i.f. method is easier anyway. plus as i said, this question does not require knowledge of homogeneous equations so that was not the intended method All right, but up until my answer #6 nobody could know which was the intended method. Fernando Revilla
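For context, here is the integrating-factor computation for the linear equation quoted above (my sketch; the thread itself leaves it implicit):

$$\frac{dv}{dx} - \frac{3}{x}v = \frac{x+1}{x}, \qquad \mu(x) = e^{\int -3/x\,dx} = x^{-3}$$

$$\frac{d}{dx}\left(x^{-3}v\right) = x^{-4}(x+1) = x^{-3} + x^{-4} \;\Rightarrow\; x^{-3}v = -\tfrac{1}{2}x^{-2} - \tfrac{1}{3}x^{-3} + C$$

$$v = -\frac{x}{2} - \frac{1}{3} + Cx^{3}, \qquad y = v^{-1/2} \text{ (from the substitution } v = y^{-2}\text{)}$$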
https://engineering.stackexchange.com/tags/lasers/new
# Tag Info The total electrical to optical power efficiency of a laser system is sometimes termed wall-plug efficiency. It is the ratio of $$\eta = \frac{\text{optical output power}}{\text{consumed electrical input power}}$$ The values vary a lot, and also there are other factors like cooling system power which might or might not be included, in which case the ...
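As a trivial illustration of the ratio (my sketch, with made-up numbers):

```python
def wall_plug_efficiency(optical_out_w: float, electrical_in_w: float) -> float:
    """Wall-plug efficiency: optical output power / consumed electrical input power."""
    return optical_out_w / electrical_in_w

# e.g. a laser emitting 2 W of light while drawing 8 W from the wall:
print(wall_plug_efficiency(2.0, 8.0))   # 0.25, i.e. 25% efficient
```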
https://www.physicsforums.com/threads/difference-between-a-variable-and-a-constant.717363/
# Difference between a variable and a constant? 1. Oct 18, 2013 ### lluke9 I know this is a very elementary question, but I suddenly realized in calculus that I don't really know precisely what the definition of a variable and a constant was. I know what people tend to call constants and variables in something like: ax + by = c, where you'd call x and y variables and a, b, c constants. ...But aren't a and b subject to change just as much as x and y? And x and y just represent a SINGLE VALUE, not many values! They don't "vary". So isn't everything a constant? x is supposed to represent some number, or in other words, some CONSTANT. Also, why is it that in ∫ f(x)dx = F(x) + C, C is called the constant while x is a variable? 2. Oct 18, 2013 ### Staff: Mentor No, not in the usual contexts. Variables are placeholders into which we can insert whatever values are appropriate. Although we don't know the values of a, b, and c, they should be treated as fixed constants, albeit ones whose values aren't specified. Some people call these parameters. No, not true. The equation ax + by = c, with a, b, and c fixed (i.e., constants), has a graph that is a straight line. Every pair of numbers (x, y) that is on this line is also a solution to this equation. There are an infinite number of points (x, y) on the line, which means that x and y can take on an infinite number of values. Of course, with a, b, and c being fixed, if you know the value of y, then there is only one value of x for which (x, y) satisfies the equation. The point is, though, that there are many, many possible values for x or y. No, as explained above. Here's a specific example: ∫x^2 dx = (1/3)x^3 + C This equation says that all antiderivatives of the function f(x) = x^2 are of the form (1/3)x^3 plus some constant. The opposite statement is that the derivative of (1/3)x^3 + C is x^2. Here we have two functions, x → x^2 and x → (1/3)x^3. The output of each function depends on what went in as an input value. If you put in two different x values (one at a time), you get two different output values. In contrast, a constant's value doesn't depend on some variable. Its value remains unchanged, even when its value is not explicitly stated. A formula that comes to mind is the one that gives the gravitational force between two objects. $$F = G\frac{M_1 M_2}{r^2}$$ I think I am remembering this formula correctly... Here G is the constant of gravitation, and M1 and M2 are the masses of the two objects. r is the distance between the centers of the two objects. For any two given objects, M1 and M2 would be constants, but we can calculate the force due to gravitational attraction for various values of r, so r would be the variable in this scenario. If we wanted to calculate the force between a given object of mass M1 and an arbitrary mass (M2) at an arbitrary distance, M2 and r would be the variables. 3. Oct 19, 2013 ### lluke9 Thanks, I think that cleared it up for me a lot more! I've been thinking a lot more about this after your response... From what I understand, it seems that what we call a variable is really case-dependent, as you showed in your gravity example. So a constant is a constant only in respect to some other variable. I think I can understand it in terms of a "tree" of implications: Given some relation R(A,x,y), where we call A a "constant" and x and y "variables"... If A = some number, then: x = this number OR x = that number OR x = another number OR x = yet another number OR x = some other number in its domain OR ....
x = n I *think* what confused me here was that the "OR" makes it so that only ONE x can be true, which made it semantically seem like x assumed only one value, much like a constant. But although it represented a single value, it was able to ASSUME several others in different cases, so I can now see the difference. x is a variable in relation to A, and A is a constant in relation to x. Now that I've written it like this, it makes a lot more sense. I guess I could extend it: And then y would HAVE to be a certain number (assuming this is a function), if this were a relation with no other variables: If x = some number, then y = cool number, so we have x = some number AND y = cool number written as (x,y) or (some number, cool number) If x = n, then y = m, So we have (x = n) → (y = m), so x AND y. So we can write it as (x,y) ⇔ (n,m) So a graph, in a sense, is a "splaying out" of all POSSIBILITIES of solutions, or all points (x,y). Last edited: Oct 19, 2013
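To make the case-dependence concrete, here is a small sketch (my addition, not from the thread) of the gravitation example, with G a fixed constant, the masses acting as parameters, and r the variable being swept:

```python
G = 6.674e-11  # gravitational constant in m^3 kg^-1 s^-2: fixed in every scenario

def gravitational_force(m1: float, m2: float, r: float) -> float:
    """F = G * m1 * m2 / r^2 for point masses m1, m2 a distance r apart."""
    return G * m1 * m2 / r**2

# For a given pair of objects, m1 and m2 play the role of constants
# (parameters), while r is the variable we evaluate the formula at:
m_earth, m_moon = 5.972e24, 7.348e22
for r in (3.8e8, 4.0e8, 4.2e8):   # center-to-center distances in meters
    print(r, gravitational_force(m_earth, m_moon, r))
```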
https://qa-stack.pl/ai/5728/what-is-the-time-complexity-for-training-a-neural-network-using-back-propagation
# What is the time complexity for training a neural network using back-propagation?

17

Suppose that a NN contains $n$ hidden layers, $m$ training examples, $x$ features, and $n_i$ nodes in each layer. What is the time complexity to train this NN using back-propagation? I have a basic idea of how the time complexity of algorithms is found, but here there are 4 different factors to consider, i.e. iterations, layers, nodes in each layer, training examples, and maybe more. I found an answer here, but it was not clear enough. Are there other factors, apart from those mentioned above, that influence the time complexity of the training algorithm of a NN? See also https://qr.ae/TWttzq. – nro

Answers:

11

I haven't seen an answer from a trusted source, but I'll try to answer this myself, with a simple example (with my current knowledge). In general, note that training an MLP using back-propagation is usually implemented with matrices.

### Time complexity of matrix multiplication

The time complexity of matrix multiplication for $M_{ij} * M_{jk}$ is simply $\mathcal{O}(i*j*k)$. Notice that we are assuming the simplest multiplication algorithm here: there exist some other algorithms with somewhat better time complexity.

### Feedforward pass algorithm

The feedforward propagation algorithm is as follows. First, to go from layer $i$ to $j$, you do

$$S_j = W_{ji} * Z_i$$

Then you apply the activation function

$$Z_j = f(S_j)$$

If we have $N$ layers (including input and output layer), this will run $N-1$ times.

### Example

As an example, let's compute the time complexity for the forward pass algorithm for an MLP with $4$ layers, where $i$ denotes the number of nodes of the input layer, $j$ the number of nodes in the second layer, $k$ the number of nodes in the third layer and $l$ the number of nodes in the output layer.

Since there are $4$ layers, you have $3$ weight matrices $W_{ji}$, $W_{kj}$ and $W_{lk}$, where $W_{ji}$ is a matrix with $j$ rows and $i$ columns ($W_{ji}$ thus contains the weights going from layer $i$ to layer $j$). Assume you have $t$ training examples. For propagating from layer $i$ to $j$, we have first

$$S_{jt} = W_{ji} * Z_{it}$$

and this operation (i.e. matrix multiplication) has $\mathcal{O}(j*i*t)$ time complexity. Then we apply the activation function

$$Z_{jt} = f(S_{jt})$$

and this has $\mathcal{O}(j*t)$ time complexity, because it is an element-wise operation. So, in total, we have

$$\mathcal{O}(j*i*t + j*t) = \mathcal{O}(j*t*(i+1)) = \mathcal{O}(j*i*t)$$

Using the same logic, for going $j \to k$, we have $\mathcal{O}(k*j*t)$, and, for $k \to l$, we have $\mathcal{O}(l*k*t)$. In total, the time complexity for feedforward propagation will be

$$\mathcal{O}(j*i*t + k*j*t + l*k*t) = \mathcal{O}(t*(ij + jk + kl))$$

I'm not sure if this can be simplified further or not. Maybe it's just $\mathcal{O}(t*i*j*k*l)$, but I'm not sure.

### Back-propagation algorithm

The back-propagation algorithm proceeds as follows.
Starting from the output layer $l \to k$, we compute the error signal $E_{lt}$, a matrix containing the error signals for nodes at layer $l$:

$$E_{lt} = f'(S_{lt}) \odot (Z_{lt} - O_{lt})$$

where $\odot$ means element-wise multiplication. Note that $E_{lt}$ has $l$ rows and $t$ columns: it simply means each column is the error signal for training example $t$.

We then compute the "delta weights", $D_{lk} \in \mathbb{R}^{l \times k}$ (between layer $l$ and layer $k$):

$$D_{lk} = E_{lt} * Z_{tk}$$

where $Z_{tk}$ is the transpose of $Z_{kt}$. We then adjust the weights:

$$W_{lk} = W_{lk} - D_{lk}$$

For $l \to k$, we thus have the time complexity $\mathcal{O}(lt + lt + ltk + lk) = \mathcal{O}(l*t*k)$.

Now, going back from $k \to j$. We first have

$$E_{kt} = f'(S_{kt}) \odot (W_{kl} * E_{lt})$$

Then

$$D_{kj} = E_{kt} * Z_{tj}$$

And then

$$W_{kj} = W_{kj} - D_{kj}$$

where $W_{kl}$ is the transpose of $W_{lk}$. For $k \to j$, we have the time complexity $\mathcal{O}(kt + klt + ktj + kj) = \mathcal{O}(k*t*(l+j))$. And finally, for $j \to i$, we have $\mathcal{O}(j*t*(k+i))$. In total, we have

$$\mathcal{O}(ltk + tk(l+j) + tj(k+i)) = \mathcal{O}(t*(lk + kj + ji))$$

which is the same as the feedforward pass algorithm. Since they are the same, the total time complexity for one epoch will be

$$\mathcal{O}(t*(ij + jk + kl)).$$

This time complexity is then multiplied by the number of iterations (epochs). So, we have

$$\mathcal{O}(n*t*(ij + jk + kl)),$$

where $n$ is the number of iterations.

### Notes

Note that these matrix operations can be greatly parallelized by GPUs.

### Conclusion

We tried to find the time complexity for training a neural network that has 4 layers with respectively $i$, $j$, $k$ and $l$ nodes, with $t$ training examples and $n$ epochs. The result was $\mathcal{O}(n*t*(ij + jk + kl))$.

We assumed the simplest form of matrix multiplication that has cubic time complexity. We used the batch gradient descent algorithm. The results for stochastic and mini-batch gradient descent should be the same. (Let me know if you think otherwise: note that batch gradient descent is the general form; with little modification, it becomes stochastic or mini-batch.) Also, if you use momentum optimization, you will have the same time complexity, because the extra matrix operations required are all element-wise operations, hence they will not affect the time complexity of the algorithm. I'm not sure what the results would be using other optimizers such as RMSprop.

### Sources

The following article http://briandolhansky.com/blog/2014/10/30/artificial-neural-networks-matrix-form-part-5 describes an implementation using matrices. Although this implementation is using "row major", the time complexity is not affected by this. If you're not familiar with back-propagation, check this article: http://briandolhansky.com/blog/2013/9/27/artificial-neural-networks-backpropagation-part-4

Your answer is great. I could not find any ambiguity till now, but you forgot the number
of iterations part, just add it... and if no one answers in 5 days I'll surely accept your answer. – DuttaA

@DuttaA I tried to put in everything I knew. It may not be 100% correct, so feel free to leave this unaccepted :) I'm also waiting for other answers to see what other points I missed. – M.kazem Akhgary

4

For the evaluation of a single pattern, you need to process all weights and all neurons. Given that every neuron has at least one weight, we can ignore the neurons, and have $\mathcal{O}(w)$, where $w$ is the number of weights, i.e., $n * n_i$, assuming full connectivity between your layers. The back-propagation has the same complexity as the forward evaluation (just look at the formula). So, the complexity for learning $m$ examples, where each gets repeated $e$ times, is $\mathcal{O}(w*m*e)$. The bad news is that there's no formula telling you what number of epochs $e$ you need.

From the above answer, don't you think it depends on more factors? – DuttaA

1 @DuttaA No. There's a constant amount of work per weight, which gets repeated $e$ times for each of $m$ examples. I didn't bother to compute the number of weights, I guess that's the difference. – maaartinus

1 I think the answers are the same. In my answer I can assume number of weights $w = ij + jk + kl$, basically the sum of $n * n_i$ between layers as you noted. – M.kazem Akhgary

1

A potential disadvantage of gradient-based methods is that they head for the nearest minimum, which is usually not the global minimum. This means that the only difference between these search methods is the speed with which solutions are obtained, and not the nature of those solutions. An important consideration is time complexity, which is the rate at which the time required to find a solution increases with the number of parameters (weights). In short, the time complexities of a range of different gradient-based methods (including second-order methods) seem to be similar. Six different error functions exhibit a median run-time order of approximately O(N^4) on the N-2-N encoder in this paper: Lister, R and Stone, J, "An Empirical Study of the Time Complexity of Various Error Functions with Conjugate Gradient Back Propagation", IEEE International Conference on Artificial Neural Networks (ICNN95), Perth, Australia, Nov 27-Dec 1, 1995. Summarised from my book: Artificial Intelligence Engines: A Tutorial Introduction to the Mathematics of Deep Learning.

Hi J. Stone. Thanks for trying to contribute to the site. However, please note that this is not a place for advertising yourself. Anyway, you can surely provide a link to your own books if they are useful for answering the questions, provided you're not just trying to advertise yourself. – nbro

@nbro If James Stone can provide an insightful answer - and it seems so - then I'm fine with him also mentioning some of his work. Having experts on this network is a solid contribution to the quality and level.
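To make the accepted answer's operation count concrete, here is a small sketch (my addition, not from the thread) that tallies the multiply-accumulate operations of one batched forward pass for layer sizes $(i, j, k, l)$ and $t$ examples, matching $\mathcal{O}(t*(ij + jk + kl))$:

```python
import numpy as np

def forward_pass_macs(layer_sizes, t):
    """Multiply-accumulates for one batched forward pass: each step multiplies
    an (n_out x n_in) weight matrix by an (n_in x t) activation matrix,
    costing n_out * n_in * t operations."""
    return sum(n_out * n_in * t
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

i, j, k, l, t = 784, 128, 64, 10, 32
print(forward_pass_macs([i, j, k, l], t))  # equals t * (i*j + j*k + k*l)

# Shape sanity check with an actual batched forward pass:
Z = np.random.randn(i, t)                  # one column per training example
for n_in, n_out in zip([i, j, k], [j, k, l]):
    W = np.random.randn(n_out, n_in)       # W has n_out rows, n_in columns
    Z = np.tanh(W @ Z)                     # S = W * Z, then element-wise f
print(Z.shape)                             # (10, 32), i.e. (l, t)
```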
https://www.tonysheng.com/narratives-mass-movements
As I write this, the price of Ethereum has just dropped below $200, which is about $20 lower than it was a year ago. Its gains against fiat and Bitcoin have, for the time being, been wiped out. Clearly, the wild rise in Ethereum (reaching a peak of over $1400 just nine months ago) and the crypto markets as a whole was, as many predicted, a speculative bubble. If history is any indication, this will not be the last bubble for cryptoassets. A cursory glance at the total cryptoasset market cap over time (or just Bitcoin's) shows a fractal of increasingly larger bubbles.¹ Given the reasonable expectation of future bubbles, I want to better understand the nature of these bubbles and the psychology of their participants.

## Narratives and bubbles

Jeff Tong recently shared a paper with me called "Cracking the enigma of asset bubbles with narratives" that argues that "periods of intense market speculation are driven by narratives and narrative thought." The authors define narratives as:

a cohesive story or account of events, experiences, or phenomena, whether true or fictitious

The most salient excerpt is this, outlining the three central reasons why narratives drive asset bubbles:

First, asset bubbles typically form during periods of profound innovation, product introduction, and market liberalization, all circumstances in which investors have less, or at least less reliable or relevant, historical data on which to base their decisions. Hence, bereft of sufficient historical data, decision makers have no choice but to rely on narratives, which are widely recognized as our primary sense-making currency in ambiguous situations (Abolafia, 2010b; Boje, 1991; Weick, 1995), to guide their decision making. Such a feature of asset bubbles is well documented in events such as the dotcom bubble referred to above and the Southeast Asian crisis in the late 1990s. Second, asset bubbles also tend to arise during euphoric periods of easy credit and loose regulations, conditions that attract a surge in retail investors and copycat organizations. As these market players make significantly less use of detailed or long-term analysis of market data and trends, narratives also function as a dominant form of communication—as a sense-giving currency. The rapid and wild spread of innovations in credit derivatives circa 2002 demonstrated just how destructive such copycat behavior can be (Tett, 2009). Finally, today's internationalized, high-speed investment environment provides investors with seemingly endless investment opportunities but limited time in which to make decisions. Under such conditions, narratives, which through their elegance and cohesiveness are able to attract our attention and are more easily learned than raw data (Shaw et al., 1998; Smith and Anderson, 2004), are highly influential.

In sum, an environment where narratives fuel speculative bubbles has the following three properties:

1. Lack of reliable or relevant historical data to form valuations
2. Conditions that attract retail investors, oftentimes poor regulation
3. Relative strength of narratives to grab attention in an opportunity-rich investment environment.

This maps neatly onto what we've observed in crypto. Because we lack a proven valuation model for cryptoassets, narratives drive investment decisions. The global and 24/7 nature of the cryptomarkets and poorly defined regulations increase the onboarding and engagement of retail investors.
If you asked passionate new crypto investors late last year why they decided to invest, you'd likely hear one of a few popular memes like, "permissionless world computer," "unseizable money," or "new banking infrastructure." And in each preceding speculative mania, you'd have heard the same thing with slightly different narratives. Will things change in future bubbles? Probably not. We will still lack reliable or relevant historical data to generate useful valuations of cryptoassets, and the markets will continue to be global and 24/7. Additional regulations may come to pass, but there will always be ways to participate in underregulated environments. Given these properties, cryptomarkets are likely to continue to be driven by narratives.

## Mass movements

I previously explained the in-fighting within the crypto industry with the psychology of mass movements. The ripe population in a mass movement isn't only ripe for a particular religious or financial or other form of mass movement; they're more ripe than average for any mass movement, including directly competitive movements. It's more likely that a radical group will be able to recruit from a competing radical group than from an apathetic group. "A Saul turning into Paul is neither a rarity nor a miracle. In our day, each proselytizing mass movement seems to regard the zealous adherents of its antagonist as its own potential converts."

This explains the fierce competition between crypto projects that seem unrelated aside from their choice of technology. While they may not compete from a business perspective, they compete directly for a population of true believers: "the gain of one [movement] in adherents is the loss of all the others." The constant conflict between crypto mass movements is a battle for adherents. Members of one movement evangelize their own, promoting dogmas that promise spectacular and sudden change, while demonizing the beliefs of other movements. This conflict is particularly visible at the tail end of a speculative bubble, as the true believers of a movement stick around as asset prices fall to try and build the foundation for the next wave of possible adherents. To apply the language of the markets, true believers are building a community of "holders of last resort" and refining their narratives to best attract the next wave of retail investors.

## The Narrative Bubble Loop

Narratives and the psychology of mass movements fuel cryptomarket boom and bust cycles. Thus, bear markets are a time for true believers to A/B test narratives in preparation for the next speculative bubble.

The Narrative Bubble Loop

This is how I imagine it.

• A speculative bubble forms in an environment described by the paper: (1) lack of reliable historical data, (2) conditions that attract retail investors, and (3) relative strength of narratives
• Narratives formed by early believers of some form of mass movement spread. The most compelling narratives start to be assumed as true
• The winning narratives fuel a speculative bubble around the assets supported by the narratives
• The bubble crests as asset prices rise to a point where there are fewer marginal buyers than sellers
• Mass movements compete for adherents, along the way experimenting with new narratives and strengthening old ones
• A new speculative bubble forms

We're somewhere on the right side of that loop.
I can't say whether we're nearing the end of the correction or whether we'll be here for some time, but the properties of cryptomarkets suggest that we will, at some point, see the formation of another narrative-fueled speculative bubble.

The narrative-fueled bubble doesn't usually loop in other asset classes. Take the internet for example. A massive speculative bubble fueled by narratives formed in the late-90s, pushing valuations far above its fundamentals. Then came a subsequent crash far below the fundamentals. And eventually, the environment changed. Better valuation methods and regulation decreased reliance on narratives, leading to the more efficient market we've seen since the dot-com bubble.

In contrast, I suspect the trend of repeated bubbles continues in crypto for the foreseeable future, hence the loop. Until well-accepted valuation methods and clear regulations are adopted, the environment around cryptomarkets will continue to be ripe for narrative-fueled speculative bubbles. The big questions are when, around which assets, and driven by which narratives.

1. Of course, this time could be different, but for the sake of this piece, let's assume there will be one.
2019-06-25 21:42:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21775037050247192, "perplexity": 4602.841979977269}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999948.3/warc/CC-MAIN-20190625213113-20190625235113-00103.warc.gz"}
https://brilliant.org/problems/periodic-fixed-point/
# Periodic fixed point

Algebra Level 5

Consider the sequence $$x_{n+1} = 4x_n(1 - x_n)$$. A point $$x_0\in[0, 1]$$ is called $$r$$-periodic if $$x_r=x_0$$. For example, $$x_0 = 0$$ is an $$r$$-periodic fixed point for any $$r$$. Let $$N$$ be the number of positive $$2015$$-periodic fixed points. Find the last 3 digits of $$N$$.
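The definition is easy to experiment with numerically. A small Python sketch follows; note that floating point only illustrates small $$r$$, since the logistic map is chaotic and rounding errors grow quickly, so counting 2015-periodic points has to be done analytically, not by iteration:

```python
# Check r-periodicity of the logistic map x_{n+1} = 4 x_n (1 - x_n).
def is_r_periodic(x0, r, tol=1e-9):
    x = x0
    for _ in range(r):
        x = 4 * x * (1 - x)
    return abs(x - x0) < tol

print(is_r_periodic(0.0, 5))    # True: 0 is r-periodic for every r
print(is_r_periodic(0.75, 1))   # True: 0.75 is a genuine fixed point
print(is_r_periodic(0.3, 3))    # False for a typical point
```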
2017-01-20 08:10:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9363654255867004, "perplexity": 793.2831995330363}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00247-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.vedantu.com/question-answer/find-the-value-of-left-left-256-rightdfrac12-class-10-maths-cbse-5f62cfebe5bde9062ffafc8a
Question # Find the value of ${\left[ {{{\left( {256} \right)}^{\dfrac{1}{2}}}} \right]^{\dfrac{1}{2}}}$.

Hint: To solve this problem, first we will use the law of exponents. Then, we will express the given number $256$ in power notation. We will use the law of exponents one more time to find the required value.

Complete step-by-step solution: In this problem, to find the value of ${\left[ {{{\left( {256} \right)}^{\dfrac{1}{2}}}} \right]^{\dfrac{1}{2}}}$, first we will use the law ${\left( {{a^m}} \right)^n} = {a^{m\; \times \;n}}$. This is called the law of exponents.

Let us compare ${\left[ {{{\left( {256} \right)}^{\dfrac{1}{2}}}} \right]^{\dfrac{1}{2}}}$ with ${\left( {{a^m}} \right)^n}$; then we can say that $a = 256$ and $m = n = \dfrac{1}{2}$. Now we are going to use the law ${\left( {{a^m}} \right)^n} = {a^{m\; \times \;n}}$. Therefore, ${\left[ {{{\left( {256} \right)}^{\dfrac{1}{2}}}} \right]^{\dfrac{1}{2}}} = {\left( {256} \right)^{\dfrac{1}{2}\; \times \;\dfrac{1}{2}}} = {\left( {256} \right)^{\dfrac{1}{4}}}$.

Now we are going to express the number $256$ in power notation with respect to the power $m \times n$. Note that here $m \times n = \dfrac{1}{4}$. Therefore, $256 = 4 \times 4 \times 4 \times 4$ $\Rightarrow 256 = {4^4}$ $\Rightarrow {\left( {256} \right)^{\dfrac{1}{4}}} = {\left( {{4^4}} \right)^{\dfrac{1}{4}}}$

Now again we compare ${\left( {{4^4}} \right)^{\dfrac{1}{4}}}$ with ${\left( {{a^m}} \right)^n}$; then we can say that $a = 4$ and $m = 4, n = \dfrac{1}{4}$. Now again we will use the law ${\left( {{a^m}} \right)^n} = {a^{m\; \times \;n}}$. Therefore, ${\left( {{4^4}} \right)^{\dfrac{1}{4}}} = {\left( 4 \right)^{4\; \times \;\dfrac{1}{4}}} = {4^1} = 4$ $\Rightarrow {\left( {256} \right)^{\dfrac{1}{4}}} = 4$

Hence, the value of ${\left[ {{{\left( {256} \right)}^{\dfrac{1}{2}}}} \right]^{\dfrac{1}{2}}}$ is $4$.

Note: We can find the value of ${\left[ {{{\left( {256} \right)}^{\dfrac{1}{2}}}} \right]^{\dfrac{1}{2}}}$ by another method. First we will write the prime factorization of the number $256$. Then, we will use the law of exponents. Here $256$ is an even number, so we can start the prime factorization with the number $2$. Therefore, $256 = 2 \times 2 \times 2 \times 2 \times 2 \times 2 \times 2 \times 2 = {2^8}$. Now we can write ${\left[ {{{\left( {256} \right)}^{\dfrac{1}{2}}}} \right]^{\dfrac{1}{2}}} = {\left[ {{{\left( {{2^8}} \right)}^{\dfrac{1}{2}}}} \right]^{\dfrac{1}{2}}}$. Now we are going to use the law ${\left( {{a^m}} \right)^n} = {a^{m\; \times \;n}}$. Therefore, we can write ${\left[ {{{\left( {256} \right)}^{\dfrac{1}{2}}}} \right]^{\dfrac{1}{2}}} = {\left[ {{{\left( {{2^8}} \right)}^{\dfrac{1}{2}}}} \right]^{\dfrac{1}{2}}} = {\left( {{2^8}} \right)^{\dfrac{1}{2}\; \times \;\dfrac{1}{2}}} = {\left( {{2^8}} \right)^{\dfrac{1}{4}}}$. Now one more time we are going to use the law ${\left( {{a^m}} \right)^n} = {a^{m\; \times \;n}}$. Therefore, we can write ${\left[ {{{\left( {256} \right)}^{\dfrac{1}{2}}}} \right]^{\dfrac{1}{2}}} = {\left( {{2^8}} \right)^{\dfrac{1}{4}}} = {2^{8 \times \dfrac{1}{4}}} = {2^{\dfrac{8}{4}}} = {2^2} = 4$.

Hence, the value of ${\left[ {{{\left( {256} \right)}^{\dfrac{1}{2}}}} \right]^{\dfrac{1}{2}}}$ is $4$.
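A quick numeric check of the result in Python (a sketch; the float comparison happens to be exact here because all the values involved are exact powers of two):

```python
# Numeric check of ((256)^(1/2))^(1/2) = 256^(1/4) = 4,
# using the exponent law (a^m)^n = a^(m*n).
value = (256 ** 0.5) ** 0.5
print(value)                 # 4.0
print(256 ** 0.25 == value)  # True: same result via m*n = 1/2 * 1/2 = 1/4
```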
2020-09-25 23:34:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9166558384895325, "perplexity": 100.28052042070372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400228998.45/warc/CC-MAIN-20200925213517-20200926003517-00215.warc.gz"}
http://tex.stackexchange.com/questions/66991/miktex-where-do-i-specify-command-line-options
# MiKTeX: where do I specify command line options?

I'm using MiKTeX 2.9 and I want to specify the command line option --shell-escape. The reason for this is that I'm trying to use an eps file and I get an error. I read that this command line option can fix the problem, but where exactly do I specify it? I'm typesetting with LuaLaTeX using TeXworks as the editor.

- Are you running LuaLaTeX from the command line or using an editor, and if so which one? – Joseph Wright Aug 13 '12 at 17:56
- @JosephWright TeXworks, added it to my question. – Mr. Roland Aug 13 '12 at 17:58

With TeXworks, go to the main menu: Edit / Preferences, choose the Typesetting tab, choose your engine, click the Edit button, and add --enable-write18 to the arguments. Use the very same place if you would like to add --shell-escape instead. However, --shell-escape is used in TeX Live, --enable-write18 in MiKTeX. In addition: --enable-write18 is the original switch for MiKTeX, but --shell-escape works, too. – Speravir Aug 13 '12 at 18:11

Specify the switch after invoking LuaLaTeX. As an example, lualatex --shell-escape luatex.tex. This assumes you're invoking LuaLaTeX from a command line.
2015-03-29 03:08:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9779914021492004, "perplexity": 5919.572722810078}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131298080.28/warc/CC-MAIN-20150323172138-00283-ip-10-168-14-71.ec2.internal.warc.gz"}
http://www.gathacognition.com/chapter/gcc20/statistical-decision-for-weibull-distribution-for-minima?show=highlight
#### Statistical Decision for Weibull Distribution for Minima

Statistical Theory of Extremes, 130-157

Weibull distribution, Minima, Point prediction, Time-to-failure

The Weibull distribution for minima is explained. The Weibull random variable (for minima) has a finite location parameter. The Weibull distribution deals with the modeling of failure and helps to determine 'time-to-failure'. Characteristics are presented from the definition of the distribution function. Statistical decisions for the 3-parameter case, estimation, testing and point prediction are discussed. Prediction difficulties depend on the interplay of the location and shape parameters of the distribution.
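As a concrete illustration of the 3-parameter estimation problem mentioned in the abstract, here is a small Python sketch using scipy's weibull_min distribution. It is not part of the chapter: the data are synthetic, and the parameter values are arbitrary assumptions:

```python
# Fit a 3-parameter Weibull (minima) model to synthetic time-to-failure
# data: shape c, location loc (finite, as noted above), and scale.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
data = weibull_min.rvs(c=1.8, loc=100.0, scale=50.0, size=500,
                       random_state=rng)

c, loc, scale = weibull_min.fit(data)  # maximum-likelihood estimates
print(f"shape={c:.2f}, location={loc:.1f}, scale={scale:.1f}")

# A point prediction, e.g. the median time-to-failure:
print(f"median time-to-failure: {weibull_min.median(c, loc, scale):.1f}")
```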
2018-04-25 16:26:40
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.936593770980835, "perplexity": 2618.986978800524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947931.59/warc/CC-MAIN-20180425154752-20180425174752-00180.warc.gz"}
https://open.kattis.com/contests/na19warmup11/problems/conquestcampaign
# Problem C: Conquest Campaign

Since the beginning of the 30th century, the Country of Circles has become the strongest country in the world. To expand its territory to the west, they plan to invade the Country of Rectangles. The territory of the Country of Rectangles is represented by a $R \times C$ table, where rows are numbered from $1$ to $R$, and columns are numbered from $1$ to $C$. The cell at the $i$-th row and $j$-th column is denoted as $(i, j)$. The Department of Defense of the Country of Circles plans to use its elite army of paratroopers to attack the Country of Rectangles. By sending spies to their opponent, they know that in the Country of Rectangles, $N$ cells $(x_1, y_1), (x_2, y_2), \ldots , (x_N, y_N)$ are very weakly protected and can easily be dominated. Hence, they come up with the following plan:

- On the first day, they plan to send a battalion of paratroopers to occupy each of these $N$ cells.
- On each of the following days, they plan to send reinforcements to occupy all cells which share a common edge with at least one previously occupied cell.

We assume that the Country of Circles' force is strong enough that they can occupy any cell that they want. The commander wants to know how many days it would take to conquer the whole country.

## Input

- The first line contains three integers $R$, $C$ and $N$ $(1 \leq R, C \leq 100, \, 1 \leq N \leq 10,000)$ — the number of rows, the number of columns of the Country of Rectangles' territory and the number of weakly protected cells, respectively. It is not guaranteed that the cells are unique.
- The $i$-th of the remaining $N$ lines contains two integers $x_i$ and $y_i$ $(1 \leq x_i \leq R, \, 1 \leq y_i \leq C)$ — one cell where paratroopers are sent during the first day of the campaign.

## Output

Print exactly one number — the number of days needed for the Country of Circles to completely conquer the Country of Rectangles.

## Explanation for the first example

The figure (an illustration accompanying the original problem) shows how the plan unfolds on each day, where:

- Unoccupied cells are in white.
- Cells occupied on this day are filled with stripes.
- Cells occupied on previous days are in solid color.

Sample Input 1:

3 4 3
2 2
2 2
3 4

Sample Output 1:

3

Sample Input 2:

2 3 6
1 1
1 2
1 3
2 1
2 2
2 3

Sample Output 2:

1

CPU Time limit: 1 second. Memory limit: 1024 MB.
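The spreading process is a multi-source flood fill, so the answer is one plus the largest grid distance from any cell to its nearest initially occupied cell. A minimal Python sketch of that idea, reading the input format above (an illustrative solution, not an official one):

```python
# Multi-source BFS: day 1 occupies the N given cells; each later day
# occupies all edge-neighbours of occupied cells. The answer is
# 1 + (maximum BFS distance over the whole grid).
import sys
from collections import deque

def solve():
    data = sys.stdin.read().split()
    r, c, n = int(data[0]), int(data[1]), int(data[2])
    dist = [[-1] * c for _ in range(r)]
    q = deque()
    for i in range(n):
        x = int(data[3 + 2 * i]) - 1   # convert to 0-indexed
        y = int(data[4 + 2 * i]) - 1
        if dist[x][y] == -1:           # cells may repeat in the input
            dist[x][y] = 0
            q.append((x, y))
    while q:
        x, y = q.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < r and 0 <= ny < c and dist[nx][ny] == -1:
                dist[nx][ny] = dist[x][y] + 1
                q.append((nx, ny))
    print(1 + max(max(row) for row in dist))

solve()
```

On the first sample this prints 3 and on the second it prints 1, matching the expected outputs.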
2022-06-29 10:30:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24908581376075745, "perplexity": 690.8835912757153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103626162.35/warc/CC-MAIN-20220629084939-20220629114939-00703.warc.gz"}
http://realmode.wordpress.com/
## Windows Live Writer

In this blog post, I take Windows Live Writer for a test drive. As a mere blogger wannabe, I can't say for sure what it is I'd like to see in a blog-posting app. More than likely, the limitations are going to be found in the host software rather than the editing tool. For example, I'd sure like to embed little javascript apps in my posts, but WordPress doesn't allow it. Another must is LaTeX, or some other convenient way to put math notation in a post. One can always render with some other tool, and then embed the image, but that's just too much work. The third and final must-have is a way to drop code in a post and have it look pretty. I can see right now that there's a Windows Live Writer plug-in for this. I suppose that means that WLW will render HTML for syntax coloring, so there's no need for any special ability on the host.

Categories: Uncategorized

## Test Post

This is a test of Windows Live Writer. This is only a test.

Categories: Uncategorized

## A Problem I Just Made Up

[The problem statement and proof in this post were rendered as images, which did not survive extraction. From the surviving prose: the post asked to show that a certain equation has at least one solution for all real values of two parameters; the proof took the standard inverse of a function, continuous on the entire real line, defined an auxiliary function, applied the Intermediate Value Theorem to find a point where it vanishes, and verified that this point solves the original equation.]

Categories: Uncategorized

## If f(a+b) = f(a) + f(b) + 2ab, what is f?

OK, I think I have a proof of the following: If $f(a+b) = f(a) + f(b) + 2ab$ for a continuous $f$, then $f(x) = x^2 + mx$ for some real $m$.

Proof: Setting $a=0$ and $b=0$, we have $f(0+0) = f(0) + f(0) + 0$; in other words $f(0) = 0$. Next, set $a=x$ and $b=-x$, which yields $f(x-x) = f(x) + f(-x) - 2x^2$. Since $f(x-x) = f(0) = 0$, we have $f(x) + f(-x) = 2x^2$, or $f(x) - x^2 = -[f(-x) - (-x)^2]$. Let $g(x) = f(x) - x^2$ (the left side). Then we see that $f(x) = x^2 + g(x)$ and $g$ is an odd function, i.e. $g(x) = -g(-x)$.

Now, $f(a+b) = (a+b)^2 + g(a+b) = a^2 + b^2 + 2ab + g(a+b)$, but also $f(a+b) = f(a) + f(b) + 2ab$. Setting equal the right sides of the above two equations and simplifying, we get $g(a+b) = g(a) + g(b)$, which is Cauchy's functional equation; since $g$ inherits continuity from $f$, its solutions are exactly the linear functions through the origin, $g(x) = mx$.

Categories: math
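A quick numeric spot-check of the conclusion (a sketch; the values of m, a and b are arbitrary test inputs):

```python
# Check that f(x) = x**2 + m*x satisfies f(a+b) = f(a) + f(b) + 2ab.
import random

def f(x, m):
    return x * x + m * x

random.seed(1)
for _ in range(5):
    m, a, b = (random.uniform(-10, 10) for _ in range(3))
    lhs = f(a + b, m)
    rhs = f(a, m) + f(b, m) + 2 * a * b
    print(abs(lhs - rhs) < 1e-9)   # True (up to rounding)
```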
2014-03-08 17:59:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 20, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8374261260032654, "perplexity": 709.1324307897711}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999656144/warc/CC-MAIN-20140305060736-00043-ip-10-183-142-35.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/135000-expanding-formula-print.html
# Expanding formula

• March 22nd 2010, 02:24 AM calypso

Expanding formula

I'm trying to understand some notes; can someone please explain if the following is the case? Does $\frac{d}{dr}\left[ r \, \frac{da}{dr} \right]$ equal $\frac{da}{dr} + r \, \frac{d^2 a}{dr^2}$? Thanks, Calypso

• March 22nd 2010, 02:28 AM sa-ri-ga-ma

Quote: Originally Posted by calypso: "I'm trying to understand some notes; can someone please explain if the following is the case? Does $\frac{d}{dr}\left[ r \, \frac{da}{dr} \right]$ equal $\frac{da}{dr} + r \, \frac{d^2 a}{dr^2}$? Thanks, Calypso"
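This is just the product rule applied to $r \cdot \frac{da}{dr}$. A quick symbolic check with sympy (a sketch; the function name a follows the post):

```python
# Symbolic check of d/dr [ r * da/dr ] = da/dr + r * d^2a/dr^2
import sympy as sp

r = sp.symbols('r')
a = sp.Function('a')(r)

lhs = sp.diff(r * sp.diff(a, r), r)
rhs = sp.diff(a, r) + r * sp.diff(a, r, 2)
print(sp.simplify(lhs - rhs))   # 0, confirming the expansion
```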
2014-07-31 06:55:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9045995473861694, "perplexity": 5010.583934276508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510272584.13/warc/CC-MAIN-20140728011752-00160-ip-10-146-231-18.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/244910/mass-required-to-prevent-sign-falling-over-with-a-set-wind-load-activity-stati
# Mass required to prevent sign falling over with a set wind load - activity stations for disabled children

I'm currently working on my thesis and I'm stuck on a question. I'm designing activity stations for disabled children to be used for equine therapy. The stand is 9 ft tall and I've calculated the wind load at 0.94 lbs; now I need to calculate the mass required to stop the station falling over (the concrete base is not a foundation sunk in the ground - it needs to remain portable). The frontal area of the stand is evenly spread. Any help would be appreciated.

- It depends on how the mass is distributed at the base. For example, if the mass is concentrated approximately at a single point directly below the sign on the ground, then you need a nearly infinite amount of mass. If the mass is on a leg that is far from directly below the sign, then you need very little mass. The distribution of the base mass affects the lever arm of the base mass, which affects the torque it can provide when the wind attempts to tip the sign. – Brionius Mar 22 '16 at 12:44
- Thanks for the reply, the mass is evenly distributed, would you know any formula for such an equation? – John Dooley Mar 22 '16 at 13:35
- Are you sure the wind load is only 0.94 lbs? That seems very low. Imagine a 1 lbs weight...that's not a lot of force. – Brionius Mar 22 '16 at 14:20

Assuming a square base of width $w$ with mass $M$, and a horizontal wind load $F_w$ spread evenly over the stand's height $h$ (so its resultant acts at height $h/2$), the condition for static equilibrium is $$\sum\tau = 0$$ $$\tau_{wind} + \tau_{base} = 0$$ Since if the sign tips, it would rotate around the edge of the base, that's a convenient axis about which to compute the torques: $$- F_w \frac{h}{2} + M g \frac{w}{2} = 0$$ Solving for M... $$M = \frac{F_w h}{g w}$$ Plugging in your numbers (0.94 lbs-force = 4.18 N, 9 ft = 2.7 meters), $$M = \frac{11.29 ~\rm Nm}{9.81 ~\rm{m/s^2} ~w}$$ $$M = \frac{1.15 ~\rm kg~m}{w}$$ If you prefer lbs and feet, $$M = \frac{8.32 ~\rm lbs~ft}{w}$$ For example, if your square base has a width of 3 feet, then you need a minimum of $M = 2.77 ~\rm lbs$

Note 1: This analysis assumes the mass of the sign itself is negligible compared to the base. This is a conservative assumption, since extra mass on the sign will make it more stable for initial tipping.

Note 2: I'm very skeptical that a sign of any significant size would only experience a wind load of 0.94 lbs in any significant wind. I would double-check that figure.

EDIT: I revised my answer now that the OP made it clear that the sign is a rectangle that extends from the ground up to 9 feet.

- Brilliant, thanks again for the reply. The front of the sign is a half-sphere shape with a low drag coefficient of .42; I recalculated the wind load for the rear, which is flat, and it was 3.18 lbs. The wind speed was also low - 56 mph. Does this still seem wrong? – John Dooley Mar 22 '16 at 14:44
- 56 mph is very nearly hurricane-force winds. Unless your sign is very small, 3.18 lbs still seems quite low. For a 3 ft x 3 ft square in 56 mph winds, the empirical formula I found indicates that the wind pressure would be approximately 85 lbs. – Brionius Mar 22 '16 at 14:49
- The actual sign is only 9" wide; does this still seem out? – John Dooley Mar 22 '16 at 14:58
- @JohnDooley: how did you estimate or measure the 0.94 lb load figure? – Gert Mar 22 '16 at 15:03
- @JohnDooley: I also disagree with Brionius using the full height of the panel to calculate $\tau_{wind}$. The load doesn't act on the top of the panel, it acts on the CoG of the panel, at $\frac{H}{2}$.
– Gert Mar 22 '16 at 15:08

The weight of the base needed to prevent toppling of the stand also depends on its width $x$. In order to prevent toppling about the point $P$, there must not be a net moment about that point. This means mathematically that: $$F\frac{H}{2}=mg\frac{x}{2}$$ Where $H=9\:\mathrm{ft}$ and $F=0.94\:\mathrm{lbs}$. I'm assuming the load you assigned acts on the centre of gravity of the vertical panel and that the mass of the panel is negligible. So the mass $m$ required is: $$m=\frac{FH}{gx}$$ Note that $F$ and $H$ need to be converted to S.I. units, if you want to use $g=9.81\:\mathrm{ms^{-2}}$ as the Earth's acceleration.

- Could you explain how the factor of $\frac{1}{2}$ arises in the left side of your first equation? – Brionius Mar 22 '16 at 14:51
- @Brionius: I assume the wind load to be homogeneous, so it can be replaced by a force acting on the centre of gravity of the panel. I'm surprised you didn't do the same. – Gert Mar 22 '16 at 14:53
- I assumed the center of gravity of the panel was 9 feet above the ground, since the OP says the "stand" is 9 feet tall, not the panel itself. Since the OP has said that the sign is 9" on a side, that seems reasonable. That is, unless the wind loading on the post holding up the sign is significant. – Brionius Mar 22 '16 at 15:08
- @Brionius: The OP now stated that the sign is a half-sphere. Strictly speaking we would have to consider the wind moment to act on the CoG of that half-sphere. In reality it won't matter so much, because the OP will have to factor in a safety margin, e.g. make the actual mass about twice the calculated one, to cover unexpected gusts of wind and such like. – Gert Mar 22 '16 at 15:26
- Sure, but if the wind force on the stand is negligible, and the wind force on the sign is centered in the middle of the sign, then the wind force is applied 9 feet above the ground, not 4.5 feet above the ground. Which is why I don't think you should use that factor of $\frac{1}{2}$. – Brionius Mar 22 '16 at 15:29
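For completeness, a small Python sketch of the torque-balance formula $M = F_w h / (g w)$ with the question's numbers; the 3 ft base width is an assumption carried over from the answer's example:

```python
# Minimum base mass from the torque balance F_w * h/2 = M * g * w/2
# about the tipping edge (load resultant at h/2, base weight at w/2).
G = 9.81               # m/s^2
LB_TO_N = 4.448        # pounds-force to newtons
FT_TO_M = 0.3048       # feet to metres

f_w = 0.94 * LB_TO_N   # wind load from the question (N)
h = 9 * FT_TO_M        # stand height (m)
w = 3 * FT_TO_M        # assumed base width of 3 ft (m)

m_min = f_w * h / (G * w)
print(f"minimum base mass: {m_min:.2f} kg ({m_min * 2.205:.2f} lbs)")
# Matches the answer's ~2.77 lbs up to the rounding used there.
```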
2021-02-27 01:12:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6363542079925537, "perplexity": 556.574550466762}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358033.38/warc/CC-MAIN-20210226234926-20210227024926-00225.warc.gz"}
https://answerriddle.com/the-question-where-in-the-house-will-you-find-a-p-trap/
# The Question: Where in the house will you find a P-trap?

- Under the sink
- Inside the walls
- In the yard
- Behind the furnace
2021-04-12 20:46:51
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8565207719802856, "perplexity": 3696.1819917139055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038069133.25/warc/CC-MAIN-20210412175257-20210412205257-00624.warc.gz"}
http://www.pandora.com/newsboys/in-hands-of-god/this-is-your-life
# Free personalized radio that plays the music you love

## Features of This Track

- a subtle use of paired vocal harmony
- mild rhythmic syncopation
- a clear focus on recording studio production
- minor key tonality
- a vocal-centric aesthetic
- acoustic rhythm guitars
- subtle use of acoustic piano

These are just a few of the hundreds of attributes cataloged for this track by the Music Genome Project.

## Similar Tracks

angel31904: That song this is my life but in these voice makes me cry

Roock on

whalextale: Tait rules. Sorry Furler fans. I mean he was good but Tait is GREAT. I don't want to argue though. They both love The Lord and write/sing amazing music to praise him so it doesn't really matter.

zanderm1999: I like Peter more than Tait There's something about his voice They're both amazing.

kathleen.houston: I love Newsboys! :)

NICE SONG!

THIS SONG ROCKS!!!

karsten.jay: this song rules
2016-06-29 15:19:21
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.999699592590332, "perplexity": 3231.3603727944737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397748.48/warc/CC-MAIN-20160624154957-00024-ip-10-164-35-72.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/239584/are-all-even-group-actions-actions-on-top-spaces-homeomorphisms
# Are all even group actions on topological spaces homeomorphisms?

I'd like to know if an even group action on a topological space is necessarily a homeomorphism. In particular, we say an action $G \times X \to X$ is even if, for any $x \in X$, there is an open neighborhood $x \in U \subset X$, such that $gU \cap U = \emptyset, \forall g \in G$ not the identity.

- Are there any even group actions? For all $g$, $gX\cap X\neq\emptyset$. – Neal Nov 18 '12 at 3:09
- Moreover, how can an action be a homeomorphism? Your question does not make sense as stated... – Mariano Suárez-Alvarez Nov 18 '12 at 3:10
- @Neal: Fixed. I forgot to exclude the identity element. Consider $\mathbb{R}$ as a topological space and $\mathbb{Z}$ as the group, with the action $z \cdot r = z + r$. This is an even action. It is clearly a bijection and bicontinuous, and so a homeomorphism of $\mathbb{R}$. In general, one defines an action to be continuous, which ensures that it is a homeomorphism. My question is basically: are even actions necessarily continuous? My instructor implied this is true, but I'm at a loss to show it (probably missing something obvious). – Kannaguchi O. Nov 18 '12 at 5:55
- @Kannaguchi: echoing Mariano's question, what do you mean by the statement that an action is a homeomorphism? Are you actually asking whether the map $G \times X \to X$ is a homeomorphism? This is almost never true; if $g \neq 1$ then $(g, x)$ is sent to the same point as $(1, gx)$, so if $G$ has non-identity elements then this map can't even be injective. – Qiaochu Yuan Nov 18 '12 at 6:58
- If you assume that $g:X \rightarrow X$ is continuous, which is part of the definition of a group acting on a topological space, then it's automatically a homeomorphism, with inverse $g^{-1}$ – uncookedfalcon Nov 18 '12 at 8:14
2014-07-26 01:34:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9334538578987122, "perplexity": 290.31926448802744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894931.59/warc/CC-MAIN-20140722025814-00088-ip-10-33-131-23.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/36714/notation-for-a-graph-without-any-edges/36723
# Notation for a graph without any edges?

Is there a standard notation for a graph (on a given set of vertices) without any edges?

- I'd call it a discrete set, or a discrete space, or a graph without edges. – Ryan Budney Aug 26 '10 at 4:02
- Looking at the answers, I conclude that there is no standard notation. E_n is as close as one gets, but it's too "verbal" for my taste. – Yuval Filmus Aug 27 '10 at 4:54

There are many ways to define a graph, but a pretty standard one is a pair $(V,E)$ where $V$ is a finite set of points and $E \subset \binom{V}{2}$. So, what you are looking for is $(V, \emptyset)$, which would be pretty widely understood.

- Should that be $E \in V \times V$? – sleepless in beantown Aug 26 '10 at 14:55
- For directed graphs, $E \subset V \times V$. By graph, I mean a finite simple undirected graph (no loops or multiple edges), although the finiteness condition is not necessary. – Tony Huynh Aug 26 '10 at 15:21
- Sorry, I'm just not familiar with using the "choose" or "binomial" operator to literally mean "choose" in that way. For an undirected graph, wouldn't an edge consist of an element of the set $\{V_1, V_2\}$ such that $V_1 \in V$ and $V_2 \in V$? I just want to make sure that I understand the notation correctly, because I did not realize that $A \times B$ for sets $A$ and $B$ implied an ordered pair $(a_1, b_1)$ s.t. $a_1 \in A$, $b_1 \in B$. Thanks for the clarification. – sleepless in beantown Aug 27 '10 at 6:53
- And for simple graphs with no loops, $V_1 \ne V_2$ – sleepless in beantown Aug 27 '10 at 6:55
- Yes, for an undirected graph an edge is just an unordered pair of vertices. So the notation $\binom{V}{2}$ simply means the collection of all 2-element subsets of $V$. It's not completely standard, but I like it. – Tony Huynh Aug 27 '10 at 9:13

Some people call it the empty graph on n vertices.

- That's also what I call it, but I want some notation, like the ones they have for the empty string. – Yuval Filmus Aug 27 '10 at 4:52

I don't think there is standard notation for this. If you've already fixed a notation for complement (say a superscript c) then you could use $K_n^c$. But I don't think standard notation exists for this.

I suppose $n\cdot K_1$, assuming of course that $n \ge 1$. In the event that there are also no vertices it is sometimes called the Null Graph, although F. Harary and R. Read in "Is the Null Graph a Pointless Concept?" suggest that it may be more trouble than it is worth, in that it has too many edges to be a tree, no automorphism group, etc.

- Huh? The totally empty graph definitely is not a tree, as it has too many edges, but the automorphism group is trivial, not nonexistent. There's one way to do nothing to nothing. – Theo Johnson-Freyd Aug 26 '10 at 5:49
- Well, let me qualify that. One nice way to count automorphisms is whenever you have a disjoint union of isomorphic things, and each component has automorphism group $G$, then you expect the union to have automorphisms the wreath product $G \wr S_n$. But this is a wrong expectation: it undercounts, for example, when $G$ is itself a disjoint union. So it's not surprising that it overcounts here. The totally empty graph has zero components, and is not itself connected. – Theo Johnson-Freyd Aug 26 '10 at 5:52
- Mainly I just couldn't pass up an opportunity to work in the title of that article. – Aaron Meyerowitz Aug 26 '10 at 6:11
- Hm.
I would define a tree to be a connected graph lacking cycles; the null graph certainly qualifies, although the connected part is more vacuous than the absence of cycles. Equivalently, between any two distinct vertices you may care to choose in the null graph, there is exactly one path between them. –  Niel de Beaudrap Aug 26 '10 at 11:41 Actually, as Theo says there is good reason to consider the null graph as not connected. To see why, check out the article above by Harary and Read; it's quite funny. –  Tony Huynh Aug 26 '10 at 11:56 I have seen $\bar{K}_n$ for the graph with n vertices and no edges, but I do not remember where. - I have seen it written as $E_n$, where E stands for empty. - Standard notation in graph theory? In category theory the analogous thing can be denoted $disc(V)$ where $V$ is the set of vertices. -
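For what it's worth, in code the edgeless graph usually gets a constructor rather than a symbol. For example, the networkx library calls it an empty graph (a quick sketch, assuming networkx is installed):

```python
# The "empty graph" on n vertices: n nodes, no edges.
import networkx as nx

G = nx.empty_graph(5)          # vertices 0..4, edge set = {}
print(G.number_of_nodes())     # 5
print(G.number_of_edges())     # 0
```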
2015-03-29 20:30:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8770423531532288, "perplexity": 421.17662118203145}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131298684.43/warc/CC-MAIN-20150323172138-00187-ip-10-168-14-71.ec2.internal.warc.gz"}
http://strata.opengamma.io/apidocs/com/opengamma/strata/market/curve/class-use/CurveNodeDate.html
## Uses of Class com.opengamma.strata.market.curve.CurveNodeDate

Packages that use CurveNodeDate:

- com.opengamma.strata.market.curve: Definitions of curves.
- com.opengamma.strata.market.curve.node: Curve nodes.

### Uses of CurveNodeDate in com.opengamma.strata.market.curve

Fields declared as CurveNodeDate:

- static CurveNodeDate CurveNodeDate.END: An instance defining the curve node date as the end date of the trade.
- static CurveNodeDate CurveNodeDate.LAST_FIXING: An instance defining the curve node date as the last fixing date of the trade.

Methods that return CurveNodeDate:

- static CurveNodeDate CurveNodeDate.of(LocalDate date): Obtains an instance specifying a fixed date.

Methods that return types with arguments of type CurveNodeDate:

- Class<? extends CurveNodeDate> CurveNodeDate.Meta.beanType()
- BeanBuilder<? extends CurveNodeDate> CurveNodeDate.Meta.builder()

### Uses of CurveNodeDate in com.opengamma.strata.market.curve.node

The following twelve curve node classes all declare the same CurveNodeDate members: FixedIborSwapCurveNode, FraCurveNode, FixedOvernightSwapCurveNode, IborIborSwapCurveNode, FxSwapCurveNode, TermDepositCurveNode, OvernightIborSwapCurveNode, FixedInflationSwapCurveNode, IborFixingDepositCurveNode, ThreeLegBasisSwapCurveNode, IborFutureCurveNode and XCcyIborIborSwapCurveNode. For each of these classes:

- CurveNodeDate getDate(): Gets the method by which the date of the node is calculated, defaulted to 'End'.
- MetaProperty<CurveNodeDate> Meta.date(): The meta-property for the date property.
- Builder date(CurveNodeDate date), on the class's Builder: Sets the method by which the date of the node is calculated, defaulted to 'End'.
- withDate(CurveNodeDate date): Returns a copy of this node with the specified date.
2018-04-22 12:35:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21297112107276917, "perplexity": 3362.2394959572966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945596.11/warc/CC-MAIN-20180422115536-20180422135536-00362.warc.gz"}
https://www.gu.se/forskning/publikation?publicationId=131704
# Non-Permutation Invariant Borel Quantifiers

Paper in proceedings

Authors: Fredrik Engström, Philipp Schlicht

Workshop on Logic, Language and Computation & The 9th International Conference on Logic and Cognition 2010

Institutionen för filosofi, lingvistik och vetenskapsteori (Department of Philosophy, Linguistics and Theory of Science)

Language: English

www.math.helsinki.fi/logic/sellc-20...

Keywords: Logical constants, Borel, Generalized quantifiers. Subject areas: Mathematical logic, Logic

## Abstract

Countable models in a given countable relational signature $\tau$ can be represented as elements of the logic space $$X_{\tau}=\prod_{R\in \tau} 2^{\mathbb{N}^{a(R)}}$$ where $a(R)$ denotes the arity of the relation $R$. The Lopez-Escobar theorem states that any invariant Borel subset of the logic space is defined by a formula in $\mathcal{L}_{\omega_1\omega}$. By generalizing Vaught's proof of the theorem to sets of countable structures invariant under the action of a closed subgroup of the permutation group of the natural numbers, we get the following:

Proposition. Suppose $G\leq S_{\infty}$ is closed and $\mathcal{F}$ is the family of orbits of $G$. Then every $G$-invariant Borel subset of $X_{\tau}$ is definable in $\mathcal{L}_{\omega_{1}\omega}(\mathcal{F})$.

A generalized quantifier of type $\langle k\rangle$ on the natural numbers is a subset of $2^{\mathbb{N}^k}$. We consider the logic $\mathcal{L}_{\omega_1\omega}(Q)$. This is $\mathcal{L}_{\omega_1\omega}$ augmented by the quantifier $Q$, where the formula $Qx\,\varphi(x)$ has the fixed interpretation $\{x\in\mathbb{N}^k:\varphi(x)\}\in Q$. We study non-permutation-invariant generalized quantifiers on the natural numbers and prove a variant of the Lopez-Escobar theorem for a subclass, called good quantifiers, of the quantifiers which are closed and downwards closed.

Proposition. Suppose $Q$ is good. Then a subset of $X_{\tau}$ is Borel and $\operatorname{Aut}(Q)$-invariant if and only if it is definable in $\mathcal{L}_{\omega_{1}\omega}(Q)$.

Moreover, for every closed subgroup $G$ of the symmetric group $S_{\infty}$, there is a closed binary quantifier $Q$ such that the $G$-invariant subsets of the space of countable structures are exactly the $\mathcal{L}_{\omega_1\omega}(Q)$-definable sets.

Proposition. Suppose $G$ is a closed subgroup of $S_{\infty}$. There is a good binary quantifier $Q_G$ with $G=\operatorname{Aut}(Q_G)$.

We show that there is a version of the Lopez-Escobar theorem for clopen quantifiers and for finite boolean combinations of principal quantifiers (a quantifier is principal if it is of the form $Q_A=\{X\subseteq \mathbb{N}^k: A\subseteq X\}$).

Proposition. Suppose $Q$ is clopen or a finite boolean combination of principal quantifiers. Then a subset of $X_{\tau}$ is Borel and $\operatorname{Aut}(Q)$-invariant if and only if it is definable in $\mathcal{L}_{\omega_{1}\omega}(Q)$.

This is joint work with Philipp Schlicht.
2020-08-12 04:51:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.966666579246521, "perplexity": 423.3518224170823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738864.9/warc/CC-MAIN-20200812024530-20200812054530-00307.warc.gz"}
https://worldbuilding.stackexchange.com/questions/47457/how-would-a-nuclear-blast-on-the-moon-affect-its-orbit
# How would a nuclear blast on the moon affect its orbit? [duplicate]

This question already has an answer here.

Something I'm contemplating: if one faction destroys a large structure of an enemy faction with a nuclear device equivalent to a WWII atomic bomb on the moon, would the force of the explosion be enough to push the moon off its orbit to some extent?

## marked as duplicate by Hohmannfan, Frostfyre, Vincent, Brythan, JDługosz Jul 17 '16 at 6:00

This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.

• This is an example of a scale error. Teaspoons can shift water, so can you empty the ocean with a teaspoon? – JDługosz Jul 16 '16 at 4:17

## 3 Answers

There would be no measurable effect. Little Boy had a yield of $6.3\times10^{13}$ joules. Even if you could convert all of that energy into kinetic energy of the $7.3\times10^{22}$ kg Moon, it would speed it up or slow it down by only about 0.00004 meters per second.

• Little Boy was a comparatively small and inefficient nuclear warhead, though. But even the biggest one ever detonated would not have a measurable impact on the Moon's orbit. – Philipp Jul 15 '16 at 23:43

No, the force of an atomic blast isn't enough to significantly alter the Moon's orbit. Even the combined force of humanity's current nuclear stockpile is only enough to offset the Moon's orbit by an unimaginably small amount. Here's an answer to a similar question if you want more information: https://worldbuilding.stackexchange.com/a/47418/21725

It wouldn't. A lot of people seem to think nukes are cosmically powerful, likely because of how devastating they are to our structures. But compared to a celestial body it's nothing: you'd need a substantial volume of antimatter explosives to destabilise such a large body, and even then doing so would simply rupture the moon into pieces.
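A quick numerical check of the first answer's figures, as a minimal Python sketch. The yield and lunar mass come from that answer; the Moon's mean orbital speed (about 1.02 km/s) is a standard value. Treating the entire yield as kinetic energy is deliberately the most generous possible coupling, so this is an upper bound:

```
import math

E = 6.3e13       # Little Boy yield, joules (from the answer above)
m = 7.3e22       # mass of the Moon, kg (from the answer above)
v_orbit = 1022.0 # mean orbital speed of the Moon, m/s (standard value)

# Upper bound: every joule of yield becomes kinetic energy of the Moon.
dv = math.sqrt(2.0 * E / m)

print(f"delta-v upper bound: {dv:.2e} m/s")        # ~4.2e-05 m/s
print(f"relative to orbit  : {dv / v_orbit:.2e}")  # ~4.1e-08
```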
2019-11-20 07:29:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43012505769729614, "perplexity": 1435.8614595789263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670512.94/warc/CC-MAIN-20191120060344-20191120084344-00419.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-2-linear-equations-and-functions-2-4-write-equations-of-lines-2-4-exercises-problem-solving-page-103/52
## Algebra 2 (1st Edition)

Published by McDougal Littell

# Chapter 2 Linear Equations and Functions - 2.4 Write Equations of Lines - 2.4 Exercises - Problem Solving - Page 103: 52

See below.

#### Work Step by Step

The whole garden is $16\cdot25=400$ square feet. If we have $x$ tomato plants and $y$ pepper plants (each tomato plant occupies 8 square feet and each pepper plant 5), then the equation for the occupied area is $8x+5y=400$. Thus, if $x=15$, then $8\cdot15+5y=400$, so $120+5y=400$, giving $5y=280$ and $y=56$.
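As a check, the same step can be solved symbolically; a minimal sketch in Python with SymPy, with the plant-area coefficients read off the equation above:

```
from sympy import symbols, Eq, solve

x, y = symbols("x y", positive=True)

area = Eq(8*x + 5*y, 16*25)        # occupied area equals the 400 sq ft garden
print(solve(area.subs(x, 15), y))  # [56]
```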
2019-12-13 10:08:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7292605042457581, "perplexity": 2039.3823790571146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540553486.23/warc/CC-MAIN-20191213094833-20191213122833-00095.warc.gz"}
http://blogs.scienceforums.net/blandrounds/category/pseudoscience/
## Archive for the 'Pseudoscience' Category ### Thoughts of arson It is not often I am compelled to consider arson, but the jam-packed reflexology station at the Florida State Fair came very close to producing such thoughts. At the entrance to this shrine to pseudoscience was a poster-sized version of this image:
2013-05-25 22:27:38
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8065590262413025, "perplexity": 6243.895521468358}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706470197/warc/CC-MAIN-20130516121430-00077-ip-10-60-113-184.ec2.internal.warc.gz"}
https://ask.sagemath.org/question/8474/evenly-space-points-along-a-parametric-curve/
# Evenly space points along a parametric curve?

Are there arc-length parametrization functions hidden somewhere in Sage? I have some 3D parametric curves (smooth) of length L along which I would like to put n dots at regular intervals.

What I've been doing so far is using numerical integration to find the arc-length parameter, and then using find_root to find the positions of the dots (spaced by arc length L/n). This is pretty slow, and has the further limitation that I need to specify a region on which find_root should work. If the parametrization is really uneven, it's tough to develop a good initial estimate for where to look.

So, does anyone have other ideas for doing this? Thanks!

UPDATE: Here are some examples -- they're different fibers in the Hopf fibration, and are given by r = (rx, ry, rz).

Example 1:

```
rx(t) = -0.309*cos(t)*arccos(0.951*cos(-t - 1.57))/(sqrt(-0.904*cos(-t - 1.57)^2 + 1)*pi)
ry(t) = -0.309*sin(t)*arccos(0.951*cos(-t - 1.57))/(sqrt(-0.904*cos(-t - 1.57)^2 + 1)*pi)
rz(t) = 0.951*sin(-t - 1.57)*arccos(0.951*cos(-t - 1.57))/(sqrt(-0.904*cos(-t - 1.57)^2 + 1)*pi)
```

Example 2:

```
rx(t) = -0.707*cos(t)*arccos(0.707*cos(-t - 1.57))/(sqrt(-0.5*cos(-t - 1.57)^2 + 1)*pi)
ry(t) = -0.707*sin(t)*arccos(0.707*cos(-t - 1.57))/(sqrt(-0.5*cos(-t - 1.57)^2 + 1)*pi)
rz(t) = 0.707*sin(-t - 1.57)*arccos(0.707*cos(-t - 1.57))/(sqrt(-0.5*cos(-t - 1.57)^2 + 1)*pi)
```

Could you give an example of a typical curve that you're considering? (2011-11-15 15:19:32 +0200)

Unless a given curve has special properties (say, an ellipse) I don't know that there is a better way in general. Would it work for your needs if we did it this way but faster? (2011-11-16 07:39:13 +0200)

Sure, faster would definitely be good. I tried writing some Cython functions myself, but couldn't really get speed improvements. (2011-11-16 08:09:51 +0200)

Hi, I used some numerical integration (by hand), without find_root. It works quite well and quite fast.

```
x(t) = 2*cos(t) - cos(2*t)
y(t) = 2*sin(t) - sin(2*t)
dx(t) = x.derivative(t)
dy(t) = y.derivative(t)

# fast numerical evaluation of the speed |r'(t)|
arc_length = fast_callable(sqrt(dx(t)**2 + dy(t)**2), vars=(t,), domain=RR)
```

First I make an approximation of the total length of my curve (a left Riemann sum with step 0.001):

```
total_length = 0.0
for s in xsrange(0, 2*pi, 0.001):
    total_length += arc_length(s) * 0.001
```

And then it is possible to store equally spaced points (here 20):

```
nb_pts = 20
step = total_length / nb_pts
length = 0.
next_pt = step
L = [(x(0), y(0))]
for s in xsrange(0, 2*pi, 0.001):
    length += arc_length(s) * 0.001
    if length >= next_pt:          # crossed the next arc-length target
        L.append((x(s), y(s)))
        next_pt += step
```

And

```
parametric_plot((x(t), y(t)), (t, 0, 2*pi)) + point2d(L, color='red', pointsize=20)
```
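For the 3D fibers in the question, the same cumulative-arc-length idea can be vectorized outside Sage. A sketch with NumPy/SciPy, assuming rx, ry, rz are vectorized callables like the ones above (note scipy.integrate.cumulative_trapezoid is named cumtrapz in older SciPy releases):

```
import numpy as np
from scipy.integrate import cumulative_trapezoid

def evenly_spaced(rx, ry, rz, t0, t1, n, samples=20000):
    """Return n points equally spaced in arc length along r(t), t in [t0, t1]."""
    t = np.linspace(t0, t1, samples)
    x, y, z = rx(t), ry(t), rz(t)
    # Speed |r'(t)| from finite differences of the sampled coordinates.
    speed = np.sqrt(np.gradient(x, t)**2 + np.gradient(y, t)**2 + np.gradient(z, t)**2)
    s = cumulative_trapezoid(speed, t, initial=0.0)   # arc length s(t), increasing
    # Invert s(t) by interpolation: parameters at arc lengths 0, L/n, 2L/n, ...
    targets = np.linspace(0.0, s[-1], n, endpoint=False)
    t_dots = np.interp(targets, s, t)
    return np.column_stack((rx(t_dots), ry(t_dots), rz(t_dots)))

# Example 2 from the question, rewritten with NumPy functions:
rx = lambda t: -0.707*np.cos(t)*np.arccos(0.707*np.cos(-t - 1.57))/(np.sqrt(-0.5*np.cos(-t - 1.57)**2 + 1)*np.pi)
ry = lambda t: -0.707*np.sin(t)*np.arccos(0.707*np.cos(-t - 1.57))/(np.sqrt(-0.5*np.cos(-t - 1.57)**2 + 1)*np.pi)
rz = lambda t:  0.707*np.sin(-t - 1.57)*np.arccos(0.707*np.cos(-t - 1.57))/(np.sqrt(-0.5*np.cos(-t - 1.57)**2 + 1)*np.pi)

pts = evenly_spaced(rx, ry, rz, 0.0, 2*np.pi, 20)
```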
2021-08-04 00:50:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2611686885356903, "perplexity": 4563.358152845716}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154486.47/warc/CC-MAIN-20210803222541-20210804012541-00215.warc.gz"}
https://www.codecogs.com/library/finance/accounting/pv.php
# pv

Return the present value of an investment.

```
double pv(double rate, int n, double p, double vn,
          PaymentPoint when = pp_EndOfPeriod);
```

This function calculates the present value, $v_0$, for a sequence of $n$ future payouts $p$ followed by a final payment $v_n$:

If rate = 0,
$$v_0 + p\,n + v_n = 0$$

If rate > 0 and payments are received at the start of each period,
$$v_0(1+rate)^n + p_1(1+rate)^{n} + p_2(1+rate)^{n-1} + \dots + p_n(1+rate) + v_n = 0$$

while for payments received at the end of each period,
$$v_0(1+rate)^n + p_1(1+rate)^{n-1} + p_2(1+rate)^{n-2} + \dots + p_n + v_n = 0$$

The code also uses an enumerated type PaymentPoint, with the following values:

• pp_EndOfPeriod = 0
• pp_StartOfPeriod = 1

### Example 1

A lady wins a $10 million lottery. The money is to be paid out at the end of each year in $500,000 payments for 20 years. The current treasury bill rate of 6% is used as the discount rate.

```
#include <stdio.h>
#include <codecogs/finance/accounting/pv.h>

int main(int argc, char *argv[])
{
    double d = Finance::Accounting::pv(0.06, 20, 500000, 0, Finance::Accounting::pp_EndOfPeriod);
    printf("The present value of the $10 million prize is: %7.2f\n", d);
    return 0;
}
```

Output:

```
The present value of the $10 million prize is: 5734960.61
```

### References

http://www.vni.com/products/imsl/jmsl/v30/api/com/imsl/finance/Finance.html

### Parameters

rate is the interest rate - assumed constant.
n is the number of periods over which to calculate.
p are the payouts from the investment made either at the start or end of each period (as defined by when).
vn The future value of the investment.
when The point in each period when the payment is made, either pp_StartOfPeriod or pp_EndOfPeriod.

### Authors

James Warren (May 2005)
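The library's actual source is behind a licence wall, but the equations above largely pin the function down. A minimal re-derivation sketch in C++ (my own code, not the CodeCogs implementation); it assumes all $n$ payouts equal $p$, and the final sign flip is an assumption chosen so that the result matches the documented example output:

```
#include <cmath>
#include <cstdio>

enum PaymentPoint { pp_EndOfPeriod = 0, pp_StartOfPeriod = 1 };

// Sketch: solve the equations above for v0, assuming all n payouts equal p.
double pv_sketch(double rate, int n, double p, double vn,
                 PaymentPoint when = pp_EndOfPeriod)
{
    if (rate == 0.0)
        return -(p * n + vn);                   // v0 + p*n + vn = 0

    double g = std::pow(1.0 + rate, n);         // growth factor over n periods
    double fv_payments = p * (g - 1.0) / rate;  // future value of the payout stream
    if (when == pp_StartOfPeriod)
        fv_payments *= (1.0 + rate);            // each payout accrues one extra period

    double v0 = -(fv_payments + vn) / g;        // root of v0*g + fv_payments + vn = 0
    return -v0;  // assumed sign convention: positive PV of receipts, as in Example 1
}

int main()
{
    // Reproduces Example 1: 20 end-of-year payments of 500,000 at 6%.
    std::printf("%.2f\n", pv_sketch(0.06, 20, 500000.0, 0.0));  // 5734960.61
    return 0;
}
```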
2022-12-06 16:42:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37838423252105713, "perplexity": 5664.485710421232}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711111.35/warc/CC-MAIN-20221206161009-20221206191009-00241.warc.gz"}
https://kfp.bitbucket.io/fricas-ug/section-2.4.html
# 2.4 Records

A Record is an object composed of one or more other objects, each of which is referenced with a selector. Components can all belong to the same type or each can have a different type.

The syntax for writing a Record type is

Record(selector1:type1, selector2:type2, ..., selectorN:typeN)

You must be careful if a selector has the same name as a variable in the workspace. If this occurs, precede the selector name by a single quote.

Record components are implicitly ordered. All the components of a record can be set at once by assigning the record a bracketed tuple of values of the proper length. For example:

r : Record(a:Integer, b: String) := [1, "two"]

$\mathrm{[a=1,b="two"]}$

Type: Record(a: Integer,b: String)

To access a component of a record r, write the name r, followed by a period, followed by a selector. The object returned by this computation is a record with two components: a quotient part and a remainder part.

u := divide(5,2)

$\mathrm{[quotient=2,remainder=1]}$

Type: Record(quotient: Integer,remainder: Integer)

This is the quotient part.

u.quotient

$2$

Type: PositiveInteger

This is the remainder part.

u.remainder

$1$

Type: PositiveInteger

You can use selector expressions on the left-hand side of an assignment to change destructively the components of a record.

u.quotient := 8978

$8978$

Type: PositiveInteger

The selected component quotient has the value 8978, which is what is returned by the assignment. Check that the value of u was modified.

u

$\mathrm{[quotient=8978,remainder=1]}$

Type: Record(quotient: Integer,remainder: Integer)

Selectors are evaluated. Thus you can use variables that evaluate to selectors instead of the selectors themselves.

s := 'quotient

$\mathrm{quotient}$

Type: Variable quotient

Be careful! A selector could have the same name as a variable in the workspace. If this occurs, precede the selector name by a single quote, as in u.'quotient.

divide(5,2).s

$2$

Type: PositiveInteger

Here we declare that the value of bd has two components: a string, to be accessed via name, and an integer, to be accessed via birthdayMonth.

bd : Record(name : String, birthdayMonth : Integer)

Type: Void

You must initially set the value of the entire Record at once.

bd := ["Judith", 3]

$\mathrm{[name="Judith",birthdayMonth=3]}$

Type: Record(name: String,birthdayMonth: Integer)

Once set, you can change any of the individual components.

bd.name := "Katie"

$\mathrm{"Katie"}$

Type: String

Records may be nested and the selector names can be shared at different levels.

r : Record(a : Record(b: Integer, c: Integer), b: Integer)

Type: Void

The record r has a b selector at two different levels. Here is an initial value for r.

r := [ [1,2], 3 ]

$\mathrm{[a=[b=1,c=2],b=3]}$

Type: Record(a: Record(b: Integer,c: Integer),b: Integer)

This extracts the b component from the a component of r.

r.a.b

$1$

Type: PositiveInteger

This extracts the b component from r.

r.b

$3$

Type: PositiveInteger

You can also use spaces or parentheses to refer to Record components. This is the same as r.a.

r(a)

$\mathrm{[b=1,c=2]}$

Type: Record(b: Integer,c: Integer)

This is the same as r.b.

r b

$3$

Type: PositiveInteger

This is the same as r.b := 10.

r(b) := 10

$10$

Type: PositiveInteger

Look at r to make sure it was modified.

r

$\mathrm{[a=[b=1,c=2],b=10]}$

Type: Record(a: Record(b: Integer,c: Integer),b: Integer)
2019-02-21 09:59:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3201993703842163, "perplexity": 4313.539789015869}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247503844.68/warc/CC-MAIN-20190221091728-20190221113728-00570.warc.gz"}
https://www.nature.com/articles/s41467-022-27979-5?error=cookies_not_supported&code=4771f9d7-2f78-4396-ba4e-21c04f684898
# Stratification constrains future heat and carbon uptake in the Southern Ocean between 30°S and 55°S

## Abstract

The Southern Ocean between 30°S and 55°S is a major sink of excess heat and anthropogenic carbon, but model projections of these sinks remain highly uncertain. Reducing such uncertainties is required to effectively guide the development of climate mitigation policies for meeting the ambitious climate targets of the Paris Agreement. Here, we show that the large spread in the projections of future excess heat uptake efficiency and cumulative anthropogenic carbon uptake in this region is strongly linked to the models' contemporary stratification. This relationship is robust across two generations of Earth system models and is used to reduce the uncertainty of future estimates of the cumulative anthropogenic carbon uptake by up to 53% and the excess heat uptake efficiency by 28%. Our results highlight that, for this region, an improved representation of stratification in Earth system models is key to constrain future carbon budgets and climate change projections.

## Introduction

The Southern Ocean is a dynamically complex region. The strong wind-driven Antarctic Circumpolar Current drives a residual overturning circulation, consisting of an upwelling of circumpolar deep waters around the Polar Front, a residual northward transport with gradual water mass transformation to Antarctic Intermediate and Mode Waters (IW and MW), and finally subduction under subtropical waters. The upwelled water mass is cold and undersaturated with respect to anthropogenic carbon, allowing it to efficiently absorb large amounts of atmospheric excess heat and anthropogenic carbon1,2,3. Therefore, the subduction of IW and MW masses, occurring approximately between 30°S and 55°S, provides one of the major gateways carrying anthropogenic carbon (Cant) and excess heat (Hexcess) into the interior ocean (e.g. refs. 4,5,6,7), where they stay isolated from the atmosphere on decadal to millennial timescales8,9. For the historical period from 1850 to 2005, it has been estimated that 43% of Cant and 75% of Hexcess have entered the ocean south of 30°S10, although the Southern Ocean accounts for only 30% of the total ocean surface area. Over the same period, the region between 30°S and 55°S is responsible for 27 and 50% of the global ocean Cant and Hexcess uptake despite covering only 21% of the world ocean, according to the models analysed in this study. Future projections of Cant and Hexcess uptake in the region between 30°S and 55°S from the last two generations of Earth system models (ESMs) remain uncertain11,12 (Fig. 1) because ESMs struggle to capture the complex dynamical and biogeochemical processes in this region13,14,15. Despite improvements in model performance in successive phases of the Coupled Model Intercomparison Project (CMIP), this progress might be too slow to warrant significantly reduced uncertainty of ESM projections within the next decade16.
Since this is the time horizon for framing climate mitigation policies that allow for meeting stringent climate targets17,18, more efforts have to be put into model analysis, i.e., understanding the roots of this uncertainty and reducing uncertainty in key climate metrics such as the projected carbon and heat uptake. The technique of emergent constraints provides a means to constrain a model ensemble through an emergent strong statistical relationship between an observable quantity of current climate and future changes in a variable of interest19,20. It has been used to constrain several aspects of the terrestrial21,22,23 and marine24,25,26 carbon cycle. In this work, we identify a key mechanism that explains the large inter-model uncertainty in future projections of Hexcess and Cant uptake between 30°S and 55°S across both CMIP5 and CMIP6 ESMs. We focus on the region between 30°S and 55°S since this is the area where intermediate and mode waters are formed and subducted (Methods). We find that the climatological stratification state in this region is tightly related to this subduction and we use this finding to robustly constrain both future Hexcess and Cant uptake. We note that our definition of Cant and Hexcess includes changes induced by climate change such as changes in ocean circulation, wind conditions and primary production (Methods).

## Results

### Linking stratification to oceanic Cant and Hexcess uptake

We find that stratification biases in CMIP5 and CMIP6 ESMs in the region between 30°S and 55°S are strongly related to the amount of their future uptake of excess heat per degree of transient global warming (Hexcess uptake efficiency) and anthropogenic carbon (Fig. 2). Models showing a positive density bias that increases with depth relative to the surface bias (indicating stronger-than-observed stratification) tend to simulate a low uptake of Cant and low Hexcess uptake efficiency. The opposite is true for models that show an increasingly negative density bias profile relative to their surface bias (indicating weaker-than-observed stratification). In order to develop an emergent constraint from this apparent relationship, we need to capture the characteristics of the vertical structure of these density profiles in one metric. To this end, we use a stratification index27, which is the cumulative sum of density differences with respect to surface density (Methods), here applied over the upper 2000 m of the water column. This depth range has been chosen as it encompasses the MW and IW formation and subduction pathways in CMIP ESMs7,28, and modern observational coverage is good, since it is covered by standard ARGO floats. We identify the core of IW in each ESM by determining the depth of the salinity minimum at 30°S (ref. 28), and we find that the stratification index is highly correlated to both (1) the depth at which IW are subducted (R = −0.83) and (2) the subducted volume of IW and overlying MW (R = −0.77, both shown in Supplementary Fig. 1). Therefore, consistent with a previous study29, we find that the modelled volume of MW and IW formation is of high importance for determining the efficiency of Cant and Hexcess sequestration. However, the stratification index has the clear advantage of being straightforward to estimate from model output while the identification of water masses is more challenging and model-dependent28.
We find that ESMs with a high stratification index and correspondingly low Cant uptake typically simulate lower uptake in the region around 55°S, but more importantly, the northward extent of their uptake is much more limited compared to models with a low stratification index (Fig. 3). The latter models project accumulated uptake of more than 100 mol C m−2 in large regions north of 40°S in the Pacific, Indian and Atlantic sectors, where low Cant-uptake models show uptake below 50 mol C m−2 (see also Supplementary Fig. 2 for a zonal mean view of Cant uptake and Supplementary Fig. 3 for an equivalent to Fig. 3 but for CMIP6 models). The higher (lower) Cant uptake simulated by ESMs with low (high) stratification index is connected to a steeper (shallower) surface-to-depth gradient of the anthropogenic component of dissolved inorganic carbon (DICant) concentration along the vertical zonal mean section between 30°S and 55°S (Fig. 3c, d). A similar surface-to-depth feature can also be seen in the warming efficiency (Fig. 3e, f). It is physically plausible for a model that exhibits a stronger stratification than observed to take up less carbon and heat than a model with weaker-than-observed stratification. However, the fact that present-day stratification is related to projected future uptakes across our model ensemble is not obvious, since stratification is changing with progressing climate change. We find that, in the region between 30°S and 55°S, the projected stratification bias of models relative to each other remains largely unchanged, i.e., a model that simulates a stronger contemporary stratification than the multi-model mean will do so for future time periods, too. The correlation between the mean present-day (1986–2005) stratification index and the mean future (2080–2099) stratification index is 0.91 (Supplementary Fig. 4). The target variables for our constraints are (i) the cumulative ocean Cant uptake [Pg C], as this minimises interannual and decadal variability (compared to annual carbon fluxes) while preserving trends, and (ii) the 20-year average of ocean Hexcess uptake efficiency [TW °C−1], defined as the ratio between ocean Hexcess uptake rate [TW] and global atmospheric surface warming [°C] (ref. 30). The latter choice is motivated by the fact that the simulated atmospheric surface temperature that forces the oceanic heat uptake rates depends on each model's response to radiative forcing. We, therefore, normalise the excess heat uptake by global surface warming31,32,33. Additional information on the validity of our findings for the Hexcess uptake rate [TW] without normalisation is presented in the supplement (Supplementary Fig. 5). In our analysis of Hexcess uptake efficiency, we merge the CMIP5 and CMIP6 ensembles because both rely on scenarios with the same end-of-century radiative forcing (RCP8.5 and SSP5-8.5, respectively). Such a merger is not meaningful for the Cant uptake as the CMIP6 SSP5-8.5 scenario reaches considerably higher end-of-century atmospheric CO2 concentrations than the CMIP5 RCP8.5 scenario34. The radiative forcing due to higher CO2 concentrations in SSP5-8.5 is compensated by lower concentrations of other greenhouse gases, mainly methane and nitrous oxide.

### Reducing uncertainties in ocean uptake projections

Significant negative correlations exist between the simulated present-day water-column stratification index and both cumulative Cant uptake and Hexcess uptake efficiency at the end of the century (Fig. 4a–c).
We note that the correlations between the stratification index and Hexcess are still significant but less robust without normalising the Hexcess uptake (Supplementary Fig. 5). These correlations indicate that a more stratified ocean absorbs less Cant and Hexcess. For Cant uptake, the high correlation with contemporary stratification is very stable over time (Fig. 4g), whereas for Hexcess uptake efficiency the correlation is initially low but gets stronger with time (see Discussion) and reaches values of 0.7 (P = 0.003) for CMIP5 and 0.8 (P < 0.001) for CMIP6 at the end of the century (Fig. 4h). The WOA13-based stratification index and its uncertainty are estimated as 64.08 ± 0.58 kg m−3 (Methods). This value is close to the CMIP5 and CMIP6 ensemble mean of 64.72 ± 3.80 kg m−3 and 65.02 ± 1.70 kg m−3, respectively. However, the model spread around the mean is substantial for both CMIP5 and CMIP6, with CMIP5 having a model uncertainty that is more than twice as large as the one of CMIP6. Based on the high correlations for both the CMIP5 and CMIP6 ensembles, we apply the emergent constraint approach (Methods) to constrain the uncertainty in future projections of Cant uptake and Hexcess uptake efficiency. We assume that all models are independent as done in other studies35. We note that this is a limitation of our study as some of these ESMs share components and code. Likewise, many CMIP6 models have been developed starting from their predecessor CMIP5 models such that the two ensembles are not entirely independent. An alternative approach could be based on an adaptive model weighting scheme or an ensemble reduction36,37, but this is beyond the scope of our study. After applying the observational constraint (Methods), the uncertainties of the cumulative Cant uptake between 30°S and 55°S are considerably reduced by 53 and 32% for CMIP5 and CMIP6, respectively. The associated best estimate of cumulative Cant uptake increases by 3 and 6% for CMIP5 and CMIP6 respectively compared to the prior-constraint estimate (Table 1). Similarly, the after-constraint uncertainty of Hexcess uptake efficiency for the combined CMIP5/CMIP6 ensemble is strongly reduced by 28% and the associated estimate increases by 7%. ## Discussion Our emergent constraint identifies a strong link between contemporary stratification in CMIP5/6 models and their ability to continuously take up Cant and Hexcess under a high-CO2 future scenario in the Southern Ocean between 30°S and 55°S. The ESMs’ stratification index correlates strongly with (i) the simulated depth at which the IW (and the overlying MW) are subducted and (ii) the simulated subducted water volume, here loosely referred to as the volume above the IW core (both shown in Supplementary Fig. 1). This suggests that a deeper position of the IW core is accompanied by a larger subduction volume, and hence a more efficient Cant and Hexcess sequestration in our model ensemble. This importance of the volume of ventilated waters for future Cant uptake in the Southern Ocean has been found in an independent study29. We note that the relationship between contemporary stratification and Cant and Hexcess uptake worsens when extending the region of interest south of 55°S and outside of the area of IW and MW subduction. Here, the Cant and Hexcess uptake is sensitive to other processes, such as sea-ice dynamics and bottom-water formation at the southward limb of Southern Ocean overturning circulation, and we find no direct link to the contemporary stratification (Supplementary Fig. 6). 
It has been shown before that formation of mode and intermediate waters is key for carbon and heat uptake5,38, and that this water mass formation appears to be linked to the simulated winter mixed layer depths of CMIP5 models14. We find, however, that the relationship between future cumulative Cant and Hexcess uptake efficiency and mixed layer depth in our region is only weak. There is no significant correlation between annual mean or maximum winter mixed layer depth and carbon and heat uptakes (Supplementary Figs. 7, 8). Deep winter mixing in our region is thought to be the main contributor to carbon and heat subduction, and therefore such relatively low correlations might seem surprising. A previous study14 has shown that stratification biases in ESMs contribute to setting the maximum winter mixed layer depth, and this effect is also captured by our constraint. In addition, other processes (such as diapycnal mixing and Ekman pumping) are also important for the eventual subduction of carbon and heat away from the seasonally varying base of the mixed layer39,40. A robust relationship can be found when taking the stratification of the upper 2000 m of the water column into account, but we note that our constraint is not sensitive to the exact lower bound of this depth range when it is varied between 1000 and 2000 m. We translate these findings into a physically plausible and robust emergent constraint for future projections of two generations of ESMs. It provides us with the unique possibility to constrain the model uncertainty for two highly important quantities at the same time. For the CMIP5/6 generation of models, we find that the simulated contemporary stratification between 30°S and 55°S is highly correlated to its projected future values across CMIP5/6 (R correlation of 0.91). Models with a strong stratification store more of the excess heat in the upper ocean, thereby creating stronger stratification changes, while the opposite is true for weakly stratified models41. This mechanistic explanation suggests that ESMs that simulate a realistic contemporary density profile are more reliable in simulating future density profiles. The predictor of our emergent constraint, i.e. the contemporary stratification, hence also constrains the future stratification and, as demonstrated here, the future cumulative Cant uptake and the Hexcess uptake efficiency. A recent study indicates that projected patterns of heat storage are primarily dictated by the preindustrial ocean circulation42. Contemporary oceanic storage of anthropogenic carbon and excess heat have distinct patterns43,44,45 and redistributed heat and carbon are projected to have opposing signs, leading to a more horizontal structure of heat storage than seen in the patterns of carbon storage in the Southern Ocean46. However, these differences are reduced in future projections of the Southern Ocean as the spatially averaged added heat becomes dominant over the redistribution of heat due to circulation changes42,46. This is consistent with our findings, specifically with the initially low but increasing correlation between stratification and excess heat uptake efficiency (Fig. 4e). We note that other quantities like the nutrient cycle and primary productivity are also closely linked to stratification, e.g. it has been shown that CMIP5 models with a stronger bias in contemporary surface stratification tend to predict larger climate-induced declines in surface nutrients and net primary production47. 
Due to the high importance of contemporary stratification biases for future marine projections, it is essential to reduce them. Our results identify significant stratification biases for most CMIP5/6 models in the areas of MW and IW formation, but also that the representation of stratification in the Southern Ocean between 30°S and 55°S has improved between CMIP5 and CMIP6. In fully coupled ESMs, it remains difficult to identify the ultimate source of biases. The emergent-constraint method is only able to identify systematic biases associated with the variables used in the emergent-constraint relationships. It does not highlight missing processes or dynamical biases common in ESMs which are not directly related to the observable processes or variables used in the constrained process35,37,48. As many of the model biases in the Southern Ocean temperature and salinity structure are concentrated in recently ventilated layers or in the deep Atlantic, they appear to stem from inaccuracies in the North Atlantic Deep Water formation regions or in the surface climate over the Southern Ocean16. Recently, a strong emergent constraint relationship has been found between surface salinity and cumulative Cant uptake in the Southern Ocean49. In combination with our constraint, this indicates that surface salinity is a fundamental player setting the Southern Ocean stratification. Here, the upper ocean properties like salinity are highly sensitive to a multitude of uncertainties in the sea-ice, ocean, and atmosphere components of an ESM, e.g. westerly jet position, Antarctic sea-ice extent and its potential relation to precipitation, clouds, mixing and transport by eddies. It takes a tremendous effort to model or parameterise all these processes in a realistic manner and a significant reduction of bias is not to be expected within this decade16. For the ocean, it has been found that eddy-induced diffusion is an important factor in setting the simulated stratification41. Hence, a better representation of eddies, be it through increased eddy-resolving resolution50 or through improved eddy parameterisations51 will very likely contribute to reducing stratification biases in ESMs. Future studies should elucidate processes that could contribute to the bias and large spread in the stratification index simulated across ESMs, for instance, our stratification index could be influenced by the water mass properties of the circumpolar deep water, which is formed in the North Atlantic. A better understanding of the linkage between North Atlantic climate representation and the Southern-Ocean water-mass properties across ESMs could be valuable. Ensembles of ESMs remain our only tool at hand to investigate the response of the Earth system to future scenarios of anthropogenic forcing. Reducing the large uncertainties arising from, among others, the representation of Southern Ocean dynamics in these models remains a challenge. The identification of emergent constraints, such as the one presented here, are invaluable as they can help to guide model development and, importantly, to speed up the provision of critical knowledge on expected future changes. ## Methods ### CMIP5/6 ensembles Our ensembles (summarised in Supplementary Tables 1, 2) are based on 17 CMIP5 and 16 CMIP6 ESMs used in the Fifth and Sixth Assessment Report of the Intergovernmental Panel on Climate Change, respectively52. 
INM-CM4 has been excluded from our analysis because the model shows an outlying large density bias for both Intermediate Water (IW) and Mode Water (MW)28. We use a single ensemble member (r1i1p1(f1) or equivalent) per model. The selected ESMs provide full periods of the following three standard CMIP5 (CMIP6) experiments: piControl, historical and RCP8.5 (SSP5-8.5). For our study, ocean Hexcess uptake, ocean Cant uptake and global atmospheric surface warming are calculated using the air-sea heat flux, the air-sea CO2 flux and the surface air temperature, respectively. The anthropogenic or excess component is obtained as the difference between the historical or future scenario and the preindustrial control experiments. Thus, Cant and Hexcess include changes induced by climate change (e.g. changes in ocean circulation, wind conditions, primary production).

### Latitudinal extent of the region considered for the constraint

In our study, we focus on the area where intermediate and mode waters are formed and subducted, as these processes are the main drivers of ocean carbon and heat uptake in the Southern Ocean10,28,38. This area lies between 30°S and 55°S in all CMIP5 and CMIP6 models. 30°S is a commonly used northern boundary for the Southern Ocean and its subduction region5,10,21. The 55°S southern boundary is chosen to exclude the influence of sea-ice on air-sea fluxes, which would complicate the uptake-stratification relationship (Supplementary Fig. 9). According to the CMIP5 and CMIP6 zonal wind stress distribution16, the 55°S southern boundary generally excludes the southward limb of the Southern Ocean overturning circulation, which is not related to the subduction process of interest in this study.

### Density calculations and stratification index

We calculated in situ density (ρ) from each ESM's potential temperature and practical salinity (after conversion to absolute salinity and conservative temperature) following TEOS-10 standards53. Three-dimensional ρ fields have been area-weighted and averaged along horizontal surfaces to produce one-dimensional vertical profiles in native (model-dependent) vertical resolution. We use a Stratification Index (SI) based on ref. 27 to characterise the stratification of the water column:

$$\mathrm{SI}=\sum_{i=1}^{10}\left(\rho^{z_i}-\rho^{z_0}\right)$$ (1)

where $z_0$ is the sea surface and $z_i = z_{i-1} + 200\,\mathrm{m}$ for $i = 1, \dots, 10$.

### Probability density functions for the emergent constraints

The prior probability density functions for cumulative Cant uptake and Hexcess uptake efficiency assume that all models are equally likely to be correct and lead to a Gaussian distribution, and so is the probability density function of the observational constraint P(x)21. The probability density function of the constrained estimate P(y) was generated following established methodologies by normalising the product of the conditional probability density function of the emergent relationship P(y|x) and the probability density function of the observational constraint P(x):21,22,25,26

$$P(y\mid x)=\frac{1}{\sqrt{2\pi\sigma_f^2}}\exp\left\{-\frac{(y-f(x))^2}{2\sigma_f^2}\right\}$$ (2)

where x and y are the predictor and the predictand, respectively, and $\sigma_f = \sigma_f(x)$ is the 'prediction error' of the emergent linear regression.
$$P(y)=\int_{-\infty}^{+\infty}P(y\mid x)\,P(x)\,dx$$ (3)

### Observational constraint

The World Ocean Atlas 2013 version 2 (WOA13) annual climatology of ρ (refs. 54,55) is used as an observation-based estimate of the stratification index. The same horizontal area-weighting treatment as for the ESMs is applied to the three-dimensional ρ field of WOA13, leading to a finer vertically-resolved (102 levels) one-dimensional vertical profile. ρ anomaly profiles comparing the ESMs and WOA13 are computed by vertically interpolating the high-resolution WOA13 ρ profile to each model's coarsely-resolved levels. The standard deviation of the WOA13 climatological monthly mean ρ is used as a proxy for the uncertainty around the climatological mean, as such uncertainty is not provided in the WOA13 database26. Standard statistical formulas56 for uncertainty propagation are applied for the three-to-one-dimensional reduction. The SI standard deviation ($\sigma_{\mathrm{SI}}$) of the observational constraint is calculated from the SI formula:

$$\sigma_{\mathrm{SI}}=\sqrt{\sum_{i=1}^{10}\overline{\sigma_{\rho^{z_i}}^{2}}+\overline{\sigma_{\rho^{z_0}}^{2}}}$$ (4)

where $\sigma_{\rho^{z_0}}$ and $\sigma_{\rho^{z_i}}$ are the WOA13 standard deviations.

## Data availability

CMIP5 and CMIP6 outputs are available from the Earth System Grid Federation (ESGF) portals (e.g. https://esgf-data.dkrz.de/). The WOA13 density climatology is available from the National Oceanographic Data Center portal (NODC/NOAA) under https://www.nodc.noaa.gov/OC5/woa13/.

## Code availability

The MATLAB environment was used for statistical processing, model analyses and figure creation. The Gibbs-SeaWater (GSW) Oceanographic Toolbox has been used to convert model sea potential temperature and practical salinity to conservative temperature and absolute salinity, and to calculate in situ density (http://www.teos-10.org/software.htm).

## References

1. Manabe, S., Stouffer, R. J., Spelman, M. J. & Bryan, K. Transient responses of a coupled ocean–atmosphere model to gradual changes of atmospheric CO2. Part I. Annual mean response. J. Climate 4, 785–818 (1991). 2. Khatiwala, S., Primeau, F. & Hall, T. Reconstruction of the history of anthropogenic CO2 concentrations in the ocean. Nature 462, 346–349 (2009). 3. Tjiputra, J. F., Assmann, K. & Heinze, C. Anthropogenic carbon dynamics in the changing ocean. Ocean Science 6, 605–614 (2010). 4. Iudicone, D. et al. Water masses as a unifying framework for understanding the Southern Ocean carbon cycle. Biogeosciences 8, 1031–1052 (2011). 5. Sallée, J.-B., Matear, R. J., Rintoul, S. R. & Lenton, A. Localized subduction of anthropogenic carbon dioxide in the Southern Hemisphere oceans. Nat. Geosci. 5, 579–584 (2012). 6. Bopp, L., Lévy, M., Resplandy, L. & Sallée, J.-B. Pathways of anthropogenic carbon subduction in the global ocean. J. Geophys. Res. 42, 6416–6423 (2015). 7. Meijers, A. J. S. The Southern Ocean in the coupled model intercomparison project phase 5. Phil. Trans. R. Soc. A 372, 20130296 (2014). 8. Le Quéré, C. et al. Saturation of the Southern Ocean CO2 sink due to recent climate change. Science 316, 1735–1738 (2007). 9. Anderson, R. F. et al. Wind-driven upwelling in the Southern Ocean and the deglacial rise in atmospheric CO2. Science 323, 1443–1448 (2009). 10. Frölicher, T. L. et al. Dominance of the Southern Ocean in anthropogenic carbon and heat uptake in CMIP5 models. J.
Climate 28, 862–886 (2015). 11. Kessler, A. & Tjiputra, J. The Southern Ocean as a constraint to reduce uncertainty in future ocean carbon sinks. Earth Sys. Dyn. 7, 295–312 (2016). 12. Sallée, J.-B. Southern Ocean warming. Oceanography 31, 52–62 (2018). 13. Downes, S. M. & Hogg, A. M. C. C. Southern Ocean circulation and Eddy compensation in CMIP5 models. J. Climate 26, 7198–7220 (2013). 14. Sallée, J.-B. et al. Assessment of Southern Ocean mixed-layer depths in CMIP5 models: historical bias and forcing response. J. Geophys. Res. Ocean. 118, 1845–1862 (2013). 15. Mongwe, N. P., Vichi, M. & Monteiro, P. M. S. The seasonal cycle of pCO2 and CO2 fluxes in the Southern Ocean: diagnosing anomalies in CMIP5 Earth system models. Biogeosciences 15, 2851–2872 (2018). 16. Beadling, R. L. et al. Representation of Southern Ocean properties across coupled model intercomparison project generations: CMIP3 to CMIP6. J. Climate 33, 6555–6581 (2020). 17. Rogelj, J. et al. Energy system transformations for limiting end-of-century warming to below 1.5 °C. Nat. Clim. Chang. 5, 519–527 (2015). 18. Rogelj, J., Forster, P. M., Kriegler, E., Smith, C. J. & Séférian, R. Estimating and tracking the remaining carbon budget for stringent climate targets. Nature 571, 335–342 (2019). 19. Hall, A. & Qu, X. Using the current seasonal cycle to constrain snow albedo feedback in future climate change. Geophys. Res. Lett. 33, L03502 (2006). 20. Hall, A., Cox, P., Huntingford, C. & Klein, S. Progressing emergent constraints on future climate change. Nat. Clim. Chang. 9, 269–278 (2019). 21. Cox, P. M. et al. Sensitivity of tropical carbon to climate change constrained by carbon dioxide variability. Nature 494, 341–344 (2013). 22. Wenzel, S., Cox, P. M., Eyring, V. & Friedlingstein, P. Emergent constraints on climate-carbon cycle feedbacks in the CMIP5 Earth system models. J. Geophys. Res.: Biogeosciences 119, 794–807 (2014). 23. Wenzel, S., Cox, P. M., Eyring, V. & Friedlingstein, P. Projected land photosynthesis constrained by changes in the seasonal cycle of atmospheric CO2. Nature 538, 499–501 (2016). 24. Goris, N. et al. Constraining projection-based estimates of the future North Atlantic carbon uptake. J. Climate 31, 3959–3978 (2018). 25. Kwiatkowski, L. et al. Emergent constraints on projections of declining primary production in the tropical oceans. Nat. Clim. Chang. 7, 355–358 (2017). 26. Terhaar, J., Kwiatkowski, L. & Bopp, L. Emergent constraint on Arctic Ocean acidification in the twenty-first century. Nature 582, 379–383 (2020). 27. Sgubin, G., Swingedouw, D., Drijfhout, S., Mary, Y. & Bennabi, A. Abrupt cooling over the North Atlantic in modern climate models. Nat. Commun. 8, 14375 (2017). 28. Sallée, J. B. et al. Assessment of Southern Ocean water mass circulation and characteristics in CMIP5 models: Historical bias and forcing response. J. Geophys. Res. Ocean. 118, 1830–1844 (2013). 29. Mignone, B. K., Gnanadesikan, A., Sarmiento, J. L. & Slater, R. D. Central role of Southern Hemisphere winds and eddies in modulating the oceanic uptake of anthropogenic carbon. Geophys. Res. Lett. 33, (2006) 30. Gregory, J. M. & Mitchell, J. F. B. The climate response to CO2 of the Hadley Centre coupled AOGCM with and without flux adjustment. Geophys. Res. Lett. 24, 1943–1946 (1997). 31. Andrews, T., Gregory, J. M., Webb, M. J. & Taylor, K. E. Forcing, feedbacks and climate sensitivity in CMIP5 coupled atmosphere-ocean climate models. Geophys. Res. Lett. 39, 1–7 (2012). 32. Williams, R. G., Ceppi, P. & Katavouta, A. 
Controls of the transient climate response to emissions by physical feedbacks, heat uptake and carbon cycling. Environ. Res. Lett. 15, 0940c1 (2020). 33. Yoshimori, M. et al. A review of progress towards understanding the transient global mean surface temperature response to radiative perturbation. Prog. Earth Planet. Sci. 3, 21 (2016). 34. Meinshausen, M. et al. The shared socio-economic pathway (SSP) greenhouse gas concentrations and their extensions to 2500. Geosci. Model Dev. 13, 3571–3605 (2020). 35. Schlund, M., Lauer, A., Gentine, P., Sherwood, S. C. & Eyring, V. Emergent constraints on equilibrium climate sensitivity in CMIP5: do they hold for CMIP6? Earth Sys. Dyn. 11, 1233–1258 (2020). 36. Sanderson, B. M., Knutti, R. & Caldwell, P. A representative democracy to reduce interdependency in a multimodel ensemble. J. Clim. 28, 5171–5194 (2015). 37. Sanderson, B. M. et al. The potential for structural errors in emergent constraints. Earth Sys. Dyn. 12, 899–918 (2021). 38. Sabine, C. L. The oceanic sink for anthropogenic CO2. Science 305, 367–371 (2004). 39. Williams, R. G. Ocean Subduction. In Encyclopedia of Ocean Sciences (ed. Steele, J. H.) 1982–1993 (Academic press, 2001). 40. Li, Z., England, M. H., Groeskamp, S., Cerovečki, I. & Luo, Y. The origin and fate of subantarctic mode water in the Southern Ocean. J. Phys. Oceanogr. 58, 2951–2972 (2021). 41. Kuhlbrodt, T. & Gregory, J. M. Ocean heat uptake and its consequences for the magnitude of sea level rise and climate change. Geophys. Res. Lett. 39, L18608 (2012). 42. Bronselaer, B. & Zanna, L. Heat and carbon coupling reveals ocean warming due to circulation changes. Nature 584, 227–233 (2020). 43. Banks, H. T. & Gregory, J. M. Mechanisms of ocean heat uptake in a coupled climate model and the implications for tracer based predictions of ocean heat uptake. Geophys. Res. Lett. 33, L07608 (2006). 44. Xie, P. & Vallis, G. K. The passive and active nature of ocean heat uptake in idealized climate change experiments. Clim. Dyn. 38, 667–684 (2012). 45. Winton, M. et al. Connecting changing ocean circulation with changing climate. J. Clim. 26, 2268–2278 (2013). 46. Williams, R. G., Katavouta, A. & Roussenov, V. Regional asymmetries in ocean heat and carbon storage due to dynamic redistribution in climate model projections. J. Clim. 34, 3907–3925 (2021). 47. Fu, W., Randerson, J. T. & Moore, J. K. Climate change impacts on net primary production (NPP) and export production (EP) regulated by increasing stratification and phytoplankton community structure in the CMIP5 models. Biogeosciences 13, 5151–5170 (2016). 48. Eyring, V. et al. Taking climate model evaluation to the next level. Nat. Clim. Chang. 9, 102–110 (2019). 49. Terhaar, J., Frölicher, T. L. & Joos, F. Southern Ocean anthropogenic carbon sink constrained by sea surface salinity, Sci. Adv. 7, eabd5964 (2021). 50. Rackow, T. et al. Sensitivity of deep ocean biases to horizontal resolution in prototype CMIP6 simulations with AWI-CM1.0. Geosci. Model Dev. 12, 2635–2656 (2019). 51. Mak, J., Maddison, J. R., Marshall, D. P. & Munday, D. R. Implementation of a geometrically informed and energetically constrained mesoscale Eddy parameterization in an ocean circulation model. J. Phys. Oceanogr. 48, 2363–2382 (2018). 52. Flato, G. et al. In Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (eds. Stocker, T. F. et al.) (Cambridge Univ. Press, 2013). 53. Feistel, R. 
## Acknowledgements

We acknowledge the World Climate Research Programme, which, through its Working Group on Coupled Modelling, coordinated and promoted CMIP. We thank the climate modelling groups for producing and making available their model output, the Earth System Grid Federation (ESGF) for archiving the data and providing access, and the multiple funding agencies who support CMIP and ESGF. All authors received funding from the Research Council of Norway (RCN) under grant No. 275268 (COLUMBIA) and from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 820989 (COMFORT). The work reflects only the authors' view; the European Commission and their executive agency are not responsible for any use that may be made of the information the work contains. J.S. and J.F.T. also received support from the RCN under grants No. 295046 (KeyCLIM) and 318477 (CE2COAST). We thank J. Terhaar for productive discussions. The analysis was made possible through the resources provided by UNINETT Sigma2, the National Infrastructure for data Storage in Norway (project NS9252K).

## Author information

### Contributions

T.B. performed all calculations and carried out the analysis. T.B., N.G., J.S. and J.F.T. were heavily involved in designing the analysis, interpreting the results and writing the manuscript.

### Corresponding author

Correspondence to Timothée Bourgeois.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

## Peer review

### Peer review information

Nature Communications thanks Paul Halloran, Stephen Rintoul and Richard Williams for their contribution to the peer review of this work.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Bourgeois, T., Goris, N., Schwinger, J. et al. Stratification constrains future heat and carbon uptake in the Southern Ocean between 30°S and 55°S. Nat Commun 13, 340 (2022). https://doi.org/10.1038/s41467-022-27979-5
http://www.mathworks.com/help/ident/ug/estimating-nonlinear-grey-box-models.html?nocookie=true
## Estimating Nonlinear Grey-Box Models

### Specifying the Nonlinear Grey-Box Model Structure

You must represent your system as a set of first-order nonlinear difference or differential equations:

$\begin{array}{l} x^{\dagger}(t)=F\left(t,x(t),u(t),par1,par2,\ldots,parN\right)\\ y(t)=H\left(t,x(t),u(t),par1,par2,\ldots,parN\right)+e(t)\\ x(0)=x0 \end{array}$

where $x^{\dagger}(t)=\frac{dx(t)}{dt}$ for continuous-time representation and $x^{\dagger}(t)=x(t+T_s)$ for discrete-time representation with Ts as the sampling interval. F and H are arbitrary linear or nonlinear functions with Nx and Ny components, respectively. Nx is the number of states and Ny is the number of outputs.

After you establish the equations for your system, create a function or MEX-file. MEX-files, which can be created in C or Fortran, are dynamically linked subroutines that can be loaded and executed by the MATLAB® interpreter. For more information about MEX-files, see MEX-File Creation API.

The purpose of the model file is to return the state derivatives and model outputs as a function of time, states, inputs, and model parameters, as follows:

`[dx,y] = MODFILENAME(t,x,u,p1,p2, ...,pN,FileArgument)`

Tip: The template file for writing the C MEX-file, IDNLGREY_MODEL_TEMPLATE.c, is located in matlab/toolbox/ident/nlident.

The output variables are:

• dx — Represents the right side(s) of the state-space equation(s). A column vector with Nx entries. For static models, dx=[]. For discrete-time models, dx is the value of the states at the next time step x(t+Ts). For continuous-time models, dx is the state derivatives at time t, or $\frac{dx}{dt}$.
• y — Represents the right side(s) of the output equation(s). A column vector with Ny entries.

The file inputs are:

• t — Current time.
• x — State vector at time t. For static models, equals [].
• u — Input vector at time t. For time-series models, equals [].
• p1,p2, ...,pN — Parameters, which can be real scalars, column vectors or two-dimensional matrices. N is the number of parameter objects. For scalar parameters, N is the total number of parameter elements.
• FileArgument — Contains auxiliary variables that might be required for updating the constants in the state equations.

Tip: After creating a model file, call it directly from the MATLAB software with reasonable inputs and verify the output values.

For an example of creating grey-box model files and an idnlgrey model object, see Creating idnlgrey Model Files. For examples of code files and MEX-files that specify model structure, see the toolbox/ident/iddemos/examples folder. For example, the model of a DC motor is described in the files dcmotor_m and dcmotor_c.

### Constructing the idnlgrey Object

After you create the function or MEX-file with your model structure, you must define an idnlgrey object. This object shares many of the properties of the linear idgrey model object. Use the following syntax to define the idnlgrey model object:

`m = idnlgrey('filename',Order,Parameters,InitialStates)`

The idnlgrey arguments are defined as follows:

• 'filename' — Name of the function or MEX-file storing the model structure. This file must be on the MATLAB path when you use this model object for model estimation, prediction, or simulation.
• Order — Vector with three entries [Ny Nu Nx], specifying the number of model outputs Ny, the number of inputs Nu, and the number of states Nx.
• Parameters — Parameters, specified as struct arrays, cell arrays, or double arrays.
• InitialStates — Specified in the same way as parameters. Must be the fourth input to the idnlgrey constructor.

Use pem to estimate your grey-box model.

### Using pem to Estimate Nonlinear Grey-Box Models

You can use the pem command to estimate the unknown idnlgrey model parameters and initial states using measured data. The input-output dimensions of the data must be compatible with the input and output orders you specified for the idnlgrey model. Use the following general estimation syntax:

`m = pem(data,m)`

where data is the estimation data and m is the idnlgrey model object you constructed.

You can pass additional property-value pairs to pem to specify the properties of the model or the estimation algorithm. Assignable properties include the ones returned by the get(idnlgrey) command and the algorithm properties returned by get(idnlgrey, 'Algorithm'), such as MaxIter and Tolerance. For detailed information about these model properties, see the idnlgrey reference page.

### Nonlinear Grey-Box Model Estimation Algorithm Options

The Algorithm property of the model specifies the estimation algorithm, which simulates the model several times by trying various parameter values to reduce the prediction error. The following algorithm properties can affect the quality of the results; for detailed information about these and other model properties, see the idnlgrey reference page.

#### Simulation Method

You can specify the simulation method using the SimulationOptions (struct) fields of the model Algorithm property. System Identification Toolbox™ software provides several variable-step and fixed-step solvers for simulating idnlgrey models. To view a list of available solvers and their properties, type the following command at the prompt:

`idprops idnlgrey algorithm.simulationoptions`

For discrete-time systems, the default solver is 'FixedStepDiscrete'. For continuous-time systems, the default solver is 'ode45'. By default, SimulationOptions.Solver is set to 'Auto', which automatically selects either 'ode45' or 'FixedStepDiscrete' during estimation and simulation, depending on whether the system is continuous or discrete in time.

#### Search Method

You can specify the search method for estimating model parameters using the SearchMethod field of the Algorithm property. Two categories of methods are available for nonlinear grey-box modeling. The first category consists of minimization schemes based on line search, including Gauss-Newton type methods, steepest-descent methods, and Levenberg-Marquardt methods. The second is the Trust-Region Reflective Newton method of nonlinear least-squares (lsqnonlin), where the cost is the sum of squares of errors between the measured and simulated outputs; this method requires Optimization Toolbox™ software. When the parameter bounds differ from the default +/- Inf, this search method handles the bounds better than the schemes based on a line search. However, unlike the line-search-based methods, lsqnonlin only works with Criterion='Trace'.

By default, SearchMethod is set to 'Auto', which automatically selects a method from the available minimizers. If the Optimization Toolbox product is installed, SearchMethod is set to 'lsqnonlin'. Otherwise, SearchMethod is a combination of line-search based schemes.
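Putting the pieces of this section together, a minimal sketch of the workflow might look as follows. The file name, orders, and initial values are borrowed from the DC-motor example later on this page, and `data` stands for any iddata set with compatible dimensions; all property names are the ones documented above.

```
% Construct an idnlgrey object and adjust Algorithm fields before estimation.
m = idnlgrey('dcmotor_m', [2 1 2], [1; 0.28], [0; 0], 0);

m.Algorithm.SimulationOptions.Solver = 'ode45';  % continuous-time solver
m.Algorithm.SearchMethod = 'lsqnonlin';          % requires Optimization Toolbox

% Estimate, passing algorithm properties as property-value pairs.
m = pem(data, m, 'MaxIter', 20, 'Tolerance', 1e-5);
```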
You can specify the method for calculating gradients using the GradientOptions field of the Algorithm property. Gradients are the derivatives of errors with respect to unknown parameters and initial states. Gradients are calculated by numerically perturbing unknown quantities and measuring their effects on the simulation error. Options for gradient computation include the choice of the differencing scheme (forward, backward or central), the size of the minimum perturbation of the unknown quantities, and whether the gradients are calculated simultaneously or individually.

#### Example – Specifying Algorithm Properties

You can specify the Algorithm fields directly in the estimation syntax, as property-value pairs. For example, you can specify the following properties as part of the pem syntax:

```
m = pem(data,init_model,'Search','gn',...
        'MaxIter',5,...
        'Display','On')
```

### Represent Nonlinear Dynamics Using MATLAB File for Grey-Box Estimation

This example shows how to construct, estimate and analyze nonlinear grey-box models.

Nonlinear grey-box (idnlgrey) models are suitable for estimating parameters of systems that are described by nonlinear state-space structures in continuous or discrete time. You can use both idgrey (linear grey-box model) and idnlgrey objects to model linear systems. However, you can only use idnlgrey to represent nonlinear dynamics. To learn about linear grey-box modeling using idgrey, see "Building Structured and User-Defined Models Using System Identification Toolbox™".

In this example, you model the dynamics of a linear DC motor using the idnlgrey object.

Figure 1: Schematic diagram of a DC-motor.

If you ignore the disturbances and choose y(1) as the angular position [rad] and y(2) as the angular velocity [rad/s] of the motor, you can set up a linear state-space structure of the following form (see Ljung, L. System Identification: Theory for the User, Upper Saddle River, NJ, Prentice-Hall PTR, 1999, 2nd ed., pp. 95-97 for the derivation):

```
 d          | 0    1     |        |   0   |
 -- x(t) =  |            | x(t) + |       | u(t)
 dt         | 0  -1/tau  |        | k/tau |

            | 1  0 |
     y(t) = |      | x(t)
            | 0  1 |
```

tau is the time constant of the motor in [s] and k is the static gain from the input to the angular velocity in [rad/(V*s)]. See Ljung (1999) for how tau and k relate to the physical parameters of the motor.

1. Load the DC motor data.

```
load(fullfile(matlabroot, 'toolbox', 'ident', 'iddemos', 'data', 'dcmotordata'));
```

2. Represent the estimation data as an iddata object.

```
z = iddata(y, u, 0.1, 'Name', 'DC-motor');
```

3. Specify input and output signal names, start time and time units.

```
z.InputName = 'Voltage';
z.InputUnit = 'V';
z.OutputName = {'Angular position', 'Angular velocity'};
z.Tstart = 0;
z.TimeUnit = 's';
```

4. Plot the data. The data is shown in two plot windows.

```
figure('Name', [z.Name ': Voltage input -> Angular position output']);
plot(z(:, 1, 1)); % Plot first input-output pair (Voltage -> Angular position).
figure('Name', [z.Name ': Voltage input -> Angular velocity output']);
plot(z(:, 2, 1)); % Plot second input-output pair (Voltage -> Angular velocity).
```

Figure 2: Input-output data from a DC-motor.

Linear Modeling of the DC-Motor

1. Represent the DC motor structure in a function. In this example, you use a MATLAB® file, but you can also use C MEX-files (to gain computational speed), P-files or function handles. For more information, see "Creating IDNLGREY Model Files". The DC-motor function is called dcmotor_m.m and is shown below.
```
function [dx, y] = dcmotor_m(t, x, u, tau, k, varargin)

% Output equations.
y = [x(1); ... % Angular position.
     x(2)  ... % Angular velocity.
    ];

% State equations.
dx = [x(2);                       ... % Angular velocity.
      -(1/tau)*x(2)+(k/tau)*u(1)  ... % Angular acceleration.
     ];
```

The file must always be structured to return the following:

Output arguments:

• dx is the vector of state derivatives in the continuous-time case, and state update values in the discrete-time case.
• y is the output equation

Input arguments:

• The first three input arguments must be: t (time), x (state vector, [] for static systems), u (input vector, [] for time-series).
• An ordered list of parameters follows. The parameters can be scalars, column vectors, or 2-dimensional matrices.
• varargin for the auxiliary input arguments

2. Represent the DC motor dynamics using an idnlgrey object. The model describes how the inputs generate the outputs using the state equation(s).

```
FileName = 'dcmotor_m';      % File describing the model structure.
Order = [2 1 2];             % Model orders [ny nu nx].
Parameters = [1; 0.28];      % Initial parameters. Np = 2.
InitialStates = [0; 0];      % Initial initial states.
Ts = 0;                      % Time-continuous system.
nlgr = idnlgrey(FileName, Order, Parameters, InitialStates, Ts, ...
                'Name', 'DC-motor');
```

In practice, there are disturbances that affect the outputs. An idnlgrey model does not explicitly model the disturbances, but assumes that these are just added to the output(s). Thus, idnlgrey models are equivalent to Output-Error (OE) models. Without a noise model, past outputs do not influence prediction of future outputs, which means that predicted outputs for any prediction horizon k coincide with simulated outputs.

3. Specify input and output names, and units.

```
set(nlgr, 'InputName', 'Voltage', 'InputUnit', 'V', ...
          'OutputName', {'Angular position', 'Angular velocity'}, ...
          'TimeUnit', 's');
```

4. Specify names and units of the initial states and parameters.

```
nlgr = setinit(nlgr, 'Name', {'Angular position' 'Angular velocity'});
nlgr = setpar(nlgr, 'Name', {'Time-constant' 'Static gain'});
nlgr = setpar(nlgr, 'Unit', {'s' 'rad/(V*s)'});
```

You can also use setinit and setpar to assign values, minima, maxima, and estimation status for all initial states or parameters simultaneously.

5. View the initial model.

a. Get basic information about the model. The DC-motor has 2 (initial) states and 2 model parameters.

```
size(nlgr)
```

```
Nonlinear grey-box model with 2 outputs, 1 inputs, 2 states and 2 parameters (2 free).
```

b. View the initial states and parameters. Both the initial states and parameters are structure arrays. The fields specify the properties of an individual initial state or parameter. Type idprops idnlgrey InitialStates and idprops idnlgrey Parameters for more information.

```
nlgr.InitialStates(1)
nlgr.Parameters(2)
```

```
ans =
       Name: 'Angular position'
      Value: 0
    Minimum: -Inf
    Maximum: Inf
      Fixed: 1

ans =
       Name: 'Static gain'
      Value: 0.2800
    Minimum: -Inf
    Maximum: Inf
      Fixed: 0
```

c. Retrieve information for all initial states or model parameters in one call. For example, obtain information on initial states that are fixed (not estimated) and the minima of all model parameters.

```
getinit(nlgr, 'Fixed')
getpar(nlgr, 'Min')
```

```
ans =
    [1]
    [1]

ans =
    [-Inf]
    [-Inf]
```

d.
Obtain basic information about the object:

```
nlgr
```

```
nlgr =
Continuous-time nonlinear grey-box model defined by 'dcmotor_m' (MATLAB file):
   dx/dt = F(t, u(t), x(t), p1, p2)
    y(t) = H(t, u(t), x(t), p1, p2) + e(t)
 with 1 input, 2 states, 2 outputs, and 2 free parameters (out of 2).
```

Use get to obtain more information about the model properties. The idnlgrey object shares many properties of parametric linear model objects.

```
get(nlgr)
```

```
         FileName: 'dcmotor_m'
            Order: [1x1 struct]
       Parameters: [2x1 struct]
    InitialStates: [2x1 struct]
     FileArgument: {}
 CovarianceMatrix: 'estimate'
   EstimationInfo: [1x1 struct]
     TimeVariable: 't'
    NoiseVariance: [2x2 double]
        Algorithm: [1x1 struct]
               Ts: 0
         TimeUnit: 'seconds'
        InputName: {'Voltage'}
        InputUnit: {'V'}
       InputGroup: [1x1 struct]
       OutputName: {2x1 cell}
       OutputUnit: {2x1 cell}
      OutputGroup: [1x1 struct]
             Name: 'DC-motor'
            Notes: {}
         UserData: []
```

Performance Evaluation of the Initial DC-Motor Model

Before estimating the parameters tau and k, simulate the output of the system with the parameter guesses using the default differential equation solver (a Runge-Kutta 45 solver with adaptive step length adjustment).

1. Set the absolute and relative error tolerances to small values (1e-6 and 1e-5, respectively).

```
nlgr.Algorithm.SimulationOptions.AbsTol = 1e-6;
nlgr.Algorithm.SimulationOptions.RelTol = 1e-5;
```

2. Compare the simulated output with the measured data. compare displays both measured and simulated outputs of one or more models, whereas predict, called with the same input arguments, displays the simulated outputs. The simulated and measured outputs are shown in a plot window.

```
compare(z, nlgr);
```

Figure 3: Comparison between measured outputs and the simulated outputs of the initial DC-motor model.

Parameter Estimation

Estimate the parameters and initial states using pem (Prediction-Error identification Method).

```
nlgr = setinit(nlgr, 'Fixed', {false false}); % Estimate the initial state.
nlgr = pem(z, nlgr, 'Display', 'Full');
```

Performance Evaluation of the Estimated DC-Motor Model

1. Review the information about the estimation process. This information is stored in the EstimationInfo property of the idnlgrey object. The property also contains information about how the model was estimated, such as solver and search method, data set, and why the estimation was terminated.

```
nlgr.EstimationInfo
```

```
ans =
              Status: 'Estimated model (PEM)'
              Method: 'Solver: ode45; Search: lsqnonlin'
             LossFcn: 0.0011
                 FPE: 0.0011
            DataName: 'DC-motor'
          DataLength: 400
              DataTs: {[0.1000]}
     DataInterSample: {'zoh'}
             WhyStop: 'Change in cost was less than the specified tolerance'
          UpdateNorm: []
     LastImprovement: []
          Iterations: 5
        InitialGuess: [1x1 struct]
             Warning: ''
      EstimationTime: 15.0300
```

2. Evaluate the model quality by comparing simulated and measured outputs. The fits are 98% and 84%, which indicate that the estimated model captures the dynamics of the DC motor well.

```
compare(z, nlgr);
```

Figure 4: Comparison between measured outputs and the simulated outputs of the estimated IDNLGREY DC-motor model.

3. Compare the performance of the idnlgrey model with a second-order ARX model.

```
na = [2 2; 2 2];
nb = [2; 2];
nk = [1; 1];
dcarx = arx(z, [na nb nk]);
compare(z, nlgr, dcarx);
```

Figure 5: Comparison between measured outputs and the simulated outputs of the estimated IDNLGREY and ARX DC-motor models.

4. Check the prediction errors. The prediction errors obtained are small and are centered around zero (non-biased).
```
pe(z, nlgr);
```

Figure 6: Prediction errors obtained with the estimated IDNLGREY DC-motor model.

5. Check the residuals ("leftovers"). Residuals indicate what is left unexplained by the model and are small for good model quality. Execute the following two lines of code to generate the residual plot. Press any key to advance from one plot to another.

```
figure('Name', [nlgr.Name ': residuals of estimated model']);
resid(z, nlgr);
```

Figure 7: Residuals obtained with the estimated IDNLGREY DC-motor model.

6. Plot the step response. A unit input step results in an angular position showing a ramp-type behavior and an angular velocity that stabilizes at a constant level.

```
figure('Name', [nlgr.Name ': step response of estimated model']);
step(nlgr);
```

Figure 8: Step response with the estimated IDNLGREY DC-motor model.

7. Examine the model covariance. You can assess the quality of the estimated model to some extent by looking at the estimated covariance matrix and the estimated noise variance. A "small" value of the (i, i) diagonal element of the covariance matrix indicates that the i-th model parameter is important for explaining the system dynamics when using the chosen model structure. Small noise variance (covariance for multi-output systems) elements are also a good indication that the model captures the estimation data well.

```
nlgr.CovarianceMatrix
nlgr.NoiseVariance
```

```
ans =
   1.0e-04 *
    0.1521    0.0015
    0.0015    0.0007

ans =
    0.0099   -0.0004
   -0.0004    0.1094
```

For more information about the estimated model, use present to display the initial states and estimated parameter values, and estimated uncertainty (standard deviation) for the parameters.

```
present(nlgr);
```

```
nlgr =
Continuous-time nonlinear grey-box model defined by 'dcmotor_m' (MATLAB file):
   dx/dt = F(t, u(t), x(t), p1, p2)
    y(t) = H(t, u(t), x(t), p1, p2) + e(t)
 with 1 input, 2 states, 2 outputs, and 2 free parameters (out of 2).

 Input:
    u(1)  Voltage(t) [V]
 States:                               initial value
    x(1)  Angular position(t) [rad]    xinit@exp1   0.0302675 (est) in [-Inf, Inf]
    x(2)  Angular velocity(t) [rad/s]  xinit@exp1   -0.133777 (est) in [-Inf, Inf]
 Outputs:
 Parameters:                           value       standard dev
    p1  Time-constant [s]              0.243649    0.00390034  (est) in [-Inf, Inf]
    p2  Static gain [rad/(V*s)]        0.249644    0.000272168 (est) in [-Inf, Inf]

The model was estimated from the data set 'DC-motor', which contains 400 data samples.
Loss function 0.00107459 and Akaike's FPE 0.00108534
Created: 04-Sep-2014 04:14:13
```

Conclusions

This example illustrates the basic tools for performing nonlinear grey-box modeling. See the other nonlinear grey-box examples to learn about:

• Using nonlinear grey-box models in more advanced modeling situations, such as building nonlinear continuous- and discrete-time, time-series and static models.
• Writing and using C MEX model-files.
• Handling nonscalar parameters.
• Impact of certain algorithm choices.

For more information on identification of dynamic systems with System Identification Toolbox, visit the System Identification Toolbox product information page.
https://www.usgs.gov/policies-and-notices
# Policies and Notices

This page describes the principal policies and other important notices that govern information posted on USGS websites.
https://www.vedantu.com/question-answer/given-the-linear-equation-2x-+-3y-8-0-write-class-10-maths-cbse-5ed55e3fc86a200e265ae57d
# Question

Given the linear equation $2x + 3y - 8 = 0$, write another linear equation in two variables such that the geometrical representation of the pair so formed is:

$\left( i \right)$ Intersecting lines
$\left( {ii} \right)$ Parallel lines
$\left( {iii} \right)$ Coincident lines

Hint: Directly use the slope criteria for intersecting, parallel and coincident lines in coordinate geometry to solve the question.

$\left( i \right)$ Intersecting lines

For intersecting lines, the linear equations should satisfy the condition $\dfrac{{{a_1}}}{{{a_2}}} \ne \dfrac{{{b_1}}}{{{b_2}}}$.

To get another equation meeting this criterion, multiply the coefficient of $x$ by one number and the coefficient of $y$ by a different number. Multiplying the coefficient of $x$ by $2$ and the coefficient of $y$ by $3$ gives a possible equation:

$4x + 9y - 8 = 0$

$\left( {ii} \right)$ Parallel lines

For parallel lines, the linear equations should satisfy the condition $\dfrac{{{a_1}}}{{{a_2}}} = \dfrac{{{b_1}}}{{{b_2}}} \ne \dfrac{{{c_1}}}{{{c_2}}}$.

To get another equation meeting this criterion, multiply the coefficients of $x$ and $y$ by the same number and the constant term by a different number. Multiplying the coefficients of $x$ and $y$ by $2$ and the constant term by $3$ gives a possible equation:

$4x + 6y - 24 = 0$

$\left( {iii} \right)$ Coincident lines

For coincident lines, the linear equations should satisfy the condition $\dfrac{{{a_1}}}{{{a_2}}} = \dfrac{{{b_1}}}{{{b_2}}} = \dfrac{{{c_1}}}{{{c_2}}}$.

To get another equation meeting this criterion, multiply the whole equation by any number. Multiplying the whole equation by $2$ gives a possible equation:

$4x + 6y - 16 = 0$

Note: If two linear equations have the same slope but are not the same line, the lines are parallel and have no solution. If two linear equations represent coincident lines, the lines are the same and have infinitely many solutions. In the case of intersecting lines, there is exactly one solution.
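As a quick check that the three answers really satisfy their respective ratio criteria (each compared with $2x + 3y - 8 = 0$): for the intersecting pair, $\dfrac{2}{4} = \dfrac{1}{2} \ne \dfrac{3}{9} = \dfrac{1}{3}$; for the parallel pair, $\dfrac{2}{4} = \dfrac{3}{6} = \dfrac{1}{2} \ne \dfrac{-8}{-24} = \dfrac{1}{3}$; and for the coincident pair, $\dfrac{2}{4} = \dfrac{3}{6} = \dfrac{-8}{-16} = \dfrac{1}{2}$.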
https://answers.opencv.org/answers/92129/revisions/
# Revision history

I was able to solve the issue like this. After

```
MapAffine* mapAff = dynamic_cast<MapAffine*>(mapPtr.get());
```

I create a MapAffine object using the parameterised constructor, where I multiply the shift component by the integer factor:

```
MapAffine mapAff2 = cv::reg::MapAffine(mapAff->getLinTr(), alpha * mapAff->getShift()); // alpha is the integer factor
```

Then I call inverseWarp() using mapAff2:

```
mapAff2.inverseWarp(source, destination);
```

If there's a more efficient way of doing it, please let me know. Thanks
https://stats.stackexchange.com/questions/552230/expected-value-of-the-ridge-regression-estimator/552232#552232
# Expected value of the ridge regression estimator

I am trying to understand this derivation (shown as an image in the original post). I think everything except the last equality is fairly simple, but I do not understand the last equality. Is there an error here? I appreciate any help.

There's no error. Start with $$X^\top X = (X^\top X+\lambda I) - \lambda I$$; premultiply both sides by $$(X^\top X+\lambda I)^{-1}$$. Simplify. Postmultiply by $$\beta$$.
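Writing the hint out in full (a short expansion, assuming the usual fixed-design setup in which $$\hat\beta_{\text{ridge}} = (X^\top X+\lambda I)^{-1}X^\top y$$ and $$\mathbb{E}[y]=X\beta$$, so that $$\mathbb{E}[\hat\beta_{\text{ridge}}]=(X^\top X+\lambda I)^{-1}X^\top X\,\beta$$):

$$(X^\top X+\lambda I)^{-1}X^\top X = (X^\top X+\lambda I)^{-1}\left[(X^\top X+\lambda I)-\lambda I\right] = I - \lambda(X^\top X+\lambda I)^{-1}.$$

Postmultiplying by $$\beta$$ gives

$$\mathbb{E}[\hat\beta_{\text{ridge}}] = \beta - \lambda(X^\top X+\lambda I)^{-1}\beta,$$

which shows the estimator is biased toward zero whenever $$\lambda > 0$$.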
https://math.stackexchange.com/questions/1572866/proof-using-formal-definition-infinite-limit
# Proof using formal definition: Infinite limit

I was wondering how to get the proof of this limit: $$\lim\limits_{x\to -\infty}\dfrac{{x^2} - x + 1}{x + 4} = -\infty$$ The problem is that I don't know what to do to find the appropriate values that make the implication of the formal definition valid. I would appreciate it if somebody could help me.

• A possibility: as Kay K. notes, $$\frac{{x^2} - x + 1}{x + 4} = x-5+\frac{21}{x+4}$$ so it is sufficient to show that $\frac{21}{x+4}$ goes to $0$ using the $(\varepsilon,\delta)$-definition of limits (proving that $x-5 \to -\infty$ by the same arguments will be trivial). – Clement C. Dec 12 '15 at 23:54

Expanding what you know (as one can tell from your comment below Kay K.'s deleted answer) into a complete argument is not difficult, in fact. If $x < -4$, then $\frac{21}{x+4} < 0$, so $$\frac{x^{2}-x+1}{x+4} = x-5 + \frac{21}{x+4} < x-5;$$ given any $M$, we have $x-5 < M$ if in addition $$x < M + 5.$$

You want to find $N(M)<0$ (a function in terms of $M<0$) such that $$x<N(M)\implies \frac{x^2-x+1}{x+4}<M$$ Let $N(M)\le -4$. Then $x<-4$ and $$\frac{x^2-x+1}{x+4}<M\iff x^2-x+1>M(x+4)$$ $$\iff x^2-x(M+1)+(1-4M)>0$$ If $M\le -9-2\sqrt{21}$, then $\Delta=M^2+18M-3\ge 0$, and we can let $$N(M)=\min\left\{-4,\frac{M+1-\sqrt{M^2+18M-3}}{2}\right\}$$ If $M\in(-9-2\sqrt{21},0)$, then let $$N(M)=\min\left\{-4,\frac{k+1-\sqrt{k^2+18k-3}}{2}\right\}$$ for any $k\le-9-2\sqrt{21}$; e.g. you can let $k=-19$: $$N(M)=\min\left\{-4,-11\right\}=-11$$ Answer: you can let $$N(M)=\begin{cases}\min\left\{-4,\frac{M+1-\sqrt{M^2+18M-3}}{2}\right\}, && M\le -9-2\sqrt{21}\\-11, && M\in(-9-2\sqrt{21},0)\end{cases}$$

Note that $$\frac{x^2-x+1}{x+4}=\frac{(x+4)(x-5)+21}{x+4}.$$ You need to show that given $M<0$, there exists $N<0$ such that $\frac{x^2-x+1}{x+4}<M$ for all $x<N$. Let $N=\min\{M,-4\}.$ Then if $x<N$, \begin{align*} \frac{x^2-x+1}{x+4}&=\frac{(x+4)(x-5)+21}{x+4}\\ &=(x-5) + \frac{21}{x+4}\\ &<(N-5) + \frac{21}{x+4}\\ &<M+\frac{21}{x+4}\\ &<M. \end{align*} We can drop the $\frac{21}{x+4}$ because $x<N \leq -4$, so $x+4$ is negative, so $\frac{21}{x+4}<0$.

• How did you choose the value of N? – egarro Dec 13 '15 at 0:03
• Why not make $N \le M - \frac{21}{x+4} + 5$? – egarro Dec 13 '15 at 0:12
• It is x+4, not x-4; look at the original fraction. – egarro Dec 13 '15 at 0:25
• My bad about the $x+4$; I've corrected it. You can't choose $N \leq M-\frac{21}{x+4}$ or anything involving $x$ because later you need to say "let $x<N$", and that might be inconsistent with how you've just specified $N$. The way I chose $N$ was to make the inequalities work out - we know $x-5<N-5$, so we need to make sure $N-5<M$. (So in fact in my edit, I could've made $N=M+5$, but it works as is.) – kccu Dec 13 '15 at 2:32
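As a quick numerical sanity check of the accepted construction $N=\min\{M,-4\}$: take $M=-100$, so $N=-100$, and try $x=-200<N$:

$$\frac{(-200)^2-(-200)+1}{-200+4}=\frac{40201}{-196}\approx -205.1 < -100.$$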
https://www.wptricks.com/question/row-actions-taxonomy_row_actions-for-all-custom-taxonomies/
# row actions – {$taxonomy}_row_actions for all custom taxonomies

Question: I want to add a row action to ALL custom taxonomies. This is for use in a plugin, so I want this to apply to whatever taxonomies exist rather than adding a filter for each taxonomy manually like below. I can't figure out a simple way to do this.

```
add_filter( 'category_row_actions', 'category_row_actions', 10, 2 );
```

Answer: You could use the tag_row_actions hook (which was deprecated in WP v3.0.0, but then restored in WP v5.4.2) to target any taxonomies (Tags/post_tag, Categories/category, a_custom_taxonomy, etc.), so just replace the category_row_actions with tag_row_actions:

```
add_filter( 'tag_row_actions', 'category_row_actions', 10, 2 );
```
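For the "apply to whatever taxonomies exist" requirement specifically, another possible sketch is to loop over get_taxonomies() once the taxonomies are registered and attach the per-taxonomy hook from the question; the callback name and the action it adds below are placeholders, not part of any real API:

```
add_action( 'admin_init', function () {
	foreach ( get_taxonomies() as $taxonomy ) {
		add_filter( "{$taxonomy}_row_actions", 'my_plugin_term_row_actions', 10, 2 );
	}
} );

function my_plugin_term_row_actions( $actions, $term ) {
	// Placeholder action: the link text and admin page URL are illustrative only.
	$actions['my_plugin_action'] = sprintf(
		'<a href="%s">My action</a>',
		esc_url( admin_url( 'admin.php?page=my-plugin&term_id=' . $term->term_id ) )
	);
	return $actions;
}
```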
https://gaim.umbc.edu/2011/07/26/on-error/
# On Error

Doing my best impersonation of someone who blogs with more regularity than I really do…

I glossed over (flubbed?) the error analysis a little in my last post, and should really do a better job. I'll look at CLEAN/LEAN mapping, but the analysis methods are useful in lots of situations where you compute something from a texture. To keep things simple, I'll use a simplified form of the (C)LEAN variance computation:

$V = M - B^2$

The error in this expression is especially important in (C)LEAN mapping since it determines the maximum specular power you can use, and how shiny your objects can be. For specular power s, 1/s has to be bigger than the maximum error in V, or you'll get some ugly artifacts.

M and B come from a texture, so have inherent error of $\epsilon_M$ and $\epsilon_B$ due to the texture precision. The error in each will be 1/2 of the texel precision. For example, with texel values from 0 to 255, a raw texel of 2 could represent a true value anywhere from 1.5 to 2.5, all of which are within .5 of the texel value. In general, we'll scale and bias to use as much of the texture range as we can. The final error for an 8-bit texture then is range/512. For data that ranges from 0 to 1, the range is 1 and the representation error is 1/512; while for data that ranges from -1 to 1, the range is 2, so the representation error is 2/512 = 1/256.

The error in each parameter propagates into the final result scaled by the partial derivative. $\partial{V}/\partial{M}$ is 1, so error due to M is simple:

$\epsilon_{VM}=\epsilon_M$

The error due to B is a little more complicated, since $\partial{V}/\partial{B}$ is 2B. We're interested in the magnitude of the error (since we don't even know if $\epsilon_B$ was positive or negative to start with), and mostly interested in its largest possible value. That gives

$\epsilon_{VB}=2\,\max(\left|B\right|)\,\epsilon_B$

Generally, you're interested in whichever of these errors is biggest. The actual error is dependent on the maximum value of B, and how big the texel precision ends up being after whatever scale is used to map M and B into the texture range. So, for a couple of options:

| | B range -1 to 1 | B range -2 to 2 | B range -1/2 to 1/2 |
|---|---|---|---|
| Max bump slope | 45° | 63.4° | 26.6° |
| $\epsilon_B$ | 1/256 | 1/128 | 1/512 |
| $\epsilon_{VB}$ | 2·1·(1/256) = 1/128 | 2·2·(1/128) = 1/32 | 2·(1/2)·(1/512) = 1/512 |
| M range | 0 to 1 | 0 to 4 | 0 to 1/4 |
| $\epsilon_{VM}=\epsilon_M$ | 1/512 | 1/128 | 1/2048 |
| $\epsilon_V$ | 1/128 | 1/32 | 1/512 |
| $s_{max}$ | 128 | 32 | 512 |

We can make this all a little simpler if we recognize that, at least with the simple range-mapping scheme used here, $\epsilon_B$ and $\epsilon_M$ are also dependent on $B_{max}$.

$\begin{array}{ll} \epsilon_{VM} &= B_{max}^2/512\\ \epsilon_{VB} &= 4 B_{max}^2/512 = B_{max}^2/128\\ s_{max} &= 128/B_{max}^2 \end{array}$

So, this says the error changes with the square of the max normal-map slope, and that the precision of B is always the limiting factor. In fact, if there were an appropriate texture format, M could be stored with two fewer bits than B. For 16-bit textures, rather than $2^{-9}$ for the texture precision, you've got $2^{-17}$, giving a maximum safe specular power of $2^{15}=32768$ for bumps clamped to a slope of 1. There's no need for the slope limit to be a power of 2, so you could fit it directly to the data, though it's often better to be able to communicate a firm rule of thumb to your artists (spec powers less than x) rather than some complex relationship (steeper normal maps can't be as shiny according to some fancy formula — yeah, that'll go over well).
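To make the rule of thumb concrete, here is a tiny standalone check (plain C, no engine code; it just evaluates the formulas above for the three slope limits in the table):

```
#include <stdio.h>

int main(void) {
    /* B is stored in [-Bmax, Bmax], so its 8-bit representation error
       is (2*Bmax)/512; the error in V due to B is |dV/dB| * eps_B. */
    for (double Bmax = 0.5; Bmax <= 2.0; Bmax *= 2.0) {
        double eps_B  = 2.0 * Bmax / 512.0;
        double eps_VB = 2.0 * Bmax * eps_B;
        printf("Bmax=%.2f  eps_VB=%g  s_max=%g\n",
               Bmax, eps_VB, 128.0 / (Bmax * Bmax));
    }
    return 0;
}
```

The printed s_max values (512, 128, 32) match the table columns above.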
http://mathhelpforum.com/geometry/7106-could-really-use-help-geometery-hw-please.html
# Math Help - Could Really Use Help with Geometry HW, Please!

1. ## Could Really Use Help with Geometry HW, Please!

Move the red points to fill in the answers to 12-16 (the diagram might take a while to load at first).

For the other problems, it looks like you don't know what a linear pair is. A linear pair is two angles sharing a side and a line (such as angles 3 and 4 in the diagram); they sum to 180. Can you finish the rest of the problems?

3. ## Reply To First Post

Originally Posted by Quick: "Move the red points to fill in the answers to 12-16... Can you finish the rest of the problems?"

I still don't get it. How can I find the answer when I know nothing? I would have to find it in terms of a variable. I don't understand. And the triangle was really cool.

4. Originally Posted by OnMyWayToBeAMathProffesor: "I still don't get it..."

I think your main problem is linear pairs. Angles AXB and BXC are a linear pair; after moving the points around, what can you find out about linear pairs? Tell me if you figure it out, and please, if you quote this, remove the interactive diagram code from the quote.

5. Thanks for the help. This is what I got; are these right? We might have a pop quiz on this.

6. I see some errors in your table for the parallel lines diagram, such as KON equals 35 degrees. Do you know the parallel postulates?

7. ## Reply To Quick

I am somewhat familiar with them. Except for that, is everything right? Thanks.

8. Originally Posted by OnMyWayToBeAMathProffesor: "I am somewhat familiar with them..."

I see a few errors in 12-16. Notice that (m means "measure of angle"): $m1+m2+m3=180$ Thus: $m1+m2=180-m3$ And that: $m3+m4=180$ Therefore: $m4=180-m3$ Therefore: $m1+m2=m4$

I would use that info to check your answers, I DON'T RECOMMEND USING IT TO FIND THE ANSWER! Anyway, I'm done for the night so you'll have to check it over yourself... Ciao

9. ## Thanks A Lot

Thanks a lot, I really appreciated your help. You're very smart for a 15-year-old. Goodnight :D
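A worked instance of the linear-pair fact used above (the value 110 is just an example, not from the worksheet): if $m3 = 110$, then because angles 3 and 4 form a linear pair, $m3 + m4 = 180$, so $m4 = 180 - 110 = 70$.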
http://www.talkstats.com/tags/boxplot/
boxplot 1. Trouble with creating boxplots Hello everyone! I want to visually inspect whether I have a similarly shaped distribution in a question having a scale dependent variable and a 3-group categorical (nominal) independent variable. For that reason, I am trying to create a boxplot on spss v.23 but I am presented with the graph... 2. Outliers in categorical data? Hello! I am working on a pre-analysis plan and have to specify what I am going to do with outliers. I have two categorical variables (5 levels and 2 levels) and I will be performing a chi-square test for independence. I thought of using a boxplot to detect outliers, but now I am not sure... 3. Boxplots: Display variable "name" instead of case numbers? Hi everyone, I am still new to SPSS and slowly getting to grips with it. At the moment, I am trying to create box-plots to visually identify extreme values in my distribution. I have managed to get the box-plots how I want them, except for one detail: It is currently identifying the extreme... 4. Add dotcharts next to boxplots I've got all the box plots: boxplot( Daten$Gewicht~interaction(Daten$Dosis,Daten$Geschlecht, drop=TRUE), ylab="relative Nierengewichte", xlab="Dosisgruppen der beiden Geschlechter") and the points overlapping the box plots: points( Daten$Gewicht~interaction(Daten$Dosis,Daten$Geschlecht... 5. how to add quantity information to boxplot Hello I am using boxplots to show the distribution of some data. However, some comments says the boxplot can not show the concentration quantity of the data. (For example, suppose I have data of body length ranging from 150- 169 cm of a group of girl. The boxplot shows only the median... 6. Adding a trend line to a series of box plots If this is so obvious everyone should know I apologize for the inconvenience. I would like to add a color coded trend line to a series of box plots (graph attached).. Any help is much appreciated.. The code I have used for the graph is as follows: boxplot(total1$y~total1$x) The data... 7. Data visualization poll Hi, Originally, I thought this might be appropriate for the R thread, but I think this may have more general relevance. I'm trying to decide on a data visualization option, and my right and left sides of my brain are at war. In brief, I've modeled predator densities at two sites (in a... 8. Simple Box Plot question Hi all, I have a simple question that I need your help with. I have a box plot which shows the medians, 1st and 3rd quartiles, and min/max values. I was wondering if I can get the mean and standard deviation using this information Unfortunately I am not very statistically literate..please...
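Several of the threads above (overlaying points on boxplots, adding a trend line across them) come up often enough that a small base-R sketch may help; the data frame and column names here are made up for illustration:

```
# Overlay jittered raw points on boxplots and connect the group medians.
set.seed(1)
df <- data.frame(x = factor(rep(1:5, each = 20)), y = rnorm(100))
boxplot(y ~ x, data = df)
points(jitter(as.numeric(df$x), amount = 0.15), df$y,
       pch = 16, col = rgb(0, 0, 0, 0.3))
med <- tapply(df$y, df$x, median)
lines(seq_along(med), med, col = "red", lwd = 2)  # trend line through medians
```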
https://www.ssccglapex.com/the-average-of-4-positive-integers-is-59-the-highest-integer-is-83-and-the-lowest-integer-is-29-the-difference-between-the-remaining-two-integers-is-28-which-of-the-following-integers-is-higher-of/
The average of 4 positive integers is 59. The highest integer is 83 and the lowest integer is 29. The difference between the remaining two integers is 28. Which of the following integers is the higher of the remaining two integers?

A. 39
B. 48
C. 76
D. Cannot be determined

Sum of the four integers = 59 × 4 = 236. Let the required integers be x and x − 28.

$\begin{array}{l}\text{Then,}\\ \text{x + (x – 28) = 236 – (83 + 29)}\\ \text{⇒ 2x – 28 = 124}\\ \text{⇒ 2x = 152}\\ \text{⇒ x = 76}\\ \therefore\text{required integer = 76 (option C)}\end{array}$
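Checking the answer against all the given conditions: $83 + 29 + 76 + 48 = 236 = 59 \times 4$ and $76 - 48 = 28$, so the four integers average 59, the extremes are 83 and 29, and the higher of the remaining two is indeed 76.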
https://cdriver.netlify.app/post/lgc/
# Latent growth curves, state dependent error.

Latent growth curves are a nice, relatively straightforward model for estimating overall patterns of change from multiple, noisy, indicator variables. While the classic formulations of this model can be easily fit in most SEM packages, it provides a nice basis for understanding the differential equation formulation of systems, and also a good starting point for more complex model development not possible in the SEM framework – as a peek into these possibilities I'll also show a growth curve model where the measurement error depends on the latent variable, as would be typical of floor or ceiling effects. To show this I'll use ctsem. ctsem is R software for statistical modelling using hierarchical state space models, of discrete or continuous time formulations, with possible nonlinearities (i.e. state / time dependence) in the parameters. For a general quick start see https://cdriver.netlify.com/post/ctsem-quick-start/, and for more details see the current manual at https://github.com/cdriveraus/ctsem/raw/master/vignettes/hierarchicalmanual.pdf

# Data

Let's load ctsem (if you haven't installed it see the quick start post!), and generate some data from a simple linear latent growth model.

```
set.seed(3)
library(ctsem)

nsubjects <- 30
nobs <- 8 #number of obs
intercept <- rnorm(nsubjects, 3, 2) #random intercepts
slope <- rnorm(nsubjects, .3, .2) - intercept * .01 #random slopes with intercept correlation

dat <- data.frame(matrix(NA, nrow=nsubjects*nobs, ncol=3)) #empty dataframe
colnames(dat) <- c('id','time','eta1')

r <- 0
for(subi in 1:nsubjects){
  for(obsi in 1:nobs){
    r <- r+1 #current row
    dat$time[r] <- obsi + runif(1,-.5,.5) #observation timing variation
    dat$id[r] <- subi
    dat$eta1[r] <- intercept[subi] + dat$time[r] * slope[subi]
  }
}

dat$y1 <- dat$eta1 + rnorm(nrow(dat),0,.2) #observed variable with measurement error
dat$id <- factor(dat$id)
head(dat)
```

```
##   id     time     eta1       y1
## 1  1 1.216853 1.647280 1.711899
## 2  1 2.322188 2.166084 1.988314
## 3  1 3.320742 2.634769 2.713504
## 4  1 3.699945 2.812753 2.860061
## 5  1 5.101135 3.470421 3.384321
## 6  1 5.929836 3.859382 3.749796
```

```
library(ggplot2)
ggplot(dat, aes(y=y1, x=time, colour=id)) + geom_point() +
  theme_bw() + geom_line(aes(y=eta1))
```

# Model

The default model in ctsem is rather more flexible than linear growth, so we need to impose some restrictions on the dynamic system model, such that the change in the latent variable at any particular point does not depend on the current value of the latent variable, but only on the slope (or continuous time intercept) parameter. This continuous intercept parameter is fixed to zero by default, and instead measurement intercepts are estimated – in this case we need to switch this. Since growth curve models also assume that changes in the latent variable are deterministic, we also need to restrict the system noise (diffusion) parameters to zero. In terms of individual differences, when multiple subjects are specified, the default is to have random (subject specific) initial states and intercepts (with correlation between the two), which is just fine for our current model. Since we have variation in the observation timing (both within and between subjects in this case) we need to use the continuous time, differential equation format. For growth curve models, the discrete time forms are exactly equivalent to continuous time when the time between observations is 1, of whatever time unit is being used.
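Because of that equivalence, the analogous discrete-time specification is worth seeing once before the continuous-time model below. This is a sketch only, assuming the 'standt' parameters mirror the 'stanct' ones; in discrete time, a DRIFT (autoregression) of 1 plays the role the continuous-time DRIFT of 0 plays below:

```
dtmodel <- ctModel(type = 'standt', #discrete time setup
  DRIFT = 1, #autoregression of 1: state carries forward unchanged
  DIFFUSION = 0, #deterministic change, as in the continuous model
  CINT = 'slope', #constant change per time step
  MANIFESTMEANS = 0,
  manifestNames = 'y1', latentNames = 'eta1',
  LAMBDA = 1)
```

With time steps of 1 this should reproduce the growth-curve behaviour of the continuous-time specification that follows.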
model <- ctModel(type='stanct', # use 'standt' for a discrete time setup
  DRIFT=0,
  DIFFUSION=0,
  CINT='slope',
  MANIFESTMEANS=0,
  manifestNames='y1', latentNames='eta1',
  LAMBDA=1)

ctModelLatex(model) #requires latex install -- will prompt with instructions in any case

# Fit

Fit using optimization and maximum likelihood (defaults as of v2.2.1):

fit <- ctStanFit(dat, model, cores=2)

# Summarise / Visualise

Then we can use various summary and plotting functions:

ctModelLatex(fit) #requires latex install -- will prompt with instructions in any case

summary(fit)

ctKalman(fit, plot=TRUE, #predicted (conditioned on past time points) observation values.
  kalmanvec=c('y','yprior'),
  subjects=1:3, timestep=.1)

ctKalman(fit, plot=TRUE, #smoothed (conditioned on all time points) observation values.
  kalmanvec=c('y','ysmooth'), subjects=1:3, timestep=.1)

Note the differences between the first plot, using the Kalman filter predictions for each point, where the model simply extrapolates forwards in time and has to make sudden updates as new information arrives, and the smoothed estimates, which are conditional on all time points in the data – past, present, and future. In the first plot, it's possible to see the system slowly learning the intercept and slope parameters as more data arrives, and in the second, we see the corrections to the predictions based on the knowledge given by all the observations. These plots are based on the maximum likelihood estimate / posterior mean of the parameters.

# Multivariate

Let's look at a case with 2 latent processes, one of which has multiple indicators. First, we generate some new data:

dat$eta2 <- dat$eta1 - .1*dat$time + rnorm(nrow(dat), 0, .5)
dat$y2 <- dat$eta2 + rnorm(nrow(dat))
dat$y3 <- dat$eta2 + rnorm(nrow(dat), 0, .2) + 3

Our new model looks like:

model <- ctModel(type='stanct', # use 'standt' for a discrete time setup
  DRIFT=0, #change doesn't depend on latent state
  DIFFUSION=0, #no random change in latent state
  CINT=c('slope1','slope2'), #freely estimated slopes
  T0MEANS=c('int1','int2'),
  MANIFESTMEANS=c(0, 0, 'manintercept3||FALSE'), #manifest intercepts with 1 free param, no individual variation.
  manifestNames=c('y1','y2','y3'),
  latentNames=c('eta1','eta2'),
  LAMBDA=c( #vector input interpreted column wise
    1, 0,
    0, 1,
    0, 'lambda3')) #Factor loading matrix, now with free param

ctModelLatex(model, linearise = TRUE) #requires latex install -- will prompt with instructions in any case

In this case, the ctsem default of correlated individual differences for all intercept style parameters is more relaxed than we want (though there may be good reasons for allowing individual differences here!) and we have used the separator | notation to turn off individual variation on the manifest intercept parameter.

We fit the new model to the data...

fit <- ctStanFit(dat, model, optimize=TRUE, nopriors=TRUE, cores=2)

and take a look at our new results:

ctModelLatex(fit) #requires latex install -- will prompt with instructions in any case

ctKalman(fit, plot=TRUE, #smoothed (conditioned on all time points) predictions.
  kalmanvec=c('y','ysmooth'), subjects=1:2, timestep=.1)

# State dependent measurement error

OK, now let's consider something that regular SEM can't handle. What if our measurement instruments only work well for certain values of the latent variable? Modify the data so measurement error depends on the latent state, eta.
dat$y1 <- dat$eta1 + rnorm(nrow(dat), 0, log1p(exp(.8*dat$eta1 - 3))) #add some state dependent noise
plot(dat$eta1, log1p(exp(.8*dat$eta1 - 3))) #plot measurement error sd against latent

Include a dependency on eta1 in the MANIFESTVAR (measurement error) matrix, ensuring the result is always positive by using the log1p_exp (softplus) function, and making sure ctsem knows which are the free parameters in the complex parameter construction.

model <- ctModel(type='stanct',
  DRIFT=0,
  DIFFUSION=0,
  CINT='slope',
  MANIFESTMEANS=0,
  manifestNames='y1', latentNames='eta1',
  LAMBDA=1,
  MANIFESTVAR='log1p_exp(errorsd_intercept + errorsd_byeta1 * eta1)', #complex sd parameter
  PARS=c('errorsd_intercept', 'errorsd_byeta1')) #specify any free parameters within complex parameters

ctModelLatex(model) #requires latex install -- will prompt with instructions in any case

fit <- ctStanFit(dat, model)

Now we can see that when the process has lower values, we are more certain about the model predictions because of reduced measurement error.

k <- ctKalman(fit, plot=TRUE, #smoothed (conditioned on all time points) observation values.
  kalmanvec=c('y','ysmooth'), subjects=c(3,4))

##### Charles Driver

###### Research Scientist

I'm a quantitative psychologist interested in the dynamic systems perspective of human systems.
2022-05-26 05:14:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6362549066543579, "perplexity": 2545.640804628144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662601401.72/warc/CC-MAIN-20220526035036-20220526065036-00692.warc.gz"}
https://www.physicsforums.com/threads/does-differential-order-matter.526915/
# Does differential order matter?

1. Sep 3, 2011

### quietrain

Is this the same? Why are they the same? Does the order of differentiation not matter?

d/dxi (dyj / dxj) = d/dxj (dyj / dxi)

where LHS: differentiate yj w.r.t. xj first, then xi, while RHS: differentiate yj w.r.t. xi first, then xj.

thanks!

2. Sep 4, 2011

### HallsofIvy

As long as f and its first and second derivatives are continuous in some neighborhood of a point, the "mixed" derivatives $$\frac{\partial^2 f}{\partial x\,\partial y}$$ and $$\frac{\partial^2 f}{\partial y\,\partial x}$$ are equal at that point.

3. Sep 6, 2011

### quietrain

Is this a total differential function? Or is it just a normal differential property? I seem to be mixing everything up :(

4. Sep 6, 2011

### HallsofIvy

I have no idea what you are asking. I don't know what you mean by "total differential function" (I do know what a total differential is) or "normal differential property". There is no "total differential" in this problem. It is entirely a property of partial derivatives.

5. Sep 7, 2011

### quietrain

ah I see, thank you
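As a supplementary check of this symmetry of mixed partials (a sketch in Python with sympy; any sufficiently smooth function would do):

import sympy as sp

x, y = sp.symbols('x y')
f = sp.sin(x * y) + x**2 * y   # an arbitrary smooth function

fxy = sp.diff(f, x, y)  # differentiate w.r.t. x first, then y
fyx = sp.diff(f, y, x)  # differentiate w.r.t. y first, then x

print(sp.simplify(fxy - fyx))  # 0 -- the mixed partials agree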
2018-01-17 11:55:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.700541615486145, "perplexity": 4305.814960600478}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886895.18/warc/CC-MAIN-20180117102533-20180117122533-00555.warc.gz"}
http://maths.shelswell.org.uk/tag/minimum
# Minimum Connector

The minimum connector problem gives a way to join every vertex in a network so that the total weight of the edges used is minimised.

The towns in southern England are to be connected with a new fibre-optic cable system. The hub of the system is to be in London. What is the minimum length of cable needed to connect the towns?

There are two algorithms that we can use to solve this type of problem: Kruskal's algorithm and Prim's algorithm. We will look at each of these individually and see the benefits of Prim's algorithm for computational purposes.

# Kruskal's Algorithm

Kruskal's algorithm finds the minimum spanning tree for a network. It has the following steps:

1. Select the edge with the lowest weight that does not create a cycle. If there are two or more edges with the same weight, choose one arbitrarily.
2. Repeat step 1 until the graph is connected and a tree has been formed.

## Example

Finding the minimum spanning tree that follows the road network in southern England. Using Kruskal's algorithm the minimum spanning tree is generated as follows (each selected edge is coloured red).

The solution for the minimum spanning tree contains the edges {(Bristol, Swindon), (Swindon, Oxford), (Oxford, Reading), (Reading, London), (Reading, Southampton)}. The total weight (distance) for the minimum solution is $27 + 30 + 40 + 40 + 42 = 179$ miles.

# Prim's Algorithm

Prim's algorithm generates a minimum spanning tree for a network. It has the following steps:

1. Select any vertex. Connect the nearest vertex.
2. Find the vertex that is nearest to the current tree but not already connected, and connect that.
3. Repeat step 2 until all vertices are connected.

Often, assuming that you are doing this for the purposes of an exam, you will be told the starting vertex for step 1. If you find that there is more than one vertex equally close in step 2, choose one arbitrarily.

## Example

Finding the minimum spanning tree that follows the road network in southern England. Using Prim's algorithm the minimum spanning tree is generated as follows (each selected edge and node is coloured red).

The solution contains the following edges {(Bristol, Swindon), (Swindon, Oxford), (Oxford, Reading), (Reading, London), (Reading, Southampton)} and the total weight (distance) for the minimum solution is $27 + 30 + 40 + 40 + 42 = 179$ miles.

## Other points to note

Prim's algorithm also has a table-based form that can easily be applied to matrices or distance tables. This means that it is particularly suited to a computerised solution. Due to the number of weight comparisons needed at each iteration, this form of Prim's algorithm is $\text{O}(n^3)$.

# Prim's Algorithm – table form

Prim's algorithm is also suitable for use on distance tables or matrices, or the equivalent for the problem. This is useful for large problems where drawing the network diagram would be hard or time-consuming. The fact that tables can be used makes the algorithm more suitable for automation than Kruskal's algorithm. The reason for this is that the data would have to be sorted to be used with Kruskal's algorithm. With Prim's algorithm, however, it is only the minimum value that is of interest, so no sorting is normally necessary.

We will look again at our question that requires a minimum spanning tree for the network of towns in the south of England using main road connections. The network diagram is as shown in figure 1.
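Since the tabular form is essentially what a program would run, here is a minimal sketch of matrix-based Prim in Python (a generic illustration with placeholder data, not the figure's actual distances; it assumes the graph is connected):

def prim_mst(dist):
    """Prim's algorithm on a distance matrix.
    dist[i][j] is the edge weight, or None where there is no edge."""
    n = len(dist)
    in_tree = {0}            # start from an arbitrary vertex
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:    # scan edges leaving the current tree
            for j in range(n):
                if j not in in_tree and dist[i][j] is not None:
                    if best is None or dist[i][j] < dist[best[0]][best[1]]:
                        best = (i, j)
        edges.append(best)   # keep the cheapest edge found
        in_tree.add(best[1])
    return edges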
The network shown in Figure 1 can be represented by the adjacency matrix shown in Table 1.

The tabular form of Prim's algorithm has the following steps:

1. Select any vertex (town). Cross out its row. Select the shortest distance (lowest value) from the column(s) for the crossed out row(s). Highlight that value.
2. Cross out the row with the newly highlighted value in it. Repeat step 1. Continue until all rows are crossed out.
3. Once all rows are crossed out, read off the connections. The column and the row of each highlighted value are the vertices that are linked and should be included.

## Example

First we will choose a town at random – Swindon – and cross out that row. Then we highlight the smallest value in the column for the crossed out row.

Next we need to cross out the row with the newly-highlighted value in it (the Oxford row). Then we look for, and highlight, the smallest value in the columns for the two crossed out rows (Swindon and Oxford).

Next we need to cross out the row with the newly-highlighted value in it (the Reading row). Then we look for, and highlight, the smallest value in the columns for the three crossed out rows (Swindon, Oxford, and Reading).

Next we need to cross out the row with the newly-highlighted value in it (the Bristol row). Then we look for, and highlight, the smallest value in the columns for the four crossed out rows (Swindon, Oxford, Reading, and Bristol).

Next we need to cross out the row with the newly-highlighted value in it (the London row). Then we look for, and highlight, the smallest value in the columns for the crossed out rows (Swindon, Oxford, Reading, Bristol, and Southampton).

We've now selected a value from the last undeleted row. This means we've selected all the edges that we need to create the minimum spanning tree for the network. All we have left to do is write out the connections between the vertices.

The connections in the network are found by taking the row and column headings for each selected value in the table. The edges are: {(Bristol, Swindon), (London, Reading), (Oxford, Swindon), (Reading, Oxford), (Southampton, Reading)}. This is the same set of edges as in the minimum spanning tree generated by the diagrammatic version of the algorithm.

# Dijkstra's Algorithm

Dijkstra's algorithm generates the shortest path tree from a given node to any (or every) other node in the network. Although the problem that we will use as an example is fairly trivial and can be solved by inspection, the technique that we will use can be applied to much larger problems.

Dijkstra's algorithm requires that each node in the network be assigned values (labels). There is a working label and a permanent label, as well as an ordering label. Whilst going through the steps of the algorithm you will assign a working label to each vertex. The smallest working label at each iteration will become permanent.

The steps of Dijkstra's algorithm are:

1. Give the start point the permanent label of 0, and the ordering label 1.
2. Any vertex directly connected to the last vertex given a permanent label is assigned a working label equal to the weight of the connecting edge added to the permanent label of the vertex you are coming from. If it already has a working label, replace it only if the new working label is lower.
3. Select the minimum current working value in the network and make it the permanent label for that node.
4. If the destination node has a permanent label, go to step 5; otherwise go to step 2.
5. Connect the destination to the start, working backwards.
Select any edge for which the difference between the permanent labels at each end is equal to the weight of the edge.

It is a good idea to use a system for keeping track of the current working labels and the ordering and permanent labels of each of the nodes in the network. A standard system found on A Level exams is that a small grid is drawn near each of the nodes; working labels are written in the lower box, ordering labels in the upper left box, and permanent labels in the upper right box, as shown in figure 1.

## Example

In this example we will consider the network in figure 2. We would like to find the shortest path from node A to node H.

As you can see at the end of the video, the destination node (node H) has a permanent label. This means that a shortest route from A to H has been found. The shortest route is found by tracing back from the destination node (node H) to the start (node A), selecting each edge for which the difference between the permanent values at its terminating nodes is equal to the weight of the edge. The shortest routes for this network are shown in figure 3 below. The edges shown in green (AB and BC) are an alternative to the edge AC shown in red. In this case there are two equivalent shortest routes from A to H.

The solution to the shortest route problem, from A to H, in this network is therefore A-C-G-H, or the equivalent-length route A-B-C-G-H. Both have a total distance of 11 units, given by the permanent label of the terminal node.

# Prim's Complexity

Prim's algorithm starts by selecting the least weight edge from one node. In a complete network there are $(n - 1)$ edges from each node. At step 1 this means that there are $(n - 1) - 1$ comparisons to make.

Having made the first comparison and selection there are $(n - 2)$ unconnected nodes, with $(n - 2)$ edges joining from each of the two nodes already selected. This means that there are $2(n - 2) - 1$ comparisons that need to be made.

Carrying on this argument results in the following expression for the number of comparisons that need to be made to complete the minimum spanning tree:

$((n - 1) - 1) + (2(n - 2) - 1) + (3(n - 3) - 1) + \dotsc + ((n - 1)(n - (n - 1)) - 1)$

The result is that Prim's algorithm has cubic complexity.

# Dijkstra's Complexity

Dijkstra's algorithm inspects each of the nodes that have not yet been permanently labelled, so the work per step is approximately proportional to the number of remaining nodes. For the entire process this number is:

$(n - 1) + (n - 2) + (n - 3) + \dotsc + 1 = \frac{1}{2}n(n - 1)$

so this simple form of Dijkstra's algorithm has quadratic complexity.

# Linear Programming

Linear programming is an optimisation technique that will enable you to find a maximum or minimum value for a problem, subject to any constraints that are relevant. In linear programming the constraints and the objective function (the thing you are trying to maximise or minimise) are always modelled by linear expressions.

There are two things that you need to create in order to be able to solve a linear programming question. First you should use the information given in the question to generate one or more constraint inequalities. Second you need to define the objective function, which describes the problem for which you are trying to find a maximum or minimum. Before you can do all this you need to define your variables. The variables that you use will depend on the problem. It will be best to work through an example to demonstrate this.
# Linear Programming Example

A worked example of a linear programming problem.

## Question

Clive has decided that as a fund-raising activity he will make and sell candles. He has decided to make two types of candle: a plain one, and a scented one.

Each candle requires 200g of wax, and Clive has bought enough ingredients to make a total of 1.6kg of wax. His idea is to make the scented candles tall and thin, and the plain candles shorter and fatter. This means that the length of wick required for a scented candle is 200mm but only 100mm is needed for a plain candle. Clive only has 1m of wick to use. Clive has worked out from a survey that he won't be able to sell more scented candles than double the number of plain candles plus three.

If he is going to sell the candles at £3 for a scented candle and £2 for a plain one, how many of each should he make so that he raises the most money possible?

## Solution

### Definitions

The first thing that we must do is define our variables. Let:

$x$ be the number of plain candles made,
$y$ be the number of scented candles made, and
$I$ be the income from selling the candles.

### Constraints

Having defined the variables we can construct our constraint inequalities using the information provided in the question. These describe the conditions that will restrict the number of candles that we can make.

The first constraint relates to the mass of wax available. 200g is required for each candle, and the total available is 1.6kg. Converting to consistent units, writing as an inequality, then simplifying, we get the first constraint inequality:

\begin{aligned}200x + 200y &\leq 1600 \\ x + y &\leq 8\end{aligned}

The second constraint is given by the amount of wick available and required. 100mm is needed for each plain candle, 200mm is needed for each scented candle, and 1m is available. As with the first constraint, we make sure the units are consistent, then write as an inequality and simplify:

\begin{aligned}100x + 200y &\leq 1000 \\ x + 2y &\leq 10\end{aligned}

The final constraint is that Clive won't be able to sell more scented candles than double the number of plain candles plus three. Although slightly complicated, this constraint can be written as follows:

$y \leq 2x + 3$

#### Trivial constraints

Most linear programming problems will have some constraints that are not explicitly described. Sometimes you will need to infer a constraint from the context; most frequently these will be trivial. Although referred to as trivial, these constraints are important to include. In this case the context means that we are only interested in non-negative whole numbers of candles:

$x \geq 0, \quad y \geq 0, \quad x, y \in \mathbb{Z}$

### Objective Function

Having constructed all of our constraint inequalities, the only thing left for us to do before solving the problem is to generate an objective function. This is created from the information that tells us what we are to optimise (usually make biggest or smallest). In this case, we need to maximise income, with each scented candle selling at £3 and each plain candle selling at £2. The objective function will be:

$I = 2x + 3y$

## Solving the problem graphically

This problem can be solved graphically by constructing the graph below.
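Before reading the graphical solution, it's worth noting that the same constraints can be handed straight to an LP solver. Here's a sketch using scipy.optimize.linprog (scipy assumed available; linprog minimises, so the objective is negated, and it solves the continuous relaxation, so in general integer points near its optimum still need checking):

from scipy.optimize import linprog

# Maximise I = 2x + 3y  ->  minimise -2x - 3y
c = [-2, -3]
# Constraints in A_ub @ [x, y] <= b_ub form:
#   x + y <= 8,  x + 2y <= 10,  -2x + y <= 3  (from y <= 2x + 3)
A_ub = [[1, 1], [1, 2], [-2, 1]]
b_ub = [8, 10, 3]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # here the continuous optimum is already integer: x=6, y=2, I=18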
Using the constraints that we have already constructed:

$x \geq 0, \quad y \geq 0 \\ x + y \leq 8 \\ x + 2y \leq 10 \\ y \leq 2x + 3$

Using the coordinates found at or near the vertices in the objective function, we can work out which combination of scented and plain candles will give the greatest income. Because the only solutions that make sense in the context are integer values of both $x$ and $y$, it is worth checking coordinate points near the vertices, inside the feasible region.

As you can see, the highest income can be achieved when 6 plain candles and 2 scented candles are sold. This will give an income of £18.

# Critical Path Analysis – Introduction

Critical path analysis allows you to determine the best way of arranging activities. The typical question to answer is "what is the minimum time required for a process?" Performing activities one after the other as you come to them might not be the best way of organising a project. If more than one activity can be worked on at the same time you will need to decide when to start each one.

Below, in table 1, are the activities needed to build a house, with durations given in days. We are going to perform a critical path analysis on the process of house construction.

The first thing that we need to do is to decide which of the activities depend on other activities:

• B requires that A be complete.
• C requires that B be complete.
• E requires that C and D be complete.
• F requires that E be complete.
• G, H, and I all require that F be complete.
• J and K require that G, H, and I be complete.
• L requires that J and K be complete.

With this information we can draw an activity network. In an activity network the edges represent activities, such as those listed above. The nodes represent events. An event is the start and/or finish of one or more activities. The activity network for the house building example is shown in figure 1.

The activity network shows that there are 12 events involved in the building of this house, and that 12 activities (plus 3 dummy activities) are required before the house is complete. As you can see, there are a few activities that can occur concurrently. Later we will use this example to find the critical path through the operation. The critical path is the longest path through the network and determines the minimum length of time that is necessary to complete the house.
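As a preview of how the critical path calculation can be automated, here is a sketch of the forward pass over the dependency list above. Table 1's actual durations aren't reproduced in the text, so the values below are placeholders:

# Hypothetical durations in days (Table 1's real values aren't in the text).
duration = {'A': 3, 'B': 2, 'C': 4, 'D': 1, 'E': 5, 'F': 2,
            'G': 3, 'H': 4, 'I': 2, 'J': 3, 'K': 1, 'L': 2}
# Dependencies exactly as listed above.
depends = {'A': [], 'B': ['A'], 'C': ['B'], 'D': [], 'E': ['C', 'D'],
           'F': ['E'], 'G': ['F'], 'H': ['F'], 'I': ['F'],
           'J': ['G', 'H', 'I'], 'K': ['G', 'H', 'I'], 'L': ['J', 'K']}

earliest_finish = {}

def finish(a):
    # earliest finish = activity duration + latest earliest-finish among prerequisites
    if a not in earliest_finish:
        start = max((finish(p) for p in depends[a]), default=0)
        earliest_finish[a] = start + duration[a]
    return earliest_finish[a]

print(max(finish(a) for a in duration))  # minimum project length in days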
2021-10-20 17:29:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 23, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.714569628238678, "perplexity": 637.7234670134018}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585322.63/warc/CC-MAIN-20211020152307-20211020182307-00311.warc.gz"}
http://nrich.maths.org/4717
# An Introduction to Proof by Contradiction

##### Stage: 4 and 5

Key to all mathematics is the notion of proof. We wish to be able to say with absolute certainty that a property holds for all numbers or all cases, not just those we've tried, and not just because it sounds convincing or would be quite nice if it were so. Certain types of proof come up again and again in all areas of mathematics, one of which is proof by contradiction.

To prove something by contradiction, we assume that what we want to prove is not true, and then show that the consequences of this are not possible. That is, the consequences contradict either what we have just assumed, or something we already know to be true (or, indeed, both) - we call this a contradiction.

A simple example of this principle can be seen by considering Sally and her parking ticket. We know that if Sally did not pay her parking ticket, she would have got a nasty letter from the council. We also know that she did not get any nasty letters. Either she paid her parking ticket or she didn't, and if she didn't then, from our original information, we know that she would have got a nasty letter. Since she didn't get a nasty letter, she must therefore have paid her ticket.

If we were formally proving by contradiction that Sally had paid her ticket, we would assume that she did not pay her ticket and deduce that therefore she should have got a nasty letter from the council. However, we know her post was particularly pleasant this week, and contained no nasty letters whatsoever. This is a contradiction, and therefore our assumption is wrong. In this example it all seems a bit long-winded to prove something so obvious, but in more complicated examples it is useful to state exactly what we are assuming and where our contradiction is found.

One well-known use of this method is in the proof that $\sqrt{2}$ is irrational. Rational numbers are those which can be written as fractions, that is as one integer divided by another ($1/2, 3/4, 4/2, 973/221, \dots$). They can be put into what is called irreducible form, which is where the numerator (top number) and denominator (bottom number) have no common factors other than 1, i.e. are coprime. Irrational numbers are those which cannot be put into such a form, such as $\pi$ and - as we are about to see - $\sqrt{2}$.

Let us start by proving (by contradiction) that if $p^2$ is even then $p$ is even, as this is a result we will wish to use in the main proof. We do this by considering a number $p$ whose square, $p^2$, is even, and assuming that this $p$ is not even. Then we try to arrive at a contradiction.

If $p$ is not even, it is odd, and therefore of the form $2n+1$, where $n$ is a whole number. Then $p^2 = (2n+1)^2 = 4n^2 + 4n + 1$. But $4n^2 + 4n$ is clearly even, so $4n^2 + 4n + 1$ is odd. This means $p^2$ is not even, so since we are only considering $p$ because $p^2$ is even, we have a contradiction here. Therefore our assumption that $p$ is not even must be wrong, i.e. $p$ is even.

Now we are ready to start our proof that $\sqrt{2}$ is irrational, which of course we begin by assuming that it is not (i.e. that it is rational), and then trying to arrive at a contradiction.

Suppose $\sqrt{2}$ is rational. Then it can be written as $p/q$, where $p$ and $q$ are coprime integers. Thus if $\sqrt{2} = p/q$ then squaring both sides gives $2 = p^{2}/q^{2}$. Then $2q^2 = p^2$ and so $p^2$ is clearly even.
If $p^2$ is even then we know from above that $p$ must be even, and so it can be written as $p = 2m$ where $m$ is an integer. Thus $p^2 = 4m^2$ and so $2q^2 = 4m^2$. Dividing $2q^2 = 4m^2$ through by $2$ gives us that $q^2$ is also even, and so $q$ must be even. If $p$ and $q$ are both even then they have $2$ as a common factor, which contradicts the assumption that they are coprime. Thus our assumption is incorrect, and $\sqrt{2}$ is not rational.

You may like to try this challenge, which involves a slightly different proof by contradiction to prove the same result. This alternative proof can be generalised to show that $\sqrt{n}$ is irrational when $n$ is not a square number.

Proving something by contradiction can be a very nice method when it works, and there are many proofs in mathematics made easier or, indeed, possible by it. However, it is not always the best way of approaching a problem. For instance, say for some reason we wish to prove that (positive) $\sqrt{4}$ is rational. Encouraged by our success with $\sqrt{2}$, we could suppose for a contradiction that $\sqrt{4}$ is not rational. Then it cannot be written as $p/q$ where $p$ and $q$ are positive integers. However, if we let $p = 2$ and $q = 1$ then $(p/q)^2 = p^2/q^2 = 4/1 = 4$. Also, both $2$ and $1$ are positive, so $p/q = 2/1$ is positive. Thus $\sqrt{4} = 2/1$, so we have contradicted our assumption that $\sqrt{4}$ cannot be written as an integer divided by an integer. Therefore $\sqrt{4}$ is not irrational, i.e. it is rational.

All we really needed to do was point out that $\sqrt{4} = 2$, which is a perfectly good rational number in its own right. This would have been much quicker than going through the whole proof by contradiction. Even more importantly it was, in fact, a step in the above proof.

Having just warned you of the dangers of blindly trying to prove things by contradiction, we end with one of the nicest proofs - by contradiction or otherwise - I know. This is Euclid's proof that there are infinitely many prime numbers, and it does indeed work by contradiction. Before we begin this proof, we need to know that any natural number greater than 1 (so $2, 3, 4, \dots$) has a prime factor. We can prove this by, in fact, contradiction.

Take the usual definition of a prime as a natural number greater than 1 divisible only by itself and 1. Suppose it is not the case that every natural number greater than 1 has a prime factor. Then there must be a least natural number greater than 1 which does not have a prime factor. Let us call this $n$. Then $n$ is clearly not prime, so it must have a factor $m$ that is neither $n$ nor 1. But $m < n$, so $m$ has a prime factor $p$ by the minimality of $n$. Thus $p$ is a factor of $m$, which is a factor of $n$, so $p$ is a prime factor of $n$. Thus $n$ has a prime factor, and this means it is not the case that there is a least natural number greater than 1 that does not have a prime factor. This contradicts our assumption that not every natural number greater than 1 has a prime factor. So every natural number greater than 1 does have a prime factor.

Having proved this, we can now go on to our main proof. We wish to prove there are infinitely many primes, so of course we suppose for contradiction that there are only finitely many, say $n$ of them. This means that we can list them: $\{p_1, p_2, p_3, \dots, p_n\}$. Consider their product, $p_1 \times p_2 \times \dots \times p_n = \prod_{i=1}^{n}p_i$.
Now $\prod_{i=1}^{n}p_i + 1$ is a natural number (as it is the sum of two natural numbers) and it is clearly greater than $1$. Thus as was noted earlier, it has a prime factor. Can you see where we need to go from here? * * * * * * * * * The answer is that $\prod_{i=1}^{n}p_i + 1$ has a prime factor, $p$. Since we are assuming that there are finitely many primes, $p$ is one of $\{p_1, p_2,p_3, \dots, p_n\}$. Thus $p$ divides $\prod_{i=1}^{n}p_i$, too. Now, $p$ cannot divide both $\prod_{i=1}^{n} p_i$ and $\prod_{i=1}^{n}p_i + 1$, or else it would divide their difference, $1$. Thus $p$ is not in our complete list of primes, and so we have arrived at a contradiction. There are therefore infinitely many primes. At the time of writing this article Katherine was a third year undergraduate mathematician at Balliol College, Oxford. Vicky had just finished a degree in Maths at Cambridge and was doing a fourth year course studying Combinatorics, Number Theory and Algebra, still at Trinity College, Cambridge.
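To see Euclid's construction in action, here is a small Python sketch (supplementary to the article): given any finite list of primes, it finds a prime factor of their product plus one, which by the argument above cannot belong to the list.

def new_prime_factor(primes):
    """Return a prime factor of (product of primes) + 1."""
    m = 1
    for p in primes:
        m *= p
    m += 1
    d = 2
    while d * d <= m:      # trial division finds the smallest (hence prime) factor
        if m % d == 0:
            return d
        d += 1
    return m               # m itself is prime

print(new_prime_factor([2, 3, 5, 7, 11, 13]))  # 30031 = 59 x 509, so this prints 59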
2015-03-28 09:25:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9004566669464111, "perplexity": 102.3927506808026}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131297416.52/warc/CC-MAIN-20150323172137-00152-ip-10-168-14-71.ec2.internal.warc.gz"}
https://www.rdocumentation.org/packages/rlang/versions/0.1.6/topics/as_function
as_function

Convert to function or closure

• as_function() transforms objects to functions. It fetches functions by name if supplied a string, or transforms quosures to a proper function.
• as_closure() first passes its argument to as_function(). If the result is a primitive function, it regularises it to a proper closure (see is_function() about primitive functions).

Usage

as_function(x, env = caller_env())
as_closure(x, env = caller_env())

Arguments

x: A function or formula. If a function, it is used as is. If a formula, e.g. ~ .x + 2, it is converted to a function with two arguments, .x or . and .y. This allows you to create very compact anonymous functions with up to two inputs.

env: Environment in which to fetch the function in case x is a string.

Aliases: as_function, as_closure

Examples

# NOT RUN {
f <- as_function(~ . + 1)
f(10)

# Primitive functions are regularised as closures
as_closure(list)
as_closure("list")

# Operators have .x and .y as arguments, just like lambda
# functions created with the formula syntax:
as_closure(`+`)
as_closure(`~`)
# }

Documentation reproduced from package rlang, version 0.1.6, License: GPL-3
2018-01-20 17:00:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3360042870044708, "perplexity": 7471.507179390513}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889677.76/warc/CC-MAIN-20180120162254-20180120182254-00795.warc.gz"}
https://www.physicsforums.com/threads/photon-problems.86974/
# Photon problems

## Main Question or Discussion Point

The energy of a photon is given by $E = hf$, but when we consider the Doppler effect with the source moving away from the receiver, it seems that the energy of the photon is decreased, which appears to contradict the conservation of energy. Is this argument true? If it is, what does it mean for the conservation of energy? And if it is wrong, where do our calculations and expressions fail?

ZapperZ (Staff Emeritus) replied:

The energy of a ball moving with velocity $v$ is $$\frac{1}{2}mv^2$$. Yet, if you're moving in the same reference frame as the ball, the ball has no KE. Do you think there's an energy conservation violation here too?

Zz.
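For reference, the relativistic Doppler formula makes the frame dependence explicit. For a source receding from the receiver at speed $v = \beta c$, the received frequency and photon energy are

$$f' = f\sqrt{\frac{1-\beta}{1+\beta}}, \qquad E' = hf' = hf\sqrt{\frac{1-\beta}{1+\beta}}$$

so the photon's energy, like the ball's kinetic energy, is frame-dependent. Conservation of energy only requires that the total energy stay constant within any single inertial frame, not that it be the same number across frames.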
2019-12-09 05:44:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7595840692520142, "perplexity": 775.239063535472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540517557.43/warc/CC-MAIN-20191209041847-20191209065847-00160.warc.gz"}
https://www.biostars.org/p/405430/
How do they get such a low adjusted P value in differential analysis of proteomics data?

Xiaokang ZH asked, 2.1 years ago:

I'm new to proteomics data, and my experience with it is quite different from transcriptomics data (RNA-Seq). I have protein abundance data from samples in a control group and an exposure group (an in vivo experiment exposing fish to a toxicant, 10 samples in each group). The purpose is to find the differentially expressed proteins. I used the package DEP to do the preprocessing and statistical analysis. In the end, the adjusted P values (p-adj) are very high (either equal to 1 or close to 1) and the fold changes are also almost 1, so if I use p-adj then no protein is differentially expressed. I read some papers about proteomics data (the ones citing that package) and they report getting very low p-adj values (0.05 is used as the threshold). I suspect that I didn't do the analysis in the correct way, but I couldn't find the problem...

The steps I've applied to the raw abundance data are: remove the proteins that have more than half missing values in any group, transform the raw abundance with arcsin, impute the missing values using maximum likelihood estimation, and use the function test_diff from DEP to do the differential analysis.

Tags: proteomics, differential analysis, DEP, P value, FDR

Leite replied:

Dear @Xiaokang ZH, as you said you are new to proteomic data analysis, I would like to suggest some free software that is easier to start working with proteomics data. The first is MaxQuant and the other is Perseus. MaxQuant is for analysing raw data, after which you import the result into Perseus, where you can do a lot of analysis, like LFQ. A very interesting point is that the Max Planck Institute offers a series of summer school tutorial videos: MaxQuant Summer School 2019 Madison. If you already know this software, sorry for my answer. Best regards, Leite

Xiaokang ZH replied:

Thank you Leite. The data I have is already processed abundance. I'll check out Perseus.

Another comment:

I am guessing your groups have large variation; you should try to find outliers with a PCA/MDS.

Xiaokang ZH replied:

Thank you! That's a good point. I did find 3 outliers with PCA, and after removing them the p-adj values became more normal (I used to have one comparison of two groups with all p-adj = 1, and now they look normal). But they are still very high. A quick glance at the lowest ones: 0.00827, 0.0611, 0.219, 0.265, and their corresponding fold changes are: 1.0514644, 0.9507250, 0.9721831, 1.0139595
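The PCA outlier screen suggested above is generic; here is a sketch in Python/numpy purely to illustrate the idea (the DEP workflow itself is R, and the simulated matrix below just stands in for real, imputed abundances):

import numpy as np

# hypothetical samples x proteins matrix of transformed, imputed abundances
X = np.random.default_rng(0).normal(size=(20, 500))
Xc = X - X.mean(axis=0)                 # centre each protein

# project samples onto the first two principal components via SVD
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = U[:, :2] * S[:2]

# flag samples unusually far from the centroid in PC space
d = np.linalg.norm(pcs - pcs.mean(axis=0), axis=1)
print(np.where(d > d.mean() + 2 * d.std())[0])  # candidate outliers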
2021-12-09 04:41:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3617793917655945, "perplexity": 1953.6173113467455}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363659.21/warc/CC-MAIN-20211209030858-20211209060858-00415.warc.gz"}
https://csharp-book.softuni.org/Content/Chapter-6-1-nested-loops/nested-loops/example-square-of-stars.html
Example: Square of Stars

Print in the console a square of N x N stars:

Input: 2
Output:
* *
* *

Input: 3
Output:
* * *
* * *
* * *

Input: 4
Output:
* * * *
* * * *
* * * *
* * * *

Hints and Guidelines

The problem is similar to the last one. The difference here is that we need to figure out how to add a white space after the stars so that there aren't any excess white spaces in the beginning or the end.
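The loop structure is the interesting part, so here is a sketch of one solution (in Python rather than the book's C#; joining the row avoids the trailing space the hint warns about):

n = int(input())
for _ in range(n):
    # build each row as n stars separated by single spaces, no trailing space
    print(" ".join("*" for _ in range(n)))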
2018-11-15 19:34:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5072978138923645, "perplexity": 350.7700545910892}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742906.49/warc/CC-MAIN-20181115182450-20181115204450-00284.warc.gz"}
http://mathhelpforum.com/differential-geometry/120267-sequence-lebesgue-space-print.html
# Sequence, Lebesgue Space

• December 13th 2009, 01:48 PM, canberra1454:

Prove that if $(g_n)$ is a sequence in $L^2[0, 1]$ with $|| g_n ||_2 \leq 1$ for all $n \geq 1$ then $(g_n/n)$ converges to zero a.e.

If $(f_n)_n$ is a sequence in $L^1[0, 1]$ such that $\sum_{n=1}^{\infty} || f_n ||_1 < \infty$ then $\sum_{n=1}^{\infty} | f_n(s) | < \infty$ for almost every $s \in [0, 1]$. Why not apply the quoted result to $f_n=\frac{g_n^2}{n^2}$?... Try to find how to conclude from there.
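Spelling out the hint: since $\| g_n^2 \|_1 = \int_0^1 g_n^2 = \| g_n \|_2^2 \leq 1$, we have

$$\sum_{n=1}^{\infty} \left\| \frac{g_n^2}{n^2} \right\|_1 \leq \sum_{n=1}^{\infty} \frac{1}{n^2} < \infty,$$

so by the quoted result $\sum_{n=1}^{\infty} g_n(s)^2/n^2 < \infty$ for almost every $s$. In particular $g_n(s)^2/n^2 \to 0$, and hence $g_n(s)/n \to 0$, for almost every $s \in [0, 1]$.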
2014-10-01 12:45:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9546160101890564, "perplexity": 224.26036954234033}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663417.16/warc/CC-MAIN-20140930004103-00246-ip-10-234-18-248.ec2.internal.warc.gz"}
https://www.transtutors.com/questions/plot-the-quantization-error-sequence-that-results-after-exciting-a-6-bit-adc-with-a--2279745.htm
# Plot the quantization error sequence that results after exciting a 6-bit ADC with a full-scale amplitude sine wave

Plot the quantization error sequence that results after exciting a 6-bit ADC with a full-scale amplitude sine wave. Use the MATLAB routine given in Example 6.4 for the quantizer. Compute the mean and RMS value of the quantization noise sequence. Repeat for an 8-bit quantizer. How do the mean and RMS values compare with theory in the two cases?

Example 6.4: Compute the quantization noise sequence that results from exciting a 3-bit ADC with a full-scale amplitude sinusoidal signal of unity amplitude, zero phase, M = 1, and N = 64. Also, compute the RMS value of the quantization noise and compare this result with its theoretically predicted value.
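The book's Example 6.4 MATLAB routine isn't reproduced in the text, so as a stand-in here is a sketch using a generic mid-tread uniform quantizer in Python/numpy; the RMS of the error should come out close to the usual $q/\sqrt{12}$ prediction, where $q$ is the step size:

import numpy as np

def adc_quantize(x, bits):
    """Mid-tread uniform quantizer on [-1, 1) -- a generic sketch,
    not the book's Example 6.4 routine."""
    q = 2.0 / 2**bits                  # step size
    xq = q * np.round(x / q)
    return np.clip(xq, -1.0, 1.0 - q)  # keep within the 2**bits output codes

N = 64
x = np.sin(2 * np.pi * np.arange(N) / N)   # full-scale sine, M = 1
for bits in (6, 8):
    e = adc_quantize(x, bits) - x          # quantization error sequence
    q = 2.0 / 2**bits
    print(bits, e.mean(), np.sqrt(np.mean(e**2)), q / np.sqrt(12))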
2021-09-27 19:12:32
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8523748517036438, "perplexity": 1062.7517112889589}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058467.95/warc/CC-MAIN-20210927181724-20210927211724-00484.warc.gz"}
https://moodle.org/mod/forum/discuss.php?d=221215
## General developer forum

### Using students' answers in question formulas later

Using students' answers in question formulas later

I'm pretty new to Moodle and I'm wondering if it is possible to set up quizzes in such a way that a student's previous answer can be used in the next question. Here is an example of what I want to do:

Question 1: What is the value of the voltage used?
Question 2: What is the measured current in the circuit?
Question 3: What is the circuit resistance?

The answer to question 3 is the first answer divided by the second answer, regardless of the values given in 1 or 2. Is this possible?

Re: Using students' answers in question formulas later

I don't think this kind of question is possible with standard questions. It is quite possible with the 3rd party formulas question type plugin. The question would have only 1 part with 3 coordinates. The part's text would be something like:

Question 1: What is the value of the voltage used {_0}
Question 2: What is the measured current in the circuit {_1}
Question 3: What is the circuit resistance {_2}

And the grading criteria would be _0/_1 == _2, or maybe better abs(_0/_1 - _2) < 0.01 to allow for some tolerance. The part answer can be any list of 3 values that satisfy the equation, like [24, 2, 12]. It will not be used in calculations but would be displayed if you check "Right answer" in the quiz review options, so I suggest not checking that option for this kind of quiz.

Re: Using students' answers in question formulas later

Thanks for the answer; this looks like what I need, but I will have to find out who has admin rights in order to get it installed. Thanks.

Re: Using students' answers in question formulas later

These are basic numerical type questions, and why do you want to use previous answers? If any part is incorrect then the final answer is wrong. I assume you want something using the formula $$R = \frac{V_1 - V_2}{I}$$, i.e. a dropping resistor. If it's something else, a fuller explanation please.

Re: Using students' answers in question formulas later

Answer 3 is only dependent on answers 1 and 2. I would hope to have limits set for answers 1 and 2, which would be measured values. Answer 3 is a calculated value using the previous 2 answers. The circuit is a simple resistor circuit with a varied voltage supplied to it. We measure voltage and current, calculate resistance, and check it against the colour code of the resistor, proving Ohm's law. I hope I'm explaining it OK.

Re: Using students' answers in question formulas later

I do that as a prac session. You want to calculate Vs - Vl, where Vs is a variable, then, assuming as in the attached, calculate R. Would you consider letting your students build a virtual circuit using the PhET site and handing that in as an assignment?

Re: Using students' answers in question formulas later

John, the PhET site looks good, I must look into it more, but I would like to keep the practical aspect of the lab. I'm just looking into making the lab run smoothly, and not having the last student you check on be stuck at the first step when the lab is half over.

Re: Using students' answers in question formulas later

What version of Moodle are you using? The Lesson mod might be the way. I will write some questions for you in the morning, when I know the version. John

Re: Using students' answers in question formulas later

I'm not sure of the version, but I'll try to find out. I don't think I'll be able to add plugins easily. The administrator wants me to see if the existing system will do, but I can't blame him: if every user wanted new things added, things could get messy quickly.

Re: Using students' answers in question formulas later

We are using Moodle 2.2 at the moment. Thanks for the offer of help.

Re: Using students' answers in question formulas later

The easiest way for me to do it is to give you a copy of my electronics course; you can then restore it and see the way I did/do it. The catch is that the backup is in Moodle 2.4 format; it won't restore properly on earlier versions.

Re: Using students' answers in question formulas later

Thanks John, that's very generous of you. I have never backed up or restored a course; I'll give that a go on Monday and see how I get on.
2015-07-07 16:58:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44659680128097534, "perplexity": 1620.4085395658822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375099755.63/warc/CC-MAIN-20150627031819-00053-ip-10-179-60-89.ec2.internal.warc.gz"}
https://brandewinder.com/2010/01/07/Fun-with-queues/
# Fun with queues

I am currently prototyping an application, which brought up some fun modeling questions. Imagine the following situation: there are 2 products on the market. Customers use either of them, but not both. In each time period (we consider a discrete time model), new customers come on the market, and select one of the 2 products, with probability p and (1-p). At the end of each period, some existing customers stop using their product and leave the market, with a rate of exit specific to the product.

Suppose that you knew p and the rates of exit for each product. If the total market size was stable, what market share would you expect to see for each product?

Before tackling that question, let's start with an easier problem: if you knew how many new customers were coming in each period, what would you expect the product shares to be?

Let's illustrate with an example. You want to open the Awesome Bar & Restaurant, an Awesome place with a large bar and dining room. You expect that 100 customers will show up at the door every hour. A large majority of the customers (70%) head straight to the bar, but on the other hand, people who come for dinner stay for much longer. How many seats should you have in the bar and the restaurant so that no one has to wait to be seated?

What makes this question interesting is that both queues, in the bar and restaurant, will build up over time. If 100 persons enter the restaurant at time t, at time t+1, 60 of them will still be there – and if you had only 100 tables available, the new wave of customers coming in would have to wait to be seated. We clearly need more than 100 seats to keep customers happy.

How can we approach that problem? We are looking for a fixed point, a solution Population = [Population(Bar); Population(Restaurant)] which is invariant over time, so that

f([Bar, Restaurant]) = [Bar, Restaurant]

Practically, this means that we expect the number of people sitting at the bar to stay constant over time – which implies that for each period, there should be as many people entering and leaving the bar. If N people enter the Awesome Place at time t, the number of people entering the bar is Proportion(Bar) x N. If the population at the bar at time t is Population(Bar), then the number of people exiting will be Population(Bar) x Exit Rate(Bar). Equating the number of people coming and leaving the bar, we get:

Total Customers Entering x Proportion(Bar) = Population(Bar) x Exit Rate(Bar)

Which gives us

Population(Bar) = Customers Entering x Proportion(Bar) / Exit Rate(Bar)

or, in our specific example,

Population(Bar) = 100 x 0.7 / 0.8 = 87.5
Population(Restaurant) = 100 x 0.3 / 0.4 = 75
Total = 162.5 seats

What did we just see here? First, even though only 100 customers are coming in every hour, because some people stay in each place more than one hour, we actually need to build much more capacity than 100 – in our case, we need a total of 163 seats if we want to seat customers without wait time. Then, while 70% of customers head for the bar, they leave much faster than restaurant customers, and thus don't require that much space: at any given time, only 54% of the customers should be at the bar.

I plotted below how the place would fill up over time, starting empty. The chart shows that both queues converge to the "stable values" we identified. However, while the bar gets there fairly quickly, the restaurant takes much more time to fill up.
The reason for this is the low exit rate: because people stay longer, a larger buffer of free seats is required, and initially filling that buffer takes time. In the next installments, we’ll look at this in more depth – addressing the original question, and maybe digging deeper into related questions, like the impact of random fluctuations, or what happens if the market is growing, or if we consider more complex queues!
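The convergence is easy to reproduce numerically. The following is a minimal sketch (not the original post's code; the variable names and the 20-hour horizon are mine) that iterates the dynamics from an empty venue:

    #include <cstdio>

    int main() {
        const double arrivals = 100.0;  // customers per hour
        const double pBar     = 0.7;    // share heading to the bar
        const double exitBar  = 0.8;    // hourly exit rate at the bar
        const double exitRest = 0.4;    // hourly exit rate in the restaurant
        double bar = 0.0, rest = 0.0;   // the place starts empty
        for (int t = 1; t <= 20; ++t) {
            // each hour: some patrons leave, new arrivals split 70/30
            bar  = bar  * (1.0 - exitBar)  + arrivals * pBar;
            rest = rest * (1.0 - exitRest) + arrivals * (1.0 - pBar);
            printf("hour %2d: bar %6.2f, restaurant %6.2f\n", t, bar, rest);
        }
        return 0;
    }

Running it shows the bar within 1% of its limit of 87.5 after only 3 hours, while the restaurant needs about 9 hours to get that close to 75 – exactly the asymmetry the chart describes.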
2020-04-08 08:10:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3426547944545746, "perplexity": 715.6818504222753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371810807.81/warc/CC-MAIN-20200408072713-20200408103213-00311.warc.gz"}
https://homework.cpm.org/category/CCI_CT/textbook/pc/chapter/11/lesson/11.2.2/problem/11-90
### Home > PC > Chapter 11 > Lesson 11.2.2 > Problem 11-90

11-90. The pedal shaft on a standard bicycle is about $7$ inches long, and the center of the bottom bracket to which the pedal shaft is attached is about $11$ inches above the ground. Sketch a graph that shows the relationship of the angle of the shaft (for one pedal) in standard position (as if the point of attachment were the origin of a set of $x$-$y$ axes shifted up $11$ units) to the height of the pedal above the ground as a rider pedals the bike. Assume the pedal starts in its lowest position and takes $2$ seconds to make one complete rotation.

1. Write an equation for a function that represents your graph. Sketch the situation. Identify the amplitude, period, and any shifting.

2. Jack feels that the best position for the pedal to start riding is when the pedal is at $10$ inches and heading downward. What is the first time the pedal will be in this position? Set your equation from part (a) equal to $10$ and solve.

Use the eTool below to see a sample animation. 11-90 HW eTool
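One equation consistent with the setup above (a sketch of a possible answer, not the official solution; the names $h$ and $t$ are mine, with $t$ in seconds and height in inches) is

$h(t) = 11 - 7\cos(\pi t)$

with amplitude $7$, period $2$, and midline $y = 11$, so that $h(0) = 4$ at the lowest pedal position. For part (b), $h(t) = 10$ gives $\cos(\pi t) = \frac{1}{7}$; the pedal is heading downward for $1 < t < 2$, so the first such time is $t = 2 - \frac{\arccos(1/7)}{\pi} \approx 1.55$ seconds.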
2022-05-24 05:18:25
{"extraction_info": {"found_math": true, "script_math_tex": 8, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7037667632102966, "perplexity": 1070.3053478491147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662564830.55/warc/CC-MAIN-20220524045003-20220524075003-00024.warc.gz"}
https://abaqus-docs.mit.edu/2017/English/SIMACAEMATRefMap/simamat-c-druckerprager.htm
Extended Drucker-Prager models

The extended Drucker-Prager models:

- are used to model frictional materials, which are typically granular-like soils and rock, and exhibit pressure-dependent yield (the material becomes stronger as the pressure increases);
- are used to model materials in which the compressive yield strength is greater than the tensile yield strength, such as those commonly found in composite and polymeric materials;
- allow a material to harden and/or soften isotropically;
- generally allow for volume change with inelastic behavior: the flow rule, defining the inelastic straining, allows simultaneous inelastic dilation (volume increase) and inelastic shearing;
- can include creep in Abaqus/Standard if the material exhibits long-term inelastic deformations;
- can be defined to be sensitive to the rate of straining, as is often the case in polymeric materials;
- can be used in conjunction with either the elastic material model (Linear elastic behavior) or, in Abaqus/Standard if creep is not defined, the porous elastic material model (Elastic behavior of porous materials);
- can be used in conjunction with an equation of state model (Equation of state) to describe the hydrodynamic response of the material in Abaqus/Explicit;
- can be used in conjunction with the models of progressive damage and failure (About damage and failure for ductile metals) to specify different damage initiation criteria and damage evolution laws that allow for the progressive degradation of the material stiffness and the removal of elements from the mesh; and
- are intended to simulate material response under essentially monotonic loading.

Related Topics: About the material library; Inelastic behavior; Rate-dependent yield; Rate-dependent plasticity: creep and swelling; Progressive Damage and Failure

In Other Guides: *DRUCKER PRAGER; *DRUCKER PRAGER HARDENING; *RATE DEPENDENT; *DRUCKER PRAGER CREEP; *TRIAXIAL TEST DATA; Defining Drucker-Prager plasticity

Products: Abaqus/Standard, Abaqus/Explicit, Abaqus/CAE
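For orientation, a minimal material definition using the linear extended Drucker-Prager model might look like the input-file fragment below. This is an illustrative sketch only: the material name and every numeric value (the elastic constants; the friction angle, flow stress ratio and dilation angle on the *DRUCKER PRAGER data line; the yield stress/plastic strain pair on the hardening line) are made-up placeholders, so the keyword reference should be consulted for the exact data line definitions.

    *MATERIAL, NAME=DENSE_SAND
    *ELASTIC
     2.0E8, 0.3
    *DRUCKER PRAGER
     35.0, 1.0, 10.0
    *DRUCKER PRAGER HARDENING
     1.0E5, 0.0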
2022-08-08 17:10:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 219, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7786844372749329, "perplexity": 3831.718290431837}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570868.47/warc/CC-MAIN-20220808152744-20220808182744-00246.warc.gz"}
http://math.stackexchange.com/questions/51325/computer-algorithm-running-time-calculation
# Computer Algorithm Running Time Calculation

I have the following pseudocode to figure out the approximate running time of. Can anyone help? Please explain each step and the reasoning. Thanks in advance.

    int m = n;
    while (m > 0)
        for k = 1 to m
            //dowork ---- CONSTANT
        m = floor(m/2)

Another algorithm I would appreciate a breakdown of, please. How would I compute the running time of this algorithm? NB. This is taken from the wiki site writeup on merge sort, but since that did not help, I was wondering if someone here would help break it down so I get where the O(n log n) comes from, for both best case and worst case. http://en.wikipedia.org/wiki/Merge_sort

A

    function merge_sort(m)
        if length(m) ≤ 1
            return m
        var list left, right, result
        var integer middle = length(m) / 2
        for each x in m up to middle
            add x to left
        for each x in m after middle
            add x to right
        left = merge_sort(left)
        right = merge_sort(right)
        result = merge(left, right)
        return result

B

    function merge(left, right)
        var list result
        while length(left) > 0 or length(right) > 0
            if length(left) > 0 and length(right) > 0
                if first(left) ≤ first(right)
                    append first(left) to result
                    left = rest(left)
                else
                    append first(right) to result
                    right = rest(right)
            else if length(left) > 0
                append first(left) to result
                left = rest(left)
            else if length(right) > 0
                append first(right) to result
                right = rest(right)
        end while
        return result

- m is not changing so why will this code ever halt. And if m >= 0 in the beginning why will while loop ever run? – Pratik Deoghare Jul 13 '11 at 23:48
m is changing, in the floor m/2 function – user10695 Jul 13 '11 at 23:51
Are you saying that m changes to $\lfloor m/2 \rfloor$? – mixedmath Jul 13 '11 at 23:55
By the way - then m will forever halt at -1 even if it does change. – mixedmath Jul 13 '11 at 23:55
@user10695 then write m = floor(m/2) in the code. – Pratik Deoghare Jul 14 '11 at 0:04

I don't know if another answer can be more illuminating to you, but let me try saying it a bit differently anyway. First, consider the inner loop:

    for k = 1 to m
        //dowork ---- CONSTANT

Because it's doing a constant amount of work $m$ times (for k=1 to m), this takes approximately time $m$, whatever the value of $m$ is. (To be precise, it takes $\Theta(m)$ time, which means that there are constants $c_1$ and $c_2$ such that the time it takes is between $c_1m$ and $c_2m$, but when finding the "approximate running time", i.e., the asymptotic complexity, we usually ignore constant factors.) Now the outer loop looks like

    m = n
    while (m > 0)
        //Do 'm' amount of work
        m = floor(m/2)

where I've replaced the inner loop with what we know about its time. So the first time the loop is run, $m=n$ and it takes time $n$. By the second time the loop is run, $m$ is halved, so $m = n/2$ and it takes time $n/2$ (I'm ignoring writing $\lfloor n/2 \rfloor$, because that's within a constant factor of $n/2$.) The third time it takes $n/4$, etc. So the total time taken by the code is:

\begin{align}&n + \frac{n}{2} + \frac{n}{4} + \frac{n}{8} + \dots \\ &= n\left(1 + \frac12 + \frac14 + \frac18 + \dots\right)\end{align}

until the term becomes less than $1$ (so $m$ would have become $0$ and the code would have halted). Now, the sum $\left(1 + \frac12 + \frac14 + \frac18 + \dots\right)$ is at most $2$, so the above sum for the running time is at most $2n$. Ignoring constant factors, we can say that the code takes approximately time $n$, which is shorthand for saying that the time it takes is linear in $n$.
Or, if we did the whole thing formally, we would have said it takes time $\Theta(n)$. (As it happens, we can analyse the number of terms in the sum: if the last term is $\frac{n}{2^k}$ (each term is of this type), then $k$ is such that $2^k \le n < 2^{k+1}$, which means $k = \lfloor \lg n \rfloor$, but all this is irrelevant to the actual problem here.)

- This line "Now, the sum $\left(1 + \frac12 + \frac14 + \frac18 + \dots\right)$ is at most $2$, so the above sum for the running time is at most $2n$." I am not sure exactly how you got it. I mean would it not be less than n since we are dividing n into bits? – user10695 Jul 15 '11 at 1:49
@user10695: The sum starts with "1" and then adds some more numbers, so it can't be less than 1. :-) It's the geometric series with ratio $1/2$. Or, if you don't know what that is, it may help to look at some partial sums: $1+1/2=1.5$, $1+1/2+1/4=1.75$, $1+1/2+1/4+1/8=1.875$, etc. (Each time you add only half the remaining distance to 2, so you never reach 2.) So $n(1+1/2)=1.5n$, $n(1+1/2+1/4)=1.75n$, etc. The sum $n(1+1/2+1/4+\dots)$ is at most $2n$. In $n+(n/2+n/4+\dots)$, you're adding $n$ to bits of $n$ that total at most another $n$. – ShreevatsaR Jul 15 '11 at 4:41
Ok. So how come you have it that 2n is bounded above by n? – user10695 Jul 15 '11 at 5:02
@user10695: 2n is obviously not bounded above by n; it is twice as big as n. What I said is that, ignoring constant factors (the constant factor 2), 2n is the same as n. "2n is the same as n up to constant factors": it is linear in n. Or formally, $2n=\Theta(n)$. The final conclusion, that the code takes time "approximately n", or $\Theta(n)$ to be precise, must be read as "there exist constants $c_1$ and $c_2$ so that the time it takes is between $c_1n$ and $c_2n$". (Note how I similarly ignored the constant time taken by "//dowork ---- CONSTANT" in the inner loop: took it as just 1.) – ShreevatsaR Jul 15 '11 at 5:29
Can I add a couple more algorithms for you to help me with? I mean I really want to understand this thing. – user10695 Jul 15 '11 at 20:35

Assuming that floor(m/2) means 'replace m with floor(m/2)', then: Assume $n$ is a power of 2, so that $n=2^p$. You set $m = n$ in the first line. In the first line of the while loop you iterate through $k = 1...m$, which takes $am = 2^pa$ operations, where $a$ is the number of steps in your //do work block. Then you halve $m$ - the amount of work here is nontrivial, let's call it $d$. It will almost certainly turn out to be insignificant, but...

Now you iterate through $k=1...m/2$, doing $am/2 = 2^{p-1}a$ operations, and again halve m, taking $d$ operations. In the next pass you do $am/4 + d$ operations, in the pass after you do $am/8 + d$, until in the final pass you do $a$ operations. You went through the while loop $p$ times, and the total amount of work you do is

\begin{align} pd + a(1 + 2 + \cdots + 2^p) & = pd + (2^{p+1}-1)a \\ & = d\log_2 n + (2n - 1)a \end{align}

If $a>d$ (likely in applications) and $a$ is independent of the loop value $k$ (may or may not be true) then the running time is linear in $n$.

- Assume the work done within the second loop is constant, and the work done when m is halved is also constant. Would your algorithm still look the same as it does now? What I am trying to do is understand exactly why you chose to analyze it in the way you have above.
–  user10695 Jul 14 '11 at 1:54 Yes, my answer would be the same (the line "if $a$ is independent of the loop value $k$" was where I made the assumption that the amount of work done in the second loop was constant). Rereading, although my answer is correct, I think my presentation is slightly unclear - you could learn more from ShreevatsaR's answer. –  Chris Taylor Jul 14 '11 at 7:49 Definitely trying to learn from it all but I am still about 20% on doing this but would like to master it. –  user10695 Jul 15 '11 at 21:45
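A quick empirical check of the "at most 2n" bound discussed above (this snippet is not part of the original question or answers; it simply counts how many times the constant //dowork body executes):

    #include <cstdio>

    // Count the constant-work iterations performed by the halving loop.
    long long work(long long n) {
        long long ops = 0;
        for (long long m = n; m > 0; m /= 2)  // m = floor(m/2) each pass
            ops += m;                          // inner loop runs m times
        return ops;
    }

    int main() {
        const long long ns[] = {10, 1000, 1000000};
        for (long long n : ns)
            printf("n = %7lld: %8lld ops (%.3f * n)\n",
                   n, work(n), (double)work(n) / n);
        return 0;
    }

For every n the ratio stays below 2, matching the geometric-series argument in the accepted answer.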
2014-09-16 01:11:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8828944563865662, "perplexity": 606.4655455024579}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657110730.89/warc/CC-MAIN-20140914011150-00265-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://www.skepticalcommunity.com/viewtopic.php?f=5&t=47772&start=140
## Wind Turbines We are the Borg. Witness Posts: 28501 Joined: Thu Sep 19, 2013 5:50 pm ### Re: Wind Turbines Germany hits record 61 per cent renewables for month of February [Including irrelevant but nicely ominous picture] Renewable energy sources provided a record 61.2% of Germany’s net public electricity generation in February, according to figures provided by the Fraunhofer Institute for Solar Energy Systems (ISE), which also showed that wind energy provided nearly half of the country’s electricity during the month. Fraunhofer ISE provides up to date tracking of Germany’s power sector through its Energy Charts website, and keen-eyed Twitter users highlighted record renewable figures with February now in the bag. Of the total 45.12TWh generated by Germany’s power sector, 27.63TWh, or 61.2%, was generated from renewable electricity sources. According to at least one expert, this was a new monthly record for renewable electricity generation, smashing the previous record of 54% set in March of 2019. And while Germany has experienced higher shares of renewable electricity generation, these have been on a daily or weekly basis – such as in March of 2019 when the share of renewables in the country’s energy mix jumped to 72.4% – rather than this more impressive monthly record. Throughout the month, Germany’s renewable energy sector regularly provided around 60% or above of the country’s electricity production – including over a dozen days around or above 70%. Germany’s fleet of wind turbines generated a record 20.80TWh, or 45.8%, of the country’s electricity – similarly smashing the previous record of 34.7% set, again, in March of 2019. Unsurprisingly, then, wind electricity generation regularly provided around or above 60% of the country’s electricity generation. Second in terms of contribution to Germany’s renewables power sector was biomass, which provided 3.74TWh, or 8.3% of total electricity generation, followed by solar with 1.86TWh, or 4.2%. Natural gas provided 10.2% of February’s total, while nuclear provided 11.5%. Coal provided only 17% of the country’s power in February. https://reneweconomy.com.au/germany-hit ... ary-99434/ Witness Posts: 28501 Joined: Thu Sep 19, 2013 5:50 pm ### Re: Wind Turbines Government set to reverse Cameron-era ban on onshore wind farm subsidies Officials have told climate campaigners that onshore wind farms would soon be able to bid for subsidies from the Government. The U-turn comes as ministers face increasing pressure to set out how the UK will hit its target of net-zero carbon emissions by 2050. Land-based wind turbines have long been unpopular with grassroots Conservatives, leading the then-Prime Minister Mr Cameron to say he wanted to “rid” the countryside of the “unsightly” structures in 2015. Onshore wind was officially blocked from bidding for financial support available to other forms of renewable energy in 2016, leading to a 94% decline in the number of new projects up to 2019. Environmental groups have consistently protested against the ban, arguing that onshore wind is the cheapest new form of electric energy and has widespread public support. https://www.politicshome.com/news/uk/en ... -subsidies Have the Tories turned Green? 
Witness Posts: 28501 Joined: Thu Sep 19, 2013 5:50 pm

### Re: Wind Turbines

US renewables groups hail landmark clean energy bill in Virginia

Legislation lays ground for 5.2GW offshore wind ambition and big deployments of solar and storage

Virginia's General Assembly passed landmark clean energy legislation that doubles its offshore wind goal to 5.2GW and clears the way for big deployments of solar and storage, in a move hailed by US renewable energy groups as transformational. The legislation creates a mandate requiring that 30% or more of electricity comes from renewable sources by 2030 and sets a target of 100% zero-emissions by 2050. The bill creates a pathway for Virginia to steadily reduce its dependence on fossil fuels, partly with incentives for 16.1GW of solar PV and 2.7GW of energy storage, as well as the offshore wind goal that's behind only New York and New Jersey among US states. Tom Kiernan, CEO of the American Wind Energy Association (AWEA) called it "pro-business, forward thinking and comprehensive", adding it will foster economic development across Virginia. https://www.rechargenews.com/transition ... 2-1-769205

Witness Posts: 28501 Joined: Thu Sep 19, 2013 5:50 pm

### Re: Wind Turbines

Wind provided record 40.2% of Oklahoma's statewide electricity generation in 2019

Oklahoma's use of wind energy to generate electricity continues to increase. A record 40.2% of all the state's generated energy in 2019 was powered by renewable technology, Oklahoma Power Alliance representatives announced Tuesday during a Clean Energy Day at the state Capitol. In 2018, Oklahoma's wind farms generated about 36% of the energy created inside the state, up from 33% the previous year. "This data tells a strong story" about Oklahoma's continued leadership in renewable energy deployment, Mark Yates, vice president of the Advanced Power Alliance and its policy director in Oklahoma, said Tuesday. He noted wind's use to generate electricity in Oklahoma during the year was surpassed only by natural gas, which generated another 46.3%. Alliance data showed Oklahoma ranked second among U.S. states for 2019 for the amount of energy its wind farms generated, and third for the amount of wind capacity installed. The alliance estimates more than $20 billion has been invested in renewable projects within the state. It also issued data showing the industry's completed wind projects are ranked as a top-three taxpayer in 19 Oklahoma counties and 65 Oklahoma school districts. Projects' owners made about $51 million in land lease payments to farmers and ranchers throughout 26 of Oklahoma's counties in 2019.
"These investments continue to transform Oklahoma's rural economies by offering new career opportunities, circulating new income, creating sales tax revenue, and providing valuable ad valorem," he said. https://ieefa.org/wind-provided-record- ... n-in-2019/

Witness Posts: 28501 Joined: Thu Sep 19, 2013 5:50 pm

### Re: Wind Turbines

We are doomed!

Abdul Alhazred Posts: 83937 Joined: Mon Jun 07, 2004 1:33 pm Title: Yes, that one. Location: Chicago

### Re: Wind Turbines

My first guess was something to do with frame rates.

Witness Posts: 28501 Joined: Thu Sep 19, 2013 5:50 pm

### Re: Wind Turbines

World's wind power capacity up by fifth after record year

Offshore windfarms and onshore projects in US and China fuel one of strongest years on record

The world's wind power capacity grew by almost a fifth in 2019 after a year of record growth for offshore windfarms and a boom in onshore projects in the US and China. The Global Wind Energy Council found that wind power capacity grew by 60.4 gigawatts, or 19%, compared with 2018, in one of the strongest years on record for the global wind power industry. The growth was powered by a record year for offshore wind, which grew by 6.1GW to make up a tenth of new windfarm installations for the first time. The council's annual report found that the US and China remain the world's largest markets for onshore wind power development. Together the two countries make up almost two-thirds of global growth in wind power. https://www.theguardian.com/environment ... ecord-year

gnome Posts: 24136 Joined: Tue Jun 29, 2004 12:40 am Location: New Port Richey, FL

### Re: Wind Turbines

Witness wrote: Fri Mar 13, 2020 12:58 am Wind provided record 40.2% of Oklahoma's statewide electricity generation in 2019 […]

And how many cases of Windmill Cancer?
"If fighting is sure to result in victory, then you must fight! Sun Tzu said that, and I'd say he knows a little bit more about fighting than you do, pal, because he invented it, and then he perfected it so that no living man could best him in the ring of honor. Then, he used his fight money to buy two of every animal on earth, and then he herded them onto a boat, and then he beat the crap out of every single one. And from that day forward any time a bunch of animals are together in one place it's called a zoo! (Beat) Unless it's a farm!" --Soldier, TF2 Witness Posts: 28501 Joined: Thu Sep 19, 2013 5:50 pm ### Re: Wind Turbines gnome wrote: Fri Mar 27, 2020 11:00 am And how many cases of Windmill Cancer? You can get your very own private cancer: 13 Best Home Wind Turbines 2020: Generate Electricity at Home. Witness Posts: 28501 Joined: Thu Sep 19, 2013 5:50 pm ### Re: Wind Turbines Offshore wind 'could hit 200GW by 2030' Energy Industries Council report raises concerns over supply chain's ability to keep pace with demand Offshore wind capacity could grow to as much as 200GW of operational capacity by 2030, according to a new report by the Energy Industries Council (EIC). EIC said in the 'Global Offshore Wind 2020' report that an increased awareness of the risks and effects of climate change is likely to lead to a greater focus on decarbonisation efforts in the supply chain and means of component production. Forecasts on the operational capacity by 2030 range from 164GW to 200GW, it added. Other drivers of growth could be cross-sector and sector coupling, particularly around decarbonisation of offshore oil and gas platforms and the production of ‘green’ hydrogen via electrolysis using offshore wind, the report said. The report provides an overview of the latest trends, technologies and processes across the global offshore wind sector, in addition to an in-depth look at projects and developments. It also warned that as more projects are added to the pipeline, concerns have been raised on the supply chain’s ability to meet global demand, particularly for vessels, skilled labour and fabrication shipyards. https://renews.biz/59270/offshore-wind- ... w-by-2030/ Surprise Posts: 705 Joined: Tue Aug 14, 2018 10:33 am Title: Influencer ### Re: Wind Turbines "Nationalism is an infantile disease. It is the measles of mankind." Witness Posts: 28501 Joined: Thu Sep 19, 2013 5:50 pm ### Re: Wind Turbines Wind Power Continues Its March Into The Energy Mainstream Wind energy is becoming an increasingly important part of the energy mix around the world, as costs continue to fall and technology improves Global economic activity is on hold at the moment, but 2019 was a boom year for wind energy, with more than 60GW of capacity installed around the world. New figures from the Global Wind Energy Council’s (GWEC) Global Wind Energy Report show that installations were 19% higher than the year before and the second-highest ever. Total capacity is now 651GW. However, the market needs to grow even more if we are to meet our climate targets, the organization says. China and the US dominate the global market for onshore wind projects, accounting for 60% of sales between them, while the offshore market is now 10% of the overall market, with 6.1GW installed in 2019. 2020 was expected to be a record year for the industry, with 76GW of new capacity forecast to come on line, but that figure is unlikely to be reached as a result of the Covid-19 pandemic. 
GWEC says that it will revise its 2020-2024 forecast in the light of the potential impacts of COVID-19 on the global economy and energy markets, and will publish an updated market outlook in Q2 2020. The key driver for the sector's growth was the growth in the use of auctions to procure capacity, which has helped to drive down costs around the world. More than 40GW, or two thirds of new capacity, was procured through auctions, double the figure for 2018. Most installations were in established markets, with just five countries (China, the US, the UK, India and Spain) accounting for 70% of new capacity. https://www.forbes.com/sites/mikescott/ ... ainstream/

Witness Posts: 28501 Joined: Thu Sep 19, 2013 5:50 pm

### Re: Wind Turbines

Total Becomes Latest Oil Major to Enter Floating Wind Market

Building on its solar momentum, Total this week invested in a U.K. floating wind project and acquired a French wind developer.

France's Total announced two significant wind deals in recent days, becoming the latest oil company to push into floating offshore wind as it builds on its existing momentum in the solar market. Total this week bought an 80 percent share of the 96-megawatt Erebus floating wind project in the Celtic Sea from developer Simply Blue Energy. Then on Friday, the company confirmed its Total Quadran subsidiary had acquired developer Global Wind Power (GWP) France from its Danish parent, adding a 1-gigawatt portfolio of wind projects in France. Fellow European oil majors Shell and BP have built up gigawatt-scale renewables portfolios, but the pace of Total's recent activity could see it outshine them both. Total has 3 gigawatts of renewables within its 7-gigawatt portfolio of low-carbon projects. By 2040, the company aims to derive 15 to 20 percent of its revenue from its low-carbon business. Earlier this year, Total bought a 50 percent share of Adani's 2.1-gigawatt portfolio of operational solar assets in India for $510 million. In February, it signed two solar deals in Spain, including an outright acquisition of developer Solarbay's 1.2-gigawatt development pipeline. And Total was part of the winning partnership in Qatar's most recent solar tender for an 800-megawatt project. https://www.greentechmedia.com/articles ... ower-deals

Witness Posts: 28501 Joined: Thu Sep 19, 2013 5:50 pm

### Re: Wind Turbines

Oil Companies Are Collapsing, but Wind and Solar Energy Keep Growing

The renewable-energy business is expected to keep growing, though more slowly, in contrast to fossil fuel companies, which have been hammered by low oil and gas prices.

A few years ago, the kind of double-digit drop in oil and gas prices the world is experiencing now because of the coronavirus pandemic might have increased the use of fossil fuels and hurt renewable energy sources like wind and solar farms. That is not happening. In fact, renewable energy sources are set to account for nearly 21 percent of the electricity the United States uses for the first time this year, up from about 18 percent last year and 10 percent in 2010, according to one forecast published last week. And while work on some solar and wind projects has been delayed by the outbreak, industry executives and analysts expect the renewable business to continue growing in 2020 and next year even as oil, gas and coal companies struggle financially or seek bankruptcy protection. In many parts of the world, including California and Texas, wind turbines and solar panels now produce electricity more cheaply than natural gas and coal.
That has made them attractive to electric utilities and investors alike. It also helps that while oil prices have been more than halved since the pandemic forced most state governments to order people to stay home, natural gas and coal prices have not dropped nearly as much. Even the decline in electricity use in recent weeks as businesses halted operations could help renewables, according to analysts at Raymond James & Associates. That's because utilities, as revenue suffers, will try to get more electricity from wind and solar farms, which cost little to operate, and less from power plants fueled by fossil fuels. https://www.nytimes.com/2020/04/07/busi ... nergy.html

Abdul Alhazred Posts: 83937 Joined: Mon Jun 07, 2004 1:33 pm Title: Yes, that one. Location: Chicago

### Re: Wind Turbines

Of course there are industrial uses for petroleum other than fuel, but they will have to retrench somewhat.

Rob Lister Posts: 22732 Joined: Sun Jul 18, 2004 7:15 pm Title: Incipient toppler Location: Swimming in Lake Ed

### Re: Wind Turbines

Abdul Alhazred wrote: Tue Apr 14, 2020 1:03 am Of course there are industrial uses for petroleum other than fuel, but they will have to retrench somewhat.

The use of "petroleum" has no place in a thread about windmills. There are no petroleum power plants of note other than ones used for emergency. Refineries use it to refine, but that's because from that perspective, it's practically free.

Abdul Alhazred Posts: 83937 Joined: Mon Jun 07, 2004 1:33 pm Title: Yes, that one. Location: Chicago

### Re: Wind Turbines

Same goes for coal.

gnome Posts: 24136 Joined: Tue Jun 29, 2004 12:40 am Location: New Port Richey, FL

### Re: Wind Turbines

How's that? I understand coal is PRIMARILY used for power generation.

Abdul Alhazred Posts: 83937 Joined: Mon Jun 07, 2004 1:33 pm Title: Yes, that one. Location: Chicago

### Re: Wind Turbines

gnome wrote: Tue Apr 14, 2020 1:38 pm How's that? I understand coal is PRIMARILY used for power generation.

Primarily, yes. But it's also very important in steel production.
2020-07-10 16:43:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23310346901416779, "perplexity": 7329.575343812209}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655911092.63/warc/CC-MAIN-20200710144305-20200710174305-00429.warc.gz"}
https://www.gamedev.net/forums/topic/487466-another-cdirectx-match-3-puzzle-question/
# Another c++/directx match 3 puzzle question

## Recommended Posts

You guys are probably sick of me by now, but I'm not through with you, not by a long shot! :) Ok, I'm going to get laughed at, I know, but please, if all you can think of to answer this is to mock or say "woa, what a noob, I already know that", just don't post. Thanks. The question is really about optimization, and since I like to program the engine code, puzzle game logic is all new to me. Here we go.

I load the sprites like this:

    sprite1_pic=LoadTexture("Resources/whitesprite.bmp",D3DCOLOR_XRGB(255,255,255));
    sprite2_pic=LoadTexture("Resources/bluesprite.bmp",D3DCOLOR_XRGB(255,255,255));
    // etc.

I declare their properties like this:

    sprite_1.x=150;
    sprite_1.y=120;
    sprite_1.width=100;
    sprite_1.height=100;
    // etc.

I go to the render function and do this:

    std::vector<LPDIRECT3DTEXTURE9> sprite1;
    sprite1.push_back(sprite5_pic);
    sprite1.push_back(sprite2_pic);
    sprite1.push_back(sprite3_pic);
    sprite1.push_back(sprite4_pic);
    sprite1.push_back(sprite1_pic);

    std::vector<D3DXVECTOR3> spritepos1;
    spritepos1.push_back(pos1);
    spritepos1.push_back(pos2);
    spritepos1.push_back(pos3);
    spritepos1.push_back(pos4);
    spritepos1.push_back(pos5);

then draw like this:

    for(int i=0;i<sprite1.size();i++)
    {
        Sprite_Handler->Draw(
            sprite1[i],
            NULL,
            NULL,
            &spritepos1[i],
            D3DCOLOR_XRGB(255,255,255));
    }

Now, all this is fine if this was the WHOLE GAME, but obviously, it's not. First of all, this code is only to draw ONE ROW of sprites, and there needs to be 5 rows on the board at a time, so I would need to declare 5 sprite vectors, 5 d3dxvector vectors, and a whopping 50 pos's!!! I am very willing to do this, but I am afraid of the consequences to the framerate. I am doing it this way because, in my noob head, one of the only ways I can think of to check for triple matches is with sprite based collision, and using sprite based collision, each sprite has to have its own defined position. So, for the 5 blue sprites scattered around the board, I will have to say something like:

    if(sidecollision(b_sprite1,b_sprite2) &&
       sidecollision(b_sprite2,b_sprite3))
    {
        // a match has been made
    }

...FOR ALL THE POSSIBLE COMBOS OF ALL 5 COLORS!!! Again, I am willing to do this, but I am afraid this will totally kill the performance. So.. Am I right, am I wrong, what do you think, what should I do?

##### Share on other sites

woa, what a noob! :) just kidding. Your approach is very linear. I would suggest taking a step back and trying to make it more object oriented. For instance, you have a list of DX9 textures and a list of positions (which seem to correlate one-to-one to each other). So, I would then create a Sprite object:

    class SpriteObj
    {
    public:
        LPDIRECT3DTEXTURE9 Texture;
        D3DXVECTOR3 Position;
    };

    // So you can now just have one list:
    std::vector<SpriteObj> spriteList;

I am guessing you have a 5x5 board (25 sprites). So when you render your sprites, you will only draw 25 per frame. I don't quite get your side collision, but you should be able to loop through your objects and test them:

    // starting one in and ending one less so we can test i-1, i, i+1
    for(int i = 1; i < spriteList.size()-1; i++)
    {
        if (CheckCollision(spriteList[i-1], spriteList[i], spriteList[i+1]))
        {
            // you collided and have spriteList[i] in the middle.. DO SOMETHING!
        }
    }

Hope this helps :)

Jeff.

##### Share on other sites

Thanks very much Jeff!
But since I am new to this particular kind of logic, I am afraid I will have to ask you to expound a bit more. For instance, is your CheckCollision function a user-defined function, or is it from the STL or similar? Also, in your suggested std::vector<SpriteObj> spriteList, how do I assign a specific texture and position to each sprite in the list? You will have to be very patient with me, as my linear thinking is outweighing my better reason! Sample code helps too. Thanks!

##### Share on other sites

Not a problem. First, my CheckCollision function is a user-defined function that would probably encapsulate your sidecollision function. Assuming you already have sidecollision, you could do something like this:

    void CheckCollision(SpriteObj* left, SpriteObj* mid, SpriteObj* right)
    {
        // validate pointers
        assert(left);
        assert(mid);
        assert(right);

        // your code you pasted
        if(sidecollision(left->Position, mid->Position) &&
           sidecollision(mid->Position, right->Position))
        {
            // a match has been made
        }
    }

Get it? Again, I'm not sure what your sidecollision does (I am guessing it checks a collision against the two x,y,w,h positions).

Quote: Also, in your suggested std::vector spriteList, how do I assign a specific texture and position to each sprite in the list?

My SpriteObj is pretty primitive. Meaning, you may want to put accessor functions and other functions for position, etc. If you used my example, you could just do something like:

    // not listing all the parameters
    D3DXCreateTextureFromFileEx(..., &spriteList[0].Texture);
    D3DXCreateTextureFromFileEx(..., &spriteList[1].Texture);
    // etc.

Jeff.

##### Share on other sites

Ok, I kind of get what you're saying, but here are the problems: First of all, you're assuming I'm much smarter than I am. :) My side collision creates a RECT that passes through the center of the sprite and extends on both sides, see?

    int SideCollision(SPRITE sprite1, SPRITE sprite2)
    {
        RECT rect1;
        rect1.left   = sprite1.x + 10;
        rect1.top    = sprite1.y + 5;
        rect1.right  = sprite1.x + sprite1.width + 15;
        rect1.bottom = sprite1.y + sprite1.height - 20;

        RECT rect2;
        rect2.left   = sprite2.x + 10;
        rect2.top    = sprite2.y + 5;
        rect2.right  = sprite2.x + sprite2.width + 15;
        rect2.bottom = sprite2.y + sprite2.height - 20;

        RECT dest;
        return IntersectRect(&dest, &rect1, &rect2);
    }

The problem with this, of course, is that it only works with 2 sprites. By the way, my sprite struct:

    struct SPRITE
    {
        int x, y;
        int width, height;
        int movex, movey;
        int curframe, lastframe;
        int animdelay, animcount;
        int scalex, scaley;
        int rotation, rotaterate;
    };

Again, a very primitive way of doing things. Also, as you have probably figured out by now, my framework is not object-oriented. It was my first engine, and I slowly added to it, mainly for fun. I would use an object-oriented framework, but unfortunately I'm on a deadline, which is why I was using this one, since it is at least functional. Please instruct me on how to do the 3-way collision with pointers, as your CheckCollision function seems to be suggesting, or just help me figure out where to go from here, as now I'm feeling really lost! Thanks Again!

P.S. Also, doing it the way I was doing in my first post, would there be a significant framerate collapse, or is it just taking the long, linear route, which is not necessarily bad for beginners?

##### Share on other sites

After looking at it again, I am not quite so lost. Since my framework isn't object oriented, I could just make the SpriteObj class into a struct, right?
As for the collision, I am still in the dark about how to detect matches of 3 same colors, both horizontally and vertically. Perhaps you could still help me out there? Thanks again, again! -Brandon

##### Share on other sites

look into using graphs
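A common way to answer the detection question above is to test the board as a grid rather than as colliding sprites: keep the colors in a 2D array and scan for runs of three. A minimal sketch (the board contents and color codes here are invented for illustration):

    #include <cstdio>

    // Board dimensions and contents are made up for this example.
    const int W = 5, H = 5;
    int board[H][W] = {
        {1,2,2,2,3},
        {1,4,1,3,3},
        {5,1,1,1,2},
        {4,5,2,3,2},
        {4,3,2,1,2}
    };

    // True if a run of 3 equal colors starts at (x,y) going right or down.
    bool matchAt(int x, int y)
    {
        int c = board[y][x];
        if (x + 2 < W && board[y][x+1] == c && board[y][x+2] == c) return true;
        if (y + 2 < H && board[y+1][x] == c && board[y+2][x] == c) return true;
        return false;
    }

    int main()
    {
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                if (matchAt(x, y))
                    printf("match of color %d at (%d,%d)\n", board[y][x], x, y);
        return 0;
    }

This replaces the all-pairs rectangle tests with a constant number of neighbor comparisons per cell, so the cost stays at O(rows x cols) no matter how many colors there are.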
2018-07-20 09:37:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2500089406967163, "perplexity": 2309.8782513493784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591575.49/warc/CC-MAIN-20180720080634-20180720100634-00416.warc.gz"}
https://www.gamedev.net/forums/topic/704598-new-3d-model-format-and-single-header-sdk/?page=1
# New 3D model format and single header SDK

## Recommended Posts

Dear All,

I'd like to introduce you to a new model format. It is simple, well-defined and well-documented, and has the best data density of all formats. Suzanne with normals, UVs and materials requires less than 12K using this format. I'm not good with names, so it's simply called Model 3D and has the extension ".m3d". I'd like to get some feedback and hear your opinion about it.

Provided implementations

For compatibility, it is implemented with Assimp. Unfortunately, according to my findings Assimp fails to live up to its promise "Loads 40+ 3D file formats into one unified and clean data structure.", as its data structure is neither unified nor clean. But I managed it, so here it is: libassimp can load M3D files with vertex colors, materials, textures, and skeletal animations.

A new model format is no good if you can't use it with your favorite modeler software, so I have provided a Blender plugin to export objects and their animations into M3D. The exporter is fully functional and supports all features of the M3D format, however the importer is WIP. You can export vertex lists with colors, normals, UVs, PBR materials (PrincipledBSDF), textures, armature, and several model actions too, using the Blender Animation's timeline markers feature.

And finally, its native SDK is an stb-style, totally dependency-free, single header file ANSI C89/C++11 SDK. It can load and save files, and supports the binary M3D format (and, if included with the M3D_ASCII define, its ASCII counterpart, A3D, too). It is MIT licensed, written in ANSI C, and with the provided C++ wrapper class it should be straightforward to integrate into any new or existing projects. It compiles to about 70K, which includes the built-in PNG texture decompressor.

Features of the format

A few words on why it is worth using this format:

• it is Open Source and free. Not only its SDK, but the format itself is MIT licensed and expandable.
• it can store big models in very small files. Blender's cube can be saved in about 100 bytes. A typical Blitz3D animated model (.b3d) converted to .m3d saves 70%-80% storage space.
• exactly one model per file, but with all properties: materials, skeleton, textures, skeletal animations. No complex scene parsing involved. (Lighting and camera setups are NOT properties of the model!)
• but if needed, engine-specific data can be embedded in a way that won't break the model's compatibility with other software.
• the same model can be read by different software at different levels. The same file is compatible with more applications, for example: one can only load the static mesh from it, while another can also utilize the bone information and the animations.
• it has a human readable ASCII variant, which can be easily loaded with a slightly modified Wavefront OBJ loader (though the SDK can handle it for you).
• its SDK is simple, fast and painless to use (following K.I.S.S., it has only 5 functions altogether).
• its in-memory format is efficient and fast (it uses indices, in contrast to recursive node parsing with string matching), and well-specified (for example it makes it clear that a right-handed coordinate system is used with the coordinates, and how the coordinate system is oriented. No surprises when you load a model from M3D).

I've provided a command line utility (m3dconv) which loads any Assimp-supported format and converts it into Model 3D.
A simple, portable GL-based viewer (m3dview) is also included in the repository to demonstrate how to display the animated models once you have loaded them (the whole thing is less than 500 SLoC :-) ). You can compile the latter with GLFW, GLUT or SDL2 (the interface is autodetected, just run "make").

Cheers,
bzt

##### Share on other sites

13 hours ago, bzt said: A new model format is no good if you can't use it with your favorite modeler software, so I have provided a Blender plugin

I wouldn't call Blender a favorite, but if you really want to push your stuff, Maya, Houdini, 3D Max and ZBrush are must-haves for you, because those are the industry-leading ones for games.

13 hours ago, bzt said: it has a human readable ASCII variant

How fast is parsing performed? A high compression ratio most of the time leads to slow processing performance, so I'm curious if it can be as fast as FBX or OBJ (as you mentioned).

13 hours ago, bzt said: its SDK is simple, fast and painless to use

How complex is the code for processing the file, and what about flexibility? Do I have to integrate Assimp into our engine just to use your file format, or do you also provide code that can easily be grabbed and placed in our source without the need to make huge adaptations for it to work?

##### Share on other sites

Hi,

5 hours ago, Shaarigan said: I wouldn't call Blender a favorite, but if you really want to push your stuff, Maya, Houdini, 3D Max and ZBrush are must-haves for you, because those are the industry-leading ones for games.

Well, are they Open Source? Until then you can use the m3dconv utility to convert their formats into M3D in your build system automatically.

5 hours ago, Shaarigan said: How fast is parsing performed? A high compression ratio most of the time leads to slow processing performance, so I'm curious if it can be as fast as FBX or OBJ (as you mentioned).

Actually a LOT faster than any FBX or OBJ loaders out there. However, the ASCII variant A3D is for debugging purposes. For your game, you should use the binary variant, M3D, which loads as fast as possible. There are no unnecessary string conversions, everything is read right from binary, and all chunk sizes are known in advance (so it allocates memory as few times as possible). Profiling showed that the slowest part is performing the post-processing phase, calculating the transformation matrices for the bones, but a) that's still pretty fast, b) you can avoid that if you include the SDK with the M3D_NOANIMATION define. That way only the file format decoder will run. For small file sizes, the binary format is compressed. To further speed up your game loading at the price of bigger storage, you can avoid that compression (see m3dconv), and store only the binary chunks (which are still pretty small compared to OBJ, FBX, glTF or any other XML or JSON based formats.) The m3d_load() autodetects if the binary is compressed or not, and what's more, if it's not, then it does not allocate extra memory and uses pointers directly into the uncompressed image.

5 hours ago, Shaarigan said: How complex is the code for processing the file, and what about flexibility?

Both questions are answered in the project's main README.md file. You can load a colored static mesh in 80 SLoC if you don't want to use the SDK (which you should, it's a single header file with only one function for loading). The format is also flexible, and supports engine-specific chunks (you have to parse those on your own of course).
5 hours ago, Shaarigan said: Do I have to integrate Assimp into our engine

Most definitely NOT. You only have to include an stb-style single header library if you don't want to parse the files on your own. The Assimp support is only there in case you have already integrated Assimp, but that's all.

5 hours ago, Shaarigan said: do you also provide code that can easily be grabbed and placed in our source without the need to make huge adaptations for it to work?

I'm starting to have a feeling that you haven't read my post, nor have you checked the repo. Of course I provide code:

• an example of how to parse the files without the SDK (80 SLoC)
• the tests directory contains examples
• m3dconv also uses the API and provides an example
• m3dview was written to showcase how to use the API and display an animated model (500 SLoC)
• plus there's the API usage manual, full of examples.

Cheers, bzt

##### Share on other sites

A few numbers to satisfy your curiosity: The WusonBlitz.b3d file (provided in the Assimp repo test/models/B3D) is a binary format, 87k. Converted into M3D it's only 29k.

Loading the same file using Assimp: 581,017 bytes in 4,088 blocks
M3D SDK (m3d.h): 229,534 bytes in 4 blocks

As you can see, Assimp requires more than twice the RAM, and it uses more than 4 thousand blocks. That means at least more than 4 thousand memory allocation calls. The M3D SDK on the other hand not only loads the same model into less RAM, it also calls memory allocation only 4 times. Also the native SDK's memory is less fragmented, meaning less memory wasted. (FYI: fewer memory allocation calls => faster loading time.) Also the M3D SDK stores the model in a more compact way and provides faster access (indices instead of recursive node-walking with string comparisons). The native in-memory format resembles VBOs and EBOs, so you can display them with fewer conversions (actually a non-animated mesh can be directly fed into a VBO/EBO). (FYI: simpler structure with direct indices instead of look-up loops => faster rendering time.) All the differences between Assimp's and M3D's philosophy are explained in the repo, assimp/README.md, and the M3D SDK API usage manual is here with both C and C++ examples.

Cheers, bzt

##### Share on other sites

17 hours ago, bzt said: Well, are they Open Source?

Definitely not, but they provide plugins and are the industry standard. If I have to parse their formats into something you provide, I can on the same track parse them directly into something our engine supports natively, so I don't see any benefits of your file format here, especially in a real production case. There might be benefits for indie games that really have to watch the space they have for assets on an online share, for example, but those people won't use a file format not supported by their major engines Unity and Unreal, so I'm curious what your statement is 🙂

17 hours ago, bzt said: Actually a LOT faster than any FBX or OBJ loaders out there

Did you see / test against this very popular implementation on GitHub? I implemented similar code in our engine which makes use of our streaming API and came up with roughly 800 ms for the file the person provides on GitHub, on a Win7 machine, Intel i7 hexa-core, x64 OS.
Would be curious if you can provide some benchmarks.

17 hours ago, bzt said: if you don't want to use the SDK (which you should, it's a single header file with only one function for loading)

As you make use of the STL, especially in the model wrapper, this is against our engine policy, so no, I would prefer not to use the SDK; the same for the sprintf's I saw in your code. This makes it difficult to integrate it in our existing systems. So if I decided to integrate your file format in our workflow, I would need to rewrite it from scratch to play well with our existing systems and API. This should not devalue your work (I feel I have to write this into each of my posts when providing a personal opinion today), just a note from my day-to-day industry perspective 🤐

18 hours ago, bzt said: I'm starting to have a feeling that you haven't read my post, nor have you checked the repo

I have carefully read the post because it might probably be of interest, but you wrote about Assimp and the SDK. I hadn't checked the repo and still haven't browsed through it entirely. If you post it in a forum, in my opinion you have to work with questions 🤷‍♂️

##### Share on other sites

5 hours ago, Shaarigan said: Definitely not, but they provide plugins and are the industry standard. If I have to parse their formats into something you provide, I can on the same track parse them directly into something our engine supports

First, you don't have to parse their formats into something I provide, because there's already a tool to do that for you. Second, you don't have to parse anything into your engine, you just include one header file and there you go. Are you familiar with stb-style headers? This is what my m3d.h provides for C and C++.

5 hours ago, Shaarigan said: Did you see / test against this very popular implementation on GitHub? I implemented similar code in our engine which makes use of our streaming API and came up with roughly 800 ms

Yes, of course. 800ms, that's laughable 🙂 I also got a similar value repeating the test 32 times. Here's a comparison for you.

| Model | Format | File Size | Library | Required Time |
|---|---|---|---|---|
| WusonOBJ.obj | text OBJ | 258k | tinyobjloader | real 0m0.748s |
| WusonOBJ.obj | text OBJ | 258k | libassimp | real 0m2.716s |
| WusonBlitz.b3d | binary Blitz | 87k | libassimp | real 0m0.414s |
| WusonBlitz.m3d | uncompressed M3D | 33k | libassimp | real 0m6.906s |
| WusonBlitz.a3d | ASCII Model 3D | 141k | M3D SDK | real 0m0.258s |
| WusonBlitz.m3d | uncompressed M3D | 33k | M3D SDK | real 0m0.073s |
| WusonCompr.m3d | deflated M3D | 42k | M3D SDK | real 0m0.162s |

As you can see my SDK is 10 times faster than your beloved tinyobjloader. Even with deflated chunks it is still 4 times faster. But let's be fair and compare only text formats, so tinyobjloader vs A3D: my SDK is still 2.5 times faster (but this is irrelevant because you should not use the debug format in your engine). Feel free to repeat the test on your machine. Believe me, if I say my implementation is a LOT faster, then I know what I'm talking about; I've done my homework and I have performed my tests 😉 I kindly ask you to read the README.md, and comment only when you have a clue what my repository is about. Thank you.

Cheers, bzt

5 hours ago, Shaarigan said: As you make use of the STL, especially in the model wrapper

Again, NO, I do not use STL. You would know that had you read the API documentation. You can configure the SDK not to use zlib inflate at all, but then you won't have textures either, because that's needed for PNGs too. (You'll have to use your own image decoder in this case.)
5 hours ago, Shaarigan said: I have carefully read the post

Then why are you asking questions like "do you provide code examples"? Just askin'. Cheers, bzt

##### Share on other sites

Oh, and two more things:

5 hours ago, Shaarigan said: the same for the sprintf's I saw in your code

Those are only needed for the A3D writer, so only IF you specify M3D_EXPORTER and IF you specify M3D_ASCII. By default only the binary M3D importer gets included. Why on earth would you want a model dumper in your engine?

5 hours ago, Shaarigan said: So if I would decide to integrate your file format in our workflow, I would need to rewrite it from scratch to play well with our existing systems and API.

If ANSI C89 and C++11 standards WITHOUT ANY DEPENDENCY are a problem for your existing systems, then the problem is in your existing systems. No offense, but that's the truth. If you want to rewrite it from scratch anyway, feel free to do so; go ahead and check out the repository's main-page README.md, which has an 80 SLoC example of how to read a colored static mesh. The file format is well defined and well documented in every little detail, down to the last bit; see m3d_format.md.

Cheers, bzt

##### Share on other sites

Hi,

As I said, the code does not use the STL; std::string and std::vector are merely containers, only initialized in "return" statements (you won't find any push_back calls in my code). But to ease your mind, I have made the M3D::Model wrapper class optional; it is now only included if M3D_CPPWRAPPER is defined. Without it you're free to implement your own wrapper, but there's still no need to start from scratch: just call the C API m3d_load() and convert the struct arrays to whatever C++ objects you want.

About sprintf: I see no reason why you would ever want to export a model in ASCII format from an engine, or why you would ever want to use the ASCII format at all other than for debugging. But let's say for a moment that you need it. Could you elaborate on what your issue with the use of sprintf is? If you are concerned about buffer overflows, check the code again: all buffers are accurately measured and allocated before any sprintf use. If I made a mistake in that, I'd like to know so that I can fix it.

Cheers, bzt

##### Share on other sites

If you want your file format to be adopted by someone, you have to give them a good reason to take the effort to support it. First of all, you have to think about when it is used in the development cycle:

• during content creation
• between creation and integration
• in the built game

Not every engine/game/developer differentiates strictly between these steps, e.g. because the artists store and export using the same file format, or because the engine/build system doesn't process the assets. In general, during content creation the file format has to support everything the creation tool supports, without altering the data too much. (If you create models by placing primitives, the format has to support those primitives.) After creation, the format should still support a lot of features (e.g. include not only the model, but also material information, a rig, animations, ...). The format used in the built game, on the other hand, has to target performance: either small size or (what most will prefer nowadays) fast load times. This format might also have to live inside a container file format, so the loader has to be able to read from memory, not just from files. Maybe there could even be a special "batch mode" for multiple consecutive models in memory/in a container file.
Since you put so much emphasis on fast load times, the last case is probably the best fit for your file format. If so, only engine developers are your target audience.

Another use case might be model files handled by a version control system (SVN, Git, ...). For all of these, a text format is easier to handle (e.g. for detecting conflicts), but on the other hand this would also require the file content to be predictable (e.g. if I change something over here, this section of the file changes and nothing else). Since I don't think that is the case for 3D models in general, I don't think it's worth spending too much time thinking about it (and I didn't include it above).

##### Share on other sites

17 hours ago, Sacaldur said: The format used in the built game has on the other hand to target performance, either small size or (what most will prefer nowadays) fast load times.

It is fast load times in nearly every case, especially when developing a console game. Companies like Sony and Microsoft have strict guidelines on how long a game is allowed to load before it first shows something to the player; exceed them and the game is rejected from the platform.
https://bgrasley.wordpress.com/resources-for-latex-and-math-notation/
# LaTeX and Math Notation

These are things I've tried, and some of them I use a lot. Some other notes that I don't want to forget:

Use \( and \) to delimit inline math for MathJax. Use \[ and \] to delimit blocks of math for MathJax. Use \color{colorname}{mathcode} to colour chunks of math notation with MathJax. The format for an equation array is this:

\[\begin{eqnarray} 2(x + 6) &=& 2(x) + 2(6) \\ &=& 2x + 12 \end{eqnarray}\]
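For instance, the \color note above can be combined with a display block like this (a made-up illustration; colorname can be any colour name MathJax accepts):

\[\color{red}{2(x + 6)} = \color{blue}{2x + 12}\]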
https://sites.google.com/site/yfawuk/past-years/2011/program
### Program

The program is still to be confirmed, but will consist of two and a half days of postgraduate and invited talks, with poster presentations as another option. This year we will have invited talks by established mathematicians in this field. For a more detailed program and list of talks, please see the attached PDF.

### Wednesday

12:00 - 12:30 Introductions (Department Coffee Room, G/109)
12:30 - 14:00 Lunch
14:00 - 15:00 Invited Speaker: Steve Power (Lancaster), Crystal Frameworks and Rigidity Operators
15:00 - 17:30 Session 1 (break at 16:00), ADS/106

### Thursday

09:00 - 12:30 Session 2 (break at 11:00), ADS/106
12:30 - 14:00 Lunch
14:00 - 15:00 Invited Speaker: Michael Dritschel (Newcastle), A Tour of von Neumann's Inequality
15:00 - 17:30 Session 3 (break at 16:00), ADS/106
20:00 Conference Dinner (venue TBC)

### Friday

09:00 - 11:30 Session 4 (break at 11:00), ADS/106
11:30 - 12:30 Invited Speaker: Simon Eveson (York), The Strange Ubiquity of Rank 1
12:30 - 14:00 Lunch
14:00 - 15:30 (provisional) Session 4 and end of workshop, ADS/106

## Abstracts

Apologies for the raw LaTeX markup in some abstracts; please see the PDF for a formatted version.

### Invited Talks

#### Michael Dritschel (Newcastle): A Tour of von Neumann's Inequality

In 1951 John von Neumann published the following inequality, which bears his name: if $p$ is a complex polynomial and $T$ is a bounded Hilbert space operator with $\|T\| \leq 1$, then $\|p(T)\| \leq \|p\|_\infty$, the supremum norm of $p$ over the unit disk. The inequality has immediate application to functional calculus questions, and for this reason (among others) it is interesting to know for which function algebras and operators inequalities of this sort are attained. For example, given a function algebra $\mathcal A$ over a complex domain with norm denoted by $\|\cdot\|_\infty$ (though perhaps not the supremum norm!), one could look for conditions on an operator $T$ such that $f(T)$ makes sense for $f \in \mathcal A$ and $\|f(T)\| \leq \|f\|_\infty$. Or one might ask: given a collection of operators, are there natural domains and function algebras over these domains such that some version of von Neumann's inequality holds? What about algebras over $\mathbb C^n$, or tuples of operators (commuting or not)? We will touch on many of these problems, discuss some of the recent work in this area, list a few of the many applications and point to open problems around this fascinating inequality.

#### Simon Eveson (York): The Strange Ubiquity of Rank 1

The simplest linear operators on a Banach space are those of rank 1. This talk describes three different situations in which relatively complicated linear systems exhibit very simple rank 1 behaviour in an asymptotic sense. This leads to a question (to which I don't know the answer!): are these isolated examples, or part of some more general theory?

#### Steve Power (Lancaster): Crystal Frameworks and Rigidity Operators

A finite bar-joint framework in three dimensions has an associated rigidity matrix which detects infinitesimal rigidity and flexibility of the framework. Infinite bar-joint (bond-node) frameworks, with periodic structure, play a role in mathematical models for rigid unit mode vibrations (low-energy phonons) in material crystals. I shall introduce their infinite rigidity matrices and rigidity operators and present some of the foundations of a fledgling topic. In particular I hope to indicate: (1) how one may compute the rigid unit mode spectrum of a crystal framework, and (2)
how one might develop a Hilbert space theory for "square summable flexes".
http://shebang.pl/jezykiobce/macierze/
# Lesson 6. Matrices

12 February 2016

Watch the video below. Don't worry if you don't understand everything in it. The most important thing is to watch it before reading the text. Answer the questions below.

1. What is the subject of the video?
2. What is a matrix?
3. What do you use if you need a variable to represent a matrix?
4. How many rows and columns does the matrix from the first example have?
5. What transcendental number did the author use in the matrix B?
6. What does the notation A[2,2]=0 mean?
7. What type of equations are matrices used to represent?
8. What can elements of a matrix represent in computer graphics?
9. What four operations on matrices does the author describe?
10. How do you add matrices?
11. Can you add two arbitrary matrices or do they have to satisfy any condition?
12. What is a row vector?

## Matrices

A matrix (plural: matrices) is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns, that is treated in certain prescribed ways. One such way is to state the order of the matrix. For example, the matrix below is a 2×3 matrix because it has two rows and three columns. The individual items in a matrix are called its elements or entries.

$\left[\begin{array}{ccc}1& 9& 13\\ 20& 5& -6\end{array}\right]$

Matrices are enclosed in [ ] or ( ), and are usually named with capital letters. For example, three matrices named A, B, and C are shown below.

A = $\left[\begin{array}{cc}1& 2\\ 3& 4\end{array}\right]$, B = $\left[\begin{array}{ccc}1& 2& 7\\ 0& -5& 6\\ 7& 8& 2\end{array}\right]$, C = $\left[\begin{array}{cc}-1& 3\\ 0& 2\\ 3& 1\end{array}\right]$

A matrix is often referred to by its size or dimensions: m × n, indicating m rows and n columns. Matrix entries are defined first by row and then by column. For example, to locate the entry in matrix A identified as $a_{ij}$, we look for the entry in row i, column j. In matrix A, shown below, the entry in row 2, column 3 is $a_{23}$.

$\left[\begin{array}{ccc}a_{11}& a_{12}& a_{13}\\ a_{21}& a_{22}& a_{23}\\ a_{31}& a_{32}& a_{33}\end{array}\right]$

### Types of matrices

A square matrix is a matrix with dimensions n × n, meaning that it has the same number of rows as columns. The 3×3 matrix above is an example of a square matrix.

A row matrix is a matrix consisting of one row, with dimensions 1 × n.

$\left[\begin{array}{ccc}a_{11}& a_{12}& a_{13}\end{array}\right]$

A column matrix is a matrix consisting of one column, with dimensions m × 1.

$\left[\begin{array}{c}a_{11}\\ a_{21}\\ a_{31}\end{array}\right]$

## Major operations on matrices

Addition and subtraction of matrices is only possible when the matrices have the same dimensions. The sum A+B of two m-by-n matrices A and B is calculated entrywise: $(A + B)_{i,j} = A_{i,j} + B_{i,j}$, where 1 ≤ i ≤ m and 1 ≤ j ≤ n.
$\left[\begin{array}{ccc}1& 3& 1\\ 1& 0& 0\end{array}\right] + \left[\begin{array}{ccc}0& 0& 5\\ 7& 5& 0\end{array}\right] = \left[\begin{array}{ccc}1 + 0& 3 + 0& 1 + 5\\ 1 + 7& 0 + 5& 0 + 0\end{array}\right] = \left[\begin{array}{ccc}1& 3& 6\\ 8& 5& 0\end{array}\right]$

### Scalar multiplication

The product cA of a number c (also called a scalar in the parlance of abstract algebra) and a matrix A is computed by multiplying every entry of A by c: $(cA)_{i,j} = c \cdot A_{i,j}$

$2 \cdot \left[\begin{array}{ccc}1& 8& -3\\ 4& -2& 5\end{array}\right] = \left[\begin{array}{ccc}2 \cdot 1& 2 \cdot 8& 2 \cdot (-3)\\ 2 \cdot 4& 2 \cdot (-2)& 2 \cdot 5\end{array}\right] = \left[\begin{array}{ccc}2& 16& -6\\ 8& -4& 10\end{array}\right]$

### Matrix multiplication

Multiplication of two matrices is defined if and only if the number of columns of the left matrix is the same as the number of rows of the right matrix. If A is an m-by-n matrix and B is an n-by-p matrix, then their matrix product AB is the m-by-p matrix whose entries are given by the dot product of the corresponding row of A and the corresponding column of B. For example, the underlined entry 2340 in the product is calculated as (2 × 1000) + (3 × 100) + (4 × 10) = 2340:

$\left[\begin{array}{ccc}2& 3& 4\\ 1& 0& 0\end{array}\right] \times \left[\begin{array}{cc}0& 1000\\ 1& 100\\ 0& 10\end{array}\right] = \left[\begin{array}{cc}3& 2340\\ 0& 1000\end{array}\right]$

## Applications of matrices

In mathematics matrices are used to represent linear transformations. For example, the rotation of vectors in three-dimensional space is a linear transformation which can be represented by a rotation matrix. The product of two transformation matrices is a matrix that represents the composition of the two linear transformations. Another application of matrices is in the solution of systems of linear equations. In computer graphics programming matrices are used to represent and combine common transformations.
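The entrywise rules above translate directly into nested loops. Here is a small, self-contained illustration in C, a sketch using the example matrices from the multiplication section above:

```c
/* Multiply the 2x3 matrix A by the 3x2 matrix B from the example above.
 * Expected product: {{3, 2340}, {0, 1000}}. */
#include <stdio.h>

int main(void)
{
    int A[2][3] = { {2, 3, 4}, {1, 0, 0} };
    int B[3][2] = { {0, 1000}, {1, 100}, {0, 10} };
    int C[2][2] = { {0} };   /* initialize all entries to zero */
    int i, j, k;

    /* (AB)_{i,j} is the dot product of row i of A and column j of B */
    for (i = 0; i < 2; i++)
        for (j = 0; j < 2; j++)
            for (k = 0; k < 3; k++)
                C[i][j] += A[i][k] * B[k][j];

    for (i = 0; i < 2; i++)
        printf("%d %d\n", C[i][0], C[i][1]);
    return 0;
}
```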
## Vocabulary

- ( ) (bracket): nawias
- [ ] (square bracket): nawias kwadratowy
- a value given by: wartość wyznaczona przez
- above: powyżej
- abstract algebra: algebra abstrakcyjna
- arranged in rows and columns: ułożone w wiersze i kolumny
- array of numbers: tabela liczb, tablica liczb
- capital letter: wielka litera
- column: kolumna
- column matrix: macierz kolumnowa
- composition: kompozycja, złożenie
- compute by multiplying: obliczać przez pomnożenie
- computer graphics programming: programowanie grafiki komputerowej
- corresponding: odpowiadający
- dimension: wymiar
- dot product: iloczyn skalarny
- element: element
- enclose in: ująć w
- entry: pozycja, element
- expression: wyrażenie
- to identify: identyfikować
- if and only if: jeżeli i tylko, jeżeli (the logical expression "if and only if" joins two statements with the logical operator "and" [conjunction]; the whole is true when both statements have the same truth value, just as in an XNOR gate; the isosceles triangle example illustrates this well: "A triangle is isosceles if and only if the triangle has two equal sides." From this sentence one can draw the following conclusions: "If a triangle is isosceles, then the triangle has two equal sides." and "If a triangle has two equal sides, then the triangle is isosceles." In short, "if and only if" is sometimes written as "iff" and denoted by the symbol ⇔)
- if…then: jeśli…to (the part of the sentence directly after "if" is called the hypothesis, and the part after "then" is the conclusion; this corresponds to sentences of the type "if a certain condition is met… then such-and-such an effect follows"; symbolically, "if…then" is denoted by the logical symbol ⇒)
- linear transformation: przekształcenie liniowe
- matrix (pl. matrices): macierz
- matrix product: iloczyn macierzy
- meaning that: co oznacza, że
- one such way is to…: jednym ze sposobów jest…
- rotation matrix: macierz obrotu
- rotation vector: wektor obrotu
- row matrix: macierz wierszowa
- row: wiersz
- scalar: skalar
- shown below: pokazany poniżej
- solution of systems of linear equations: rozwiązywanie układów równań liniowych
- square matrix: macierz kwadratowa
- the order of a matrix (matrix order): stopień macierzy
- the same number of rows as columns: taka sama liczba wierszy, co kolumn
- three-dimensional space: przestrzeń trójwymiarowa
- to be treated in certain prescribed ways: być traktowanym w pewien z góry określony sposób
- to calculate entrywise: obliczać wg elementów
- to indicate: wskazywać
- to look for sth: szukać czegoś
- transformation matrix: macierz przekształcenia
- underlined: podkreślony

## Exercises

### Exercise 1.

Match the words in column A with the ones in column B.

| A | B |
| --- | --- |
| array of | algebra |
| arranged | numbers |
| the order | linear equations |
| capital | in |
| shown | of a matrix |
| square | letters |
| abstract | transformation |
| matrix | below |
| linear | product |
| system of | matrix |

### Exercise 2.

Answer the following questions.

1. What is a matrix?
2. What do you call individual items of a matrix?
3. How do you denote matrices?
4. What is usually used to name matrices?
5. What are matrices usually referred to by?
6. How do you locate entries in matrices?
7. What is a square matrix?
8. What is a row matrix?
9. What is a column matrix?
10. When is addition and subtraction of matrices possible?
11. How is the sum of two matrices calculated?
12. When is multiplication of two matrices possible?
13. What are matrices used for in mathematics?
14. What are matrices used for in computer graphics programming?

### Exercise 3.

Write sentences following the example.

A: 2 × 3; B: 3 × 4 -> If A is a 2-by-3 matrix and B is a 3-by-4 matrix, then their matrix product AB is a 2-by-4 matrix.

1. A: 5 × 4; B: 4 × 7
2. A: 3 × 5; B: 5 × 2
3. A: 9 × 8; B: 8 × 12
4. A: 11 × 17; B: 17 × 13
5. A: 6 × 2; B: 2 × 7

### Exercise 5.

The names of some companies and brands can be translated into Polish in various funny ways. Come up with absurd translations or interpretations of the English names below. As usual, share the fruits of your intellect in a comment.

1. Microsoft
2. General Electric
3. General Motors
5. SAP
6. UPS
7. Ebay
8. Starbucks
9. John Deere

Do you know other company names that can be translated or interpreted in a similar way?

### Exercise 6.

Write sentences replacing "application" with "is used" and "you can use", as in the example.

One of applications of matrices is solving systems of linear equations. -> Matrices are used in the solution of linear equations. You can use matrices to solve linear equations.

1. One of applications of matrices is representing linear transformations.
2. One of applications of matrices in computer graphics programming is representing and combining common transformations.
3. One of applications of rotation matrices is representing linear transformations.
4. One of applications of the product of two transformation matrices is representing the composition of two linear transformations.
5.
The application of matrices is storing numbers, symbols, or expressions.
6. One of applications of [ ] and ( ) is enclosing matrices.

### Exercise 7.

Change the sentences from passive into active as shown in the example.

A matrix is often referred to by its size. -> You can refer to a matrix by its size.

1. A matrix is often referred to by its order.
2. A matrix is often referred to as an n × p matrix.
3. An n × n square matrix with ones on the main diagonal and zeros elsewhere is referred to as an identity matrix.
4. A matrix in which all entries below the main diagonal are zero is called an upper triangular matrix.
5. A matrix in which all entries above the main diagonal are zero is called a lower triangular matrix.
6. A matrix in which all entries outside the main diagonal are zero is called a diagonal matrix.
7. A matrix in which individual values are represented as colors can be referred to as a heat map.
8. A matrix consisting of one row with dimensions 1 × n is referred to as a row matrix.
9. A matrix consisting of one column with dimensions m × 1 is referred to as a column matrix.

### Exercise 8.

Use the following information to write sentences using "if and only if", as in the example.

A triangle is equilateral ⇔ its angles all measure 60° -> A triangle is equilateral if and only if all its angles measure 60°.

1. A square matrix has an inverse ⇔ its determinant is not zero.
2. Multiplication of two matrices is defined ⇔ the number of columns of the left matrix is the same as the number of rows of the right matrix.
3. A matrix is invertible ⇔ its determinant is nonzero.
4. A triangle has three equal sides ⇔ it has three equal angles.
5. A number is divisible by 9 ⇔ the sum of its digits is divisible by 9.
6. A real number is rational ⇔ its decimal expansion is terminating or repeating.
7. A natural number is divisible by 2 ⇔ the digit in its unit's place is either 0, 2, 4, 6, or 8.
8. A natural number is divisible by 3 ⇔ the number obtained by adding its digits is divisible by 3.

### Exercise 9.

Translate the following sentences into English.

1. Jeśli trójkąt jest równoboczny, to wszystkie jego kąty mierzą po 60°. Jeśli wszystkie kąty trójkąta mierzą po 60°, to trójkąt ten jest równoboczny.
2. Jeśli mnożenie dwóch macierzy jest zdefiniowane, to liczba kolumn w lewej macierzy jest taka sama, jak liczba wierszy w prawej macierzy. Jeśli liczba kolumn w lewej macierzy jest taka sama, jak liczba wierszy w prawej macierzy, to zdefiniowane jest mnożenie tych macierzy.
3. Jeśli macierz jest odwracalna, to jej wyznacznik jest niezerowy. Jeśli wyznacznik macierzy jest różny od zera, to macierz ta jest odwracalna.
4. Jeśli trójkąt ma trzy równe boki, to ma też trzy równe kąty. Jeśli trójkąt ma trzy równe kąty, to ma też trzy równe boki.
5. Jeśli liczba jest podzielna przez 9, to suma jej cyfr jest podzielna przez 9. Jeśli suma cyfr liczby jest podzielna przez 9, to liczba ta jest podzielna przez 9.
6. Jeśli liczba rzeczywista jest wymierna, to jej rozwinięcie dziesiętne jest skończone lub okresowe. Jeśli rozwinięcie dziesiętne liczby rzeczywistej jest skończone lub okresowe, to jest to liczba rzeczywista. Jeśli liczba jest rzeczywista, to jej rozwinięcie dziesiętne jest skończone lub okresowe.
7. Liczba naturalna jest podzielna przez 2, jeśli na pozycji jednostek zawiera cyfrę 0, 2, 4, 6 lub 8. Jeśli na pozycji jednostek liczba naturalna zawiera cyfrę 0, 2, 4, 6 lub 8, to jest ona podzielna przez 2.
8.
Jeśli liczba naturalna jest podzielna przez 3, to suma wartości jej cyfr jest podzielna przez 3. Jeśli suma wartości cyfr liczby naturalnej wynosi 3, to liczba ta jest podzielna przez 3.

### Exercise 10.

Look at the matrices below and say some sentences about their multiplication, following the example. Can you multiply the matrices?

A = $\left[\begin{array}{cccc}1& 2& 3& 4\\ 5& 6& 7& 8\\ 9& 10& 11& 12\end{array}\right]$, B = $\left[\begin{array}{ccccc}13& 14& 15& 16& 17\\ 18& 19& 20& 21& 22\\ 23& 24& 25& 26& 27\\ 28& 29& 30& 31& 32\end{array}\right]$

Example: For multiplication of the matrix A by the matrix B, the element $A_{1,1}$, having the value 1, corresponds to the element $B_{1,1}$, having the value 13.

### Exercise 11.

Write down several operations from the previous exercise as shown in the example and read them aloud.

1 × 13 + 2 × 18 + 3 × 23 + 4 × 28 = 13 + 36 + 69 + 112 = 230 -> one times thirteen plus two times eighteen plus three times twenty-three plus four times twenty-eight is equal to thirteen plus thirty-six plus sixty-nine plus one hundred and twelve, which is equal to two hundred and thirty

### Exercise 12.

Read the following conditional statements aloud following the example.

x ≥ y ⇒ x + z ≥ y + z -> if x is greater than or equal to y, then x + z is greater than or equal to y + z

1. x ≥ 0, y ≥ 0 ⇒ xy ≥ 0
2. z ≥ 0 ⇒ z + 8 ≥ 0
3. x > 0 ⇒ x + 1 > 0
4. p ≤ 4 ⇒ p − 4 ≠ 4
5. a ≠ 0 ⇒ a × 10 ≠ 0
6. x = 3 ⇒ x + 7 = 10
7. b ≥ 2 ⇒ b × 6 ≥ 12
8. z ≠ 7 ⇒ z + 7 ≠ 14
9. x > 4 ⇒ x + 3 > 7
10. q ≤ 12 ⇒ q − 8 ≠ 5

### Exercise 14.

Watch the following video and answer the questions below.

1. What is the subject of the video?
2. What three sample items does the author present at the beginning?
3. What is a matrix?
4. What do you do with the numbers that are in front of xs and ys?
5. What are constants?
6. How are constants represented in a matrix?
7. What is the first step in solving a system of linear equations using a matrix?
8. What is the second step?
9. How did she compute the value of x?
10. How did she compute the value of y?
11. What is the solution of the sample system of linear equations?
12. Use the method described in the video to solve the following system of linear equations:

A = $\left\{\begin{array}{c}2x - 3y = -2\\ 4x + y = 24\end{array}\right.$

## Sources

The content of this page is available under the terms of the CC BY 3.0 licence.

## See also:

Author: Łukasz Piwko. Translator of English and French technical literature, teacher, lecturer and programming-technology enthusiast. He is interested in everything related to programming and to translating texts on these topics into Polish. In his free time he reads Balzac, listens to music and practises karate.
https://www.nature.com/articles/s41524-019-0255-3?error=cookies_not_supported&code=5e688cee-2eb1-4460-a14b-67f5d5dcb325
## Introduction

Point, line and/or planar defects are ubiquitously present in all materials and frequently have beneficial effects on material properties. The intentional introduction and control of defects plays a key role in the development of advanced materials with better performance and new functionality. Well-known examples include doping semiconductors to modify the band structure and using phase or grain boundaries to strengthen alloys. Like other materials, battery intercalation compounds contain various types of defects. "Defect engineering" is a promising strategy for this class of materials, which nonetheless has not yet been widely explored. In particular, recent studies find that antisite defects, which are common in battery compounds, can promote Li transport and enhance rate performance by opening up alternative diffusion channels with lowered migration energies in numerous lithium-ion battery electrode materials. Such phenomena have been reported in Li1.211Mo0.467Cr0.3O2,1 Li2(Mn,Fe)P2O7,2 α-LiMn1−xFexPO4 (ref. 3) and Li4Ti5O12,4 etc. In refs. 5,6, antisite defects are also reported to improve the stability and cyclability of cubic LixTi2O4, where the random mixing of Li and Ti on octahedral sites in the cubic phase electrochemically induced from amorphous TiO2 enables reversible capacity that cannot be achieved otherwise. These examples demonstrate that the rational tailoring of antisite defects provides a potentially general approach to improving battery electrode properties. Here we present a computational study that reveals a new mechanism by which antisite defects enhance the rate capability of intercalation compounds: they accelerate surface-reaction-limited (SRL) phase transformation during battery charge/discharge.

Antisite defects are generated when the sites of intercalating ions are occupied by other cations that are usually less mobile. A prominent effect of antisite defects in battery electrodes is to block the existing paths of intercalating ions and, at the same time, generate new migration pathways. For intercalation compounds with strongly anisotropic transport properties, which are common among battery materials, this effect usually leads to a reduction in the diffusion anisotropy of the intercalating ions. For example, lithium iron phosphate olivine (LiFePO4) is theoretically predicted to have predominantly one-dimensional (1D) Li diffusion in the [010] direction.7 However, experiments show that practically synthesized LiFePO4 with just a few percent of Li–Fe antisite defects (Li/Fe) instead exhibits two-dimensional (2D) Li diffusivity.8,9 This discrepancy is explained by first-principles calculations10 that find Li/Fe to impede Li movement along [010] migration channels but facilitate Li hopping between the channels via vacancies on Fe sites created by antisite defects. Similar observations of antisite defects reducing ion diffusion anisotropy are also reported for Li2MP2O7 (ref. 2), Li4Ti5O12 (ref. 4), Na2+δFe2−δ/2(SO4)3 (ref. 11) and LiMnBO3 (ref. 12).

In this work, we show that antisite defects in LiFePO4 lead to an unexpected increase in the SRL phase transition rate. This is achieved by an increase in the surface reaction area for Li intercalation due to defect-enhanced Li diffusion along [100], even though antisites impede Li movement in the fast [010] diffusion direction.
Analysis of the interplay between surface reaction and Li diffusion reveals that the inclusion of antisite defects in LiFePO4 qualitatively changes the particle size dependence of the phase transformation rate in the SRL regime. As a result, the rate performance of defect-containing LiFePO4 is less sensitive to particle dimensions, which facilitates the use of larger particles to improve the packing density and reduce side reactions of the electrodes. Under galvanostatic conditions, the presence of antisites also reduces the risk of electrode damage from current hotspots by distributing the reaction flux more uniformly over the particle surface. Due to the kinetic competition between surface reaction and bulk diffusion, an optimal defect concentration is predicted to exist for a given particle geometry that maximizes the Li (de)intercalation rate. Criteria for the co-design of defect content and particle morphology are proposed. Counterintuitively, we find that (100)-oriented LiFePO4 platelike particles may exhibit even better rate performance than (010)-oriented plate particles in the SRL kinetic regime at relatively low defect levels, whereas the latter is commonly viewed as the most desirable LiFePO4 particle morphology. While we demonstrate the possibility of using antisite defects to accelerate phase transitions in LiFePO4, the approach may have general applicability to other phase-changing battery materials that exhibit ion diffusion anisotropy.

## Results

Electrochemically driven first-order phase transformations in battery electrode materials upon ion (de)intercalation are subject to the kinetic control of various rate-limiting steps. We previously showed9 that the competition between Li diffusion and surface reaction can give rise to three distinct phase transformation modes in LiFePO4, i.e. bulk-diffusion-limited (BDL), SRL and an intermediate hybrid mode, in which phase boundary migration is BDL or SRL in different directions. As illustrated in Fig. 1, these transformation modes are associated with different phase growth morphologies and rates. They are not unique to LiFePO4 and can operate in other intercalation compounds. Phase transformation becomes SRL when Li insertion/extraction at the electrode particle surface is much more sluggish than Li diffusion inside the particles. This can occur at small particle sizes, low applied over/under-potentials and/or low exchange current density for charge transfer, etc.9 In a pioneering study, Singh, Ceder and Bazant (SCB) investigated the SRL phase transformation kinetics using a depth-averaged model of LiFePO4.13 They show that a salient feature of the SRL mode is that the phase boundary travels at a constant velocity perpendicular to the main Li intercalation direction (Fig. 1), which is in sharp contrast to the behavior of the BDL mode.
Although the SCB theory assumes 1D Li transport in LiFePO4 in accordance with the DFT prediction,7 the characteristics of the SRL mode are independent of Li diffusion anisotropy and also apply to cases with higher-dimensional Li diffusivity in the presence of antisite defects.9

### Antisite defects accelerate SRL phase transformation

We study the SRL phase transformation process in defect-containing LiFePO4 particles using the mesoscale phase-field simulation method,14,15,16 which has been applied to the modeling of LiFePO4.17,18,19,20,21 An unexpected phenomenon, that antisite defects can increase the phase boundary migration speed, is observed in the simulations, in which the influence antisite defects exert on the discharge process is captured by their effect on the anisotropic Li diffusivity in LiFePO4. In ref. 10, Malik et al. evaluate from first-principles calculation the activation barrier required for Li+ to cross over to a neighboring channel through an antisite in LiFePO4. The calculated Li migration barrier is employed to determine the defect concentration dependence of the Li diffusivity along the [010], [100] and [001] axes, D[010], D[100] and D[001], at 440 K. While D[010] at room temperature (300 K) is also reported at several defect levels in ref. 10, the values of D[100] and D[001] at 300 K are not provided in that work. As they are required for our simulations, we calculated all three diffusion coefficients at 300 K as a function of antisite concentration following the approach of Malik et al.10 (see Methods). As shown in Supplementary Fig. 1 in the Supplementary Information (SI), D[100] exhibits an approximately inverse relationship with D[010] at 300 K. Both of them approach 10−12–10−11 cm2 s−1 at high defect concentrations, consistent with our previous observation of 2D room-temperature Li diffusivity D[010] ≈ D[100]/[001] ≈ 10−11 cm2 s−1 in defect-containing LiFePO4 microparticles.9 The calculated Li diffusivity values at 300 K are used in the phase-field simulations presented below.

Figure 2 compares the discharge simulations for two LiFePO4 particles with different antisite contents at a constant underpotential Δϕ = 35 mV, which is below the underpotential required to form a metastable solid solution in LiFePO4. The particles are given a (010)-oriented platelike shape in the simulations to facilitate fast Li diffusion along the [010] direction and ensure that the phase transition is in the SRL regime. Such particle morphology is commonly obtained in hydrothermal synthesis.22,23 In the simulations, Li is inserted into the particle from the (010) surface only, and a zero-flux boundary condition is applied to the (100) surfaces. The "defect-lean" particle contains 0.5% antisites and has a large D[010] = 1.3 × 10−10 cm2 s−1 but a much smaller D[100] = 3.9 × 10−14 cm2 s−1. The "defect-rich" particle contains 25% antisites and has a much reduced Li diffusion anisotropy, with D[010] = 5.4 × 10−12 cm2 s−1 and D[100] = 2.6 × 10−12 cm2 s−1. Because phase boundaries are observed to be parallel to the [001] axis in platelike particles,18,22,24,25 the simulations were reduced to two dimensions to improve computational efficiency. Coherency stress that arises from the lattice mismatch between LiFePO4 (LFP) and FePO4 (FP) phases plays a critical role in inducing the metastable solid solution behavior at high (dis)charge rates.18,26 However, it is found that the omission of coherency stress does not cause any significant change in the phase transformation kinetics in the SRL regime (see Supplementary Fig.
2), and stress is thus not considered in the simulations, to simplify the theoretical analysis described later.

As shown in Fig. 2, the FP → LFP phase transition in both particles is in the SRL mode, with the phase boundary traveling along [100] at a constant speed. However, the phase boundary velocity in the defect-rich particle is six times higher than in the defect-lean particle, even though the Li diffusivity in the main intercalation direction [010] is two decades larger in the latter. We also performed a similar simulation of SRL phase transformation at T = 440 K using the defect-concentration-dependent Li diffusivity values at this temperature reported in ref. 10. Supplementary Fig. 3 shows a similar increase of the phase boundary speed, by one order of magnitude, in LiFePO4 containing 10% defects at 440 K. This finding is surprising, as SRL phase transformation is expected to be kinetically limited by surface reaction and insensitive to Li diffusion kinetics.

A revealing clue to this counter-intuitive result can be found in Fig. 3a, which shows the distribution of the Li intercalation flux js on the (010) surface of both particles. Li insertion in the defect-lean particle only takes place near the phase boundary, and js decays rapidly away from the boundary center. The narrow surface reaction region moves together with the phase boundary, resulting in the sequential filling of [010] Li channels along the [100] direction during discharge. Such behavior is analogous to the prediction of the SCB theory and the "domino-cascade" model proposed by Delmas et al.27 In contrast, the entire (010) surface actively intercalates Li in the defect-rich particle. While its peak value is the same as that in the defect-lean particle, js decreases more slowly away from the phase boundary. As such, more Li ions are inserted into the defect-rich particle per unit time, which explains its higher phase transformation rate. Figure 3b shows that the wider surface reaction region on the defect-rich particle is a direct result of the enhanced [100] diffusivity enabled by defects. It can be seen that the Li diffusion flux in the defect-lean particle is confined near the phase boundary. In the defect-rich particle, however, a significant Li diffusion flux along [100] exists within the entire particle, which transports Li atoms inserted at surface locations distant from the phase boundary to the boundary to participate in the phase transformation. This results in a larger surface reaction area.

Therefore, a key insight we obtained is that an increased Li diffusivity along the phase boundary migration direction ([100] here) benefits the SRL phase transformation kinetics in LiFePO4 by expanding the surface reaction region. This differs from the effect of faster Li diffusion on phase transformation kinetics in the BDL regime, in which a higher diffusivity enhances the phase boundary velocity in the same direction. It represents a new mechanism of accelerating phase transitions through the interplay between Li diffusion and surface reaction, which has not been explored so far.

To further shed light on the effect of Li diffusion on SRL kinetics, especially the quantitative dependence of the phase boundary velocity on the [100] and [010] Li diffusivity, a series of particle discharge simulations under constant Δϕ = 35 mV was carried out. Instead of obeying the relation given by Supplementary Fig. 1, the values of D[010] and D[100] are varied independently in the simulations in order to study their respective effects on the phase transition kinetics.
The [010] thickness of the particle is L[010] = 50 nm, and its [100] dimension L[100] is assumed to be much larger than the surface reaction region, so that the phase boundary velocity VPB is independent of L[100]. As shown in Fig. 4, varying D[010] from 10−9 to 10−12 cm2 s−1 at a given D[100] has little effect on the calculated phase boundary velocity VPB and the surface reaction zone width W, which is defined as the width of the region where js is larger than 35% of its peak value. This confirms that the phase transition is indeed in the SRL mode and hence insensitive to D[010]. We note that D[010] in defect-containing LiFePO4 is predicted to increase beyond the bulk diffusivity value at very small [010] particle thickness, when antisites can no longer block the 1D migration channels.10 The results above suggest that this particle size dependence of D[010] does not have a large impact on the predicted acceleration of the phase transition by antisites in the SRL regime. On the other hand, increasing D[100] from 10−14 to 10−11 cm2 s−1 while keeping D[010] constant causes both W and VPB to increase by more than 20 times.

### Depth-averaged model

We found that the 2D simulation results can be well approximated by a 1D depth-averaged model similar to the SCB theory, which is derived by assuming that the Li concentration c is uniform along [010] and a function of the [100]-coordinate x only. With this simplification, the governing equation of the phase-field model becomes

$$\frac{\partial c}{\partial t} = \frac{\partial}{\partial x}\left[\frac{D_{[100]}V_{\mathrm{m}}}{RT}c\left(1 - c\right)\frac{\partial \mu_{\mathrm{Li}}}{\partial x}\right] + \frac{2j_{\mathrm{s}}}{L_{[010]}} \qquad (1)$$

where the surface reaction flux js on one (010) facet is given by the Butler–Volmer equation (Methods). Equation 1 has the form of a reaction-diffusion equation. It reduces to the SCB model when the diffusion term disappears from Eq. 1 at D[100] = 0. The phase boundary velocity VPB and surface reaction zone width W predicted by this model are shown as solid lines in Fig. 4. The excellent agreement with the full simulations allows one to use the depth-averaged model to efficiently analyze the SRL kinetics in the presence of [100] Li diffusion.

### Scaling relation

The simulation results in Fig. 4 show that both VPB and W hold the same parabolic relation with D[100], i.e. $$V_{\mathrm{PB}} \propto D_{[100]}^{1/2}$$ and $$W \propto D_{[100]}^{1/2}$$, when D[100] > ~10−14 cm2 s−1. Furthermore, VPB is proportional to W over the entire range of D[100]. When D[100] → 0, W approaches the intrinsic diffuse interface width in the phase-field model, and VPB reaches the prediction of the SCB model. It shows that a defect-free LFP particle has a very low SRL phase boundary velocity (<10−3 nm s−1) because of a very narrow surface reaction zone (~2 nm). As a result, good rate performance can only be achieved when the FP → LFP first-order phase transition is bypassed. However, when an adequate amount of antisite defects is present in the particle, the surface reaction area can increase by orders of magnitude, resulting in a similar increase in the phase boundary velocity and enabling much better rate capability in the absence of a metastable solid solution.

A more transparent understanding of the scaling relation can be obtained from an approximate analytical solution to the depth-averaged model, which is based on the sharp-interface assumption (see derivation in Methods).
The solution gives the following expressions for the surface reaction zone width W and phase boundary velocity VPB:

$$W = \lambda_1\sqrt{\frac{D_{[100]}L_{[010]}}{i_0}} \qquad (2)$$

$$V_{\mathrm{PB}} = \lambda_2\sqrt{\frac{i_0 D_{[100]}}{L_{[010]}}} \qquad (3)$$

where the expressions for λ1 and λ2, which are functions of Δϕ, are given in Methods, and i0 is the exchange current density. As shown in Fig. 4, Eqs. 2 and 3 show excellent agreement with the numerical solution when W is significantly larger than the diffuse-interface width of the phase boundary. In addition to explaining the parabolic dependence of W and VPB on D[100], Eq. 3 shows that VPB varies with i0 and the [010] particle thickness L[010] as $$i_0^{1/2}$$ and $$L_{[010]}^{-1/2}$$, respectively. Notably, these relations qualitatively differ from those of defect-free LFP particles with 1D Li diffusivity, in which VPB has a stronger dependence on i0 and L[010], i.e. $$V_{\mathrm{PB}} \propto i_0$$ and $$V_{\mathrm{PB}} \propto L_{[010]}^{-1}$$, according to the SCB theory. The difference in scaling behavior lies in the fact that, upon decreasing i0 or increasing L[010], the surface reaction area W increases in defect-containing particles (Eq. 2), which contributes to a less pronounced reduction of VPB.

### Particle size dependence of phase transformation time

The phase boundary speed predicted by Eq. 3 applies to the situation where the [100] particle size is larger than the surface reaction zone width W. When L[100] < W, the entire (010) particle surface is active for Li intercalation during discharge. An interesting prediction thus arises: VPB should increase approximately linearly with L[100] in the SRL regime, i.e. the longer the travel distance of the phase boundary, the faster it moves. Accordingly, the particle transformation time, given by tf ≈ L[100]/VPB, should be insensitive to L[100]. This prediction is confirmed by the calculations shown in Fig. 5, which plots the average VPB and tf as a function of L[100] for Li intercalation into a defect-rich (25% antisites) particle at Δϕ = 35 mV. It can be seen that VPB is proportional to L[100] and that tf increases very slowly with L[100] up to ~1 μm, which makes tf dependent only on the [010] particle size, as $$t_{\mathrm{f}} \propto \sqrt{L_{[010]}}$$. In contrast, tf is proportional to L[100]L[010] in defect-free particles. Along with Eq. 3, this comparison shows that the inclusion of antisite defects results in a qualitatively different particle size dependence of the SRL phase transition kinetics. The weaker dependence of tf on both L[100] and L[010] in defect-containing particles implies another benefit of antisite defects: they make the rate performance of LiFePO4 degrade less severely with particle dimensions. This facilitates the use of larger particles in applications, which can improve the packing density and reduce side reactions between particles and electrolyte in electrodes.

### Galvanostatic cycling behavior

Besides the constant-underpotential condition, we also studied the difference between defect-rich and defect-lean particles under galvanostatic discharge, or constant current, conditions. Figure 6a compares the distributions of the Li intercalation flux js on the (010) surface of particles with different defect contents when galvanostatically discharged to 50% state of charge at 0.005 C (nC = fully discharged in 1/n hours).
While the total amount of Li intercalated into the particles per unit time is the same, the peak value of js decreases sharply with D[100] because faster [100] diffusion causes the reaction flux to be more evenly distributed on the (010) surface. A 123-fold reduction in the peak flux occurs when D[100] increases from 0 cm2 s−1 (0% antisites) to 2.6 × 10−12 cm2 s−1 (25% antisites). Meanwhile, increasing D[100] also significantly decreases the required underpotential, from 30 mV to 0.3 mV, as shown in Fig. 6b. Therefore, antisite defects are beneficial under galvanostatic cycling conditions by reducing polarization and mitigating degradation caused by current hotspots and electrochemical shock.28,29,30

### Optimization of defect concentration and particle morphology

Having demonstrated the benefits of antisite defects for phase transformation kinetics in the SRL regime, we ask whether there exists an optimal defect concentration and how it depends on the LiFePO4 particle geometry. We still consider the platelike particle morphology here for its practical relevance. The SRL transformation rate is maximized when the entire plate surface is active for Li intercalation during (dis)charge, which requires L[100] < W. Using Eq. 2, this leads to a criterion on the [100] Li diffusivity:

$$D_{[100]} > \frac{i_0 L_{[100]}^2}{\lambda_1 L_{[010]}} \qquad (4)$$

On the other hand, Li bulk diffusion along the [010] plate thickness direction should be sufficiently facile that it does not limit the transformation kinetics. This condition can be described by the inequality $$L_{[010]}^2/D_{[010]} < W/V_{\mathrm{PB}}$$, where $$L_{[010]}^2/D_{[010]}$$ is the characteristic Li diffusion time along the [010] axis and W/VPB is the time a [010] channel stays active for Li intercalation. Applying Eqs. 2 and 3, we derive a criterion on the [010] Li diffusivity:

$$D_{[010]} > \frac{\lambda_2 i_0 L_{[010]}}{\lambda_1} \qquad (5)$$

Equations 4 and 5 provide guidance on tuning the defect concentration for given particle sizes, or conversely, the particle geometry for a given defect content. Because increasing the antisite defect concentration has opposite effects on D[100] and D[010], Eqs. 4 and 5 may not always be satisfied simultaneously, and an optimal defect level may exist. We numerically examine the defect concentration dependence of the SRL transformation rate in 2D phase-field simulations, in which a (010)-oriented LiFePO4 particle of L[100] × L[010] = 400 nm × 50 nm is discharged at Δϕ = 35 mV. Figure 7 shows that the lithium intercalation time keeps decreasing with antisite concentration up to 25%, which implies that the optimum lies at even higher defect levels, which may nonetheless be practically unfeasible to realize. However, we find that comparable performance may be attained at much lower defect concentrations by changing the particle shape from (010)-oriented to (100)-oriented plates. For such particle morphology, the main reaction surface is the (100) facet, which is observed to be active for Li intercalation in defect-containing LFP particles in recent in situ TXM experiments.9,31 Because SRL phase boundary movement is parallel to the [010] fast Li diffusion direction in (100)-oriented plates, the phase boundary velocity can be significantly improved over (010)-oriented plates at the same defect concentration. This is confirmed by the simulations shown in Fig. 7. For a (100)-oriented particle with L[100] × L[010] = 50 nm × 400 nm, the minimum intercalation time is reached at a 5% defect concentration.
We note that this result is obtained assuming that the exchange current density i0 for Li intercalation is the same for the (100) and (010) surfaces, which is taken to be 0.01 A m−2 based on an estimate from ref. 32. While a direct measurement of i0 on the (100) facet is not available in the literature, experimental observations of Li intercalation on non-(010) surfaces in LiFePO4 (refs. 9,31) imply that it is comparable to that of the (010) surface. For example, the hybrid-mode phase boundary migration speed on (100)/(001) surfaces in a LiFePO4 microrod sample is fitted with i0 = 0.1 A m−2 in ref. 9. Therefore, without introducing an excessive amount of defects, (100)-oriented plates could potentially provide better performance in the SRL regime than (010)-oriented plates, which are usually assumed to be the desired particle shape for fast (dis)charging. Interestingly, a recent study indeed finds (100)-oriented LFP nanoplates to exhibit excellent rate capability,33 even better than that of (010)-oriented nanoparticles,34 which may be related to the defect-based mechanism discussed here.

As a commercial cathode material for Li-ion batteries, LiFePO4 is known for its exceptional rate capability. Recent experiments26,35,36 and modeling studies18,37,38 establish the formation of a metastable solid solution in LiFePO4 during fast (dis)charge, and it is widely believed that bypassing the sluggish first-order phase transition is responsible for considerably accelerating the Li intercalation kinetics. Here we demonstrate a different, defect-based acceleration mechanism that does not require the suppression of the FP → LFP phase transformation. Compared to the former, such a mechanism can be effective at low overpotentials, where the metastable solid solution does not form. This could be attractive for battery operation where the magnitude of the applied overpotential is limited. For example, the development of thick-electrode battery cells has gained significant interest recently as a way to improve energy density.39,40,41 However, thick electrodes are often plagued by severe reaction non-uniformity, as a wide range of overpotentials typically exists across the electrodes due to electrolyte polarization.41 The inability of LFP particles to form a solid solution in the low-overpotential region is likely to exacerbate the nonuniform reaction and cause capacity underutilization. Using defect-containing LFP particles in thick electrodes could alleviate this issue.

The enhancing effect of defects on the SRL phase transformation rate also offers a promising way to improve the performance of electrode materials that do not exhibit metastable solid solution behavior and/or have very sluggish surface reaction kinetics. In a recent work,42 Li et al. reveal that fast Li surface diffusion on the LiFePO4 surface, facilitated by adsorbed fluids, promotes Li redistribution within the (010) plane inside the particles and significantly enhances the intra-particle phase separation kinetics. There, Li surface diffusion plays a role similar to that of antisite defects in increasing the effective Li diffusivity along non-[010] directions. A reaction-diffusion equation similar to Eq. 1 is employed in ref. 42 to analyze the stability of the LixFePO4 solid solution in the presence of Li surface diffusion. Although Li et al. focus on the effect of fast surface diffusion on phase separation kinetics, it is also expected to increase the surface reaction area and accelerate SRL phase transformation in the two-phase coexistence regime, where phase separation is fully developed.
This phenomenon can be analyzed within the same theoretical framework as this work by considering the contribution of surface diffusion to D[100] in Eq. 1. The Li diffusivity on the (010) LiFePO4 surface in contact with electrolyte is conservatively estimated to be ~10−12 cm2 s−1, which leads to an effective in-plane diffusivity of ~10−14 cm2 s−1 in 150 nm-thick, (010)-oriented LiFePO4 plates.42 Compared to D[100] ≈ 10−12 cm2 s−1 in defect-rich particles, this represents a relatively small contribution to [100] Li transport, although its importance will certainly increase with decreasing particle size, which warrants further study. Conversely, like Li surface diffusion, antisite defects can also promote phase separation at high underpotentials where a metastable solid solution can form (Δϕ > 45 mV in our model). In a phase-field modeling study,43 Dargaville and Farrell show that increasing the [100] Li diffusivity induces phase separation in LiFePO4 under high discharge currents, which otherwise favor the solid-solution intercalation behavior. Antisite defects may have a beneficial effect on SRL intercalation kinetics even in this regime. For instance, Supplementary Fig. 4a compares the lithiation rate of defect-rich (25% antisites) vs defect-lean (0.5% antisites) particles under Δϕ = 50 mV. It shows that the former is fully lithiated at ~8000 s, when the latter has only reached ~60% lithiation. The reason for this difference is that defect-enhanced [100] Li diffusion causes phase separation to initiate in the LixFePO4 metastable solid solution at an earlier time of ~5700 s (see Supplementary Fig. 4b), after which the Li chemical potential at the particle surface is reduced to the LFP/FP two-phase equilibrium level. As shown in Supplementary Fig. 4b, this in turn increases the surface reaction overpotential in the Butler–Volmer kinetics and generates a larger Li intercalation flux. A detailed analysis of the interplay between antisites, phase separation and Li intercalation kinetics will be presented elsewhere.

Finally, we discuss a potential experimental strategy to examine the predicted beneficial effects of antisite defects in LiFePO4. We suggest that our predictions could be tested experimentally by annealing hydrothermally synthesized LiFePO4 platelike particles at different temperatures to vary the defect level. In general, increasing the annealing temperature reduces the amount of defects in electrode materials.2,44,45,46 For instance, this approach has been demonstrated by Chen and Graetz,44 who report that the FeLi antisite concentration in LiFePO4 prepared by a hydrothermal method decreases from 8 to 0% when the post-synthesis annealing temperature is increased from 440 °C to 500 °C. In ref. 2, Kim et al. synthesized Li2MnP2O7 with ~0%, 20% and 30% Li/Mn antisites by calcining samples at 700 °C, 650 °C and 600 °C, respectively, which were tested to confirm the effect of antisite defects on promoting Li diffusion in this material.

## Discussion

In summary, we reveal that the inclusion of antisite defects in LiFePO4 particles can result in an orders-of-magnitude improvement in SRL phase transformation kinetics by increasing the active surface area for Li intercalation. This phenomenon originates from the experimentally confirmed effect of antisite defects on reducing the Li diffusion anisotropy and is expected to be applicable to other phase-changing intercalation compounds with anisotropic ion transport properties.
We numerically and analytically study the interplay between Li diffusion and surface reaction in the SRL regime. The results show that antisite defects qualitatively change the scaling dependence of the phase transformation rate on Li diffusivity, exchange current density and particle size. Upon potentiostatic discharge, the Li intercalation rate deteriorates more slowly with increasing particle dimensions in defect-rich particles, which facilitates the use of larger electrode particles without severely compromising rate performance. Antisites also induce a more uniform distribution of the reaction flux on the particle surface under galvanostatic discharge conditions, which reduces the risk of electrode degradation due to electrochemical shock. We show that the optimal defect concentration that maximizes the Li intercalation rate depends on particle geometry, and that (100)-oriented platelike LiFePO4 potentially offers better rate performance than (010)-oriented plates during the SRL (dis)charge process. Our work highlights the promise and opportunities of improving battery electrode compounds through intentional defect manipulation guided by a mechanistic understanding.

## Methods

### Antisite concentration dependence of Li diffusivity

We follow the approach described in ref. 10 to determine the Li diffusivity as a function of antisite concentration in LiFePO4, based on Li migration barriers obtained from DFT calculations.7,10 In short, a 1D random walk model is set up to simulate the hopping of a Li ion between two FeLi antisites within a [010] channel, which are separated by a distance d that depends on the antisite concentration p as d = (1 + p^(−1))l_b/2, where l_b is the lattice constant of LFP along [010]. The Li ion makes fast random jumps within the channel with an activation barrier of 270 meV taken from ref. 7. Whenever Li hops to a site next to an FeLi, it may cross over to the nearest migration channel to circumvent the obstruction of the antisite, with an activation energy of 491 meV as calculated by Malik et al.10 The frequencies of the two types of jumps are given by Γ = ν exp(−Ea/kT), where ν = 10^12 s^−1 is the attempt frequency and Ea is the activation energy. With Γ as input, 500 kinetic Monte Carlo simulations are run to estimate the average time ⟨t⟩ it takes for the Li ion to escape the channel blocked by the two antisites. The Li diffusivity is calculated as D = L^2/(2⟨t⟩), in which L = d/2 for D[010], l_a/2 for D[100] and l_c/2 for D[001] (l_a and l_c are the [100] and [001] lattice constants of LFP, respectively). We verified our calculation by reproducing the Li diffusivity values at 440 K and D[010] at 300 K reported in ref. 10, and then used the method to determine D[100], D[010] and D[001] at different defect concentrations at 300 K, as shown in Supplementary Fig. 1.

### Phase-field model

We use a previously reported phase-field model9,25 to simulate phase transformation in LiFePO4 upon Li intercalation, which is briefly described here. The site occupancy fraction of Li in LicFePO4, c(r), serves as the field variable to distinguish between the LFP (c = 1) and FP (c = 0) phases in the model.
Li diffusion and the LFP ↔ FP phase transformation are described by the Cahn–Hilliard equation for c(r)47,48 $$\frac{{\partial c}}{{\partial t}} = - \nabla \cdot j = \nabla \cdot \left[ {\frac{{{\mathbf{D}}V_{\mathrm{m}}}}{{RT}}c\left( {1 - c} \right)\nabla \mu _{{\mathrm{Li}}}} \right]$$ (6) where the Li chemical potential μLi is given by $$\mu _{\mathrm{Li}} = \frac{{\partial f_{\mathrm{chem}}\left( c \right)}}{{\partial c}} - \kappa \nabla ^{2}c = \mu _{\mathrm{Li}}^{\mathrm{eq}} + \frac{1}{V_{\mathrm{m}}}\left[ {RT\ln \frac{c}{{1 - c}} + \Omega \left( {1 - 2c} \right)} \right] - \kappa \nabla ^{2}c$$ (7) In Eqs. 6 and 7, D is the diffusion coefficient tensor, Vm = 43.8 cm3 mol−1 is the molar volume of LiFePO4,49 R is the gas constant, and $$\mu _{{\mathrm{Li}}}^{{\mathrm{eq}}}$$ is the equilibrium Li chemical potential at LFP/FP two-phase coexistence. A regular solution model is used to describe the homogeneous chemical free energy density fchem of LicFePO4, with Ω = 12 kJ mol−1.25 The gradient coefficient κ is given a value of 1.68 × 10−12 J cm−1, which produces a phase boundary energy of 0.072 J m−2 that averages the (100), (010) and (001) interface energies obtained from first-principles calculations.50 As the boundary condition, the Li intercalation flux at the particle surface is described by the Butler–Volmer equation: $$j_{\mathrm{s}} = \frac{i_0}{F}\left[ {\exp \left( \frac{\alpha V_{\mathrm{m}}\eta}{RT} \right) - \exp \left( { - \frac{{(1 - \alpha )V_{\mathrm{m}}\eta }}{RT}} \right)} \right]$$ (8) where $$\eta = \mu _{{\mathrm{Li}}}^{{\mathrm{el}}} - \mu _{{\mathrm{Li}}}$$ is the surface reaction overpotential, with $$\mu _{{\mathrm{Li}}}^{{\mathrm{el}}}$$ and $$\mu _{{\mathrm{Li}}}$$ being the Li chemical potentials in the surrounding electrolyte and at the particle surface, respectively. The underpotential Δϕ is related to $$\mu _{{\mathrm{Li}}}^{{\mathrm{el}}}$$ as $$\Delta \phi = F(\mu _{{\mathrm{Li}}}^{{\mathrm{el}}} - \mu _{{\mathrm{Li}}}^{{\mathrm{eq}}})/V_{\mathrm{m}}$$. We set i0 = 0.01 A m−2 (ref. 32) and α = 0.5 in Eq. 8. Supplementary Table 1 in the SI summarizes the parameters used in the model and the sources of their values.

### 1D depth-averaged model

The model is derived in a similar way as in ref. 13. Integrating Eq. 6 along the [010] axis, one has $$\frac{\partial \bar {c}}{\partial t} = \frac{1}{L_{[010]}}{\int \nolimits_0^{L_{[010]}}} \frac{\partial }{{\partial x}}\left[ {\frac{{D_{[100]}V_{\mathrm{m}}}}{{RT}}c(1 - c)\frac{{\partial \mu _{{\mathrm{Li}}}}}{{\partial x}}} \right]dy + \frac{{2j_{\mathrm{s}}}}{{L_{[010]}}}$$ (9) where $$\bar c\left( {x,t} \right) = \mathop {\int }\nolimits_0^{L_{[010]}} c(x,y,t)dy/L_{[010]}$$ is the average Li concentration in the [010] direction. With the assumption of facile Li transport along [010], so that c(r) is uniform along [010], i.e. $$c(x,y,t) = \bar c(x,t)$$, Eq. 7 becomes $$\mu _{{\mathrm{Li}}}\left( {x,t} \right) = \frac{{\partial f_{{\mathrm{chem}}}(\bar c)}}{{\partial \bar c}} - \kappa \frac{{\partial ^2\bar c}}{{\partial x^2}}$$ (10) Accordingly, the 2D Cahn–Hilliard equation (Eq. 6) is reduced to the depth-averaged equation (Eq. 1), in which the overbar on c is dropped.

### Analytical solution to the depth-averaged model

An analytical expression for the traveling wave solution to the depth-averaged model can be derived with a few approximations. Applying the ansatz $$c(x,t) = c(x - V_{{\mathrm{PB}}}t)$$ to Eq.
1, one obtains $$- V_{{\mathrm{PB}}}\frac{{dc}}{{dx}} = \frac{\partial }{{\partial x}}\left[ {\frac{{D_{[100]}V_{\mathrm{m}}}}{{RT}}c\left( {1 - c} \right)\frac{{\partial \mu _{{\mathrm{Li}}}}}{{\partial x}}} \right] + \frac{{2j_{\mathrm{s}}}}{{L_{[010]}}}$$ (11) The term on the left-hand side can be omitted from Eq. 11 when phase boundary migration is not very fast. In the sharp interface limit, the gradient term $$\kappa d^2c/dx^2$$ is removed from $$\mu _{{\mathrm{Li}}}$$ in Eq. 10. This approximation is valid when the surface reaction zone width is much larger than the intrinsic thickness of the diffuse phase boundary. Equation 1 is thus simplified to $$D_{[100]}\frac{{d^2c}}{{dx^2}} + \frac{{2j_{\mathrm{s}}}}{{L_{[010]}}} = 0$$ (12) Letting x = 0 be the phase boundary location, Eq. 12 is completed by the following boundary conditions $$\begin{array}{l}c(x = 0_ - ) = c_{{\mathrm{LFP}}}^{{\mathrm{eq}}},\,c(x = 0_ + ) = c_{{\mathrm{FP}}}^{{\mathrm{eq}}}\\ c(x = - \infty ) = c_{{\mathrm{LFP}}}(\Delta \phi ),\,c(x = \infty ) = c_{{\mathrm{FP}}}(\Delta \phi )\end{array}$$ (13) where $$c_{{\mathrm{LFP}}}^{{\mathrm{eq}}}$$ and $$c_{{\mathrm{FP}}}^{{\mathrm{eq}}}$$ are the LFP and FP compositions at two-phase equilibrium, and $$c_{{\mathrm{LFP}}}(\Delta \phi )$$ and $$c_{{\mathrm{FP}}}(\Delta \phi )$$ are the metastable LFP and FP compositions at underpotential $$\Delta \phi$$, respectively. By solving Eq. 12 through integration and applying Eq. 13, we obtain an implicit form of the traveling wave solution in the LFP (x < 0) and FP (x > 0) domains: $$\begin{array}{l}K_{{\mathrm{LFP}}}(c) \equiv {\int\nolimits_{c_{{\mathrm{LFP}}}^{{\mathrm{eq}}}}^c} \frac{{dc^{\prime} }}{{\sqrt {I(c^\prime ,c_{{\mathrm{LFP}}}(\Delta \phi ))} }} = - 2\sqrt {\frac{{i_0}}{{FD_{[100]}L_{[010]}}}} x,\,x\; < \;0\\ K_{{\mathrm{FP}}}(c) \equiv {\int \nolimits_{c_{{\mathrm{FP}}}^{{\mathrm{eq}}}}^c} \frac{{dc^\prime }}{{\sqrt {I(c^\prime ,c_{{\mathrm{FP}}}(\Delta \phi ))} }} = 2\sqrt {\frac{{i_0}}{{FD_{[100]}L_{[010]}}}} x,\,x\; > \;0\end{array}$$ (14) Function I is defined as $$I(c_1,c_2) \equiv {\int\nolimits_{c_1}^{c_2}} \hat j_s(c^\prime ,\Delta \phi )dc^\prime$$ and $$\hat j_{\mathrm{s}} = j_{\mathrm{s}}/(i_0/F)$$ is the dimensionless reaction flux. Inverting Eq. 14, the solution is explicitly expressed as $$c(x) = \left\{ {\begin{array}{*{20}{c}} {K_{{\mathrm{LFP}}}^{ - 1}\left( { - 2\sqrt {\frac{{i_0}}{{FD_{[100]}L_{[010]}}}} x} \right)} & {x\; < \;0} \\ {K_{{\mathrm{FP}}}^{ - 1}\left( {2\sqrt {\frac{{i_0}}{{FD_{[100]}L_{[010]}}}} x} \right)} & {x\; > \;0} \end{array}} \right.$$ (15) Supplementary Fig. 5 shows that the Li concentration profile predicted by Eq. 15 agrees very well with the numerical solution. As mentioned above, the surface reaction zone is defined as the region in which $$\hat j_{\mathrm{s}}(c) > \alpha \hat j_{\mathrm{s}}^{{\mathrm{max}}}$$, where $$\hat j_{\mathrm{s}}^{\max } = \hat j_{\mathrm{s}}(c(x = 0))$$ is the peak reaction flux and α is given a somewhat arbitrary value of 0.35. Using Eq. 15, the surface reaction zone width W is given by $$W = \left[ {K_{{\mathrm{LFP}}}\left( {\hat j_{\mathrm{s}}^{ - 1}(\alpha \hat j_{\mathrm{s}}^{{\mathrm{max}}})} \right) + K_{{\mathrm{FP}}}\left( {\hat j_{\mathrm{s}}^{ - 1}(\alpha \hat j_{\mathrm{s}}^{{\mathrm{max}}})} \right)} \right]\sqrt {\frac{{FD_{[100]}L_{[010]}}}{{4i_0}}} \equiv \lambda _1\sqrt {\frac{{D_{[100]}L_{[010]}}}{{i_0}}}$$ (16) Calculating the phase boundary velocity VPB from mass conservation, i.e.
$$V_{{\mathrm{PB}}} = 2{\int \nolimits_{ - \infty }^{ + \infty }} j_{\mathrm{s}}(c(x))dx/(\Delta cL_{[010]})$$, where $$\Delta c = c_{{\mathrm{LFP}}}(\Delta \phi ) - c_{{\mathrm{FP}}}(\Delta \phi )$$, one obtains $$V_{{\mathrm{PB}}} = \frac{1}{{\Delta c}}\sqrt {\frac{{i_0D_{[100]}}}{{FL_{[010]}}}} {\int \nolimits_{ - \infty }^\infty} \left[ {\hat j_{\mathrm{s}}(K_{{\mathrm{LFP}}}^{ - 1}(y)) + \hat j_{\mathrm{s}}(K_{{\mathrm{FP}}}^{ - 1}(y))} \right]dy \equiv \lambda _2\sqrt {\frac{{i_0D_{[100]}}}{{L_{[010]}}}}$$ (17) Figure 4 shows that W and VPB predicted by Eqs. 16 and 17 agree very well with the numerical solutions, except at very small D[100], where W is comparable to the diffuse phase boundary width and the sharp-interface assumption is not valid.
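To make the kinetic Monte Carlo recipe in Methods concrete, here is a minimal Python sketch of the escape-time calculation (a simplified rendering for illustration, not the authors' code): the channel segment between two blocking FeLi antisites is discretized into n_sites hypothetical hop sites, in-channel hops compete with crossover events at Arrhenius rates, and the mean escape time over 500 trajectories yields D via D = L^2/(2⟨t⟩). The value of n_sites and the lattice constant below are assumptions made for the example.

```python
import math, random

kT = 0.025852       # eV, thermal energy at 300 K
nu = 1e12           # s^-1, attempt frequency
E_hop, E_cross = 0.270, 0.491            # eV, barriers from refs 7 and 10
G_hop   = nu * math.exp(-E_hop / kT)     # in-channel hop rate
G_cross = nu * math.exp(-E_cross / kT)   # crossover rate next to an antisite

def escape_time(n_sites, rng):
    """One KMC trajectory: Li starts mid-channel on sites 0..n_sites-1,
    bounded by two blocking antisites; adjacent to an antisite it may
    also cross over to a neighboring channel, which counts as escape."""
    x, t = n_sites // 2, 0.0
    while True:
        events = []                                  # (rate, move) pairs
        if x > 0:            events.append((G_hop, -1))
        if x < n_sites - 1:  events.append((G_hop, +1))
        if x in (0, n_sites - 1):
            events.append((G_cross, None))           # escape event
        R = sum(rate for rate, _ in events)
        t += -math.log(1.0 - rng.random()) / R       # exponential waiting time
        pick, acc = rng.random() * R, 0.0
        for rate, move in events:
            acc += rate
            if pick < acc:
                if move is None:
                    return t
                x += move
                break

rng = random.Random(0)
n_sites = 20                                         # assumed channel discretization
t_avg = sum(escape_time(n_sites, rng) for _ in range(500)) / 500
L = 10.3e-8 / 2   # cm, half the assumed LFP a lattice constant, for D[100]
print(f"<t> = {t_avg:.3e} s,  D[100] ~ {L**2 / (2 * t_avg):.3e} cm^2/s")
```

With more sites between the antisites (i.e. a lower antisite concentration p), the escape time grows and the estimated D[100] falls, consistent with the defect-enhanced cross-channel transport discussed in the paper.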
2023-01-29 13:25:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5775001645088196, "perplexity": 1956.5813628868023}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499713.50/warc/CC-MAIN-20230129112153-20230129142153-00610.warc.gz"}
https://community.mcafee.com/t5/ePolicy-Orchestrator/McAfee-agent-4-5-question/td-p/253222
Level 7

## McAfee agent 4.5 question

Hi, I have a number of completely isolated networks, each having its own ePO server. Is it possible to create a single agent and then apply it to the separate networks? How would the machines on the separate networks know where to report back to? Is it a case of editing a file to achieve this? TIA Hippy

1 Solution (Accepted)

Level 16

## Re: McAfee agent 4.5 question

Not quite so simple, unfortunately, as the 4.5 agent also has encryption keys associating it with the server it is destined to connect to. Hence it's necessary to build the agent installer on the server it will communicate with, so the right files are embedded. This also ensures the agents get the right sitelist that describes where they should talk back to.

Rgds, Rob.
2018-07-23 08:01:19
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8259371519088745, "perplexity": 5002.682448163507}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676595531.70/warc/CC-MAIN-20180723071245-20180723091245-00147.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/polynomials-one-variable-which-following-expressions-are-polynomials-one-variable-which-are-not-state-reasons-your-answer-3sqrtx-sqrt2x_34602
# Which of the following expressions are polynomials in one variable and which are not? State reasons for your answer: $3\sqrt{x}+\sqrt{2}\,x$ - CBSE Class 9 - Mathematics

Concept: Polynomials in One Variable

#### Question

Which of the following expressions are polynomials in one variable and which are not? State reasons for your answer: $3\sqrt{x}+\sqrt{2}\,x$

#### Solution

$3\sqrt{x}+\sqrt{2}\,x = 3x^{1/2}+\sqrt{2}\,x$ is not a polynomial, because the exponent of $x$ in the term $3\sqrt{x}$ is $\frac{1}{2}$, which is not a whole number. (A polynomial in one variable may contain only non-negative integer powers of the variable; an irrational coefficient such as $\sqrt{2}$ is allowed.)

#### APPEARS IN

RD Sharma Solution for Mathematics for Class 9 by R D Sharma (2018-19 Session), Chapter 6: Factorisation of Polynomials, Ex. 6.10, Q 1.3, Page 2
2019-07-23 23:28:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3886711299419403, "perplexity": 1725.4941283976157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529737.79/warc/CC-MAIN-20190723215340-20190724001340-00384.warc.gz"}
http://zbmath.org/?q=an:0927.65102
# zbMATH — the first resource for mathematics

Optimal cylindrical and spherical Bessel transforms satisfying bound state boundary conditions. (English) Zbl 0927.65102

Summary: Optimal discrete transforms based upon the radial Laplacian eigenfunctions in cylindrical and spherical coordinates are presented, featuring the following properties: (1) bound state boundary conditions are enforced; (2) in the case of cylindrical or spherical symmetry, the relevant discrete Bessel transform (DBT) is analogous to the discrete Fourier transform in Cartesian coordinates; (3) the underlying quadrature algorithms achieve a Gaussian-like accuracy; (4) orthogonality of the transform can be ensured even in the absence of symmetry. Efficient multidimensional pseudospectral schemes are thus enabled in either direct or nondirect product representations. The illustrative program computes the various DBTs and applies them to the eigenvalue calculation for the two- and three-dimensional harmonic oscillator.

##### MSC:

65L15 Eigenvalue problems for ODE (numerical methods)
65N25 Numerical methods for eigenvalue problems (BVP of PDE)
35P15 Estimation of eigenvalues and upper and lower bounds for PD operators
34L10 Eigenfunctions, eigenfunction expansions, completeness of eigenfunctions (ODE)
65T50 Discrete and fast Fourier transforms (numerical methods)
2014-04-18 08:22:26
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8873741626739502, "perplexity": 7807.549629874691}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
https://mathematica.stackexchange.com/questions/204734/plot3d-why-certain-part-of-the-plot-is-excluded
Plot3D: why is a certain part of the plot excluded?

Here is a simple 3D plot, but some part of the plot is excluded.

wq[k_, xe_] := Max[ 90 xe - k, -5]
Plot3D[ wq[k, xe] , {k, 0, 50}, {xe, 0, 0.5}]
Plot3D[ wq[k, xe] , {k, 0, 50}, {xe, 0, 0.5}, Exclusions -> None, PlotPoints -> 100]

Although I can fix the problem by using Exclusions -> None, I am still puzzled by why there is such a large gap in the original plot. In fact, my original plot range was very small, and the entire surface was excluded: I got an empty plot. I spent hours looking for errors and finally realized that I was plotting an excluded area. Could any expert explain how the exclusion area is determined in Mathematica? Why is such a large area excluded in such a simple plot? Thank you so much.

• No such problem in v12 on Windows 10. (screenshot omitted) – Nasser Sep 3 at 17:32
• This is strange. When I run it again, the gap now shows up! Maybe it worked the first time by chance. (screenshot omitted) But if you add Exclusions -> "Singularities" to the first plot, then it becomes OK. Look at the help for Exclusions. – Nasser Sep 3 at 17:40
2019-12-15 23:07:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5173181295394897, "perplexity": 2352.0150838280383}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541310970.85/warc/CC-MAIN-20191215225643-20191216013643-00164.warc.gz"}
https://www.gradesaver.com/textbooks/math/prealgebra/prealgebra-7th-edition/chapter-3-review-page-204/15
## Prealgebra (7th Edition)

Published by Pearson

# Chapter 3 - Review - Page 204: 15

#### Answer

$(6x-3)\ yd^2$

#### Work Step by Step

Area is length × width: $3(2x-1) = 6x-3$ by the distributive property.
2018-11-13 15:47:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6196323037147522, "perplexity": 7711.034617262564}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741324.15/warc/CC-MAIN-20181113153141-20181113175141-00399.warc.gz"}
https://math.stackexchange.com/questions/1547593/find-the-limit-lim-limitsx-to-0-left-e-frac1-sin-x-e-frac1x
# Find the limit $\lim\limits_{x\to 0^+}{\left( e^{\frac{1}{\sin x}}-e^{\frac{1}{x}}\right)}$

Find the limit: $$\lim\limits_{x\to 0^+}{\left( e^{\frac{1}{\sin x}}-e^{\frac{1}{x}}\right)}$$ Using graph inspection, I have found the limit to be $+\infty$, but I cannot prove this in any way (I tried factorizing and using de l'Hôpital's rule, DLH)... Can anyone give a hint about that? The limit should be evaluated without any approximations, because we haven't been taught those yet.

• With "approximations", do you include Taylor series? – Peter Woolfitt Nov 26 '15 at 18:05
• @PeterWoolfitt Yes... – Jason Nov 26 '15 at 18:06
• Oh well, Taylor series seems to be the apparent approach to me – Peter Woolfitt Nov 26 '15 at 18:07
• Could you simplify $$e^a-e^b$$ ? – Stefan Nov 26 '15 at 18:07
• @Stefan I have already tried to do so in many ways by extracting $e^{1/x}$ and $e^{1/\sin x}$ in separate approaches, but both failed (even with DLH!!!)... – Jason Nov 26 '15 at 18:10

First, you can show that $$\lim_{x\to 0^+}\left(\frac{1}{\sin x}-\frac{1}{x}\right)=0.$$ This shows that $$\lim_{x\to 0^+}\frac{e^{\frac{1}{\sin x}-\frac{1}{x}}-1}{\frac{1}{\sin x}-\frac{1}{x}}=1.$$ Now, write $$e^{1/\sin x}-e^{1/x}=e^{1/x}\left(\frac{1}{\sin x}-\frac{1}{x}\right)\frac{e^{\frac{1}{\sin x}-\frac{1}{x}}-1}{\frac{1}{\sin x}-\frac{1}{x}},$$ and letting $x\to 0^+$ gives \begin{align*}\lim_{x\to 0^+}\left(e^{1/\sin x}-e^{1/x}\right)&=\lim_{x\to 0^+}e^{1/x}\left(\frac{1}{\sin x}-\frac{1}{x}\right)\lim_{x\to 0^+}\frac{e^{\frac{1}{\sin x}-\frac{1}{x}}-1}{\frac{1}{\sin x}-\frac{1}{x}}=\lim_{x\to 0^+}e^{1/x}\left(\frac{1}{\sin x}-\frac{1}{x}\right)\\ &=\lim_{x\to 0^+}\frac{e^{1/x}(x-\sin x)}{x\sin x}=\lim_{x\to 0^+}\frac{e^{1/x}(x-\sin x)}{x^2}, \end{align*} since $\lim_{x\to 0}\frac{\sin x}{x}=1$. Continuing the computation, this last limit is equal to $$\lim_{x\to 0^+}xe^{1/x}\lim_{x\to 0^+}\frac{x-\sin x}{x^3}=\frac{1}{6}\lim_{x\to 0^+}xe^{1/x}=\frac{1}{6}\lim_{u\to\infty}\frac{e^u}{u},$$ after performing the substitution $u=\frac{1}{x}$. The last limit is $\infty$, so the limit you ask about is equal to $\infty$.

• It should be $x\sin x$ in the denominator, not $x^2$, initially, although you can use $\sin x/x\to 1$ to get $x^2$ there. – Thomas Andrews Nov 26 '15 at 18:23
• Right, I'll add this. – detnvvp Nov 26 '15 at 18:23
• For the second equation you equated two limits, which cannot be done because of the indeterminate form... – Jason Nov 26 '15 at 18:26
• Which one? No limit is indeterminate here. – detnvvp Nov 26 '15 at 18:27
• OK, your answer has been edited to illustrate what I had been asking! All good now. – Jason Nov 26 '15 at 18:35

By the Mean Value Theorem applied to $f(x)=e^{1/x}$ with $f'(x)=-x^{-2}e^{1/x}$, we have $$e^{1/\sin x}-e^{1/x}=f(\sin x)-f(x)=(\sin x - x)f'(\xi)=\frac{x-\sin x}{\xi^2}\cdot e^{1/\xi}$$ with $\xi$ between $x$ and $\sin x$. We can find $a>0$ such that for all small enough positive $x$ we have $$\tag1 \sin x < x -ax^3$$ and hence $$\frac{x-\sin x}{\xi^2}>\frac{x-\sin x}{x^2}>ax.$$ Thus for small $x$, with $t:=\frac 1x$, $$e^{1/\sin x}-e^{1/x}=\frac{x-\sin x}{\xi^2}\cdot e^{1/\xi} >axe^{1/x}=a\cdot \frac{e^t}{t}\to +\infty$$ because the exponential grows faster than any polynomial (you might use $e^x\ge 1+x$ for all $x$ $\implies e^t=(e^{t/2})^2\ge(1+\frac t2)^2=1+t+\frac14t^2$ for all $t>-2$). How can we show $(1)$? Pick any $a$ with $0<a<\frac16$ and let $g(x)=x-ax^3-\sin x$.
Then $g'(x)=1-3ax^2-\cos x$, $g''(x)=-6ax+\sin x$, $g'''(x)=-6a+\cos x$, so $g'''$ is strictly decreasing from the positive value $1-6a$ to the negative value $-6a$ on $[0,\frac\pi2]$ and has a unique root $x_0\in(0,\frac\pi2)$. Thus $g''$ is strictly increasing on the interval $[0,x_0]$. This shows that $g''(x)> g''(0)=0$ for all $x\in (0,x_0]$, so that $g'$ is strictly increasing on $[0,x_0]$. Thus $g'(x)>g'(0)=0$ for all $x\in(0,x_0]$, so that $g$ is strictly increasing on $[0,x_0]$. Thus $g(x)>g(0)=0$ for $0<x\le x_0$, in other words: $(1)$.
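As a quick numerical sanity check of both answers (an addition of this edit, not part of the original thread): since $e^{1/x}$ overflows ordinary floating point as $x\to 0^+$, arbitrary-precision arithmetic is convenient. The difference indeed tracks the leading-order estimate $\frac x6 e^{1/x}$ extracted in the first answer:

```python
# Requires the mpmath package (pip install mpmath).
from mpmath import mp, mpf, exp, sin

mp.dps = 50                                  # 50 significant decimal digits
for x in [mpf('0.1'), mpf('0.05'), mpf('0.01')]:
    diff = exp(1 / sin(x)) - exp(1 / x)      # the expression in question
    asym = (x / 6) * exp(1 / x)              # leading-order estimate (x/6) e^(1/x)
    print(x, mp.nstr(diff, 6), mp.nstr(asym, 6))
```

The printed values grow without bound as $x$ decreases, consistent with the limit being $+\infty$.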
2020-10-22 07:14:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9413476586341858, "perplexity": 338.0339265299115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878921.41/warc/CC-MAIN-20201022053410-20201022083410-00236.warc.gz"}
https://quant.stackexchange.com/questions/60193/how-to-price-an-european-put-option-using-binomial-model-with-dividend-yield
# How to price a European put option using a binomial model with dividend yield?

The initial stock price (S0) is 45, the stock volatility is 0.20 (20% per annum), and the risk-free rate is 0.02 (2% per annum). Consider a European put option whose strike price is equal to 30, with a time-to-maturity of two years. The dividend yield is 0.04 (4% per annum).

Is it right if I draw a binomial tree with an ex-dividend model, but add 45 × 0.04 × e^(−0.02 × 2) to the option price?

Since each time step spans one year and the time to expiry is two years, we can model the option as a two-step binomial tree, with root node A, intermediate nodes B and C after one step, and terminal nodes D, E and F at expiry (the tree diagram from the original answer is omitted here).

Data from the question:

• $$S_0 = 45$$: current price of the underlying $$S$$
• $$\sigma = 0.2$$ per annum: volatility of $$S$$
• $$r = 0.02$$ per annum: risk-free interest rate
• $$d = 0.04$$ per annum: dividend yield of $$S$$
• $$T = 2$$ years: expiry date of the option on $$S$$
• $$K = 30$$: strike price of the option
• $$P(T) = \max(K - S(T), 0)$$: we are pricing a put option

One can solve this in various ways. Let's proceed here with risk-neutral valuation, which involves the following steps:

1. compute how much $$S$$ can shift up or down when moving from step to step
2. compute the risk-neutral probability of an upmove, $$p_{RN}$$
3. evaluate $$S$$ at each node $$B, C, D, E, F$$
4. obtain the value of the option $$P(T)$$ at the possible final nodes $$D, E, F$$
5. calculate the current value of the option, $$P(0)$$, as the discounted expected value of $$P(T) \sim \{P_D, P_E, P_F \}$$

1. $$S$$ can move one standard deviation up or down, that is $$S_{i} - S_{i - 1} = \Delta S = \sigma \, S_{i - 1} \qquad i = 1,2$$

2. The risk-neutral probability $$p_{RN}$$ of an upmove is obtained by noting that a safe investment of the same sum $$S_0$$ at interest rate $$r$$ yields $$S_0 e^{rT}$$ at option expiry $$T$$. To exclude arbitrage opportunities, $$S$$ has to grow on average at the risk-free rate. However, the dividends yielded by the stock while it is held in our portfolio reduce its risk-neutral price growth to the rate $$r - d$$, so over one step we require: $$p_{RN} S_{up} + \left(1 - p_{RN} \right) S_{down} = S_0 e^{(r - d)\frac{T}{2}}$$ where the length of the time step is $$\Delta t = T / 2$$. So $$p_{RN}$$ is: $$p_{RN} = \frac{S_0 e^{(r-d)\frac{T}{2}} - S_{down}}{S_{up} - S_{down}} = \frac{\require{cancel}\cancel{S_0} e^{(r-d)\frac{T}{2}} - \cancel{S_0} (1 - \sigma)}{2 \sigma \cancel{S_0}} = 0.4505$$

3. To obtain the values of $$S$$ at each node, traverse the tree from the root $$S_A = S_0$$, adding or subtracting $$\Delta S$$ (which assumes different values at each of the nodes): \begin{aligned} S_A &= 45\\ S_B &= S_A - \Delta S = 36 \hspace{1.35cm} S_C = S_A + \Delta S = 54 \qquad\\ S_D &= S_B - \Delta S = 28.8 \qquad S_E = S_B + \Delta S = S_C - \Delta S = 43.2\\ S_F &= S_C + \Delta S = 64.8 \end{aligned}

4. Quickly evaluate the option payoffs at expiry, $$P_D, P_E, P_F$$: \begin{align} P_D &= \max(K - S_D, 0) = 1.2\\ P_E &= \max(K - S_E, 0) = 0\\ P_F &= \max(K - S_F, 0) = 0 \end{align}

5.
The current value $$P(0)$$ is the present value (multiply by $$e^{-r T}$$) of the risk-neutral expected value of the option at expiry, $$E[P(T)]$$: $$P(0) = e^{-r T} E[P(T)] = e^{-r T} \left[ \left( 1 - p_{RN} \right)^2 P_D + 2 \cdot p_{RN} \left( 1 - p_{RN} \right) P_E + p_{RN}^2 P_F \right] = 0.9608 \left[ 0.302 \cdot 1.2 + 0 + 0 \right] = 0.3481$$ Note that the dividend yield enters only through the risk-neutral probability $$p_{RN}$$; the discounting uses the risk-free rate alone. Thus the answer to the question is $$P(0) \approx 0.3481$$.
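For readers who prefer code, here is a small Python sketch (an illustration added in this edit, not from the original answer) that reproduces the tree above. The additive ±σS move per step is the convention chosen in this answer; the standard CRR parametrization would instead use u = e^(σ√Δt).

```python
import math

def put_binomial(S0=45.0, K=30.0, r=0.02, q=0.04, sigma=0.20, T=2.0, steps=2):
    """European put with continuous dividend yield q, priced on a
    recombining tree with additive +/- sigma*S moves per step."""
    dt = T / steps
    u, d = 1.0 + sigma, 1.0 - sigma                    # per-step multipliers
    p = (math.exp((r - q) * dt) - d) / (u - d)         # risk-neutral up-probability
    payoffs = [max(K - S0 * u**j * d**(steps - j), 0.0)
               for j in range(steps + 1)]              # terminal put payoffs
    ev = sum(math.comb(steps, j) * p**j * (1 - p)**(steps - j) * payoffs[j]
             for j in range(steps + 1))                # risk-neutral expectation
    return math.exp(-r * T) * ev                       # discount at r only

print(round(put_binomial(), 4))  # 0.3481 with the inputs above
```

With steps=2 this matches the hand calculation; note that refining the grid would also require rescaling the per-step move (e.g. to σ√Δt) to keep the annual volatility fixed.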
2021-10-16 17:31:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 46, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.996821939945221, "perplexity": 1329.7431341637787}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323584913.24/warc/CC-MAIN-20211016170013-20211016200013-00572.warc.gz"}
http://codereview.stackexchange.com/questions/18975/this-javascript-implementation-of-range-is-fast-what-are-its-downsides
# This JavaScript implementation of range is fast. What are its downsides?

This implementation of range() is very fast:

RANGE = [];
for (var i = 0; i < 65536; ++i) RANGE.push(i - 32768);
range = function(a, b) { return RANGE.slice(a + 32768, b + 32768); };

Are there downsides in using this approach?

• Why are you subtracting 32k? – Šime Vidas Nov 23 '12 at 0:45
• @ŠimeVidas To allow for negative ranges – Dokkat Nov 23 '12 at 0:46
• @Dokkat: It might be more intuitive to run the loop from -32k to +32k – Bergi Nov 23 '12 at 2:05

Relatively speaking: 1) it is a bit heavy on memory use, requiring the memory for a property plus the memory of a number for each integer in the possible range; 2) it always consumes the memory and CPU needed to initialize, even if the script never has use for it.

• Interesting. 2 can be solved by initializing the array only after the first use of range(); this would aggravate 1, though. – Dokkat Nov 23 '12 at 0:50
• I don't think lazy loading it would have any significant effect on #1. – chris Nov 23 '12 at 0:56
• +1. Lazy loading would just make the first invocation of range slower than the average implementation :-) – Bergi Nov 23 '12 at 2:04

RANGE could be declared as a typed array (backed by an ArrayBuffer), which supports both the slice method and an array view (subarray) onto part of the buffer without copying the data. Returning an array view would mitigate the problem of memory consumption, and using a typed array (Int16Array) would use ~128k of memory instead of ~512k. The function could also be made self-modifying, doing the initialization at the first call:

var RANGE;
range = function(a, b) {
    // One-time lazy initialization of the lookup table.
    RANGE = new Int16Array(65536);
    for (var i = 0; i < 65536; ++i) RANGE[i] = i - 32768;
    // Replace ourselves with the fast path, then delegate to it.
    range = function(a, b) { return RANGE.subarray(a + 32768, b + 32768); };
    return range(a, b);
};
2014-11-26 17:31:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17057229578495026, "perplexity": 5385.454768452582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931007301.29/warc/CC-MAIN-20141125155647-00095-ip-10-235-23-156.ec2.internal.warc.gz"}
http://web.engr.oregonstate.edu/~walkiner/teaching/cs381-wi20/links.html
Throughout most of the course, we will use the functional programming language Haskell. In particular, we will use Haskell as a metalanguage for describing programming language concepts. It is therefore absolutely essential that you develop your Haskell programming skills! To be successful in this course, you will have to consult other Haskell resources and write Haskell programs outside of class (beyond the homework assignments and in-class exercises). The following resources should provide several options.

### Installing GHC

The Haskell compiler we'll be using in this class is GHC. You'll also need a tool called cabal for installing Haskell packages. The easiest way to install GHC and cabal varies by platform.

On Windows, install Haskell Platform, which includes both GHC and cabal.

On Mac, use Homebrew. First install Homebrew itself, if you don't already have it on your system. Then install GHC and cabal with the following commands:

> brew install ghc
> brew install cabal-install

On Linux, use whatever package manager is standard on your distribution (e.g. apt on Ubuntu, dnf on Fedora). The cabal package you want is probably called cabal-install.

Make sure that the GHC version installed is at least 8.4.

### Installing Doctest

Doctest is a useful tool for running examples written in the comments of a Haskell file as unit tests. We'll use this in some homework assignments. (A small example of what doctest runs appears at the end of this page.) After you've installed GHC and cabal, you can install doctest with the following commands:

cabal update
cabal install doctest

You will probably also need to add the directory that cabal installs its binaries in to your $PATH. Here are my best guesses as to where that will be:

• Linux: ~/.cabal/bin
• Mac: ~/.cabal/bin or ~/Library/Haskell/bin
• Windows: C:\Program Files\Haskell\bin

### Haskell Tutorials and Reference Manuals

• Introduction to Haskell by Brent Yorgey – An excellent, concise introduction to Haskell. I'll assign reading from this book/tutorial in the first couple of weeks.
• Haskell: The Confusing Parts – An FAQ especially for folks coming to Haskell from a C/Java background, which I guess is many of the people in this class.
• Haskell Wikibook – An easy-to-navigate and thorough resource.
• A Gentle Introduction to Haskell – Famous for being not-so-gentle, but a really great resource for refining your understanding of Haskell, once you get the basics down.

## Prolog

In the last couple weeks of the course, we will use the logic programming language Prolog. As with Haskell, I strongly recommend you supplement the course material with reading and exercises outside of class.

• SWI-Prolog – The Prolog environment we'll be using. I'll assume you have this installed.
• Learn Prolog Now! – This book provides a good introduction to Prolog and plenty of exercises for practice. Available for free online.
• An Introduction to Logic Programming through Prolog – A free older textbook based on Prolog. I haven't read this one but it looks like a pretty good resource.
• Prolog Wikibook – Another one I haven't read, but looks like a pretty good resource.
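As promised above, here is a minimal sketch of the comment-embedded examples that doctest executes. The module and function names are hypothetical placeholders; save it as Double.hs and run `doctest Double.hs` to have the two `>>>` examples checked as unit tests.

```haskell
-- Double.hs
module Double where

-- | Double an integer.
--
-- >>> double 2
-- 4
-- >>> double (-3)
-- -6
double :: Int -> Int
double x = 2 * x
```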
2020-02-19 14:09:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42926254868507385, "perplexity": 3499.3597670553004}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144150.61/warc/CC-MAIN-20200219122958-20200219152958-00507.warc.gz"}
https://tex.stackexchange.com/questions/176668/table-and-graphic-frame
# Table and graphic frame

Is there any LaTeX package that can produce tables and figures like the example in the following image by default?

EDIT: I used tcolorbox but did not get the result I needed, as I'm not familiar with its options. Here is my code (I found it here):

\newtcolorbox{theorem}[1][]{ breakable, enhanced, colback=white, colframe=black, top=\baselineskip, enlarge top by=\topsep, overlay unbroken and first={ \node[xshift=100pt,thick,draw=black,fill=white,anchor=west] at (frame.north west) % {\refstepcounter{theorem}\strut{\bfseries\theoname~\thetheorem}\if#1\@empty\relax\relax\else~: #1\fi}; } }

and this is the result. How can I customise it in order to be similar to the one above?

• Welcome to TeX.SX! I think you're looking for the pgfplots package. – Claudio Fiandrino May 12 '14 at 7:42
• 'By default' is a bit of a broad term here: as already mentioned, there are packages such as pgfplots that can make plots, but you will have to make changes to get a good reproduction of the example. – Joseph Wright May 12 '14 at 7:45
• @user42987: I've likely misunderstood, but after reading "Is there any package in LaTeX that can produce table and figure [..]", I think it is pretty normal to suggest a package that can draw the figure. Could you be more specific, then? – Claudio Fiandrino May 12 '14 at 8:00
• Have you tried to use the float environment? – Romain Picot May 12 '14 at 8:12
• You could take a look at the tcolorbox or mdframed packages, combined with the caption package. – Bernard May 12 '14 at 9:01

How about this solution using tcolorbox?

\documentclass{article} \usepackage[many]{tcolorbox} \newtcolorbox[auto counter]{myfigure}[2][]{enhanced,center upper, colback=white,colbacktitle=black!20!white,coltitle=black, arc=0pt,outer arc=0pt,fonttitle=\scshape,lefttitle=2.4cm, boxrule=0.3mm, overlay={ \fill[black!20!white] ([xshift=2.2cm,yshift=-0.3mm]title.south west) rectangle (frame.north east); \fill[black] ([yshift=-0.3mm]title.south west) rectangle node[white] {\sffamily Figure~\thetcbcounter} ([xshift=2.2cm]title.north west); },title={#2},#1} \begin{document} \begin{myfigure}{Prevalence des Prescriptions des Molecules d'Antibiotiques les plus Frequntes, par Annee d'Enquete} \includegraphics[width=10cm]{example-image-a} \end{myfigure} \end{document}

If this new figure should contain the chapter number and behave like an 'ordinary' LaTeX figure, use the following modification:

\newtcolorbox[use counter=figure,number within=chapter, list inside=lof,list type=figure]{myfigure}[2][]{enhanced,center upper, colback=white,colbacktitle=black!20!white,coltitle=black, arc=0pt,outer arc=0pt,fonttitle=\scshape,lefttitle=2.4cm, boxrule=0.3mm, overlay={ \fill[black!20!white] ([xshift=2.2cm,yshift=-0.3mm]title.south west) rectangle (frame.north east); \fill[black] ([yshift=-0.3mm]title.south west) rectangle node[white] {\sffamily Figure~\thetcbcounter} ([xshift=2.2cm]title.north west); },title={#2},#1}

Then, the new figure will be listed inside the list of figures when using \listoffigures

• Thank you for your help, this is what I expected, but I got an error when compiling: Unknown option many for package tcolorbox. \ProcessOptions* – user42987 May 12 '14 at 17:15
• I will try to update the packages and let you know – user42987 May 12 '14 at 17:35
• I'm quite sure that you have an old version of tcolorbox. The current version is 3.00. After an update, it should work. – Thomas F. Sturm May 12 '14 at 17:35
• @user42987 I'm glad that I could help.
Maybe it's also useful for you to know that you can make the figure floatable by adding the option float to the key list or, more flexibly, individually by using \begin{myfigure}[float]{... – Thomas F. Sturm May 13 '14 at 8:28
• @user42987 See my updated answer. – Thomas F. Sturm May 13 '14 at 13:10

If you only want to draw the graph, I suggest you have a look at a drawing program that uses LaTeX or produces LaTeX code. One program is Ipe v7. Another program is Nicola Talbot's jpgfdraw. Ipe uses LaTeX to produce PDF files that you may include in the main document.

• Thank you for your help. I'm not looking to produce graphics; I just want to produce a box (similar to the one above) around my included plots that contains the figure number and the caption. – user42987 May 12 '14 at 8:12
• @user42987 Then just put the picture in an 'fbox'. – Sveinung May 12 '14 at 8:19
• @user42987 You may also try 'framebox', forkosh.com/latex/ltx-237.html – Sveinung May 12 '14 at 8:23
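As a usage aside (an illustrative sketch added in this edit, not from the thread): combining the accepted answer's second definition, which requires a class that defines \chapter such as report, with the float option mentioned in the comments, a document body might look like this. example-image-a ships with the mwe package.

```latex
% Preamble: the second \newtcolorbox{myfigure} definition from the answer above.
\begin{myfigure}[float]{A floating, numbered example}
  \includegraphics[width=8cm]{example-image-a}
\end{myfigure}

\listoffigures % entries appear here thanks to "list inside=lof"
```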
2019-12-15 15:59:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.809576153755188, "perplexity": 3106.926358878406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541308604.91/warc/CC-MAIN-20191215145836-20191215173836-00182.warc.gz"}
https://bsahely.com/2015/09/20/the-atoms-of-space/
# The atoms of space

This article was initially posted at http://vixra.org/abs/1109.0030 on September 11, 2011, and it was updated with typos corrected and more references added on April 24 and 29, 2012. [v1] 11 Sep 2011 [v2] 2012-04-24 22:38:11 [v3] 2012-04-29 18:28:11

### Abstract

In this brief note, it will be shown that space may have hidden properties normally attributed to elementary particles, such as mass and charge. We will also elucidate the thermodynamic properties of these atoms of space by modelling them as ideal gas entities propagating disturbances at the speed of light. We have only demanded consistency among the formulas for circular motion, Einstein's mass-energy equivalence, wave-particle duality, the Planck-Einstein equation, Newton's law of universal gravitation, the Schwarzschild solution of general relativity, the Reissner–Nordström metric and black hole thermodynamics. We will then use the adiabatic index formula to elucidate the degrees of freedom of these atoms of space. We will also reinterpret Einstein's theories of relativity, solve the mystery of the double slit experiment, muse on the physical nature of dark energy, and finally uncover a possible blindspot that may have hampered progress in constructing a consistent and complete theory of quantum gravity.

Section 1: Introduction
Section 2: The particle-light wave duality
Section 3: The particle-black hole duality
Section 4: The thermodynamics of a collection of atoms of space
Section 5: Einstein's theories of relativity revisited
Section 6: The double slit experiment revisited
Section 7: Triality and the emerging worldview
Section 8: A possible path to non-commutative geometry, twistor theory, unitary theory and E8 theory
Section 9: Conclusion
References

### List of Figures

Figure 1: Particle-wave duality
Figure 2: Planck particle-wave duality
Figure 3: Planck particle-black hole duality
Figure 4: A collection of atoms of space
Figure 5: Model representing the information content of the atom of space

# 1 Introduction

Over the past 400 years, from Newton to the present day, we have been trying to understand the nature of our physical reality. We have made some postulates along the way, and have discovered, via the formalism of mathematics, several successful theories like Newton's theory of universal gravitation, special and general relativity, quantum mechanics and quantum field theory, which form the basis of our Standard Model of elementary particle interactions. If we ask why they are successful, we will discover that this is due to some underlying symmetry. However, there are major challenges, the most urgent of which is finding a complete and consistent theory of quantum gravity. In our attempt to complete this program, we have had to invent structures like strings, branes, loops, extra dimensions, supersymmetry, non-commutative geometry and twistors to make such a consistent theory realisable. Lately we have also discovered that our universe is made up predominantly of dark matter and dark energy, for which we have no explanation. Instead of creating epicycles upon epicycles of postulates and assumptions, is there another avenue that one can pursue that even a high school student can understand?
What has been attempted here is to see how far one can go by looking at the static/invariant solutions of all our successful theories, from the geometry of circles to black hole thermodynamics, and see what they are "telling us." Is there a solution to these equations that is complete and consistent, and if so, how can this solution be represented? The aim of this program is not to discover new equations, but to discover a hidden pattern within these equations that may throw some light on the foundational issues of quantum mechanics, special and general relativity, and the contemporary programs of M-/string theory, loop quantum gravity, non-commutative geometry and twistor theory as they aspire to understand the quantum basis, or lack thereof, of gravity. Although it has always been suspected from dimensional analysis that the Planck regime is where the fundamental insights lie, no assumption is made a priori in this regard. What will be done is as follows: We will first invoke de Broglie's wave-particle duality and represent the rest energy of a particle by a photon rotating in a circle. We will then equate the centripetal acceleration of this photon to the gravitational field of the point particle. By so doing, we will discover that this admits only one unique set of solutions, the traditional Planck quantities. Instead of taking these Planck particles as mere mathematical constructs, we will discover that if, in addition to mass, there is also a charge associated with each Planck particle, then these mathematical objects may represent real physical objects which we can use to model the dynamics of the real world around us. In this light, we will discover that we can model these physical objects as black holes with charge and mass, and we will realize that they are none other than stable extremal Reissner-Nordström black holes. Then we will proceed to use some of the fundamental relationships of black hole thermodynamics to show that these are actually the fundamental degrees of freedom of space. We will speculate on the nature of the degrees of freedom of these atoms, and provide a representation of these degrees of freedom in terms of spin states, thus providing a physical basis for the Dirac equation and spin networks in loop quantum gravity.

# 2 The particle-light wave duality

Let m = mass of a point particle at rest, and c = the speed of light. Applying Einstein's mass-energy equivalence equation [1], the rest energy Eparticle of the point particle is (1) ${ E }_{ particle }={ mc }^{ 2 }$ Also, the energy Ewave of a quantum of light in terms of its frequency f and Planck constant h is ${E}_{wave}=hf$, which can also be expressed in terms of the reduced Planck constant $\hbar$ and angular frequency $\omega$ as [2]: (2) ${ E }_{ wave }=\hbar \omega$ Using de Broglie's wave-particle duality [3], letting ${E}_{particle}={E}_{wave}$ and solving for $\omega$, we get: (3) $\omega =\frac { { mc }^{ 2 } }{ \hbar}$ Let us now represent the rest energy Eparticle of the point mass m by the energy Ewave of a quantum of light revolving in a circle of radius r, at speed $v=c$ and angular velocity $\omega$, as shown in Figure 1.

#### Figure 1 Particle-wave duality

We will now use the formulas for uniform circular motion [4] to determine the angular velocity $\omega$ and centripetal acceleration ${a}_{T}$ for this quantum of light.
The angular velocity $\omega$ is related to the tangential speed v and the radius r of the circle by (4) $\omega =\frac { v }{ r }$ The centripetal acceleration ${ a }_{ T }$ is (5) ${ a }_{ T }=\frac { { v }^{ 2 } }{ r }$ To determine r for the circle, let $v=c$, the speed of light, equate (3) and (4), and solve for r. We get (6) $r=\frac {v}{\omega} =\frac {c}{\omega} =\frac {c}{\frac{{mc}^{2}}{\hbar}} =\frac {\hbar}{mc}$ which is also the reduced Compton wavelength of the particle [5]. The centripetal acceleration aT in terms of the mass of the particle m is determined by substituting (6) into (5) and setting $v=c$: (7) ${a}_{T}=\frac {{v}^{2}}{r}=\frac {{c}^{2}}{r}=\frac {{c}^{2}}{\frac{\hbar}{mc}}=\frac{{mc}^{3}}{\hbar}$ If, as shown in Figure 1, we equate the gravitational field strength g at distance r from the point particle with the centripetal acceleration ${ a }_{ T }$ of the light, then according to Newton's law of universal gravitation, ${a}_{T}=g=\frac{Gm}{{r}^{2}}=\frac{m{ c }^{3}}{\hbar}$. Solving for r, (8) $r={\left(\frac{\hbar G}{{c}^{3}}\right)}^{\frac{1}{2}}$ which is also the Planck length, Lp [6]. By equating (6) with (8), $r=\frac{\hbar}{mc} ={\left(\frac{\hbar G}{{c}^{3}}\right)}^{\frac{1}{2}}$, we can solve for m: (9) $m={\left(\frac{\hbar c}{G}\right)}^{\frac{1}{2}}$ which is also the Planck mass, Mp [6]. Finally, for the circle, $\omega=2\pi f$, where f = frequency, or cycles per unit time, $f=\frac{1}{t}$, in which t is the time for one cycle. Solving for t, and using (3) for ω and (9) for m, we get (10) $t=\frac{2\pi}{\omega} =2\pi\left(\frac{\hbar}{{mc}^{2}}\right)=2\pi\left(\frac{\hbar}{{c}^{2}}\right)\left(\frac{1}{m}\right)=2\pi\left(\frac{\hbar}{{c}^{2}}\right){\left(\frac{G}{\hbar c}\right)}^{\frac{1}{2}}=2\pi{\left(\frac{\hbar G}{{c}^{5}}\right)}^{\frac{1}{2}}$ which is also the traditional Planck time [6] × 2π, which we now call tp. If the momentum of the light in this frame is p, then we can find the angular momentum J of this light wave by using the identity $E=pc$, together with (1) and (6). This results in (11) $J=pr=\left(\frac{E}{c}\right)r=\left(\frac{{mc}^{2}}{c}\right)r=\left(mc\right)\left(\frac{\hbar}{mc}\right)=\hbar$ which implies that the angular momentum of this light representation is $\hbar$.

#### Figure 2 Planck particle-wave duality

Thus, by invoking two equivalence principles, namely 1) rest energy = quantum of energy, and 2) gravitational field = centripetal acceleration of light in the light-frame, we have stumbled upon the zero-point quantum of energy of the gravitational field, ${E}_{p}={M}_{p}{c}^{2}={\left(\frac{\hbar c}{G}\right)}^{\frac{1}{2}}{c}^{2}={\left(\frac{\hbar{c}^{5}}{G}\right)}^{\frac{1}{2}}$ (Figure 2). At this point, I will postulate that we have also stumbled upon the quantum of space, or the atom of space, whose mass is Mp, whose radius is Lp (an invariant length scale), which also possesses an intrinsic clock whose invariant time period is tp, and whose energy is Ep and angular momentum Jp. These are summarised below:

${M}_{p}={\left(\frac{\hbar c}{G}\right)}^{\frac{1}{2}}$

${L}_{p}={\left(\frac{\hbar G}{{c}^{3}}\right)}^{\frac{1}{2}}$

${t}_{p}=2\pi{\left(\frac{\hbar G}{{c}^{5}}\right)}^{\frac{1}{2}}$

${E}_{p}={\left(\frac{\hbar {c}^{5}}{G}\right)}^{ \frac{1}{2}}$

$J=\hbar$

The Schwarzschild radius rS [7] of Mp is ${r}_{S}=2\frac{G{M}_{p}}{{c}^{2}} =2\frac{G}{{c}^{2}}{\left(\frac{\hbar c}{G}\right)}^{\frac{1}{2}}=2{\left(\frac{\hbar G}{{c}^{3}}\right)}^{\frac{1}{2}}=2{L}_{p}$.
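For orientation, here is a short numerical check of these quantities (an addition of this edit, using CODATA-style constant values; the charge computed at the end anticipates the dipole construction discussed just below):

```python
import math

hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m/s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
eps0 = 8.8541878128e-12  # F/m

M_p = math.sqrt(hbar * c / G)                    # Eq. (9): Planck mass
L_p = hbar / (M_p * c)                           # Eqs. (6), (8): Planck length
t_p = 2 * math.pi * math.sqrt(hbar * G / c**5)   # Eq. (10): 2*pi times the usual Planck time
E_p = M_p * c**2                                 # Planck energy

# Extremal Reissner-Nordstrom check for one half of the proposed dipole
# (mass M_p/2, charge half the Planck charge), as argued in the next section:
Q_half = 0.5 * math.sqrt(4 * math.pi * eps0 * hbar * c)
r_s = 2 * G * (M_p / 2) / c**2                   # Schwarzschild radius of mass M_p/2
r_Q = math.sqrt(Q_half**2 * G / (4 * math.pi * eps0 * c**4))

print(f"M_p = {M_p:.4e} kg, L_p = {L_p:.4e} m, t_p = {t_p:.4e} s, E_p = {E_p:.4e} J")
print(f"r_s/L_p = {r_s/L_p:.6f}, 2*r_Q/L_p = {2*r_Q/L_p:.6f}")  # both 1, i.e. 2 r_Q = r_s = L_p
```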
Thus the event horizon of the atom of space, where the information of the space is stored, is the surface of a sphere of radius $2{L}_{p}$. For the atom of space to have an event horizon that is a sphere of radius ${L}_{p}$ and not $2{L}_{p}$, I postulate that the atom of space is in fact a gravitational dipole made up of 2 equal masses ${m}^{+}$ and ${m}^{-}$, each of mass $\frac{{M}_{p}}{2}$, with the property that they REPEL each other. In order to give them attractive forces that cancel out the repulsive forces, they should both also carry electric charges that are equal and opposite; hence, our atom is in essence a matter-antimatter gravitational and electric dipole. So what should the values of these charges be? A hint is provided by the Reissner-Nordström metric, corresponding to the gravitational field of a charged, non-rotating, spherically symmetric body of mass M, where ${r}_{Q}$ is a length-scale corresponding to the electrical charge Q of the mass [8]:

${r}_{Q}={\left({Q}^{2}\frac{G}{4\pi{\epsilon}_{0}{c}^{4}}\right)}^{\frac{1}{2}}$

Since the atom can be considered an extremal black hole when $2{r}_{Q}={r}_{S}$, we can solve for Q. If $2{r}_{Q}={r}_{S}={L}_{p}$, then $2{\left({Q}^{2}\frac{G}{4\pi{\epsilon}_{0}{c}^{4}}\right)}^{\frac{1}{2}}={\left(\frac{\hbar G}{{c}^{3}}\right)}^{\frac{1}{2}}$, and solving for Q one gets $\frac{1}{2}{\left(4\pi{\epsilon}_{0}c\hbar\right)}^{\frac{1}{2}}$, which is none other than half the Planck charge. So if an atom of space is a gravitational and also an electric dipole, we have in one swoop solved both problems mentioned above. Since the strengths of both the gravitational field and the electric field decay with the inverse square of the distance, the net force within the atom, and also between two atoms, must always be zero; hence there is no net force causing attraction or repulsion of the atoms of space. Since the gravitational field is canceled by the electric field at all distances, this may explain why space is thought not to have any mass or charge. However, by postulating that an atom of space has an invariant mass-scale, charge-scale, length-scale and time-scale, space provides the yardstick by which all of the physical quantities in nature can be measured. This raises the question of which constants are more fundamental: $\hbar$, c, G and ${\epsilon}_{0}$, or ${M}_{p}$, ${L}_{p}$, ${Q}_{p}$ and ${t}_{p}$. Please note that the Planck quantities were derived from the empirical physical constants. If, however, the Planck quantities are postulated to be more fundamental, then the empirical physical constants are macroscopic consequences of these microscopic physical constants.

# 3 The particle-black hole duality

We will now use established results from black hole thermodynamics to determine how much information can be stored in the atom of space and to elucidate its fundamental degrees of freedom. To do so, we need to find a measure in 2 dimensions that can be used as a gauge or yardstick in 3 dimensions. The holographic principle guides us by relating the area of the surface of the event horizon, $4\pi{{L}_{p}}^2$, to the area corresponding to the particle-light correspondence, ${A}_{p}=\pi{{L}_{p}}^{2}$.
For the event horizon (surface), the maximum number of bits of information that can be stored is

(12) ${N}_{p}=\frac{{A}_{surface}}{{A}_{p}}=\frac{4\pi{{L}_{p}}^{2}}{\pi{{L}_{p}}^{2}} =4$

#### Figure 3 Planck particle-black hole duality

According to the equipartition theorem [9], the energy for one degree of freedom is $\frac{1}{2}{k}_{B}T$, where ${k}_{B}$ is the Boltzmann constant and T is the temperature of the surface, so in this case the total energy of the atom ${E}_{p}$ would be

(13) ${E}_{p}=\frac{1}{2}{N}_{p}{k}_{B}T=\frac{1}{2}4{k}_{B}T=2{k}_{B}T$

Equating this to the rest energy of the particle (see Figure 3 above), we have

(14) ${M}_{p}{c}^{2}=2{k}_{B}T$

Solving for ${T}_{p}$,

(15) ${T}_{p}=\frac{{M}_{p}{c}^{2}}{2{k}_{B}} =\frac{{\left(\frac{\hbar c}{G}\right)}^{\frac{1}{2}}{c}^{2}}{2{k}_{B}} =\frac{1}{2}{\left(\frac{\hbar{c}^{5}}{G{{k}_{B}}^{2}}\right)}^{\frac{1}{2}}$

From the particle perspective, the surface gravity is

(16) ${g}_{p}=\frac{G{M}_{p}}{{{L}_{p}}^{2}} =\frac{G{M}_{p}}{\frac{\hbar G}{{c}^{3}}} =\frac{{M}_{p}{c}^{3}}{\hbar}$

Inserting (14) into (16), ${g}_{p}=\frac{2{k}_{B}{T}_{p}c}{\hbar}$. Thus

(17) ${T}_{p}=\frac{1}{2}\left(\frac{\hbar}{c{k}_{B}}\right){g}_{p}$

which is none other than the Unruh temperature [10], ${T}_{Unruh}$ × π. (Please note that ${T}_{Unruh}$ itself would have been obtained if ${A}_{p}={{L}_{p}}^{2}$ had been used instead of $\pi{{L}_{p}}^{2}$; the latter was chosen so that N = 4 and not 4π.) At this point, we would like to understand why an atom of space is able to store four bits of information, and to identify each bit. But before we do so, let us confirm that the atom is actually the smallest unit of entropy. By using the Bekenstein-Hawking entropy formula for a black hole [11], ${S}_{BH}=\frac{1}{4}{k}_{B}\frac{{A}_{surface}}{{A}_{p}}$, we discover by using (12) that

(18) ${S}_{p}={k}_{B}$

To elucidate the number of degrees of freedom each atom of space represents, we now have to consider a collection of such atoms.

# 4 The thermodynamics of a collection of atoms of space

Let us now consider a collection of N such atoms of space, confined to a large volume V whose radius $L\gg{L}_{p}$. We will model this collection as an ideal gas whose density is $\rho=\frac{N{M}_{p}}{V}$ and whose pressure is $P=\frac{ N{k}_{B}T}{V}$, where T is the temperature of this collection of atoms. We will now invoke another principle, the fractal or scale relativity principle, as per Nottale, which equates the temperature ${T}_{p}$ of the surface of an atom with the temperature T of the collection of atoms (Figure 4).

#### Figure 4 A collection of atoms of space

To determine the speed v at which a disturbance can propagate through this collection of atoms, we can model this propagation on that of sound in an ideal gas [12] and deduce the adiabatic index $\gamma$ as follows:

(19) $v={\left(\gamma\frac{P}{\rho}\right)}^{\frac{1}{2}}={\left(\gamma\frac{\left(\frac{N{k}_{B}T}{V}\right)}{\left(\frac{N{M}_{p}}{V}\right)}\right)}^{\frac{1}{2}}={\left(\gamma\frac{{k}_{B}T}{{M}_{p}}\right)}^{\frac{1}{2}}$

Substituting (14) into (19), we get

(20) $v={\left(\gamma\frac{\frac{1}{2}{M}_{p}{c}^{2}}{{M}_{p}}\right)}^{\frac{1}{2}}=c{\left(\frac{\gamma}{2}\right)}^{\frac{1}{2}}$

If we equate the speed of propagation of a disturbance to the speed of light c in a vacuum, then this implies $\gamma=2$. Since the adiabatic index is related to the degrees of freedom ${f}_{df}$ of a gas molecule by the equation [13]
$\gamma=\frac{{f}_{df}+2}{{f}_{df}}$

it follows that

${f}_{df}=\frac{2}{\gamma-1}=\frac{2}{2-1}=2$

Thus an atom of space has 2 degrees of freedom which can store 4 (q)bits of information! We can thus model this atom of space by representing its two matter-antimatter partons, with gravitational and electric charges ${m}^{+}{q}^{-}$ and ${m}^{-}{q}^{+}$, as orthogonal spin states, each with $J=\pm\hbar$. This can be modeled as shown in Figure 5. We therefore end up with four degenerate states, ↑↑, ↑↓, ↓↓, ↓↑, hence the ability to store 4 (q)bits of information.

#### Figure 5 Model representing the information content of the atom of space

To reiterate: since these atoms of space have a unique mass-scale, charge-scale, length-scale and time-scale, these Planck particles provide the yardstick by which all of the physical quantities in nature can be measured. These atoms are equivalent to extremal Reissner-Nordström black holes, and since they do not produce any Hawking radiation, their stability is guaranteed [14, 15, 16, 17, 18, 19, 20]!

# 5 Einstein’s theories of relativity revisited

This model of space is still consistent with the conclusions of special relativity. The postulate that the speed of light is constant in a vacuum finds a natural interpretation: the speed of information transfer (such as light) from a source to a detector is independent of the velocity of the source and of the detector, since that speed depends only on the physical properties of the atoms of space. I conjecture that it is the flipping of the spin states that is responsible for the traveling disturbance in space, as space now acts as the ultimate digital storage and transportation medium. Moreover, gravity as an entropic force à la Verlinde [21], or as a thermodynamic state [22], becomes more credible in this reformulation.

# 6 The double slit experiment revisited

Let us now model the movement of a particle of mass m with velocity v in a vacuum containing the collection of atoms of space. Since v is always less than c, the speed of light, the speed of the disturbance will always be greater than the speed of the particle. This throws the double-slit experiment into a new light: it is the propagation of the disturbances in the vacuum of the atoms of space that interferes with itself (just as water waves interfere with themselves), and thus guides the trailing particle, after it passes the slit, to its final location on the screen; hence the pilot wave à la Bohm’s interpretation of quantum mechanics. Hence quantum mechanics has now acquired its correct interpretation. It is truly remarkable that from a thermodynamic perspective we have identified the quantum of space and also solved the mystery of the double-slit experiment at the same time.

# 7 Triality and the emerging worldview

Thus we see that from the empirical observations of Newton’s law of gravitation, Coulomb’s law of electrostatics, the maximum speed of light and the quantum of energy, we have discovered the atoms of space by demanding consistency of the equations at the quantum, classical (relativistic) and statistical (thermodynamic) levels. The connections between these levels are encoded in the wave-particle duality and the holographic principle, respectively. In this sense, our description of space is complete.
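As a sanity check on sections 3 and 4, the following sketch, again mine rather than the paper's, evaluates the bit count of eq. (12), the temperature of eq. (15), the half-Planck-charge result from the extremal Reissner-Nordström condition, and the $\gamma = 2$ to ${f}_{df} = 2$ step, and enumerates the four spin states.

```python
# Numeric check of N_p, T_p, the dipole charge, the degrees of freedom,
# and the four degenerate spin states of the proposed atom of space.
import math

hbar, G, c = 1.054571817e-34, 6.67430e-11, 2.99792458e8
k_B = 1.380649e-23        # Boltzmann constant, J/K
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

L_p = math.sqrt(hbar * G / c**3)
M_p = math.sqrt(hbar * c / G)

N_p = (4 * math.pi * L_p**2) / (math.pi * L_p**2)    # eq. (12): 4 bits
T_p = M_p * c**2 / (2 * k_B)                         # eq. (15), in kelvin

Q_planck = math.sqrt(4 * math.pi * eps0 * c * hbar)  # Planck charge, C
Q_dipole = 0.5 * Q_planck                            # charge of each parton

gamma = 2.0
f_df = 2.0 / (gamma - 1.0)    # kinetic theory: gamma = (f_df + 2) / f_df

states = [s1 + s2 for s1 in "↑↓" for s2 in "↑↓"]     # the 4 degenerate states

print(f"N_p = {N_p:.0f} bits, T_p = {T_p:.3e} K, f_df = {f_df:.0f}")
print(f"Q_dipole = {Q_dipole:.3e} C = Q_planck / 2")
print("spin states:", states)
```

The printed temperature, about 7.1 × 10³¹ K, is half the usual Planck temperature, matching eq. (15).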
This triality between the particle framework, the wave/light framework, and the holographic/thermodynamic framework can be viewed as dealing with 0-dimensional (0-brane), 1-dimensional (1-brane = string), and 2-dimensional (2-brane = membrane) points of view, respectively, all equivalent or dual representations or codifications of the information of any physical system. String theory uses the light-frame perspective and M-theory uses a thermodynamic perspective. I conjecture that Matrix theory uses the 0-brane perspective. Also, loop quantum gravity provides a simple framework (using spin networks) to represent the degrees of freedom of space. If these atoms of space represent the background of space, then we see that background-dependent and background-independent approaches are equivalent, because although the atoms of space have mass and charge, these cancel each other out and appear not to exist. Hence we have something and nothing at the same time! Is this what gauge theory and renormalization theory were trying to tell us? In a way, we have discovered the Rosetta stone of physics, the missing link, or, as Edward Witten has suggested, the fundamental degrees of freedom for a theory of fundamental interactions or fundamental causation. It would be interesting to see how far this new emerging worldview can go toward providing a complete and consistent quantum field theory of gravity, or quantum gravity. And finally, we are left asking the question: if these atoms have energy, and are not gravitationally and electromagnetically visible, have we also stumbled on the elementary particles of dark energy?

# 8 A possible path to non-commutative geometry, twistor theory, unitary theory and E8 theory

A fundamental assumption that may have been a stumbling block to finding a consistent and complete theory of fundamental interactions has to do with the gravitational interaction of matter and antimatter, as there is no experimental evidence to date, given the feebleness of gravitational attraction between elementary particles, to determine conclusively whether it is attractive or repulsive. Contrary to the orthodox view that all matter attracts, whether matter or antimatter, several researchers argue on symmetry and cosmological grounds that if matter and antimatter repel, then many issues in fundamental physics and cosmology find an elegant solution [23, 24, 25, 26, 27, 28, 29]. In our model, the atom of space behaves like a gravitational and electromagnetic dipole, with each degree of freedom having mass $m=\frac{{M}_{p}}{2}$ and charge $q=\frac{{Q}_{p}}{2}$. Since there are 2 degrees of freedom, and the energy/mass is divided equally between the two masses ${m}^{+}$ and ${m}^{-}$ of opposite charges (matter and antimatter respectively), the angular momentum for each degree of freedom should be $\frac{1}{2}\hbar$. It is the repulsion between the matter-antimatter degrees of freedom that prevents the particle-antiparticle degrees of freedom from annihilating each other. Looking at it from the dual point of view, it is the attraction between the charges that prevents the degrees of freedom from flying apart. A recent article on unitarity methods suggests that the mathematics of two interacting gravitons is identical to that of two interacting gluons [30]. Could our model of the atom in this note be a physical realization of their mathematical computations? Also, if these matter-antimatter partons anti-commute, then have we not found a basis for Alain Connes’ non-commutative geometry [31] and Sir Roger Penrose’s twistor theory [32]?
And finally, if we are using geometrical principles and properties of circles of light twisting and turning over each other in well-defined ways in 3-dimensional space, can this be extended to higher dimensions and make contact with Garrett Lisi’s program [33, 34] of constructing a geometric model of elementary particle interactions? It would be interesting to go back and analyze our supergravity models with the added feature that matter and antimatter repel, while matter-matter and antimatter-antimatter attract, and see whether those models are then fully renormalizable. We may then be able to make quantum field theory, superstring theory, loop quantum gravity, E8 theory and eventually quantum gravity consistent and complete.

# 9 Conclusion

We have arrived at this model of space by demanding nothing more and nothing less than consistency among the formulas for circular motion, Einstein’s mass-energy equivalence, wave-particle duality, the Planck-Einstein equation, Newton’s law of universal gravitation, the Schwarzschild solution of general relativity, the Reissner–Nordström metric and black hole thermodynamics; hence paving the way for a potentially simple and elegant theory of quantum gravity.

# References

[1] Mass–energy equivalence. http://en.wikipedia.org/wiki/Mass%E2%80%93energy_equivalence
[2] Planck constant. http://en.wikipedia.org/wiki/Planck_constant
[3] Wave–particle duality. http://en.wikipedia.org/wiki/Wave%E2%80%93particle_duality
[4] Formulas for uniform circular motion. http://en.wikipedia.org/wiki/Circular_motion#Formulas_for_uniform_circular_motion
[5] Compton wavelength. http://en.wikipedia.org/wiki/Compton_wavelength
[6] Planck units. http://en.wikipedia.org/wiki/Planck_units
[8] Reissner–Nordström metric. http://en.wikipedia.org/wiki/Reissner%E2%80%93Nordstr%C3%B6m_metric
[9] Equipartition theorem. http://en.wikipedia.org/wiki/Equipartition_theorem
[10] Unruh effect. http://en.wikipedia.org/wiki/Unruh_effect#The_equation
[11] Black hole thermodynamics. http://en.wikipedia.org/wiki/Black_hole_thermodynamics#Black_hole_entropy
[12] Speed of sound. http://en.wikipedia.org/wiki/Speed_of_sound#Speed_in_ideal_gases_and_in_air
[14] Ha, Y.K., 2005. The Gravitational Energy of a Black Hole. arXiv:gr-qc/0508041. http://arXiv.org/abs/gr-qc/0508041
[15] Ha, Y.K., 2005. Horizon Mass Theorem. arXiv:gr-qc/0509063. http://arXiv.org/abs/gr-qc/0509063
[16] Ha, Y.K., 2007. A new theorem for black holes. arXiv:gr-qc/0703130. http://arXiv.org/abs/gr-qc/0703130
[17] Ha, Y.K., 2008. Quantum Black Holes As Elementary Particles. arXiv:0812.5012. http://arXiv.org/abs/0812.5012
[18] Ha, Y.K., 2009. Are Black Holes Elementary Particles? arXiv:0906.3549. http://arXiv.org/abs/0906.3549
[19] Ha, Y.K., 2010. Is There Unification in the 21st Century? arXiv:1007.2873. http://arXiv.org/abs/1007.2873
[20] Ha, Y.K., 2011. Severe Challenges In Gravity Theories. arXiv:1106.6053. http://arXiv.org/abs/1106.6053
[21] Verlinde, E.P., 2010. On the Origin of Gravity and the Laws of Newton. arXiv:1001.0785. http://arXiv.org/abs/1001.0785
[22] Padmanabhan, T., 2010. Lessons from Classical Gravity about the Quantum Structure of Spacetime. arXiv:1012.4476. http://arXiv.org/abs/1012.4476
[23] Hajdukovic, D.S., 2008. Dark matter, dark energy and gravitational proprieties of antimatter. arXiv:0810.3435. http://arXiv.org/abs/0810.3435
[24] Hajdukovic, D.S., 2011. Quantum vacuum and dark matter. arXiv:1111.4884. http://arXiv.org/abs/1111.4884
[25] Hajdukovic, D.S., 2011.
Is dark matter an illusion created by the gravitational polarization of the quantum vacuum? arXiv:1106.0847. http://arXiv.org/abs/1106.0847
[26] Hajdukovic, D.S., 2012. Quantum Vacuum and Virtual Gravitational Dipoles: The solution to the Dark Energy Problem? arXiv:1201.4594. http://arXiv.org/abs/1201.4594
[27] Villata, M., 2010. Gravitational interaction of antimatter. arXiv:1003.1635. http://arXiv.org/abs/1003.1635
[28] Villata, M., 2011. CPT symmetry and antimatter gravity in general relativity. arXiv:1103.4937. http://arXiv.org/abs/1103.4937
[29] Villata, M., 2012. “Dark energy” in the Local Void. arXiv:1201.3810. http://arXiv.org/abs/1201.3810
[30] Bern, Z., Dixon, L.J. & Kosower, D.A., 2012. Loops, trees and the search for new physics. Sci.Am., 306N5, pp.34–41. http://www.scientificamerican.com/article/loops-trees-and-the-search-for-new-physics-extreme-physics-special/
[31] Connes, A., 1994. Noncommutative Geometry, 1st ed., Academic Press. http://www.alainconnes.org/docs/book94bigpdf.pdf
[32] Penrose, R., 2007. The Road to Reality: A Complete Guide to the Laws of the Universe, Vintage. http://www.amazon.com/The-Road-Reality-Complete-Universe/dp/0679776311
[33] Lisi, A.G., 2007. An Exceptionally Simple Theory of Everything. arXiv:0711.0770. http://arXiv.org/abs/0711.0770
[34] Lisi, A.G. & Weatherall, J.O., 2010. A geometric theory of everything. Sci.Am., 303N6, pp.30–37. http://www.cs.virginia.edu/~robins/A_Geometric_Theory_of_Everything.pdf
2022-09-28 00:35:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 146, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.774992823600769, "perplexity": 644.3342323825995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00338.warc.gz"}
http://www.risc.jku.at/people/talks.php?id=140
# RISC Talk Announcement

Speaker: James Sellers, Penn State University, USA
Title: Old and New Results for Generalized Frobenius Partition Functions
Date: 06.03.2013, 14:00--16:00
Location: RISC Seminar room

Abstract: In his 1984 AMS Memoir, George Andrews defined two families of generalized Frobenius partition functions which he denoted $\phi_k(n)$ and $c\phi_k(n)$ where $k\geq 1.$ Both of these functions "naturally" generalize the unrestricted partition function $p(n)$ since $p(n) = \phi_1(n) = c\phi_1(n)$ for all $n.$ In his Memoir, Andrews proved (among many other things) that, for all $n\geq 0,$ $c\phi_2(5n+3) \equiv 0\pmod{5}.$ Soon after, many authors proved congruence properties for various generalized Frobenius partition functions, typically for small values of $k.$ In this talk, I will discuss a variety of these past congruence results (including the recent work of Paule and Radu). I will then transition to very recent work of Baruah and Sarmah who, in 2011, proved a number of congruence properties for $c\phi_4$, all with moduli which are powers of 4. I will then provide an elementary proof of a new congruence for $c\phi_4$ by proving this function satisfies an unexpected result modulo 5. (The proof relies on Baruah and Sarmah's results as well as work of Srinivasa Ramanujan.) I will then close with comments about current and future work.
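Congruences of this kind are easy to explore numerically. Since the abstract notes that $p(n) = \phi_1(n) = c\phi_1(n)$, the short sketch below (my illustration, not part of the announcement) verifies Ramanujan's prototype congruence $p(5n+4) \equiv 0 \pmod 5$, the $k = 1$ analogue of the $c\phi_2(5n+3) \equiv 0 \pmod 5$ result quoted above; computing $c\phi_2$ itself would require Andrews' generating functions.

```python
# Verify Ramanujan's congruence p(5n+4) = 0 (mod 5) for small n, where p(n)
# is the ordinary partition function (= phi_1(n) = cphi_1(n) per the abstract).

def partition_counts(n_max):
    """p(0..n_max) by dynamic programming: add one part size at a time."""
    p = [1] + [0] * n_max
    for part in range(1, n_max + 1):
        for total in range(part, n_max + 1):
            p[total] += p[total - part]
    return p

p = partition_counts(199)
assert all(p[5 * n + 4] % 5 == 0 for n in range(40))
print("p(5n+4) is divisible by 5 for n = 0..39, e.g. p(4) =", p[4])
```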
2017-10-21 23:19:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7665489315986633, "perplexity": 1251.5901955994561}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824899.75/warc/CC-MAIN-20171021224608-20171022004608-00370.warc.gz"}
https://www.aimsciences.org/journal/1534-0392/2019/18/1
# American Institute of Mathematical Sciences

ISSN: 1534-0392 eISSN: 1553-5258

## Communications on Pure & Applied Analysis

January 2019, Volume 18, Issue 1

2019, 18(1): 1-13 doi: 10.3934/cpaa.2019001
Abstract: In this paper, for a nematic liquid crystal system, we address the space-time decay properties of strong solutions in the whole space $\mathbb{R}^3$. Based on a parabolic interpolation inequality, a bootstrap argument and some weighted estimates, we obtain higher order derivative estimates for such a system.

2019, 18(1): 15-32 doi: 10.3934/cpaa.2019002
Abstract: In this article a global stability analysis of an infection load-structured epidemic model is performed using tools of dynamical systems theory. An explicit Duhamel formulation of the semiflow allows us to prove the existence of a compact attractor for the trajectories of the system. Then, according to the sharp threshold $\mathcal R_0$, the basic reproduction number of the disease, we make explicit the basins of attraction of the equilibria of the system and prove their global stability with respect to these basins, the attractiveness property being obtained using infinite dimensional Lyapunov functions.

2019, 18(1): 33-50 doi: 10.3934/cpaa.2019003
Abstract: The main contribution of the N-barrier maximum principle is that it provides rather generic a priori upper and lower bounds for linear combinations of the components of a vector-valued solution. We show that the N-barrier maximum principle (NBMP, C.-C. Chen and L.-C. Hung (2016)) remains true for $n$ $(n>2)$ species. In addition, a stronger lower bound in NBMP is given by employing an improved tangent line method. As an application of NBMP, we establish a nonexistence result for traveling wave solutions to the four species Lotka-Volterra system.

2019, 18(1): 51-64 doi: 10.3934/cpaa.2019004
Abstract: We consider the following quasilinear Schrödinger equation where $N≥ 1$, $0 < q(x)≤ \lim_{|x|\to∞}q(x)$, $g∈ C(\mathbb{R}^+, \mathbb{R})$ and $g(u)/u^3 \to 1$ as $u \to ∞$. We establish the existence of a positive solution to this problem by using the method developed by Szulkin and Weth [27,28].

2019, 18(1): 65-81 doi: 10.3934/cpaa.2019005
Abstract: This paper is concerned with constraint minimizers of an $L^2$-critical minimization problem (1) in $\mathbb{R}^N$ ($N≥ 1$) under an $L^2$-subcritical perturbation.
We prove that the problem admits minimizers with mass $ρ^\frac{N}{2}$ if and only if $0≤ρ < ρ^*: = \|Q\|^{\frac{4}{N}}_2$ for $b≥0$ and $0 < ρ ≤ρ^*$ for $b < 0$, where the constant $b$ comes from the coefficient of the perturbation term, and $Q$ is the unique positive radially symmetric solution of $Δ u(x)-u(x)+u^{1+\frac{4}{N}}(x) = 0$ in $\mathbb{R}^N$. Furthermore, we analyze rigorously the concentration behavior of minimizers as $ρ \nearrow ρ^*$ for the case where $b>0$, which shows that the concentration rates are determined by the subcritical perturbation instead of the local profiles of the potential $V(x)$.

2019, 18(1): 83-106 doi: 10.3934/cpaa.2019006
Abstract: Some existence and multiplicity results are established for a quasilinear elliptic problem driven by the Φ-Laplacian operator. One of the solutions is built as a ground state solution. In order to prove our main results we apply the Nehari method combined with the concentration compactness theorem in an Orlicz-Sobolev space framework. One of the difficulties in dealing with this kind of operator is the loss of homogeneity properties.

2019, 18(1): 107-128 doi: 10.3934/cpaa.2019007
Abstract: We consider a function $U$ satisfying a degenerate elliptic equation on $\mathbb{R}_ + ^{N + 1}: = (0, +∞)×{\mathbb{R}^N}$ with mixed Dirichlet-Neumann boundary conditions. The Neumann condition is prescribed on a bounded domain $\Omega\subset{\mathbb{R}^N}$ of class $C^{1, 1}$, whereas the Dirichlet data is on the exterior of $\Omega$. We prove Hölder regularity estimates of $\frac{U}{d_\Omega^s}$, where $d_\Omega$ is a distance function defined as $d_\Omega(z): = \text{dist}(z, {\mathbb{R}^N}\setminus\Omega)$ for $z∈\overline{\mathbb{R}_ + ^{N + 1}}$. The degenerate elliptic equation arises from the Caffarelli-Silvestre extension of the Dirichlet problem for the fractional Laplacian. Our proof relies on compactness and blow-up analysis arguments.

2019, 18(1): 129-158 doi: 10.3934/cpaa.2019008
Abstract: In the present paper, we consider the following Kirchhoff type problem where $a$ is a positive constant, $λ$ is a positive parameter, $V∈ L^{\frac{N}{2}}(\mathbb{R}^N)$ is a given nonnegative function and $2^*$ is the critical exponent. The existence of bound state solutions for Kirchhoff type problems with critical exponents in the whole of $\mathbb R^N$ ($N≥5$) has never been considered so far. We obtain sufficient conditions for the existence of bound state solutions in high dimensions $N≥4$, and in particular it is the first time the case $N≥5$ has been considered in the literature.

2019, 18(1): 159-180 doi: 10.3934/cpaa.2019009
Abstract: In this paper, we consider a viscoelastic plate equation with a logarithmic nonlinearity.
Using the Galerkin method and the multiplier method, we establish the existence of solutions and prove an explicit and general decay rate result. This result extends and improves many results in the literature, such as Gorka [19], Hiramatsu et al. [27] and Han and Wang [26].

2019, 18(1): 181-193 doi: 10.3934/cpaa.2019010
Abstract: An optimal condition is given for the existence of positive solutions of a nonlinear Kirchhoff PDE with strong singularities. A byproduct is that $-2$ is no longer the critical position for the existence of positive solutions of PDEs with singular potentials and negative powers of the form $-|x|^{\alpha}\Delta u = u^{-\gamma}$ in $Ω$, $u = 0$ on $\partial \Omega$, where $\Omega$ is a bounded domain of ${\mathbb{R}}^{N}$ containing 0, with $N \ge 3$, $\alpha \in (0, N)$ and $-\gamma \in (-3, -1)$.

2019, 18(1): 195-225 doi: 10.3934/cpaa.2019011
Abstract: Two-phase flow of two Newtonian incompressible viscous fluids with a soluble surfactant and different densities of the fluids can be modeled within the diffuse interface approach. We consider a Navier-Stokes/Cahn-Hilliard type system coupled to non-linear diffusion equations that describe the diffusion of the surfactant in the bulk phases as well as along the diffuse interface. Moreover, the surfactant concentration influences the free energy and therefore the surface tension of the diffuse interface. For this system, existence of weak solutions globally in time for general initial data is proved. To this end a two-step approximation is used that consists of a regularization of the time-continuous system in the first step and a time discretization in the second.

2019, 18(1): 227-236 doi: 10.3934/cpaa.2019012
Abstract: In this paper, we prove the existence of positive and negative solutions to p-Laplacian eigenvalue problems with supercritical exponent. This extends previous results on the problems with subcritical and critical exponents.

2019, 18(1): 237-253 doi: 10.3934/cpaa.2019013
Abstract: In this paper, we study the following critical system with the fractional Laplacian: By using the Nehari manifold, under proper conditions, we establish the existence and nonexistence of a positive least energy solution of the system.

2019, 18(1): 255-284 doi: 10.3934/cpaa.2019014
Abstract: In this work, we are concerned with a class of parabolic-elliptic chemotaxis systems with the prototype given by with nonnegative initial condition for $u$ and homogeneous Neumann boundary conditions in a smooth bounded domain $Ω\subset \mathbb{R}^n(n≥ 2)$, where $χ, b, κ>0$, $a∈ \mathbb{R}$ and $θ>1$.
First, using ideas different from [9,11], we re-obtain the boundedness and global existence for the corresponding initial-boundary value problem under either of two conditions. Next, carrying out bifurcation from "old multiplicity", we show that the corresponding stationary system exhibits pattern formation for an unbounded range of the chemosensitivity $χ$, and the emerging patterns converge weakly in $L^θ(Ω)$ to some constants as $χ \to ∞$. This provides more details and also fills a gap left in Kuto et al. [13] for the particular case $θ = 2$ and $κ = 1$. Finally, for $θ = κ+1$, the global stabilities of the equilibria $((a/b)^{\frac{1}{κ}}, a/b)$ and $(0,0)$ are comprehensively studied and explicit convergence rates are computed, which exhibits the effects of chemotaxis and logistic damping on the long time dynamics of solutions. These stabilization results indicate that no pattern formation arises for small $χ$ or large damping rate $b$; on the other hand, they cover and extend He and Zheng's [6, Theorems 1 and 2] for logistic source and linear secretion ($θ = 2$ and $κ = 1$, where convergence rate estimates were shown) to generalized logistic source and nonlinear secretion.

2019, 18(1): 285-300 doi: 10.3934/cpaa.2019015
Abstract: In this paper, we investigate the following class of Choquard equations where $N≥ 3,~α∈ (0,N),~I_α$ is the Riesz potential and $F(s) = \int_{0}^{s}f(t)dt$. If $f$ satisfies almost necessary upper critical growth conditions in the spirit of Berestycki and Lions, we obtain the existence of a positive radial ground state solution by using the Pohožaev manifold and the compactness lemma of Strauss.

2019, 18(1): 301-322 doi: 10.3934/cpaa.2019016
Abstract: Let $N ≥ 3$ and $Ω \subset \mathbb{R}^N$ be a $C^2$ bounded domain. We study the existence of positive solutions $u ∈ H^1(Ω)$ of where $τ = 1$ or $-1$, $0 < s <2$, $2^*(s) = \frac{2(N-s)}{N-2}$ and $x_1, x_2 ∈ \overline{Ω}$ with $x_1 ≠ x_2$. First, we show the existence of positive solutions to the equation provided the positive $λ$ is small enough. In case one of the singularities is located on the boundary and the mean curvature of the boundary at this singularity is positive, the existence of positive solutions is obtained for any $λ > 0$ and some $s$ depending on $τ$ and $N$. Furthermore, we extend the existence theory of solutions to the equations to the case of multiple singularities.
2019, 18(1): 323-340 doi: 10.3934/cpaa.2019017
Abstract: In this paper we study a class of degenerate second-order elliptic differential operators, often referred to as Fleming-Viot type operators, in the framework of function spaces defined on the $d$-dimensional hypercube $Q_d$ of $\mathbf{R}^d$, $d ≥1$. By making use mainly of techniques arising from approximation theory, we show that their closures generate positive semigroups both in the space of all continuous functions and in weighted $L^{p}$-spaces. In addition, we show that the semigroups are approximated by iterates of certain positive linear operators of polynomial type, which we introduce and study in this paper and which generalize the Bernstein-Durrmeyer operators with Jacobi weights on $[0, 1]$. As a consequence, after determining the unique invariant measure for the approximating operators and for the semigroups, we establish some of their regularity properties along with their asymptotic behaviours.

2019, 18(1): 341-360 doi: 10.3934/cpaa.2019018
Abstract: In this paper, some new results on the regularity of Kolmogorov equations associated to the infinite dimensional OU-process are obtained. As an application, the average $L^2$-error on $[0, T]$ of the exponential integrator scheme for a range of semi-linear stochastic partial differential equations is derived, where the drift term is assumed to be Hölder continuous with respect to the Sobolev norm $\|·\|_{β}$ for some appropriate $β>0$. In addition, under a stronger condition on the drift, a strong convergence estimate is obtained, which covers the result for SDEs with Hölder continuous drift.

2019, 18(1): 361-396 doi: 10.3934/cpaa.2019019
Abstract: The present paper is concerned with the spatial spreading speeds and traveling wave solutions of cooperative systems in space-time periodic habitats with nonlocal dispersal. It is assumed that the trivial solution ${\bf u} = {\bf 0}$ of such a system is unstable and the system has a stable space-time periodic positive solution ${\bf u^*}(t,x)$. We first show that in any direction $ξ∈ \mathbb{S}^{N-1}$, such a system has a finite spreading speed interval, and under a certain condition, the spreading speed interval is a singleton set, and hence the system has a single spreading speed $c^{*}(ξ)$ in the direction of $ξ$. Next, we show that for any $c>c^{*}(ξ)$, there are space-time periodic traveling wave solutions of the form ${\bf{u}}(t,x) = {\bf{Φ}}(x-ctξ,t,ctξ)$ connecting ${\bf u^*}$ and ${\bf 0}$, propagating in the direction of $ξ$ with speed $c$, where $Φ(x,t,y)$ is periodic in $t$ and $y$, and that there is no such solution for $c<c^{*}(ξ)$.
We also prove the continuity and uniqueness of space-time periodic traveling wave solutions when the reaction term is strictly sub-homogeneous. Finally, we apply the above results to nonlocal monostable equations and two-species competitive systems with nonlocal dispersal and space-time periodicity.

2019, 18(1): 397-424 doi: 10.3934/cpaa.2019020
Abstract: In this paper we construct the spectral expansion for the differential operator generated in $L_{2}(-∞, ∞)$ by an ordinary differential expression of arbitrary order with periodic complex-valued coefficients, by introducing new concepts such as essential spectral singularities and singular quasimomenta and using series with parentheses. Moreover, we find a criterion for the spectral expansion to coincide with the Gelfand expansion in the self-adjoint case.

2019, 18(1): 425-434 doi: 10.3934/cpaa.2019021
Abstract: Consider the second order self-adjoint discrete Hamiltonian system where $p(n), L(n)$ and $W(n, x)$ are $N$-periodic in $n$, and $\lambda$ lies in a gap of the spectrum $σ(\mathcal{A})$ of the operator $\mathcal{A}$, which is bounded and self-adjoint in $l^2(\mathbb{Z}, \mathbb{R}^{\mathcal{N}})$, defined by $(\mathcal{A}u)(n) = \triangle [p(n)\triangle u(n-1)]-L(n)u(n)$. We obtain a sufficient condition for the existence of nontrivial homoclinic orbits for the above system under a much weaker condition than $\lim_{|x|\to ∞}\frac{W(n, x)}{|x|^2} = ∞$ uniformly in $n∈ \mathbb{Z}$, which has been a common condition used in the existing literature. We also give three examples to illustrate our result.

2019, 18(1): 435-453 doi: 10.3934/cpaa.2019022
Abstract: In this manuscript, we provide a point-wise estimate for the 3-commutators involving fractional powers of the sub-Laplacian on Carnot groups of homogeneous dimension $Q$. This can be seen as a fractional Leibniz rule in the sub-elliptic setting. As a corollary of the point-wise estimate, we provide an $(L^{p}, L^{q})\to L^{r}$ estimate for the commutator, provided that $\frac{1}{r} = \frac{1}{p}+\frac{1}{q}-\frac{α}{Q}$ for $α ∈ (0, Q)$.

2019, 18(1): 455-478 doi: 10.3934/cpaa.2019023
Abstract: This paper deals with the exact controllability for a class of fractional evolution systems in a Banach space. First, we introduce a new concept of exact controllability and give a notion of mild solutions of the considered evolution systems via resolvent operators. Second, by utilizing semigroup theory, the fixed point strategy and Kuratowski's measure of noncompactness, the exact controllability of the evolution systems is investigated without Lipschitz continuity and growth conditions imposed on the nonlinear functions. The results are established under the hypothesis that the resolvent operator is differentiable and analytic, respectively, instead of supposing that the semigroup is compact.
An example is provided to illustrate the proposed results.

2019, 18(1): 479-492 doi: 10.3934/cpaa.2019024
Abstract: For any positive, decreasing to zero sequence $a_n$ such that $\sum a_n$ diverges, we consider the related series $\sum k_na_n$ and $\sum j_na_n$. Here, $k_n$ and $j_n$ are real sequences such that $k_n∈\{0,1\}$ and $j_n∈\{-1,1\}$. We study their convergence and characterize it in terms of the density of 1's in the sequences $k_n$ and $j_n$. We extend our results to series $\sum m_na_n$ with $m_n∈\{-1,0,1\}$ and apply them to study some associated random series.

2019, 18(1): 493-517 doi: 10.3934/cpaa.2019025
Abstract: In this paper, we study the following quasilinear Schrödinger equation where $N>4, 2^* = \frac{2N}{N-2}$, and $V: \mathbb{R}^N \to \mathbb{R}$ satisfies suitable assumptions. Unlike assuming $g∈ \mathcal{C}^1(\mathbb{R},\mathbb{R})$, we only need $g∈ \mathcal{C}(\mathbb{R},\mathbb{R})$. By using a change of variable, we obtain the existence of ground state solutions with general critical growth. Our results extend some known results.

2019, 18(1): 519-538 doi: 10.3934/cpaa.2019026
Abstract: In this paper, we are concerned with the fractional Choquard equation where $\epsilon>0$ is a parameter, $0<α<1$, $0<μ<3$, $2_{μ,α}^* = \frac{6-μ}{3-2α}$ is the critical exponent in the sense of the Hardy-Littlewood-Sobolev inequality and the fractional Laplace operator, $f$ is a continuous subcritical term, and $F$ is the primitive function of $f$. By virtue of the method of the Nehari manifold and Ljusternik-Schnirelmann category theory, we prove that the equation has a ground state for $\epsilon$ small enough and investigate the relation between the number of solutions and the topology of the set where $V$ attains its global minimum for small $\epsilon$. We also obtain sufficient conditions for the nonexistence of ground states.

2019, 18(1): 539-558 doi: 10.3934/cpaa.2019027
Abstract: The finite time blow-up of solutions for 1-D NLS with oscillating nonlinearities is shown in two domains: (1) the whole real line, where the nonlinear source acts in the interior of the domain, and (2) the right half-line, where the nonlinear source is placed at the boundary point. The distinctive feature of this work is that the initial energy is allowed to be non-negative and the momentum is allowed to be infinite, in contrast to the previous literature on the blow-up of solutions with time dependent nonlinearities.
The common finite momentum assumption is removed by using a compactly supported or rapidly decaying weight function in the virial identities, an idea borrowed from [18]. At the end of the paper, a numerical example satisfying the theory is provided.
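The sum-of-subseries abstract above (2019, 18(1): 479-492) is easy to illustrate numerically; the following small sketch is an illustration of mine, not taken from the paper. With $a_n = 1/n$, a 0/1 sequence $k_n$ whose 1's have density zero (here, 1's only at perfect squares) yields a convergent subseries, while density 1/2 (1's at even n) yields divergence.

```python
# Compare two subseries of the divergent harmonic series sum(1/n):
# k_n = 1 at perfect squares (density 0)   -> converges to pi^2/6
# k_n = 1 at even n          (density 1/2) -> diverges like (1/2) log N
import math

N = 10**6
sparse = sum(1.0 / n for n in range(1, N + 1) if math.isqrt(n) ** 2 == n)
dense = sum(1.0 / n for n in range(2, N + 1, 2))

print(f"density-0 subseries   ~ {sparse:.4f} (pi^2/6 = {math.pi**2 / 6:.4f})")
print(f"density-1/2 subseries ~ {dense:.2f} (still growing with N)")
```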
2019-10-18 02:15:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9468337893486023, "perplexity": 937.7361008928658}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677412.35/warc/CC-MAIN-20191018005539-20191018033039-00259.warc.gz"}
https://www.controlbooth.com/threads/used-console-pricing.41703/
# USED CONSOLE PRICING

#### Amiers ##### Renting to Corporate One Fixture at a Time.
If I were a Retail Production house I would list it at 15k. But like it has been said, there is no formula to it.

#### DJ EZ-C ##### New Member
If I were a Retail Production house I would list it at 15k. But like it has been said, there is no formula to it.
Retail pricing is $12,800, MAP is $8,295, and wholesale (cost for the production house) is actually $7,796, so how could you list one that is two years old and has an expired warranty for $15,000? That does not make sense, so please explain your logic. Why would anybody buy something two years old for $15k that they can buy brand new for $12.8k? This is NOT a full-size MA, nor an MA Light or MA Ultra-Light. This is for the MA Dot2 Core.

#### jaightaylor ##### Member
Discussing pricing details, even two years old, on a public forum is considered poor form.
Last edited:

#### JD ##### Well-Known Member
Depends what happened in the last two years! (There always has to be a reason a seller is selling something.)

#### Amiers ##### Renting to Corporate One Fixture at a Time.
If you know the numbers, then why are you asking? I shot out a number; there is no logic behind it other than you said it was almost brand new, so I marked it up 100% and rounded down to a whole number.
Jay Ashworth

#### RonHebbard ##### Well-Known Member
It depends on what it is. You're confusing price for value. Different consoles depreciate at different rates, so there isn't just a flat rate you can apply and be done with it. Do you mean you'd pay less for a used Scrimmer than for a used Leprecon?
Edit: To correct the spelling of Leprecon. Toodleoo! Ron Hebbard.
Last edited:

#### DJ EZ-C ##### New Member
Discussing pricing details, even two years old, on a public forum is considered poor form.
If you don't ask the question, how are you supposed to get the answer? I tried to get an answer without being too specific and got nothing useful back. So I got specific in response to your rather vague answer of "It depends on what it is. You're confusing price for value. Different consoles depreciate at different rates, so there isn't just a flat rate you can apply and be done with it." If that wasn't asking for more specifics, I don't know what is.

#### DJ EZ-C ##### New Member
When purchasing moving heads, as an example, after two years of use I would expect those items to have depreciated by 50%.
Depends what happened in the last two years! (always has to be a reason a seller is selling something.)
It's going for $8,600 retail. 2 years old... no real warranty... $4k-5k? Selling used stuff is tough right now. http://www.solarisnetwork.com/search_results.php?sq=Dot2
Thank you for your input. We have talked and I value your opinion.

#### DJ EZ-C ##### New Member
Depends what happened in the last two years! (always has to be a reason a seller is selling something.)
The unit is in excellent condition. I have inspected it personally. Other than fingerprints on the touch screens, it looks fresh out of the box. It was one of two purchased for rentals and saw very little demand. The factory warranty is expired. It has never required repairs, verified by checking the serial number with the factory and the distributor.
The seller will provide some level of warranty, probably a few months. A) What is it worth now, or conversely B) how much would it have depreciated from its original purchase price (25%, 40%, more)?

#### TheaterEd ##### Renaissance Man Fight Leukemia
Do you mean you'd pay less for a used Scrimmer than for a used Leprecaun? Toodleoo! Ron Hebbard.
Lol, this comment is so old I had to use google to understand it. Well played, sir.
A) What is it worth now, or conversely B) how much would it have depreciated from its original purchase price (25%, 40%, more)?
A: As with any used gear, it is worth whatever someone is willing to pay for it. I'm always surprised at how much people are asking for used consoles. They generally don't depreciate much, due to the fact that most people seem to hang onto their old boards as backups, so by the time they hit the market they are either two generations behind or have seen so many miles of road use that you're rolling the dice. Based on a cursory search of the product, I would want to see from current users whether they are experiencing buyer's remorse by not just going to the XL-F or XL-B right away and needing to buy the additional wings. Other than that, at two years old, there really isn't much of a used market for the item yet. I personally would look at it this way: If I would need an additional wing for my usage, then subtract the cost of the wing from the new price, and that's around what I'd pay. It's like a free upgrade, just without a warranty. If this console fits your needs as is, then I'd be willing to pay a bit more, but not too much. Again: this is totally hypothetical for me, but that is my line of reasoning, flawed though it may be. Definitely take the time to kick around their forum for a bit. http://forum.ma-dot2.com/

#### MNicolai ##### Well-Known Member Fight Leukemia
If you don't ask the question how are you supposed to get the answer? I tried to get an answer without being too specific and got nothing useful back as an answer.
He's talking about dealer cost. MAP and retail are considered public pricing, but dealer cost is confidential. Not that it doesn't get discussed from time to time, but it shouldn't be brought up in a public forum. It's irrelevant to this conversation, though. What it costs a dealer to purchase the equipment is an irrelevant number for a customer. The time, labor, overhead, warranty, and technical support staff required for running a business with dealerships and representing a product line that requires deep technical knowledge mean that anyone who sells any product has to charge good money for what they sell. You generally shouldn't pay more than MAP unless you're working with a dealer who goes long and far out of their way to support you, but you should never hold it against a dealer when they buy a product for $5,000 and sell it to you for $6,000. This is all to say that just because the dealer paid $xxxx for it doesn't mean you should feel entitled to pay less than that. Having some hours on the console and no warranty certainly depreciates the value below MAP. Consoles in general don't depreciate much until they are made obsolete. They're taken pretty good care of. Until they become teenagers, the hardware stays in pretty good condition. The advent of consoles being built at least somewhat on stock PC hardware also means that if the power supply, hard drive, or motherboard blows up, usually the console is still salvageable without a huge amount of expense.
There was a stretch of time where you could still get $6k for a used 15-year-old ETC Express console because people loved them so much. Then the Element hit the market and the price tanked. You could give the console away, but that was about it. Anyone willing to spend $3k on a used, crusty old console was willing to scrape together another $1,500 for something brand new. Thus, the used market for Express consoles imploded and most people found more value in having a back-up console sitting in a closet than in selling off their Express for $500. If MAP actually is what you say it is, $8,295, somewhere in the $5,000 +/- $500 range is what I would expect to spend on a two-year-old, almost brand new console, technically relevant, without warranty. More if it comes with a road case, external monitors, cables, nodes, etc. That's what I would expect off the bat. As for how saturated the market is with these things, how many people want to buy them, and how many people are trying to get rid of them, that will shift prices up or down accordingly. In that same vein, if you can pick up an Element 40/250 for $4,500 new, then it stands to reason that you would pay a bit more than that for a superior, 4096-channel console with a few hours on it.
Last edited:

#### DJ EZ-C ##### New Member
That's what I would expect off of the bat. As for how saturated the market is with these things, how many people want to buy them, and how many people are trying to get rid of them, that will shift prices up or down accordingly. In that same vein, if you can pick up an Element 40/250 for$4500 new, then it stands to reason that you would pay a bit more than that for a superior, 4096-channel console with a few hours on it. Thank you for a well thought out and logical answer. #### microstar ##### Well-Known Member Do you mean you'd pay less for a used Scrimmer than for a used Leprecaun? Toodleoo! Ron Hebbard. Ron, you were a day early with your "Leprecaun" reference . The lighting manufacturer is spelled "Leprecon". #### JohnD ##### Well-Known Member Fight Leukemia Ron, you were a day early with your "Leprecaun" reference . The lighting manufacturer is spelled "Leprecon". Word @microstar , ya know the advice that you shouldn't thwack a hornet's nest with a stick, just take it that you have been warned. #### RonHebbard ##### Well-Known Member Word @microstar , ya know the advice that you shouldn't thwack a hornet's nest with a stick, just take it that you have been warned. I'm sorry but I'm not understanding. BTW; During my shop days I installed a number (6 in fact) of Leprecon's 6 dimmer packs into a pair of two story set pieces slated for flip-flop touring across North America with Sunset Boulevard. (This was their attempt to scale the two story mansion down to 80% of its size and 50% of its former 40,000 pound / 20 ton weight. As I'm recalling, the version with the full size flown mansion was only mounted in Los Angeles, New York City, Toronto and Vancouver. A 20 ton fly piece is an unusually heavy load to bring into a theatre requiring very special re-design and reinforcement of a theatres grid. Feller Precision handled the grid reinforcement in all of the theatres.) I believe we sent both of the new touring sets to Denver for final fit-up when Mr. Webber became upset with how one of his then new productions, "Whistle Down The Wind" possibly, was received / reviewed in Boston and cancelled his further plans for "Sunset." I suspect "Whistle Down The Wind" ended in Boston without ever continuing on to Broadway. Toodleoo! Ron Hebbard.
https://web2.0calc.com/questions/percentage-algebra-question
+0 # Percentage/Algebra Question 0 465 1

Given that x is 25% less than s and y is 20% greater than x, find y in terms of s.

Guest Dec 14, 2015

#1 +19632 +15

Given that x is 25% less than s and y is 20% greater than x, find y in terms of s.

$$\begin{array}{rcl} x &=& s\cdot(100\%-25\%) \;=\; 0.75\,s \\ y &=& x\cdot(100\%+20\%) \;=\; 1.2\,x \\ \hline y &=& 1.2 \cdot 0.75 \cdot s \\ y &=& 0.9\,s \\ \end{array}$$

heureka Dec 14, 2015
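A quick numeric spot check of this result (plain Python, my own addition):

```python
# Verify y = 0.9*s on a sample value of s.
s = 200
x = 0.75 * s   # x is 25% less than s
y = 1.2 * x    # y is 20% greater than x
print(y, 0.9 * s)   # 180.0 180.0 (the two agree)
```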
http://sjmathtube.com/grade-3/engineering-probability-and-statistics-3-probability-distribution-function-continuous
## ENGINEERING PROBABILITY AND STATISTICS - 3: PROBABILITY DISTRIBUTION FUNCTION (CONTINUOUS)

From the PROBABILITY AND STATISTICS playlist: https://www.youtube.com/watch?v=O5sQ7...
http://davidclaytonthomas.com/delhi-minemat/893cce-horizontal-tangent-line-calculator
horizontal tangent line calculator

The tangent line is horizontal on a curve where the slope is 0. A tangent line calculator is a free online tool that gives the slope and the equation of the tangent line; it will find the tangent line to the explicit, implicit, parametric, or polar curve at a given point, with steps shown. BYJU'S online tangent line calculator makes the calculations faster and easier and displays the output in a fraction of seconds. On top of that, it can act as a horizontal tangent calculator by helping you find the vertical and horizontal tangent lines as well. The tangent line equation calculator computes the equation of the tangent line to a curve at a given abscissa, with calculation stages. Syntax: equation_tangent_line(function;number). Note: x must always be used as the variable. Here "horizontal" refers to the direction of the x-axis; a horizontal asymptote, by contrast, is a horizontal line that a rational function approaches when the degree of the denominator is higher than that of the numerator. Recall that with functions y = f(x), it was very rare to come across a vertical tangent.

To find the points where the tangent line is horizontal, take the first derivative of the function and set it equal to 0:

1) dy/dx = 9x^2 - 4x
2) 9x^2 - 4x = 0
3) x(9x - 4) = 0
4) x = 0, or x = 4/9

For the implicit curve f(x, y) = x^2 + xy + y^2 - 27 = 0, the horizontal tangent lines have f_x = 0 -> x = -y/2 and the vertical tangent lines have f_y = 0 -> x = -2y. So for horizontals, f(-y/2, y) = y^2/4 - y^2/2 + y^2 - 27 = 0 -> y = ±6, and for verticals, f(x, -x/2) = x^2 - x^2/2 + x^2/4 - 27 = 0 -> x = ±6. As appears from the graph, these answers are correct (the green line is the vertical tangent, the red lines are horizontal tangents).

For a parametric curve the derivative is dy/dx = (dy/dt)/(dx/dt); in one worked example, dy/dx = 2t/(12t^2) = 1/(6t). The curve has a vertical tangent where dx/dt = 0; in another example, setting 12t^3 + 2t = 0 gives t = 0, where (x, y) = (0, 0).

Tangents to polar curves work the same way: b) find the polar coordinates for the points where the tangent line is horizontal, for instance for the polar function r(θ) = 2 sin 8θ − cos θ. A related widget (added Mar 5, 2014 by Sravan75 in Mathematics) inputs the polar equation and a specific theta value and outputs the tangent line equation, slope, and graph.

In road design, a horizontal curve offers a switch between two tangent strips of roadway. In the next sections, we will explain horizontal curves, formulas to get the properties of horizontal curves, and methods to find those geometrical properties without using a horizontal tangent calculator.
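The recipes above are straightforward to reproduce symbolically. A minimal sketch assuming SymPy (my own illustration; it is independent of the online calculators described above):

```python
# Horizontal and vertical tangents, symbolically (SymPy assumed).
import sympy as sp

x, y = sp.symbols('x y')

# Explicit example: solve dy/dx = 9x^2 - 4x = 0
print(sp.solve(9*x**2 - 4*x, x))              # [0, 4/9]

# Implicit example: x^2 + x*y + y^2 = 27
f = x**2 + x*y + y**2 - 27
# Horizontal tangents: f_x = 0 together with f = 0
print(sp.solve([sp.diff(f, x), f], [x, y]))   # points (-3, 6) and (3, -6)
# Vertical tangents: f_y = 0 together with f = 0
print(sp.solve([sp.diff(f, y), f], [x, y]))   # points (-6, 3) and (6, -3)
```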
https://projecteuclid.org/euclid.twjm/1500403875
## Taiwanese Journal of Mathematics

### PERIODIC ASPECTS OF SEQUENCES GENERATED BY TWO SPECIAL MAPPINGS

#### Abstract

Let $\beta = \frac{q}{p}$ be a fixed rational number, where $p$ and $q$ are positive integers with $2 \leq p < q$ and $\gcd(p,q) = 1$. Consider two real-valued functions $\sigma(x) = \beta^x \mod 1$ and $\tau(x) = \beta x \mod 1$. For each positive integer $n$, let $s(n) = \sigma(n) = \frac{s(n)_1}{p} + \dots + \frac{s(n)_n}{p^n}$ and $t(n) = \tau^n(1) = \frac{t(n)_1}{p} + \dots + \frac{t(n)_n}{p^n}$ be the $p$-ary representations. In this paper, we study the periods of both sequences $S_k = \{s(n + k)_n\}_{n=1}^{\infty}$ and $T_k = \{t(n + k)_n\}_{n=1}^{\infty}$ for any non-negative integer $k$.

#### Article information

Source: Taiwanese J. Math., Volume 10, Number 4 (2006), 829-836.

Dates: First available in Project Euclid: 18 July 2017

Permanent link: https://projecteuclid.org/euclid.twjm/1500403875

Digital Object Identifier: doi:10.11650/twjm/1500403875

Mathematical Reviews number (MathSciNet): MR2229624

Zentralblatt MATH identifier: 1189.11015

Subjects: Primary: 11B99: None of the above, but in this section

#### Citation

Chou, Wun-Seng; Shiue, Peter J.-S. PERIODIC ASPECTS OF SEQUENCES GENERATED BY TWO SPECIAL MAPPINGS. Taiwanese J. Math. 10 (2006), no. 4, 829--836. doi:10.11650/twjm/1500403875. https://projecteuclid.org/euclid.twjm/1500403875

#### References

• R. L. Devaney, An Introduction to Chaotic Dynamical Systems, 2nd ed., Addison-Wesley, Redwood City, California, 1989.
• M. Drmota and R. F. Tichy, Sequences, Discrepancies and Applications, Lecture Notes in Mathematics, Vol. 1651, Springer-Verlag, Berlin-Heidelberg-New York, 1997.
• L. Flatto, J. C. Lagarias and A. D. Pollington, On the range of fractional parts $\{\xi(p/q)^n\}$, Acta Arith., 70 (1995), 125-147.
• K. Mahler, An unsolved problem on the powers of $3/2$, J. Austral. Math. Soc., 8 (1968), 313-321.
• A. Rényi, Representations for real numbers and their ergodic properties, Acta Math. Acad. Sci. Hungar., 8 (1957), 472-493.
• R. Tijdeman, Note on Mahler's $3/2$-problem, K. Norske Vid. Selsk. Skr., 16 (1972), 1-4.
• T. Vijayaraghavan, On the fractional parts of the powers of a number, I, J. London Math. Soc., 15 (1940), 159-160.
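For readers who want to experiment, the two sequences are easy to generate in exact arithmetic. A small sketch in Python for β = 3/2 (so p = 2, q = 3); the helper functions are my own illustration, not from the paper:

```python
# s(n) = beta^n mod 1 and t(n) = tau^n(1) with tau(x) = beta*x mod 1,
# together with their first n base-p digits (here p = 2).
from fractions import Fraction

p, q = 2, 3
beta = Fraction(q, p)

def frac(r):                 # fractional part of a positive Fraction
    return r - r.numerator // r.denominator

def digits(r, n):            # first n base-p digits of r in [0, 1)
    out = []
    for _ in range(n):
        r *= p
        d = r.numerator // r.denominator
        out.append(d)
        r -= d
    return out

def s(n):                    # sigma(n) = beta^n mod 1
    return digits(frac(beta**n), n)

def t(n):                    # tau iterated n times, starting from 1
    x = Fraction(1)
    for _ in range(n):
        x = frac(beta * x)
    return digits(x, n)

for n in range(1, 7):
    print(n, s(n), t(n))
```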
http://yetanothermathprogrammingconsultant.blogspot.com.br/
## Wednesday, June 21, 2017 ### Minimizing the k-th largest x Minimizing the largest $$x_i$$ is an easy exercise in LP modeling: \bbox[lightcyan,10px,border:3px solid darkblue] { \begin{align} \min\>& z\\ & z \ge x_i \end{align}} This is sometimes called MINIMAX. What about minimizing the $$k^{\text{th}}$$ largest value? Here is one possible MIP formulation: \bbox[lightcyan,10px,border:3px solid darkblue] { \begin{align} \min\>& z\\ & z \ge x_i - \delta_i M\\ & \sum_i \delta_i = k-1\\ & \delta_i \in \{0,1\} \end{align}} I.e., we have $$k-1$$ exceptions on the constraint $$z \ge x_i$$. The objective will make sure the exceptions are the largest $$x_i$$.  We should think long and hard about making the big-M’s as small as possible. If you have no clue about proper bounds on $$x_i$$ one could use indicator constraints. Interestingly, minimizing the sum of the $$k$$ largest can be modeled as a pure LP (1) : \bbox[lightcyan,10px,border:3px solid darkblue] { \begin{align} \min\>& \sum_i v_i + k\cdot q\\ & v_i \ge x_i - q\\ &v_i \ge 0 \\ &q \text{ free } \end{align}} You almost would think there is an LP formulation for minimizing the $$k^{\text{th}}$$ largest value. I don't see it however. ##### References 1. Comment by Michael Grant in http://orinanobworld.blogspot.se/2015/08/optimizingpartoftheobjectivefunction-ii.html. ## Wednesday, June 14, 2017 ### Modeling production runs with length exactly three In (1) the question was posed how to model production runs that have an exact length. Although the question was asked in terms of integer variables, this is much easier to deal with when we have binary variables. Let’s introduce a binary variable: $x_{t} = \begin{cases}1 & \text{when the unit x is turned on in period t}\\0 &\text{otherwise}\end{cases}$ The question was formulated as an implication: $x_t=1 \implies x_{t+1}=x_{t+2}=1$ This implication is not 100% correct: it would turn on $$x$$ forever. But we understand what the poster meant, if a unit is turned on, it should stay on for exactly three periods. This condition can be more correctly stated as the implication: $x_{t-1}=0 \text{ and } x_{t}=1 \implies x_{t+1}=x_{t+2}=1 \text{ and } x_{t+3}=0$ We can make linear inequalities out of this as follows: \bbox[lightcyan,10px,border:3px solid darkblue]{\begin{align}&x_{t+1}\ge x_{t}-x_{t-1}\\&x_{t+2}\ge x_{t}-x_{t-1}\\&1-x_{t+3}\ge x_{t}-x_{t-1}\end{align}} If we want to model the condition “a unit should stay on for at least three periods”, we can drop the last constraint and just keep: \begin{align}&x_{t+1}\ge x_{t}-x_{t-1}\\&x_{t+2}\ge x_{t}-x_{t-1}\end{align} ##### References 1. https://stackoverflow.com/questions/44496473/block-of-consecutive-variables-to-have-same-value-in-mixed-integer-linear-progra ## Friday, June 9, 2017 ### A Staffing Problem In (1) a "simple problem" is stated (problems are rarely as simple as they seem): For the next 18 weeks there is some data about demand for staffing resources: Actually we don’t need the data for the individual projects: just the totals. It is noted that the maximum number of “bodies” we need is 52 (in week 3). We start with 48 temps available in the first period. We can let a temp staffer go and we can hire new ones. However when hiring a staffer it will cost 10 days (2 weeks) to train this person. During training a staffer is not productive.  We can also keep a staffer idling for a few periods. 
To distinguish between idling and training, I will assume that hiring + training is slightly more expensive than keeping someone idle. I think it makes sense to assign some cost to the hiring and training process. This property was not in the original post, but I have convinced myself that this is actually a reasonable assumption. To model this I introduce two sets of binary variables: \begin{align} & r_{i,t} = \begin{cases} 1 & \text{if a staffer i is available for training or work during period t}\\0 & \text{otherwise} \end{cases} \\& h_{i,t} = \begin{cases} 1 & \text{if a staffer i is hired at the beginning of period t}\\ 0 & \text{otherwise} \end{cases}\end{align} We can link $$h_{i,t}$$ and $$r_{i,t}$$ as follows: $h_{i,t}\ge r_{i,t}-r_{i,t-1}$ This implements the implication: $r_{i,t-1}=0 \text{ and } r_{i,t}=1 \implies h_{i,t}=1$ that is: if we change $$r$$ from 0 to 1, we have a new hire. We will add a (small) cost for a hire to the objective, so we don't need a constraint to enforce the other direction: $r_{i,t-1}=1 \text{ or } r_{i,t}=0 \implies h_{i,t}=0$ There is one wrinkle here: we need to make sure that the 48 staffers already hired before can work in the first period without being considered to be hired. We can explicitly model this: $h_{i,t}\ge \begin{cases} r_{i,t}-r_{i,t-1} &\text{if } t>1\\r_{i,t}&\text{if } t=1 \text{ and } i>48\end{cases}$ Actually, in the GAMS model below I approached this slightly differently, but with the same net result. The main equation in the model is to make sure we have enough staffing in each period $$t$$. This can be modeled as: $\sum_i \left( r_{i,t} - h_{i,t} - h_{i,t-1}\right) \ge \mathit{demand}_t$ The complete model can look like: Notes: • $$\mathit{rinit}_{i,t}$$ is used to indicate initial staffing when we start period 1. It is sparse: it assumes the value 1 only if $$t=1$$ and $$i \le 48$$. • This allows us to write equation EHire as one single, clean equation. • Note that GAMS will assume a variable is zero outside its domain: $$r_{i,t-1}$$ is zero when $$t=1$$. • I used the maximum demand (52 in week 3) to dimension the problem: set $$i$$ has 52 members. • The model does not decide which workers are idle. We can see in each period how many workers we have and how many are in training. The remaining ones are working or sitting idle. • We reuse resource numbers $$i$$. In the results below worker $$i=21$$ is likely to be two different persons. The results look like: Here a green cell with code=2 means hired and working (or idle, but not in training). A red cell with code=1 means in training. The row "total" is the number of staffers (either working, idling or training). The row "Work" indicates the number of workers available (not in training). We are overshooting demand a bit: some workers are idling. A picture is always a good idea, so here we see demand vs. staffing available for work. We see two places where we increase the workforce: we hire in week 1 to handle demand in week 3, and we hire again in week 7 to deal with demand in week 9.

## Wednesday, June 7, 2017

### Minimum down- and up-time

In machine scheduling models we sometimes want to impose minimum up-time and minimum down-time restrictions. E.g., from (1): My question is how do I add in additional constraints that if the factory switches off then it needs to stay off for 3 months, and if it switches back on then it needs to stay on for 4 months? One possible solution is the following.
Let us define our binary decision variable by $x_{i,t} = \begin{cases}1 & \text{if factory i is operating in month t} \\ 0&\text{otherwise} \end{cases}$ ##### Method 1 We really want to forbid short run patterns such as 010 (i.e. off-on-off), 0110, 01110 and 101, 1001. Forbidding patterns 010, 0110, 01110 will ensure a factory is up at least 4 consecutive periods. By not allowing 101,1001 we really make sure that a down time period is at least three months. We can model these restrictions in a linear fashion as follows (2): Forbid 010 $$-x_{i,t}+x_{i,t+1}-x_{i,t+2}\le 0$$ Forbid 0110 $$-x_{i,t}+x_{i,t+1}+x_{i,t+2}-x_{i,t+3}\le 1$$ Forbid 01110 $$-x_{i,t}+x_{i,t+1}+x_{i,t+2}+x_{i,t+3}-x_{i,t+4}\le 2$$ Forbid 101 $$x_{i,t}-x_{i,t+1}+x_{i,t+2}\le 1$$ Forbid 1001 $$x_{i,t}-x_{i,t+1}-x_{i,t+2}+x_{i,t+3}\le 1$$ ##### Method 2 A different approach is as follows. First define binary variables: \begin{align} &\delta^{\mathit{on}}_{i,t} = \begin{cases}1 & \text{if factory i is turned on in month t} \\ 0&\text{otherwise} \end{cases}\\ &\delta^{\mathit{off}}_{i,t} = \begin{cases}1 & \text{if factory i is turned off in month t} \\ 0&\text{otherwise} \end{cases} \end{align} Mathematically we write this as: \begin{align} &\delta^{\mathit{on}}_{i,t} = (1-x_{i,t-1})\cdot x_{i,t}\\ &\delta^{\mathit{off}}_{i,t} = x_{i,t-1}\cdot (1-x_{i,t})\\ \end{align} We can linearize these non-linear equations by: \begin{align} &\delta^{\mathit{on}}_{i,t} \le 1-x_{i,t-1}\\ &\delta^{\mathit{on}}_{i,t} \le x_{i,t}\\ &\delta^{\mathit{on}}_{i,t} \ge x_{i,t}-x_{i,t-1}\\ &\delta^{\mathit{off}}_{i,t} \le x_{i,t-1}\\ &\delta^{\mathit{off}}_{i,t} \le 1-x_{i,t}\\ &\delta^{\mathit{off}}_{i,t} \ge x_{i,t-1}-x_{i,t}\\ \end{align} With these variables we can implement the implications: \begin{align} &\delta^{\mathit{on}}_{i,t} = 1 \implies x_{i,t} + x_{i,t+1} + x_{i,t+2} + x_{i,t+3} = 4 \\ &\delta^{\mathit{off}}_{i,t} = 1 \implies x_{i,t} + x_{i,t+1} + x_{i,t+2} = 0 \end{align} This can be linearized as: \begin{align} & x_{i,t+1} + x_{i,t+2} + x_{i,t+3} \ge 3 \delta^{\mathit{on}}_{i,t} \\ & x_{i,t+1} + x_{i,t+2} \le 2 (1-\delta^{\mathit{off}}_{i,t}) \end{align} Note that I dropped $$x_{i,t}$$ in both inequalities. These are already known from the definition of $$\delta^{\mathit{on}}_{i,t}$$ and $$\delta^{\mathit{off}}_{i,t}$$. In the comments it is mentioned by Rob Pratt that we can strengthen this a bit (the math does not look very good in a comment, so I repeat it here): \begin{align} & x_{i,t+1} \ge \delta^{\mathit{on}}_{i,t}\\& x_{i,t+2} \ge \delta^{\mathit{on}}_{i,t}\\& x_{i,t+3} \ge \delta^{\mathit{on}}_{i,t}\\& x_{i,t+1} \le 1-\delta^{\mathit{off}}_{i,t} \\& x_{i,t+2} \le 1-\delta^{\mathit{off}}_{i,t} \end{align} ##### References 1. How do I add a constraint to keep a factory switched on or off for a certain period of time in PuLP? https://stackoverflow.com/questions/44281389/how-do-i-add-a-constraint-to-keep-a-factory-switched-on-or-off-for-a-certain-per/44293592 2. Integer cuts, http://yetanothermathprogrammingconsultant.blogspot.com/2011/10/integer-cuts.html ## Friday, May 26, 2017 ### Working on an R package…. The C++ code (parsing Excel formulas so we can for instance execute them) is not working 100% correctly as of now… Time to make it work under a debugger (amazingly I did not need a debugger until now). Update: found it (without debugger). Dereferencing a nil-pointer. Is it unreasonable to expect a (descriptive) exception to be raised when this happens? 
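The five pattern cuts of Method 1 above are easy to sanity-check by brute force. A small sketch in plain Python (my own addition, not from the original post): over a finite horizon the cuts should hold exactly when every interior run of ones has length at least 4 and every interior run of zeros has length at least 3; runs touching the horizon boundary may be shorter, since no complete forbidden pattern fits around them.

```python
# Brute-force check of the Method 1 "forbidden pattern" cuts.
from itertools import product, groupby

def satisfies_cuts(x):
    n, ok = len(x), True
    for t in range(n):
        if t + 2 < n:
            ok &= -x[t] + x[t+1] - x[t+2] <= 0                    # forbid 010
            ok &=  x[t] - x[t+1] + x[t+2] <= 1                    # forbid 101
        if t + 3 < n:
            ok &= -x[t] + x[t+1] + x[t+2] - x[t+3] <= 1           # forbid 0110
            ok &=  x[t] - x[t+1] - x[t+2] + x[t+3] <= 1           # forbid 1001
        if t + 4 < n:
            ok &= -x[t] + x[t+1] + x[t+2] + x[t+3] - x[t+4] <= 2  # forbid 01110
    return ok

def interior_runs_ok(x, min_up=4, min_down=3):
    runs = [(v, len(list(g))) for v, g in groupby(x)]
    for i, (v, length) in enumerate(runs):
        if 0 < i < len(runs) - 1:          # interior runs only
            if v == 1 and length < min_up:
                return False
            if v == 0 and length < min_down:
                return False
    return True

n = 10
assert all(satisfies_cuts(x) == interior_runs_ok(x)
           for x in product((0, 1), repeat=n))
print("cuts match the run-length condition on all", 2**n, "sequences")
```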
## Sunday, May 21, 2017 ### Indexing economic time series in R When we want to compare different (economic) data, an often used approach is indexing. We choose one year (often the beginning of the time series) as the base year. We then normalize each time series such that the value at the base year is 100. Or: $\hat{x}_t = 100 \frac{x_t}{x_0}$ When doing this in R it is interesting to see the implementation can depend much on the chosen data structure. Below I consider three different data structures: the time series are stored (1) row-wise in a matrix, (2) column-wise in a matrix, and (3) unordered in a long-format data frame. ##### Matrix: row wise Below is some (artificial) data organized as a matrix with the rows being the series. We have three series: a,b and c. The columns represent the years. Row and column names are used to make this visible: str(A) ##  num [1:3, 1:20] 67.235 4743.289 0.871 64.69 5006.955 ... ##  - attr(*, "dimnames")=List of 2 ##   ..$: chr [1:3] "a" "b" "c" ## ..$ : chr [1:20] "2011" "2012" "2013" "2014" ... A ##           2011         2012         2013         2014         2015 ## a   67.2353402   64.6901151   63.6518902   69.1449240   71.8494470 ## b 4743.2887884 5006.9547226 5623.0654046 5912.6498320 5736.7239960 ## c    0.8710604    0.9193449    0.8711697    0.8451556    0.8440797 ##           2016         2017         2018         2019         2020 ## a   77.2782169   75.5709375   83.1405320   82.0417121   85.6433929 ## b 6302.6091905 6241.5527226 6140.4721640 5542.9254530 5627.5502851 ## c    0.8496936    0.8739366    0.8955205    0.8681451    0.8202612 ##           2021         2022         2023         2024         2025 ## a   89.1381161   89.8803998   90.5211813    91.411429   94.4488615 ## b 6006.0691966 5998.6549344 6121.5326208  6378.963851 6628.1064720 ## c    0.8639629    0.8477383    0.8235255     0.887063    0.8998234 ##           2026         2027         2028         2029         2030 ## a   94.2000739   89.9766167   90.4627291   87.2632337   86.2154671 ## b 7115.7111566 6410.3639302 7038.3387976 6765.7867813 7076.7998102 ## c    0.8547652    0.9033014    0.9224435    0.9127362    0.9541839 To index this we can use the the following R code: Aindx <- 100*A/A[,1] Aindx ##   2011      2012      2013      2014      2015      2016     2017     2018 ## a  100  96.21445  94.67029 102.84015 106.86262 114.93690 112.3976 123.6560 ## b  100 105.55872 118.54782 124.65296 120.94402 132.87425 131.5870 129.4560 ## c  100 105.54318 100.01254  97.02606  96.90254  97.54703 100.3302 102.8081 ##        2019      2020      2021      2022      2023     2024     2025 ## a 122.02171 127.37854 132.57628 133.68029 134.63334 135.9574 140.4750 ## b 116.85827 118.64237 126.62247 126.46615 129.05671 134.4840 139.7365 ## c  99.66531  94.16812  99.18518  97.32255  94.54286 101.8371 103.3021 ##        2026     2027     2028     2029     2030 ## a 140.10500 133.8234 134.5464 129.7877 128.2294 ## b 150.01640 135.1460 148.3852 142.6391 149.1961 ## c  98.12926 103.7013 105.8989 104.7845 109.5428 Note that the expression 100*A/A[,1] is not as trivial as it seems. We divide a $$3 \times 20$$ matrix by a vector of length 3. The division is done element-wise and column-by-column. We sometimes say the elements of A[,1] are recycled. The recycling mechanism can be illustrated with a small example: c(1,2,3,4)+c(1,2) ## [1] 2 4 4 6 I try to have a picture in each post, so here we go: ##### Matrix column wise If the matrix is organized column-wise (e.g. 
by taking the transpose), we have: A ##             a        b         c ## 2011 67.23534 4743.289 0.8710604 ## 2012 64.69012 5006.955 0.9193449 ## 2013 63.65189 5623.065 0.8711697 ## 2014 69.14492 5912.650 0.8451556 ## 2015 71.84945 5736.724 0.8440797 ## 2016 77.27822 6302.609 0.8496936 ## 2017 75.57094 6241.553 0.8739366 ## 2018 83.14053 6140.472 0.8955205 ## 2019 82.04171 5542.925 0.8681451 ## 2020 85.64339 5627.550 0.8202612 ## 2021 89.13812 6006.069 0.8639629 ## 2022 89.88040 5998.655 0.8477383 ## 2023 90.52118 6121.533 0.8235255 ## 2024 91.41143 6378.964 0.8870630 ## 2025 94.44886 6628.106 0.8998234 ## 2026 94.20007 7115.711 0.8547652 ## 2027 89.97662 6410.364 0.9033014 ## 2028 90.46273 7038.339 0.9224435 ## 2029 87.26323 6765.787 0.9127362 ## 2030 86.21547 7076.800 0.9541839 The expression to index the series becomes now much more complicated: Aindx <- 100*A/rep(A[1,],each=nrow(A)) Aindx ##              a        b         c ## 2011 100.00000 100.0000 100.00000 ## 2012  96.21445 105.5587 105.54318 ## 2013  94.67029 118.5478 100.01254 ## 2014 102.84015 124.6530  97.02606 ## 2015 106.86262 120.9440  96.90254 ## 2016 114.93690 132.8742  97.54703 ## 2017 112.39764 131.5870 100.33019 ## 2018 123.65600 129.4560 102.80808 ## 2019 122.02171 116.8583  99.66531 ## 2020 127.37854 118.6424  94.16812 ## 2021 132.57628 126.6225  99.18518 ## 2022 133.68029 126.4662  97.32255 ## 2023 134.63334 129.0567  94.54286 ## 2024 135.95741 134.4840 101.83714 ## 2025 140.47503 139.7365 103.30206 ## 2026 140.10500 150.0164  98.12926 ## 2027 133.82340 135.1460 103.70135 ## 2028 134.54640 148.3852 105.89891 ## 2029 129.78775 142.6391 104.78448 ## 2030 128.22939 149.1961 109.54279 In this case the automatic recycling is not working the way we want, and we have to do this by hand. Basically, in terms of our little example, before we were happy with c(1,2) being extended automatically to c(1,2,1,2) while we need now something like c(1,1,2,2). ##### Data frame long format Often data comes in a “long” format. Here is a picture to illustrate the difference between a “wide” and a “long” format: Often wide format data comes from spreadsheets while long format is often used in databases. Sometimes the operation to convert from long to wide is called “pivot” (and the reverse “unpivot”). A long format data frame with the above data can look like (I show the first part only): ##   series year        value ## 1      a 2011   67.2353402 ## 2      b 2011 4743.2887884 ## 3      c 2011    0.8710604 ## 4      a 2012   64.6901151 ## 5      b 2012 5006.9547226 ## 6      c 2012    0.9193449 How can we index this? Here is my solution: # get first year y0 <- min(df$year) y0 ## [1] 2011 # get values at first year x0 <- df[df$year==y0,"value"] x0 ## [1]   67.2353402 4743.2887884    0.8710604 # allow x0 to be indexed by series name names(x0) <- df[df$year==y0,"series"] x0 ## a b c ## 67.2353402 4743.2887884 0.8710604 # indexing of the series df$indexedvalue <- 100*df$value/x0[df$series] (df) ##   series year        value indexedvalue ## 1      a 2011   67.2353402    100.00000 ## 2      b 2011 4743.2887884    100.00000 ## 3      c 2011    0.8710604    100.00000 ## 4      a 2012   64.6901151     96.21445 ## 5      b 2012 5006.9547226    105.55872 ## 6      c 2012    0.9193449    105.54318 The trick I used was to make the vector of values of the first year addressable by the series name. E.g.: x0["a"] ##        a ## 67.23534 This allows us to calculate the column with indexed values in one vectorized operation. 
##### dplyr

In the comments below, Ricardo Sanchez offered another, rather clean, approach for the last operation:

library(dplyr)
df <- df %>% group_by(series) %>% arrange(year) %>% mutate(indexedvalue = 100 * value / first(value))
df
## Source: local data frame [60 x 4]
## Groups: series [3]
##
##    series  year        value indexedvalue
##    <fctr> <int>        <dbl>        <dbl>
## 1       a  2011   67.2353402    100.00000
## 2       b  2011 4743.2887884    100.00000
## 3       c  2011    0.8710604    100.00000
## 4       a  2012   64.6901151     96.21445
## 5       b  2012 5006.9547226    105.55872
## 6       c  2012    0.9193449    105.54318
## 7       a  2013   63.6518902     94.67029
## 8       b  2013 5623.0654046    118.54782
## 9       c  2013    0.8711697    100.01254
## 10      a  2014   69.1449240    102.84015
## # ... with 50 more rows

##### sqldf

Of course if you are familiar with SQL we can also use that:

library(sqldf)
df <- sqldf("
  select df.series, year, value, 100*value/v0 as indexedvalue
  from df
  join (select min(year), value as v0, series from df group by series) df0
  on df.series = df0.series
")
(df)
##   series year        value indexedvalue
## 1      a 2011   67.2353402    100.00000
## 2      b 2011 4743.2887884    100.00000
## 3      c 2011    0.8710604    100.00000
## 4      a 2012   64.6901151     96.21445
## 5      b 2012 5006.9547226    105.55872
## 6      c 2012    0.9193449    105.54318

##### References

1. Federal Reserve Bank of Dallas, Indexing to a Common Starting Point, https://www.dallasfed.org/research/basics/indexing.aspx

## Saturday, May 20, 2017

### Journalist explaining statistics

I would call this explanation, well, below average: MATH, HORRIBLE MATH [...] Take that bit about the bell curve of IQ. It's an unpleasant fact that half of all people are of below average IQ. It's also true that half of all people are below average height, weight, and everything else. And the other half are above average. You know why? Because that's what "average" means. The italics are in the original text (they are not mine). Maybe alternative facts include alternative definitions of statistical terms. (Half of the population falls below the median; only for a symmetric distribution does the mean split the population in half.) It is ironic that the title of the section is about bad math.

##### References

1. Jonah Goldberg, Can Trump be contained?, http://www.nationalreview.com/g-file/447797/donald-trump-robert-mueller-special-counsel-investigation-social-justice-math

## Thursday, May 18, 2017

### Simple piecewise linear problem, not so easy with binary variables

The following picture illustrates the problem: The blue line is what we want to model: $\bbox[lightcyan,10px,border:3px solid darkblue]{ y = \begin{cases} 0 & \text{if } 0 \le x \le a\\ (x-a)\displaystyle\frac{H}{b-a} & \text{if } a < x < b\\ H & \text{if } x\ge b\end{cases}}$ Is there much we can exploit here, from this simple structure? I don't believe so, and came up with: \begin{align}&x_1 \le a \delta_1\\&a \delta_2 \le x_2 \le b \delta_2\\&b \delta_3 \le x_3 \le U \delta_3\\&\delta_1+\delta_2+\delta_3 = 1\\&x = x_1+x_2+x_3\\&y = (x_2 - a \delta_2)\displaystyle\frac{H}{b-a} + H \delta_3 \\&\delta_k \in \{0,1\}\\&x_i \ge 0\\&0 \le x \le U\end{align}

##### Update

Added missing $$\sum_k \delta_k=1$$, see comments below.

## Wednesday, May 17, 2017

### RStudio Tips and Tricks

Why R is Bad for You. Summary: Someone had to say it. In my opinion R is not the best way to learn data science and not the best way to practice it either. More and more large employers agree.

##### References

1.
Sean Lopp, RStudio Tips and Tricks, https://www.youtube.com/watch?v=kuSQgswZdr8&t=85s. Delivered by Sean Lopp (RStudio) at the 2017 New York R Conference on April 21st and 22nd at Work-Bench.

### Small GAMS trick: $eval

In GAMS we typically first declare sets and then deduce things like the number of elements in a set:

set i /i1*i20/;
scalar n;
n = card(i);

Often the question comes up: Can I do it the other way around? First declare the scalar n and then create a set of that size? We can use a funny trick for that, using a variant of the $set construct:

scalar n /20/;
$eval n n
set i /i1 * i%n%/;

In general I prefer to write this as:

$set n 20
scalar n /%n%/;
set i /i1 * i%n%/;

• The construct $eval n n is like a $set. It will evaluate the rightmost n and the result is used to populate the preprocessor identifier (the leftmost n).
http://clay6.com/qa/34236/which-one-of-the-following-structures-is-expected-to-have-three-bond-pairs-
# Which one of the following structures is expected to have three bond pairs and one lone pair? $\begin{array}{ll}(a)\;\text{Tetrahedral}& (b)\;\text{Trigonal planar} \\(c)\;\text{Trigonal bipyramidal}&(d)\;\text{Pyramidal}\end{array}$ As the total number of electron pairs is four (3 bond pairs + 1 lone pair), the central atom is in an $sp^3$ hybrid state, but due to the presence of one lone pair of electrons it has a pyramidal geometry instead of a tetrahedral structure. Hence (d) is the correct answer.
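The electron-pair counting used in this answer is mechanical enough to tabulate. A tiny sketch (plain Python, my own illustration) for a central atom with four electron pairs:

```python
# VSEPR shapes for four electron pairs (sp3 hybridization).
shape = {
    (4, 0): "tetrahedral",   # e.g. CH4
    (3, 1): "pyramidal",     # e.g. NH3, the case in this question
    (2, 2): "bent",          # e.g. H2O
}
print(shape[(3, 1)])         # -> pyramidal
```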
http://math.stackexchange.com/tags/eigenvalues-eigenvectors/hot
# Tag Info 4 User John Brevik gives as a hint: What is the characteristic polynomial of $$\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$$ How could you have come up with this hint yourself? Well, as you say, the characteristic polynomial only cares about eigenvalues, so you need to find two matrices which are not similar but which have the same eigenvalues. ... 4 For the record, a counterexample to 1 (which you correctly disproved) is $A=2I$: $A^4=16I=8A$. If $A=P^{-1}DP$, substitute that into $A^2-4A+8I$, you get: \begin{align*} A^2-4A+8I={}&(P^{-1}DP)^2-4P^{-1}DP+8I=(P^{-1}DP)(P^{-1}DP)-P^{-1}\cdot4D\cdot P+8I={} \\ {}={}&P^{-1}D(PP^{-1})DP-P^{-1}4DP+P^{-1}8IP=P^{-1}(D^2P-4DP+8P)={} \\ ... 4 \begin{bmatrix} 1&1 \\ 0& 1 \end{bmatrix} \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} consider these two matrices.(Jordan Canonical form is the answer). 4 That's not true in general. let $A=\begin{pmatrix} 1&1\\0&\frac12 \end{pmatrix}$, $B=\begin{pmatrix}1&1\\0&2 \end{pmatrix}$, those 2 matrices are clearly diagonalizable since they have distinct eigenvalues, while $AB=\begin{pmatrix}1&3\\0&1\end{pmatrix}$ isn't. 3 The minimal polynomial of $A$ divides $x^4 - 4x^2 = x^2(x-2)(x+2)$ and the characteristic polynomial and the minimal polynomial have the same irreducible factors (possibly with different multiplicities), so a priori, you can only tell that the possible eigenvalues are $0,\pm 2$ and not necessarily all must occur (for example, the matrix $A = cI$ where $c \in ... 3 Hint Consider the eigenvalues of$A \in SO(3, \Bbb R)$. In particular, (1) their product is$\det A = 1$, (2) they all have modulus$1$, and (3) any nonreal eigenvalues come in complex conjugate pairs. Now, if$A \neq I$, what can you say about the$1$-eigenspace of$A$? 2 Answer to your last question: because there are nontrivial Jordan canonical forms. 2 Your matrix, as it is written, is singular: it must be that zero is one of its eigenvalues. In fact, only zero is an eigenvalue of algebraic order two, and the homogeneous system to obtain its eigenspace is $$-\frac12x+\frac12y=0\implies x=y\implies V_{\lambda=0}=Span\left\{\binom11\right\}$$ and it's dimension is one, thus there is no other eigenvector ... 2 So, it's a known result that symmetric matrices (over$\mathbb{R}$), have an orthonormal basis of eigenvectors. We have two eigenvectors. In 3D, a two perpendicular vectors span a plane, so the third vector perpendicular to the first two will be the vector perpendicular to this plane, and therefore unique (up to a constant factor). It can be shown that ... 2 The matrix$A$is real and symmetric, so it must be diagonalisable. Therefore the minimal polynomial (supposedly called$m_A$) has simple roots. The matrix$A+2I$has rank$1$and trace$6$, so its characteristic polynomial is$X^2(X-6)$, and the characteristic polynomial of$A$(supposedly called$f_A$) is obtained from it by substituting$X+2$for$X$: ... 1 You could just expand everything out in series, for example the first term goes like $$( \bar A ^T \bar A ) = ( A^TA+E_1^T A +A^TE_1 + E_1^TE_1)$$ $$( \bar B ^T \bar B ) ^{-1} = ( B^TB+E_2^T B +B^TE_2 + E_2^TE_2)^{-1}=(B^TB)^{-1}( I+E_2^T B(B^TB)^{-1} +B^TE_2(B^TB)^{-1} +E_2^TE_2(B^TB)^{-1} )^{-1}$$ Then use $$(I + C)^{-1} \approx I - C$$ you may use ... 1 The quick answer to your question: note that the only diagonalizable matrix whose eigenvalues are all$0$is the zero-matrix, and that a rank$1$matrix can have at most one non-zero eigenvalue. 
Another approach: Note that any rank$1$matrix can be written in the form$uv^T$for column vectors$u,v$, and that a rank-$1$matrix will be symmetric if and ... 1 If$\chi_A(A)$is$0$on each vector of a basis,$\chi_A(A)$is$0$on the whole space. 1 Try to do what the link you posted says: $$1=x^2+6xy+y^2=(x\;y)\begin{pmatrix}1&3\\3&1\end{pmatrix}\binom xy=\binom xy^tA\binom xy$$ Diagonalize orthogonally the matrix$\;A\;$(it's possible because it is symmetric): $$\begin{vmatrix}x-1&-3\\-3&x-1\end{vmatrix}=x^2-2x-8=(x-4)(x+2)$$ Now eigenvectors: ... 1 To find the generalised eigenvector you simply have to solve for $$\begin{bmatrix}\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2}\end{bmatrix}\begin{bmatrix}x \\ y\end{bmatrix} = \begin{bmatrix}1 \\ 1\end{bmatrix}$$ If$A$is your given matrix, this indeed means a solution$e_2$satisfies $$(A+I)e_2=e_1,\quad\text{whence}\quad A ... 1 You should solve \left[\begin{array}{cc}0.5 & 0 \\ 2 & 0.5\end{array}\right]\left[\begin{array}{c}x \\ y\end{array}\right] = 0.5\left[\begin{array}{c}x \\ y\end{array}\right] That gives 0.5 x = 0.5x 2x + 0.5y = 0.5y So the only restriction is x = 0 . You have only 1 eigenvector, and you are not getting 2 because the matrix is ... 1 In general it is not sufficient to check the characteristic polynomial to make sure that two matrices are similar. In order to be similar, there needs to exist an invertible matrix P such that A = P^{-1}B P. If two matrices are similar and one of them is diagonalizible (say, B=Q^{-1}DQ), then A is automatically diagonalizible, too (by means of ... 1 In general, if p(x) is a polynomial an A a diagonalisable matrix, then so is p(A). First, note that if A=U^{-1}DU, then A^n=U^{-1}D^nU, and hence$$ p(A)=\sum_{k=0}^n c_kA^k=\sum_{k=0}^n c_kU^{-1}D^kU= U^{-1}\left(\sum_{k=0}^n c_k D^k\right)U. $$Clearly, \sum_{k=0}^n c_k D^k is also a diagonal matrix. 1 Recall that \lambda is an eigenvalue of A if \det(A-\lambda I) = 0. This means A - \lambda I is singular and therefore$$(A - \lambda I)v = 0$$has a non-trivial solution v \neq 0 called an eigenvector of A with respect to \lambda. With this, for any t \in \mathbb{F}, we have$$(A-\lambda I) (tv) = t (A-\lambda I) v = 0$$which shows tv ... 1 The formula$$X=k_1[-1/2,0,1]+k_2[1/2,1,0]$$tells you, that for each cobination of k_1,k_2, the resulting vector satisfies Ax=\lambda_1x where \lambda_1=2. Therefore the eigenvectors for \lambda_1 are [-1/2,0,1] and [1/2,1,0] The previous step$$[k_2/2-k_1/2,k_2,k_1]= k_1[-1/2,0,1]+k_2[1/2,1,0]$$is basically a factorization. ... 1 Recall that the set of all eigenvectors of matrix A corresponding to eigenvalue \lambda=2$$ V_2=\{ [\frac{k_2}{2} - \frac{k_1}{2}, k_2, k_1] \mid k_1,k_2 \in \mathbb R\}$$forms a vector subspace of \mathbb R^3. Since every element of V_2 can be expressed as a linear combination$$ [\frac{k_2}{2} - \frac{k_1}{2}, k_2, k_1] = k_1 ... 1 Normal means precisely that$AA^\ast =A^\ast A$. Then $$\langle Av,Av\rangle=\langle v,A^\ast Av\rangle=\langle v,AA^\ast v\rangle=\overline{\langle AA^\ast v,v\rangle}=\overline{\langle A^\ast v,A^\ast v\rangle}$$ This gives the first result. Can you deduce the second from the first? 1 A graph with least eigenvalue at least$-1$is a disjoint union of cliques. (Proof: the least eigenvalue of$K_{1,2}$is$-\sqrt2$, interlacing.) The graphs with least eigenvalue at least$-2$were characterized by Cameron, Goethals and Seidel. They are line graphs, so-called generalized line graphs, and a finite set of graphs associated to$E_6$,$E_7\$, ... 
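The point of the first snippet above, two matrices with the same characteristic polynomial that are nonetheless not similar, is easy to confirm numerically. A minimal sketch, assuming NumPy:

```python
# Same characteristic polynomial, not similar (NumPy assumed).
import numpy as np

A = np.array([[0, 1],
              [0, 0]])
Z = np.zeros((2, 2))

# Both characteristic polynomials are x^2: coefficients [1, 0, 0].
print(np.poly(A), np.poly(Z))

# Similar matrices must have equal rank, but rank(A) = 1 and rank(Z) = 0.
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(Z))
```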
https://pdfkul.com/latex-tutorial_5abfe2fb1723ddffb420253d.html
LaTeX Tutorial

Jeff Clark

Revised February 26, 2002

Contents

1 Introduction
  1.1 Introduction to LaTeX
  1.2 Required Components of a LaTeX Document
  1.3 Using LaTeX on Elon's Computers
  1.4 Error Messages
  1.5 Typing LaTeX Commands
2 Document Structure
  2.1 Page Numbering and Headings
  2.2 Creating a Title Page
  2.3 Creating a Title Page, Continued
  2.4 Sections
  2.5 Cross-References
  2.6 Table of Contents
  2.7 Abstracts
3 Mathematical Typesetting
  3.1 Mathematical Formulas
  3.2 Greek Letters
  3.3 Exponents and Subscripts
  3.4 Above and Below
  3.5 Fractions
  3.6 Functions
  3.7 Sums, Integrals, and Limits
  3.8 Roots
  3.9 Text in Math Displays
  3.10 Operators
  3.11 Relations
  3.12 Negated Symbols
  3.13 More Symbols
4 Spacing
  4.1 Spacing Between Words
  4.2 Fine-Tuning Spacing in Math-Mode
  4.3 Double Spacing
  4.4 Sloppy Line Breaks
  4.5 Enlarging Pages
5 Accents and Font Style
  5.1 Accents
  5.2 Hyphenation
  5.3 The LaTeX Logo
  5.4 Quotation Marks
  5.5 Changing the Appearance of Words
6 Tables, Arrays, and Lists
  6.1 Constructing Arrays
  6.2 Constructing Tables
7 Multiline Equations
  7.1 Multi-line Equations
  7.2 Accents
  7.3 Bracket Symbols
  7.4 Dots
  7.5 Indenting
8 Text Formatting
  8.1 Centering Text
  8.2 Special Headers
  8.3 Extended Quotation
  8.4 Bulleted Lists
  8.5 Numbered Lists
  8.6 Filling a Line
  8.7 Line Breaks
9 Bibliography and Compound Expressions
  9.1 Bibliographies
10 Slides
  10.1 The Slide Class
  10.2 How to Use the Slides Class
11 Including Graphics in Your Document
  11.1 Graphic File Formats
  11.2 Graphics Package
  11.3 Including Graphics Within Your Document
12 Letters
  12.1 The Letter Class
  12.2 Letter Commands for the Preamble
  12.3 Commands for Each Letter

1 Introduction

1.1 Introduction to LaTeX

LaTeX is a family of programs designed to produce publication-quality typeset documents. It is particularly strong when working with mathematical symbols. The history of LaTeX begins with a program called TeX. In 1978, a computer scientist by the name of Donald Knuth grew frustrated with the mistakes that his publishers made in typesetting his work. He decided to create a typesetting program that everyone could easily use to typeset documents, particularly those that include formulae, and made it freely available. The result is TeX. Knuth's product is an immensely powerful program, but one that does focus very much on small details. A mathematician and computer scientist by the name of Leslie Lamport wrote a variant of TeX called LaTeX that focuses on document structure rather than such details.

1.2 Required Components of a LaTeX Document

Every LaTeX document must contain the following three components. Everything else is optional (even text).

1. \documentclass{article}
2. \begin{document}
3. \end{document}

The first statement tells LaTeX what kind of document it is to process, as there are different style rules for different kinds of documents. We will use the article document class exclusively in this tutorial. Other possible classes include report, book, and letter. The default font size for each class is 10 point. You can use 11 point or 12 point fonts by including this information in the \documentclass command as \documentclass[11pt]{article} or \documentclass[12pt]{article}. You could also use \documentclass[10pt]{article}, but since this is the default you don't need to type the [10pt] part. In general, required information is included in LaTeX commands in braces {}, while optional information is included in square brackets []. The \documentclass command must appear at the very beginning of your LaTeX document, before any other LaTeX commands, or you will get an error message.

If you have commands for LaTeX that will affect the whole document, you should include them in the preamble, which is what the space between the \documentclass and \begin{document} commands is called.
The body of the document, where you include all of your text, must occur between the \begin{document} and \end{document} commands. Any text that comes after the \end{document} command will be ignored.

1.3 Using LaTeX on Elon's Computers

LaTeX consists of several programs:

1. A program latex that processes your input.
2. A program (yap on our computers) that previews and prints your work.

In addition, you need an editor that produces plain text without formatting commands, as well as a good spell-checker. Jim Beuerle has arranged for all of these programs to be installed on Elon's lab PC's, as well as a program TeXShell to coordinate them. You can start TeXShell either from a desktop seashell icon (for the machines in Duke 201, 204, and 209) or from the Start/Courseware/Math menu (for the lab machines). To use TeXShell, follow these steps:

1. Either create a new LaTeX file with File/New or open an existing LaTeX file with File/Open.
2. Designate the main file that you will be working with, since LaTeX files can include other LaTeX files and the programs need to know which one to use; a number of secondary files will be created with the same filename but different extensions. Use File/Main File.
3. Use the main editing screen to enter and revise your work.
4. Always spell-check or your teachers will laugh and/or cry at your work: use Edit/Spell Check.
5. Press the TeX button to run the LaTeX program on your file. A small window will open and close during this time; any error messages will show up there. If the run is successful, the window will close itself; if not, you will need to type an "x" and hit Enter to close it after reading the error message.
6. If you want to see the warnings and error messages later, press the Log button.
7. To view your masterpiece, press the Preview button. If you are happy with the result, the Preview program has a printer icon at the top left.
8. If your masterpiece includes graphics, you will probably want to create a PDF file for printing and sharing; use the PDFLaTeX button to create a PDF file (with a file extension pdf). Use Acrobat Reader to preview and print it; this program is installed on all Elon PC's and will start up if you double-click the file's name in any directory listing.

1.4 Error Messages

LaTeX will tell you when it figures out that something is wrong, but often the actual error occurs earlier in your file. A common error is not closing the braces of a command. Another one that occurs frequently is using math commands outside of math mode (described later). Since LaTeX will stop after any \end{document} command, a good strategy for finding errors is to insert \end{document} temporarily earlier in the file to see if the error is above its location.

1.5 Typing LaTeX Commands

For this tutorial, you will occasionally find yourself having to type LaTeX commands as part of your text. How do you do that without LaTeX taking them seriously and following them? Surround any text that you want printed as is with a \begin{verbatim} and an \end{verbatim} command.

Practice: Create a document that explains what some LaTeX commands do.

2 Document Structure

2.1 Page Numbering and Headings

The command \pagestyle controls page numbering and headings. It should always go between the \documentclass{article} and the \begin{document} commands. It can take the following forms:

1. \pagestyle{plain} is the default, which puts the page number at the center of the bottom of the page and provides no headings.
2. \pagestyle{empty} provides neither page numbers nor headings.
3. \pagestyle{headings} will provide page numbers and headings from any \section's that you are using.
4. \pagestyle{myheadings} will provide page numbers and custom headings.

These commands can also be applied to a single page using \thispagestyle instead of \pagestyle.
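For example, a preamble along these lines (a sketch; the section title and filler sentence are ours) turns on section-based running heads:

\documentclass{article}
\pagestyle{headings}
\begin{document}
\section{First Section}
Enough text to fill a page or two will show the heading at work.
\end{document}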
Practice: Prepare two documents entitled format1.tex and format2.tex containing any text that you want, using the first two page styles.

2.2 Creating a Title Page

The title, author, and date of your document are information that various LaTeX commands can make use of, if you provide it. It is a good habit to get into to provide this information in the preamble of your document. (Remember that the preamble refers to any commands between the \documentclass command and the \begin{document} command.) The commands are:

1. \title{yourtitlehere}
2. \author{yournamehere}
3. \date{currentdate}

Given that you have provided this information in the preamble, you may or may not want a title heading. If you do, place a \maketitle command immediately after the \begin{document} command.

Practice: Create a document of three pages with a title heading, using the \pagestyle{plain} command.

2.3 Creating a Title Page, Continued

The \documentclass command can take a titlepage option: \documentclass[titlepage]{article}.

Practice: Create a document of three pages with a title page, using the \documentclass[titlepage]{article} command.

2.4 Sections

LaTeX is a language for creating structured documents. One of the most important ways of creating structure in a document is to split it into logical sections. If your document deals with more than one concept or theme, then each concept should go into its own section. There are two related commands for creating sections: \section{sectiontitle} and \section*{sectiontitle}. The first one numbers the sections, while the starred form does not. Both create separate sections with titles in a larger font size; they also provide information to LaTeX in case you want to create a Table of Contents.

Practice: Create a document with five numbered sections.

2.5 Cross-References

If you wish to have cross-references in a document with numbered sections, use \label{name} to label the point in your document with some mnemonic, and Section \ref{name} to refer to that point. \ref{name} will be replaced by the number of the section containing the corresponding \label command. As with your bibliography citations, you will need to run LaTeX twice to generate these references.

Practice: Create a fake document with many numbered sections and lots of cross-references.

2.6 Table of Contents

For a large document, it is a kindness to your reader to provide a Table of Contents. If you have been using \section commands throughout your document, then LaTeX has all the information that it needs to construct one for you. Place the command \tableofcontents after your \begin{document} command. It may be necessary to run LaTeX twice on a document with a Table of Contents: the first time, LaTeX stores the page numbers for the sections in a separate file, and the second time LaTeX writes this information into the Table of Contents.

2.7 Abstracts

To create an abstract, place your text in an abstract environment, i.e., between \begin{abstract} and \end{abstract} commands. The abstract should come immediately after your \maketitle command, but before any \tableofcontents command.

Practice: Create an abstract, with and without a separate titlepage.
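Sections 2.2 through 2.7 combine naturally; here is a sketch of a document opening that uses all of them (the title text and section names are placeholders of ours):

\documentclass[titlepage]{article}
\title{A Structured Document}
\author{A. Student}
\date{February 26, 2002}
\begin{document}
\maketitle
\begin{abstract}
A one-paragraph summary goes here.
\end{abstract}
\tableofcontents
\section{First Topic}\label{first}
...
\section{Second Topic}
As we saw in Section \ref{first}, ...
\end{document}

Remember to run LaTeX twice so that the Table of Contents and the \ref get filled in.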
3 Mathematical Typesetting

3.1 Mathematical Formulas

There are two ways to insert mathematical formulas into your document with LaTeX. One is to have a formula appear in a paragraph with text. In doing so, the formula will be compressed vertically: limits for integrals and summations will appear to the side instead of on the top and bottom, etc. The other way is to have formulas appear in a separate paragraph, where there will be more room.

For formulas that appear in a paragraph, surround them with $'s. For example,

$\alpha$ is the first letter of the Greek alphabet.

becomes "α is the first letter of the Greek alphabet."

To have formulas appear in their own paragraph, use matching $$'s to surround them. For example,

$$ \frac{x^n-1}{x-1} = \sum_{k=0}^{n-1}x^k $$

becomes the displayed equation $\frac{x^n-1}{x-1} = \sum_{k=0}^{n-1} x^k$, with the limits set above and below the summation sign.

Practice: Create your own document with both kinds of formulas.

3.2 Greek Letters

• α is \alpha
• β is \beta
• γ is \gamma
• δ is \delta
• ϵ is \epsilon
• ε is \varepsilon
• ζ is \zeta
• η is \eta
• θ is \theta
• ϑ is \vartheta
• ι is \iota
• κ is \kappa
• λ is \lambda
• µ is \mu
• ν is \nu
• ξ is \xi
• o is o (omicron)
• π is \pi
• ϖ is \varpi
• ρ is \rho
• ϱ is \varrho
• σ is \sigma
• ς is \varsigma
• τ is \tau
• υ is \upsilon
• φ is \phi
• ϕ is \varphi
• χ is \chi
• ψ is \psi
• ω is \omega
• Γ is \Gamma
• ∆ is \Delta
• Θ is \Theta
• Λ is \Lambda
• Ξ is \Xi
• Π is \Pi
• Σ is \Sigma
• Υ is \Upsilon
• Φ is \Phi
• Ψ is \Psi
• Ω is \Omega

3.3 Exponents and Subscripts

Use the ^ character (shift-6), known as a caret, to create exponents: x^2 produces x². If you have an exponent containing more than one character, group the exponent characters inside braces: x^21 \ne x^{21} produces x²1 ≠ x²¹.

Similarly, subscripts are created using the _ (underscore) character. Again, for subscripts of more than one character, use braces to indicate where the subscript starts and stops: x_21 \ne x_{21} produces x₂1 ≠ x₂₁.

Practice: Create a document containing formulas using exponents and subscripts.

3.4 Above and Below

It is useful to be able to draw horizontal lines and braces above and below parts of a formula. We can combine the \overline, \overbrace, \underline, and \underbrace commands to our heart's content.

$$ \left( \begin{array}{c} m+n\\ m \end{array} \right) = \frac{(m+n)!}{m!n!} = \frac{\overbrace{(m+n)(m+n-1)\cdots(n+1)}^{\mbox{$m$ factors}}}{\underbrace{m(m-1)\cdots 1}_{\mbox{$m$ factors}}} $$

produces the binomial-coefficient identity with "m factors" braced above the numerator and below the denominator, while

\overline{x+\overline{y}} = \overline{x}+y

produces the same equation typeset with a bar over the whole of x+ȳ on the left and a bar over x on the right.

Practice: Construct an equation using all four of these commands.

3.5 Fractions

Fractions can be written in two ways: with a diagonal fraction bar or a horizontal one. Diagonal fraction bars work best in tight places, such as in a text paragraph or when inside a larger fraction; a/b simply becomes a/b. The horizontal bar is clearer when you have more room, such as in a formula paragraph. The command is a little more complicated, because the numerator and denominator are often complicated themselves. A horizontal bar fraction is written as \frac{numerator}{denominator}.

$$ \frac{a/b-c/d}{e/f-g/h} $$

becomes a horizontal-bar fraction with a/b − c/d above the bar and e/f − g/h below it.

Practice: Construct a couple of nested fractions yourself.
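For reference, one possible answer to the nested-fraction practice (our example, not the tutorial's) is the continued fraction

$$ \frac{1}{1+\frac{1}{1+\frac{1}{x}}} $$

which nests \frac inside \frac twice and typesets as a three-level fraction.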
3.6 Functions

LaTeX uses italics in math mode for variables to make them stand out, but Roman (non-italic) type for function names. How is LaTeX to know the difference between "sin" as a function name and "sin" as the product of the variables s, i, and n? Use a backslash in front of "sin" and other function names to let LaTeX know that you want the function, not the product of variables. Here is a list of function names:

\arccos \arcsin \arctan \arg \cos \cosh \cot \coth \csc \deg \det \dim \exp \gcd \hom \inf \ker \lg \lim \liminf \limsup \ln \log \max \min \Pr \sec \sin \sinh \sup \tan \tanh

3.7 Sums, Integrals, and Limits

Summations and integrals both have lower and upper limits, and the commands are similar. Limits usually have text with an arrow placed below them.

$$ \sum_{k=0}^\infty\frac{(-1)^k}{k+1} = \int_0^1\frac{dx}{1+x} $$

produces the alternating series equated to the integral, with the limits set above and below the ∑ and ∫ signs, and

$$ \lim_{x\rightarrow 0} \frac{\sin x}{x} = 1 $$

produces the limit with "x → 0" set beneath lim.

Practice: Construct your own document using sums, integrals, and limits.

3.8 Roots

Use the \sqrt{} command to produce square roots: \sqrt{\frac{a}{b}} produces the square root of the fraction a/b. If you need an nth root, use \sqrt[n]{} instead: \sqrt[10]{\frac{a}{b}} produces the 10th root of a/b.

Practice: Construct a document containing the quadratic formula and also a cube root.

3.9 Text in Math Displays

There will be times when you want to include Roman, i.e., non-italicized, words amongst your mathematical symbols. The font isn't the only problem; spacing is different between letters in a word and variables in a formula. Use the command \mbox{your text here} to include short phrases in a formula. (If your phrase isn't short, then you should consider embedding your formula in a text paragraph instead of your text in a formula paragraph.)

$$ \int_0^{2\pi}\cos(mx)\,dx = 0 \hspace{1cm} \mbox{if and only if} \hspace{1cm} m\ne 0 $$

produces the integral statement with the words "if and only if" set in Roman type between the two sides.

Practice: Construct a document containing the following three expressions using \mbox:

$\sqrt{x^2 + y^2} = 0$ if and only if $x=y=0$
$\frac{a}{b} > 0$ implies that $ab > 0$
$\sqrt{x}$ is only defined if $x \ge 0$

3.10 Operators

You will probably not need most of the binary operators listed here, but it should be a handy reference:

• ± is \pm
• ∓ is \mp
• · is \cdot
• ⋆ is \star
• ‡ is \ddagger
• ∩ is \cap
• ⊎ is \uplus
• ⊔ is \sqcup
• ∧ is \wedge
• ⊖ is \ominus
• ◦ is \circ
• ⋄ is \diamond
• ⊙ is \odot
• △ is \bigtriangleup
• ◁ is \triangleleft
• ∖ is \setminus
• × is \times
• ÷ is \div
• ∗ is \ast
• † is \dagger
• ⨿ is \amalg
• ∪ is \cup
• ⊓ is \sqcap
• ∨ is \vee
• ⊕ is \oplus
• ⊗ is \otimes
• • is \bullet
• ⊘ is \oslash
• ○ is \bigcirc
• ▽ is \bigtriangledown
• ▷ is \triangleright
• ≀ is \wr

3.11 Relations

Again, here are more relations than you will ever need. You may want to print this for reference.

• ≤ is \le
• ≥ is \ge
• ≠ is \ne
• ∼ is \sim
• ≪ is \ll
• ≫ is \gg
• ≐ is \doteq
• ≃ is \simeq
• ⊂ is \subset
• ⊃ is \supset
• ≈ is \approx
• ≍ is \asymp
• ⊆ is \subseteq
• ⊇ is \supseteq
• ≅ is \cong
• ⌣ is \smile
• ≡ is \equiv
• ⌢ is \frown
• ⊑ is \sqsubseteq
• ⊒ is \sqsupseteq
• ∝ is \propto
• ⋈ is \bowtie
• ∈ is \in
• ∋ is \ni
• ≺ is \prec
• ≻ is \succ
• ⊢ is \vdash
• ⊣ is \dashv
• ⪯ is \preceq
• ⪰ is \succeq
• ⊨ is \models
• ⊥ is \perp
• ∥ is \parallel
• ∣ is \mid

3.12 Negated Symbols

Most relations can be negated by prefixing the command with \not: \not<, \not>, \not\le, \not\ge, \not=, \not\equiv, \not\prec, \not\succ, \not\preceq, \not\succeq, \not\sim, \not\simeq, \not\subset, \not\supset, \not\subseteq, \not\supseteq, \not\approx, \not\cong, \not\sqsubseteq, \not\sqsupseteq, and \not\asymp. The one special case is ∉, which has its own command, \notin.

3.13 More Symbols

Here are some more symbols:

• ℵ is \aleph
• ∅ is \emptyset
• ∇ is \nabla
• ∂ is \partial
• ∀ is \forall
• ∃ is \exists
• ¬ is \neg
• ∠ is \angle
• ∴ is \therefore
• ℕ is \mathbb{N}
• ℚ is \mathbb{Q}
• ℝ is \mathbb{R}
• ℤ is \mathbb{Z}

For \therefore, and likewise for \mathbb{Z} and the other blackboard-bold letters, you will need to include the line \usepackage{amssymb} in your preamble.
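Putting a few of these pieces together, here is a sketch of a document that needs the amssymb package (the formula is our example):

\documentclass{article}
\usepackage{amssymb}
\begin{document}
$$ \forall x \in \mathbb{R}, \quad \lim_{n\rightarrow\infty} \sum_{k=0}^{n} \frac{x^k}{k!} = e^x $$
\end{document}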
4 Spacing

4.1 Spacing Between Words

LaTeX controls the spacing of your document, trying hard to break lines in places that are pleasing to the eye. As a consequence,

• One blank space is the same as a million blank spaces.
• Tabs are treated like blank spaces.
• Blanks at the end of a line are ignored.
• A single "Enter" is treated like a blank space.
• More than one "Enter" marks the beginning of a new paragraph.

4.2 Fine-Tuning Spacing in Math-Mode

It is possible to adjust the spacing that LaTeX uses in math mode. (I usually add a little space with \, before the differential when I write an integral.)

1. \, produces a small space
2. \: produces a medium space
3. \; produces a large space
4. \! produces a small negative space

Try them all out in math mode before continuing.

4.3 Double Spacing

There will be times when you will need to submit a draft that is double-spaced, to permit a grader or editor to make comments. LaTeX does not explicitly support doing this, because, well, it looks ugly. Still: to double-space a paper, put \renewcommand{\baselinestretch}{2} in your paper's preamble. Give it a try.

4.4 Sloppy Line Breaks

LaTeX works very hard to find an optimal line break for each line of your document. If you are not happy with its result, surround the offending paragraph with \begin{sloppypar} and \end{sloppypar} commands. Then LaTeX will not break words up but rather will allow more spacing between words in the given paragraph.

4.5 Enlarging Pages

LaTeX works very hard to find the best place to break between pages. If you are unhappy with the result, you can change it with the following two commands:

1. \newpage will force the start of a new page.
2. \enlargethispage{size} will increase the number of lines added to a page, where size is a measurement with units, such as 1in or 2cm.

5 Accents and Font Style

5.1 Accents

LaTeX can produce the following accents. (The letter "u" is only used for the purposes of this example; the accents work with any letter.)

• ù is \`{u}
• ú is \'{u}
• û is \^{u}
• ü is \"{u}
• ũ is \~{u}
• ū is \={u}
• u̇ is \.{u}
• ŭ is \u{u}
• ǔ is \v{u}
• ű is \H{u}
• u͡u is \t{uu}
• u̧ is \c{u}
• ụ is \d{u}
• u̲ is \b{u}
• ů is \r{u}

Practice: Try typing some of these accents yourself before proceeding.

5.2 Hyphenation

There are four different variations on hyphens in LaTeX: -, --, ---, and the math-mode -.

1. - (a single dash) is for hyphenating words.
2. -- (two dashes) is for ranges of numbers.
3. --- (three dashes) is for an honest-to-goodness dash between words.
4. - is a minus sign in math mode.

My cousin-in-law lived in Germany in 1995--6; he speaks French---really, he does. His favorite number is $-2$.

produces

My cousin-in-law lived in Germany in 1995–6; he speaks French—really, he does. His favorite number is −2.

Notice the difference in appearance of the four variations.

Practice: Try using all of them yourself before continuing.

5.3 The LaTeX Logo

You can typeset the LaTeX logo with the \LaTeX command. As with most commands, it consumes any space behind it, so if it isn't at the end of a sentence, use \LaTeX\ instead.

Practice: Try using the logo.

5.4 Quotation Marks

Beginning and ending quotation marks differ. In LaTeX, use `` (usually on the left side of the keyboard) to begin a quotation and '' (usually on the right side of the keyboard) to end a quotation:

She said ``three''.

produces

She said "three".

Practice: Try using quotation marks before going on.
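One line of input can exercise sections 5.1 through 5.4 at once; a sketch (the sentence is ours):

Her ``na\"{\i}ve'' notes from 1990--91---all typeset with \LaTeX---used proper quotes.

produces a sentence with curly quotation marks, an en dash in the year range, em dashes around the aside, an umlaut accent on the dotless \i of "naïve," and the LaTeX logo.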
5.5 Changing the Appearance of Words

There are many ways of changing the appearance of words to add emphasis, such as underlining, boldfacing, and italicizing. When over-used, these changes can make a document hard to read, so they should always be used sparingly. LaTeX provides underlining, boldfacing, and italicizing, but studies have shown that italicizing is most effective at stressing without distracting. Use \underline{phrase} to underline a phrase, \textbf{phrase} to print a phrase in boldface, and \emph{phrase} to italicize a phrase.

Practice: Try all three methods out now.

6 Tables, Arrays, and Lists

6.1 Constructing Arrays

To construct an array, surround the entries with a \begin{array}{justification} command and an \end{array} command. The justification should consist of l for left justification, c for centered justification, or r for right justification. Separate column entries by an &, and end each line with a \\. If your array is a matrix, you can surround it with large parentheses \left( and \right). For example:

$$ \left( \begin{array}{rcl} \alpha&\beta&\gamma\\ \delta&\epsilon&\zeta\\ \eta&\theta&\iota\\ \end{array} \right) $$

produces a parenthesized 3 × 3 matrix of the Greek letters α through ι.

Practice: Create your own document with an array with a mix of left, center, and right-justified columns.

6.2 Constructing Tables

To construct a table, surround the entries with a \begin{tabular}{justification} command and an \end{tabular} command. The justification should consist of l for left justification, c for centered justification, or r for right justification. Separate column entries by a &, and end each line with a \\. Use \hline to construct a horizontal line, and separate the l, c, and r's by a | wherever you want a vertical line. For example:

\begin{tabular}{|r|c|l|}
\hline
Right & Center & Left\\
\hline
alpha&beta&gamma\\
delta&epsilon&zeta\\
eta&theta&iota\\
\hline
\end{tabular}

produces a ruled three-column table with the headers Right, Center, and Left above the rows alpha/beta/gamma, delta/epsilon/zeta, and eta/theta/iota.

Practice: Create your own document with a table with a mix of left, center, and right-justified columns.

7 Multiline Equations

7.1 Multi-line Equations

Often, in a derivation, you will want to have a series of equations or inequalities aligned together. Surround the equations by \begin{eqnarray*} and \end{eqnarray*}. (The same command without the asterisk generates equation numbers automatically.) Surround the equals sign or inequality with &'s, and end each line with \\. Note: you do not need to use $$'s with this environment. For example,

\begin{eqnarray*}
1+2+\ldots+n &=& \frac{1}{2}((1+2+\ldots+n)+(n+\ldots+2+1))\\
&=& \frac{1}{2}\underbrace{(n+1)+(n+1)+\ldots+(n+1)}_{\mbox{$n$ copies}}\\
&=& \frac{n(n+1)}{2}\\
\end{eqnarray*}

produces the three aligned steps of the derivation of $1+2+\ldots+n = \frac{n(n+1)}{2}$, with "n copies" braced beneath the run of (n+1)'s in the middle line.

Practice: Produce your own aligned set of 5 equations.

7.2 Accents

We use several different kinds of accents in mathematics: a hat, bar, dot, and arrow over a variable all have different meanings. LaTeX uses commands that surround the variable: $\hat{a}, \dot{a}, \ddot{a}, \tilde{a}, \bar{a}, \vec{a}$ yields â, ȧ, ä, ã, ā, a⃗.

Practice: Try each of these commands out before continuing.
7.3 Bracket Symbols

Brackets, such as braces and parentheses, are used to group expressions. Without them it would be a good deal more difficult to understand complicated mathematical expressions. When working with complicated expressions, it is important for the brackets to expand to match the size of whatever they contain. In LaTeX, the way to do that is with matching \left( and \right) commands. (You can use |, {, }, [, and ] instead of parentheses. Remember that since braces are used to group in LaTeX, we have to use \{ and \}.) Since every \left needs a matching \right, it is important to have null commands \left. and \right. which do nothing. Here are some examples:

$$ \left( \begin{array}{cc} 1&2\\ 3&4\\ \end{array} \right) $$

produces the 2 × 2 matrix with rows 1 2 and 3 4, in parentheses sized to fit, while

$$ |x| = \left\{ \begin{array}{lr} -x&x\le 0\\ x&x\ge 0 \end{array} \right. $$

produces the piecewise definition of |x|, with a single brace on the left and nothing on the right.

Practice: Construct a document containing the following expression:

$$ \lim_{n\rightarrow\infty} \left( 1+\frac{x}{n} \right)^n = e^x $$

7.4 Dots

The continuation dots ... are known as an ellipsis. They occur frequently enough in mathematics for LaTeX to have four commands to typeset them with the right spacing. They are

1. \cdots for center height dots.
2. \ddots for diagonal dots, which occur in matrices.
3. \ldots for lower height dots.
4. \vdots for vertical dots.

$a_1,\ldots, a_n$ produces a₁, . . . , aₙ, and

$$ \left( \begin{array}{ccc} a_{11}&\cdots&a_{1n}\\ \vdots&\ddots&\vdots\\ a_{m1}&\cdots&a_{mn} \end{array} \right) $$

produces the general m × n matrix with \cdots across the rows, \vdots down the columns, and \ddots along the diagonal.

Practice: Construct a document using at least two different forms of dots.

7.5 Indenting

The default for a LaTeX document is to indent new paragraphs unless the paragraph follows a section heading. If you want to change the indentation, use the \indent and \noindent commands respectively, at the beginning of the paragraph in question. If you wish to choose the amount of indentation for some reason, then use the command \setlength{\parindent}{size of indentation with unit}. (I only do this to set the indentation to 0in when I want no indentation in my documentation.) Since this is a command that affects the whole document, it should go in the preamble, between the \documentclass and \begin{document} commands.

8 Text Formatting

8.1 Centering Text

By default, LaTeX will start all text at the left margin. If you want to center a title, a table, etc., surround what you want centered with the commands \begin{center} and \end{center}.

Practice: Create a document containing text that is and isn't centered.

8.2 Special Headers

A header is the text automatically included at the top of each page. If you use \pagestyle{myheadings}, then you will need some way to indicate what your heading is. The command \markright{Your Header Text Here} will do the job for you.

The name of the command \markright requires a little explanation. An option that we will not use in this tutorial is \documentclass[twoside]{article}, which produces pages formatted as in a book, i.e., with a left page and a right page. Using this option it is possible to produce different headings for the left and right pages. When using the default of one-sided pages, all pages are thought of as right pages, and we use \markright to mark our headings on these right-sided pages.

Practice: Produce a three-page document with the name of this course as your heading.
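If you get stuck, the whole custom-heading setup amounts to a few lines; a sketch (the heading text is a placeholder of ours):

\documentclass{article}
\pagestyle{myheadings}
\begin{document}
\markright{MTH 999: LaTeX Tutorial}
Text of the document...
\end{document}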
8.3 Extended Quotation

If you are going to include an extended quotation from another source, it is important to indicate the difference between the quotation and your words. The least obtrusive way to do so is to indent. In LaTeX, surround the quotation with \begin{quote} and \end{quote}.

Practice: Create a document containing regular text and containing a quotation of several paragraphs.

8.4 Bulleted Lists

To create a bulleted list, surround the information with a \begin{itemize} and an \end{itemize}, and begin each item with an \item. For example,

\begin{itemize}
\item A bulleted item.
\item Another bulleted item.
\begin{itemize}
\item A nested bulleted item.
\end{itemize}
\item You get the idea.
\end{itemize}

produces

• A bulleted item.
• Another bulleted item.
  – A nested bulleted item.
• You get the idea.

Practice: Create a document creating your own bulleted list. Have one of the items in your list itself consist of a bulleted list.

8.5 Numbered Lists

To create a numbered list, surround the information with a \begin{enumerate} and an \end{enumerate}, and begin each item with an \item. For example,

\begin{enumerate}
\item A numbered item.
\item Another numbered item.
\begin{enumerate}
\item A nested numbered item.
\end{enumerate}
\item You get the idea.
\end{enumerate}

produces

1. A numbered item.
2. Another numbered item.
  (a) A nested numbered item.
3. You get the idea.

Practice: Create a document creating your own numbered list. Have one of the items in your list itself consist of a numbered list.

8.6 Filling a Line

You can insert an arbitrary amount of space into a line with the \hspace{length} command. Here the length must include a unit, such as 1.5in or 2.3cm. If you want a spacing in a line that will push the surrounding words to the left and right margins, use the \hfill command. If instead of spacing you want either dots or a line, use \dotfill or \hrulefill, respectively.

Practice: Try all of these commands out now.

8.7 Line Breaks

LaTeX works very hard to find optimal places to split lines of text in making paragraphs. You can help it by indicating when it should avoid a line break. Use a ~ for a space that should not be used to break a line. When shouldn't you break a line?

1. Don't break a line between a title such as Mr., Ms., Dr., etc., and the name that follows it.
2. Don't break a line between a number and the units that follow it.
3. Don't break a line between the words in a name.

LaTeX inserts more space at the end of a sentence than between its words. If you use an abbreviation like Dr. in the middle of a sentence, then you need to let LaTeX know that the period is not the end of a sentence, with a ~ (if the line should not be broken there) or a \ followed by a space (if the line could be broken there).

If you need to force a line to break at a given point, use \\.

9 Bibliography and Compound Expressions

9.1 Bibliographies

For large documents requiring a good deal of revision, it can be difficult to coordinate references in the body of the document with the bibliography at its end. LaTeX provides a mechanism for automatically linking citations with items in the bibliography. Surround the bibliography with \begin{thebibliography}{9} and \end{thebibliography}. For each entry in the bibliography, start with \bibitem{label}, where label is some mnemonic for the reference. With the bibliography in place, a citation in the body of the document is made with \cite{label}, where label is the same as what occurs in the corresponding \bibitem{label}. In order to keep track of new references that have been added, you will often need to run LaTeX twice before previewing when using \cite.

What is the 9 in \begin{thebibliography}{9} for? It is a dummy number indicating how many digits to leave space for in the numbering of the bibliography. If you have 10-99 references, use \begin{thebibliography}{99} instead.
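Concretely, a one-entry bibliography might look like this sketch (the book is real; the label is our choice):

As discussed by Lamport \cite{lamport94}, ...

\begin{thebibliography}{9}
\bibitem{lamport94} Leslie Lamport. \emph{\LaTeX: A Document Preparation System}. Addison-Wesley, 1994.
\end{thebibliography}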
Practice: Create a document with a bibliography of five fake works, and cite each one at least once in your document.

10 Slides

10.1 The Slide Class

LaTeX does not want to be Microsoft PowerPoint. On the other hand, if you have mathematical formulae to display on transparencies, LaTeX is there to help with the slides document class. The slides class uses a larger font that is designed to be legible at a distance.

10.2 How to Use the Slides Class

The slides class is easy to use.

1. Start with \documentclass{slides}.
2. Surround the document with \begin{document} and \end{document} commands.
3. Surround the text that you want to appear on each slide with \begin{slide} and \end{slide} commands.
4. Preview the slides to see where best to break the material between slides.

11 Including Graphics in Your Document

11.1 Graphic File Formats

There are a number of graphics formats out there, such as:

1. bmp
2. eps
3. gif
4. jpg
5. pdf
6. ps

LaTeX works best with the postscript formats (eps, ps), which were around when the program was first created. For the other formats, LaTeX may or may not work; you would be safer trying PDFLaTeX, which will produce a PDF document.

11.2 Graphics Package

If you are going to include graphics in your document, you will need to ask LaTeX to use a package of graphics commands: place \usepackage{graphicx} in the preamble.

11.3 Including Graphics Within Your Document

You use the \includegraphics{graphicfile} command to include your graphic file in your document. If you wish to control the size of the graphic, you can also specify the height and width: \includegraphics[height=2in, width=3in]{graphicfile}.

12 Business Letters

12.1 The Letter Class

Aside from the article class, LaTeX provides a letter class for formal letters. A given file can be used to generate several letters simultaneously. To use the letter class,

1. Start with a \documentclass{letter}.
2. Include the commands that apply to all letters in the file.
3. Begin with a \begin{document} command.
4. Include the commands for each letter.
5. End with an \end{document} command.

12.2 Letter Commands for the Preamble

The following commands apply to each letter in the file:

1. \address{youraddress} for your return address.
2. \signature{yournameandtitle} for your printed name in the signature block.
3. \date{letterdate} if you want to fix the date on the letter; otherwise the date will default to the current date when the letter is printed.

12.3 Commands for Each Letter

1. Start with \begin{letter}.
2. On the next line, type the addressee's address in braces. Separate lines with \\'s.
3. Put your opening greeting in \opening{dearjohndoe}.
4. Put the text of your letter.
5. Put your closing in \closing{sincerely}.
6. If there are carbon copies, use \cc{names}.
7. If there are enclosures, use \encl{docs}.
8. If there is a postscript, use \ps.
9. End with \end{letter}.

Practice: Type a short letter.
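For reference, once you have tried the practice, a minimal letter file might look like this sketch (the names and addresses are made up):

\documentclass{letter}
\address{Campus Box 0000 \\ Elon, NC 27244}
\signature{A. Student}
\begin{document}
\begin{letter}{Dr.~J. Doe \\ 123 Main Street \\ Anytown, NC 00000}
\opening{Dear Dr.~Doe,}
Thank you for your help with my typesetting questions.
\closing{Sincerely,}
\cc{The Registrar}
\end{letter}
\end{document}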
2021-05-06 00:23:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9851533770561218, "perplexity": 319.46260832989594}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988724.75/warc/CC-MAIN-20210505234449-20210506024449-00363.warc.gz"}
https://mathvine.com/pre-algebra/dividing-absolute-value/
## Dividing Absolute Value

#### Introduction

Absolute value is a way of measuring how far a number is from zero on the number line. Absolute value is a measure of distance, not direction, so all absolute values are positive. To measure absolute value, you count how 'far' the number is from zero.

Absolute values are written with | | on either side of the number. The absolute value of x is written as | x |. Positive numbers have the same absolute value as the number's value. For example, the number 4 is 4 units away from 0, so | 4 | = 4. The absolute value of a negative number is the number without the negative sign: –5 is 5 units away from 0, so the absolute value of -5 is 5.

$|5| = 5$

$|-5| = 5$

Absolute values are always positive numbers.

#### Terms

Absolute Value - The distance of a value from 0 on a number line.

Positive - Any value greater than zero.

Negative - Any value less than zero.

## Lesson

To divide absolute values, you need to first calculate the absolute values of the terms individually. For example, in the equation below:

$|24| ÷ |-6| = ?$

We need to first calculate the two absolute values. The absolute value of | 24 | is the distance of 24 from 0, which is 24 (remember, positive values have the same absolute value). That gives us:

$24 ÷ |-6| = ?$

Next, we need to calculate the absolute value of | -6 |. -6 is 6 units away from 0 on the number line, so it has an absolute value of 6. Another way to remember this for negative numbers is to just remove the negative sign. So now we have:

$24 ÷ 6 = ?$

This is now a regular division equation. 6 goes into 24 four times, so the answer is 4.

$24 ÷ 6 = 4$

$|24| ÷ |-6| = 4$

Remember to solve the absolute values first in the order of operations; then you can solve the equation like a regular division problem.
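This rule is easy to check programmatically; here is a quick Python sketch (not part of the original lesson):

```python
def divide_absolute(a, b):
    """Divide |a| by |b|: take each absolute value first, then divide."""
    return abs(a) / abs(b)

print(divide_absolute(24, -6))  # |24| / |-6| = 24 / 6 = 4.0
```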
2020-01-19 16:36:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 7, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7456521987915039, "perplexity": 470.28586414150396}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594662.6/warc/CC-MAIN-20200119151736-20200119175736-00280.warc.gz"}
https://cran.fhcrc.org/web/packages/ergm/vignettes/Proposal-Lookup-API.html
# Summary

This document describes the process by which the ergm package and related packages select the MCMC proposal for a particular analysis. Note that it is intended not as a tutorial so much as a description of what inputs and outputs different parts of the system expect. Nor does it cover the C API.

# Description

## Inputs

There are a number of factors that can affect MCMC sampling, some of them historical and some of them new:

Globals: functions and other structures defined in an accessible namespace

• ergm_proposal_table(): a function that, if called with no arguments, returns a table of registered proposals, and updates it otherwise. See ? ergm_proposal_table for documentation and the meaning of its columns. Of particular interest is its Constraints column, which encodes which constraints the proposal does (always) enforce and which it can enforce.

• InitErgmReference.<REFERENCE>: a family of initializers for the reference distribution. For the purposes of proposal selection, among its outputs should be $name, specifying the name of the reference distribution.

• InitErgmConstraint.<CONSTRAINT>: a family of initializers for constraints, weightings, and other high-level specifiers of the proposal distribution. Hard constraints, probabilistic weights, and hints all use this API. For the purposes of proposal selection, its outputs include:
  • $constrain (defaulting to <CONSTRAINT>): a character vector specifying which constraints are enforced; it can include several semantically nested elements;
  • $dependence (defaulting to TRUE): specifying whether the constraint is dyad-dependent;
  • $priority (defaulting to Inf): specifying how important it is that the constraint is met (with Inf meaning that it must be met); and
  • $implies/$impliedby: specifying which other constraints this constraint enforces or is enforced by; this can include itself for constraints, such as edges, that can only be applied once.

Arguments: arguments and settings passed to the call or as control parameters.

• constraints= argument (top-level): A one-sided formula containing a +- or --separated list of constraints. + terms add additional constraints to the model whereas - constraints relax them. - constraints are primarily used internally for observational process estimation and are not described in detail, except to note that 1) they must be dyad-independent and 2) they necessitate falling back to the RLEBDM sampling API.

• reference= argument (top-level): A one-sided formula specifying the ERGM reference distribution, usually as a name with parameters if appropriate.

• control$MCMC.prop= control parameter: A formula whose RHS contains +-separated "hints" to the sampler; an optional LHS may contain the proposal name directly.

• control$MCMC.prop.weights= control parameter: A string selecting proposal weighting (probably deprecated).

• control$MCMC.prop.args= control parameter: A list specifying information to be passed to the proposal.

## Code Path

Most of this is implemented in the ergm_proposal.formula() method:

1. InitErgmReference.<REFERENCE> is called with arguments of reference='s LHS, obtaining the name of the reference.
2. For each term on the RHS of the following formulas, the corresponding InitErgmConstraint.<CONSTRAINT> function is called, and the outputs are stored in a list of initialized constraints (an ergm_conlist object). A .dyads pseudo-constraint is added to dyad-independent constraints (not to hints with $priority < Inf). The formulas are:
   1. constraints=
   2. MCMC.prop=
3. Constraint lists from the previous two steps are concatenated, with redundant constraints removed based on their $implies/$impliedby settings.
4. Proposal candidates returned by ergm_proposal_table() are filtered by Class, Reference, Weights (if MCMC.prop.weights differs from "default"), and Proposal (if the LHS of MCMC.prop is provided).
5. Each candidate proposal is "scored" as follows:
   1. If a proposal does enforce a constraint that is not among those requested in the constraints list, it is discarded.
   2. If a proposal cannot enforce a constraint that is among those requested with priority=Inf, it is discarded.
   3. For each constraint that is among those requested with priority<Inf and that the proposal doesn't and can't enforce, the proposal's innate Priority value (specified in the corresponding column of ergm_proposal_table()) is penalised by the priority of that constraint.
6. If there are no candidate proposals left, an error is raised.
7. If more than one is left, the proposal with the highest priority (after being penalised for unmet constraints) is selected.
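The scoring logic of steps 5-7 is compact enough to sketch in pseudocode. The following Python is a schematic re-implementation for illustration only; the data layout and names are ours, not the package's:

```python
import math

def score(proposal, requested):
    """proposal: dict with sets 'enforces' and 'can_enforce' plus an innate
    'priority'; requested: dict mapping constraint name -> priority
    (math.inf for hard constraints)."""
    s = proposal["priority"]
    for c in proposal["enforces"]:
        if c not in requested:
            return None        # enforces an unrequested constraint: discard
    for c, pri in requested.items():
        if c in proposal["enforces"] or c in proposal["can_enforce"]:
            continue           # constraint is met: no penalty
        if math.isinf(pri):
            return None        # cannot meet a hard constraint: discard
        s -= pri               # unmet hint: penalise the innate priority
    return s

def select_proposal(candidates, requested):
    scored = [(score(p, requested), p) for p in candidates]
    scored = [(s, p) for s, p in scored if s is not None]
    if not scored:
        raise RuntimeError("no proposal satisfies the requested constraints")
    return max(scored, key=lambda sp: sp[0])[1]  # highest penalised priority wins
```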
2021-11-26 23:56:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46793073415756226, "perplexity": 2839.4086578984475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358074.14/warc/CC-MAIN-20211126224056-20211127014056-00363.warc.gz"}
https://stats.stackexchange.com/questions/478609/prediction-model-of-test-scores-based-on-subjective-assessment
# Prediction model of test scores based on subjective assessment

I am trying to build a model to measure the accuracy with which supervisors can predict the outcome of test takers' scores. For example, supervisors rate test takers subjectively, before the test, based on a short interview. After the completion of the test, the test results will be compared with the prior assessment.

Would a simple regression model with the test score as the dependent variable and the subjective prior assessment score as the independent variable make sense? I see that there might be a problem with cross-correlation, but I'm having trouble understanding how. What could an appropriate model look like?
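Not part of the question, but to make the proposed baseline concrete, here is a minimal sketch in Python with statsmodels (the file and column names are hypothetical):

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("assessments.csv")        # one row per test taker
X = sm.add_constant(df["prior_rating"])    # supervisor's pre-test rating
model = sm.OLS(df["test_score"], X).fit()  # test score regressed on rating
print(model.summary())                     # slope and R^2 gauge predictive accuracy
```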
2021-07-30 11:19:10
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8566387295722961, "perplexity": 469.2386109300002}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153966.52/warc/CC-MAIN-20210730091645-20210730121645-00023.warc.gz"}
http://chemwiki.ucdavis.edu/Theoretical_Chemistry/Chemical_Bonding/General_Principles/Bond_Energies
# Bond Energies

Bond energy is a measure of the amount of energy needed to break apart one mole of covalently bonded gases. The SI unit used to describe bond energy is kilojoules per mole of bonds (kJ/mol).

### Introduction

When a chemical reaction occurs, molecular bonds are broken and other bonds are formed to make different molecules. For example, the bonds of two water molecules are broken to form hydrogen and oxygen:

$2H_2O \rightarrow 2H_2 + O_2$

Bonds do not break and form spontaneously; an energy change is required. The energy input required to break a bond is known as bond energy. While the concept may seem simple, bond energy serves a very important purpose in describing the structure and characteristics of a molecule: for instance, it can be used to determine which Lewis Dot Structure is most suitable when there are multiple candidates.

When a bond is strong, there is a higher bond energy because it takes more energy to break a strong bond. This correlates with bond order and bond length: a higher bond order means a shorter bond length, and a shorter bond length means a greater bond energy because of increased electric attraction. Think about it this way: it is easy to snap a pencil, but if you keep snapping the pieces it gets harder each time, since the length of each piece decreases. A higher bond energy (or a higher bond order, or a shorter bond length) means that a bond is less likely to break apart; in other words, it is more stable than a bond with a lower bond energy. Between multiple Lewis structures, then, the structure with the higher total bond energy is the more likely to occur.

### Bond Breakage/Formation

(The original page includes a diagram here depicting nitrogen atoms breaking apart and bonding with one another.) The breakage and formation of bonds is similar to a relationship: you can either get married or divorced, and it is more favorable to be married. Energy is released when atoms form bonds, while energy is absorbed to break bonds apart, which is why bond breaking carries a positive energy change and bond forming a negative one. It takes energy and stress to get "divorced." Atoms are much happier when they are "married" and release energy because it is easier and more stable to be in a relationship. The energy change is negative because the system gives off energy when a bond is formed.

#### Enthalpy

Enthalpy is the total change in energy in a thermodynamic system. Energy is either released or absorbed depending on the reaction that is taking place. Enthalpy is related to bond energy because an energy change is required to break bonds; more specifically, bond energy measures the energy that must be added to the system to break bonds. We can use bond energies to determine whether a reaction is endothermic or exothermic:

• If the reactants have weak bonds and the products have strong bonds, the reaction is exothermic (enthalpy change < 0). Only a small amount of energy is needed to break the bonds (smaller bond energy), and more energy is released when the strong product bonds form. A negative enthalpy change means that the system released energy.
• If the reactants have strong bonds but the products have weak bonds, it is an endothermic reaction (enthalpy change > 0). The energy required to break the reactant bonds is greater than the energy released when the product bonds form.
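That bookkeeping (energy in to break bonds, minus energy out when bonds form) is easy to automate. Here is a small Python sketch, not from the original article, using values from the table in the next section:

```python
# Average bond energies in kJ/mol, taken from the table below.
BOND_ENERGY = {"H-H": 436, "I-I": 151, "H-I": 297,
               "O=O": 498, "N:::N": 946, "N=O": 590}

def delta_h(bonds_broken, bonds_formed):
    """Enthalpy change = energy absorbed breaking bonds
    minus energy released forming bonds (per mole of reaction)."""
    absorbed = sum(BOND_ENERGY[b] * n for b, n in bonds_broken)
    released = sum(BOND_ENERGY[b] * n for b, n in bonds_formed)
    return absorbed - released

# H2(g) + I2(g) -> 2 HI(g): break one H-H and one I-I, form two H-I.
print(delta_h([("H-H", 1), ("I-I", 1)], [("H-I", 2)]))  # -7, slightly exothermic
```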
### Average Bond Energy

The same bond can appear in different molecules, but it will have a slightly different bond energy in each molecule because the other bonds in the molecule affect it. So the bond energy of C-H in methane is slightly different than the bond energy of C-H in ethane. We can calculate a more general bond energy by finding the average of the bond energies of a specific bond across different molecules. The more molecules whose bond energies are taken into consideration, the more accurate the average. Keep in mind that:

• An average bond energy is not as accurate as a specific bond-dissociation energy.
• Double bonds are higher-energy bonds than single bonds, but not twice the energy of a single bond; they are simply higher.

Average bond energies in kJ/mol (= denotes a double bond, ::: a triple bond):

| Bond | Energy | Bond  | Energy | Bond  | Energy |
|------|--------|-------|--------|-------|--------|
| H-I  | 297    | C-C   | 347    | N-N   | 163    |
| H-Br | 364    | C=C   | 611    | N=N   | 418    |
| H-S  | 368    | C:::C | 837    | N:::N | 946    |
| H-N  | 389    | C-N   | 305    | N-O   | 222    |
| H-C  | 414    | C=N   | 615    | N=O   | 590    |
| H-Cl | 431    | C:::N | 891    | O-O   | 142    |
| H-H  | 436    | C-O   | 360    | O=O   | 498    |
| H-O  | 464    | C=O   | 736    | I-I   | 151    |
| H-F  | 565    | C-Cl  | 339    | F-F   | 159    |
|      |        |       |        | Br-Br | 193    |
|      |        |       |        | Cl-Cl | 243    |

Average bond energies are the averages of bond dissociation energies. For example, the average bond energy of O-H in H2O is 464 kJ/mol. This is because the H-OH bond requires 498.7 kJ/mol to dissociate, while the remaining O-H bond needs 428 kJ/mol: (498.7 kJ/mol + 428 kJ/mol)/2 = 464 kJ/mol.

Example

Consider this reaction:

H2(g) + I2(g) → 2HI(g)

First look at the equation and determine which bonds exist: one mole of H-H bonds, one mole of I-I bonds, and two moles of H-I bonds (the coefficient 2 on HI means two moles of H-I bonds are formed).

Then examine the bond breakage, which is on the reactant side:

1 mol H-H bonds → 436 kJ/mol
1 mol I-I bonds → 151 kJ/mol

The sum is 587 kJ/mol.

Then we look at the bond formation, which is on the product side:

2 mol H-I bonds → 2 × 297 kJ/mol = 594 kJ/mol

The net change of the reaction is therefore 587 − 594 = −7 kJ/mol. Since it is a negative number, the reaction is slightly exothermic. Hess's Law relates to this calculation in that the energy of the overall reaction equals the sum of the energy changes of the individual steps.

### Problems

1. What is the definition of bond energy? When is energy released, and when is it absorbed?
2. If the bond energy for H-Cl is 431 kJ/mol, what is the overall bond energy of 2HCl?
3. Using the bond energies given in the chart above, find the enthalpy change for: O2(g) + N2(g) → 2NO(g)
4. Is the reaction written above exothermic or endothermic? Explain.
5. Which bond in this list has the highest bond energy? The lowest? H-H, H-O, H-I, H-F.

#### Solutions

1. Bond energy is the energy required to break a bond between two atoms. Energy is absorbed when a bond is broken and released when a new bond is formed.
2. Simply multiply the average bond energy of H-Cl by 2. This leaves you with 862 kJ/mol.
3. The reaction breaks one mole of O=O double bonds and one mole of N:::N triple bonds, and forms two moles of N=O bonds.

O=O bond: 498 kJ/mol
N:::N bond: 946 kJ/mol

The sum of the bonds being broken is 498 + 946 = 1444 kJ/mol. The sum of the bonds being created is 2 × 590 = 1180 kJ/mol. The enthalpy of the reaction is 1444 − 1180 = +264 kJ/mol.

4. For this question, simply look at the sign of the enthalpy of reaction you calculated. Is it positive or negative?
It is positive, so the reaction is in fact endothermic: more energy is absorbed breaking the strong O=O and N:::N bonds than is released forming the N=O bonds.

5. H-F has the highest bond energy, since the difference in electronegativity is largest there. Likewise, H-H has the lowest bond energy, since the electronegativities are the same.

### Contributors

• Kim Song (UCD), Donald Le (UCD)
2014-04-17 04:17:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38884565234184265, "perplexity": 2579.328996941233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.projecteuclid.org/euclid.aos/1176347014
The Annals of Statistics

Universal Domination and Stochastic Domination: $U$-Admissibility and $U$-Inadmissibility of the Least Squares Estimator

Abstract

Assume the standard linear model $X_{n \times 1} = A_{n \times p} \theta_{p \times 1} + \varepsilon_{n \times 1},$ where $\varepsilon$ has an $n$-variate normal distribution with zero mean vector and identity covariance matrix. The least squares estimator for the coefficient $\theta$ is $\hat{\theta} \equiv (A'A)^{-1}A'X$. It is well known that $\hat{\theta}$ is dominated by James-Stein type estimators under the sum of squared error loss $|\theta - \hat{\theta}|^2$ when $p \geq 3$. In this article we discuss the possibility of improving upon $\hat{\theta}$, simultaneously under the "universal" class of losses: $\{L(|\theta - \hat{\theta}|): L(\cdot) \text{ any nondecreasing function}\}.$ An estimator that can be so improved is called universally inadmissible ($U$-inadmissible). Otherwise it is called $U$-admissible. We prove that $\hat{\theta}$ is $U$-admissible for any $p$ when $A'A = I$. Furthermore, if $A'A \neq I$, then $\hat{\theta}$ is $U$-inadmissible if $p$ is "large enough." In a special case, $p \geq 4$ is large enough. The results are surprising. Implications are discussed.

Article information

Source: Ann. Statist., Volume 17, Number 1 (1989), 252-267.
Dates: First available in Project Euclid: 12 April 2007
Permanent link: https://projecteuclid.org/euclid.aos/1176347014
Digital Object Identifier: doi:10.1214/aos/1176347014
Mathematical Reviews number (MathSciNet): MR981448
Zentralblatt MATH identifier: 0674.62007

Citation

Brown, Lawrence D.; Hwang, Jiunn T. Universal Domination and Stochastic Domination: $U$-Admissibility and $U$-Inadmissibility of the Least Squares Estimator. Ann. Statist. 17 (1989), no. 1, 252-267. doi:10.1214/aos/1176347014. https://projecteuclid.org/euclid.aos/1176347014
2019-07-21 14:23:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6630613803863525, "perplexity": 861.2290691297331}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527000.10/warc/CC-MAIN-20190721123414-20190721145414-00413.warc.gz"}
https://www.controlbooth.com/threads/casters-will-be-the-death-of-me.4031/
# Casters will be the death of me!

#### Squeegee ##### Member So we have these soft flats that are 10' high, and we attached a piece of 3/4" plywood 2' long behind them along their width (which varies), and on the bottom we put 2 rotating casters (up against the flat) and 2 non-rotating casters (at the back of the 2' platform base). My cast and crew are complaining. The flats don't move very well and they're very loud. We made it so that the casters lift the flats off the ground just enough to move each one easily so that it actually rolls. Each also has a 5'-hypotenuse A-frame jack in the back to hold it level and at 90 degrees. Any suggestions to make them quieter/easier? #### ricc0luke ##### Active Member Get better casters. That simple. What type are you using? #### Footer ##### Senior Team Senior Team If the units are going straight on stage from the wings and back off, and they do not have to turn at all, go all fixed casters; but odds are you are going to want to move a unit around backstage or onstage in more than one plane. Your best bet in that situation is to go all swivel. Rarely do I ever mix and match casters on a unit. Remember, though, that with swivel casters half the work in getting a unit moving is moving it enough to get all the casters swung into the same direction. If you can plan out movements onstage so that the casters are already set in the direction they need to be when the unit is taken to its spike, you will be able to get it going much faster. #### Squeegee ##### Member We're doing a fairly complicated set where the flats are reused in a different scene in a different location, so they need to move around all over the place. There are about 11 of them. When I designed them, the idea was that they move like a car and you have to "steer" them: you can't just turn them, you have to push and turn them at the same time. Less noise, less money, but I didn't think it would be this much of a problem. #### jonhirsh ##### Active Member I think it has a lot to do with the action of the set piece. Sometimes, as with a gear cart, it's simpler to drive with swivels on one side and fixed on the other: x = swivel o = non-swivel x ---- o x ---- o Crude drawing, I know ... does that make sense? That setup means one person can drive and push the truck with the flat on it. #### Footer ##### Senior Team Senior Team We're doing a fairly complicated set where the flats are reused in a different scene in a different location, so they need to move around all over the place. There are about 11 of them. When I designed them, the idea was that they move like a car and you have to "steer" them: you can't just turn them, you have to push and turn them at the same time. Less noise, less money, but I didn't think it would be this much of a problem. Go all swivel; cars don't belong onstage unless you are doing Grease, and if you are doing Grease, go work somewhere else. #### Squeegee ##### Member Go all swivel; cars don't belong onstage unless you are doing Grease, and if you are doing Grease, go work somewhere else. lol. But then we'd have wasted money, and we don't like to do that. We don't like to waste it and we don't like to spend it either... I'll talk to my director about it. #### jwl868 ##### Active Member Even though it appears that you'll have limits on the movements of the pieces: I think the swivel casters are best placed in the trailing position, near the pushers, so they can push and steer from the same end [that is, the opposite of the steering in a car].
But if you have to move the piece back in the direction you just came from (or if the swivels are in the lead), then you need someone in front to steer and someone behind to push. [I'm assuming that the fixed pair is on, say, the left side, and the swivel pair is on the right, as opposed to front-back pairing.] All of the pieces do not necessarily need to have the wheels configured the same way. That is, some may have the fixed pair on the left and others may have the fixed pair on the right (or front or back, if it makes sense to use that arrangement). You may want to consider when you want or need the smoothest movement, not to mention room to maneuver in the wings, or whether there is some other critical timing. These considerations may determine what is optimum, considering your budget limitations. [For example, if you need to clear the stage quickly, then you'll want to have the casters set up so the pieces can be pushed off smoothly, with the swivels trailing. But there may be a couple of situations where you will just be stuck and you'll have to make the best of it. On the other hand, once you get the set piece movement details worked out, there won't be many problems.] [It probably won't hurt to label which end has the swivels and which end has the fixed casters.] Also, after you get the piece in position, twist the swivel casters to the direction they are to go next. Inspect the wheels (and swivels) for threads, fibers, and other crud that can jam the wheel, axle, and swivel. Joe #### Van ##### CBMod CB Mods Go all swivel; cars don't belong onstage unless you are doing Grease, and if you are doing Grease, go work somewhere else. I have to disagree. I find the greatest amount of control over a moving piece of scenery comes from the "car" scenario. Having two fixed casters on the "back" of a wagon makes for a steering setup like that of a forklift: very precise, but you have to practice and plan ahead. What diameter casters are you using? Diameter is critical for quiet performance. Another trick: find some "Lube-O-Seal"; it is a food-grade teflon grease. It works wonders on the ball bearings in swivel casters. Squirt it into the wheel bearings and into the swivel plate bearings. You'd be amazed how much noise is generated just by the action of cheap casters accommodating play in the bearing races. One more thing: add some weight to the flat. If the noise you are experiencing is a high-pitched rattle from the casters rattling around, more weight will counter that; if, however, the noise you are complaining about is the low-pitched rumble of caster over floor, go with less weight and rubber casters. Polyurethane is best. #### Squeegee ##### Member Wow. All these suggestions are amazing! We found a solution this morning while looking in our tool shed: velcro. We took the fuzzy end of the velcro (it had the glue side on the back of it) and we glued it to the bottom outsides of the wheels. That gave us a lot of insulation and silence! We also WD-40'd the axles of the wheels for the ones that squeaked. We also found that it was easier and more maneuverable if you move the flat from the front where the swivels are (obviously). It helps a lot, but it still doesn't entirely solve our problem. Our main problem now is just the timing and getting people to do things quickly. We're having mainly actors do the scene changes, so it gets kind of difficult. #### maccor ##### Member Wonder if the WD-40 and Velcro would work on the actors? #### Squeegee ##### Member It's easy to keep actors quiet.
Hand them a mirror. Gaff tape their mouths. Say it's for a "school project." #### saxman0317 ##### Active Member It's easy to keep actors quiet. Hand them a mirror. We tried... they talk more then, except then it's all about themselves. #### ship ##### Senior Team Emeritus Add some stage weight/ballast to your A-frame flats (stage jack / flat / caster unit). You could also have done a tip jack, where you tip the flat back so it's balanced on the center of the caster and rests on the floor when in position, but this arrangement, with the casters on the floor and always touching, is no doubt easier to maneuver about and keep in balance. Add some weight to the plastic casters and they should quiet down some. Rubber casters and even pneumatic ones are quiet to an extent, but other types are better for other weight loadings. On purchasing casters: they might be intimidating in theory to buy, but one should contact a caster distribution company and see what their price would be before deciding. A few years ago my own shoestring-budget theater had a great need for a lot of casters. We contacted our local source and got lots of them, of better quality and for far less than they would have cost at the home center. The home center sets its price for those buying a few of them; the caster supplier sets its price for those who want to buy a good quantity, or very specific types and ratings - big difference. The Backstage Handbook, I believe, has a really good section to aid in caster choice. On swivel versus non-swivel (smart/dumb wheels): I think adding some more weight to the bases so the casters become quieter will also make it easier to use the flats. Beyond this, it's potentially harder to steer (though easier to get it to stay where you want it), but it's something any good crew person should master and get used to by the time the show opens. I wouldn't change the casters from smart to dumb at this point. #### TheatreSM88 ##### Member So this is not really on topic, but it is about casters: don't buy "silent" casters. They are not silent; they squeak terribly. We had 4 rolling platforms with them, and, well, they don't work. #### gafftaper ##### Senior Team Senior Team Fight Leukemia Besides buying high-quality casters, just buying larger casters is also good. I don't see a mention of your caster size, but larger casters make heavy things roll more easily and quietly. Go all swivel and big casters.
2021-10-24 00:08:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38841214776039124, "perplexity": 2104.036448509034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585828.15/warc/CC-MAIN-20211023224247-20211024014247-00297.warc.gz"}
https://math.stackexchange.com/questions/3117862/can-any-boolean-expression-with-or-operators-be-converted-to-only-and-operators
# Can any boolean expression with OR operators be converted to only AND operators? I'm fairly new to Boolean algebra and I was wondering: using Boolean theorems, can any Boolean expression with OR operators in it be converted to an equivalent expression using only AND operators? Do some expressions come to a point where you have no choice but to use OR operators? For example, I have tried to simplify the following expression into only AND operators, but I don't think I'm getting the right answer: xy'z' + x'y'z = y'(xz' + x'z) = y'(xz' + (xz')') = y'(1) = y' y' isn't the right answer, as it has a different truth table from the original expression. So what am I doing wrong in my simplification? And can you convert any expression with an OR to one with only ANDs? The answer to your question is yes. You can prove it by induction on the number of OR operators, using the fact that for any two formulae $$\sigma$$ and $$\tau$$, $$\sigma \lor \tau = \lnot (\lnot \sigma \land \lnot \tau)$$. Using this substitution will always allow you to reduce the number of OR operators by one, until eventually you get down to $$0$$. This is an important observation in first-order logic because it vastly simplifies the proof of any statement that must be proved via induction on the length of a formula. In your problem, I'm assuming you're using multiplication as "and," addition as "or," and prime as "not." Then $$(x \land \lnot y \land \lnot z) \lor (\lnot x \land \lnot y \land z)=\lnot(\lnot (x \land \lnot y \land \lnot z) \land \lnot (\lnot x \land \lnot y \land z))$$ which is free of OR operators as required. (As for the attempted simplification: the error is in the step $$xz' + x'z = xz' + (xz')'$$, because $$(xz')' = x' + z$$, not $$x'z$$, so the sum does not collapse to $$1$$.) The De Morgan laws $$\neg (p \wedge q) \iff ( \neg p \lor \neg q)$$ and $$\neg (p \lor q) \iff ( \neg p \wedge \neg q)$$, together with double negation elimination $$\neg \neg p \iff p$$, may be used to convert between conjunctive and disjunctive phrases, but notice that you'll often need negation too.
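As a quick check of the identity and the OR-free rewriting above, here is a small Python truth-table enumeration (my addition, not part of the original thread):

```python
from itertools import product

for x, y, z in product([False, True], repeat=3):
    original = (x and not y and not z) or (not x and not y and z)
    # OR removed via De Morgan: p or q == not (not p and not q)
    rewritten = not (not (x and not y and not z) and not (not x and not y and z))
    assert original == rewritten
print("all 8 assignments agree")
```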
2019-09-15 13:56:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8272426724433899, "perplexity": 182.375284787425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514571360.41/warc/CC-MAIN-20190915114318-20190915140318-00528.warc.gz"}
http://mathhelpforum.com/calculus/123289-graph-intersection.html
# Math Help - Graph Intersection 1. ## Graph Intersection Considering the graphs of y = 3x + c and y^2 = 6x, where c is a real constant, determine all values of c for which the graphs intersect in two distinct points. I'm not entirely sure where to start, so any help is greatly appreciated. 2. Hello, Naples! Considering the graphs of: $\begin{Bmatrix}y \:=\: 3x + c \\ y^2 \:=\: 6x\end{Bmatrix}$ where $c$ is a real constant, determine all values of c for which the graphs intersect in two distinct points. First, find the intersections . . . The first equation gives us: $x \:=\:\frac{y-c}{3}$ Substitute into the second: $y^2 \:=\:6\left(\frac{y-c}{3}\right) \quad\Rightarrow\quad y^2 - 2y + 2c \:=\:0$ Quadratic Formula: $y \;=\;\frac{2\pm\sqrt{4-8c}}{2}$ The quadratic has two roots if the discriminant is positive: $4-8c \:>\:0$ Therefore: $-8c \:>\:-4 \quad\Rightarrow\quad\boxed{ c \:<\:\tfrac{1}{2}}$
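As a numerical sanity check of the boxed condition (added here for illustration; not part of the original thread), one can count the real intersection points for sample values of c:

```python
import math

def intersections(c):
    # Substituting x = (y - c)/3 into y^2 = 6x gives y^2 - 2y + 2c = 0.
    disc = 4 - 8 * c
    if disc < 0:
        return []                                  # no real intersection
    ys = {(2 + s * math.sqrt(disc)) / 2 for s in (1, -1)}
    return [((y - c) / 3, y) for y in ys]          # recover x from x = (y - c)/3

for c in (0.4, 0.5, 0.6):
    print(c, len(intersections(c)))                # 2 points, 1 (tangent), 0
```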
2016-02-08 04:27:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7448433637619019, "perplexity": 640.2395092019887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701152130.53/warc/CC-MAIN-20160205193912-00207-ip-10-236-182-209.ec2.internal.warc.gz"}
https://www.hpmuseum.org/forum/showthread.php?mode=linear&tid=8398&pid=73719
HP-71B back up and running again 05-25-2017, 06:50 PM Post: #1 smp Senior Member Posts: 442 Joined: Jul 2015 HP-71B back up and running again After taking some time off from hand-held computers, I recently found myself becoming interested in a Sharp PC-1500 on TAS. It was complete with the printer / cassette interface, and was advertised as working well (!). While the days went by and I pondered the situation, I found myself thinking, "I have that HP-71B, and with my PIL-Box I can back up my programs, so why not use that instead of spending more money?" So, yesterday I pulled the old boy out of storage and set off to get it up and running again. Of course there was no problem with the HP-71B, but I had a bit of a time remembering how to get the PIL-Box attached to my virtual PC running on my Macintosh. Setting the proper speed and getting it to go took a bit of effort, but I was able to prevail. Then came the inevitable confusion about how to talk to the mass storage (oh, yeah! the COPY command), then figuring out how to designate that I meant to save a file off to the mass storage, and finally figuring out whether the file was actually there on the mass storage. But, I am happy to report that I have successfully typed in my DOLLAR program in BASIC (How many ways can you make $1 from change?) and I've also successfully typed in the MAKELEX program. Both now reside in my HP-71B, and also in my TEST.DAT file on my "PC", backed up with ILPER via the PIL-Box. Now, I wonder about the whole new world of virtual ILPER devices and such. Using the printer in the ILPER program as the display is OK, but I'd like to play around with a virtual video device. Has anyone managed to get the newest version of pyILPER working on a Mac? Are there instructions around for a bonehead like me who knows nothing of what it takes to set up Python on a Mac and get this all going? Thanks very much, in advance, for your patience and advice. smp 05-25-2017, 08:45 PM Post: #2 Dave Frederickson Senior Member Posts: 2,102 Joined: Dec 2013 RE: HP-71B back up and running again (05-25-2017 06:50 PM)smp Wrote: But, I am happy to report that I have successfully typed in my DOLLAR program in BASIC (How many ways can you make $1 from change?) and I've also successfully typed in the MAKELEX program. Both now reside in my HP-71B, and also in my TEST.DAT file on my "PC", backed up with ILPER via the PIL-Box. No need to type in MAKELEX. Practically every LEX file known can be COPYied from one of the LIF archives using your PIL-Box. Quote: Now, I wonder about the whole new world of virtual ILPER devices and such. Using the printer in the ILPER program as the display is OK, but I'd like to play around with a virtual video device. Cruise over to Christoph Giesselink's Virtual HP-IL webpage. While you're there, check out Emu71. Quote: Has anyone managed to get the newest version of pyILPER working on a Mac? Are there instructions around for a bonehead like me who knows nothing of what it takes to set up Python on a Mac and get this all going? Yes and yes. Dave 05-25-2017, 10:32 PM Post: #3 smp Senior Member Posts: 442 Joined: Jul 2015 RE: HP-71B back up and running again (05-25-2017 08:45 PM)Dave Frederickson Wrote: Quote: Has anyone managed to get the newest version of pyILPER working on a Mac? Are there instructions around for a bonehead like me who knows nothing of what it takes to set up Python on a Mac and get this all going? Yes and yes.
Search the Forum for posts by Sylvain pertaining to pyILPER and the Mac. Thanks very much for your response, Dave. I took a look, as you suggested, and I've stumbled through the installation process. For some reason, my PATH= does not work, but I can invoke pyilper with ./miniconda3/bin/pyilper, and it comes up. I configured the proper /dev/usb... device, and I am connecting between pyilper and the PIL-Box. I've enabled the printer, and I can issue the command DISPLAY IS :1 for it to become my display, or I can simply PRINT to it. I've enabled the terminal, and I can use it via the command DISPLAY IS :4 (this, of course, frees up the printer to be simply the printer). I was initially stymied by the drives. I designated a file to be used, and the pyilper display properly showed the files contained within it. However, in the pyilper status, both of the drives first showed up with address 0. RESTORE IO fixed that, and now it appears that I have it all working!
2021-12-09 09:50:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25783610343933105, "perplexity": 2702.71687065985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363791.16/warc/CC-MAIN-20211209091917-20211209121917-00002.warc.gz"}
https://emc3.lmd.jussieu.fr/en/publications/peer-reviewed-papers-1/lmd_EMC31997_bib.html
# lmd_EMC31997.bib @comment{{This file has been generated by bib2bib 1.95}} @comment{{Command line: /usr/bin/bib2bib --quiet -c 'not journal:"Discussions"' -c 'not journal:"Polymer Science"' -c year=1997 -c $type="ARTICLE" -oc lmd_EMC31997.txt -ob lmd_EMC31997.bib /home/WWW/LMD/public/Publis_LMDEMC3.link.bib}} @article{1997JGR...10219413P, author = {{Peylin}, P. and {Polcher}, J. and {Bonan}, G. and {Williamson}, D.~L. and {Laval}, K.}, title = {{Comparison of two complex land surface schemes coupled to the National Center for Atmospheric Research general circulation model}}, journal = {\jgr}, keywords = {Meteorology and Atmospheric Dynamics, Meteorology and Atmospheric Dynamics: Land/atmosphere interactions, Meteorology and Atmospheric Dynamics: Climatology}, year = 1997, month = aug, volume = 102, pages = {19413}, abstract = {{Two climate simulations with the National Center for Atmospheric Research general circulation model (version CCM2) coupled either to the Biosphere Atmosphere Transfer Scheme (BATS) or to Sechiba land surface scheme are compared. Both parameterizations of surface-atmosphere exchanges may be considered as complex but represent the soil hydrology and the role of vegetation in very different ways. The global impact of the change in land surface scheme on the simulated climate appears to be small. Changes are smaller than those obtained when comparing either one of these schemes to the fixed hydrology used in the standard CCM2. Nevertheless, at the regional scale, changing the land-surface scheme can have a large impact on the local climate. As one example, we detail how circulation patterns are modified above the Tibetan plateau during the monsoon season. Elsewhere, mainly over land, changes can also be important. In the tropics, during the dry season, Sechiba produces warmer surface temperatures than does BATS. This warming arises from differences in the soil hydrology, both storage capacity and the dynamics of soil water transport. Over the Tundra biotype, the formulation of the transpiration induces significant differences in the energy balance. }}, doi = {10.1029/97JD00489}, adsurl = {http://adsabs.harvard.edu/abs/1997JGR...10219413P}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{1997JCli...10.2055B, author = {{Bony}, S. and {Lau}, K.-M. and {Sud}, Y.~C.}, title = {{Sea Surface Temperature and Large-Scale Circulation Influences on Tropical Greenhouse Effect and Cloud Radiative Forcing.}}, journal = {Journal of Climate}, year = 1997, month = aug, volume = 10, pages = {2055-2077}, abstract = {{Two independent sets of meteorological reanalyses are used to investigate relationships between the tropical sea surface temperature (SST) and the large-scale vertical motion of the atmosphere for spatial and seasonal variations, as well as for El Ni{\~n}o/La Ni{\~n}a episodes of 1987-88. Supergreenhouse effect (SGE) situations are found to be linked to the occurrence of enhanced large-scale rising motion associated with increasing SST. In regions where the large-scale atmospheric motion is largely decoupled from the local SST due to internal or remote forcings, the SGE occurrence is weak. On seasonal and interannual timescales, such regions are found mainly over equatorial regions of the Indian Ocean and western Pacific, especially for SSTs exceeding 29.5{\deg}C.
In these regions, the activation of feedback processes that regulate the ocean temperature is thus likely to be more related to the large-scale remote processes, such as those that govern the monsoon circulations and the low-frequency variability of the atmosphere, than to the local SST change.The relationships among SST, clouds, and cloud radiative forcing inferred from satellite observations are also investigated. In large-scale subsidence regimes, regardless of the SST range, the cloudiness, the cloud optical thickness, and the shortwave cloud forcing decrease with increasing SST. In convective regions maintained by the large-scale circulation, the strong dependence of both the longwave (LW) and shortwave (SW) cloud forcing on SST mainly results from changes in the large-scale vertical motion accompanying the SST changes. Indeed, for a given large-scale rising motion, the cloud optical thickness decreases with SST, and the SW cloud forcing remains essentially unaffected by SST changes. However, the LW cloud forcing still increases with SST because the detrainment height of deep convection, and thus the cloud-top altitude, tend to increase with SST. The dependence of the net cloud radiative forcing on SST may thus provide a larger positive climate feedback when the ocean warming is associated with weak large-scale circulation changes than during seasonal or El Ni{\~n}o variations. }}, doi = {10.1175/1520-0442(1997)010<2055:SSTALS>2.0.CO;2}, adsurl = {http://adsabs.harvard.edu/abs/1997JCli...10.2055B}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{1997JGR...10216593C, author = {{Cess}, R.~D. and {Zhang}, M.~H. and {Potter}, G.~L. and {Alekseev}, V. and {Barker}, H.~W. and {Bony}, S. and {Colman}, R.~A. and {Dazlich}, D.~A. and {Del Genio}, A.~D. and {DéQué}, M. and {Dix}, M.~R. and {Dymnikov}, V. and {Esch}, M. and {Fowler}, L.~D. and {Fraser}, J.~R. and {Galin}, V. and {Gates}, W.~L. and {Hack}, J.~J. and {Ingram}, W.~J. and {Kiehl}, J.~T. and {Kim}, Y. and {Le Treut}, H. and {Liang}, X.-Z. and {McAvaney}, B.~J. and {Meleshko}, V.~P. and {Morcrette}, J.~J. and {Randall}, D.~A. and {Roeckner}, E. and {Schlesinger}, M.~E. and {Sporyshev}, P.~V. and {Taylor}, K.~E. and {Timbal}, B. and {Volodin}, E.~M. and {Wang}, W. and {Wang}, W.~C. and {Wetherald}, R.~T. }, title = {{Comparison of the seasonal change in cloud-radiative forcing from atmospheric general circulation models and satellite observations}}, journal = {\jgr}, keywords = {Meteorology and Atmospheric Dynamics: Climatology, Meteorology and Atmospheric Dynamics: Numerical modeling and data assimilation, Meteorology and Atmospheric Dynamics: Radiative processes}, year = 1997, month = jul, volume = 102, pages = {16593}, abstract = {{We compare seasonal changes in cloud-radiative forcing (CRF) at the top of the atmosphere from 18 atmospheric general circulation models, and observations from the Earth Radiation Budget Experiment (ERBE). To enhance the CRF signal and suppress interannual variability, we consider only zonal mean quantities for which the extreme months (January and July), as well as the northern and southern hemispheres, have been differenced. Since seasonal variations of the shortwave component of CRF are caused by seasonal changes in both cloudiness and solar irradiance, the latter was removed. In the ERBE data, seasonal changes in CRF are driven primarily by changes in cloud amount. The same conclusion applies to the models. 
The shortwave component of seasonal CRF is a measure of changes in cloud amount at all altitudes, while the longwave component is more a measure of upper level clouds. Thus important insights into seasonal cloud amount variations of the models have been obtained by comparing both components, as generated by the models, with the satellite data. For example, in 10 of the 18 models the seasonal oscillations of zonal cloud patterns extend too far poleward by one latitudinal grid. }}, doi = {10.1029/97JD00927}, adsurl = {http://adsabs.harvard.edu/abs/1997JGR...10216593C}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{1997JGR...10213731K, author = {{Krinner}, G. and {Genthon}, C. and {Li}, Z.-X. and {Le van}, P. }, title = {{Studies of the Antarctic climate with a stretched-grid general circulation model}}, journal = {\jgr}, keywords = {Meteorology and Atmospheric Dynamics: Polar meteorology, Meteorology and Atmospheric Dynamics: General circulation, Hydrology: Glaciology, Hydrology: Snow and ice}, year = 1997, month = jun, volume = 102, pages = {13731}, abstract = {{A stretched-grid general circulation model (GCM), derived from the Laboratoire de Météorologie Dynamique (LMD) GCM is used for a multiyear high-resolution simulation of the Antarctic climate. The resolution in the Antarctic region reaches 100 km. In order to correctly represent the polar climate, it is necessary to implement several modifications in the model physics. These modifications mostly concern the parameterizations of the atmospheric boundary layer. The simulated Antarctic climate is significantly better in the stretched-grid simulation than in the regular-grid control run. The katabatic wind regime is well captured, although the winds may be somewhat too weak. The annual snow accumulation is generally close to the observed values, although local discrepancies between the simulated annual accumulation and observations remain. The simulated continental mean annual accumulation is 16.2 cm y$^{-1}$. Features like the surface temperature and the temperature inversion over large parts of the continent are correctly represented. The model correctly simulates the atmospheric dynamics of the rest of the globe. }}, doi = {10.1029/96JD03356}, adsurl = {http://adsabs.harvard.edu/abs/1997JGR...10213731K}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{1997JCli...10.1441B, author = {{Bony}, S. and {Sud}, Y. and {Lau}, K.~M. and {Susskind}, J. and {Saha}, S.}, title = {{Comparison and Satellite Assessment of NASA/DAO and NCEP-NCAR Reanalyses over Tropical Ocean: Atmospheric Hydrology and Radiation.}}, journal = {Journal of Climate}, year = 1997, month = jun, volume = 10, pages = {1441-1462}, abstract = {{This study compares the atmospheric reanalyses that have been produced independently at the Data Assimilation Office (DAO) of Goddard Laboratory for Atmospheres and at the National Centers for Environmental Prediction (NCEP). These reanalyses were produced by using a frozen state-of-the-art version of the global data assimilation system developed at these two centers. For the period 1987-88 and for the tropical oceanic regions of 30{\deg}S-30{\deg}N, surface and atmospheric fields related to atmospheric hydrology and radiation are compared and assessed, wherever possible, with satellite data. 
Some common biases as well as discrepancies between the two independent reassimilation products are highlighted. Considering both annual averages and interannual variability (1987-88), discrepancies between DAO and NCEP reanalysis in water vapor, precipitation, and clear-sky longwave radiation at the top of the atmosphere are generally smaller than discrepancies that exist between corresponding satellite estimates. Among common biases identified in the reanalyses, the authors note an underestimation of the total precipitable water and an overestimation of the shortwave cloud radiative forcing in warm convective regions. Both lead to an underestimation of the surface radiation budget. The authors also note an overestimation of the clear-sky outgoing longwave radiation in most tropical ocean regions, as well as an overestimation of the longwave radiative cooling at the ocean surface. Surface latent and sensible heat fluxes differ by about 20 and 3 W m$^{-2}$, respectively, in the two reanalyses. Differences in the surface radiation budget are larger than the uncertainties of satellite-based estimates. Biases in the surface radiation fluxes derived from the reanalyses are primarily due to incorrect shortwave cloud radiative forcing and, to a lesser degree, due to a deficit in the total precipitable water and a cold bias at lower-tropospheric temperatures. This study suggests that individual features and biases of each set of reanalyses should be carefully studied, especially when using analyzed surface fluxes to force other physical or geophysical models such as ocean circulation models. Over large regions of the tropical oceans, DAO and NCEP reanalyses produce surface net heat fluxes that can differ by up to 50 W m$^{-2}$ on average and by a factor of 2 when considering interannual anomalies. This may lead to vastly different thermal forcings for driving ocean circulations. }}, doi = {10.1175/1520-0442(1997)010<1441:CASAON>2.0.CO;2}, adsurl = {http://adsabs.harvard.edu/abs/1997JCli...10.1441B}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{1997JCli...10.1194C, author = {{Chen}, T.~H. and {Henderson-Sellers}, A. and {Milly}, P.~C.~D. and {Pitman}, A.~J. and {Beljaars}, A.~C.~M. and {Polcher}, J. and {Abramopoulos}, F. and {Boone}, A. and {Chang}, S. and {Chen}, F. and {Dai}, Y. and {Desborough}, C.~E. and {Dickinson}, R.~E. and {D{\"u}menil}, L. and {Ek}, M. and {Garratt}, J.~R. and {Gedney}, N. and {Gusev}, Y.~M. and {{\nbsp}Kim}, J. and {{\nbsp}Koster}, R. and {{\nbsp}Kowalczyk}, E.~A. and {{\nbsp}Laval}, K. and {{\nbsp}Lean}, J. and {{\nbsp}Lettenmaier}, D. and {{\nbsp}Liang}, X. and {{\nbsp}Mahfouf}, J.-F. and {{\nbsp}Mengelkamp}, H.-T. and {{\nbsp}Mitchell}, K. and {{\nbsp}Nasonova}, O.~N. and {{\nbsp}Noilhan}, J. and {{\nbsp}Robock}, A. and {{\nbsp}Rosenzweig}, C. and {{\nbsp}Schaake}, J. and {{\nbsp}Schlosser}, C.~A. and {{\nbsp}Schulz}, J.-P. and {{\nbsp}Shao}, Y. and {{\nbsp}Shmakin}, A.~B. and {{\nbsp}Verseghy}, D.~L. and {{\nbsp}Wetzel}, P. and {{\nbsp}Wood}, E.~F. and {{\nbsp}Xue}, Y. and {{\nbsp}Yang}, Z.-L.
and {{\nbsp}Zeng}, Q.}, title = {{Cabauw Experimental Results from the Project for Intercomparison of Land-Surface Parameterization Schemes.}}, journal = {Journal of Climate}, year = 1997, month = jun, volume = 10, pages = {1194-1215}, abstract = {{In the Project for Intercomparison of Land-Surface Parameterization Schemes phase 2a experiment, meteorological data for the year 1987 from Cabauw, the Netherlands, were used as inputs to 23 land-surface flux schemes designed for use in climate and weather models. Schemes were evaluated by comparing their outputs with long-term measurements of surface sensible heat fluxes into the atmosphere and the ground, and of upward longwave radiation and total net radiative fluxes, and also comparing them with latent heat fluxes derived from a surface energy balance. Tuning of schemes by use of the observed flux data was not permitted. On an annual basis, the predicted surface radiative temperature exhibits a range of 2 K across schemes, consistent with the range of about 10 W m$^{-2}$ in predicted surface net radiation. Most modeled values of monthly net radiation differ from the observations by less than the estimated maximum monthly observational error ({\plusmn}10 W m$^{-2}$). However, modeled radiative surface temperature appears to have a systematic positive bias in most schemes; this might be explained by an error in assumed emissivity and by models' neglect of canopy thermal heterogeneity. Annual means of sensible and latent heat fluxes, into which net radiation is partitioned, have ranges across schemes of 30 W m$^{-2}$ and 25 W m$^{-2}$, respectively. Annual totals of evapotranspiration and runoff, into which the precipitation is partitioned, both have ranges of 315 mm. These ranges in annual heat and water fluxes were approximately halved upon exclusion of the three schemes that have no stomatal resistance under non-water-stressed conditions. Many schemes tend to underestimate latent heat flux and overestimate sensible heat flux in summer, with a reverse tendency in winter. For six schemes, root-mean-square deviations of predictions from monthly observations are less than the estimated upper bounds on observation errors (5 W m$^{-2}$ for sensible heat flux and 10 W m$^{-2}$ for latent heat flux). Actual runoff at the site is believed to be dominated by vertical drainage to groundwater, but several schemes produced significant amounts of runoff as overland flow or interflow. There is a range across schemes of 184 mm (40\% of total pore volume) in the simulated annual mean root-zone soil moisture. Unfortunately, no measurements of soil moisture were available for model evaluation. A theoretical analysis suggested that differences in boundary conditions used in various schemes are not sufficient to explain the large variance in soil moisture. However, many of the extreme values of soil moisture could be explained in terms of the particulars of experimental setup or excessive evapotranspiration. }}, doi = {10.1175/1520-0442(1997)010<1194:CERFTP>2.0.CO;2}, adsurl = {http://adsabs.harvard.edu/abs/1997JCli...10.1194C}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{1997JApMe..36..664G, author = {{Giraud}, V. and {Buriez}, J.~C. and {Fouquart}, Y. and {Parol}, F.
and {Seze}, G.}, title = {{Large-Scale Analysis of Cirrus Clouds from AVHRR Data: Assessment of Both a Microphysical Index and the Cloud-Top Temperature.}}, journal = {Journal of Applied Meteorology}, year = 1997, month = jun, volume = 36, pages = {664-675}, abstract = {{An algorithm that allows an automatic analysis of cirrus properties from Advanced Very High Resolution Radiometer (AVHRR) observations is presented. Further investigations of the information content and physical meaning of the brightness temperature differences (BTD) between channels 4 (11 $\mu$m) and 5 (12 $\mu$m) of the radiometer have led to the development of an automatic procedure to provide global estimates both of the cirrus cloud temperature and of the ratio of the equivalent absorption coefficients in the two channels, accounting for scattering effects. The ratio is useful since its variations are related to differences in microphysical properties. Assuming that cirrus clouds are composed of ice spheres, the effective diameter of the particle size distribution can be deduced from this microphysical index. The automatic procedure includes, first, a cloud classification and a selection of the pixels corresponding to the envelope of the BTD diagram observed at a scale of typically 100 {\times} 100 pixels. The classification, which uses dynamic cluster analysis, takes into account spectral and spatial properties of the AVHRR pixels. The selection is made through a series of tests, which also guarantees that the BTD diagram contains the necessary information, such as the presence of both cirrus-free pixels and pixels totally covered by opaque cirrus in the same area. Finally, the cloud temperature and the equivalent absorption coefficient ratio are found by fitting the envelope of the BTD diagram with a theoretical curve. Note that the method leads to the retrieval of the maximum value of the equivalent absorption coefficient ratio in the scene under consideration. This, in turn, corresponds to the minimum value of the effective diameter of the size distribution of equivalent Mie particles. The automatic analysis has been applied to a series of 21 AVHRR images acquired during the International Cirrus Experiment (ICE'89). Although the dataset is obviously much too limited to draw any conclusion at the global scale, it is large enough to permit derivation of cirrus properties that are statistically representative of the cirrus systems contained therein. The authors found that on average, the maximum equivalent absorption coefficient ratio increases with the cloud-top temperature with a jump between 235 and 240 K. More precisely, for cloud temperatures warmer than 235 K, the retrieved equivalent absorption coefficient ratio sometimes corresponds to very small equivalent spheres (diameter smaller than 20 $\mu$m). This is never observed for lower cloud temperatures. This change in cirrus microphysical properties points out that ice crystal habits may vary from one temperature regime to another. It may be attributed to a modification of the size and/or shape of the particles. }}, doi = {10.1175/1520-0450-36.6.664}, adsurl = {http://adsabs.harvard.edu/abs/1997JApMe..36..664G}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{1997AdSpR..19.1213R, author = {{Read}, P.~L. and {Collins}, M. and {Forget}, F. and {Fournier}, R. and {Hourdin}, F. and {Lewis}, S.~R. and {Talagrand}, O. and {Taylor}, F.~W.
and {Thomas}, N.~P.~J.}, title = {{A GCM climate database for Mars: for mission planning and for scientific studies}}, journal = {Advances in Space Research}, year = 1997, month = may, volume = 19, pages = {1213-1222}, abstract = {{The construction of a new database of statistics on the climate and environment of the Martian atmosphere is currently under way, with the support of the European Space Agency. The primary objectives of this database are to provide information for mission design specialists on the mean state and variability of the Martian environment in unprecedented detail, through the execution of a set of carefully validated simulations of the Martian atmospheric circulation using comprehensive numerical general circulation models. The formulation of the models used are outlined herein, noting especially new improvements in various schemes to parametrize important physical processes, and the scope of the database to be constructed is described. A novel approach towards the representation of large-scale variability in the output of the database using empirical eigenfunctions derived from statistical analyses of the numerical simulations, is also discussed. It is hoped that the resulting database will be of value for both scientific and engineering studies of Mars' atmosphere and near-surface environment. }}, doi = {10.1016/S0273-1177(97)00272-X}, adsurl = {http://adsabs.harvard.edu/abs/1997AdSpR..19.1213R}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{1997JCli...10..381L, author = {{Lau}, K.-M. and {Wu}, H.-T. and {Bony}, S.}, title = {{The Role of Large-Scale Atmospheric Circulation in the Relationship between Tropical Convection and Sea Surface Temperature.}}, journal = {Journal of Climate}, year = 1997, month = mar, volume = 10, pages = {381-392}, abstract = {{In this paper, the authors study the influence of the large-scale atmospheric circulation on the relationship between sea surface temperature (SST) and tropical convection inferred from outgoing longwave radiation (OLR). They find that under subsidence and clear sky conditions there is an increase in OLR with respect to SST at a rate of 1.8-2.5 W m$^{-2}$ ({\deg}C)$^{-1}$. In regions of large-scale ascending motions, which is correlated to, but not always collocated with, regions of warm water, there is a large reduction of OLR with respect to SST associated with increase in deep convection. The rate of OLR reduction is found to be a strong function of the large-scale motion field. The authors find an intrinsic OLR sensitivity to SST of approximately 4 to 5 W m$^{-2}$ ({\deg}C)$^{-1}$ in the SST range of 27{\deg}-28{\deg}C, under conditions of weak large-scale circulation. Under the influence of strong ascending motion, the rate can be increased to 15 to 20 W m$^{-2}$ ({\deg}C)$^{-1}$ for the same SST range. The above OLR-SST relationships are strongly dependent on geographic locations. On the other hand, deep convection and large-scale circulation exhibit a nearly linear relationship that is less dependent on SST and geographic locations. The above results are supported by regression analyses. In addition, they find that on interannual timescales, the relationship between OLR and SST is dominated by the large-scale circulation and SST changes associated with the El Ni{\~n}o-Southern Oscillation.
The relationship between anomalous convection and local SST is generally weak everywhere except in the equatorial central Pacific, where large-scale circulation and local SST appear to work together to produce the observed OLR-SST sensitivity. Over the equatorial central Pacific, approximately 45\%-55\% of the OLR variance can be explained by the large-scale circulation and 15\%-20\% by the local SST. Their results also show that there is no fundamental microphysical or thermodynamical significance to the so-called SST threshold at approximately 27{\deg}C, except that it represents a transitional SST between clear-sky/subsiding and convective/ascending atmospheric conditions. Depending on the ambient large-scale motion associated with basin-scale SST distribution, this transitional SST can occur in a range from 25.5{\deg} to 28{\deg}C. Similarly, there is no magic to the 29.5{\deg}C SST, beyond which convection appears to decrease with SST. The authors find that under the influence of strong large-scale rising motion, convection does not decrease but increases monotonically with SST even at SST higher than 29.5{\deg}C. The reduction in convection is likely to be influenced by large-scale subsidence forced by nearby or remotely generated deep convection. }}, doi = {10.1175/1520-0442(1997)010<0381:TROLSA>2.0.CO;2}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{1997GeoRL..24..147R, author = {{Roca}, R. and {Picon}, L. and {Desbois}, M. and {Le Treut}, H. and {Morcrette}, J.-J.}, title = {{Direct comparison of meteosat water vapor channel data and general circulation model results}}, journal = {\grl}, keywords = {Oceanography: Physical: General circulation}, year = 1997, volume = 24, pages = {147-150}, abstract = {{Following a model to satellite approach, this study points out the ability of the general circulation model (GCM) of the Laboratoire de Météorologie Dynamique to reproduce the observed relationship between tropical convection and subtropical moisture in the upper troposphere. Those parameters are characterized from Meteosat water vapor equivalent brightness temperatures (WVEBT) over a monthly scale. The simulated WVEBT field closely resembles the observed distribution. The pure water vapor features and the convective areas are well located and their seasonal variations are captured by the model. A dry (moist) bias is found over convective (subsiding) areas, whereas the model generally performs better over the Atlantic Ocean than over Africa. The observed and simulated seasonal variations show that an extension of the ITCZ is correlated to a moistening of the upper troposphere in subtropical areas. Those results imply a positive large scale relationship between convective and subsiding areas in both observation and simulation, and suggest the relevance of our approach for further climatic studies. }}, doi = {10.1029/96GL03923}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{1997ClDy...13..429L, author = {{Li}, Z.-X. and {Ide}, K. and {Treut}, H.~L. and {Ghil}, M.}, title = {{Atmospheric radiative equilibria in a simple column model}}, journal = {Climate Dynamics}, year = 1997, volume = 13, pages = {429-440}, abstract = {{An analytic radiative-equilibrium model is formulated where both short- and longwave radiation are treated as two-stream (down- and upward) fluxes. An equilibrium state is defined in the model by the vertical temperature profile.
The sensitivity of any such state to the model atmosphere's optical properties is formulated analytically. As an example, this general formulation is applied to a single-column 11-layer model, and the model's optical parameters are obtained from a detailed radiative parametrization of a general circulation model. The resulting simple column model is then used to study changes in the Earth-atmosphere system's radiative equilibrium and, in particular, to infer the role of greenhouse trace gases, water vapor and aerosols in modifying the vertical temperature profile. Multiple equilibria appear when a positive surface-albedo feedback is introduced, and their stability is studied. The vertical structure of the radiative fluxes (both short- and longwave) is substantially modified as the temperature profile changes from one equilibrium to another. These equilibria and their stability are compared to those that appear in energy-balance models, which heretofore have ignored the details of the vertical }}, doi = {10.1007/s003820050175}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} }
2021-01-20 17:41:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6478254199028015, "perplexity": 10749.13690050147}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703521139.30/warc/CC-MAIN-20210120151257-20210120181257-00350.warc.gz"}
https://questions.examside.com/past-years/jee/question/pthe-magnetic-flux-through-a-coil-perpendicular-to-its-pla-jee-main-physics-motion-zxkfq2oiozxeri3g
1 JEE Main 2022 (Online) 26th June Morning Shift +4 -1 The magnetic flux through a coil perpendicular to its plane is varying according to the relation $$\phi = (5{t^3} + 4{t^2} + 2t - 5)$$ weber. If the resistance of the coil is 5 ohm, then the induced current through the coil at t = 2 s will be: A 15.6 A B 16.6 A C 17.6 A D 18.6 A 2 JEE Main 2022 (Online) 26th June Morning Shift +4 -1 If the electric field intensity of a uniform plane electromagnetic wave is given as $$E = - 301.6\sin (kz - \omega t){\widehat a_x} + 452.4\sin (kz - \omega t){\widehat a_y}{V \over m}$$, then the magnetic intensity 'H' of this wave in A m$$^{-1}$$ will be : [Given : Speed of light in vacuum $$c = 3 \times {10^8}$$ m s$$^{-1}$$, permeability of vacuum $${\mu _0} = 4\pi \times {10^{ - 7}}$$ N A$$^{-2}$$] A $$+ 0.8\sin (kz - \omega t){\widehat a_y} + 0.8\sin (kz - \omega t){\widehat a_x}$$ B $$+ 1.0 \times {10^{ - 6}}\sin (kz - \omega t){\widehat a_y} + 1.5 \times {10^{ - 6}}(kz - \omega t){\widehat a_x}$$ C $$- 0.8\sin (kz - \omega t){\widehat a_y} - 1.2\sin (kz - \omega t){\widehat a_x}$$ D $$- 1.0 \times {10^{ - 6}}\sin (kz - \omega t){\widehat a_y} - 1.5 \times {10^{ - 6}}\sin (kz - \omega t){\widehat a_x}$$ 3 JEE Main 2022 (Online) 25th June Evening Shift +4 -1 A sinusoidal voltage V(t) = 210 sin 3000t volt is applied to a series LCR circuit in which L = 10 mH, C = 25 $$\mu$$F and R = 100 $$\Omega$$. The phase difference ($$\Phi$$) between the applied voltage and the resultant current will be : A $$\tan^{-1}(0.17)$$ B $$\tan^{-1}(9.46)$$ C $$\tan^{-1}(0.30)$$ D $$\tan^{-1}(13.33)$$ 4 JEE Main 2022 (Online) 25th June Evening Shift +4 -1 Electromagnetic waves travel in a medium at a speed of $$2.0 \times 10^8$$ m/s. The relative permeability of the medium is 1.0. The relative permittivity of the medium will be : A 2.25 B 4.25 C 6.25 D 8.25
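A quick symbolic check of the first question (added for illustration; the sympy sketch is mine, not from the exam page): the induced EMF is the time derivative of the flux, and the current follows from Ohm's law.

```python
import sympy as sp

t = sp.symbols('t')
phi = 5*t**3 + 4*t**2 + 2*t - 5    # flux in weber
emf = sp.diff(phi, t)              # induced EMF = d(phi)/dt = 15t^2 + 8t + 2
current = emf.subs(t, 2) / 5       # R = 5 ohm, I = EMF / R
print(current)                     # 78/5 = 15.6 A, i.e. option A
```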
2023-03-27 11:10:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6594374179840088, "perplexity": 1862.8857306031412}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948620.60/warc/CC-MAIN-20230327092225-20230327122225-00235.warc.gz"}
http://stm.las.ac.cn/STMonitor/home/semanticannotation.htm?id=7198483&parentPageId=1570738168137&serverId=50
Dewatering Behavior of a Wood-Cellulose Nanofibril Particulate System

Abstract. The novel use of aqueous suspensions of cellulose nanofibrils (CNF) as an adhesive/binder in lignocellulosic-based composite manufacture requires the removal of a considerable amount of water from the furnish during processing, necessitating thorough understanding of the dewatering behavior referred to as "contact dewatering". The dewatering behavior of a wood-CNF particulate system (wet furnish) was studied through pressure filtration tests, centrifugation, and characterization of hard-to-remove (HR) water, i.e. moisture content in the wet furnish at the transition between the constant rate part and the falling rate part of the evaporative change in mass from an isothermal thermogravimetric analysis (TGA). The effect of wood particle size, and thereby particle specific surface area, on the dewatering performance of wet furnish was investigated. Permeability coefficients of wet furnish during pressure filtration experiments were also determined based on Darcy's law for volumetric flow through a porous medium. Results revealed that specific particle surface area has a significant effect on the dewatering of wet furnish, where the dewatering rate significantly increased at higher specific particle surface area levels. While the permeability of the systems decreased over time in almost all cases, the most significant portion of dewatering occurred at very early stages of dewatering (less than 200 seconds), leading to a considerable increase in instantaneous dewatering when CNF particles come in contact with wood particles.

Introduction. Cellulose nanofibrils (CNF) have received a tremendous level of attention over the past few years as potential binders, reinforcing fillers, paper coatings, oxygen barrier films, and filaments, attributable to the unprecedented specific strength of the individual nanofibrils, low density, superb adhesion properties, chemically tunable surface functionality, renewability, and biological abundance of a material obtained from sustainable resources. Finding novel applications which can highly benefit from the outstanding intrinsic properties of CNF has been the subject of numerous recent studies [1-9]. CNF consists of nano- and micro-scale cellulosic fibers suspended in water and is mostly available in the form of a low-consistency (less than 5 wt.%) aqueous suspension. It offers excellent adhesion properties attributed to a very high specific surface area and a vast number of hydroxyl groups available on the cellulosic surfaces, which make this type of material a superior candidate for many different applications [1,10]. The utilization of CNF as well as lignin-containing CNF (LCNF) as binders in the formulation of particleboards and medium density fiberboards has been reported [2,11-15]. Potential applications of CNF as a binder for the production of laminated papers [16], reinforcing natural fiber yarns [17], and self-assembly processes [3,18] have been recently proposed. The current processing technology to produce composite panels using CNF or LCNF as binder consists of a dewatering process followed by drying in a hot press [2,13-15]. To shorten press cycles and save energy, the majority of the water present in the mixture of wood particles and CNF (hereafter "furnish" or "mattress") must be mechanically removed prior to hot pressing in an efficient manner.
Therefore, understanding and controlling the water removal behavior of the CNF suspension, both on its own and mixed with other materials, is a critical step in optimizing the production process. The terms "dewatering" and "drainage" herein refer to liquid (assumed to be only water) removal from solid-liquid mixtures during a filtration process. The material structure that forms as dewatering progresses is referred to as the "filter cake". To date, the dewatering behavior of cellulosic suspensions and furnishes has been studied by many researchers, mostly through filtration theory, rheological theory, or a combination of the two [19-29]. Paradis et al. [19] used a modified dewatering apparatus equipped with a cone-and-plate rheometer to determine the drainage resistance coefficient of different grades of paper-making stock under a known shear condition. The influence of shear rate on drainage resistance was also investigated, which showed that the drainage rate changes as a result of the change in the characteristics of the filter cake as drainage progresses.

Dimic-Misic et al. [20,21] studied the effect of shear stress as well as swelling (expressed as the water retention value at a relatively low consistency) of micro- and nanofibrillated cellulose (MNFC) on the dewatering behavior of cellulose furnishes. It was found that the nanofibrillar suspension added to the pulp-pigment particle furnish predominantly governs the rheological and dewatering responses. Highly swollen nanofibrillated cellulose was shown to dewater with significant difficulty owing to ultrafine fibrils plugging the bottom layer of the filter cake. A noticeable gel-like structure as well as shear-thinning behavior (i.e., a decrease in viscosity with increasing shear rate) was seen for all the MNFC suspensions and furnishes; thus more efficient dewatering could be attained at higher shear rates.

The influence of CNF flocculation upon charge neutralization by the addition of salt on the dewatering ability of CNF suspensions was investigated using a pressure dewatering method, and it was determined that the dewatering ability of the CNF suspension is affected by the type and concentration of the salt [22]. Rantanen et al. [23] studied the effect of adding MNFC to the formulation of high-filler-content composite paper in the web dewatering process using a gravimetric dewatering evaluation. The results revealed that increasing MNFC fibrillation decreased the dewatering performance; however, this could be tuned by in situ precipitation of precipitated calcium carbonate (PCC) to achieve a desirable combination of strength and processing performance. Further assessments have been made to enhance the dewatering capability of MNFC suspensions and furnishes under an ultra-low shear rate (approx. 0.01 s−1), including the addition of colloidally unstable mineral particles (such as undispersed calcium carbonate), acid dissociation of the surface water bound to the cellulose nanofibrils by adding ultrafine calcium carbonate nanoparticles, and controlling the rheological properties with respect to fibril length and aspect ratio [24-26].

Clayton et al. [27] studied the dewatering mechanisms of a range of biomaterials, including lignite, bio-solids, and bagasse, through mechanical thermal expression (MTE) using a compression-permeability cell. It was revealed that at lower temperatures the predominant dewatering mechanism is mechanical dewatering, referred to as "consolidation" by the authors.
However, thermal dewatering plays a more important role at higher temperatures [27]. A dynamic model was developed by Rainey et al. [28] to predict the filtration behavior of bagasse pulp, incorporating steady-state compressibility and permeability parameters obtained from experimental data. Hakovirta et al. [29] employed a method to improve the dewatering efficiency of pulp furnish through the addition of hydrophobic fibers and demonstrated that adding a low percentage of hydrophobic fibers to the pulp furnish could impact freeness and water retention properties, yielding a considerable improvement in dewatering efficiency. A method was used to measure the permeability of fiber mats at different flow rates during the medium density fiberboard manufacturing process using Darcy's law [30]. Lavrykova-Marrain and Ramarao [31] employed two mathematical models, based on conventional cake filtration theory and multiphase flow theory, by applying Darcy's law to describe dewatering of pulp fiber suspensions under varying pressure. A model was also developed to predict the permeability of cellulose fibers in pulp and paper structures based on Kozeny-Carman theory, assuming fibers are either cylindrical or band-shaped in a two-dimensional network [32]. Darcy's law was also applied to predict the weight of CNF-containing paper coatings through filtration theory [33].

The original hypothesis of this study is that in a CNF suspension, water is mostly adsorbed water associated with the cellulose surface, tightly bound to the hydroxyl groups present in the amorphous regions through hydrogen bonding. After mixing wood particles (WPs) with CNF slurry, a large portion of the adsorbed water turns into free water as a result of contact between the cellulose nanofibrils and the WPs, a phenomenon termed here "contact dewatering" and first reported by our research group [1,2]. Upon consolidation, a considerable amount of free water is removed from the wet furnish by pressing (mechanical dewatering) in a very short period of time, and the remaining water in the system can be removed through heating (evaporative dewatering) to produce the final product.

In this study, the dewatering behavior of WP-CNF wet furnish was studied through pressure filtration tests and centrifugation. The effect of wood particle size, and therefore particle specific surface area, on the dewatering properties of wet furnish was investigated. A method based on Darcy's law for volumetric flow through a porous medium was used to determine the permeability coefficients of the wet furnish during the filtration test. Characterization of hard-to-remove (HR) water in the wet furnish was also carried out using high-resolution isothermal thermogravimetric analysis (TGA) to evaluate the thermal dewatering properties of the samples. The results of this study will be helpful in the design of processing equipment for the production of wet-formed CNF-bonded composite panels.

Materials and Methods

Materials

Southern yellow pine wood particles (WP) with an average aspect ratio of 3.3 and average moisture content of 7% were supplied by Georgia-Pacific Thomson Particleboard (Thomson, GA, USA). The CNF was received in the form of a slurry of 3 wt.% cellulose nanofibrils from the University of Maine's Process Development Center, the product of mechanical refining of bleached softwood kraft pulp. The properties of this CNF material are published elsewhere [3].
Polypropylene (PP) granules with an average diameter of 2.5 mm were provided by Channel Prime Alliance Inc. (Des Moines, IA, USA).

Particle size distribution

To investigate the effect of WP size on the dewatering behavior of the wet furnish, particles were separated by size using a Retsch AS 200 laboratory sieve shaker (Retsch®, Haan, Germany). Particles were screened into six size ranges: larger than 2 mm (Group I); 1.4-2 mm (Group II); 1-1.4 mm (Group III); 0.5-1 mm (Group IV); 0.25-0.5 mm (Group V); and dust (Group VI).

The sieved particles were then weighed, and the weight fraction of each particle size range was calculated based on the total weight of the given sample of WPs. Results are presented in Fig. 1. As shown in Fig. 1, WPs with sizes ranging from 0.5 mm to 1.4 mm made up the highest weight fraction, almost 60%, of the entire sample.

Figure 1. (a) WP size distribution and average specific surface area values in a given sample. WPs (b) larger than 2 mm (Group I), (c) 1.4-2 mm (Group II), (d) 1-1.4 mm (Group III), (e) 0.5-1 mm (Group IV), (f) 0.25-0.5 mm (Group V), (g) dust (Group VI).

To determine the average specific surface area of the wood particles in each size group, three samples of wood particles, each about 5 g in weight, were selected from each size range. The average thickness of particles in each sample was calculated by measuring the thicknesses of one hundred particles randomly selected from the given sample. The average length and surface area of each sample were measured from an optical (digital) photograph of the sample processed with the ImageJ image processing software, version 1.49v (National Institutes of Health, USA). Assuming the particles are small cuboids, and given the average values of length, thickness, and surface area, the average specific surface area of particles in a given sample can be approximated using Eq. 1:

$$SS'A = \frac{2\,[S' + (a' + b')\,t']}{w} \qquad (1)$$

where $SS'A$ is the average specific surface area (cm²/g), $S'$ is the average top-view surface area (cm²), $a'$ is the average length (cm), $b'$ is the average width (cm), $t'$ is the average thickness (cm), and $w$ is the sample weight (g). The average width of the particles can be calculated from the average top-view surface area and the average length through Eq. 2:

$$b' = \frac{S'}{a'} \qquad (2)$$

The average values of specific surface area for each particle size group are shown in Fig. 1. The smaller the wood particle size, the higher the specific surface area.
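As a concrete illustration of Eqs. 1 and 2, the short sketch below (in Python, chosen only for illustration) computes the average specific surface area from the averaged sample measurements. The function name and the numbers in the example are hypothetical placeholders, not measurements from this study.

```python
# Sketch of the average specific surface area calculation (Eqs. 1-2),
# assuming cuboid-shaped particles. All inputs are illustrative.

def specific_surface_area(S_top, a_len, t_thk, w_g):
    """Average specific surface area (cm^2/g) of a particle sample.

    S_top -- average top-view surface area (cm^2)
    a_len -- average particle length (cm)
    t_thk -- average particle thickness (cm)
    w_g   -- sample weight (g)
    """
    b_wid = S_top / a_len                                  # Eq. 2: average width
    return 2.0 * (S_top + (a_len + b_wid) * t_thk) / w_g   # Eq. 1

# Example with made-up numbers for a ~5 g sample:
print(specific_surface_area(S_top=12.0, a_len=0.8, t_thk=0.05, w_g=5.0))
```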
Pressure filtration

A pressure filtration test was used to study the dewatering behavior of the wet furnish. To investigate the effect of particle size on the dewatering of the wet furnish, samples of WPs of different sizes from Groups I through VI (excluding Group IV, whose specific surface area was close to that of Group III) were selected and mixed with a CNF slurry at 3 wt.% solids content. The mixing ratio of WPs to CNF was 7:3 based on the dry weights of the constituents. Samples of pure CNF slurries with consistencies of 3 and 10 wt.% were also used to compare the dewatering behavior of pure CNF with that of the WP-CNF mixes; CNF at 10 wt.% was chosen because it had the same solids content as the mix samples. Pressure filtration tests were then carried out on the prepared samples at a pressure of 172 kPa (approx. 25 psi) for 30 minutes using an OFITE® low-pressure bench-mount filter press (OFI Testing Equipment, Inc., Houston, TX, USA). Samples of 100 g of each formulation were loaded into the cylindrical chamber of the device on top of a metal screen and a filter paper. A small digital scale with a glass Erlenmeyer flask on top was placed under the chamber outlet to collect and weigh the removed water (Fig. 2d). The changes in the weight of collected water over time were recorded by a video camera, from which dewatering values were extracted.

Figure 2. Schematic of the filtration model: (a) shortly after the beginning, (b) in the middle, (c) at the end of the filtration experiment; (d) filter press and test setup.

Determination of permeability

Darcy's law for liquid flow through a porous medium was used to determine the permeability of pure CNF and WP-CNF mixtures. A schematic of pressure filtration is illustrated in Fig. 2a-c. According to Darcy's law, the specific volumetric flow rate $\dot{V}$ is related to the pressure drop through the filter cake ($\Delta P$), the permeability of the filter medium ($k$), the viscosity of the fluid ($\mu$), and the cake thickness ($h$):

$$\dot{V} = \frac{d(V/A)}{dt} = \frac{\Delta P\, k}{\mu h} \qquad (3)$$

where $V/A$ represents the volumetric liquid flow per unit area and $t$ is the drainage time. The thickness of the filter cake ($h$) can be obtained from Eq. 4 by balancing the volume of fibers trapped in the filter cake against the volume of fibers that were present in the water that has passed through the membrane at any given time [34]:

$$h = \frac{V \varphi_0}{A \varphi_m} \qquad (4)$$

where $\varphi_0$ and $\varphi_m$ are the volume fractions of fibers in the slurry and in the filter cake, respectively. In the case of wet furnish, "fibers" refers to the sum of cellulose nanofibrils and wood particles. The volume fraction of fibers can be obtained from the solids content of the slurry and the densities of the fibers and water:

$$\varphi = \frac{s \rho_w}{s \rho_w + (1 - s)\rho_f} \qquad (5)$$

where $s$, $\rho_w$, and $\rho_f$ are the solids content of the slurry, the density of water (assumed 1 g/cm³ for simplicity), and the density of the fibers, respectively. Rewriting Eq. 3 using Eq. 4 yields:

$$\left(\frac{V}{A}\right) d\!\left(\frac{V}{A}\right) = \left(\frac{\Delta P k \varphi_m}{\mu \varphi_0}\right) dt \qquad (6)$$

Equation 7 is derived from Eq. 6 by integration and describes the dewatering behavior in terms of the permeability and fiber volume fraction of the filter cake. It shows that the volumetric flow of water per unit area of the filter cake has a square-root relationship with the pressure drop through the filter cake, the permeability of the cake, the volume fraction of fibers, and the dewatering time:

$$\frac{V}{A} = \sqrt{\frac{2 \Delta P k \varphi_m t}{\mu \varphi_0}} \qquad (7)$$

It should be noted that during the dewatering of wet furnish, the permeability of the filter cake changes due to densification and compression of the cake over time. To determine the permeability of wet furnish, Eq. 7 can be rearranged in the form of Eq. 8:

$$\left(\frac{\mu \varphi_0}{2 \Delta P \varphi_m}\right)\left(\frac{V}{A}\right)^2 = k\,t \qquad (8)$$

The volumetric liquid flow $V/A$ can be calculated from the filtrate mass (g) over filtration time (s), the density of water (g/mm³), and the cross-sectional area of the filter cake (mm²), which is roughly equivalent to the cross-sectional area of the cylindrical chamber:

$$\frac{V}{A} = \frac{m_w}{\rho_w A} \qquad (9)$$

where $m_w$ and $A$ are the mass of removed water and the cross-sectional area of the filter cake, respectively. The left-hand side of Eq. 8 for each corresponding volumetric flow can be calculated and plotted versus time. The permeability of the filter cake in each time interval can then be determined by fitting a straight line to the resulting curve over the corresponding interval and taking the slope of the line.
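The fitting procedure just described can be sketched as follows. The filtrate record, fiber density, and solids contents below are hypothetical placeholders, and a least-squares line (numpy.polyfit) stands in for the straight-line fit over a chosen time interval; the paper fits three intervals separately to obtain k1, k2, and k3.

```python
import numpy as np

# Minimal sketch of the permeability estimation via Eqs. 5, 8 and 9.
# Numeric inputs below are illustrative placeholders, not study data.

mu    = 1.0e-3    # water viscosity (Pa.s)
dP    = 172e3     # applied pressure drop (Pa)
A     = 4.6e-3    # chamber / cake cross-sectional area (m^2)
rho_w = 1000.0    # water density (kg/m^3)
rho_f = 1500.0    # assumed fiber density (kg/m^3)

def fiber_volume_fraction(s, rho_f=rho_f, rho_w=rho_w):
    """Eq. 5: fiber volume fraction from solids content s (mass fraction)."""
    return s * rho_w / (s * rho_w + (1.0 - s) * rho_f)

phi0 = fiber_volume_fraction(0.10)   # slurry solids before filtration (assumed)
phim = fiber_volume_fraction(0.35)   # cake solids after filtration (assumed)

# Hypothetical filtrate record: time (s) and cumulative filtrate mass (kg)
t  = np.array([10.0, 30.0, 60.0, 120.0, 200.0])
mw = np.array([0.010, 0.022, 0.034, 0.047, 0.058])

VA  = mw / (rho_w * A)                          # Eq. 9: flow per unit area (m)
lhs = (mu * phi0 / (2.0 * dP * phim)) * VA**2   # left-hand side of Eq. 8

# Eq. 8 says lhs = k * t, so the slope of lhs vs. t over a chosen
# interval is the permeability k (m^2) for that stage of filtration.
k, intercept = np.polyfit(t, lhs, 1)
print(f"fitted permeability k = {k:.3e} m^2")
```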
Centrifugation

The water retention value (WRV) of wet furnish gives a useful measure of the performance of fibers and particles relative to the dewatering behavior of the furnish. Samples of WP-CNF mixtures, along with pure CNF at 3 wt.% and 10 wt.%, were prepared using the same preparation method as for the pressure filtration experiment. Samples of WP Group I were excluded from the experiment owing to insufficient material. The WRVs of the samples were determined through centrifugation at 2200 rpm for 15 minutes using a CLAY ADAMS DYNAC® II tabletop centrifuge (Becton, Dickinson and Company, Franklin Lakes, NJ, USA). To separate the water removed during centrifugation from the wet furnish and collect the leftover furnish, a Pierce™ Protein Concentrator PES tube was used. A round piece of filter paper, cut from the filter paper used for the pressure filtration test, was placed underneath the samples prior to centrifugation to control the liquid flow and avoid clogging the tube membrane. After centrifugation, the leftover furnish was removed and weighed to determine the weight of the centrifuged furnish. Samples were then dried in an oven at 105 °C until they reached constant weight. The water retention values were calculated using Eq. 10:

$$\mathrm{WRV}\,(\%) = \frac{W_w - W_d}{W_d} \times 100 \qquad (10)$$

where $W_w$ and $W_d$ are the wet weight of the sample after centrifugation and the oven-dry weight of the sample, respectively.

Hard-to-remove water

Evaporative dewatering is another important mechanism of water removal occurring during the hot-pressing process. To investigate the influence of particle size on the evaporative dewatering of the wet furnish, high-resolution isothermal thermogravimetric analysis (TGA) was used, based on the method first proposed by Park et al. [35] for measuring what was termed "hard-to-remove (HR) water" in softwood bleached kraft pulp fibers. The HR water content is defined as the water content in the fibers at the beginning of the transition between the constant-rate zone and the falling-rate zone (between Part (2) and Part (3) in Fig. 3) of the evaporative change in mass (1st derivative curve). It can be calculated by dividing the mass of water in the fiber at the starting point of the transition stage (Point (a) in Fig. 3) by the mass of the dried fiber (Point (b) in Fig. 3), i.e., y divided by x in Fig. 3. To find the beginning of the transition stage, the starting point on the rate of change of the evaporative mass change (the 2nd derivative curve) is first located. Then the corresponding weight of water (the value of "y") at Point (a) is read from the TG curve.
The HR water value is then calculated by dividing the obtained "y" value by the dry weight of the sample (the value of x).

Figure 3. Representation of the drying response during an isothermal heating protocol, used to define hard-to-remove water.

Samples of WPs from Groups III, IV, V, and VI were selected and mixed with a CNF slurry at 3 wt.% solids content. The mixing ratio of WPs to CNF was 7:3 on a dry-weight basis. Samples of pure CNF and pure WP slurries with the same solids content (3 wt.%) were also prepared. Samples were tested using a TGA (model Q500, TA Instruments, New Castle, DE, USA) with a heating regime of ramping up (100 °C/min) to 120 °C and then holding isothermally at 120 °C for 30 minutes to ensure that the samples were fully dried. WPs larger than 1.4 mm (i.e., Groups I and II) were excluded from the experiment due to the difficulty of filling the small TGA pans with relatively large WPs. To compare the HR water content of pure CNF with that of larger cellulosic fibers, samples of pure (3 wt.% consistency) softwood bleached kraft pulp were also tested. To investigate the effect of using a nonpolar, hydrophobic material instead of WP in the mix formulation, samples of 70% PP granules mixed with 30% CNF at 3 wt.% (dry basis) were made and tested as well. The initial mass of each sample was about 100 mg.
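A rough sketch of this HR-water calculation on an exported TGA trace is given below. The onset criterion (the first significant excursion of the 2nd derivative above its early-time noise level) is one plausible reading of the method, not the authors' exact procedure, and the synthetic trace is made up purely for demonstration.

```python
import numpy as np

def hr_water(time_s, mass_mg, onset_factor=5.0):
    """Estimate hard-to-remove (HR) water (g water / g dry fiber) from an
    isothermal TGA trace, per the derivative method described above."""
    dm  = np.gradient(mass_mg, time_s)   # 1st derivative: drying rate
    d2m = np.gradient(dm, time_s)        # 2nd derivative

    # The constant-rate zone has d2m ~ 0; flag the first point where the
    # 2nd derivative departs from its early-time noise level. The noise
    # window, factor, and floor are assumptions, not the authors' criterion.
    thresh = max(onset_factor * np.std(d2m[:20]), 1e-6)
    onset = int(np.argmax(np.abs(d2m) > thresh))

    x = mass_mg[-1]          # oven-dry mass at the end of the hold ("x")
    y = mass_mg[onset] - x   # water remaining at the transition onset ("y")
    return y / x

# Synthetic trace: constant-rate drying, then an exponential falling-rate
# tail toward a 20 mg dry mass (made-up numbers).
t = np.linspace(0.0, 1800.0, 601)
m = np.where(t < 750.0, 100.0 - 0.08 * t,
             20.0 + 20.0 * np.exp(-(t - 750.0) / 150.0))
print(f"HR water ~ {hr_water(t, m):.2f} g/g")   # ~1.0 for this trace
```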
Statistical analysis

The experimental data were statistically analyzed using IBM SPSS Statistics Version 25 (IBM Corp., Armonk, NY, USA). A one-way ANOVA test was carried out to statistically compare the HR water properties as well as the WRV results. Duncan's multiple range test (DMRT) was used to evaluate the group means. Comparisons were drawn at a 95% confidence level.

Results and Discussion

Pressure filtration

Pressure filtration tests revealed that the dewatering rate generally decreases over time, regardless of the material formulation. Samples of pure CNF at 10 wt.% exhibited considerably lower amounts and rates of water removal within the same period of time compared to WP-CNF mixes with the same solids content (Fig. 4a,b). The same held for CNF at 3 wt.% within the first 200 seconds of filtration, during which most of the water removal occurred in the other formulations and their dewatering rates started to level off. The dewatering rate of CNF 3 wt.%, however, continued to decrease until almost 20 minutes after the experiment started. This may support the original hypothesis that most of the water in a WP-CNF mix is free water, owing to contact dewatering, and can be easily removed from the system, whereas in pure CNF suspensions adsorbed water predominates and is harder to drain. The higher level of water removal at the end of the test for CNF 3 wt.% may be related to its consistency, which was lower than that of all other formulations.

Figure 4. Average (a) water removal and (b) dewatering rate over filtration time for various material formulations.

Among the WP-CNF samples, those with smaller particle sizes (Groups V and VI) generally exhibited the highest levels of water removal during the filtration experiments (Fig. 4a). This can be attributed to the smaller size, and thus higher specific surface area, which resulted in higher levels of contact dewatering. The lowest level of dewatering (Fig. 4a) and the smallest change in the rate of dewatering (Fig. 4b) occurred throughout the filtration of the WP with the largest particle size and smallest specific surface area (Group I). This can likewise be explained by lower levels of contact dewatering in particles with smaller specific surface area. WP-CNF samples of Groups V and VI exhibited a small amount of drainage even before any pressure was applied. As shown in Fig. 4b, the initial increases in the dewatering rates of these two formulations within the first 10 seconds of filtration are attributable to pressure adjustments at the beginning of the experiments.

Permeability

Permeability values of the samples were determined using Eq. 8. The values of $V/A$ at each time were calculated through Eq. 9 by inserting the corresponding filtrate mass, the density of water (1000 kg/m³), and the cross-sectional area of the cylindrical chamber (4.6 × 10−3 m²). The obtained volumetric flow values, along with the pressure (172 kPa) and the viscosity of water (10−3 Pa.s), were then inserted into Eq. 8. The initial volume fraction of fibers (ϕ0) and the volume fraction of fibers at the end of the experiment (ϕm) were calculated through Eq. 5 by measuring the solids content of the furnish before and after each filtration test.

Permeability values were obtained by plotting the left-hand side of Eq. 8 over time and fitting a line to the resulting curve over certain time intervals. For nearly all the formulations, the resulting curves showed three regions with significantly different slopes (at the beginning, before reaching the plateau, and at the plateau), so the permeability values for each formulation were determined over these three regions. The obtained k1, k2, and k3 values correspond, respectively, to the permeability of the wet furnish at the beginning of filtration, before the point at which the dewatering rate started to level off, and at the level where no further changes were seen in the dewatering rate. The obtained permeability values for each formulation are presented in Table 1. In almost all cases, the permeability decreases as filtration proceeds. The reduction in the permeability coefficient is more significant in the WP mixtures with smaller particle sizes (Groups V and VI). This can be attributed to the higher compaction and densification of smaller particles upon dewatering, which resulted in lower porosity in these materials.

Table 1. Average values of permeability over the three regions, and instantaneous dewatering.

Our laboratory observations and the pressure filtration results indicated that contact dewatering starts almost instantaneously after CNF particles come into contact with wood particles. To better understand how much water was removed instantaneously at the beginning of filtration, the instantaneous dewatering value for each formulation was obtained by plotting the logarithm of filtrate mass versus the logarithm of time and fitting a straight line to the resulting curve. The intercept of the regression line yields the logarithm of the instantaneous dewatering. As presented in Table 1, CNF 10 wt.% has a significantly lower instantaneous dewatering value than the WP-CNF mixtures. The amount of water immediately removed at the beginning of filtration is considerably lower still for CNF 3 wt.% compared with the mixes. This clearly shows that, in general, adding WPs to CNF aids dewatering.
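This log-log regression is straightforward to reproduce; a minimal sketch with placeholder data follows. As described above, the intercept at log t = 0 (i.e., t = 1 s) is taken as the logarithm of the instantaneously removed mass.

```python
import numpy as np

# Sketch of the instantaneous-dewatering estimate: fit a line to
# log(filtrate mass) vs. log(time); 10**intercept is the mass removed
# essentially instantaneously. The data below are placeholders.

t  = np.array([1.0, 5.0, 10.0, 30.0, 60.0, 120.0])   # time (s)
mw = np.array([8.0, 13.0, 16.0, 22.0, 27.0, 33.0])   # filtrate mass (g)

slope, intercept = np.polyfit(np.log10(t), np.log10(mw), 1)
print(f"instantaneous dewatering ~ {10**intercept:.1f} g")
```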
For comparison, the instantaneous dewatering of the 3 wt.% CNF increased by 100% when the largest wood particles were added to the system, and by 220% when Group III wood particles were mixed with the CNF.

Water retention value

Water retention values of the wet furnishes are shown in Fig. 5. The results showed that the final amount of water removed is almost the same for CNF 10 wt.% and the WP-CNF mixes with the same solids content. The higher water retention of CNF 3 wt.% shows that the percentage of water remaining in the sample after centrifugation, at the same time and speed, is much higher than for the other formulations. As the WRV test only measures the final amount of removed water, it cannot capture the change in the rate of dewatering unless tests are run for very short periods of time.

Figure 5. Average water retention values. Common letters over bars indicate no significant difference at the 95% confidence level.

Hard-to-remove water

Results of the HR water measurements are shown in Fig. 6. The HR water values of the neat CNF samples were significantly higher than those of the neat pulp and neat WP slurries with the same consistency. This can be interpreted as a higher amount of adsorbed water in the structure of the CNF 3 wt.% slurry compared to the pulp 3 wt.% and WP 3 wt.% suspensions, a result of the much higher surface area and higher level of bound water in the fibrillar structure of the CNF. Moreover, WPs contain lignin, which is presumed to be less hydrophilic than the neat CNF and pulp samples. Among the mixes, the PP-CNF samples showed the lowest levels of HR water, attributable to the hydrophobicity and non-polarity of the PP particles: most of the water in the system after mixing can be easily evaporated and can be considered free water. No significant changes were observed among the HR water values of the mixtures of CNF with WPs of different sizes. This can be explained by considering the role of permeability on the one hand and the effect of particle size on contact dewatering on the other. Smaller wood particles, with their higher specific surface areas, were expected to lead to higher amounts of contact dewatering; however, larger particles allow easier evaporation owing to higher permeability. These two factors may counteract each other, leading to no considerable difference in HR values.

Figure 6. HR water values of (a) neat samples and (b) mixed samples. Common letters over bars indicate no significant difference at the 95% confidence level.

In the work by Park et al. [35], HR water content was measured in pulp fibers by determining the onset of the transition between the constant-rate and falling-rate zones through 2nd derivatives. The values found for softwood bleached kraft pulps were in the same range as our results, i.e., between 2 and 4 g/g; however, the solids content used in that study was not clearly stated. In another work, Sen et al. [36] used a different method to calculate the HR water of pulp fibers, integrating the area above the 1st derivative curve in the constant- and falling-rate zones, and compared this method with that of Park et al. The authors refined cellulose fibers to liberate microfibrils with sizes ranging from several microns down to hundreds of nanometers. The values obtained for the microfibrillated cellulose were between 4 and 4.5 g/g, again in the same range as our results, although the solids content used in their work was also not clearly stated.
Overall, although the results of the HR water measurements were useful for understanding the evaporative dewatering behavior of the wet furnish, the method did not appear capable of clearly illustrating the effect of particle size on contact dewatering.

Conclusions

Production of composite panels using CNF as an adhesive/binder is accompanied by a considerable level of water removal prior to hot pressing, which impacts pressing efficiency and energy consumption. This study focused on the dewatering behavior of WP-CNF particulate systems in order to understand, and hence control, water removal from wet furnish. It was hypothesized that the size of the WPs, and consequently their specific surface area, affects the level of contact dewatering resulting from contact between cellulose nanofibrils and WPs upon mixing. Pressure filtration tests were carried out to investigate the effect of particle size on the mechanical dewatering of the wet furnish. It was found that, among the WP-CNF mixtures, those with smaller particle sizes generally had higher levels of water removal during the filtration experiments. The lowest level of dewatering and the smallest change in drainage rate occurred during the filtration of the WP with the largest particle size and smallest specific surface area (Group I). Samples of pure CNF at 3 wt.% and 10 wt.% generally exhibited lower rates of water removal compared with the WP-CNF mixes. This may support the original hypothesis that most of the water in a WP-CNF mix is free water, produced by contact dewatering, and can be easily removed from the system, whereas in pure CNF suspensions adsorbed water predominates and is harder to drain. Determination of the permeability coefficients of the wet furnishes showed that, regardless of the material formulation, the permeability of the wet furnish decreases over filtration time. The reduction in the permeability coefficients is more significant in the WP mixtures with smaller particle sizes (Groups V and VI), attributable to the higher compaction and densification of smaller particles upon dewatering, which results in lower porosity.

Water retention values of the wet furnish were measured by centrifugation. The results revealed that the final amount of water removed is almost the same for CNF 10 wt.% and the WP-CNF mixes with the same solids content, indicating that water retention values cannot capture the change in the rate of dewatering and are therefore unable to quantify contact dewatering. Samples of pure CNF 3 wt.% showed significantly higher water retention than the other formulations, meaning that the amount of water remaining in these samples after centrifugation under the same conditions is much higher than in the other formulations.

Characterization of HR water was also carried out to study the influence of particle size on the evaporative dewatering of the wet furnish using high-resolution isothermal thermogravimetric analysis (TGA). It was revealed that the neat CNF samples had higher HR water values than the neat pulp and neat WP suspensions with the same consistency. Samples of CNF mixed with PP showed the lowest levels of HR water, attributed to the hydrophobicity and non-polarity of the PP particles.
Among the samples of CNF mixed with different sizes of WPs, no significant changes in HR water values were observed.

Overall, the study of the dewatering properties of the WP-CNF particulate system via pressure filtration tests was the most effective way to quantify the effect of contact dewatering. Further studies are required to highlight the direct influence of particle surface area on contact dewatering. Furthermore, the effects of other particle characteristics, such as absorptivity, bulk density, compaction, and porosity, need to be clearly examined.

Data Availability

Materials, data and associated protocols are promptly available to readers without undue qualifications in material transfer agreements.

References

1. Tajvidi, M., Gardner, D. J. & Bousfield, D. W. Cellulose Nanomaterials as Binders: Laminate and Particulate Systems. J Renew Mater 4, 365–376 (2016).
2. Amini, E., Tajvidi, M., Gardner, D. J. & Bousfield, D. W. Utilization of Cellulose Nanofibrils as a Binder for Particleboard Manufacture. BioRes 12, 4093–4110, https://doi.org/10.15376/biores.12.2.4093-4110 (2017).
3. Ghasemi, S., Tajvidi, M., Bousfield, D. W., Gardner, D. J. & Gramlich, W. M. Dry-Spun Neat Cellulose Nanofibril Filaments: Influence of Drying Temperature and Nanofibril Structure on Filament Properties. Polymers 9, 1–13 (2017).
4. Tayeb, A. H., Amini, E., Ghasemi, S. & Tajvidi, M. Cellulose Nanomaterials—Binding Properties and Applications: A Review. Molecules 23, 2684, https://doi.org/10.3390/molecules23102684 (2018).
5. Purington, E., Bousfield, D. & Gramlich, W. M. Fluorescent dye adsorption in aqueous suspension to produce tagged cellulose nanofibers for visualization on paper. Cellulose, https://doi.org/10.1007/s10570-019-02439-4 (2019).
6. Desmaisons, J., Gustafsson, E., Dufresne, A. & Bras, J. Hybrid nanopaper of cellulose nanofibrils and PET microfibers with high tear and crumpling resistance. Cellulose 25, 7127–7142, https://doi.org/10.1007/s10570-018-2044-4 (2018).
7. Zolin, L. et al. Flexible cellulose-based electrodes: Towards eco-friendly all-paper batteries. Chem. Eng. Trans. 41, 361–366 (2014).
8. Zhang, X. et al. Solid-state flexible polyaniline/silver cellulose nanofibrils aerogel supercapacitors. J. Power Sources 246, 283–289 (2014).
9. Bhandari, J. et al. Cellulose nanofiber aerogel as a promising biomaterial for customized oral drug delivery. Int. J. Nanomed. 12, 2021–2031 (2017).
10. Gardner, D. J., Oporto, G. S., Mills, R. & Samir, M. A. S. A. Adhesion and Surface Issues in Cellulose and Nanocellulose. J Adhes Sci Technol 22, 545–567 (2008).
11. Kojima, Y. et al. Evaluation of binding effects in wood flour board containing ligno-cellulose nanofibers. Materials 6, 6853–6864 (2014).
12. Kojima, Y. et al. Binding effect of cellulose nanofibers in wood flour board. J. Wood Sci. 59, 396–401 (2013).
13. Leng, W., Hunt, J. F. & Tajvidi, M. Effects of Density, Cellulose Nanofibrils Addition Ratio, Pressing Method, and Particle Size on the Bending Properties of Wet-formed Particleboard. BioRes 12, 4986–5000 (2017).
14. Leng, W., Hunt, J. F. & Tajvidi, M. Screw and Nail Withdrawal Strength and Water Soak Properties of Wet-formed Cellulose Nanofibrils Bonded Particleboard. BioRes 12, 7692–7710 (2017).
15. Diop, C. I. K., Tajvidi, M., Bilodeau, M. A., Bousfield, D. W. & Hunt, J. F. Evaluation of the incorporation of lignocellulose nanofibrils as sustainable adhesive replacement in medium density fiberboards. Ind Crops and Prod 109, 27–36 (2017).
16. Shivyari, N. Y., Tajvidi, M., Bousfield, D. W. & Gardner, D. J. Production and Characterization of Laminates of Paper and Cellulose Nanofibrils. ACS Appl Mater Interfaces 8, 25520–25528 (2016).
17. Ghasemi, S., Tajvidi, M., Bousfield, D. W. & Gardner, D. J. Reinforcement of natural fiber yarns by cellulose nanomaterials: A multi-scale study. Ind Crops and Prod 111, 471–481 (2018).
18. Ghasemi, S., Tajvidi, M., Bousfield, D. W., Gardner, D. J. & Shaler, M. S. Effect of wettability and surface free energy of collection substrates on the structure and morphology of dry-spun cellulose nanofibril filaments. Cellulose 25, 1–13, https://doi.org/10.1007/s10570-018-2029-3 (2018).
19. Paradis, M. A., Genco, J. M., Bousfield, D. W., Hassler, J. C. & Wildfong, V. Determination of drainage resistance coefficients under known shear rate. TAPPI Journal 1, 12–18 (2002).
20. Dimic-Misic, K. et al. The role of MFC/NFC swelling in the rheological behavior and dewatering of high consistency furnishes. Cellulose 20, 2847–2861, https://doi.org/10.1007/s10570-013-0076-3 (2013).
21. Dimic-Misic, K., Puisto, A., Paltakari, J., Alava, M. & Maloney, T. The influence of shear on the dewatering of high consistency nanofibrillated cellulose furnishes. Cellulose 20, 1853–1864, https://doi.org/10.1007/s10570-013-9964-9 (2013).
22. Sim, K., Lee, J., Lee, H. & Youn, H. J. Flocculation behavior of cellulose nanofibrils under different salt conditions and its impact on network strength and dewatering ability. Cellulose 22, 3689–3700, https://doi.org/10.1007/s10570-015-0784-y (2015).
23. Rantanen, J., Dimic-Misic, K., Kuusisto, J. & Maloney, T. C. The effect of micro and nanofibrillated cellulose water uptake on high filler content composite paper properties and furnish dewatering. Cellulose 22, 4003–4015, https://doi.org/10.1007/s10570-015-0777-x (2015).
24. Dimic-Misic, K., Maloney, T., Liu, G. & Gane, P. Micro nanofibrillated cellulose (MNFC) gel dewatering induced at ultralow-shear in presence of added colloidally unstable particles. Cellulose 24, 1463–1481, https://doi.org/10.1007/s10570-016-1181-x (2017).
25. Dimic-Misic, K., Maloney, T. & Gane, P. Effect of fibril length, aspect ratio and surface charge on ultralow shear-induced structuring in micro and nanofibrillated cellulose aqueous suspensions. Cellulose 25, 117–136, https://doi.org/10.1007/s10570-017-1584-3 (2018).
26. Liu, G., Maloney, T., Dimic-Misic, K. & Gane, P. Acid dissociation of surface bound water on cellulose nanofibrils in aqueous micro nanofibrillated cellulose (MNFC) gel revealed by adsorption of calcium carbonate nanoparticles under the application of ultralow shear. Cellulose 24, 3155–3178, https://doi.org/10.1007/s10570-017-1371-1 (2017).
27. Clayton, S. A. et al. Dewatering of Biomaterials by Mechanical Thermal Expression. Dry Technol 24, 819–834 (2006).
28. Rainey, T. J., Doherty, W. O. S., Martinez, D. M., Brown, R. J. & Kelson, N. A. Pressure Filtration of Australian Bagasse Pulp. Transp. Porous Med 86, 737–751 (2011).
29. Hakovirta, M., Aksoy, B., Nichols, O., Farag, R. & Ashurst, W. R. Functionalized Cellulose Fibers for Dewatering and Energy Efficiency Improvement. Dry Technol 32, 1401–1408 (2014).
30. Pettersson, P., Staffan Lundström, T. & Wikström, T. A Method to Measure the Permeability of Dry Fiber Mats. Wood & Fiber Sci. 38(3), 417–426 (2006).
31. Lavrykova-Marrain, N. S. & Ramarao, B. V. Permeability Parameters of Pulp Fibers from Filtration Resistance Data and Their Application to Pulp Dewatering. Ind. Eng. Chem. Res. 52, 3868–3876 (2013).
32. Nilsson, L. & Stenstrom, S. A Study of the Permeability of Pulp and Paper. Int. J. Multiphase Flow 23(1), 131–153 (1997).
33. Richmond, F. Cellulose Nanofibers Use in Coated Paper. Electronic Theses and Dissertations. 2242. http://digitalcommons.library.umaine.edu/etd/2242 (2014).
34. Geankoplis, C. J. Transport Processes and Separation Process Principles (4th Edition) 910–913 (Prentice Hall, 2003).
35. Park, S., Venditti, R. A., Jameel, H. & Pawlak, J. J. Hard to remove water in cellulose fibers characterized by high resolution thermogravimetric analysis - methods development. Cellulose 13, 23–30, https://doi.org/10.1007/s10570-005-9009-0 (2006).
36. Sen, S. K. et al. Cellulose microfibril-water interaction as characterized by isothermal thermogravimetric analysis and scanning electron microscopy. BioRes 7, 4683–4703 (2012).

Acknowledgements

This project was funded by the U.S. Endowment for Forestry and Communities (P3Nano), grant number P3-5. The project was also partially supported by the USDA National Institute of Food and Agriculture, McIntire-Stennis project number #ME041616, through the Maine Agricultural & Forest Experiment Station. Maine Agricultural and Forest Experiment Station Publication Number 3702.

Author information

Affiliations: School of Forest Resources and Advanced Structures and Composites Center, University of Maine, Orono, ME, 04469, USA: Ezatollah (Nima) Amini, Mehdi Tajvidi, Douglas J. Gardner & Stephen M. Shaler. Department of Chemical and Biomedical Engineering, University of Maine, Orono, ME, 04469, USA: Douglas W. Bousfield.

Contributions: The manuscript was written through contributions of all authors. Ezatollah (Nima) Amini and Mehdi Tajvidi conceived the idea, Ezatollah (Nima) Amini carried out the experiments, and Douglas Bousfield, Stephen Shaler and Douglas Gardner contributed to data analysis and discussions. All authors have given approval to the final version of the manuscript.

Corresponding author: Correspondence to Mehdi Tajvidi.

Ethics declarations

Competing Interests
The authors declare no competing interests.

Additional information

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

About this article: Received 06 May 2019; Accepted 26 September 2019; Published 10 October 2019. DOI: https://doi.org/10.1038/s41598-019-51177-x
2019-10-20 05:56:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5790309906005859, "perplexity": 4112.292971165673}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986703625.46/warc/CC-MAIN-20191020053545-20191020081045-00342.warc.gz"}