https://www.physicsforums.com/threads/please-clear-my-doubt.105862/
1. Jan 5, 2006

Hey guys, how do you prove that 'pi' is irrational? I think that it is related to infinite series? Is there any geometrical method?

2. Jan 5, 2006

### Tx

The series for pi is Gregory's series; to prove the irrationality is quite difficult, just google it.

3. Jan 5, 2006

### HallsofIvy

Here's my favorite proof:

Lemma 1: Let c be a positive real number. If there exists a function, f, continuous on [0,c] and positive on (0,c), such that f and its iterated anti-derivatives can be taken to be integer valued at both 0 and c, then c is irrational. ("Can be taken to be": we can always choose the constant of integration so that an anti-derivative has any given value at either 0 or c. Here we require that the values be integers at BOTH 0 and c. I will post the proof separately. I remember seeing it in "Mathematics Magazine" many years ago but don't remember the author.)

Theorem: $\pi$ is irrational. f(x) = sin(x) is continuous for all x and positive on (0, $\pi$). All anti-derivatives can be taken to be $\pm\sin(x)$ or $\pm\cos(x)$, all of which are integer valued (0 or $\pm 1$) at both $x = 0$ and $x = \pi$. Therefore, by lemma 1, $\pi$ is irrational.

One can also use lemma 1 to prove that e is irrational.

Lemma 2: If a is a positive real number, not equal to 1, such that ln(a) is rational, then a itself is irrational.

Proof: First note that ln(1/a) = -ln(a) is rational if and only if ln(a) is, and 1/a is rational if and only if a is, so it is sufficient to prove this for a > 1. (If a < 1, apply the lemma to 1/a.) If a > 1 then ln(a) > 0. Suppose that ln(a) is rational and, contradicting the conclusion, that a is rational: a = m/n reduced to lowest terms. Apply lemma 1 with c = ln(a) = ln(m/n) and $f(x) = ne^x$. Then f(x) is positive and continuous for all x, and so on the required interval. We can take ALL anti-derivatives to be $f(x) = ne^x$ by taking the constant of integration to be 0. f(0) = n, an integer, and $f(c) = f(\ln(m/n)) = ne^{\ln(m/n)} = m$, an integer. Therefore, by lemma 1, c = ln(a) is irrational, contradicting the assumption that it is rational. So a must be irrational.

Now: Theorem: e is irrational. e is a positive real number, not equal to 1 (since $1 = e^0$ and $e^x$ is one-to-one), and ln(e) = 1, a rational number. Therefore, by lemma 2, e is irrational.

4. Jan 5, 2006

### HallsofIvy

Here is the proof of "lemma 1" above. As I said there, I remember reading it in "Mathematics Magazine" many years ago but don't remember the author. It is certainly not original with me! I find it intriguing for two reasons: first, irrationality is a numeric, not function, property, yet this proof depends on calculus methods. Second, it is the "worst" kind of proof by contradiction! Contradicting the conclusion (that c is irrational) leads to two statements (I call them "statement A" and "statement B" below), neither of which seems to have much to do with irrationality, but which contradict one another.

Lemma 1: Let c be a positive real number. If there exists a function f, continuous on [0,c] and positive on (0,c), such that f and all of its anti-derivatives can be taken to be integer valued at 0 and c (by appropriate choice of the constants of integration), then c is irrational.

First, define the set P of all polynomials, p(x), such that p and all of its derivatives are integer valued at 0 and c. Notice "derivatives" rather than "anti-derivatives". That allows us to prove:

Lemma i: If f(x) is the function above and p(x) is any polynomial in P, then $\int_0^c f(x)p(x)dx$ is an integer. To prove this, use repeated integration by parts, repeatedly integrating the f "part" and differentiating the p "part".
Since p is a polynomial and differentiating reduces the degree, the process eventually terminates, giving the integral as a sum of anti-derivatives of f times derivatives of p, all of which are integer valued at 0 and c. A similar argument gives

Lemma ii: The set P is closed under multiplication. Suppose p and q are both in P. Then pq is a polynomial, and pq(0) = p(0)q(0) and pq(c) = p(c)q(c) are products of integers. All derivatives of pq can be computed by repeated application of the product rule: every derivative is a sum of products of derivatives of p times derivatives of q, all integer valued at 0 and c.

Now, suppose c is rational: c = m/n reduced to lowest terms. Let $p_0(x) = m - 2nx$. Clearly $p_0$ is a polynomial. $p_0(0) = m$, an integer, and $p_0(c) = p_0(m/n) = m - 2n(m/n) = -m$, an integer. $p_0'(x) = -2n$, an integer for all x, and all subsequent derivatives are 0. Therefore $p_0$ is in P.

For i any positive integer, let $p_i(x) = \frac{(mx - nx^2)^i}{i!}$. We will prove, by induction, that $p_i$ is in P for all i. If i = 1, $p_1(x) = mx - nx^2 = x(m - nx)$. $p_1(0) = 0$ because of the factor x, and $p_1(c) = p_1(m/n) = 0$ because of the factor m - nx. $p_1'(x) = m - 2nx = p_0(x)$. Since that is in P, we have immediately that all derivatives of $p_1$ are integer valued at 0 and c, and so $p_1$ is in P.

Assume that $p_i$ is in P for some i. $p_{i+1}(x) = \frac{(mx-nx^2)^{i+1}}{(i+1)!} = \frac{x^{i+1}(m-nx)^{i+1}}{(i+1)!}$ is 0 at both x = 0 and x = c = m/n because of those factors. $p_{i+1}'(x) = \frac{(i+1)(mx-nx^2)^i(m-2nx)}{(i+1)!} = \frac{(mx-nx^2)^i}{i!}(m - 2nx) = p_i(x)p_0(x)$. Since both $p_i$ and $p_0$ are in P, and P is closed under multiplication, so is $p_{i+1}'$; all further derivatives are integer valued at 0 and c, and so $p_{i+1}$ is in P.

Since f(x) is continuous on [0,c], it takes on a maximum value there: let M = max f(x) on [0,c]. Further, since f is positive on (0,c), M > 0. Since, for all i, $p_i$ is differentiable on [0,c], it not only takes on a maximum value, but that maximum occurs either at an endpoint (0 or c) or in the interior where $p_i'(x) = 0$. We have already seen that $p_i$ is 0 at 0 and c for all i (and only at x = 0 or c), and that $p_i'(x) = p_{i-1}(x)p_0(x)$, so $p_i'(x) = 0$ in the interior only where $p_0(x) = m - 2nx = 0$, i.e. at x = m/2n = c/2. $p_i(m/2n) = \frac{\left(\frac{m^2}{4n}\right)^i}{i!}$. Since that is positive, it is the maximum value of $p_i$ on [0,c].

Now we can prove:
$$\int_0^c f(x)p_i(x)dx \le \int_0^c M\frac{\left(\frac{m^2}{4n}\right)^i}{i!}dx = Mc\frac{\left(\frac{m^2}{4n}\right)^i}{i!}$$

That is a constant times a constant to the i-th power, divided by i!. As i goes to infinity, the factorial dominates and the limit is 0; we can make the bound as small as we please. Therefore:

Statement A: For some i, $\int_0^c f(x)p_i(x)dx < \frac{1}{2}$.

But since f(x) and all $p_i(x)$ are positive on (0,c) (every $p_i$ is 0 only at 0 and c, and is positive at c/2), $\int_0^c f(x)p_i(x)dx$ is a positive integer by lemma i, and so (Statement B) must be greater than or equal to 1 for all i. That contradicts Statement A, and so the lemma is true.

Last edited by a moderator: Jan 10, 2006

5. Jan 5, 2006

### Tx

Thanks for posting that, I've never seen a proof for 'lemma 1'.
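(An editorial illustration, not part of the original thread.) The factorial decay that drives Statement A can be watched numerically. A minimal SymPy sketch, using the crude stand-in m = 3, n = 1 (i.e. the false hypothesis "pi = 3"); if pi really were m/n, lemma i would force each integral below to be a positive integer, which the decay makes impossible:

```python
import sympy as sp

# Watch the factorial decay behind Statement A.  If pi were m/n, lemma i
# would make each integral a positive integer -- impossible once the
# values fall below 1/2.  Stand-in rational value here: m = 3, n = 1.
x = sp.symbols('x')
m, n = 3, 1
c = sp.Rational(m, n)

for i in range(1, 9):
    p_i = (m*x - n*x**2)**i / sp.factorial(i)
    val = sp.integrate(sp.sin(x) * p_i, (x, 0, c))
    print(i, sp.N(val, 8))   # shrinks toward 0 as i grows
```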
https://www.questarter.com/q/using-longtable-and-multicolumns-having-problems-with-setting-the-width-of-a-column-24_504171.html
# Using Longtable and Multicolumns, having problems with setting the width of a column

by Harombidlo, last updated August 14, 2019 09:23 AM

I have tried many answers to other questions but my problem wasn't solved. I would like my third column to have a certain width, and optimally I would like my first and second columns to have the width of the first cell. I tried the p{} argument but it didn't work. Here is my code:

\documentclass[a4paper,12pt]{article} \usepackage[utf8]{inputenc} \usepackage{multirow} \usepackage[table,xcdraw]{xcolor} \usepackage{longtable} \usepackage[a4paper,left=2cm, top=1cm]{geometry} \begin{document} \begin{longtable}[c]{ccp{6cm}} \hline \multicolumn{3}{c}{\textbf{Product Quality - ISO/IEC 25010}} \\ \hline % % \hline \endfoot % \endlastfoot % \rowcolor[HTML]{C0C0C0} \multicolumn{1}{|c|}{\cellcolor[HTML]{C0C0C0}\textbf{Characteristics}} & \multicolumn{1}{c|}{\cellcolor[HTML]{C0C0C0}\textbf{Sub-Characteristics}} & \multicolumn{1}{c|}{\cellcolor[HTML]{C0C0C0}\textbf{Definition}} \\ \cline{3-3} \multicolumn{1}{|c|}{\cellcolor[HTML]{DFB460}} & \multicolumn{1}{c|}{\cellcolor[HTML]{F0D39A}\textbf{Functional Completeness}} & \multicolumn{1}{l|}{\cellcolor[HTML]{FDF0D8}degree to which the set of functions covers all the specified tasks and user objectives} \\ \multicolumn{1}{|c|}{\cellcolor[HTML]{DFB460}} & \multicolumn{1}{c|}{\cellcolor[HTML]{F0D39A}\textbf{Functional Correctness}} & \multicolumn{1}{l|}{\cellcolor[HTML]{FDF0D8}degree to which a product or system provides the correct results with the needed degree of precision} \\ \multicolumn{1}{|c|}{\multirow{-3}{*}{\cellcolor[HTML]{DFB460}\textbf{Functional Suitability}}} & \multicolumn{1}{c|}{\cellcolor[HTML]{F0D39A}\textbf{Functional Appropriateness}} & \multicolumn{1}{l|}{\cellcolor[HTML]{FDF0D8}degree to which the functions facilitate the accomplishment of specified tasks and objectives} \\ \hline \multicolumn{1}{|c|}{\cellcolor[HTML]{8ED27E}} & \multicolumn{1}{c|}{\cellcolor[HTML]{C4ECBA}\textbf{Time-behaviour}} & \multicolumn{1}{l|}{\cellcolor[HTML]{E1F7DB}Degree to which the response and processing times and throughput rates of a product or system, when performing its functions, meet requirements} \\ \multicolumn{1}{|c|}{\cellcolor[HTML]{8ED27E}} & \multicolumn{1}{c|}{\cellcolor[HTML]{C4ECBA}\textbf{Resource Utilisation}} & \multicolumn{1}{l|}{\cellcolor[HTML]{E1F7DB}degree to which the amounts and types of resources used by a product or system, when performing its functions, meet requirements} \\ \multicolumn{1}{|c|}{\multirow{-3}{*}{\cellcolor[HTML]{8ED27E}\textbf{\begin{tabular}[c]{@{}c@{}}Performance\\ Efficiency\end{tabular}}}} & \multicolumn{1}{c|}{\cellcolor[HTML]{C4ECBA}\textbf{Capacity}} & \multicolumn{1}{l|}{\cellcolor[HTML]{E1F7DB}degree to which the maximum limits of a product or system parameter meet requirements} \\ \hline \multicolumn{1}{|c|}{\cellcolor[HTML]{74BACC}} & \multicolumn{1}{c|}{\cellcolor[HTML]{C0E1EA}\textbf{Co-existence}} & \multicolumn{1}{l|}{\cellcolor[HTML]{DEF5FB}degree to which a product can perform its required functions efficiently while sharing a common environment and resources with other products, without detrimental impact on any other Product} \\ \multicolumn{1}{|c|}{\multirow{-2}{*}{\cellcolor[HTML]{74BACC}\textbf{Compatibility}}} & \multicolumn{1}{c|}{\cellcolor[HTML]{C0E1EA}\textbf{Interoperability}} & \multicolumn{1}{l|}{\cellcolor[HTML]{DEF5FB}degree to which two or more systems, products or components can exchange information and use the
information that has been exchanged} \\ \hline \multicolumn{1}{|c|}{\cellcolor[HTML]{A987CC}} & \multicolumn{1}{c|}{\cellcolor[HTML]{D7C3EC}\textbf{Appropriateness recognisability}} & \multicolumn{1}{l|}{\cellcolor[HTML]{F3E7FF}degree to which users can recognise whether a product or system is appropriate for their needs} \\ \multicolumn{1}{|c|}{\cellcolor[HTML]{A987CC}} & \multicolumn{1}{c|}{\cellcolor[HTML]{D7C3EC}\textbf{Learnability}} & \multicolumn{1}{l|}{\cellcolor[HTML]{F3E7FF}degree to which a product or system can be used by specified users to achieve specified goals of learning to use the product or system with effectiveness, efficiency, freedom from risk and satisfaction in a specified context of use} \\ \multicolumn{1}{|c|}{\cellcolor[HTML]{A987CC}} & \multicolumn{1}{c|}{\cellcolor[HTML]{D7C3EC}\textbf{User error protection}} & \multicolumn{1}{l|}{\cellcolor[HTML]{F3E7FF}degree to which a system protects users making errors \textbackslash{}TODO vs OLD {[}degree to which a product or system protects users against making errors{]}} \\ \multicolumn{1}{|c|}{\cellcolor[HTML]{A987CC}} & \multicolumn{1}{c|}{\cellcolor[HTML]{D7C3EC}\textbf{Operability}} & \multicolumn{1}{l|}{\cellcolor[HTML]{F3E7FF}degree to which a product or system has attributes that make it easy to operate and control} \\ \multicolumn{1}{|c|}{\cellcolor[HTML]{A987CC}} & \multicolumn{1}{c|}{\cellcolor[HTML]{D7C3EC}\textbf{User interface aesthetics}} & \multicolumn{1}{l|}{\cellcolor[HTML]{F3E7FF}degree to which a user interface enables pleasing and satisfying interaction for the user} \\ \multicolumn{1}{|c|}{\multirow{-6}{*}{\cellcolor[HTML]{A987CC}\textbf{Usability}}} & \multicolumn{1}{c|}{\cellcolor[HTML]{D7C3EC}\textbf{Accessibility}} & \multicolumn{1}{l|}{\cellcolor[HTML]{F3E7FF}degree to which a product or system can be used by people with the widest range of characteristics and capabilities to achieve a specified goal in a specified context of use} \\ \hline \multicolumn{1}{|c|}{\cellcolor[HTML]{E8DB76}} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFF7BC}\textbf{Maturity}} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFCE7}degree to which a system, product or component meets needs for reliability under normal operation} \\ \multicolumn{1}{|c|}{\cellcolor[HTML]{E8DB76}} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFF7BC}\textbf{Availability}} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFCE7}degree to which a system, product or component is operational and accessible when required for use} \\ \multicolumn{1}{|c|}{\cellcolor[HTML]{E8DB76}} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFF7BC}\textbf{Fault tolerance}} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFCE7}degree to which a system, product or component operates as intended despite the presence of hardware or software faults} \\ \multicolumn{1}{|c|}{\multirow{-4}{*}{\cellcolor[HTML]{E8DB76}\textbf{Reliability}}} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFF7BC}\textbf{Recoverability}} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFCE7}degree to which, in the event of an interruption or a failure, a product or system can recover the data directly affected and re-establish the desired state of the system} \\ \hline \multicolumn{1}{|c|}{\cellcolor[HTML]{E85151}} & \multicolumn{1}{c|}{\cellcolor[HTML]{E38383}\textbf{Confidentiality}} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFC0C0}degree to which a product or system ensures that data are accessible only to those authorized to have access} \\ \multicolumn{1}{|c|}{\cellcolor[HTML]{E85151}} &
\multicolumn{1}{c|}{\cellcolor[HTML]{E38383}\textbf{Integrity}} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFC0C0}degree to which a system, product or component prevents unauthorized access to, or modification of, computer program or data} \\ \multicolumn{1}{|c|}{\cellcolor[HTML]{E85151}} & \multicolumn{1}{c|}{\cellcolor[HTML]{E38383}\textbf{Non-repudiation}} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFC0C0}degree to which actions or events can be proven to have taken place, so that the events or actions cannot be repudiated later} \\ \multicolumn{1}{|c|}{\cellcolor[HTML]{E85151}} & \multicolumn{1}{c|}{\cellcolor[HTML]{E38383}\textbf{Accountability}} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFC0C0}degree to which the actions of an entity can be traced uniquely to the entity} \\ \multicolumn{1}{|c|}{\multirow{-5}{*}{\cellcolor[HTML]{E85151}\textbf{Security}}} & \multicolumn{1}{c|}{\cellcolor[HTML]{E38383}\textbf{Authenticity}} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFC0C0}degree to which the identity of a subject or resource can be proved to be the one claimed} \\ \hline \multicolumn{1}{|c|}{\cellcolor[HTML]{7FB6B8}{\color[HTML]{000000} }} & \multicolumn{1}{c|}{\cellcolor[HTML]{B3D7D8}{\color[HTML]{000000} \textbf{Modularity}}} & \multicolumn{1}{l|}{\cellcolor[HTML]{E0FAFB}{\color[HTML]{000000} degree to which a system or computer program is composed of discrete components such that a change to one component has minimal impact on other components}} \\ \multicolumn{1}{|c|}{\cellcolor[HTML]{7FB6B8}{\color[HTML]{000000} }} & \multicolumn{1}{c|}{\cellcolor[HTML]{B3D7D8}{\color[HTML]{000000} \textbf{Reusability}}} & \multicolumn{1}{l|}{\cellcolor[HTML]{E0FAFB}{\color[HTML]{000000} degree to which an asset can be used in more than one system, or in building other assets}} \\ \multicolumn{1}{|c|}{\cellcolor[HTML]{7FB6B8}{\color[HTML]{000000} }} & \multicolumn{1}{c|}{\cellcolor[HTML]{B3D7D8}{\color[HTML]{000000} \textbf{Analysability}}} & \multicolumn{1}{l|}{\cellcolor[HTML]{E0FAFB}{\color[HTML]{000000} degree of effectiveness and efficiency with which it is possible to assess the impact on a product or system of an intended change to one or more of its parts, or to diagnose a product for deficiencies or causes of failures, or to identify parts to be modified}} \\ \multicolumn{1}{|c|}{\cellcolor[HTML]{7FB6B8}{\color[HTML]{000000} }} & \multicolumn{1}{c|}{\cellcolor[HTML]{B3D7D8}{\color[HTML]{000000} \textbf{Modifiability}}} & \multicolumn{1}{l|}{\cellcolor[HTML]{E0FAFB}{\color[HTML]{000000} degree to which a product or system can be effectively modified without introducing defects or degrading existing product quality}} \\ \multicolumn{1}{|c|}{\multirow{-5}{*}{\cellcolor[HTML]{7FB6B8}{\color[HTML]{000000} \textbf{Maintainability}}}} & \multicolumn{1}{c|}{\cellcolor[HTML]{B3D7D8}{\color[HTML]{000000} \textbf{Testability}}} & \multicolumn{1}{l|}{\cellcolor[HTML]{E0FAFB}{\color[HTML]{000000} Degree of effectiveness and efficiency with which test criteria can be established for a system, product or component and tests can be performed to determine whether those criteria have been met}} \\ \hline \multicolumn{1}{|c|}{\cellcolor[HTML]{C0C0C0}} & \multicolumn{1}{c|}{\cellcolor[HTML]{DFDFDF}\textbf{Adaptability}} & \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}degree to which a product or system can effectively and efficiently be adapted for different or evolving hardware, software or other operational or usage environments} \\ \multicolumn{1}{|c|}{\cellcolor[HTML]{C0C0C0}} & 
\multicolumn{1}{c|}{\cellcolor[HTML]{DFDFDF}\textbf{Installability}} & \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}degree of effectiveness and efficiency with which a product or system can be successfully installed and/or uninstalled in a specified environment} \\ \multicolumn{1}{|c|}{\multirow{-3}{*}{\cellcolor[HTML]{C0C0C0}\textbf{Portability}}} & \multicolumn{1}{c|}{\cellcolor[HTML]{DFDFDF}\textbf{Replaceability}} & \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}degree to which a product can replace another specified software product for the same purpose in the same environment} \\ \hline \multicolumn{3}{c}{\textbf{Quality in Use - ISO/IEC 25010}} \\ \hline \rowcolor[HTML]{DFB460} \multicolumn{2}{|c|}{\cellcolor[HTML]{DFB460}\textbf{Effectiveness}} & \multicolumn{1}{l|}{\cellcolor[HTML]{FDF0D8}accuracy and completeness with which users achieve specified goals} \\ \hline \rowcolor[HTML]{8ED27E} \multicolumn{2}{|c|}{\cellcolor[HTML]{8ED27E}\textbf{Efficiency}} & \multicolumn{1}{l|}{\cellcolor[HTML]{E1F7DB}resources expended in relation to the accuracy and completeness with which users achieve goals} \\ \hline \multicolumn{1}{|c|}{\cellcolor[HTML]{74BACC}} & \multicolumn{1}{c|}{\cellcolor[HTML]{C0E1EA}\textbf{Usefulness}} & \multicolumn{1}{l|}{\cellcolor[HTML]{DEF5FB}degree to which a user is satisfied with their perceived achievement of pragmatic goals, including the results of use and the consequences of use} \\ \multicolumn{1}{|c|}{\cellcolor[HTML]{74BACC}} & \multicolumn{1}{c|}{\cellcolor[HTML]{C0E1EA}\textbf{Trust}} & \multicolumn{1}{l|}{\cellcolor[HTML]{DEF5FB}degree to which a user or other stakeholder has confidence that a product or system will behave as intended} \\ \multicolumn{1}{|c|}{\cellcolor[HTML]{74BACC}} & \multicolumn{1}{c|}{\cellcolor[HTML]{C0E1EA}\textbf{Pleasure}} & \multicolumn{1}{l|}{\cellcolor[HTML]{DEF5FB}degree to which a user obtains pleasure from fulfilling their personal needs} \\ \multicolumn{1}{|c|}{\multirow{-4}{*}{\cellcolor[HTML]{74BACC}\textbf{Satisfaction}}} & \multicolumn{1}{c|}{\cellcolor[HTML]{C0E1EA}\textbf{Comfort}} & \multicolumn{1}{l|}{\cellcolor[HTML]{DEF5FB}degree to which the user is satisfied with physical comfort} \\ \hline \multicolumn{1}{|c|}{\cellcolor[HTML]{A987CC}} & \multicolumn{1}{c|}{\cellcolor[HTML]{D7C3EC}\textbf{Health and safety risk mitigation}} & \multicolumn{1}{l|}{\cellcolor[HTML]{F3E7FF}degree to which a product or system mitigates the potential risk to people in the intended contexts of use} \\ \multicolumn{1}{|c|}{\cellcolor[HTML]{A987CC}} & \multicolumn{1}{c|}{\cellcolor[HTML]{D7C3EC}\textbf{Economic risk mitigation}} & \multicolumn{1}{l|}{\cellcolor[HTML]{F3E7FF}degree to which a product or system mitigates the potential risk to financial status, efficient operation, commercial property, reputation or other resources in the intended contexts of use} \\ \multicolumn{1}{|c|}{\multirow{-3}{*}{\cellcolor[HTML]{A987CC}\textbf{Freedom from risk}}} & \multicolumn{1}{c|}{\cellcolor[HTML]{D7C3EC}\textbf{Environmental risk mitigation}} & \multicolumn{1}{l|}{\cellcolor[HTML]{F3E7FF}degree to which a product or system mitigates the potential risk to property or the environment in the intended contexts of use} \\ \hline \multicolumn{1}{|c|}{\cellcolor[HTML]{E8DB76}} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFF7BC}\textbf{Context completeness}} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFCE7}degree to which a product can be used with effectiveness, efficiency, freedom from risk and satisfaction in all specified contexts of use} \\
\multicolumn{1}{|c|}{\multirow{-2}{*}{\cellcolor[HTML]{E8DB76}\textbf{Context coverage}}} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFF7BC}\textbf{Flexibility}} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFCE7}degree to which a product or system can be used with effectiveness, efficiency, freedom from risk and satisfaction in contexts beyond those initially specified in the requirements} \\ \hline \textbf{} & \textbf{} & \\ \hline \caption{ISO Standards Product Quality and Quality in Use from the ISO/IEC 25010} \label{tab:ISO}\\ \end{longtable} \end{document}
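(Editorial note, not part of the original question.) A likely reason the p{6cm} in the longtable preamble has no visible effect: every body cell is wrapped in \multicolumn{1}{c|}{...} or \multicolumn{1}{l|}{...}, and the column type given to \multicolumn replaces the one declared in the preamble for that cell. Carrying the width into the \multicolumn spec of each third-column cell should restore it; a minimal, untested sketch:

```latex
% hypothetical fix: repeat the fixed width inside each third-column cell
\multicolumn{1}{p{6cm}|}{\cellcolor[HTML]{FDF0D8}degree to which ...} \\
```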
https://masurp.github.io/specr/reference/plot_samplesizes.html
This function plots a histogram of sample sizes per specification. It can be added to the overall specification curve plot (see vignettes).

plot_samplesizes(df, var = .data$estimate, desc = FALSE)

## Arguments

- df: a data frame resulting from run_specs().
- var: which variable should be evaluated? Defaults to estimate (the effect sizes computed by run_specs()).
- desc: logical value indicating whether the curve should be arranged in descending order. Defaults to FALSE.

## Value

A ggplot object.

## Examples

```r
# load additional library
library(ggplot2)  # for further customization of the plots

# run specification curve analysis
results <- run_specs(df = example_data,
                     y = c("y1", "y2"),
                     x = c("x1", "x2"),
                     model = c("lm"),
                     controls = c("c1", "c2"),
                     subsets = list(group1 = unique(example_data$group1),
                                    group2 = unique(example_data$group2)))

# plot ranked bar chart of sample sizes
plot_samplesizes(results)

# add a horizontal line for the median sample size
plot_samplesizes(results) +
  geom_hline(yintercept = median(results$fit_nobs),
             color = "darkgrey",
             linetype = "dashed") +
  theme_linedraw()
```
https://www.doubtnut.com/question-answer-physics/two-capillary-tubes-of-same-diameter-are-put-vertically-one-each-in-two-liquids-whose-relative-densi-14159208
# Two capillary tubes of the same diameter are placed vertically, one in each of two liquids whose relative densities are 0.8 and 0.6 and whose surface tensions are 60 dyne/cm and 50 dyne/cm respectively. The ratio of the heights of the liquids in the two tubes, h_1/h_2, is:

- (a) 10/9
- (b) 3/10
- (c) 10/3
- (d) 9/10
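(Editorial worked step, assuming equal contact angles in both tubes.) Capillary rise is h = 2T cos(theta) / (rho g r), so with the same tube diameter the ratio is h_1/h_2 = (T_1/T_2) x (rho_2/rho_1) = (60/50) x (0.6/0.8) = 9/10, i.e. option (d).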
https://brilliant.org/discussions/thread/combi-natrics-2/
# Combi-natrics-2

How many subsets $$A$$ of $$\{1,2,3,\ldots,100\}$$ have the property that no three elements of $$A$$ sum to $$101?$$

Note by Ayush Rai, 1 year, 4 months ago

Sort by:

There are 2^10 such subsets. Since 1+2+...+10 = 55, there is no subset that sums to 101. Instead of posting each of these problems as individual notes, my suggestion would be for you to post them together in a single note. - 1 year, 4 months ago

I have edited the question. Try it, and also the other two parts of combinatorics. - 1 year, 4 months ago

Good one! I will surely make it in a single note. - 1 year, 4 months ago
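(Editorial aside, not from the thread.) For scaled-down versions of the question a brute-force count is a quick way to build intuition; the set {1,...,100} itself is far beyond this approach. A sketch for {1,...,n} with target sum n+1:

```python
from itertools import combinations

def count_good_subsets(n):
    """Count subsets of {1..n} in which no three elements sum to n + 1."""
    elems = range(1, n + 1)
    good = 0
    for mask in range(1 << n):                   # enumerate all 2^n subsets
        subset = [e for i, e in enumerate(elems) if mask >> i & 1]
        if all(a + b + c != n + 1 for a, b, c in combinations(subset, 3)):
            good += 1
    return good

for n in range(1, 13):
    print(n, count_good_subsets(n))
```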
http://cfdblogvienna.blogspot.com/p/computational-fluid-dynamics-basic-d-2.html
# 1-D viscous, steady Burgers' equation

In this section, exact and numerical solutions of a generalized form of the 1-D steady, viscous Burgers' equation are presented: $$(bu - c) u_x = \nu u_{xx},$$ where $b$ and $c$ are constants, $\nu$ is the kinematic viscosity, and the subscripts denote partial derivatives.

# Exact solution

The exact solution of the steady Burgers' equation can be written as: $$u = \frac{c}{b} \left[1-\tanh \frac{c(x - x_0)}{2\nu} \right] \ , \ c^2 + b^2 \neq 0.$$ The equation is solved on the interval $[0,1]$ with the following parameters: $$\nu = 0.01 \ , \ b = 1 \ , \ c = 0.5 \ , \ x_0 = 0.5.$$

# Numerical solution

Burgers' equation is to be solved with Dirichlet boundary conditions on $u$. For the discretisation, the three-point centered difference scheme is employed on a uniform grid: $$(u_i - 0.5)\frac{u_{i+1} - u_{i-1}}{2 \Delta x} = \nu \frac{u_{i+1} - 2u_i + u_{i-1}}{\Delta x^2}.$$ The equation can then be written as: $$F_i = (u_i - 0.5)\frac{u_{i+1} - u_{i-1}}{2 \Delta x} - \nu \frac{u_{i+1} - 2u_i + u_{i-1}}{\Delta x^2} = 0.$$ The boundary values of $u$ are taken from the exact solution described above, and the system of equations is solved using Newton's method, which iteratively solves: $$\Delta \boldsymbol{u}^{k+1} = \boldsymbol{u}^{k+1} - \boldsymbol{u}^k = - \boldsymbol{J}^{-1}(\boldsymbol{u}^k) \boldsymbol{F}(\boldsymbol{u}^k),$$ where $\boldsymbol{u}^k$ is the vector of solution values at the $k$-th iteration, $\boldsymbol{F}$ contains all the $F_i$ values, and $\boldsymbol{J}$ is the Jacobian matrix. The numerical results, together with the corresponding exact solution, are plotted in Fig. d.4.

Fig. d.4: Exact vs. numerical solution for the 1-D steady Burgers' equation.

# Convergence: Newton's method vs. numerical solution

It is a common mistake to mix up the convergence of the Newton iteration and the convergence of the numerical solution to the exact solution. When the Newton iteration converges (e.g. to a residual tolerance of $10^{-8}$), it does not mean that the numerical solution error shrinks to the same level: the numerical solution error also depends on the grid size. This is illustrated in Fig. d.5. Newton's method (red curve) converges steadily to the prescribed tolerance ($10^{-8}$), but the numerical solution error (blue curve) saturates at a much larger value ($6 \times 10^{-3}$) for a grid of $100$ points.

Fig. d.5: Newton's method vs. numerical solution convergence.
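A minimal NumPy sketch of the scheme described above (an editorial illustration, not code from the post; the grid size, initial guess, and stopping tolerance are assumptions):

```python
import numpy as np

# Parameters from the post: (b*u - c) u_x = nu * u_xx with b = 1, c = 0.5
nu, b, c, x0 = 0.01, 1.0, 0.5, 0.5

def exact(x):
    return (c / b) * (1.0 - np.tanh(c * (x - x0) / (2.0 * nu)))

N = 101                                    # grid points (assumption)
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]

u = exact(x)                               # initial guess near the solution
u[0], u[-1] = exact(x[0]), exact(x[-1])    # Dirichlet BCs from the exact solution

def residual(u):
    # F_i = (u_i - 0.5)(u_{i+1} - u_{i-1})/(2 dx) - nu (u_{i+1} - 2 u_i + u_{i-1})/dx^2
    return ((u[1:-1] - 0.5) * (u[2:] - u[:-2]) / (2 * dx)
            - nu * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2)

def jacobian(u):
    n = N - 2
    J = np.zeros((n, n))                   # tridiagonal; dense for clarity
    for r in range(n):
        i = r + 1                          # interior grid index
        J[r, r] = (u[i + 1] - u[i - 1]) / (2 * dx) + 2 * nu / dx**2
        if r > 0:
            J[r, r - 1] = -(u[i] - 0.5) / (2 * dx) - nu / dx**2
        if r < n - 1:
            J[r, r + 1] = (u[i] - 0.5) / (2 * dx) - nu / dx**2
    return J

for k in range(50):                        # Newton iteration on the interior unknowns
    F = residual(u)
    if np.max(np.abs(F)) < 1e-8:
        break
    u[1:-1] += np.linalg.solve(jacobian(u), -F)

# The residual meets the tolerance, but the error vs. the exact solution
# saturates at the discretisation error the post discusses.
print("iterations:", k, "max |u - exact|:", np.max(np.abs(u - exact(x))))
```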
https://brilliant.org/problems/presents-for-all/
# Presents for all!

There are $$4$$ children awaiting their presents. $$7$$ elves come one-by-one and each of them gives $$1$$ present to exactly $$1$$ child. Every elf chooses the child randomly. The probability that each child receives at least one present is of the form $\dfrac{A}{B}$ where $$A$$ and $$B$$ are co-prime positive integers. Find the value of $B-A.$

$$\textbf{Assumptions}$$

• Each elf has a different present.
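(Editorial sanity check, not part of the problem page.) The favourable outcomes are the surjections from 7 distinct presents onto 4 children, which inclusion-exclusion counts directly; a short script confirms the reduced fraction:

```python
from math import comb
from fractions import Fraction

children, presents = 4, 7

# inclusion-exclusion over the set of children left empty-handed
onto = sum((-1) ** j * comb(children, j) * (children - j) ** presents
           for j in range(children + 1))

prob = Fraction(onto, children ** presents)   # reduced automatically
print(prob, prob.denominator - prob.numerator)  # 525/1024 -> B - A = 499
```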
http://www.sciforums.com/threads/area-of-a-triangle.90980/#post-2180152
# Area of a triangle

Discussion in 'The Cesspool' started by Vkothii, Feb 26, 2009. Not open for further replies.

1. ### Vkothii (Banned)

Construct a unit circle $x^2+y^2 = 1$ between (0,1) and (1,0). Then construct the hyperbola $x^2-y^2 = 1$ in the same interval. You should have the ++ quadrant of a unit circle and the upper half of a hyperbola, coincident at (1,0). Construct any line from (0,0) to the curve $x^2-y^2 = 1$. Write the formula for the area of the triangle bounded by y=0; by y=tan(theta), where theta < pi/4, subtended by the line from (0,0) to the hyperbola; and by the hyperbolic curve, in terms of x and y. Can you construct a line from (0.5,0) to the same point on the hyperbola as y=tan(theta), which is the upper right vertex of the triangle? How close can theta be to pi/2, and what limits this?

P.S. You may notice that you can't fit $x^2-y^2 = 1$ into (0,1), (1,0); the hyperbola lives 'outside' x=1; you need (0,1), (1,y).

Last edited: Feb 26, 2009

3. ### Vkothii (Banned)

A note about a line 'from' a point: if the point is (0,0), a line is all the points beyond it, so that the line has a single direction towards another point, which the line excludes - a line is a list with 3 elements. The exclusion of the start and endpoints of any line is because the start is a reference that has to be independent so it has a successor, which is the first point in the line; the endpoint is the successor of the last point in the line. (0,0) has to have a successor in any direction. A list for any line starts with p0, then p0' means p0' = p0 + p0'; p0'' = p0' + p0'', etc.; the last point pN in the line has a successor pN'; the list is: {p0,{p0',p0'',... ,pN},pN'} = {p0,{p1 p2 ... pN},pN'}.

If you colour the area outside the circle's ++ quadrant, up to x = 1 and y = 1, it has an area too. So does the area outside the hyperbola, but it extends to infinity, so you need an infinite colour. If you flip the quadrant left-to-right along the line x = 1, which is tangent to both curves, and draw another circle with origin at the focus of the hyperbola, can you find the ratio between it and the unit circle?

5. ### Vkothii (Banned)

Note: ' doesn't imply addition of values; it's a successor function, it generates a position or a place. It says "the successive position of p0 is p0' = (p1)", not "add p0 to p0' and write the result in p1". It generates the position or the next ordinal (zero for the origin, which is the head of the list), not a cardinal value.

7. ### rpenner (Fully Wired, Valued Senior Member)

It's not a triangle if one of the sides is the curve of a hyperbola.

8. ### BenTheMan (Dr. of Physics, Prof. of Love, Valued Senior Member)

Vkothii--- If you post this thread again, you will be banned.
https://lifexsoft.org/index.php/resources/19-texture/radiomic-features
## Conventional Indices

CONV_SUVmin reflects the minimum Standardized Uptake Value (SUV) in the Volume of Interest: $$CONV\_SUVmin=\min_{i}SUV_i$$

## First Order Features - Histogram

Histogram calculation: to build a histogram, it is necessary to determine a bin width (the "bin" parameter). The indices derived from the histogram depend on this bin width parameter. This dependence, similar to that found in texture index calculations, is often overlooked in publications. In LIFEx, with the absolute model, the histogram is built with the number of bins and the bin size entered by the user in the "number of grey levels" and "size of bin" fields of the resampling menu. With the relative model, the histogram is built using only the "number of grey levels" field entered by the user; the minimum and maximum are extracted from each ROI.

## First Order Features - Shape

SHAPE_Sphericity describes how spherical a Volume of Interest is. Sphericity is equal to 1 for a perfect sphere: $$SHAPE\_Sphericity=\frac{\pi^{1/3}\cdot(6V)^{2/3}}{A}$$ where $$V$$ and $$A$$ correspond to the volume and the surface of the Volume of Interest based on the Delaunay triangulation.

## Grey-Level Zone Length Matrix (GLZLM)

The grey-level zone length matrix (GLZLM) provides information on the size of homogeneous zones for each grey level in 3 dimensions. It is also named the Grey Level Size Zone Matrix (GLSZM). From this matrix, 11 texture indices can be computed. Element $$(i,j)$$ of the GLZLM corresponds to the number of homogeneous zones of $$j$$ voxels with intensity $$i$$ in an image and is called $$GLZLM(i,j)$$ thereafter.

## Grey-Level Run Length Matrix (GLRLM)

The grey-level run length matrix (GLRLM) gives the size of homogeneous runs for each grey level. This matrix is computed for the 13 different directions in 3D (4 in 2D), and for each of the 11 texture indices derived from this matrix, the 3D value is the average over the 13 directions (4 in 2D). Element $$(i,j)$$ of the GLRLM corresponds to the number of homogeneous runs of $$j$$ voxels with intensity $$i$$ in an image and is called $$GLRLM(i,j)$$ thereafter.

## Neighborhood Grey-Level Different Matrix (NGLDM)

The neighborhood grey-level different matrix (NGLDM) corresponds to the difference of grey level between one voxel and its 26 neighbours in 3 dimensions (8 in 2D). Three texture indices can be computed from this matrix. An element $$(i,1)$$ of the NGLDM corresponds to the probability of occurrence of level $$i$$, and an element $$(i,2)$$ is equal to: $$NGLDM(i,2)= \sum_{p}\sum_{q} \left\lbrace \begin{array}{ll} |\overline{M}(p,q)-i| & \mbox{if } I(p,q)=i \\ 0 & \mbox{else} \end{array} \right.$$ where $$\overline{M}(p,q)$$ is the average of intensities over the 26 neighbour voxels of voxel $$(p,q)$$.

## Grey Level Co-occurrence Matrix (GLCM)

The grey level co-occurrence matrix (GLCM) [Haralick] takes into account the arrangements of pairs of voxels to calculate textural indices. The GLCM is calculated for 13 different directions in 3D with a $$\delta$$-voxel distance ($$\|\overrightarrow{d}\|$$) relationship between neighboured voxels. The index value is the average of the index over the 13 directions in space (X, Y, Z). Six textural indices can be computed from this matrix.
An entry $$(i,j)$$ of the GLCM for one direction is equal to: $$GLCM_{\Delta x,\Delta y}(i,j)= \frac{1}{Pairs_{ROI}}\sum_{p=1}^{N-\Delta x}\sum_{q=1}^{M-\Delta y} \left\lbrace \begin{array}{ll} 1 & \mbox{if } I(p,q)=i,\ I(p+\Delta x, q+\Delta y)=j \\ & \mbox{and } I(p,q), I(p+\Delta x, q+\Delta y) \in ROI \\ 0 & \mbox{otherwise} \end{array} \right.$$ where $$I(p,q)$$ corresponds to voxel $$(p,q)$$ in an image $$I$$ of size $$N \times M$$. The vector $$\overrightarrow{d}=(\Delta x,\Delta y)$$ covers the 4 directions (D1, D2, D3, D4) in 2D space or the 13 directions (D1, D2, ..., D13) in 3D space, and $$Pairs_{ROI}$$
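(Editorial sketch, not from the LIFEx documentation; the page text above is cut off mid-sentence, but $$Pairs_{ROI}$$ is evidently the normalising count of valid voxel pairs.) A minimal Python illustration of the GLCM entry above for one 2D direction, assuming grey levels have already been resampled to integers in [0, levels):

```python
import numpy as np

def glcm_2d(img, mask, dx, dy, levels):
    """Normalised grey-level co-occurrence counts for one direction (dx, dy).

    Counts pairs of voxels at offset (dx, dy) that both lie inside the ROI,
    then divides by the number of such pairs, matching the formula above.
    `img` holds resampled integer grey levels; `mask` is a boolean ROI.
    """
    glcm = np.zeros((levels, levels))
    rows, cols = img.shape
    pairs = 0
    for p in range(rows):
        for q in range(cols):
            p2, q2 = p + dx, q + dy
            if 0 <= p2 < rows and 0 <= q2 < cols and mask[p, q] and mask[p2, q2]:
                glcm[img[p, q], img[p2, q2]] += 1
                pairs += 1
    return glcm / max(pairs, 1)
```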
https://brilliant.org/discussions/thread/elevators/
# Elevators

Assume that we are in a building with $$N$$ storeys. There is one elevator in the building. Whenever there is no one waiting to use the elevator, the elevator will always go to storey $$x$$ and remain there until someone wishes to use it. Then it will go to the storey of the person who wishes to use it and carry that person to the storey he wishes to go to. Assuming that the people in the building do not visit each other within the building, what value of $$x$$ will result in the least energy wasted?

Note by Jerry Han Jia Tao, 7 months, 3 weeks ago

Sort by:

X = N/2 - 7 months, 2 weeks ago
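(Editorial aside on the lone answer, under the assumptions that pickup floors are uniformly distributed and energy is proportional to distance travelled.) The resting floor should then minimise the expected distance to the pickup floor, i.e. be a median floor, which is about N/2. A quick check:

```python
import numpy as np

N = 10
floors = np.arange(1, N + 1)
# expected empty-elevator travel from resting floor x to a uniform pickup
expected = [np.mean(np.abs(floors - x)) for x in floors]
print(floors[int(np.argmin(expected))])   # 5 for N = 10: a median floor
```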
https://cs.stackexchange.com/questions/71627/find-pairs-of-integers-a-b-in-an-array-such-that-a-b-k-in-linear-time-e
# Find pairs of integers (a, b) in an array such that a = b + k in linear time - elements are not unique

A while ago, I was asked to solve a question similar to this:

We are given an array arr and we would like to find all pairs of items (a, b) where a = b + k. The items are NOT unique and it is also possible to have k = 0.

I know that if items are unique, this problem can be solved in linear time by using a hashmap. However, when items are NOT unique, I think that the problem becomes very different. See this example:

```
arr = [1, 1, 1, 1, 1]
k = 0
```

The expected output is:

```
(1, 1), (1, 1), (1, 1), (1, 1) // For the first element
(1, 1), (1, 1), (1, 1)         // For the second element
(1, 1), (1, 1)                 // For the third element
(1, 1)                         // For the fourth element
```

As is obvious to me, in the worst case (the above example) the output is of size $n \choose 2$, which is $\Theta (n^2)$. How is it possible to have a linear algorithm, when the output size is definitely $\Theta (n^2)$? My interviewer insisted that it is possible to still solve it in linear time, if the correct data structure is used.

If you must write out all of the pairs individually, then the overall problem takes quadratic time, because the running time of this last post-processing step is $O(n^2)$. If you are allowed to represent the output in a different way, then you can simply keep track of distinct pairs and associate them with a counter (see run-length encoding). Thus, you can compute the solution in $O(n)$ time using the following algorithm:

1. Initialize an auxiliary array (or hash map) $Aux$.
2. Perform a first linear scan of the input array $Arr$ and use the auxiliary array to keep track of how many times each of the elements appears. For example, $Aux[2]=3$ indicates that $2$ appears $3$ times in $Arr$.
3. Perform a second linear scan of the input array $Arr$. During the $i$th iteration, $a=Arr[i]$ and you must look for the element $b=a-k$, which appears $c=Aux[b]$ times in $Arr$. If $c \gt 0$, then you will add the pair $(a,b)$ to your solution, associated with the count $c$.

Two special cases must be considered at step 3:

• If the pair already exists in your solution, you can simply increment the existing counter by $c$.
• If $k=0$, you must add the pair $(a,b)$ only if $c \gt 1$. The counter associated with this pair will not be set to (or incremented by) $c$, but, rather, by $c-x-1$, where $x$ stores how many times we have seen $a$ before.

• Let's give your solution a try for the given example [1, 1, 1, 1, 1]. In our Aux array, we have only Aux[1] = 5. At step 3, we are looking for b = 1 - 0; therefore, c = Aux[b] is 5. We add 5 (1, 1) to our solution. At this point, it becomes clear that (for the next steps) to make sure we are not inserting repeated pairs, we have to check the indices. Therefore, the count idea does not apply here. You should know that elements are different, although they all have value 1. – Matin Kh Mar 17 '17 at 16:15
• Mmm, I think you misunderstood the second bulleted point (I am going to edit it, for clarification). If $k=0$, at step 3, iteration 0, you will add $(1,1)$ to your solution. The counter will be $c=5-0-1=4$, which is correct. At iteration 1, you will not add $(1,1)$, but rather increment its counter by $c=5-1-1=3$, which is also correct. At iteration 2, you will increment the counter by $c=5-2-1=2$, and so on ... for a final result of $(1,1)$ and counter equal to $10$.
– Mario Cervera Mar 17 '17 at 16:36 • In any case, you are printing (1, 1) 4 times the first time; 3 times the second time; 2 times the third time; and once for the last iteration. Thus, it is still $n \choose 2$. Please note that I was required to return the pairs, so I had to print them. If the question were how many pairs are there? and we did not have to print them... Then yes; your solution would give us that answer in linear time. – Matin Kh Mar 17 '17 at 20:43 • Well, if writing out all of the pairs individually is a requirement, then, IMHO, the overall problem does not take linear time (in contrast to the interviewer's suggestion): the solution can be computed in $O(n)$ time, but there is a post-processing step that takes $O(n^2)$ time. – Mario Cervera Mar 17 '17 at 23:30
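(Editorial sketch of the accepted approach, not code from the thread.) The two-scan algorithm with run-length output is easy to write down; a minimal Python version, assuming the compressed (pair, count) representation is acceptable:

```python
from collections import Counter

def pair_counts(arr, k):
    """All pairs (a, b) with a = b + k, as (pair, count) entries.

    Two linear scans, with the output in run-length form instead of an
    explicit Theta(n^2) list of pairs, as described in the answer above.
    """
    aux = Counter(arr)          # step 2: multiplicity of every element
    pairs = Counter()
    seen = Counter()            # "x" in the answer: occurrences of a so far
    for a in arr:               # step 3: second linear scan
        b = a - k
        c = aux[b]
        if k == 0:
            c = c - seen[a] - 1     # avoid self-pairs and double counting
        if c > 0:
            pairs[(a, b)] += c
        seen[a] += 1
    return pairs

print(pair_counts([1, 1, 1, 1, 1], 0))   # Counter({(1, 1): 10}) == C(5, 2)
```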
https://math.paperswithcode.com/paper/on-compactness-of-the-bar-partial-neumann
# On compactness of the $\bar{\partial}$-Neumann operator on Hartogs domains

26 Sep 2018 · Jin Muzhi

We show that Property $(P)$ of $\partial\Omega$, compactness of the $\bar{\partial}$-Neumann operator $N_1$, and compactness of the Hankel operator on a smooth bounded pseudoconvex Hartogs domain $\Omega={\{(z, w_1, w_2,\dots, w_n) \in \mathbb{C}^{n+1} \mid\sum_{k=1}^{n} |w_k|^2 < e^{-2\varphi(z)}, z\in\mathit{D}\}}$ are equivalent, where $D$ is a smooth bounded connected open set in $\mathbb{C}$.

## Categories

Complex Variables
https://worldbuilding.stackexchange.com/questions/171203/how-can-space-cop-arrest-the-suspicious-spacecraft-moving-at-sub-luminal-velocit/171210#171210
# How can space cops arrest a suspicious spacecraft moving at sub-luminal velocity?

Set in a distant future: reports of alien and drug trafficking have been on the rise, and there is mounting pressure on Mr User6760, president of the intergalactic interpol, to resign. The traffickers make use of stolen tech to achieve fractional-light-speed travel and stealth to evade the space cops; now Mr User6760 must stop these traffickers or step down if there are still no results by the end of the term. I am wondering what technology could be used by the space cops to capture a suspicious spacecraft traveling at sub-luminal velocity without committing unlawful 2nd or perhaps even 3rd degree murder? No FTL tech, and the arrest must be made before the suspects exit the heliosphere, the boundary between intergalactic free space and the jurisdiction, aka the furthest reach of the law. BTW, we have identified that the stolen tech is actually an antimatter propulsion engine, so try not to sabotage it; we are space cops, not the space mafia!

• Is the question related to what technology Mr User6760, president of intergalactic interpol, should deploy to react in such situations (or in this very particular case)? Or do you want to know how a 'cop on the beat' should react if s/he gets such an order from dispatch? Or what? Mar 13 '20 at 7:24
• Well, if the Galactic Interpol is anything like the actual Interpol, then all they can do is send a Red Notice to the local police authority in the Solar System. (Interpol does not have the power to arrest anybody anywhere; it is an organization intended to facilitate co-operation between the various national police organizations.) Mar 13 '20 at 7:42
• Could you tell us what you see as the difficulties? On Earth we have coast guards which have been doing this for centuries with nothing but fast boats and guns. What do you think would be different in space? Mar 13 '20 at 17:03
• @David42: a car can pull over at the side of a road, a boat can capsize, a spacecraft can... explode is not an option! Mar 13 '20 at 17:15
• You could make 2nd and 3rd degree murder lawful after a "Stop or we kill you" warning Mar 13 '20 at 18:29

# Tugboat drones

Drones, unencumbered by squishy meat bags, can accelerate much harder than your bad guys can. Once they catch up, the drones could try to disable the engines (but that's complicated and might cause explosions), or they could batten themselves onto the hull in great numbers and use their (vastly more powerful, because there's no need for life support to take up space) engines to oppose the motion of the bad guys' ship. Even if they can't flat-out stop the enemy, they can try to turn them (I assume your drones will be more powerful than the RCS thrusters on the target) so their main thruster is useless for acceleration. Then it's a simple matter of the drones dragging your bad guys home, safe and sound. And if need be, you can always just use them as missiles and blow the bad guys up, Dirty Harry style.

# "Stop, or we'll shoot!"

If they don't stop, well, you shoot. If you're lucky, you'll be able to damage vulnerable bits like radiators or life support, which would force the runaways to cut engines and wait for rescue. If you're really lucky, you have some kind of remote cutout for their engines and can just use that. Otherwise your only real option is to get into such range that you can kill the ship with your weapons, and order them to cut engines and prepare for boarding. Use a missile, so you can self-destruct it if they lose their nerve before it's too late.
(If you can’t get into kill range with any weapon, you can’t really catch your fugitives anyway.)

The antimatter engine isn't really a major consideration if the fugitives are trying to run away - space is really darn big, and even if the engine were capable of detonating 20 kilos of antimatter all at once, it wouldn't do much of anything once they get about as far away as the Moon (if that).

• Stop, or we’ll shoot! "Well, fuck off, pig. I just rigged the antimatter engine that we... ummm... borrowed to explode if you try something. In case you dropped out of college, let me tell you what the explosion of 20 kg of antimatter will do to the Solar system. I suggest you go ask Mr User6760, president of intergalactic interpol, before you do the last thing of your life" Mar 13 '20 at 7:27
• 20 kg of antimatter (if fully annihilated with the same amount of matter, which will not happen) is still just 860 MT TNT-equivalent. You do not want it on or near a planet, but in deep interplanetary space it is harmless. The Sun produces roughly a hundred million times that energy every second. Mar 13 '20 at 8:39
• Pretty much. Even a naive sphere-area-based calculation drops all that energy to peanuts very, very fast. You may as well throw said peanuts at the cops - at least you'll survive it and they might have an allergy Mar 13 '20 at 9:14
• @b.Lorenz hell, if they have a way of converting radiation to energy, they can put out some collectors and get a free recharge out of the deal too Mar 13 '20 at 15:50

# There ain't no stealth in space

Add to that the general rules around "hot pursuit", which mean that once you have your target located and you're in pursuit, the jurisdictional boundary no longer applies.

If the enemy has an impenetrable fortress, make sure he stays there - General Callus Tacticus

Of course you could just knock out their engines and walk away. A criminal whose exact location you know might as well be in custody for all practical purposes, except you don't have to feed them. He's in his ship, travelling at speed $v$ on bearing $xyz$ at time $t$, and you know where he is for the foreseeable future, with all the evidence on board. No engines means he can't change course and can't slow down. Monitor and collect at your leisure for court dates.

# Speed is relative

The problem is not that the bad guy is so fast. It is that you are so slow!

1. Borrow similar tech.
2. Fit out your ship the Giant Clam with tech.
3. Catch up to bad guy. Now bad guy is not so fast. And there is nothing around for either of you to bump into.
4. Giant Clam does what Giant Clam does best.
5. Robot gorilla space marines enter bad guy ship, remove bad guys, turn them loose in Clown Car space ship.
6. Bad guys in Clown Car captured on video for other bad guys to laugh at.

what technology can be used by space cop to capture a suspicious spacecraft traveling at sub-luminal velocity without committing unlawful 2nd or perhaps even 3rd degree murder?

If a cop follows the procedures in the exercise of his duty, he cannot be charged with murder. And I hope you are aware that without the threat of resorting to potentially deadly force you won't be able to stop anybody, not even granny crossing outside the zebra crossing. Most likely you will be required to, in order:

1. identify yourself as a police officer
2. demand that the suspects halt
3. once the suspects do not respond, issue another warning with a threat of resorting to a forced stop
4. once they again do not respond, go with force.

Force can mean:

• put obstacles on their path.
Like earthling cops put spikes and nails on the road to stop cars, just put scraps on their trajectory. At the speed they are going, they will be forced to stop or detour.
• active fire: lasers, rockets, whatever you have in your arsenal
• do you have a tractor beam? Use it to pull them back.

If they happen to die in the process, you have followed your rule book; you are good and covered.

• Let me introduce myself again, I'm Mr User6760, president of intergalactic interpol. Do you really suggest that I have to take the "if you want to do something right, you have to do it yourself" course of action? Mar 13 '20 at 7:19

The problem with flying criminals is even worse than with driving criminals. They can literally go in any direction, so unless you can surround them well and truly (something nearly impossible at those speeds), just chasing them isn't going to do much.

Your best bet in this scenario? Get them before they set off. Track them to wherever they undertake their illicit activities, and bust the whole operation at once.

If getting them in transit is your only choice, your success will depend on one thing only: what tech do you have available? If you have tractor beams or EMP then it's a done deal really, but if all your weapons are destructive it gets more complicated. The only way to arrest someone who can literally go anywhere is to make them believe that surrender is their best option. You could threaten them with superior weaponry to get them to stop, but historically that hasn't always worked out for the best. You could do the mafia option and threaten their friends or family (if known), but this might not get you re-elected.

If you have very precise weaponry, you might be able to damage their spaceship minimally, but just enough to render it inoperable. I would start with the life support systems rather than propulsion. They will have the choice of surrendering or suffocating, and human nature will not really choose suffocation. If it is alien nature, though, that's anyone's guess...

• Nitpick: EMP won't work; they will still be travelling at the same speed, just unable to control anything. Mar 13 '20 at 17:25
• There is no hiding in space. Once you know where the fugitive is, you can track him with a telescope and he won't be able to run in an unknown direction, except by moving behind a planet and expending a huge amount of energy to change course (the planet can assist, but not much if you start the maneuver that near). Interpol has multiple telescopes, so hiding behind a planet is difficult - landing on it is a better idea; there you can hide. Mar 14 '20 at 7:13
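The 860 MT figure quoted in the comments above is easy to sanity-check. Here is a quick back-of-the-envelope script (Python, with rounded constants; my own illustration, not anything from the thread):

```python
import math

# Back-of-the-envelope check of the "20 kg of antimatter" comments above.
C = 2.998e8          # speed of light, m/s
MT_TNT = 4.184e15    # joules per megaton of TNT

m_antimatter = 20.0               # kg of antimatter on board
m_total = 2 * m_antimatter        # annihilation consumes an equal mass of matter

energy = m_total * C**2           # E = mc^2, in joules
print(f"Energy released: {energy:.2e} J")
print(f"TNT equivalent:  {energy / MT_TNT:.0f} MT")    # ~860 MT, matching the comment

# Fluence at the Moon's distance, assuming isotropic radiation
r_moon = 3.84e8                   # m
print(f"Fluence at Moon distance: {energy / (4 * math.pi * r_moon**2):.1f} J/m^2")
```

At the Moon's distance the naive sphere-area calculation gives only about 2 J/m², which supports the "peanuts" remark in the comments.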
https://wizedu.com/questions?page=8
##### need to know answers to a, b, and c A. compare and contrast the morphological features...

need to know answers to a, b, and c. A. Compare and contrast the morphological features of a long bone with those of a flat bone. B. What is the significance of osteons in compact bone versus cancellous bone in flat bones? C. Which type of bone is stronger?

In: Anatomy and Physiology

| | Price per Unit | Variable Cost per Unit | Units Sold per Year |
| --- | --- | --- | --- |
| Basic | $700 | $220 | 700 |
| Retest | $1,050 | $580 | 200 |
| Vital | $4,600 | $3,100 | 100 |

Variable costs include the labor costs of the medical technicians at the lab. Fixed costs of $490,000 per year include building and equipment costs and the costs of administration. A basic "unit" is a routine drug test administered. A retest is given if there is concern about the results of the first test, particularly if the test indicates that the athlete has taken drugs that are on the banned drug list. Retests are not done by the laboratory that performed the basic test. A "vital" test is the laboratory's code for a high-profile case. This might be a test of a famous athlete and/or a test that might be challenged in court. The laboratory does extra work and uses expensive expert technicians to ensure the accuracy of vital drug tests. Limitless Labs is subject to a 20 percent tax rate.

a. How much will Limitless Labs earn each year after taxes?

b. Assuming the above sales mix is the same at the break-even point, at what sales revenue does Limitless Labs break even? (Do not round intermediate calculations. Round your final answer to the nearest whole dollar.)

c. At what sales revenue will the company earn $200,000 per year after taxes, assuming the above sales mix? (Do not round intermediate calculations. Round your final answer to the nearest whole dollar.)

d-1. Limitless Labs is considering becoming more specialized in retests and vital cases. Assume the number of retests increased to 500 per year and the number of vital tests increased to 250 per year, while the number of basic tests dropped to 100 per year. With this change in product mix, the company would increase fixed costs to $520,000 per year. What would be the effect of this change in product mix on Limitless Labs's earnings after taxes per year?

d-2. If the laboratory's managers seek to maximize the company's after-tax earnings, would this change be a good idea?

In: Accounting
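Parts (a)-(c) of the Limitless Labs problem above are standard multi-product cost-volume-profit arithmetic. The sketch below (Python; all figures taken from the problem statement, and the solution path is mine rather than an official answer key) shows one way to set it up:

```python
# CVP sketch for the Limitless Labs problem above.
products = {            # name: (price, variable cost, units sold per year)
    "basic":  (700, 220, 700),
    "retest": (1050, 580, 200),
    "vital":  (4600, 3100, 100),
}
fixed_costs, tax_rate = 490_000, 0.20

revenue = sum(p * u for p, v, u in products.values())
contribution = sum((p - v) * u for p, v, u in products.values())

# (a) after-tax earnings
pretax = contribution - fixed_costs
print("after-tax earnings:", pretax * (1 - tax_rate))          # 72,000

# (b) break-even revenue = fixed costs / weighted contribution-margin ratio
cm_ratio = contribution / revenue
print("break-even revenue:", fixed_costs / cm_ratio)           # 980,000

# (c) revenue needed for $200,000 after-tax profit
target_pretax = 200_000 / (1 - tax_rate)
print("target revenue:", (fixed_costs + target_pretax) / cm_ratio)  # 1,480,000
```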
##### In a school district, all sixth grade students take the same standardized test. The superintendent of...

In a school district, all sixth grade students take the same standardized test. The superintendent of the school district takes a random sample of 25 scores from all of the students who took the test. She sees that the mean score is 139 with a standard deviation of 16.5865. The superintendent wants to know if the standard deviation has changed this year. Previously, the population standard deviation was 29. Is there evidence that the standard deviation of test scores has decreased at the α = 0.025 level? Assume the population is normally distributed.

Step 1 of 5: State the null and alternative hypotheses. Round to four decimal places when necessary.
Step 2 of 5: Determine the critical value(s) of the test statistic. If the test is two-tailed, separate the values with a comma. Round your answer to three decimal places.
Step 3 of 5: Determine the value of the test statistic. Round your answer to three decimal places.
Step 4 of 5: Make the decision (reject or fail to reject).

In: Statistics and Probability

##### Recording purchases and sales of inventory LO1, 2 Lowder Company purchased 275 units of inventory on...

Recording purchases and sales of inventory (LO1, 2). Lowder Company purchased 275 units of inventory on account for $5,775. Due to early payment, Lowder received a discount and paid only $5,225. Lowder then sold 150 units for cash at $55 each, purchased an additional 65 units for cash at a cost of $1,430, and then sold 100 more units for cash at $55 each. Lowder uses a perpetual inventory system.

Required:
a. Prepare all journal entries to record Lowder’s purchases and sales assuming the FIFO inventory costing method.
> Exercise 4 additional requirement
b. Prepare the same journal entries as part a, this time using the periodic inventory system with the FIFO cost flow assumption. Compare the similarities and differences between journal entries recorded using the perpetual and periodic inventory systems. You may wish to write the journal entries side by side for ease of comparison.
> Exercise 4 additional requirement
c. Instead of receiving a purchase discount for early payment, Lowder received an allowance of $2 per unit because the inventory delivered had some minor defects. Record the journal entry for the allowance under BOTH the perpetual and the periodic inventory systems.

In: Accounting

##### 1. What are the reasons for monopoly (list the standard reasons for barriers to entry)? 2....

1. What are the reasons for monopoly (list the standard reasons for barriers to entry)?
2. Where do prices come from in a monopoly?
3. How are output and pricing decisions made by a monopolist entrepreneur?
4. How is this different from perfect competition?
5. Draw three graphs of a monopoly, one in each of the three situations a, b, and c, demonstrate how you would choose a price and quantity, and discuss what the monopolist would do in each situation.
6. What happens in the long run to the monopolist in each situation, and what does this say about efficiency in monopoly markets?

In: Economics

##### PYTHON Let n denote an integer entered by the user. Write a program to print n...

PYTHON. Let n denote an integer entered by the user. Write a program to print n multiples of 5 in descending order, with the last number being 5. Print the average of those n multiples.

In: Computer Science
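For the PYTHON exercise above, a minimal solution might look like this (assuming console input, as the problem implies):

```python
# Print n multiples of 5 in descending order (ending at 5) and their average.
n = int(input("Enter an integer: "))

multiples = [5 * i for i in range(n, 0, -1)]   # 5n, 5(n-1), ..., 5
print(*multiples)
print("Average:", sum(multiples) / n)          # equals 5 * (n + 1) / 2
```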
##### Define the following terms with examples (a) Absolute return (b) Nominal rate of return (c) Real...

Define the following terms with examples: (a) Absolute return, (b) Nominal rate of return, (c) Real rate of return.

In: Finance

##### Do question E The following table contains the ACT scores and the GPA (grade point average)...

Do question e. The following table contains the ACT scores and the GPA (grade point average) for eight college students. Grade point average is based on a four-point scale and has been rounded to one digit after the decimal.

| Student | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPA (y) | 2.8 | 3.4 | 3.0 | 3.5 | 3.6 | 3.0 | 2.7 | 3.7 |
| ACT (x) | 21 | 24 | 26 | 27 | 29 | 25 | 25 | 30 |

We want to fit a straight-line model to relate GPA $y$ to the ACT score: $\text{GPA} = \beta_0 + \beta_1\,\text{ACT} + \varepsilon$.

a. Compute $S_{xx}$, $S_{yy}$, $S_{xy}$, $\hat\beta_0$, $\hat\beta_1$, and hence write the equation of the fitted line.
b. Compute the fitted values and residuals for each observation, and verify that the residuals (approximately) sum to zero. (Do not use a computer for parts c, d, e.)
c. Draw a scatter plot of ACT and GPA.
d. On your scatter plot draw the fitted line you have in part (a).
e. On your plot indicate $y_3$, $\hat y_3$, $\hat e_3$, where $\hat e_3$ is the residual for the third observation.

In: Statistics and Probability

##### One hundred teachers attended a seminar on mathematical problem solving. The attitudes of representative sample of...

One hundred teachers attended a seminar on mathematical problem solving. The attitudes of a representative sample of 12 of the teachers were measured before and after the seminar. A positive number for change in attitude indicates that a teacher's attitude toward math became more positive. The twelve change scores are as follows: 4; 8; −1; 1; 0; 4; −3; 2; −1; 5; 4; −2.

B) What is the standard deviation for this sample? (Round your answer to two decimal places.)
C) What is the median change score? (Round your answer to one decimal place.)
D) Find the change score that is 2.2 standard deviations below the mean. (Round your answer to one decimal place.)

In: Statistics and Probability

##### A clothing store is open each day for 12 hours. On average, the store gets 120...

A clothing store is open each day for 12 hours. On average, the store gets 120 customers per day. Round each answer below to the nearest hundredth.
a. Find the probability that the store gets 140 or more customers today.
b. Find the probability that the store gets 15 or more customers in the first hour of business.
c. Find the probability that the store gets 40 or more customers in the first three hours of business.
**Please show me how to put this into a TI-84 calculator.**

In: Statistics and Probability

##### 5. Explain how cell division is different in prokaryotic and eukaryotic cells. Also, compare the DNA...

5. Explain how cell division is different in prokaryotic and eukaryotic cells. Also, compare the DNA of prokaryotic and eukaryotic cells.

In: Biology

##### True or False In 2017, venture capital dollars were concentrated in the following three industries: internet,...

True or False: In 2017, venture capital dollars were concentrated in the following three industries: internet, health care, and business products and services.
Of the eight factors usually considered when valuing the venture, provide four of them along with a sentence of explanation for each factor you provide.
Describe and compare early-stage financing to expansion or development financing. Who are the usual investors in each of the two stages?
Identify and briefly describe, in order, each of the four major stages of the venture capital process.

In: Finance

##### A Company is selling two products A and B. It sells the two products at the...

A company is selling two products, A and B. It sells the two products at prices of AED 190 and AED 230, respectively. The finance department estimated the fixed costs for products A and B as AED 2,400 and AED 2,550, and the variable cost to make a unit of each product as AED 70 and AED 60, respectively.

1. Compute the number of units the company must sell of each product to break even. Compute the total revenue at these break-even points.
2. Compute the number of units sold at which the two products have the same total cost. Compute the company's total revenue for this number of sold units.

If the company used new technology in redesigning product A and the total fixed cost reduced to AED 2,250, the marketing department estimated the new selling price per unit to be AED 240 and that the company would break even if they sell 10 units of product A. Compute the variable cost per unit and the total variable cost of the newly designed product A.

In: Finance

##### 1. Explain how a tariff reduction leads to an increase in the quantity of imports and...
1. Explain how a tariff reduction leads to an increase in the quantity of imports and a decrease in equilibrium price.
2. Explain how trade barriers raise wages in protected industries by reducing average wages economy-wide.
3. You have been placed in charge of trade policy for the U.S. Suppose corn is a crop that is doing well and the export market is beginning to expand. Corn production is considered an infant industry. Corn producers come to you and ask for tariff protection from cheaper corn grown in Canada. What sorts of policies will you enact and why?

In: Accounting

##### Use the following corn futures quotes: Corn 5,000 bushels Contract Month Open High Low Settle Chg...

Use the following corn futures quotes (Corn, 5,000 bushels):

| Contract Month | Open | High | Low | Settle | Chg | Open Int |
| --- | --- | --- | --- | --- | --- | --- |
| Mar | 455.125 | 457.000 | 451.750 | 452.000 | −2.750 | 597,913 |
| May | 467.000 | 468.000 | 463.000 | 463.250 | −2.750 | 137,547 |
| July | 477.000 | 477.500 | 472.500 | 473.000 | −2.000 | 153,164 |
| Sep | 475.000 | 475.500 | 471.750 | 472.250 | −2.000 | 29,258 |

Suppose you buy 18 of the September corn futures contracts at the last price of the day. One month from now, the futures price of this contract is 462.5, and you close out your position. Calculate your dollar profit on this investment. (A negative value should be indicated by a minus sign. Do not round intermediate calculations. Round your answer to 2 decimal places.)

In: Finance
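A quick check for the corn futures problem above (assuming, as is standard for CBOT corn quotes, that prices are in cents per bushel; the problem itself does not state the unit):

```python
# Corn futures P&L: buy 18 Sep contracts at the settle, close a month later.
bushels_per_contract = 5_000
contracts = 18
entry, exit_ = 472.25, 462.5          # quotes in cents per bushel (assumed)

profit = (exit_ - entry) / 100 * bushels_per_contract * contracts  # dollars
print(f"Dollar profit: {profit:,.2f}")   # -8,775.00 (a loss on a long position)
```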
http://encyclopedia.kids.net.au/page/pr/Principal_components_analysis
Encyclopedia > Principal components analysis

Article Content

Principal components analysis

In statistics, principal components analysis (PCA) is a transform used for reducing dimensionality in a dataset while retaining the most important characteristics of that dataset. In signal processing it is called the (discrete) Karhunen–Loève transform; it is also called the Hotelling transform.

The first principal component $\mathbf{w}_1$ of a dataset $\mathbf{x}$ can be defined as

$\mathbf{w}_1 = \arg\max_{\Vert \mathbf{w} \Vert = 1} E\left\{ \left( \mathbf{w}^T \mathbf{x}\right)^2 \right\}$

Given the first $k - 1$ components, the $k$-th component can be found by subtracting the first $k - 1$ principal components from $\mathbf{x}$:

$\mathbf{\hat{x}}_{k - 1} = \mathbf{x} - \sum_{i = 1}^{k - 1} \mathbf{w}_i \mathbf{w}_i^T \mathbf{x}$

and then finding the principal component of this reduced dataset:

$\mathbf{w}_k = \arg\max_{\Vert \mathbf{w} \Vert = 1} E\left\{ \left( \mathbf{w}^T \mathbf{\hat{x}}_{k - 1} \right)^2 \right\}.$

A simpler way to calculate the components $\mathbf{w}_i$ uses the covariance matrix of $\mathbf{x}$, the measurement vector. By finding the eigenvalues and eigenvectors of the covariance matrix, we find that the eigenvectors with the largest eigenvalues correspond to the directions of greatest variance in the dataset. The original measurements are finally projected onto the reduced vector space.

Related (or even more similar than related) is the calculus of empirical orthogonal functions (EOF). Another method of dimension reduction is a self-organizing map.
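A minimal numerical sketch of the covariance-matrix route described above (Python/NumPy; the function and variable names are illustrative, not from the article):

```python
import numpy as np

def pca(X, k):
    """Project rows of X onto its top-k principal components.

    X: (n_samples, n_features) data matrix.
    Returns the (n_samples, k) reduced representation and the components.
    """
    Xc = X - X.mean(axis=0)                 # center the measurements
    cov = np.cov(Xc, rowvar=False)          # covariance matrix of the features
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: the covariance matrix is symmetric
    order = np.argsort(eigvals)[::-1]       # largest eigenvalues first
    W = eigvecs[:, order[:k]]               # top-k eigenvectors as columns
    return Xc @ W, W

# Example: reduce 5-dimensional points to 2 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Z, W = pca(X, 2)
print(Z.shape)   # (100, 2)
```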
https://tongfamily.com/2012/12/23/mineral-miniitx-pc/
Well, the next step of the road is here: we've built a 12-gallon-sized computer that uses an ATX board (P8Z77-V Deluxe) and an overclocked Intel Core i5 3570K with dual eVGA GTX 670s in SLI. It's been running fine (except I can't get the pump to throttle). Here are notes on the first step of building a 5-gallon computer from a mini-ITX chassis. The bill of materials was mainly gotten through Newegg (btw, use the United mall to get double miles when you click through):

• Silverstone SG06-450, 450-watt SFX power supply
• ASUS P8Z77-I Mainboard
• Intel Core i5 3570K Processor
• Maxstor 120GB SSD
• Samsung 2x4GB DDR3-1600 Memory
• Noctua NH-L12 Cooler

Here are the installation notes and order; it isn't super obvious because we are trying to use a big cooler on a tiny board:

• Open up the Silverstone: remove the cover with the four screws in the back and slide it back.
• Remove the 3.5" carrier and the optical drive.
• Put the Noctua backing plate behind the motherboard. Remove the rubber insert in the middle and make sure it lines up with the stock backing plate. You don't remove the stock one.
• The hardest part is the orientation of the cooler, as it is nearly as big as the mini-ITX board. Try as we might, we could not use the 120mm fan in the lower position; it kept hitting the RAM, the voltage modules, or the capacitors. Ugh. The only way was with the 92mm fan on (not as good for cooling). Seems like the two solutions are either to shove the 120mm fan to the side so it fits or to get a slim 120mm fan so it can clear the memory (probably that's the best answer).
• The rest of the install is pretty straightforward. Install the cooler with the heat pipes towards the PCIe slot.

So the big problems to solve are:

• Enough cooling, either by moving the 120mm fan or by using a thin 120mm fan. However, running Prime95 at full bore, the CPU temps get to just 60C running non-overclocked. That's not quite as cool as the big micro-ATX-sized P8Z77-Deluxe with a Noctua NH-14 running at 50C at full bore, but pretty amazing for using just the 92mm fan.
• Overclocking is a little confusing. We got to 4.4GHz with a 43 multiplier running at 103% of base clock (103MHz), with voltage increased to what looks like 1.3V. And the HD 4000 onboard graphics is overclocked 13% to 1.3GHz. But the DDR3-1600 RAM isn't overclocked; some say it can run better with hand tuning, and I need to find a good manual guide.
• Dell Ultrasharp U2412M. It is just $300 and is a very decent monitor with all the fixings. It is an IPS panel, so much brighter than the cheaper TN panels. It is fine for gaming. The ASUS PA248Q is a close second and supports 1:1 mapping, which is great for Xbox and other applications. See Wirecutter.com and FlatPanelsHD.com.
• Dell Ultrasharp U2312HM. It is 23", so not visibly smaller than the 2412M, but it is just $250 and is also an IPS monitor.
http://math.stackexchange.com/questions/270499/proof-using-darbouxs-definition-int-01f-lim-n-to-infty-frac1n-sum-i-1
# Proof using Darboux's Definition: $\int_0^1f=\lim_{n\to \infty}\frac1n\sum_{i=1}^{n}f\left(\frac in\right)$

Let $f:[0,1]\to \mathbb{R}$ be Darboux integrable. I ask for a proof of $$\int_0^1f=\lim_{n\to \infty}\frac1n\sum_{i=1}^{n}f\left(\frac in\right)$$ where the integral on the left-hand side is the Darboux integral of $f$. I know how to prove this when the integral is the Riemann integral or when $f$ is monotone. I also know that the Darboux integral is equivalent to the Riemann integral. But can the above be proven directly without going into Riemann sums?

For completeness here are the definitions: If $\mathcal{P}=\left\{a=x_0<...<x_n=b\right\}$ partitions $[a,b]$ and $f:[a,b]\to \mathbb{R}$ is bounded, we define $$M_i(f)=\sup_{x\in [x_{{i-1}},x_i]}f(x)\text{ and }m_i(f)=\inf_{x\in [x_{{i-1}},x_i]}f(x)$$ The Darboux sums are: $$U_{f,\mathcal{P}}:=\sum\limits_{i=1}^{n}{M_i(f)\left( x_i-x_{i-1} \right)}\text{ and } L_{f,\mathcal{P}}:=\sum\limits_{i=1}^{n}{m_i(f)\left( x_i-x_{i-1} \right)}$$ The Darboux upper and lower integrals are $$\overline{\int\limits_a^b}f:=\inf_{\mathcal{P}}U_{f,\mathcal{P}}\text{ and } \underline{\int\limits_{a}^{b}}f:=\sup_{\mathcal{Q}}L_{f,\mathcal{Q}}$$ If the two coincide, $f$ is integrable and their common value is the Darboux integral of $f$. We also have this criterion: $f$ is integrable iff $\forall \epsilon>0$ there exists a partition $\mathcal{P}$ of $[a,b]$ such that $U_{f,\mathcal{P}}-L_{f,\mathcal{P}}<\epsilon$, which can be written equivalently as: $f$ is integrable iff there exists a sequence of partitions $\mathcal{P}_n$ of $[a,b]$ such that $U_{f,\mathcal{P}_n}-L_{f,\mathcal{P}_n}\to 0$. In that case $$\int\limits_{a}^{b}f=\lim_{n\to \infty}U_{f,\mathcal{P}_n}=\lim_{n\to \infty}L_{f,\mathcal{P}_n}$$ Proof if $f$ is monotone: WLOG $f$ is increasing. Then $$\mathcal{P}_n=\left\{ 0=x_0<...<x_i=\frac{i}{n}<...<x_n=1 \right\}$$ partitions $[0,1]$. Obviously, $M_i(f)=f(\frac in)$ and so $$\int\limits_{0}^{1}f=\lim_{n\to \infty}U_{f,\mathcal{P}_n}=\lim_{n\to \infty}\sum\limits_{i=1}^{n}{M_i(f)\left( x_i-x_{i-1} \right)}=\lim_{n\to \infty}\sum\limits_{i=1}^{n}{f(\frac in)\left(\frac1n\right)}$$ I will say it again: I don't want Riemann sums to be used, only upper and lower sums as above.

What happens if you compare $f(i/n)$ to $M_i(f)$ and $m_i(f)$ for $x_i=i/n$? – Alex R. Jan 4 '13 at 19:41
@Alex I can't believe it's that simple. You have $m_i<f(\frac in)<M_i\implies L_{f,\mathcal{P}_n}<\sum_{i=1}^n\frac1n f(\frac in)<U_{f,\mathcal{P}_n}$ and by letting $n\to \infty$ we have the result. I believe this is correct. Why don't you post it as an answer? – Nameless Jan 4 '13 at 19:45
@ThomasAndrews Yeah I mistook $\exists$ for $\forall$. Can you explain the point with the refinements? – Nameless Jan 4 '13 at 20:59
@ThomasAndrews Here is what I think. Let $\mathcal{P}_n$ be partitions so that $U_{f,P_n}-L_{f,P_n}\to 0$ and $\mathcal{Q}_n=P_n\cup\left\{0,\frac1n,...,1\right\}$. Then $U_{f,Q_n}-L_{f,Q_n}\to 0$. Can we show that $U_{f,Q_n}\ge \sum\limits_{i=1}^{n}{f(\frac in)\left(\frac1n\right)}$? – Nameless Jan 4 '13 at 21:14
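A sketch of the squeeze argument proposed in the comments, with the refinement step filled in (my wording; a sketch, not a polished answer): since $m_i(f)\le f(\frac in)\le M_i(f)$ on each $[x_{i-1},x_i]$ of the uniform partition $\mathcal{P}_n$, multiplying by $\frac1n$ and summing gives $$L_{f,\mathcal{P}_n}\le \frac1n\sum_{i=1}^{n}f\left(\frac in\right)\le U_{f,\mathcal{P}_n}.$$ It remains to check that $U_{f,\mathcal{P}_n}-L_{f,\mathcal{P}_n}\to 0$ for the uniform partitions specifically. Given $\epsilon>0$, choose a partition $\mathcal{Q}$ with $U_{f,\mathcal{Q}}-L_{f,\mathcal{Q}}<\epsilon$; refining $\mathcal{P}_n$ by the finitely many points of $\mathcal{Q}$ perturbs the upper and lower sums by at most $(\sup f-\inf f)\,|\mathcal{Q}|/n$ each, where $|\mathcal{Q}|$ is the number of partition points of $\mathcal{Q}$, and this vanishes as $n\to\infty$. Hence $U_{f,\mathcal{P}_n}-L_{f,\mathcal{P}_n}<2\epsilon$ for all large $n$, and the squeeze yields $$\int_0^1 f=\lim_{n\to\infty}\frac1n\sum_{i=1}^{n}f\left(\frac in\right).$$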
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=at&paperid=14982&option_lang=eng
Avtomat. i Telemekh., 2019, Issue 1, Pages 38–53 (Mi at14982)

Stochastic Systems

On optimal retention of the trajectory of discrete stochastic system in tube

V. M. Azanov, Yu. S. Kan
Moscow State Aviation Institute, Moscow, Russia

Abstract: Consideration was given to the design of the optimal control of the general discrete stochastic system with a criterion as the probability of the state vector sojourn in the given sets at each time instant. Derived were relations of the dynamic programming enabling one to establish an optimal solution in the class of Markov strategies without extension of the state vector, with subsequent reduction to an equivalent problem with the probabilistic terminal criterion. Consideration was given to the problem of one-parameter correction of the flying vehicle trajectory. An analytical solution was established.

Keywords: discrete systems, stochastic optimal control, probabilistic criterion, method of dynamic programming, one-parameter pulse correction, control of flying vehicle motion.

Funding: Russian Foundation for Basic Research, grant 18-08-00595_a; Russian Science Foundation, grant 16-11-00062.

DOI: https://doi.org/10.1134/S0005231019010033

English version: Automation and Remote Control, 2019, 80:1, 30–42 (DOI: https://doi.org/10.1134/S000511791901003X)

Presented by the member of the Editorial Board: B. M. Miller. Revised: 26.06.2018. Accepted: 08.11.2018.

Citation: V. M. Azanov, Yu. S. Kan, “On optimal retention of the trajectory of discrete stochastic system in tube”, Avtomat. i Telemekh., 2019, no. 1, 38–53; Autom. Remote Control, 80:1 (2019), 30–42.
https://testbook.com/objective-questions/mcq-on-force-and-mass--5eea6a1339140f30f369ef47
# An unbalanced force of 90 N acts on an object and produces an acceleration of 15 m/s² in it. Then, the mass of the object is

1. 15 kg 2. 90 kg 3. 1350 kg 4. 6 kg

Option 4 : 6 kg

## Detailed Solution

CONCEPT:
• Force: An interaction that, when applied to a body, changes or tries to change its state of rest or of motion is called force. Force (F) = Mass (m) × acceleration (a)

CALCULATION: Given that: Force (F) = 90 N, Acceleration (a) = 15 m/s². From Force (F) = Mass (m) × acceleration (a), mass (m) = F/a = 90/15 = 6 kg. Hence option 4 is correct.

# Which of the following is not a force?

1. Thrust 2. Impulse 3. Weight 4. Tension

Option 2 : Impulse

## Detailed Solution

The correct option is 2.

CONCEPT:
• Force: An interaction that, when applied to a body, changes or tries to change its state of rest or of motion is called force.
• Thrust: The force acting perpendicular to the surface of an object is called thrust.
• Impulse (J): The change in momentum is called impulse. It is not a force; it is simply the difference between two momenta.
• Weight: The gravitational force acting on any object on the earth's surface is called its weight.
• Tension in a rope: In the ideal case the rope is massless and inextensible, and the force on one side is equal to the force on the other side.

EXPLANATION: Since impulse is the change in momentum, it is not a force. Hence option 2 is correct.

# A scooter of mass 120 kg is moving with a uniform velocity of 108 km/h. The force required to stop the vehicle in 10 s is

1. 720 N 2. 180 N 3. 1200 N 4. 360 N

Option 4 : 360 N

## Detailed Solution

CONCEPT:
• Force: An interaction that, when applied to a body, changes or tries to change its state of rest or of motion is called force. Force (F) = Mass (m) × acceleration (a)
• Retardation: The force acting in the direction opposite to the motion of a body is called the retarding force. This force produces a negative acceleration, which is called retardation or deceleration. Retardation (a) = Force/Mass

The equation of motion is given below: V = u + at, where V is final velocity, u is initial velocity, a is acceleration, and t is time.

CALCULATION: Given that: Mass of scooter (m) = 120 kg, Initial velocity (u) = 108 km/h = (108 × 1000)/3600 m/s = 30 m/s, Time taken to stop (t) = 10 s, Final velocity (V) = 0 m/s.
• Since the body is stopped, the final velocity is 0. V = u + at gives 0 = 30 + a × 10, so a = −30/10 = −3 m/s². Force (F) = ma = 120 × 3 = 360 N (in magnitude, opposing the motion). So option 4 is correct.

# A body of mass 4 kg accelerates from 15 m/s to 25 m/s in 5 seconds due to the application of a force on it. Calculate the magnitude of this force (in N).

1. 32 2. 8 3. 16 4. 64

Option 2 : 8

## Detailed Solution

CONCEPT:
• Force: Force is a push or pull on an object. A force can cause an object to accelerate, slow down, remain in place, or change shape. Force exerted = mass × acceleration, F = m × a. Acceleration = (Final velocity − Initial velocity)/time, a = (v₂ − v₁)/t, where F is the force exerted on the body, m is the mass of the body, a is the acceleration of the body, v₂ is the final velocity, v₁ is the initial velocity, and t is time.

CALCULATION: Given that: mass of the body m = 4 kg, final velocity v₂ = 25 m/s, initial velocity v₁ = 15 m/s, time t = 5 s.
⇒ Acceleration = (Final velocity − Initial velocity)/time, a = (25 − 15)/5 m/s² = 2 m/s²
⇒ Force exerted = mass × acceleration, Force = 4 × 2 = 8 N
So option 2 is correct.
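The numeric answers above all reduce to F = ma combined with v = u + at; a short checking script (mine, for verification only):

```python
# Quick checks of the worked answers above via F = m*a and v = u + a*t.
def mass(force, accel):            # m = F / a
    return force / accel

def stopping_force(m, u, t):       # magnitude of force needed to stop in time t
    a = (0 - u) / t                # from v = u + a*t with v = 0
    return abs(m * a)

print(mass(90, 15))                               # 6.0 kg   (first question)
print(stopping_force(120, 108 * 1000 / 3600, 10)) # 360.0 N  (scooter)
print(4 * (25 - 15) / 5)                          # 8.0 N    (accelerating body)
```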
# The effect of Thrust depends on which of the following?

1. Area 2. Volume 3. Mass 4. Weight

Option 1 : Area

## Detailed Solution

CONCEPT:
• Thrust: The force acting perpendicular to the surface of an object is called thrust.
• It is a type of force, so the SI unit of thrust is the Newton (N).
• For the same force, the effect of thrust is greater on a smaller surface area than on a larger surface area.

EXPLANATION:
• The effect of thrust depends on the area of the surface on which it acts. So option 1 is correct.
• The normal force exerted by a liquid at rest on a given surface in contact with it is called the thrust of the liquid on that surface.
• The normal force (or thrust) exerted by a liquid at rest per unit area of the surface in contact with it is called the pressure of the liquid, or hydrostatic pressure.

# Among the following, the weakest force is

1. gravitational force 2. electric force 3. nuclear force 4. magnetic force

Option 1 : gravitational force

## Detailed Solution

Concept:
• Force is a physical quantity that changes or tends to change the position, size, or direction of motion of an object.
• The SI unit of force is the Newton.
• Some examples of forces are:

Gravitational Force
• It is the force between two objects by virtue of their mass.
• The planetary motions are based on gravitational force.
• The Law of Gravitation was given by Sir Isaac Newton. The force between two objects of mass m₁ and m₂ separated by distance r is given as $$F =\frac{Gm_1m_2}{r^2}$$ G is the universal gravitational constant, having the value 6.67 × 10⁻¹¹ N m² kg⁻².

Electric Force
• It is the force acting between two charged particles.
• It is given as $$F =\frac{kq_1q_2}{r^2}$$ Here, q₁ and q₂ are the magnitudes of the charges and r is the distance between them. The value of the constant k is 9 × 10⁹ N m² C⁻².
• We can see that k is very large compared to G. This signifies that the electric force is much larger than the gravitational force under comparable conditions.

Nuclear Forces
• The nuclear force is the force that binds the nucleons (protons and neutrons) together in the nucleus.
• There are two types of nuclear force, strong and weak.
• The strong nuclear force acts when the distance between nucleons is less than 10⁻¹⁵ m, or 1 femtometre (fermi).
• In that case, it is even stronger than the electrostatic force.
• The weak nuclear force is weaker than the strong nuclear force but stronger than the gravitational force.
• The force can be observed during the β decay of the nucleus.
• The range of the force is 10⁻¹⁶ m.

Magnetic Force
• It is the force that attracts magnetic substances.
• When a magnet is brought near iron nails, it attracts them.
• A small magnet can attract pins, which are held by the gravity of the large Earth. So, clearly, it is stronger than the gravitational force.

Explanation: So, we get the basic idea about the forces given and their relative strengths. The following table signifies the strength of the different forces.

| Force | Relative Strength |
| --- | --- |
| Gravitational Force | 10⁻³⁹ |
| Electric Force | 10⁻² |
| Strong Nuclear Force | 1 |
| Weak Nuclear Force | 10⁻¹³ |

Note: The source of the above data is the NCERT Textbook of class 11. So, the weakest force is the gravitational force.

# Power exerted by an object moving in a straight line is equal to Force multiplied by ______.

1. Velocity 2. Displacement 3. Acceleration 4. Work

Option 1 : Velocity

## Detailed Solution

Key Points
• In the straightforward cases where a constant force moves an object at a constant velocity, the power is just P = Fv.
• The velocity of an object is the rate of change of its position with respect to a frame of reference and is a function of time.
• Displacement is defined to be the change in position of an object.
• Acceleration is the rate of change of the velocity of an object with respect to time.
• Work is the energy transferred to or from an object via the application of a force along a displacement.

# If a 10 kg body is moving with a speed of 15 m/s and is acted upon by a retarding force of 50 N, then how long will it take for the body to come to a stop?

1. 10 sec 2. 3 sec 3. 1 sec 4. 5 sec

Option 2 : 3 sec

## Detailed Solution

CONCEPT:
• Force: An interaction that, when applied to a body, changes or tries to change its state of rest or of motion is called force. Force (F) = Mass (m) × acceleration (a)
• Retardation: The force acting in the direction opposite to the motion of a body is called the retarding force. This force produces a negative acceleration, which is called retardation or deceleration. Retardation (a) = Force/Mass

The equation of motion is given below: V = u + at, where V is final velocity, u is initial velocity, a is acceleration, and t is time.

CALCULATION: Given that: Mass (m) = 10 kg, Initial velocity (u) = 15 m/s, Force (F) = −50 N. Acceleration (a) = F/m = −50/10 = −5 m/s², Final velocity (V) = 0 m/s. V = u + at gives 0 = 15 − 5 × t, so t = 15/5 = 3 sec.

# A body of mass 30 kg moves with an initial speed of 20 m/s. If a retarding force of 60 N is applied, how long will the body take to stop?

1. -10 s 2. 10 s 3. 9 s 4. 0.10 s

Option 2 : 10 s

## Detailed Solution

CONCEPT:
• Force: An interaction that, when applied to a body, changes or tries to change its state of rest or of motion is called force. Force (F) = Mass (m) × acceleration (a)
• Retardation: The force acting in the direction opposite to the motion of a body is called the retarding force. This force produces a negative acceleration, which is called retardation or deceleration. Retardation (a) = Force/Mass

The equation of motion is given below: V = u + at, where V is final velocity, u is initial velocity, a is acceleration, and t is time.

CALCULATION: Given that: Mass (m) = 30 kg, Initial velocity (u) = 20 m/s, Force (F) = −60 N. Acceleration (a) = F/m = −60/30 = −2 m/s², Final velocity (V) = 0 m/s.
• Since the body is stopped, the final velocity is 0. V = u + at gives 0 = 20 + (−2)t, so 2t = 20 and the time taken (t) = 10 seconds. So option 2 is correct.

# A 10 N force is applied on a body and produces in it an acceleration of 2 m/s². The mass of the body is:

1. 5 kg 2. 10 kg 3. 15 kg 4. 20 kg

Option 1 : 5 kg

## Detailed Solution

CONCEPT:
• Force: An interaction that, when applied to a body, changes or tries to change its state of rest or of motion is called force. Force (F) = Mass (m) × acceleration (a)

CALCULATION: Given that: Force (F) = 10 N, Acceleration (a) = 2 m/s². From Force (F) = Mass (m) × acceleration (a), Mass (m) = F/a = 10/2 = 5 kg. So option 1 is correct.

# The force exerted on an object is 200 N and its mass is 100 kg. Find the acceleration of the object.

1. 2 ms⁻¹ 2. 2 ms⁻² 3. 2 ms¹ 4. 2 ms²

Option 2 : 2 ms⁻²

## Detailed Solution

CONCEPT:
• Force: An interaction that, when applied to a body, changes or tries to change its state of rest or of motion is called force.
Force (F) = Mass (m) × acceleration (a)

CALCULATION: Given that: Force = 200 Newton (N), Mass = 100 kg. We know that F = ma; putting in the values, 200 N = 100 kg × a, so a (acceleration) = F/m = 200/100 = 2 ms⁻². Thus, an object of 100 kg, when subjected to a force of 200 N, is accelerated by 2 ms⁻².

# An 800-kg car is moving at a speed of 90 km/h. It takes 5 s to stop after the brakes are applied. The force applied by the brakes will be ________.

1. 3000 N 2. 4000 N 3. 1000 N 4. 2000 N

Option 2 : 4000 N

## Detailed Solution

CONCEPT:
• Force: An interaction that, when applied to a body, changes or tries to change its state of rest or of motion is called force. It is denoted by F. Force (F) = Mass (m) × Acceleration (a)

The equation of motion is given by: V = u + at, where V is final velocity, u is initial velocity, a is acceleration, and t is time.

CALCULATION: Given that: Initial velocity (u) = 90 km/h = 90 × 5/18 = 25 m/s, Time taken to stop the car (t) = 5 s, Final velocity (after stopping) V = 0 m/s, Mass of car (m) = 800 kg. To find: Force = ?
Use V = u + at: 0 = 25 + a × 5, so Acceleration (a) = −25/5 = −5 m/s². Force (F) = Mass (m) × Acceleration (a) = 800 × (−5) = −4000 N.
• The negative sign shows that the force acts in the direction opposite to the motion of the car.

# The mass of the earth is _____.

1. 6 × 10⁻²⁴ kg 2. 6 × 10⁻²³ kg 3. 6 × 10²³ kg 4. 6 × 10²⁴ kg

Option 4 : 6 × 10²⁴ kg

## Detailed Solution

The correct answer is 6 × 10²⁴ kg.

Key Points
• The exact mass of the earth is 5.9722 × 10²⁴ kg.
• The average density of the earth is 5515 kg m⁻³.
• The mass of the moon is 7.342 × 10²² kg.
• Other names of the earth are: Gaia, Gaea, Terra, Tellus.
• Earth has only one natural satellite, i.e. the Moon.

Important Points
• The total circumference of Earth is 40,000 km.
• The total surface area of the Earth is 510,072,000 km².
• The axial tilt of the Earth is 23.4392811°.
• The total number of latitudes is 179.
• The Equator (the great circle) passes through South America, Africa, and Asia.
• 1° of longitudinal distance at the equator is approximately 111.3 km.
• The 180° longitude is known as the International Date Line.

# _______ never occurs singly in nature.

1. Force 2. Momentum 3. Velocity 4. Pressure

Option 1 : Force

## Detailed Solution

CONCEPT:
• Force: According to Newton, a force can never occur singly in nature.
• It is the mutual interaction between two bodies.
• It is that external agency which, when acting on a body, changes or tries to change the body's initial state of rest or of motion with a uniform velocity.
• The Newton is the SI unit of force.
• Momentum: It is the property of a moving body and is defined as the product of the mass and velocity of the body. Its SI unit is kg·m/s.
• Velocity: The displacement of an object in a unit time interval is called velocity. It is denoted by V. Its SI unit is metre/second.
• Pressure: It is the force applied per unit area in a direction perpendicular to the surface of the object. The SI unit of pressure is the pascal.

EXPLANATION:
• Force can never occur singly; it is a mutual interaction between two bodies. So option 1 is correct.

# Grooves are made in the tyres of vehicles to:

1. Increase the friction 2. Remove the friction 3. Equalize friction 4. Reduce the friction

Option 1 : Increase the friction

## Detailed Solution

The correct answer is Increase the friction.
• Tyres have grooves that increase the friction between them and the road.
• This enables the vehicle to get a firm grip on the road and prevents slipping.
• Grooves offer more friction against the ground, which gives a better grip.
• When the treads are worn out, the tyres need to be replaced with new ones.

Key Points
• Friction is a force between two surfaces that resists the motion of the object.
• Friction always works in the direction opposite to the direction in which the object is moving or trying to move.
• The amount of friction depends on the materials from which the two surfaces are made.
• The rougher the surface, the more friction is produced.
• Friction also produces heat.
• Methods to increase friction:
• By making the surfaces rough: friction can be increased by increasing the roughness of the surface.
• By making grooves: we can increase the friction in the case of the tyres of bicycles, cars, or buses by making grooves in them. Due to greater friction, the tyres get a better grip on the road, which prevents skidding.
• Streamlining: the friction due to air is increased by making automobiles less streamlined.

# Name the physical quantity that is equal to the product of force and velocity.

1. Energy 2. Acceleration 3. Work 4. Power

Option 4 : Power

## Detailed Solution

• The physical quantity obtained by the product of force and velocity is power.
• It is a scalar quantity.
• Work done generally refers to the force applied, while energy is used in reference to other factors such as heat.
• Power is defined as work done per unit time.
• Energy is the ability to perform work. Energy can neither be created nor destroyed.
• It can only be transformed from one kind to another.
• The unit of energy is the same as that of work, i.e. the joule.
• Energy is found in many things, and thus there are different types of energy.
• Power is a physical concept that has several different meanings, depending on the context and the information that is available.
• We can define power as the rate of doing work. It is the amount of energy consumed per unit of time.

# An object of mass 100 kg is accelerated uniformly from a velocity of 5 ms⁻¹ to 15 ms⁻¹ in 5 s. Find the magnitude of the force exerted on the object.

1. 200 J 2. 200 kg 3. 200 Pa 4. 200 N

Option 4 : 200 N

## Detailed Solution

CONCEPT:
• Force: An interaction that, when applied to a body, changes or tries to change its state of rest or of motion is called force. Force (F) = Mass (m) × acceleration (a)

There are three equations of motion: V = u + at, V² = u² + 2aS, and $$S = ut + \frac{1}{2}at^2$$ where V = final velocity, u = initial velocity, S = distance traveled by the body under motion, a = acceleration of the body under motion, and t = time taken by the body under motion.

CALCULATION: Given that: mass of the object (m) = 100 kg, time taken (t) = 5 sec.
⇒ Initial velocity = 5 m/s.
⇒ Final velocity = 15 m/s.
Use V = u + at: 15 = 5 + a × 5, so a = 2 m/s². Force = ma = 100 × 2 = 200 N.

# When a force of 10 N acts on a body of mass 10 kg that is able to move freely, which of the following statements will apply?

1. The body moves with a speed of 1 km/s 2. The body moves with an acceleration of 10 ms⁻² 3. The body moves with an acceleration of 1 ms⁻² 4. The body moves with a speed of 1 m/s

Option 3 : The body moves with an acceleration of 1 ms⁻²

## Detailed Solution

CONCEPT:
• Force: An interaction that, when applied to a body, changes or tries to change its state of rest or of motion is called force.
Force (F) = Mass (m) × acceleration (a)

CALCULATION: Given data: The force acting on the body (F) = 10 N, Mass of the body (m) = 10 kg, Acceleration of the body (a) = ?
From Newton's second law, we get F = ma, so a = F/m. Substituting the above values, a = 10/10 = 1 m/s². Hence the acceleration (a) = 1 m/s².
• The body moves with an acceleration of 1 m/s².

# The pressure on earth will be less when the man is __________.

1. Lying 2. Sitting 3. Standing on one foot 4. Standing on two feet

Option 1 : Lying

## Detailed Solution

• The pressure exerted by a man on the earth will be lowest when he is lying down.

Key Points
• The force applied per unit area of a surface is called pressure.
• Its unit is the Newton per square metre.
• This means Pressure = Force/Area.
• Hence, for a given force, area and pressure are inversely proportional.
• When a person is lying down, the contact area is maximum, thus the pressure will be minimum.
• There are many other popular units of pressure. Pressure is a scalar quantity.

# Which of the following is the weakest force?

1. Gravitational Force 2. Weak Nuclear Force 3. Electromagnetic Force 4. Strong Nuclear Force

Option 1 : Gravitational Force

## Detailed Solution

CONCEPT: These 4 forces are known as the four fundamental forces of nature.

1. Gravitation: The attractive force between two objects that have mass or energy is called gravity.
2. The weak nuclear force/interaction: The force that is responsible for particle decay is known as the weak nuclear force.
• This force governs the change of one type of subatomic particle into another.
• It is responsible for radioactive decay and neutrino interactions.
3. Electromagnetic force: The force that acts between charged particles.
• Opposite charges attract each other, while like charges repel each other.
4. The strong nuclear force/interaction: This force binds the fundamental particles of matter together to form larger particles.
• It holds together the quarks that make up protons and neutrons, and part of the strong force also keeps the protons and neutrons of an atom's nucleus together.
• This force is the strongest of the four fundamental forces of nature.

EXPLANATION:
• The gravitational force is the weakest force. So option 1 is correct.
• The weak nuclear force has a very short range and is 10²⁵ times stronger than the gravitational force.
https://www.physicsforums.com/threads/stupid-question.161256/
# Stupid Question

1. Mar 17, 2007

### PhillipKP

I have a stupid question: I saw the term "structureless particle". What is a "structureless particle"? Please explain to me like I am a 4th grader.

2. Mar 17, 2007

### quasar987

It all depends on the context of course, but I would guess it means a particle that is not made up of more particles. Fundamentally, a proton is not a structureless particle because it is made up of 3 quarks. However, a quark would fit in this category, and so would an electron, because as far as we know, they are not made up of anything else.

3. Mar 17, 2007

### BenTheMan

Structureless particle? Maybe it has no internal quantum numbers? I.e., no spin.

4. Mar 18, 2007

### Hootenanny Staff Emeritus

Both electrons and quarks [as far as we know] are elementary particles (have no internal structure); both are also fermions and hence have half-integer spin (i.e. $\pm1/2, \pm3/2, \pm5/2,...$). However, I do agree with quasar987's interpretation. If you google 'elementary particle' you should find some more information.

Last edited: Mar 18, 2007

5. Mar 18, 2007

### daica

In my opinion, structureless particles have no mass or size and aren't made up of more particles. For example, the photon and the neutrino are structureless particles. However, they are still influenced by interactions (fundamental forces such as the weak, electromagnetic, gravitational, and strong forces).

6. Mar 18, 2007

### Hootenanny Staff Emeritus

According to current theory and experimental evidence, neutrinos have a small but finite mass. Out of curiosity, why do you require that a structureless particle must be massless?

7. Mar 18, 2007

### BenTheMan

I know quite a bit about electrons and quarks :) I guess "structureless" is pretty ambiguous. I took structureless to mean without any internal quantum numbers, and spin is internal in the sense that it is intrinsic. I've never seen an electron called a "structureless" particle, only a "fundamental" particle. It would be nice to see some context, I guess. And 5/2 isn't an allowed spin for fundamental particles :)

8. Mar 18, 2007

### pmb_phy

Do you think that is an answer a 4th grader would understand? Pete

9. Mar 19, 2007

### daica

In my country, one would call pmb_phy's post above spam. And you are a spammer. It isn't constructive.
2017-04-29 21:44:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7516248822212219, "perplexity": 2498.64597863513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123590.89/warc/CC-MAIN-20170423031203-00456-ip-10-145-167-34.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1909432/how-do-i-get-the-ft-n1-y-n1-needed-to-use-the-implicit-euler-method
# How do I get the $f(t_{n+1}, y_{n+1})$ needed to use the implicit Euler method?
I have to solve a system of 1st order ODEs which are implicit. I know the formula for the explicit or forward Euler method is: $$y_{n+1}= y_n + hf(t_n, y_n),$$ whereas the formula for the implicit or backward Euler method is $$y_{n+1}= y_n + hf(t_{n+1}, y_{n+1}).$$ In order to use the implicit Euler method, how can I get the value of $$f(t_{n+1}, y_{n+1})$$? Can I use the forward Euler method to get the value $$y_{n+1}$$ and then substitute it into the backward Euler formula?
• There is a difference between an implicit ODE $0=F(t,y,\dot y)$ and implicit numerical methods. Most numerical methods, explicit as well as implicit, are for explicit ODEs $\dot y=f(t,y)$. – LutzL Aug 31 '16 at 16:27
Backward Euler is an implicit method whereas Forward Euler is an explicit method. The latter means that you can obtain $y_{n+1}$ directly from $y_n$. The former means that you in general must solve a (non-linear) equation at each time step to obtain $y_{n+1}$. The typical way to do this is to use a non-linear equation solver such as Newton's method. Example: Say we want to solve $$\frac{dy}{dx}= y\cos{y}.$$ Backward Euler gives us $$y_{n+1} = y_n + h y_{n+1} \cos{y_{n+1}}$$ or $$y_{n+1}(1 - h \cos{y_{n+1}}) = y_n,$$ which clearly is non-linear in $y_{n+1}$ and thus requires a non-linear solver. Update: The reason we shouldn't use Forward Euler to obtain $f(t_{n+1},y_{n+1})$ is that it defeats the entire purpose of the implicit method. If we can obtain $f(t_{n+1},y_{n+1})$ in a stable way using Forward Euler, then we can also obtain $y_{n+1}$ in a stable way, hence there is no need for the implicit method in the first place. However, usually the very reason for using an implicit method is that explicit ones (Forward Euler in this case) are unstable for certain problems, which brings us back to square one.
• Dear, it is very difficult to apply Newton–Raphson to my problem because that method involves derivatives. Can I use a finite difference method for finding $y_{n+1}$, or can you suggest some other technique? – Abbeha Aug 31 '16 at 12:13
• It's hard to tell without knowing the problem you are solving. There are plenty of iterative methods that may work, depending on the problem at hand. You could try posting the details of the problem you are trying to solve on the Computational Science site and ask for advice there. – ekkilop Aug 31 '16 at 12:30
• @ekkilop : [+1] Abbeha: Newton's method can be replaced by the secant method if computing the derivative is a delicate problem. – Jean Marie Aug 31 '16 at 22:09
You can use fixed point iteration or Newton iteration. Fixed point iteration is $$y_{n+1}^{(s+1)}= y_n + hf(t_{n+1}, y_{n+1}^{(s)})$$ with the iteration started from explicit Euler: $y_{n+1}^{(0)}= y_n + hf(t_n, y_n)$
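As a concrete illustration of the answers above, here is a minimal Python sketch of backward Euler for the example ODE dy/dx = y·cos(y), solving the implicit equation at each step by fixed-point iteration, with forward Euler used only to produce the initial guess. The step size, initial value, and tolerance are illustrative choices of mine, not from the original question:

```python
import math

def f(t, y):
    # Right-hand side of the example ODE: dy/dt = y*cos(y)
    return y * math.cos(y)

def backward_euler_step(t_n, y_n, h, max_iters=50, tol=1e-12):
    # Solve y = y_n + h*f(t_n + h, y) for y by fixed-point iteration,
    # using the forward-Euler value only as the initial guess.
    t_next = t_n + h
    y = y_n + h * f(t_n, y_n)
    for _ in range(max_iters):
        y_new = y_n + h * f(t_next, y)
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y

t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = backward_euler_step(t, y, h)
    t += h
    print(f"t = {t:.1f}  y = {y:.6f}")
```

Fixed-point iteration converges only when h is small enough that the update map is a contraction; for stiff problems, that is exactly when one would switch to the Newton or secant iterations mentioned above.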
2019-07-17 22:38:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7914320230484009, "perplexity": 219.687896999214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525414.52/warc/CC-MAIN-20190717221901-20190718003901-00156.warc.gz"}
https://en.cppreference.com/w/cpp/algorithm/ranges
# Constrained algorithms (since C++20)
C++20 provides constrained versions of most algorithms in the namespace std::ranges. In these algorithms, a range can be specified as either an iterator-sentinel pair or as a single range argument, and projections and pointer-to-member callables are supported. Additionally, the return types of most algorithms have been changed to return all potentially useful information computed during the execution of the algorithm.
### Algorithm concepts and utilities
The header <iterator> provides a set of concepts and related utility templates designed to ease constraining common algorithm operations.
Defined in header <iterator> Defined in namespace std ##### Indirect callable concepts specifies that a callable type can be invoked with the result of dereferencing a readable type (concept) specifies that a callable type, when invoked with the result of dereferencing a readable type, satisfies predicate (concept) specifies that a callable type, when invoked with the result of dereferencing two readable types, satisfies predicate (concept) specifies that a callable type, when invoked with the result of dereferencing two readable types, satisfies strict_weak_order (concept) ##### Common algorithm requirements specifies that values may be moved from a readable type to a writable type (concept) specifies that values may be moved from a readable type to a writable type and that the move may be performed via an intermediate object (concept) specifies that values may be copied from a readable type to a writable type (concept) specifies that values may be copied from a readable type to a writable type and that the copy may be performed via an intermediate object (concept) specifies that the values referenced by two readable types can be swapped (concept) specifies that the values referenced by two readable types can be compared (concept) specifies the common requirements of algorithms that reorder elements in place (concept) specifies the requirements of algorithms that merge sorted sequences into an output sequence by copying elements (concept) specifies the common requirements of algorithms that permute sequences into ordered sequences (concept) ##### Utilities computes the result of invoking a callable object on the result of dereferencing some set of readable types (alias template) helper template for specifying the constraints on algorithms that accept projections (class template) ### Constrained algorithms Defined in header <algorithm> Defined in namespace std::ranges ##### Non-modifying sequence operations checks if a predicate is true for all, any or none of the elements in a range (niebloid) applies a function to a range of elements (niebloid) returns the number of elements satisfying specific criteria (niebloid) finds the first position where two ranges differ (niebloid) determines if two sets of elements are the same (niebloid) returns true if one range is lexicographically less than another (niebloid) finds the first element satisfying specific criteria (niebloid) finds the last sequence of elements in a certain range (niebloid) searches for any one of a set of elements (niebloid) finds the first two adjacent items that are equal (or satisfy a given predicate) (niebloid) searches for a range of elements (niebloid) searches for a number consecutive copies of an element in a range (niebloid) ##### Modifying sequence operations copies a range of elements to a new location (niebloid) copies a number of elements to a new location (niebloid) copies a range of elements in backwards order (niebloid) moves a range of elements to a new location (niebloid) moves a range of elements to a new location in backwards order (niebloid) assigns a range of elements a certain value (niebloid) assigns a value to a number of elements (niebloid) applies a function to a range of elements (niebloid) saves the result of a function in a range (niebloid) saves the result of N applications of a function (niebloid) removes elements satisfying specific criteria (niebloid) copies a range of elements omitting those that satisfy specific criteria (niebloid) replaces all values satisfying specific criteria with another 
value (niebloid) copies a range, replacing elements satisfying specific criteria with another value (niebloid) swaps two ranges of elements (niebloid) reverses the order of elements in a range (niebloid) creates a copy of a range that is reversed (niebloid) rotates the order of elements in a range (niebloid) copies and rotate a range of elements (niebloid) randomly re-orders elements in a range (niebloid) removes consecutive duplicate elements in a range (niebloid) creates a copy of some range of elements that contains no consecutive duplicates (niebloid) ##### Partitioning operations determines if the range is partitioned by the given predicate (niebloid) divides a range of elements into two groups (niebloid) copies a range dividing the elements into two groups (niebloid) divides elements into two groups while preserving their relative order (niebloid) locates the partition point of a partitioned range (niebloid) ##### Sorting operations checks whether a range is sorted into ascending order (niebloid) finds the largest sorted subrange (niebloid) sorts a range into ascending order (niebloid) sorts the first N elements of a range (niebloid) copies and partially sorts a range of elements (niebloid) sorts a range of elements while preserving order between equal elements (niebloid) partially sorts the given range making sure that it is partitioned by the given element (niebloid) ##### Binary search operations (on sorted ranges) returns an iterator to the first element not less than the given value (niebloid) returns an iterator to the first element greater than a certain value (niebloid) determines if an element exists in a certain range (niebloid) returns range of elements matching a specific key (niebloid) ##### Set operations (on sorted ranges) merges two sorted ranges (niebloid) merges two ordered ranges in-place (niebloid) returns true if one set is a subset of another (niebloid) computes the difference between two sets (niebloid) computes the intersection of two sets (niebloid) computes the symmetric difference between two sets (niebloid) computes the union of two sets (niebloid) ##### Heap operations checks if the given range is a max heap (niebloid) finds the largest subrange that is a max heap (niebloid) creates a max heap out of a range of elements (niebloid) adds an element to a max heap (niebloid) removes the largest element from a max heap (niebloid) turns a max heap into a range of elements sorted in ascending order (niebloid) ##### Minimum/maximum operations returns the greater of the given values (niebloid) returns the largest element in a range (niebloid) returns the smaller of the given values (niebloid) returns the smallest element in a range (niebloid) returns the smaller and larger of two elements (niebloid) returns the smallest and the largest elements in a range (niebloid) ##### Permutation operations determines if a sequence is a permutation of another sequence (niebloid) generates the next greater lexicographic permutation of a range of elements (niebloid) generates the next smaller lexicographic permutation of a range of elements (niebloid) ### Constrained uninitialized memory algorithms Defined in header Defined in namespace std::ranges uninitialized_copy(C++20) copies a range of objects to an uninitialized area of memory (niebloid) uninitialized_copy_n(C++20) copies a number of objects to an uninitialized area of memory (niebloid) uninitialized_fill(C++20) copies an object to an uninitialized area of memory, defined by a range (niebloid) uninitialized_fill_n(C++20) 
copies an object to an uninitialized area of memory, defined by a start and a count (niebloid) uninitialized_move(C++20) moves a range of objects to an uninitialized area of memory (niebloid) uninitialized_move_n(C++20) moves a number of objects to an uninitialized area of memory (niebloid) constructs objects by default-initialization in an uninitialized area of memory, defined by a range (niebloid) constructs objects by default-initialization in an uninitialized area of memory, defined by a start and count (niebloid) constructs objects by value-initialization in an uninitialized area of memory, defined by a range (niebloid) constructs objects by value-initialization in an uninitialized area of memory, defined by a start and a count (niebloid) destroy_at(C++20) destroys an object at a given address (niebloid) destroy(C++20) destroys a range of objects (niebloid) destroy_n(C++20) destroys a number of objects in a range (niebloid)
2019-08-23 01:22:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2706681489944458, "perplexity": 6178.050122441052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317688.48/warc/CC-MAIN-20190822235908-20190823021908-00169.warc.gz"}
http://docs.sympy.org/0.7.6/_modules/sympy/geometry/plane.html
Source code for sympy.geometry.plane """Geometrical Planes. Contains ======== Plane """ from __future__ import print_function, division from sympy.core import S, C, sympify, Dummy, nan, Eq, symbols, Symbol, Rational from sympy.core.function import expand_mul from sympy.functions.elementary.trigonometric import _pi_coeff as pi_coeff, \ sqrt from sympy.core.logic import fuzzy_and from sympy.core.exprtools import factor_terms from sympy.simplify.simplify import simplify from sympy.solvers import solve from sympy.polys.polytools import cancel from sympy.geometry.exceptions import GeometryError from .entity import GeometryEntity from .point3d import Point3D from .point import Point from .line3d import LinearEntity3D, Line3D, Segment3D, Ray3D from .line import Line, Segment, Ray from sympy.matrices import Matrix [docs]class Plane(GeometryEntity): """ A plane is a flat, two-dimensional surface. A plane is the two-dimensional analogue of a point (zero-dimensions), a line (one-dimension) and a solid (three-dimensions). A plane can generally be constructed by two types of inputs. They are three non-collinear points and a point and the plane's normal vector. Attributes ========== p1 normal_vector Examples ======== >>> from sympy import Plane, Point3D >>> from sympy.abc import x >>> Plane(Point3D(1, 1, 1), Point3D(2, 3, 4), Point3D(2, 2, 2)) Plane(Point3D(1, 1, 1), (-1, 2, -1)) >>> Plane((1, 1, 1), (2, 3, 4), (2, 2, 2)) Plane(Point3D(1, 1, 1), (-1, 2, -1)) >>> Plane(Point3D(1, 1, 1), normal_vector=(1,4,7)) Plane(Point3D(1, 1, 1), (1, 4, 7)) """ def __new__(cls, p1, a=None, b=None, **kwargs): p1 = Point3D(p1) if not a and not b and kwargs.get('normal_vector', None): a = kwargs.pop('normal_vector') if not b and not isinstance(a, Point3D) and \ len(a) == 3: normal_vector = a elif a and b: p2 = Point3D(a) p3 = Point3D(b) if Point3D.are_collinear(p1, p2, p3): raise NotImplementedError('Enter three non-collinear points') a = p1.direction_ratio(p2) b = p1.direction_ratio(p3) normal_vector = tuple(Matrix(a).cross(Matrix(b))) else: raise ValueError('Either provide 3 3D points or a point with a ' 'normal vector') return GeometryEntity.__new__(cls, p1, normal_vector, **kwargs) @property [docs] def p1(self): """The only defining point of the plane. Others can be obtained from the arbitrary_point method. See Also ======== sympy.geometry.point3d.Point3D Examples ======== >>> from sympy import Point3D, Plane >>> a = Plane(Point3D(1, 1, 1), Point3D(2, 3, 4), Point3D(2, 2, 2)) >>> a.p1 Point3D(1, 1, 1) """ return self.args[0] @property [docs] def normal_vector(self): """Normal vector of the given plane. Examples ======== >>> from sympy import Point3D, Plane >>> a = Plane(Point3D(1, 1, 1), Point3D(2, 3, 4), Point3D(2, 2, 2)) >>> a.normal_vector (-1, 2, -1) >>> a = Plane(Point3D(1, 1, 1), normal_vector=(1, 4, 7)) >>> a.normal_vector (1, 4, 7) """ return self.args[1] [docs] def equation(self, x=None, y=None, z=None): """The equation of the Plane. Examples ======== >>> from sympy import Point3D, Plane >>> a = Plane(Point3D(1, 1, 2), Point3D(2, 4, 7), Point3D(3, 5, 1)) >>> a.equation() -23*x + 11*y - 2*z + 16 >>> a = Plane(Point3D(1, 4, 2), normal_vector=(6, 6, 6)) >>> a.equation() 6*x + 6*y + 6*z - 42 """ x, y, z = [i if i else Symbol(j, real=True) for i, j in zip((x, y, z), 'xyz')] a = Point3D(x, y, z) b = self.p1.direction_ratio(a) c = self.normal_vector return (sum(i*j for i, j in zip(b, c))) [docs] def projection(self, pt): """Project the given point onto the plane along the plane normal. 
Parameters ========== Point or Point3D Returns ======= Point3D Examples ======== >>> from sympy import Plane, Point, Point3D >>> A = Plane(Point3D(1, 1, 2), normal_vector=(1, 1, 1)) The projection is along the normal vector direction, not the z axis, so (1, 1) does not project to (1, 1, 2) on the plane A: >>> b = Point(1, 1) >>> A.projection(b) Point3D(5/3, 5/3, 2/3) >>> _ in A True But the point (1, 1, 2) projects to (1, 1) on the XY-plane: >>> XY = Plane((0, 0, 0), (0, 0, 1)) >>> XY.projection((1, 1, 2)) Point3D(1, 1, 0) """ rv = Point3D(pt) if rv in self: return rv return self.intersection(Line3D(rv, rv + Point3D(self.normal_vector)))[0] [docs] def projection_line(self, line): """Project the given line onto the plane through the normal plane containing the line. Parameters ========== LinearEntity or LinearEntity3D Returns ======= Point3D, Line3D, Ray3D or Segment3D Notes ===== For the interaction between 2D and 3D lines (segments, rays), you should convert the line to 3D by using this method. For example, for finding the intersection between a 2D and a 3D line, convert the 2D line to a 3D line by projecting it onto a required plane and then proceed to find the intersection between those lines. Examples ======== >>> from sympy import Plane, Line, Line3D, Point, Point3D >>> a = Plane(Point3D(1, 1, 1), normal_vector=(1, 1, 1)) >>> b = Line(Point(1, 1), Point(2, 2)) >>> a.projection_line(b) Line3D(Point3D(4/3, 4/3, 1/3), Point3D(5/3, 5/3, -1/3)) >>> c = Line3D(Point3D(1, 1, 1), Point3D(2, 2, 2)) >>> a.projection_line(c) Point3D(1, 1, 1) """ from sympy.geometry.line import LinearEntity from sympy.geometry.line3d import LinearEntity3D if not isinstance(line, (LinearEntity, LinearEntity3D)): raise NotImplementedError('Enter a linear entity only') a, b = self.projection(line.p1), self.projection(line.p2) if a == b: # projection does not imply intersection so for # this case (line parallel to plane's normal) we # return the projection point return a if isinstance(line, (Line, Line3D)): return Line3D(a, b) if isinstance(line, (Ray, Ray3D)): return Ray3D(a, b) if isinstance(line, (Segment, Segment3D)): return Segment3D(a, b) [docs] def is_parallel(self, l): """Is the given geometric entity parallel to the plane? Parameters ========== LinearEntity3D or Plane Returns ======= Boolean Examples ======== >>> from sympy import Plane, Point3D >>> a = Plane(Point3D(1,4,6), normal_vector=(2, 4, 6)) >>> b = Plane(Point3D(3,1,3), normal_vector=(4, 8, 12)) >>> a.is_parallel(b) True """ from sympy.geometry.line3d import LinearEntity3D if isinstance(l, LinearEntity3D): a = l.direction_ratio b = self.normal_vector c = sum([i*j for i, j in zip(a, b)]) if c == 0: return True else: return False elif isinstance(l, Plane): a = Matrix(l.normal_vector) b = Matrix(self.normal_vector) if a.cross(b).is_zero: return True else: return False [docs] def is_perpendicular(self, l): """Is the given geometric entity perpendicular to the given plane?
Parameters ========== LinearEntity3D or Plane Returns ======= Boolean Examples ======== >>> from sympy import Plane, Point3D >>> a = Plane(Point3D(1,4,6), normal_vector=(2, 4, 6)) >>> b = Plane(Point3D(2, 2, 2), normal_vector=(-1, 2, -1)) >>> a.is_perpendicular(b) True """ from sympy.geometry.line3d import LinearEntity3D if isinstance(l, LinearEntity3D): a = Matrix(l.direction_ratio) b = Matrix(self.normal_vector) if a.cross(b).is_zero: return True else: return False elif isinstance(l, Plane): a = Matrix(l.normal_vector) b = Matrix(self.normal_vector) if a.dot(b) == 0: return True else: return False else: return False [docs] def distance(self, o): """Distance between the plane and another geometric entity. Parameters ========== Point3D, LinearEntity3D, Plane. Returns ======= distance Notes ===== This method accepts only 3D entities as its parameter, but if you want to calculate the distance between a 2D entity and a plane you should first convert to a 3D entity by projecting onto a desired plane and then proceed to calculate the distance. Examples ======== >>> from sympy import Point, Point3D, Line, Line3D, Plane >>> a = Plane(Point3D(1, 1, 1), normal_vector=(1, 1, 1)) >>> b = Point3D(1, 2, 3) >>> a.distance(b) sqrt(3) >>> c = Line3D(Point3D(2, 3, 1), Point3D(1, 2, 2)) >>> a.distance(c) 0 """ from sympy.geometry.line3d import LinearEntity3D x, y, z = map(Dummy, 'xyz') if self.intersection(o) != []: return S.Zero if isinstance(o, Point3D): x, y, z = map(Dummy, 'xyz') k = self.equation(x, y, z) a, b, c = [k.coeff(i) for i in (x, y, z)] d = k.xreplace({x: o.args[0], y: o.args[1], z: o.args[2]}) t = abs(d/sqrt(a**2 + b**2 + c**2)) return t if isinstance(o, LinearEntity3D): a, b = o.p1, self.p1 c = Matrix(a.direction_ratio(b)) d = Matrix(self.normal_vector) e = c.dot(d) f = sqrt(sum([i**2 for i in self.normal_vector])) return abs(e / f) if isinstance(o, Plane): a, b = o.p1, self.p1 c = Matrix(a.direction_ratio(b)) d = Matrix(self.normal_vector) e = c.dot(d) f = sqrt(sum([i**2 for i in self.normal_vector])) return abs(e / f) [docs] def angle_between(self, o): """Angle between the plane and another geometric entity. Parameters ========== LinearEntity3D, Plane. Returns ======= angle : angle in radians Notes ===== This method accepts only 3D entities as its parameter, but if you want to calculate the angle between a 2D entity and a plane you should first convert to a 3D entity by projecting onto a desired plane and then proceed to calculate the angle. Examples ======== >>> from sympy import Point3D, Line3D, Plane >>> a = Plane(Point3D(1, 2, 2), normal_vector=(1, 2, 3)) >>> b = Line3D(Point3D(1, 3, 4), Point3D(2, 2, 2)) >>> a.angle_between(b) -asin(sqrt(21)/6) """ from sympy.geometry.line3d import LinearEntity3D if isinstance(o, LinearEntity3D): a = Matrix(self.normal_vector) b = Matrix(o.direction_ratio) c = a.dot(b) d = sqrt(sum([i**2 for i in self.normal_vector])) e = sqrt(sum([i**2 for i in o.direction_ratio])) return C.asin(c/(d*e)) if isinstance(o, Plane): a = Matrix(self.normal_vector) b = Matrix(o.normal_vector) c = a.dot(b) d = sqrt(sum([i**2 for i in self.normal_vector])) e = sqrt(sum([i**2 for i in o.normal_vector])) return C.acos(c/(d*e)) @staticmethod [docs] def are_concurrent(*planes): """Is a sequence of Planes concurrent? Two or more Planes are concurrent if their intersections are a common line.
Parameters ========== planes: list Returns ======= Boolean Examples ======== >>> from sympy import Plane, Point3D >>> a = Plane(Point3D(5, 0, 0), normal_vector=(1, -1, 1)) >>> b = Plane(Point3D(0, -2, 0), normal_vector=(3, 1, 1)) >>> c = Plane(Point3D(0, -1, 0), normal_vector=(5, -1, 9)) >>> Plane.are_concurrent(a, b) True >>> Plane.are_concurrent(a, b, c) False """ planes = set(planes) if len(planes) < 2: return False for i in planes: if not isinstance(i, Plane): raise ValueError('All objects should be Planes but got %s' % i.func) planes = list(planes) first = planes.pop(0) sol = first.intersection(planes[0]) if sol == []: return False else: line = sol[0] for i in planes[1:]: l = first.intersection(i) if not l or not l[0] in line: return False return True [docs] def perpendicular_line(self, pt): """A line perpendicular to the given plane. Parameters ========== pt: Point3D Returns ======= Line3D Examples ======== >>> from sympy import Plane, Point3D, Line3D >>> a = Plane(Point3D(1,4,6), normal_vector=(2, 4, 6)) >>> a.perpendicular_line(Point3D(9, 8, 7)) Line3D(Point3D(9, 8, 7), Point3D(11, 12, 13)) """ a = self.normal_vector return Line3D(pt, direction_ratio=a) [docs] def parallel_plane(self, pt): """ Plane parallel to the given plane and passing through the point pt. Parameters ========== pt: Point3D Returns ======= Plane Examples ======== >>> from sympy import Plane, Point3D >>> a = Plane(Point3D(1, 4, 6), normal_vector=(2, 4, 6)) >>> a.parallel_plane(Point3D(2, 3, 5)) Plane(Point3D(2, 3, 5), (2, 4, 6)) """ a = self.normal_vector return Plane(pt, normal_vector=a) [docs] def perpendicular_plane(self, *pts): """ Return a perpendicular passing through the given points. If the direction ratio between the points is the same as the Plane's normal vector then, to select from the infinite number of possible planes, a third point will be chosen on the z-axis (or the y-axis if the normal vector is already parallel to the z-axis). If less than two points are given they will be supplied as follows: if no point is given then pt1 will be self.p1; if a second point is not given it will be a point through pt1 on a line parallel to the z-axis (if the normal is not already the z-axis, otherwise on the line parallel to the y-axis). Parameters ========== pts: 0, 1 or 2 Point3D Returns ======= Plane Examples ======== >>> from sympy import Plane, Point3D, Line3D >>> a, b = Point3D(0, 0, 0), Point3D(0, 1, 0) >>> Z = (0, 0, 1) >>> p = Plane(a, normal_vector=Z) >>> p.perpendicular_plane(a, b) Plane(Point3D(0, 0, 0), (1, 0, 0)) """ if len(pts) > 2: raise ValueError('No more than 2 pts should be provided.') pts = list(pts) if len(pts) == 0: pts.append(self.p1) if len(pts) == 1: x, y, z = self.normal_vector if x == y == 0: dir = (0, 1, 0) else: dir = (0, 0, 1) pts.append(pts[0] + Point3D(*dir)) p1, p2 = [Point3D(i) for i in pts] l = Line3D(p1, p2) n = Line3D(p1, direction_ratio=self.normal_vector) if l in n: # XXX should an error be raised instead? # there are infinitely many perpendicular planes; x, y, z = self.normal_vector if x == y == 0: # the z axis is the normal so pick a pt on the y-axis p3 = Point3D(0, 1, 0) # case 1 else: # else pick a pt on the z axis p3 = Point3D(0, 0, 1) # case 2 # in case that point is already given, move it a bit if p3 in l: p3 *= 2 # case 3 else: p3 = p1 + Point3D(*self.normal_vector) # case 4 return Plane(p1, p2, p3) [docs] def random_point(self, seed=None): """ Returns a random point on the Plane. 
Returns ======= Point3D """ import random if seed is not None: rng = random.Random(seed) else: rng = random t = Dummy('t') return self.arbitrary_point(t).subs(t, Rational(rng.random())) [docs] def arbitrary_point(self, t=None): """ Returns an arbitrary point on the Plane; varying t from 0 to 2*pi will move the point in a circle of radius 1 about p1 of the Plane. Examples ======== >>> from sympy.geometry.plane import Plane >>> from sympy.abc import t >>> p = Plane((0, 0, 0), (0, 0, 1), (0, 1, 0)) >>> p.arbitrary_point(t) Point3D(0, cos(t), sin(t)) >>> _.distance(p.p1).simplify() 1 Returns ======= Point3D """ from sympy import cos, sin t = t or Dummy('t') x, y, z = self.normal_vector a, b, c = self.p1.args if x == y == 0: return Point3D(a + cos(t), b + sin(t), c) elif x == z == 0: return Point3D(a + cos(t), b, c + sin(t)) elif y == z == 0: return Point3D(a, b + cos(t), c + sin(t)) m = Dummy() p = self.projection(Point3D(self.p1.x + cos(t), self.p1.y + sin(t), 0)*m) return p.xreplace({m: solve(p.distance(self.p1) - 1, m)[0]}) [docs] def intersection(self, o): """ The intersection with other geometrical entity. Parameters ========== Point, Point3D, LinearEntity, LinearEntity3D, Plane Returns ======= List Examples ======== >>> from sympy import Point, Point3D, Line, Line3D, Plane >>> a = Plane(Point3D(1, 2, 3), normal_vector=(1, 1, 1)) >>> b = Point3D(1, 2, 3) >>> a.intersection(b) [Point3D(1, 2, 3)] >>> c = Line3D(Point3D(1, 4, 7), Point3D(2, 2, 2)) >>> a.intersection(c) [Point3D(2, 2, 2)] >>> d = Plane(Point3D(6, 0, 0), normal_vector=(2, -5, 3)) >>> e = Plane(Point3D(2, 0, 0), normal_vector=(3, 4, -3)) >>> d.intersection(e) [Line3D(Point3D(78/23, -24/23, 0), Point3D(147/23, 321/23, 23))] """ from sympy.geometry.line3d import LinearEntity3D from sympy.geometry.line import LinearEntity if isinstance(o, (Point, Point3D)): if o in self: return [Point3D(o)] else: return [] if isinstance(o, (LinearEntity, LinearEntity3D)): if o in self: p1, p2 = o.p1, o.p2 if isinstance(o, Segment): o = Segment3D(p1, p2) elif isinstance(o, Ray): o = Ray3D(p1, p2) elif isinstance(o, Line): o = Line3D(p1, p2) else: raise ValueError('unhandled linear entity: %s' % o.func) return [o] else: x, y, z = map(Dummy, 'xyz') t = Dummy() # unnamed else it may clash with a symbol in o a = Point3D(o.arbitrary_point(t)) b = self.equation(x, y, z) c = solve(b.subs(list(zip((x, y, z), a.args))), t) if not c: return [] else: p = a.subs(t, c[0]) if p not in self: return [] # e.g. 
a segment might not intersect a plane return [p] if isinstance(o, Plane): if o == self: return [self] if self.is_parallel(o): return [] else: x, y, z = map(Dummy, 'xyz') a, b = Matrix([self.normal_vector]), Matrix([o.normal_vector]) c = list(a.cross(b)) d = self.equation(x, y, z) e = o.equation(x, y, z) f = solve((d.subs(z, 0), e.subs(z, 0)), [x, y]) if len(f) == 2: return [Line3D(Point3D(f[x], f[y], 0), direction_ratio=c)] g = solve((d.subs(y, 0), e.subs(y, 0)),[x, z]) if len(g) == 2: return [Line3D(Point3D(g[x], 0, g[z]), direction_ratio=c)] h = solve((d.subs(x, 0), e.subs(x, 0)),[y, z]) if len(h) == 2: return [Line3D(Point3D(0, h[y], h[z]), direction_ratio=c)] def __contains__(self, o): from sympy.geometry.line3d import LinearEntity3D from sympy.geometry.line import LinearEntity x, y, z = map(Dummy, 'xyz') k = self.equation(x, y, z) if isinstance(o, Point): o = Point3D(o) if isinstance(o, Point3D): d = k.xreplace(dict(zip((x, y, z), o.args))) return d.equals(0) elif isinstance(o, (LinearEntity, LinearEntity3D)): t = Dummy() d = Point3D(o.arbitrary_point(t)) e = k.subs([(x, d.x), (y, d.y), (z, d.z)]) return e.equals(0) else: return False [docs] def is_coplanar(self, o): """ Returns True if o is coplanar with self, else False. Examples ======== >>> from sympy import Plane, Point3D >>> o = (0, 0, 0) >>> p = Plane(o, (1, 1, 1)) >>> p2 = Plane(o, (2, 2, 2)) >>> p == p2 False >>> p.is_coplanar(p2) True """ if isinstance(o, Plane): x, y, z = map(Dummy, 'xyz') return not cancel(self.equation(x, y, z)/o.equation(x, y, z)).has(x, y, z) if isinstance(o, Point3D): return o in self elif isinstance(o, LinearEntity3D): return all(i in self for i in o) elif isinstance(o, GeometryEntity): # XXX should only be handling 2D objects now return all(i == 0 for i in self.normal_vector[:2])
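For orientation, here is a brief usage sketch of the Plane class above, mirroring its own docstring examples (it assumes a SymPy build that exposes this geometry API):

```python
from sympy import Plane, Point3D

# A plane through three non-collinear points, as in the class docstring.
p = Plane(Point3D(1, 1, 1), Point3D(2, 3, 4), Point3D(2, 2, 2))
print(p.normal_vector)    # (-1, 2, -1)
print(p.equation())       # a linear expression in x, y and z

# Plane-plane intersection yields a Line3D when the planes are not parallel.
d = Plane(Point3D(6, 0, 0), normal_vector=(2, -5, 3))
e = Plane(Point3D(2, 0, 0), normal_vector=(3, 4, -3))
print(d.intersection(e))  # [Line3D(Point3D(78/23, -24/23, 0), ...)]

# Projection of a point onto the plane along the plane's normal.
print(p.projection(Point3D(0, 0, 0)))
```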
2018-08-15 10:30:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30972230434417725, "perplexity": 12292.262103049829}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210058.26/warc/CC-MAIN-20180815102653-20180815122653-00573.warc.gz"}
https://www.mersenneforum.org/showthread.php?s=06ea5e04dc5c59648bb4fe7cfa2197dd&t=22034&goto=nextoldest
mersenneforum.org Complexity of Chinese Remainder Theorem
2017-02-08, 10:12 #1 carpetpool "Sam" Nov 2016 2×3×53 Posts Complexity of Chinese Remainder Theorem What is the average running time of the Chinese remainder theorem for solving congruences modulo primes up to p_n, where p_n is the nth prime? e.g. the complexity of solving the congruence sets {[x, 2], [x_2, 3]}, {[x, 2], [x_2, 3], [x_3, 5]}, {[x, 2], [x_2, 3], [x_3, 5], [x_4, 7]},.... {[x, 2], [x_2, 3], [x_3, 5], [x_4, 7]......, [x_n, p_n]} For the purpose of generating fairly large random numbers, what is the complexity of solving congruence sets modulo the first n primes? Programs such as ntheory and PARI/GP should handle this, but I'm unaware of the timing. It takes a few seconds for primes p < 1669, but what about the timing when solving congruence sets mod primes p < 38677?
2017-02-08, 13:01 #2 science_man_88 "Forget I exist" Jul 2009 Dumbassville 20B116 Posts Quote: Originally Posted by carpetpool What is the average running time of the Chinese remainder theorem for solving congruences modulo primes up to p_n, where p_n is the nth prime? e.g. the complexity of solving the congruence sets {[x, 2], [x_2, 3]}, {[x, 2], [x_2, 3], [x_3, 5]}, {[x, 2], [x_2, 3], [x_3, 5], [x_4, 7]},.... {[x, 2], [x_2, 3], [x_3, 5], [x_4, 7]......, [x_n, p_n]} For the purpose of generating fairly large random numbers, what is the complexity of solving congruence sets modulo the first n primes? Programs such as ntheory and PARI/GP should handle this, but I'm unaware of the timing. It takes a few seconds for primes p < 1669, but what about the timing when solving congruence sets mod primes p < 38677? Code: my(a=primes([2,38677]),b=vector(#a,i,Mod(random(a[i]-1),a[i])));fold((x,y)->chinese(x,y),b) runs in roughly 782 ms. on my machine, and Code: my(a=primes([2,38677*2]),b=vector(#a,i,Mod(random(a[i]-1),a[i])));fold((x,y)->chinese(x,y),b) is 3,970 ms., and Code: my(a=primes([2,38677*4]),b=vector(#a,i,Mod(random(a[i]-1),a[i])));fold((x,y)->chinese(x,y),b) is about 19,470 ms., and this includes generating the primes etc. So, roughly speaking, every time you double the top prime used, you multiply the time needed by about 5. Okay, the ratio seems to fall the higher you go, for me personally. 38677*8 gave a time of about 1 min, 30.172 seconds; note I haven't done too much testing at each level to get a random sample.
2017-02-08, 15:37 #3 R. Gerbicz "Robert Gerbicz" Oct 2005 Hungary 58916 Posts Quote: Originally Posted by science_man_88 Code: my(a=primes([2,38677]),b=vector(#a,i,Mod(random(a[i]-1),a[i])));fold((x,y)->chinese(x,y),b) runs in roughly 782 ms. on my machine, and Code: my(a=primes([2,38677*2]),b=vector(#a,i,Mod(random(a[i]-1),a[i])));fold((x,y)->chinese(x,y),b) is 3,970 ms., and Code: my(a=primes([2,38677*4]),b=vector(#a,i,Mod(random(a[i]-1),a[i])));fold((x,y)->chinese(x,y),b) is about 19,470 ms. You can do it much faster. It is a known problem, and solvable in almost linear time with binary splitting. In the vector v, give the problem: if you need x==r[i] mod m[i] for i=1..L, then v=[Mod(r[1],m[1]),...,Mod(r[L],m[L])]. The solution (if it exists) will be f(v); note that this solves the general problem, so we don't assume that the m[] values are pairwise coprime.
Code: allocatemem(10^9) default(primelimit,10^7) f(v)={local(L,mid,j);L=length(v); if(L==1,return(v[1])); if(L==2,return(chinese(v[1],v[2]))); mid=L\2;return(chinese(f(vecextract(v,vector(mid,j,j))),f(vecextract(v,vector(L-mid,j,mid+j)))))} v=[Mod(0,2),Mod(1,3),Mod(4,5),Mod(2,7)]; f(v) test(s)={a=primes([2,s]);b=vector(#a,i,Mod(random(a[i]),a[i]));return(f(b))} # test(38677); test(2*38677); test(4*38677); test(8*38677); test(16*38677); test(32*38677); test(64*38677); test(128*38677); test(256*38677); it has given (in the first line the solution of the sample CRT problem) Code: ? %4 = Mod(184, 210) ? time = 11 ms. ? time = 23 ms. ? time = 51 ms. ? time = 113 ms. ? time = 267 ms. ? time = 646 ms. ? time = 1,533 ms. ? time = 3,736 ms. ? time = 8,924 ms.
2017-02-09, 10:30 #4 Nick Dec 2012 The Netherlands 101110111002 Posts If you are interested in the theory, look up Garner's algorithm in Prof. Knuth's "The Art of Computer Programming" (volume 2).
2017-02-09, 19:26 #5 danaj "Dana Jacobsen" Feb 2011 Bangkok, TH 2·3·151 Posts Since Perl/ntheory was mentioned... The GMP code for my chinese function does a recursive split like the one Robert mentions, though I chunk by splitting in 8 rather than 2. The times for large numbers of inputs are nearly identical to his simple test function on my machine. But ... my non-GMP chinese function takes surprisingly long to return when indicating overflow. It comes down to, of all things, sorting the moduli. I just fixed that. It now goes to GMP with little overhead even with hundreds of thousands of pairs.
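The binary-splitting approach in the PARI/GP code above carries over directly to other languages. Here is a minimal Python sketch of the same divide-and-conquer CRT; unlike Gerbicz's f(v), it assumes pairwise coprime moduli, and the function names are my own:

```python
from math import gcd

def crt_pair(r1, m1, r2, m2):
    # Combine x = r1 (mod m1) with x = r2 (mod m2) for coprime m1, m2.
    assert gcd(m1, m2) == 1
    u = pow(m1, -1, m2)                       # modular inverse, Python 3.8+
    x = (r1 + (r2 - r1) * u % m2 * m1) % (m1 * m2)
    return x, m1 * m2

def crt(residues, moduli):
    # Divide and conquer, mirroring the binary splitting of f(v) above.
    if len(moduli) == 1:
        return residues[0] % moduli[0], moduli[0]
    mid = len(moduli) // 2
    r1, m1 = crt(residues[:mid], moduli[:mid])
    r2, m2 = crt(residues[mid:], moduli[mid:])
    return crt_pair(r1, m1, r2, m2)

# The sample problem from the thread: x = 0 (mod 2), 1 (mod 3), 4 (mod 5), 2 (mod 7)
print(crt([0, 1, 4, 2], [2, 3, 5, 7]))        # -> (184, 210)
```

The balanced splitting keeps the big-integer multiplications at comparable sizes, which is where the speedup over a left-to-right fold comes from.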
2020-11-27 06:49:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4843558669090271, "perplexity": 4997.077283837057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141189141.23/warc/CC-MAIN-20201127044624-20201127074624-00060.warc.gz"}
https://suresolv.com/ssc-cgl-tier-ii?page=8
SSC CGL Tier II | Page 9 | SureSolv For guided learning and practice on Geometry, follow the Comprehensive Suresolv Geometry Guide with all articles. SSC CGL Tier II Highlights Guidelines, question sets and solution sets specially tailored for the SSC CGL Tier II test. Students should refer to the tutorials for the required base concepts and only then take the tests. The solution sets provide conceptual, analytical, efficient solutions. After taking a question set, students need to go through the corresponding solution set with focused attention for effective and assured improvement in problem-solving skill.
2020-09-19 07:05:20
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8099493384361267, "perplexity": 6517.018241800159}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400190270.10/warc/CC-MAIN-20200919044311-20200919074311-00382.warc.gz"}
https://orinanobworld.blogspot.com/2020/09/installing-rcplex-and-cplexapi.html
Thursday, September 3, 2020 Installing Rcplex and cplexAPI I've previously mentioned solving MIP models in R, using CPLEX. In one post [1], I used the OMPR package, which provides a domain specific language for model construction. OMPR uses the ROI package, and in particular the ROI.plugin.cplex package, to communicate with CPLEX. That, in turn, uses the Rcplex package. In another post [2], I used Rcplex directly. Meanwhile, there is still another package, cplexAPI, that provides a low-level API to CPLEX. Both Rcplex and cplexAPI will install against CPLEX Studio 12.8 and earlier, but neither one installs with CPLEX Studio 12.9 or 12.10. Fortunately, IBM's Daniel Junglas was able to hack solutions for both of them. I'll spell out the steps I used to get Rcplex working with CPLEX 12.10. You can find the solutions for both in the responses to this question on the IBM Decision Optimization community site. Version information for what follows is: Linux Mint 19.3; CPLEX Studio 12.10; R 3.6.3; and Rcplex 0.3-3. Hopefully essentially the same hack works with Windows. 1. Download Rcplex_0.3-3.tar.gz, put it someplace harmless (the Downloads folder in my case, but /tmp would be fine) and expand it, producing a folder named Rcplex. 2. Go to the Rcplex folder and open the 'configure' file in a text editor (one you would use for plain text files). 3. Line 1548 should read as follows: CPLEX_LIBS="-L${CPLEXLIBDIR} `${AWK} 'BEGIN {FS = " = "} /^CLNFLAGS/ {print $2}' ${CPLEX_MAKEFILE}`" . Replace it with CPLEX_LIBS="-L${CPLEXLIBDIR} `${AWK} 'BEGIN {FS = " = "} /^CLNFLAGS/ {print $2}' ${CPLEX_MAKEFILE} | sed -e 's,\\$(CPLEXLIB),cplex,'`" . Save the modified file. 4. Open a terminal in the parent directory of the Rcplex folder and run the following command: R CMD INSTALL --configure-args="--with-cplex-dir=.../CPLEX_Studio1210/cplex/" ./Rcplex . Adjust the file path (particularly the ...) so that it points to the 'cplex' directory in your CPLEX Studio installation (the one that has subdirectories named "bin", "examples", "include" etc.). 5. Assuming there were no error messages during installation, you should be good to go.
2020-09-23 16:46:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4456786513328552, "perplexity": 6719.765086360459}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400211096.40/warc/CC-MAIN-20200923144247-20200923174247-00733.warc.gz"}
https://github.com/vsbuffalo/scythe/blob/872a54c996a1a9f5b3f5210d92bcbd8d7efcaa04/paper/chngpage.sty
# vsbuffalo/scythe
2016-05-02 12:51:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9850748181343079, "perplexity": 5077.826372525558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461863599979.27/warc/CC-MAIN-20160428171319-00195-ip-10-239-7-51.ec2.internal.warc.gz"}
http://blog.codefights.com/tag/interview-question/
### Tag: interview question
## Interview Basics: Static vs. Dynamic Arrays
Arrays are one of the most basic data structures in computer science. But they form the basis of some difficult technical interview questions! So no matter where you are in your programming career, you need to be very familiar with arrays and how to use them. Read on to review the basics of static and dynamic arrays, then watch our video to get clear on the differences between them. And once you're ready to solve some array-based technical interview questions on your own, head to Interview Practice. You'll be able to practice solving array interview questions that have been asked at Google, LinkedIn, Apple, Uber, and Palantir.
## First Things First
Whatever programming language you choose to use in an interview, make sure you know its array methods very well. Like, forwards and backwards well. Interviewers will definitely be judging you on this! If you don't know something basic like this in the language you've chosen to interview with, they'll question how well you know that language at all… Not to mention how well you can program, period. So spend some time with your language's documentation to make sure that you've got a handle on its basic and advanced array methods.
## Arrays
The simplest definition of an array is that it's a data structure that contains a group of elements. In interviews, you'll get a lot of questions about static arrays, which are the most basic implementation of this data structure. You'll also get questions about dynamic arrays. We're going to focus on static and dynamic arrays in this article. (You'll also get questions about multidimensional arrays in technical interviews, which we're going to cover in an upcoming article.)
## Static Arrays
The most basic implementation of an array is a static array. A static array is a random-access data structure with a fixed size. You have to declare its size when you first initialize it. To add or remove elements from a static array, you have to create a new appropriately-sized contiguous chunk of memory. Then you have to copy the elements that you want to keep into this new array. The good: The item lookup by index for static arrays is really quick (O(1)), and they have a very low memory footprint. The bad: Adding elements to or deleting elements from the array is an O(n) operation, where n is the number of elements in the array. Searching the array for a particular element is also O(n). Arrays don't allow for the quick rearrangement of their elements.
## Dynamic Arrays
A dynamic array is a random-access, variable-sized list data structure that elements can be added to or removed from. They're basically like static arrays, but with the addition of space that's reserved for adding more elements. When dealing with a dynamic array, there are two important factors to consider. There's the logical size of the array (the number of elements being used by the array's contents) and its physical size (the size of the underlying array). The physical size is counted in either the number of elements the array has room for, or the number of bytes used. In a technical interview, you should use a dynamic array if you know you'll need to add or delete information. The good: On average, it's quick to insert elements at the end of the dynamic array. And item lookup by index is O(1). The bad: They don't allow for the quick rearrangement of their elements.
And they have inconsistent runtimes for adding elements to the array, so they're not good for real-time systems.
## Diving Deeper
Now that you're clear on the basics of static and dynamic arrays, watch this video for a deeper dive on the differences between the two.
## Bonus Array Joke
1. Why did the programmer leave his job? 2. Because he didn't get arrays. (Get it? Arrays, "a raise"! Stop groaning. It's a great joke.)
## Interview Practice: Graphs, Advanced Trees & RegEx
We've just added three brand-new computer science topics to Interview Practice! Get ready to dive deep on Graphs, Trees: Advanced, and RegEx. We've added these topics to our Extra Credit learning plan, which covers all of the topics in Interview Practice. Why are these topics so important to know for technical interviews? Read on for a brief introduction to each concept!
### Graphs
A graph is an abstract data structure composed of nodes and the edges between nodes. Graphs are a useful way of demonstrating the relationship between different objects. For instance, did you know that you can represent social networks as graphs? Or that the 6 Degrees of Kevin Bacon game can be modeled as a graph problem? Graph questions are really common in technical interviews. In some cases, the question will be explicitly about graphs, but in other cases the connection is more subtle. Read our tutorial to get up to speed on this topic and to learn how to identify this kind of question. Then practice your skills on graph questions from real technical interviews!
### Trees: Advanced
A tree is a data structure composed of parent and child nodes. Each node contains a value and a list of references to its child nodes. Tree traversal and tree implementation problems come up a lot in technical interviews. Common use-cases for an interview are: needing to store and do searches on data that is sorted in some way; needing to manage objects that are grouped by an attribute (think computer file systems); or implementing a search strategy like backtracking. You need to be very familiar with how to deal with these kinds of questions! (The tasks in Trees: Advanced ramp up in difficulty from the ones you get in the Trees: Basic category, so make sure you finish those questions before moving on to these ones!)
### RegEx
A regex is a string that encodes a pattern to be searched for (and perhaps replaced). Regexes let you find patterns of text, validate data inputs, and do text replacement operations. A well-written regex can make it easier to solve really tricky interview questions like "Find all of the 10-digit phone numbers in a block of text, where the area code might or might not be surrounded by parentheses and there might or might not be either a space or a dash between the first and second number groupings" (a sketch of one such regex appears at the end of this post). While the specifics of how to implement a regex can vary between languages, the basics are pretty much the same. In the topic tutorial, we cover regex character classes, quantifiers, anchors, and modifiers and how to use them to write a good regex.
### Start now
These topics might not get asked in every interview, but they're important to know! Read the tutorials about each concept, then solve the real interview tasks to practice your skills and solidify your understanding of the topic. (Learn more about how we've updated the Interview Practice experience to make it an even better practice and preparation tool.)
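Picking up the phone-number question quoted in the RegEx section above, here is a minimal Python sketch of one such regex (the exact pattern is an illustration of mine, not CodeFights' reference solution):

```python
import re

# One way to match the 10-digit phone numbers described above:
# area code optionally in parentheses, with an optional space or dash
# between the number groups.
PHONE = re.compile(r"""
    \(?\d{3}\)?      # area code, with or without parentheses
    [\s-]?           # optional space or dash
    \d{3}            # exchange
    [\s-]?           # optional space or dash
    \d{4}            # line number
""", re.VERBOSE)

text = "Call (555) 123-4567 or 555-123-4567; 5551234567 also works."
print(PHONE.findall(text))
# -> ['(555) 123-4567', '555-123-4567', '5551234567']
```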
If you're signed up for the Extra Credit learning plan, these topics have been added to your Interview Practice page already. If you're signed up for a different learning plan, you can switch over to Extra Credit. Or you can sign up for a customizable Freestyle plan and add these topics!
## Do you need to prepare for technical interviews?
If you're already working as a software engineer, you might think that you don't need to do any preparation for your next technical interview. Maybe you write C++ that's pure poetry, or perhaps your SQL queries are so efficient that they make grown men weep. So when you're looking for a new job, it's easy to fall into the trap of assuming that you're ready for interviews right away – no prep needed. But are you? Sidebar: If you're not working as a software engineer yet, don't stop reading! A lot of this applies to you too. And we're going to be posting another article about preparing for coding interviews specifically for you very soon. Stay tuned! Think back to your last interview experience. What kind of questions did you get asked? Some of the questions might have been pretty straightforward, aimed at evaluating how well you could do the task at hand. And if that task was something you were already pretty comfortable doing, you probably didn't have too much trouble getting it done. But chances are good that you also got some pretty esoteric or challenging questions. Questions that were more about testing whether you remembered how to implement certain algorithms or data structures… potentially ones that you hadn't touched since you were in school.
If you know that a company is likely to ask you questions about tree traversal, you can start working on tree traversal interview questions to prepare!

It's really important to be honest with yourself about the current state of your skills and knowledge. For example, you might have been a dynamic programming expert in college. But if you've been a front-end developer working strictly in HTML, CSS, and JavaScript for the past few years, you probably need a refresher.

### Finding time

One common issue that we hear from professionals who are starting to look for new programming jobs is that they don't have time to practice technical interview questions. It's true that adding yet another commitment on top of your job and your real life can be daunting. Once you've determined what it is that you need to study, you'll need to carve out time to make that happen.

This is going to look different for everyone. Do you actually have an interview in three weeks? Then that's your timeline. If you're just at the contemplation stage and don't have any interviews lined up yet, then your timeline might be more in the one to two month range. In general, though, it's best to keep your timeline fairly short. Having a longer timeline means that you risk losing focus and drive.

Now that you know your timeline and what you need to study, it's time to set your routine. A routine benefits most people because it becomes a built-in framework to adhere to, which in turn creates accountability. There are countless different schools of thought about what constitutes an effective routine, but they all have one thing in common: consistency. You have to practice consistently in order to see the benefits from it. For most people, at least an hour a day is ideal. If your timeline is short, try to spend more time daily. You may have to scale back on some other commitments while you're in interview preparation mode!

#### Stick to it!

Once you've got a routine that works for you, stick to it. This is the hard part, because it usually involves scaling back on other, more fun parts of your life. But stick to your guns and protect the time you've set aside for practice. Remember, this isn't a forever commitment! Once you've reached your goal, you can ease off on the interview preparation and get back to whatever it was that you had to scale back on to find the time, whether it's watching Friends reruns or running marathons.

### Practice pays off

We know you're a good engineer. You know you're a good engineer! But technical interviews require different skills – and like any other skill, you have to work to get better. Actually writing code that solves real technical interview questions makes you more comfortable with the process. We can't emphasize this enough: The absolute best way to ensure that you're good at interviewing is to practice solving coding interview problems!

Now go get 'em, tiger. You're going to knock that interviewer's socks off!

### On the job hunt? Read these articles too:

Resumes. Not fun, right? But in a lot of cases, they're a necessary part of the job search process. Read Make Your Engineering Resume Stand Out to find out how to write a resume that really highlights your programming skills and experiences and makes you stand out from the crowd of applicants.

Once you're on a company's radar, there are still a few steps before you make it to the in-person technical interview! First, you have to get past the recruiter phone screen. Read Ace Your Phone Screen By Telling Your Story, Pt. 1 to learn how to create a personal elevator pitch that resonates with recruiters.
Then check out Ace Your Phone Screen By Telling Your Story, Pt. 2 for tips on how to wow the recruiter during the phone screen itself.

## Tell us…

What's your take on preparing for interviews? If you do prepare (and we hope you do), what does your process look like? Let us know over on the CodeFights forum!

## CodeFights Solves It: chessQueen

Our latest Interview Practice Task of the Week was chessQueen, which has been asked in technical interviews at Adobe. This question might seem fairly easy at first: Given the location of a queen piece on a standard 8 × 8 chess board, which spaces would be safe from being attacked by the queen? But as with any "easy" technical interview question, it's imperative that you think it through fully, watch out for potential traps, and be able to explain your reasoning as you solve it!

This question is a variation on a classic problem, Eight Queens. It's very possible that you've encountered Eight Queens or a common variation, nQueens, in computer science classes or interviews in the past. Chess-based problems like this one come up a lot in interviews, and there's a good reason for that! If you think about a chess board, what does it look like? A grid. And chess pieces follow specific rules, making chess scenarios a great framework for questions that test your basic problem-solving and implementation skills.

If you haven't solved chessQueen yet, now's the time! Once you've written your own solution, head back here. I'll discuss my solution, and why I chose it over other possible solutions to the problem.

## The technical interview problem

chessQueen gives us the location of the queen in standard chess notation (e.g. "d4"). Our solution needs to return a list of all the locations on the board that are safe from the queen – every square the queen could move to on her next move is not safe. In this example, we'd need to have our function chessQueen("d4") return the following list:

["a2", "a3", "a5", "a6", "a8",
 "b1", "b3", "b5", "b7", "b8",
 "c1", "c2", "c6", "c7", "c8",
 "e1", "e2", "e6", "e7", "e8",
 "f1", "f3", "f5", "f7", "f8",
 "g2", "g3", "g5", "g6", "g8",
 "h1", "h2", "h3", "h5", "h6", "h7"]

Keep in mind that the output above has been formatted nicely to help us visualize the solution. The native output would put line breaks at locations dictated by the size of the window.

## What can the queen attack?

Remember that in chess, a queen can move any number of squares vertically, horizontally, or diagonally. The rows are already numbered, so let's number the columns as well. Internally, we can think of column A as 0, column B as 1, etc. We will call the queen's location (qx, qy). Then the queen can take any location (x, y) that is:

1. Anything in the same row (i.e. qy == y)
2. Anything in the same column (i.e. qx == x)
3. Anything on the up-and-right diagonal from the queen's location (i.e. the slope 1 line through the queen's location: qx - qy == x - y)
4. Anything on the down-and-right diagonal from the queen's location (i.e. the slope -1 line through the queen's location: qx + qy == x + y)

Any other location on the board is safe from the queen.

## Our strategy

We have to find and return all the squares that are safe from the queen. If we think of the chess board as N × N, we see that there are N squares excluded from the row, N excluded from the column, and at most N excluded from each of the diagonals. We are still expected to return O(N^2) elements, so we know that we aren't going to find a solution shorter than O(N^2).
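Here's a sketch along those lines – my own reconstruction, not necessarily the exact original code. It maps column letters to indices with a hard-coded string "abcdefgh" and uses the continue keyword to skip attacked squares:

```python
def chessQueen(q):
    cols = "abcdefgh"                         # column letter -> index via its position
    qx, qy = cols.index(q[0]), int(q[1])      # queen's column index and row number

    safe = []
    for x, col in enumerate(cols):
        if x == qx:
            continue                          # the whole column is under attack
        for y in range(1, 9):
            # same row, or on one of the two diagonals?
            if y == qy or qx - qy == x - y or qx + qy == x + y:
                continue
            safe.append(col + str(y))
    return safe
```

Calling chessQueen("d4") returns the 36 safe squares listed above.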
The creative parts of this solution will be how you choose to map between the column letters and the numbers – basically, how you do the arithmetic! My choice above was to use the hard-coded string "abcdefgh", where the position in the string maps to the index. I'll mention some alternatives below, as well as their advantages and disadvantages.

### Other approaches to this coding challenge

My approach to translating between the column names and the numbers won't be everyone's favorite way. My approach to listing the elements isn't elegant, but it gets the job done! And a lot of times, that's exactly what an interviewer needs to see. Here are some alternatives, and why I didn't choose them:

• Using the built-in string library

We can get rid of the hardcoded string by taking the first 8 characters of the string module's lowercase alphabet, something like `cols = string.lowercase[:8]`. You have to be a little careful that people running different locales will still have the same "first 8 characters" that you have! Note that you can use `cols = list(string.ascii_lowercase[:8])` instead to be safe, but at that point I'd rather be explicit about the letters I'm using.

• Using "ASCII math"

We can find the column numbers by subtracting the character code of 'a' from the letter's code. In Python, ord('a') will return the code for the letter 'a' (97 in ASCII). In C, casting the character to an integer does the same thing. So we can actually write code that is a lot smaller, translating with `x = ord(letter) - ord('a')` directly. Because we aren't relying on the position in a list, and have a direct translation between the column letters and numbers, we're able to remove the column directly, instead of having to use the continue keyword. I avoided this because I get nervous doing encoding math (what about the locales?!), but the Python documentation assures me I'd actually be fine!

• Using a dictionary for the conversion

If I wanted to be fully explicit, I could make a dictionary {'a':0, 'b':1, ..., 'h':7} to encode the values, and a separate dictionary to decode. This is a very flexible approach, but I would worry about what my interviewer might read into me setting up two hash tables for something that really only needed a little arithmetic! This is probably the easiest version to internationalize, so if you have a technical interview at a company that works in many different character sets, you could use this flexible code as a selling point. ("Oh, you want the names of the columns in Chinese? No problem, I just have to change the keys in this dictionary…")

## Tell us…

As you've seen, this is a challenge that can be solved in a number of different ways! How did you tackle it? How would you describe your reasoning for taking that approach to an interviewer? Let us know over on the CodeFights forum!

## CodeFights Solves It: goodStringsCount

Our Interview Practice challenge this week is goodStringsCount, a combinatorics problem. As the name suggests, combinatorics deals with combinations of objects that belong to finite sets, and it's one of those topics that come up a lot in technical interviews. This specific coding problem is from Apple, which makes sense since they're known for asking combinatorics questions in their technical interviews!

If this isn't your first CodeFights Solves It rodeo, I bet you know what I'm going to ask next: Have you solved goodStringsCount yet? If not, hop to it! Once you've got your solution, come back and we'll walk through the problem together.

…Done? Great! Let's get into it.
## The technical interview problem

For this programming interview question, our job is to write a function that counts "good strings" of length n. A "good string" is defined as a string that satisfies these three properties:

1. The strings only contain the (English) lowercase letters a – z.
2. Each character in the string is unique.
3. Exactly one letter in the string is lexicographically greater than the one before it.

The first two conditions tell us that a good string is made of distinct lowercase letters. The third condition is the interesting one: for example, there are no good strings of length 1, because the single letter doesn't have anything to be greater than! In other words, goodStringsCount(1) should return 0.

## Looking at good strings of length 3

Strings of length 2 are actually a little misleading, because they don't really allow us to dig into the third condition on good strings. It just tells us that the characters have to appear in ascending order, as the second character has to be greater than the first. So let's start by looking for good strings of length 3 instead!

The third condition tells us that "bfg" is not a good string, because all the letters appear in ascending order – both 'f' and 'g' are greater than the letter before them, so there are two such letters instead of exactly one. But "bgf" is a good string since:

• looking at the first two characters: 'b' < 'g' (so there is a letter greater than the one that precedes it)
• looking at the next two characters: 'g' > 'f' (so there is only one letter greater than the one that precedes it)

From the letters b, f, and g there are six possible strings: "bfg", "bgf", "fbg", "fgb", "gbf", "gfb". Four of them are good strings – all except "bfg" (two ascents) and "gfb" (no ascents).

There is nothing particularly special about the characters b, f, and g. We could use a, h, and k as well (just replace b with a, f with h, and g with k in the list above). In fact, any three distinct letters would do, since the only property we used was their relative order. So we have:

$\text{(number of good strings of length 3)}=4\times\text{(number of ways of picking 3 letters from 26)}=4\times{{26}\choose{3}}=4\times\frac{26!}{3!\,23!}$

where $n\choose{r}$ is n choose r, the function for counting the number of ways of choosing r elements from n. This is a common function in combinatorics problems, so I've included a discussion of it as a footnote to this article! We'll be using this function a lot, so if you haven't come across it before, skip to the end and read about it now.

## Overall strategy for this problem

After looking at a specific case, we get an idea of the structure of this problem. For finding the good strings of length n:

• We need to find the number of ways of picking n letters, which is 26 choose n.
• Given the numbers 1, 2, …, n, we need to find the number of ways we can arrange them in a "good sequence". A "good sequence" is one in which exactly one number is greater than the one before it. We will call the number of good sequences of 1, 2, …, n good(n).

The idea is that for each set of n letters, we can put them in alphabetical order, and label the letters from 1 to n by their order in this array. Any good string then gives a "good sequence" of 1 to n, and any "good sequence" corresponds to a unique string (from these letters). For example, using n=3 and the letters b, f, g:

• The good sequence [2,3,1] corresponds to the good string "fgb"
• The good string "gbf" corresponds to the good sequence [3,1,2]

The number of good strings of length n is then ${{26}\choose{n}}\times\text{good}(n)$.

## Finding good(n)

Let's call our sequence [ a[1], a[2], ..., a[n] ], where each a[i] is distinct and takes a value from 1 to n, inclusive.
If this is a "good sequence", then there is exactly one element a[j+1] that is greater than the element before it, a[j]. Note that this means that:

• a[1] > a[2] > a[3] > … > a[j], i.e. the elements before a[j+1] make a decreasing sequence
• a[j+1] > a[j+2] > a[j+3] > … > a[n], i.e. the elements from a[j+1] onward make a decreasing sequence

For example, the good sequence [4,2,5,3,1] can be broken into the two decreasing sequences [4,2] followed by [5,3,1]. In the notation above, the value of j in [4,2,5,3,1] is 2, because a[2+1] = 5 is the only element bigger than the previous element.

Given the list of n numbers, there are n-1 choices for a[j]. (You can't use the very last element, because there is no a[j+1].) Another way of seeing this is that we are splitting the array into two decreasing subarrays, and the only places that we can split on are the locations of the commas!

For now, let's pick a particular value of j and ask how many good sequences of 1 through n we can make with this value of j. Given a set of distinct numbers, there is only one way of making them into a decreasing sequence, so really we just need to pick which j numbers go in the first decreasing subarray, and which n-j numbers go in the second decreasing subarray. Note that once we pick the j numbers for the first array, the remaining numbers automatically go into the second array, so the number of distinct ways of splitting these numbers up is really just ${{n}\choose{j}}$. (We have counted one problematic sequence in here, which we'll come back to.)

In case you got lost in the n and js, here's an example to help you out. Let's pick n = 5 and j = 2. Instead of counting all the good sequences, we are just counting the ones where the split happens at position 3. So we have: [a[1], a[2]] make a decreasing sequence, and [a[3], a[4], a[5]] make a decreasing sequence. How many sequences like this are there? I need to split 1,2,3,4,5 up into two pieces: two elements (that's j of them) become a[1] and a[2], while the remaining three elements (n-j of them) become a[3] through a[5]. But once I pick a[1] and a[2], whatever is left over has to go into the other array. There are ${{5}\choose{2}}=10$ ways of picking a[1] and a[2]:

• a[1] and a[2] are 1,2: the sequences are [2,1] [5,4,3] -> [2,1,5,4,3]
• a[1] and a[2] are 1,3: the sequences are [3,1] [5,4,2] -> [3,1,5,4,2]
• a[1] and a[2] are 1,4: the sequences are [4,1] [5,3,2] -> [4,1,5,3,2]
• …
• a[1] and a[2] are 3,5: the sequences are [5,3] [4,2,1] -> [5,3,4,2,1]
• a[1] and a[2] are 4,5: the sequences are [5,4] [3,2,1] -> [5,4,3,2,1] (UH OH!)

The last one is problematic, because all the numbers are decreasing! So while there are 10 ways of making the split into two decreasing sequences with n=5 and j=2, there are only 9 good sequences (with n=5 and j=2), because there is one sequence where all the numbers are decreasing.

The example shows us the problem with the way we have counted by splitting into two decreasing arrays: we never made sure that a[j+1] was bigger than a[j]. Fortunately there is only one arrangement that doesn't work – the arrangement [n, n-1, ..., 1] – so we have over-counted by exactly one. The number of good sequences of 1 through n, split into arrays of length j and n-j, is therefore ${{n}\choose{j}}-1$.

To determine good(n), we just need to add up all the different good sequences we can get by putting the cutoff between the two arrays at different locations j.
We get:

$\text{good}(n)=\sum_{j=1}^{n-1}\left[{{n}\choose{j}}-1\right]$

This gets us to a point where we can write a program that will run quickly enough, so if you're totally "mathed out" you can stop here! We can do a little bit better, though. Here are a couple of clever tricks:

• ${{n}\choose0}={{n}\choose{n}}=1$, because there is 1 way of not choosing anything from n objects regardless of n – just don't pick any of them! – and 1 way of choosing all n objects from n objects. So putting j=0 and j=n into $\left[{{n}\choose{j}}-1\right]$ gives zero. Now we can rewrite:

$\text{good}(n)=\sum_{j=1}^{n-1}\left[{{n}\choose{j}}-1\right]=\sum_{j=0}^{n}\left[{{n}\choose{j}}-1\right]=\sum_{j=0}^{n}{{n}\choose{j}}-(n+1)$

• You might remember from Pascal's triangle that

$\sum_{j=0}^{n}{n\choose{j}}=2^n$

One way of seeing this result is: the sum is asking us about the number of ways we can select a subset of n elements, by adding up the number of subsets with 0 elements, the number of subsets with 1 element, et cetera. Since each element is either in a subset or not, there are 2^n different subsets.

After all this work, we get:

$\text{good}(n)=\text{(number of good sequences of }[1,2,3,\ldots,n])=2^n-(n+1)$

## Putting it all together

From our overall strategy, we had ${26\choose{n}}$ ways of picking the n characters, and for each of our choices we had good(n) ways of arranging those characters into good strings. So the number of good strings of length n is:

${26\choose{n}}\times(2^n-n-1)$

## …and now, some code!

The hard part about this problem isn't writing the code, but figuring out how to count efficiently. The code is pretty small in this case – there's a sketch right after the footnote.

## Footnote: Ways of choosing r objects from n objects

How do we get the formula for ${n\choose{r}}$ in the first place? Suppose we have n=6 objects: a flag, a cookie, a laptop, a pen, a cup of coffee, and an orange. Note that we have picked easy-to-distinguish items! We want to select r = 4 of them. How many different sets of items could we come up with?

We have n choices for which item we pick first. We have n-1 choices for the second item, and so on. So it seems like the result is n(n-1)(n-2)(n-3). This becomes a little inconvenient to write for a general r (in this case, we know that r = 4), but notice that:

$n(n-1)(n-2)(n-3)=n(n-1)(n-2)(n-3)\times\frac{(n-4)\ldots(2)(1)}{(n-4)\ldots(2)(1)}=\frac{n!}{(n-4)!}$

This formula over-counts the number of different sets of items we could have, because selecting the laptop, then the coffee, then the orange, and then the pen would give you the same set of items as selecting the coffee, followed by the orange, the pen, and the laptop. In the formula above, we have counted these two arrangements separately! (This is called a permutation of selecting 4 items from n, and is another useful formula to have under your belt.)

How many different orders are there for selecting these 4 items? This is the number of times we have over-counted each set of items we could end up with. We'll have 4 choices for whichever one we could have picked first (laptop, coffee, orange, or pen) without affecting the items we end up with. With the first item selected, we have 3 items to choose from for the second item, 2 choices for the third item, and then the last item is the 1 thing left over. So we can rearrange these 4 items in $4\times3\times2\times1=4!$ ways. The number of combinations for picking r=4 things from n items is:

${n\choose4}=\frac{n!}{(n-4)!4!}$

You can (and should!) run through this argument for a general value of r, and convince yourself that the number of ways of choosing r items from n is:

${n\choose{r}}=\frac{n!}{(n-r)!r!}$
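And here, as promised, is the code – a short sketch of my own that implements the counting formula we derived (not necessarily the article's exact listing):

```python
from math import factorial

def choose(n, r):
    # n choose r, with the convention that an out-of-range r gives 0
    if r < 0 or r > n:
        return 0
    return factorial(n) // (factorial(r) * factorial(n - r))

def goodStringsCount(n):
    # C(26, n) ways to pick the letters, times good(n) = 2^n - (n + 1)
    # good orderings for each choice of letters
    return choose(26, n) * (2 ** n - (n + 1))
```

As a check, goodStringsCount(3) gives 2600 × 4 = 10400, and goodStringsCount(1) gives 0, as expected.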
## Tell us…

How did you solve this combinatorics interview question? Did your solution differ from mine? Let us know over on the CodeFights forum!

## CodeFights Solves It: findSubstrings

The Interview Practice Task of the Week was findSubstrings. This is an Uber interview question, and their technical interviews are notoriously lengthy and difficult. And the solution implements tries, a data structure that you'll see a lot in interview questions. It's a double whammy!

You know the drill by now: If you haven't solved this challenge yet, hop on over to CodeFights and do it! Then come back here and we'll walk through solving it together. We'll start with a naive implementation and then work on optimizing it.

…Done? Okay!

## Technical interview question

The goal of this Uber interview question is to find, for each word in a set of words, the longest substring of that word that appears in a given list parts. You'll return the list of words, with the substrings enclosed in square brackets. If there is a tie for the longest substring, you'll mark the one that appears earliest in the input string.

On a word-by-word basis, if our list is parts = ["An", "a", "men", "ing", "ie", "mel", "me"], then we would indicate substrings in the following way:

• "word" becomes "word" (there is no substring of "word" in parts)
• "anticipating" becomes "anticipat[ing]" (the substrings "a" and "ing" both appear, but "ing" is longer)
• "interest" becomes "interest"
• "metro" becomes "[me]tro"
• "melrose" becomes "[mel]rose"
• "melting" becomes "[mel]ting" ("mel" and "ing" are the same length, but "mel" appears first in the word)
• "ingoltsmelt" becomes "[ing]oltsmelt"

Our function, findSubstrings, should take a list of strings words and another list of strings parts, and return the list of words with the longest substring indicated. For example, with words = ["word", "anticipating", "ingolt", "melting"] and parts defined as before, we would get ["word", "anticipat[ing]", "[ing]olt", "[mel]ting"].

## Naive solution

Let's try approaching this problem directly. This would mean going through our list of words, one at a time, and then checking each substring in parts to see if it appears in our word, keeping track of the longest part that occurs earliest in the string.

We can try running this function on our example, and it works. So we have a running implementation. But is it good enough to pass the CodeFights time tests, or will it take too long to execute? We submit it and… it passes!

(I'm going to add an important caveat here and say that it passes at the time when I wrote this. Our engineers add new tests to challenges fairly frequently, based on user feedback and challenge results, so there's a chance that when you read this my naive solution might not pass anymore. But the explanation below of its run time is still really useful, so don't just skip to the optimized solution!)

### What's the run time?

We should take a moment to think about the running time of this code. If we call W the number of words, and P the number of parts, then we can tell that the running time is at least O(WP) from our nested loop. With a high-level language like Python, we also want to look for hidden loops in function calls. Inside the p loop, we call p in w and w.index(p) multiple times. These functions scan the word w to look for p, and will have a running time of O(len(w)).
We could speed up the function a little bit by calling w.index(p) once and saving the result, but we still have a running time that looks like O(WPN), where N is the length of a typical word.

So how bad is O(WPN)? From the constraints page:

• We have at most 5000 words, so in the worst-case scenario W = 5000;
• Each word is at most 30 characters, so the worst case is N = 30;
• We have at most 5000 substrings in parts, so the worst case is P = 5000.

Putting this together, this gives us 750 million operations. To finish in 4 seconds, each one would have to run in about 5 nanoseconds – fine for a tight loop in C, but well beyond what an interpreted Python loop can manage in the worst-case scenario.

Can we argue that we solved the challenge anyway, and that the CodeFights test showed us that the cases we will be seeing aren't the worst possible cases? Yes, you can absolutely argue that! When companies are hiring, they want people who can write code quickly to solve today's problems. A working solution today is better than a perfect solution in a month! But you should make sure you demonstrate to the interviewer that you know how to think about algorithmic complexity. Microsoft famously had an exponential algorithm for determining which software patches were needed for Windows XP, but it was working fast enough for 10 years(!), and no one realized that the algorithm was exponential.

So you can tell the interviewer that you know this is an algorithm with three multiplicative factors (length of a typical word, number of words, and number of substrings). You should explain that you pass the tests, but that your algorithm won't handle the worst case in reasonable time. Go on to explain that this isn't an exponential algorithm, so getting more powerful computers or waiting a little longer isn't unreasonable. An interviewer may ask you to write a faster algorithm anyway, so if you really want to impress her you can preemptively mention that you think there are better algorithms that you would use if there was a good business case for spending the extra time on this. Remember that premature optimization is the root of all evil. (Maybe that's a little over-dramatic… but it's still bad!)

## Giving it our best Trie

Instead of the naive implementation above, we will use a special type of tree data structure called a trie to store the substrings in parts. Like all trees in programming, a trie has a root node. Every other node keeps track of a letter, and of whether the path to that point completes a substring from parts.

Taking our example of parts = ["An", "a", "men", "ing", "ie", "mel", "me"] from earlier, we can picture the trie as the letters of each string fanning out from the root, with a node marked red whenever the path to it spells out a complete string from parts. There are 7 red letters in the trie, which correspond to the 7 strings we have in parts. Every leaf is a red letter, but the e below the m is also red, to indicate that me is in parts.

To check if a string s is in parts, we start at the root and move to the first level in the trie that matches the first letter of s. If we are trying to match me, for example, we would start by moving from the root to m, and then from m to e. Since we end up on a red node, we know that our word is on the list. To see if in is a member of parts, we would start at the root and follow the path to i, then to n. Since n is not red, we haven't completed a string from parts, so we would conclude that in is not a member of parts. If we wanted to see whether dog was in parts, we would see there is no path from the root to a d, so we could report right away that dog isn't in parts.
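Here's a sketch of the whole solution we're about to build – my own reconstruction, not necessarily the article's exact listings. It has the TrieNode class, the addWord helper, and the two search functions that the rest of this section walks through:

```python
class TrieNode:
    def __init__(self):
        self.children = {}      # letter -> TrieNode
        self.terminal = False   # True if the path here spells a string in parts
                                # (the "red letters" in the picture)

def addWord(root, word):
    # Start at the root, reusing existing nodes along the path
    # and making new ones as needed.
    node = root
    for letter in word:
        node = node.children.setdefault(letter, TrieNode())
    node.terminal = True

def findSubstringInWord(word, root):
    bestStart, bestLen = 0, 0
    for start in range(len(word)):
        node, length = root, 0
        for ch in word[start:]:
            if ch not in node.children:
                break
            node = node.children[ch]
            length += 1
            # Only a strictly longer match wins, which automatically
            # encodes the "earliest position breaks ties" rule.
            if node.terminal and length > bestLen:
                bestStart, bestLen = start, length
    if bestLen == 0:
        return word
    end = bestStart + bestLen
    return word[:bestStart] + "[" + word[bestStart:end] + "]" + word[end:]

def findSubstrings(words, parts):
    root = TrieNode()
    for part in parts:
        addWord(root, part)
    return [findSubstringInWord(word, root) for word in words]
```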
We will be scanning a word character-by-character, and instead of having to search through all of parts, we only have to search through the subset of parts that matches what we have already seen. You have seen this data structure in action before – in fact, you see it every time auto-complete kicks in while you're browsing or texting!

### Implementing it in code

Now we have to translate our picture into code. We make a class TrieNode to store all the information in a node – that's the TrieNode class in the sketch above. To create a trie with the words "cat", "cam", "cha", and "chat", we could wire the nodes together by hand, but that code gets tedious fast.

So let's make a function that adds a word to the trie, so that we don't have to do it manually. Our method will be to start at the root, reusing existing nodes along our path (or making new nodes as needed). That's the addWord function in the sketch. Using this function, creating the trie is condensed down to a loop over the strings in parts.

## Using the trie

I am going to move most of the hard work – that is, finding the longest substring and enclosing it in square brackets – to a function findSubstringInWord(word, root). The function findSubstrings(words, parts) will be responsible for making the trie and collecting all the words together at the end. Here is our strategy for findSubstringInWord:

1. Initialize the length of the longest substring seen so far to 0.
2. Go through word one character at a time, using that character as a starting point for matching substrings in parts.
3. For each character, use the trie to find the longest substring from parts starting from this character. If it is longer than the longest one we've seen so far, record the current position in the string and the current length.
4. Use the length of the longest substring seen and its starting position to put the square brackets in the right place.

Note how going through the string in order, and only updating the position when we see a strictly longer string, automatically encodes our tie-breaking condition (yay!). The trickiest part is finding the longest substring from parts starting at a given location, because we need to know the last terminal node we passed through before failing to find a match on the trie. It would be a lot easier if we just had to keep track of the last node we were on before we failed to match.

The function findSubstrings is simple by comparison! Note that it uses our earlier method of constructing the trie directly from parts.

## Running time for the trie

Our algorithm still loops through each word. The constraint that we haven't used yet is the length of each element in parts. The worst-case scenario is having to check 5 levels at each position in each word. In the worst case, we have:

• The number of words, W = 5000;
• The number of letters in a word, N = 30;
• The number of levels to check in the trie, P = 5.

That's 750,000 operations – smaller than the naive implementation by a factor of 1000.

#### The final code!

Our code was spread over a few functions; collected together, it's the sketch shown earlier. We did it! High fives for everyone!

## Tell us…

How did you solve this Uber interview question? Have you ever seen one like it in a coding interview? Let us know on the CodeFights user forum!

## CodeFights Solves It: isTreeSymmetric

The most recent Interview Practice Challenge of the Week was isTreeSymmetric. This interview question from LinkedIn and Microsoft is checking to see how comfortable you are with the tree data structure. Trees have a wide variety of applications, so your interviewer is going to make sure that you know your stuff!
Have you solved it yet? If not, head over to CodeFights and give it a shot. Once you have a solution, come back here and I'll walk you through how to answer this interview question. (Need to review first? Check out the CodeFights Explainer article about trees.)

## The technical interview question

Done? Okay, let's get started! Our job is to determine if a (binary) tree is a mirror image of itself. That is, we want to know if the left side mirrors the right side.

For an example of a tree that is not symmetric, consider a root whose two children both hold the value 2, where each 2 has the children 3 and 4 in the same left-to-right order. This tree is the same on the left and the right: starting at each 2, you get exactly the same subtree. But it's not a mirror image, because the left and right sides — the 3 and 4 — don't swap positions.

We are going to implement a tree in Python with the usual node structure: each node stores a value and references to its left and right children. (A minimal version appears in the sketch below.)

## Subtrees and recursion

A subtree is what we get by starting at any node and treating it as the root of its own tree. Every node of a tree sits at the top of its own, smaller subtree.

### A simpler question: Are left and right sides equal?

Let's start with a slightly simpler problem. We'll ask if the left-hand side of the tree is equal to the right-hand side of the tree, and we'll assume that we have at least three nodes. With these assumptions, this is the same as asking whether the subtree under the first left node is equal to the subtree under the first right node. Since subtrees are trees in their own right, what we would like is a function areTreesEqual(treeLeft, treeRight).

To figure out how to determine if two trees tree1 and tree2 are equal, we'll turn to recursion. Suppose that each tree has its first two levels full (i.e. we have the root and both a left and right branch on level 1). Then we can think of tree1 as a root value A1 with a left subtree L1 and a right subtree R1, and tree2 as a root value A2 with subtrees L2 and R2. tree1 and tree2 are equal if and only if all of the following are true:

1. A1 and A2 are equal;
2. The subtree L1 is equal to the subtree L2 (determined by calling areTreesEqual(tree1.left, tree2.left));
3. The subtree R1 is equal to the subtree R2 (determined by calling areTreesEqual(tree1.right, tree2.right)).

Now we need to work out the base case, so that the recursion will terminate. We assumed above that the first two levels would be full. This certainly isn't true for leaf nodes, and even close to the root we may not have two branches. Here are the cases we need to consider that don't fall into the recursion above:

• Both trees are at a leaf (i.e. no children on either side). Just check to see if the values of the nodes are the same; if they are, then these two subtrees are equal.
• The same side (left or right) is missing from both nodes. Just check the side that is present. If different sides are missing (e.g. tree1 has no left side, but tree2 has no right side), then the trees are not equal.
• The left side or right side is missing from only one node. Then the two trees are automatically not equal.

We now have enough information to write our function areTreesEqual. A function areBranchesEqual, which tests if the two branches of a tree are equal, could then be written to first check if we have a root node, then use areTreesEqual on the two subtrees root.left and root.right.

### The original problem: isTreeSymmetric?

The big difference between isTreeSymmetric and areBranchesEqual is implementing the mirroring: when we investigate the left-hand path of a node on one side, we have to investigate the right-hand path on the other.
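Here's one compact way to write that mirrored recursion, along with a minimal Tree class – a sketch of my own, using None checks in place of the case-by-case analysis above:

```python
class Tree:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def areMirrors(t1, t2):
    # Mirrored recursion: compare t1's left subtree against t2's right
    # subtree, and t1's right against t2's left.
    if t1 is None and t2 is None:
        return True
    if t1 is None or t2 is None:
        return False
    return (t1.value == t2.value
            and areMirrors(t1.left, t2.right)
            and areMirrors(t1.right, t2.left))

def isTreeSymmetric(root):
    return root is None or areMirrors(root.left, root.right)
```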
## The problem with recursion

So far we have a really neat and elegant solution for this interview question. It passes all of the tests on CodeFights, including the hidden ones. A lot of tree problems lend themselves really well to being solved with recursion. However, in a technical interview, the interviewer might point out that the constraints of the problem told us that we were guaranteed the tree would be smaller than 5 * 10^4 values. Could we trigger a maximum recursion depth error in Python? (In other languages, the general term for this is to "run out of stack space".)

Our algorithm is a depth-first search check, so if we run into problems it will be on very deep trees. Whenever we move down a level, before calling the recursive step we check if the current left side and right side are equal. So our worst-case scenario is a symmetric tree that is essentially two long chains hanging from the root: with 5 * 10^4 values arranged that way, the depth of the tree is 24999, and the (default) maximum recursion depth in Python is only 1000.

Thankfully CodeFights didn't give us this "torture test" for our algorithm! But how would we deal with it if our interviewer asked us to solve these trees? The solution is to convert the recursive algorithm into an iterative one. Phew! Though the recursive version is a lot easier to read, it's a good idea to practice rewriting recursive algorithms as iterative ones, since there are limitations on how many successive recursive calls you can make before returning.

## Tell us…

How did you solve this Interview Practice challenge? Have you ever encountered it (or one like it!) in an interview? Let us know over on the forum!

## CodeFights Solves It, Interview Practice Edition: groupsOfAnagrams

This week's featured Interview Practice challenge was groupsOfAnagrams. It's a Twitter interview question, and they're known for their difficult technical interviews! If you haven't solved it yet, go ahead and do that first, then head back here and we'll discuss how to get to a solution with a good runtime.

…Done? Okay, let's dig into this coding challenge!

## The technical interview challenge

You are given a list of n words. Two words are grouped together if they are anagrams of each other. For this interview question, our task is to determine how many groups of anagrams there are. For example, if the list is words = [listen, admirer, tea, eta, eat, ate, silent, tinsel, enlist, codefights, married], we should have groupsOfAnagrams(words) return 4, because there are 4 groups:

• listen, silent, tinsel, enlist are all anagrams of each other.
• tea, eta, eat, ate are all anagrams of each other.
• married and admirer are anagrams of each other.
• codefights is not an anagram of anything in words except itself.

In the challenge, we are guaranteed that each word only consists of lowercase letters, so we don't need to worry about the rules regarding spaces and punctuation. Each word is guaranteed to be at most 10 letters. The hard part is that we have to do this on lists containing up to 10^5 words!

## First attempt: Determining if two words are anagrams

There are a few standard ways to determine whether or not words are anagrams of each other. One way is to go through each letter of the alphabet and check that it appears the same number of times in each word. What is the runtime of this algorithm? Suppose a typical word has L characters.
In calling word.count(char), we are asking the computer to scan word to count the number of times the character char appears. This is an O(L) operation. Technically, this still makes areAnagrams an O(L) algorithm: we repeat the count once for each letter of the alphabet, and the alphabet's size doesn't depend on the size of the word. However, big O notation suppresses the really bad constants hiding in the runtime! Consider that our words have at most 10 letters, and some may be repeats. That means that of our 26 iterations of the loop, at least 16 of them are wasted looking for letters that don't appear. If we call the number of letters in the alphabet A, then the runtime of this algorithm is O(AL).

Interview Tip: Asymptotic, or "big O", notation misses constants. Analysis such as treating the size of the alphabet as a parameter is useful for estimating the size of those constants. This is particularly true in this case, where L is guaranteed to be no bigger than the size of the alphabet A. If the interviewer questions why you're doing this (after all, English is almost certainly going to have 26 lowercase letters for our lifetimes!), you have the chance to impress them by saying that you're considering what happens if punctuation is allowed, or if the company decides to look at large unicode character sets. This shows that you're forward thinking, and have design on your mind – two things that it's important to emphasize in technical interviews. Basically, if using a constraint in the problem helps you significantly simplify the problem, feel free to use it. And if you need time while you're working on your solution, or are doing something odd like treating the number of letters in the alphabet as a variable, you can always defend your move as considering the design implications of the constraint!

We can reduce the runtime of our solution significantly by going through each letter in word1 one at a time – at most 10 letters, rather than 26 – and tallying the letters as we see them. The code for that version is longer, and it's still O(L), but it is no longer O(AL). Note that it also no longer assumes the strings are only made of lowercase letters!

## Approach 1: Works, but too slow…

We have an anagram checker that will tell us if any pair of words are anagrams. The direct approach is to repeatedly take the first remaining word, scan the rest of the list for its anagrams, and remove the whole group. What is the runtime here? We run through the loop O(N) times, where N is the number of words. If we are lucky, we remove a lot of words early. However, if each word is not an anagram of any other word, the while loop runs exactly N times. Inside the loop, we scan the remaining words for anagrams. So this is an O(N^2) algorithm. Since we can have up to 10^5 words in a list, O(N^2) probably won't work. Sure enough, this code doesn't pass the tests on CodeFights – and it's not an answer that will knock the interviewer's socks off during a technical interview. We can do better!

## Approach 2: Precomputing invariants

We are going to find an invariant that all the words that are anagrams share with each other, but don't share with any words they are not anagrams of. We will use these invariants as keys in a hash table with dummy values. At the end, we will simply count the number of keys in the hash table.

What invariants can we use? A simple one is the number of as, the number of bs, …, etc. Any two words that are anagrams will have the same invariant, and any two words that aren't anagrams will have different invariants. For each word, computing this is an O(AL) process, as we are iterating through both the letters in the word and the letters in the alphabet.
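Here's a sketch of this approach – my own reconstruction of the hash-table version described above, using a tuple of letter counts as the invariant:

```python
def calcInvariant(word):
    # Tuple of counts for each letter a-z: anagrams share this tuple,
    # and non-anagrams never do.
    return tuple(word.count(ch) for ch in "abcdefghijklmnopqrstuvwxyz")

def groupsOfAnagrams(words):
    hash_of_invariants = {}
    for word in words:
        hash_of_invariants[calcInvariant(word)] = 1  # dummy value; only the keys matter
    return len(hash_of_invariants)
```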
The gain comes from the fact that we only calculate the invariant once per word, which is exactly what the sketch above does. Because calcInvariant is O(AL), where L is the number of letters in the word, this algorithm has time complexity O(NAL).

Note that we never compare invariants. Instead, we use the magic of hash tables to set the value 1 on the appropriate entry. If that key already exists, we overwrite the value; if it doesn't exist, a new entry is created. It was the pairwise comparison between words that gave us our problematic O(N^2).

### A cleaner, faster invariant

There is another natural invariant for the words: the string of the word's letters in sorted order, ''.join(sorted(word)) in Python. This has the advantage of being easy to understand, and of not making any assumptions about the underlying alphabet. It uses a sorting algorithm, so it is O(L log L) instead of O(AL). (Technically we can't compare algorithms using big O notation like this, because we don't know the constants involved.) The sorted string can then be used as the invariant. In fact, the code for groupsOfAnagrams doesn't have to change at all!

This is faster because sorting small words is much faster than building our O(AL) table of counts. If the words were long (e.g. we were comparing novels), then compressing the string down to the number of times each character is repeated would win. When accessing the hash table hash_of_invariants, we need to use the hash function, which depends on the length of the keys. Since L <= 10, we know that each key is at most 10 characters long, so the sorting invariant is elegant and understandable.

### Dictionaries where we don't care about the value: Sets

In our hash table, we didn't really care about the values assigned to the keys. We were only interested in the number of distinct keys we had, as that told us how many groups of anagrams we had. There is nothing wrong with this, except that someone reading your code may wonder what the significance of the 1 is in a line like hash_of_invariants[calcInvariant(word)] = 1. The comments certainly help. But Python has a built-in data structure for this case called set. If we made a set of invariants instead, we would simply add each invariant to the set and return the size of the set at the end. There is no visible dummy value that might confuse future maintainers of your code.

Under the hood, the Python set is implemented as a dictionary (hash table) with hidden dummy values, plus a few optimizations, so the real reason for doing this is readability. Because hash tables are used more frequently than sets, there is a valid argument that the hash table is more readable for more programmers.

## The final solution

Here is our final, compact solution – sorted words as keys in a set, which in Python comes down to a single line: return len(set(''.join(sorted(word)) for word in words)). Any interviewer would be happy to see this solution during a technical interview.

## Timing all the methods

In an interview, you always have to consider the runtime. As you could see, a lot of the effort in creating a solution for this interview question was finding an algorithm that was quick enough! I timed the algorithms for two cases:

• Our first example, words = [listen, admirer, tea, eta, eat, ate, silent, tinsel, enlist, codefights, married]
• A pathological example, where each word was its own group of 10 letters, starting with abcdefghij, abcdefghik, …, up to abgilnoz (this is a total of 10^5 words)

I didn't include the raw times, as that tells you more about my computer than about the algorithm. Instead I normalized to the time for Method D on the list words (i.e. Method D has a time of 1 by definition).
| Method | Description | Time for words (compared to D on words) | Time for 10^5 words (compared to D on words) |
| --- | --- | --- | --- |
| A | O(N^2) method | 5.4 | Too long to time |
| B | Precalculated tuple invariant for hash table | 6.4 | 70300 |
| C | Sorted words for hash tables | 1.2 | 10450 |
| D | Sorted words for sets | 1.0 | 7000 |

## Tell us…

Did your solution for this Interview Practice challenge look different than mine? How did you approach the problem? Let us know over on the CodeFights forum!

## CodeFights Solves It, Interview Practice Edition: productExceptSelf

If it's been asked as an interview question at Amazon, LinkedIn, Facebook, Microsoft, AND Apple, you know it's got to be a good one! Have you solved the challenge productExceptSelf in Interview Practice yet? If not, go give it a shot. Once you're done, head back here. I'll walk you through a naive solution, a better solution, and even a few ways to optimize.

…Done? Okay, let's get into it!

The object of this problem is to calculate the value of a somewhat contrived function. The function productExceptSelf is given two inputs, an array of numbers nums and a modulus m. It should return the sum of all N terms f(nums, i) modulo m, where:

f(nums, i) = nums[0] * nums[1] * ... * nums[i-1] * nums[i+1] * ... * nums[N-1]

Whew! We can see this most easily with an example. To calculate productExceptSelf([1,2,3,4], 12) we would calculate:

• f([1,2,3,4], 0) = 2*3*4 = 24
• f([1,2,3,4], 1) = 1*3*4 = 12
• f([1,2,3,4], 2) = 1*2*4 = 8
• f([1,2,3,4], 3) = 1*2*3 = 6

The sum of all these numbers is 50, so we should return 50 % 12 = 2.

## A naive solution

The definition of f suggests a direct implementation: calculate each f(nums, i) with its own pass over the array, and add up the results. This is technically correct, but the execution time is bad. Each call to the function f(nums, i) has to do a scan (and multiplication) over the array, so we know the function is O(N). We call f(nums, i) a total of N times, so this approach is O(N^2)! Sure enough, this function computes the right answers, but it hits the execution time limit on test case #16, so we have to find a more efficient solution.

## Division is a better solution (but still not good enough)

A different way of approaching this problem is to find the product of all the numbers, and then divide by the one you are leaving out. We would have to scan to see if any of the numbers were zero first, as we can run into trouble dividing by zero. Essentially, we'd have to deal with that case separately, but it turns out that any array nums with a zero in it is easy to calculate. (This would be a good extension exercise!) If we look at the constraints of the problem, we are told that 1 <= nums[i], so we don't have to worry about this case, and we can simplify our problem to a total product followed by N divisions (there's a sketch below).

Again, we get a time execution error! Note, though, that the running time is much better. We make a pass through the array once to get productAll, then a pass through the array again to get the f_i, and one more pass through the array to do the sum. That makes this an O(N) solution!
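Here's roughly what that division-based version might look like (a sketch of my own, not necessarily the article's exact code):

```python
def productExceptSelf(nums, m):
    # Assumes 1 <= nums[i], per the constraints, so division is safe.
    productAll = 1
    for x in nums:
        productAll *= x           # this number can get enormous...

    total = 0
    for x in nums:
        total += productAll // x  # f(nums, i): the product of everything except x
    return total % m
```

The trouble, as we're about to see, is that productAll itself becomes a huge number, and arithmetic on huge numbers is slow.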
## Why is the interviewer asking this question?

In other words, what is this question testing? As I mentioned in the introduction, the function we're calculating is a little contrived. Because it doesn't seem to have any immediate applicability, the companies asking us this question in interviews are probably looking to see if we know a particular technique or trick. One of the assumptions that I made when calling the algorithms O(N) or O(N^2) was that multiplication is a constant time operation. This is a reasonable assumption for small numbers, but even for a computer there is a significant difference between calculating 456 x 434 and 324834750321321355120958 x 934274724556120.

There are a couple of math properties of residues (the technical name for the "remainders" the moduli give us) that we can use. One is:

(a + b + c) % m is the same as (a % m + b % m + c % m) % m

This is nice because a % m, b % m, and c % m are all small numbers, so adding them is fast. The other property is:

(a * b) % m is the same as ((a % m) * (b % m)) % m

That is, I can multiply the remainders of a and b after division by m, and the result I get will have the correct remainder. At first glance, this doesn't seem to be saving us much time, because we're doing a lot more operations. We are taking the modulus three times per multiplication, instead of just once! But it turns out that the modulus operation is fast, and we more than make up for it by only multiplying small numbers. So we can change our calculation of the f_i to take a modulus after every multiplication.

This still isn't good enough to pass the test, but we're getting there. The problems we still have are:

1. The number productAll is still very large
2. Integer division is (relatively) slow

Our next approach will eliminate both of these problems.

Note: NOT a property. The big number is productAll, so you might hope that we can find productAll % m, and then do the division. This doesn't work. The mathematical problem is that, modulo m, non-zero numbers can be multiplied to give 0, so division is problematic. Looking at division, and then taking a modulus:

(48 / 6) % 12 = 8 % 12 = 8

but reversing the order (taking the modulus, then doing the division) yields:

(48 % 12) / 6 = 0 / 6 = 0

So we can't take the modulus of productAll and avoid big numbers altogether.

## Prefix products (aka cumulative products)

We can speed up the execution by building an array, prefixProduct, so that prefixProduct[i] contains the product of all the elements of nums before index i (not including i itself). We will set prefixProduct[0] = 1. If we also made a suffixProduct, such that suffixProduct[i] was equal to the product of all numbers in nums past index position i, then productExceptSelf for index i would just be the product of all numbers except the ith one, i.e. prefixProduct[i] * suffixProduct[i].

We have eliminated one of the costly operations: division! We can also avoid seeing large numbers in the multiplications, by changing the step inside the loop to contain a modulus. This version finally works! We've eliminated all multiplication by big numbers (but still have multiplications by small numbers), and we have no divisions at all. But we can still do better…

## For the technical interview, an even better solution

It turns out that we don't need to build a suffixProduct array at all. We can build the suffix product as we go! This is the accumulator pattern: sweep through the array from the end, keeping a running product of the elements we've already passed. (There's a sketch right after the takeaways below.)

## Takeaways

• Arithmetic operations aren't always constant time. Multiplying big numbers is much slower than multiplying small numbers.
• Operations are not all the same speed. Integer modulus is very fast, addition and multiplication are fast, while division is (relatively) slow.
• Some number theory: You can multiply the residues of numbers, instead of the numbers themselves. But you cannot divide by residues, or divide the residues, unless you have certain guarantees about divisibility.
• The idea of precomputing certain operations, which is where the prefixProduct comes in.
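Here's a sketch of that final prefix-product-plus-accumulator solution (my own reconstruction; names like prefixProduct follow the discussion above):

```python
def productExceptSelf(nums, m):
    n = len(nums)

    # prefixProduct[i] = product of nums[0..i-1], reduced mod m as we go
    prefixProduct = [1] * (n + 1)
    for i, x in enumerate(nums):
        prefixProduct[i + 1] = (prefixProduct[i] * x) % m

    total = 0
    suffix = 1  # running product of the elements to the right, mod m
    for i in range(n - 1, -1, -1):
        total = (total + prefixProduct[i] * suffix) % m
        suffix = (suffix * nums[i]) % m
    return total
```

Running it on the example gives productExceptSelf([1,2,3,4], 12) == 2, matching the hand calculation.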
Other problems that use the cumulative or prefix techniques are finding the lower and upper quartiles of an array, or finding the equilibrium point of an array. (I cover prefix sums in a lot more detail in this article.)

## Footnote: Horner's Method

One of the solutions presented used a method of calculation known as Horner's method. Take the cubic

f(x) = 2 x^3 + 3 x^2 + 2 x + 6

Evaluating f(3) naively requires 8 multiplications (every power x^n is n copies of x multiplied together, and then they are multiplied by a coefficient) and three additions. There is a lot of wasted calculation here, because when we calculate x^3 we calculate x^2 in the process! We could store the powers of x separately to reduce the number of multiplications. Horner's method is a way of doing this without using additional storage. The idea is that we can use operator precedence to store numbers for us:

3 x^2 + 2 x + 6 = (3 * x + 2) * x + 6

The left side has a (naive) count of 4 multiplications and 2 additions, while the right side has 2 multiplications and 2 additions. Moving to the cubic is even more dramatic:

f(x) = 2 x^3 + 3 x^2 + 2 x + 6 = ((2 * x + 3) * x + 2) * x + 6

This takes our 8 multiplications and 3 additions down to only 3 multiplications and 3 additions!

The shortest solution so far, submitted by CodeFighter k_lee, uses Horner's method, along with taking moduli at the different steps. See if you can decipher it.

## Tell us…

Did your solution for this Interview Practice challenge look different than mine? How did you approach the problem? Let us know over on the CodeFights forum!

## Mastering the Basics for Technical Interviews

It's natural to want to focus on really tricky concepts when you're preparing for interviews. You know you're going to get some really hard problems, and so that's the stuff that you want to practice! But we hear stories all the time about people who prepare for higher-level questions, only to completely blank out when they get questions about the basics. And we definitely don't want that to happen to you!

You absolutely need to be able to answer questions about programming basics quickly and easily, because for most interviewers, this represents the baseline of what you should be able to do. And if you don't perform well, this can automatically put you out of the running, even if you've done well on the rest of the interview.

## The basics

Consider the fizzBuzz conundrum that Imran Ghory and others have written about: a surprising number of seemingly well-qualified applicants are unable to answer even trivial programming questions during technical interviews. An example of this sort of question is the old standby fizzBuzz, which asks the interviewee to write a program that takes a number n and prints out the numbers from 1 to n, replacing multiples of 3 with fizz, multiples of 5 with buzz, and multiples of both 3 and 5 with fizzbuzz. (Go ahead, take a minute and do it. We know you want to.)

While the odds that an interviewer actually asks you to solve fizzBuzz are pretty low, since it's well-trod territory at this point, it's a good example of the level of this type of "basic" question. Questions like this are aimed at making sure that you have a fundamental understanding of how to write code. The interviewer also wants to make sure that you can problem-solve in ways that take test cases and optimization into account.
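If you did take that minute, here's one straightforward Python version to check your answer against (any equivalent loop is fine):

```python
def fizzBuzz(n):
    for i in range(1, n + 1):
        if i % 15 == 0:       # multiple of both 3 and 5
            print("fizzbuzz")
        elif i % 3 == 0:
            print("fizz")
        elif i % 5 == 0:
            print("buzz")
        else:
            print(i)
```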
Since this sort of question is usually asked while you're whiteboarding, interviewers also use it to gauge how you think while you're working through a problem. It's also important that you actually know your favored interviewing language well. Can you write loops, use appropriate methods when they're available to you, and use the right terminology when you're discussing elements of the code you're writing? If not, it's going to show, and the interviewer is going to pick up on it.

## Technical interview topics

What basic things should you be really solid on in order to prepare for technical interviews? They tend to fall into a few basic categories:

• String manipulation (Generate permutations, find substrings, reverse a string, substitute specific letters…)
• Array manipulation or traversal
• Number manipulation
• Pattern matching (If necessary, be ready to write your own regular expression rather than using a regex library)
• Condition matching (Find the largest/smallest/missing element)

Remember, this represents the baseline of what you should know in order to succeed in an interview (not to mention on the job). You'll actually need to know a lot more advanced stuff to ace the interview – and don't worry, Interview Practice has you covered on that front too. But even if you do well on more advanced topics, if you don't wow the interviewer on the simple ones, they're going to question how capable you actually are. So don't neglect the basics! You'll find some great examples of each on Interview Practice to get you started.

## Tell us:

Have you ever encountered a basic programming question in an otherwise hard interview? How did you handle it? Let us know on the CodeFights forum!
2017-11-21 00:32:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 19, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38085877895355225, "perplexity": 784.1329338945686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806309.83/warc/CC-MAIN-20171121002016-20171121022016-00567.warc.gz"}
https://www.physicsforums.com/threads/thermo-attributes-of-ideal-gas-in-3d-harmonic-potential.246523/
# Thermo attributes of ideal gas in 3D harmonic potential

1. Jul 23, 2008

### SonOfOle

1. The problem statement, all variables and given/known data

A classical system of N distinguishable, non-interacting particles of mass m is placed in a 3D harmonic potential, $$U(r) = c \frac{x^2 + y^2 + z^2}{2 V^{2/3}}$$ where V is a volume and c is a constant with units of energy.

(a) Find the partition function and the Helmholtz free energy of the system.

(b) Find the entropy, the internal energy and the total heat capacity at constant volume for the system.

2. Relevant equations

$$Z = \sum_i \exp(-E_i / kT)$$

$$H = U - TS$$

$$f(E) = A \exp(-E/kT)$$

3. The attempt at a solution

Unfortunately, I'm not sure where to start on this one. Anybody able to give me a tip in the right direction?

2. Jul 23, 2008

### Mute

Your system is a classical one, so the appropriate equation for the partition function is

$$Z = \int d\mathbf{r}_1 d\mathbf{r}_2 \dots d\mathbf{r}_N d\mathbf{p}_1 d\mathbf{p}_2 \dots d\mathbf{p}_N \exp\left[-\frac{E(\mathbf{r},\mathbf{p})}{k_BT}\right]$$

The energy is

$$E(\mathbf{r},\mathbf{p}) = \sum_{i=1}^{N}\left[\frac{\mathbf{p}_i^2}{2m} + U(\mathbf{r}_i)\right].$$

This should get you started with that part. For the free energy, instead of using $F = U - TS$, it's easier to use the equation

$$F = -k_BT \ln Z$$

(it's more common to use F than H to denote the Helmholtz free energy, as H is typically used for the enthalpy).

3. Jul 24, 2008

### SonOfOle

Still stuck. How would the integration over the momenta work? That is, what would the $$\mathbf{p}_i^2$$ be?

4. Jul 25, 2008

### Mute

$$\mathbf{p}_i^2 = p_{i,x}^2 + p_{i,y}^2 + p_{i,z}^2$$

Hence, you end up with a sum of squares of terms in the exponential that can all be split into products of exponentials.
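To close the loop on Mute's hint, here is a small SymPy sketch (my own check, not from the thread) that carries out the one-dimensional Gaussian integrals and recovers the equipartition result $U = 3Nk_BT$ (six quadratic degrees of freedom per particle):

```python
# Single-particle partition function for H = p^2/(2m) + c*r^2/(2*V^(2/3)),
# built one Cartesian direction at a time; Z = z1**N for N distinguishable,
# non-interacting particles, with beta = 1/(k_B T).
import sympy as sp

p, x, m, c, V, N, h, beta = sp.symbols('p x m c V N h beta', positive=True)

z_p = sp.integrate(sp.exp(-beta * p**2 / (2 * m)), (p, -sp.oo, sp.oo))
z_x = sp.integrate(sp.exp(-beta * c * x**2 / (2 * V**sp.Rational(2, 3))),
                   (x, -sp.oo, sp.oo))

z1 = (z_p * z_x)**3 / h**3     # three directions, 1/h^3 phase-space measure
Z = z1**N

U = sp.simplify(-sp.diff(sp.log(Z), beta))
print(U)                        # 3*N/beta, i.e. U = 3*N*k_B*T
```

From $F = -k_BT\ln Z$ one can then read off the entropy $S = -\partial F/\partial T$ and the heat capacity $C_V = 3Nk_B$.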
2017-06-22 22:45:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6472832560539246, "perplexity": 420.25357866353073}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128319912.4/warc/CC-MAIN-20170622220117-20170623000117-00455.warc.gz"}
https://www.shaalaa.com/textbook-solutions/c/scert-maharashtra-question-bank-solutions-12th-standard-hsc-mathematics-and-statistics-commerce-maharashtra-state-board-2021-chapter-1.4-applications-of-derivatives_5303
# SCERT Maharashtra Question Bank solutions for 12th Standard HSC Mathematics and Statistics (Commerce) Maharashtra State Board 2021 chapter 1 - Applications of Derivatives [Latest edition]

## Chapter 1: Applications of Derivatives

### SCERT Maharashtra Question Bank solutions for 12th Standard HSC Mathematics and Statistics (Commerce) Maharashtra State Board 2021 Chapter 1 Applications of Derivatives Q.1

#### MCQ [1 Mark]

Q.1 | Q 1 Choose the correct alternative: The slope of the tangent to the curve y = x^3 – x^2 – 1 at the point whose abscissa is – 2, is

• – 8
• 8
• 16
• – 16

Q.1 | Q 2 Choose the correct alternative: Slope of the normal to the curve 2x^2 + 3y^2 = 5 at the point (1, 1) on it is

• -2/3
• 2/3
• 3/2
• -3/2

Q.1 | Q 3 Choose the correct alternative: The function f(x) = x^3 – 3x^2 + 3x – 100, x ∈ R is

• increasing for all x ∈ R, x ≠ 1
• decreasing
• neither increasing nor decreasing
• decreasing for all x ∈ R, x ≠ 1

Q.1 | Q 4 Choose the correct alternative: If the marginal revenue is 28 and elasticity of demand is 3, then the price is

• 24
• 32
• 36
• 42

Q.1 | Q 5 Choose the correct alternative: The price P for the demand D is given as P = 183 + 120D − 3D^2, then the value of D for which price is increasing, is

• D < 60
• D > 60
• D < 20
• D > 20

Q.1 | Q 6 Choose the correct alternative: If the elasticity of the demand η = 1, then demand is

• constant
• inelastic
• unitary elastic
• elastic

Q.1 | Q 7 Choose the correct alternative: If 0 < η < 1, then the demand is

• constant
• inelastic
• unitary elastic
• elastic

### SCERT Maharashtra Question Bank solutions for 12th Standard HSC Mathematics and Statistics (Commerce) Maharashtra State Board 2021 Chapter 1 Applications of Derivatives Q.2

#### Fill in the blanks [1 Mark]

Q.2 | Q 1 The slope of tangent at any point (a, b) is also called as ______

Q.2 | Q 2 If the function f(x) = 7/x - 3, x ∈ R, x ≠ 0 is a decreasing function, then x ∈ ______

Q.2 | Q 3 The slope of the tangent to the curve x = 1/"t", y = "t" - 1/"t", at t = 2 is ______

Q.2 | Q 4 If the average revenue is 45 and elasticity of demand is 5, then marginal revenue is ______

Q.2 | Q 5 The total cost function for production of articles is given as C = 100 + 600x – 3x^2, then the values of x for which the total cost is decreasing is ______

### SCERT Maharashtra Question Bank solutions for 12th Standard HSC Mathematics and Statistics (Commerce) Maharashtra State Board 2021 Chapter 1 Applications of Derivatives Q.3

#### [1 Mark]

Q.3 | Q 1 State whether the following statement is True or False: An absolute maximum must occur at a critical point or at an end point.
• True
• False

Q.3 | Q 2 State whether the following statement is True or False: The function f(x) = 3/x + 10, x ≠ 0 is decreasing

• True
• False

Q.3 | Q 3 State whether the following statement is True or False: The function f(x) = x - 1/x, x ∈ R, x ≠ 0 is increasing

• True
• False

Q.3 | Q 4 State whether the following statement is True or False: The equation of tangent to the curve y = x^2 + 4x + 1 at (– 1, – 2) is 2x – y = 0

• True
• False

Q.3 | Q 5 State whether the following statement is True or False: If the function f(x) = x^2 + 2x – 5 is an increasing function, then x < – 1

• True
• False

Q.3 | Q 6 State whether the following statement is True or False: If the marginal revenue is 50 and the price is ₹ 75, then elasticity of demand is 4

• True
• False

### SCERT Maharashtra Question Bank solutions for 12th Standard HSC Mathematics and Statistics (Commerce) Maharashtra State Board 2021 Chapter 1 Applications of Derivatives Q.4

#### Solve the following: [3 Marks]

Q.4 | Q 1 Find the equations of tangent and normal to the curve y = 3x^2 – x + 1 at the point (1, 3) on it

Q.4 | Q 2 Find the values of x such that f(x) = 2x^3 – 15x^2 + 36x + 1 is an increasing function

Q.4 | Q 3 Find the values of x such that f(x) = 2x^3 – 15x^2 – 144x – 7 is a decreasing function

Q.4 | Q 4 Show that the function f(x) = (x - 2)/(x + 1), x ≠ – 1 is increasing

Q.4 | Q 5 Divide the number 20 into two parts such that their product is maximum

Q.4 | Q 6. (i) If the demand function is D = 50 – 3p – p^2, find the elasticity of demand at p = 5 and comment on the result

Q.4 | Q 6. (ii) If the demand function is D = 50 – 3p – p^2, find the elasticity of demand at p = 2 and comment on the result

Q.4 | Q 7 If the demand function is D = (("p" + 6)/("p" - 3)), find the elasticity of demand at p = 4

Q.4 | Q 8. (i) The total cost of manufacturing x articles is C = 47x + 300x^2 – x^4. Find x, for which average cost is increasing

Q.4 | Q 8. (ii) The total cost of manufacturing x articles is C = 47x + 300x^2 – x^4. Find x, for which average cost is decreasing

### SCERT Maharashtra Question Bank solutions for 12th Standard HSC Mathematics and Statistics (Commerce) Maharashtra State Board 2021 Chapter 1 Applications of Derivatives Q.5

#### Solve the following: [4 Marks]

Q.5 | Q 1 Determine the maximum and minimum values of the following function: f(x) = 2x^3 – 21x^2 + 36x – 20

Q.5 | Q 2 A rod 108 m long is bent to form a rectangle. Find its dimensions when its area is maximum

Q.5 | Q 3 Find MPC, MPS, APC and APS, if the expenditure Ec of a person with income I is given as Ec = (0.0003) I^2 + (0.075) I, when I = 1000

Q.5 | Q 4. (i) A manufacturing company produces x items at a total cost of ₹ 180 + 4x. The demand function for this product is P = (240 – x). Find x for which revenue is increasing

Q.5 | Q 4. (ii) A manufacturing company produces x items at a total cost of ₹ 180 + 4x. The demand function for this product is P = (240 − x). Find x for which profit is increasing

Q.5 | Q 5 If x + y = 3, show that the maximum value of x^2y is 4.
Q.5 | Q 6 Find the equation of tangent to the curve x^2 + y^2 = 5, where the tangent is parallel to the line 2x – y + 1 = 0

Q.5 | Q 7 Find the equation of tangent to the curve y = sqrt(x - 3) which is perpendicular to the line 6x + 3y – 4 = 0

Q.5 | Q 8 Find the equation of tangent to the curve y = x^2 + 4x at the point whose ordinate is – 3

### SCERT Maharashtra Question Bank solutions for 12th Standard HSC Mathematics and Statistics (Commerce) Maharashtra State Board 2021 Chapter 1 Applications of Derivatives Q.6

#### Activity: [4 Marks]

Q.6 | Q 1 A metal wire 36 cm long is bent to form a rectangle. By completing the following activity, find its dimensions when its area is maximum.

Solution: Let the dimensions of the rectangle be x cm and y cm.

∴ 2x + 2y = 36

Let f(x) be the area of the rectangle in terms of x; then f(x) = square

∴ f'(x) = square

∴ f''(x) = square

For extreme value, f'(x) = 0, we get x = square

∴ f''(square) = – 2 < 0

∴ Area is maximum when x = square, y = square

∴ Dimensions of rectangle are square

Q.6 | Q 2 By completing the following activity, examine the function f(x) = x^3 – 9x^2 + 24x for maxima and minima.

Solution: f(x) = x^3 – 9x^2 + 24x

∴ f'(x) = square

∴ f''(x) = square

For extreme values, f'(x) = 0, we get x = square or square

∴ f''(square) = – 6 < 0

∴ f(x) is maximum at x = 2.

∴ Maximum value = square

∴ f''(square) = 6 > 0

∴ f(x) is minimum at x = 4.

∴ Minimum value = square

Q.6 | Q 3 By completing the following activity, find the values of x such that f(x) = 2x^3 – 15x^2 – 84x – 7 is a decreasing function.

Solution: f(x) = 2x^3 – 15x^2 – 84x – 7

∴ f'(x) = square

∴ f'(x) = 6(square) (square)

Since f(x) is a decreasing function,

∴ f'(x) < 0

Case 1: (square) > 0 and (x + 2) < 0

∴ x ∈ square

Case 2: (square) < 0 and (x + 2) > 0

∴ x ∈ square

∴ f(x) is a decreasing function if and only if x ∈ square

Q.6 | Q 4. (i) A manufacturing company produces x items at a total cost of ₹ 40 + 2x. Their price per item is given as p = 120 – x. Find the value of x for which revenue is increasing.

Solution: Total cost C = 40 + 2x and price p = 120 – x

Revenue R = square

Differentiating w.r.t. x, ("dR")/("d"x) = square

Since revenue is increasing, ("dR")/("d"x) > 0

∴ Revenue is increasing for square

Q.6 | Q 4. (ii) A manufacturing company produces x items at a total cost of ₹ 40 + 2x. Their price per item is given as p = 120 − x. Find the value of x for which profit is increasing.

Solution: Total cost C = 40 + 2x and price p = 120 − x

Profit π = R – C

∴ π = square

Differentiating w.r.t. x, ("d"pi)/("d"x) = square

Since profit is increasing, ("d"pi)/("d"x) > 0

∴ Profit is increasing for square

Q.6 | Q 4. (iii) A manufacturing company produces x items at a total cost of ₹ 40 + 2x. Their price per item is given as p = 120 – x. Also find the elasticity of demand at price ₹ 80.

Solution: Total cost C = 40 + 2x and price p = 120 – x

p = 120 – x

∴ x = 120 – p

Differentiating w.r.t.
p, ("d"x)/("dp") = square ∴ Elasticity of demand is given by η = - "P"/x*("d"x)/("dp") ∴ η = square When p = 80, then elasticity of demand η = square ## Chapter 1: Applications of Derivatives Q.1Q.2Q.3Q.4Q.5Q.6 ## SCERT Maharashtra Question Bank solutions for 12th Standard HSC Mathematics and Statistics (Commerce) Maharashtra State Board 2021 chapter 1 - Applications of Derivatives SCERT Maharashtra Question Bank solutions for 12th Standard HSC Mathematics and Statistics (Commerce) Maharashtra State Board 2021 chapter 1 (Applications of Derivatives) include all questions with solution and detail explanation. This will clear students doubts about any question and improve application skills while preparing for board exams. The detailed, step-by-step solutions will help you understand the concepts better and clear your confusions, if any. Shaalaa.com has the Maharashtra State Board 12th Standard HSC Mathematics and Statistics (Commerce) Maharashtra State Board 2021 solutions in a manner that help students grasp basic concepts better and faster. Further, we at Shaalaa.com provide such solutions so that students can prepare for written exams. SCERT Maharashtra Question Bank textbook solutions can be a core help for self-study and acts as a perfect self-help guidance for students. Concepts covered in 12th Standard HSC Mathematics and Statistics (Commerce) Maharashtra State Board 2021 chapter 1 Applications of Derivatives are Introduction of Derivatives, Increasing and Decreasing Functions, Maxima and Minima, Application of Derivatives to Economics. Using SCERT Maharashtra Question Bank 12th Board Exam solutions Applications of Derivatives exercise by students are an easy way to prepare for the exams, as they involve solutions arranged chapter-wise also page wise. The questions involved in SCERT Maharashtra Question Bank Solutions are important questions that can be asked in the final exam. Maximum students of Maharashtra State Board 12th Board Exam prefer SCERT Maharashtra Question Bank Textbook Solutions to score more in exam. Get the free view of chapter 1 Applications of Derivatives 12th Board Exam extra questions for 12th Standard HSC Mathematics and Statistics (Commerce) Maharashtra State Board 2021 and can use Shaalaa.com to keep it handy for your exam preparation
2021-05-18 23:43:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4736503064632416, "perplexity": 2199.410531928735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989874.84/warc/CC-MAIN-20210518222121-20210519012121-00588.warc.gz"}
https://rhettallain.com/2019/03/15/macgyver-season-1-episode-21-science-notes-cigar-cutter/
# MacGyver Season 1 Episode 21 Science Notes: Cigar Cutter

Dirty Bomb

It's not a Mac-Hack (I assume that's clear), but let me just explain the difference between a nuclear bomb and a dirty bomb.

A nuclear bomb uses a nuclear reaction to create energy. If you take some large-mass element (let's just say plutonium) and split it into two pieces, you get some stuff. Obviously you get at least two smaller atoms. But you also get some neutrons and stuff. However, if you added up the mass of all the stuff after the split, it would be slightly less than the mass of the original plutonium. This lost mass is accounted for in energy. Here is the energy-mass relationship.

$E = mc^2$

The "c" is the speed of light. This says that you get a BUNCH of energy for just a little bit of mass, and this is the basis for a nuclear fission reaction. For a nuclear bomb, the split creates neutrons that can also split more atoms, which produces MORE neutrons and more splits. Oh, and the energy and the leftover pieces tend to make stuff radioactive.

The dirty bomb also uses radioactive material. However, the main explosion is not a nuclear reaction but instead a more conventional chemical-based bomb. The bomb includes radioactive material that gets spread around by the explosion. It's dirty. Yes, it's bad—but it's not a nuclear explosion. Also, these are pretty easy to make, since you just need a normal bomb and some radioactive material.

Parsecs and Time and Distance

Everyone (except Jack) is correct. The parsec is a unit of distance. It has to do with parallax. Here is a simple experiment. Hold your thumb out in front of your face. Now close one eye and look at your thumb. Hopefully there is something in the background that you can line it up with. Now close that eye and open the other one. Notice that your thumb now lines up with something else in the background? That's parallax.

Wait. You didn't actually do the eye-thumb thing. Really, you should do that.

OK, back to the parsec. The motion of your thumb with respect to the background depends on the distance from your thumb to your face as well as the distance between your two eyes. What if you increase the distance between your eyes? What if this distance is the size of the Earth's orbit around the Sun? In that case the change in observation locations (on different sides of the Sun) can be used to measure the distance to nearby stars. If a star has an apparent angular shift of 1 second of arc, it is at a distance of one parsec. The "sec" in parsec is for "seconds of arc"—not time seconds. Yes, they made a mistake in Star Wars. Here are even more details about measuring distances in astronomy.

Blood Stopping Foam

I don't know what to call this stuff. MacGyver injects some liquid into Bozer's knife wound and it sort of seals it up so it won't bleed. It's not so much of a hack, but it does appear to be real.

https://www.dailymail.co.uk/sciencetech/article-2697428/The-injectable-foam-stop-soldiers-bleeding-death-battlefield.html

It would be sort of like that expanding foam you use to seal cracks around your house—except for blood.
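For fun, here's a quick numeric check (mine, not from the episode or the post) of the parsec discussion above, using the small-angle parallax relation with a 1 AU baseline:

```python
# Distance from parallax: d = (1 AU) / tan(p), with p in arcseconds.
import math

AU = 1.495978707e11               # astronomical unit, in meters
ARCSEC = math.radians(1 / 3600)   # one second of arc, in radians
LIGHT_YEAR = 9.4607e15            # meters

def distance_m(parallax_arcsec):
    return AU / math.tan(parallax_arcsec * ARCSEC)

d = distance_m(1.0)
print(f"{d:.3e} m = {d / LIGHT_YEAR:.2f} light-years")
# ~3.086e16 m, i.e. one parsec is about 3.26 light-years
```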
2019-09-23 07:50:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.666316568851471, "perplexity": 558.0798146951253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576122.89/warc/CC-MAIN-20190923064347-20190923090347-00351.warc.gz"}
https://ncetm.org.uk/classroom-resources/primm-2-25-using-compensation-to-calculate/
• Mastery PD Materials Using compensation to calculate Spine 2: Multiplication and Division – Topic 2.25 Introduction Learn how multiplication and division calculations are affected when one element of the calculation is multiplied or divided by a scale factor. Teaching points • Teaching point 1: For multiplication, if there is a multiplicative change to one factor, the product changes by the same scale factor. • Teaching point 2: For division, if there is a multiplicative change to the dividend and the divisor remains the same, the quotient changes by the same scale factor. • Teaching point 3: For division, if there is a multiplicative increase to the divisor and the dividend remains the same, the quotient decreases by the same scale factor; if there is a multiplicative decrease to the divisor and the dividend remains the same, the quotient increases by the same scale factor. • Primary • KS2 • Year 6
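The three teaching points are easy to sanity-check with concrete numbers. Here is a tiny Python sketch (ours, not part of the NCETM materials), with one assertion per teaching point:

```python
a, b, k = 6, 4, 10

# Teaching point 1: scale one factor and the product scales the same way
assert (a * k) * b == (a * b) * k        # 60 x 4 = 240 = 24 x 10

# Teaching point 2: scale the dividend (divisor fixed) and the quotient
# scales by the same factor
assert (a * k) / b == (a / b) * k        # 60 / 4 = 15 = 1.5 x 10

# Teaching point 3: scale the divisor (dividend fixed) and the quotient
# scales the opposite way
assert a / (b * k) == (a / b) / k        # 6 / 40 = 0.15 = 1.5 / 10
```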
2021-10-21 17:15:28
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8254713416099548, "perplexity": 930.3619797398632}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585439.59/warc/CC-MAIN-20211021164535-20211021194535-00028.warc.gz"}
https://xianblog.wordpress.com/tag/quarantine/
## hands-on probability 101

Posted in Books, Kids, pictures, Statistics, University life on April 3, 2021 by xi'an

When solving a rather simple probability question on X validated, namely the joint uniformity of the pair $(X,Y)=(A-B+\mathbb{I}_{A<B},\,C-B+\mathbb{I}_{C<B})$ when A, B, C are iid U(0,1), I chose a rather pedestrian way and derived the joint distribution of (A-B,C-B), which turns out to be made of 8 components over the (-1,1)² domain. And to conclude on the uniformity of the above, I added a hand-made picture to explain why the coverage by (X,Y) of any (red) square within (0,1)² was uniform, by virtue of the symmetry between the coverage by (A-B,C-B) of four copies of the (red) square, using color tabs that were sitting on my desk! It did not seem to convince the originator of the question, who kept answering with more questions—or worse, an ever-changing question, reproduced in real time on math.stackexchange!, revealing there that said originator was tutoring an undergrad student!—but this was a light moment in a dreary final day before a new lockdown.

## visitors allowed in Svalbard

Posted in Statistics on November 8, 2020 by xi'an

Posted in Books, Travel on August 17, 2020 by xi'an

Here are the five nominees for the World Fantasy Award 2020, not that I am familiar with this other award, whose 2019 selection does not cover my reading list. And neither does the 2018 edition. Except for the unique ravenesque Ka. At least, this year, I have voraciously read one of them, tremendously enjoyed other books by Ann Leckie, and would be most tempted by reading Japanese fantasy. Adding to my already high pile of books to take on (potential) vacations for the end of the month… or to read at home if again quarantined.

## squash invasion

Posted in pictures, Wines on August 1, 2020 by xi'an

## a journal of the plague year [more deconfined reviews]

Posted in Books, Kids, pictures on July 18, 2020 by xi'an

Took a copy of Room 10 by Åke Edwardson yet again from the book-sharing shelves at Dauphine. And read it within a few days, with limited enthusiasm, as the story proceeds quite sluggishly, every single clue is driven to its very end (e.g., detailing the examination of security recordings for pages!), the Swedish background is mostly missing, the personal stories of the policemen prove frankly boring, and the final explanations stand way beyond a mere suspension of disbelief. The book is back on the shelves.

Watched the beginning of the Salvation series and quickly gave up. Because I soon realised it had nothing to do with Peter Hamilton's trilogy. And because the story did not seem to get anywhere, despite the impending destruction of Earth by a massive asteroid, turning into an East-versus-West spy story. And because the scientific aspects and characters were plain ridiculous. And also because the secondary plot about who should be saved in case of a destruction was quite distasteful in its primitive eugenics.

Read an Indriðason I had not yet read, Sons of Dust [Synir duftsins], the first book he wrote, but ironically rather repetitive on the themes of missing fathers, child abuse, and the social consequences of the Second World War allied occupation, found in the subsequent volumes. And a rather unconvincing plot, especially from a genetic engineering perspective. (The book is not currently available in English. I read it in French.)
Eventually came to watch There Will Be Blood, the 2007 masterpiece by Paul Thomas Anderson, with Daniel Day-Lewis rendering so impressively the descent into madness of the oil tycoon and his thirst for absolute control, losing his adopted son in the process. And unable to stop at exposing the duplicity of the preacher whom he fought for the entire film. The ending is somewhat less impressive than the rest, maybe because all is finished, but it does not diminish the raw power of this tale. And the music track is perfect, with Brahms' Violin Concerto as a leitmotiv. A journey into oily darkness…
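Out of curiosity, here is a quick Monte Carlo confirmation (mine, not xi'an's) of the joint-uniformity claim from the first post above, writing the pair as the fractional parts of A-B and C-B:

```python
# Check that (X, Y) = ({A - B}, {C - B}) looks uniform on (0,1)^2
# when A, B, C are iid U(0,1); {.} denotes the fractional part,
# which equals A - B + 1{A<B} here since A - B lies in (-1, 1).
import numpy as np

rng = np.random.default_rng(0)
A, B, C = rng.random((3, 10**6))
X = (A - B) % 1.0
Y = (C - B) % 1.0

# Empirical coverage of a few squares should match their areas
for lo, hi in [(0.0, 0.5), (0.2, 0.7), (0.5, 1.0)]:
    cover = np.mean((lo < X) & (X < hi) & (lo < Y) & (Y < hi))
    print(f"square ({lo},{hi})^2: {cover:.4f} vs area {(hi - lo)**2:.4f}")
```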
2023-03-29 15:42:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48589953780174255, "perplexity": 2171.2262900105875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949009.11/warc/CC-MAIN-20230329151629-20230329181629-00619.warc.gz"}
https://deepai.org/publication/multirate-partially-explicit-scheme-for-multiscale-flow-problems
Multirate partially explicit scheme for multiscale flow problems

For time-dependent problems with high-contrast multiscale coefficients, the time step size for explicit methods is affected by the magnitude of the coefficient parameter. With a suitable construction of multiscale space, one can achieve a stable temporal splitting scheme where the time step size is independent of the contrast. For the parabolic equation with a heterogeneous diffusion parameter, the flow rates vary significantly in different regions due to the high-contrast features of the diffusivity. In this work, we aim to introduce a multirate partially explicit splitting scheme to achieve efficient simulation with the desired accuracy. We first design multiscale subspaces to handle flow with different speeds. For the fast flow, we obtain a low-dimensional subspace with respect to the high-diffusive component and adopt an implicit time discretization scheme. The other multiscale subspace takes care of the slow flow, and the corresponding degrees of freedom are treated explicitly. Then a multirate time stepping is introduced for the two parts. The stability of the multirate method is analyzed for the partially explicit scheme. Moreover, we derive local error estimators corresponding to the two components of the solution and provide an upper bound of the errors. An adaptive local temporal refinement framework is then proposed to achieve higher computational efficiency. Several numerical tests are presented to demonstrate the performance of the proposed method.

1 Introduction

Modeling of flow and transport in complicated porous media in various physical and engineering applications encounters problems with multiscale features. In particular, the properties of the underlying media, such as thermal diffusivity or hydraulic conductivity, have values across different magnitudes.
This poses challenges for numerical simulation, since the high-contrast features of the heterogeneous media introduce stiffness into the system. In terms of temporal discretization, explicit schemes need a time step size that depends on the magnitude of the multiscale coefficient. For the spatial discretization, many multiscale methods [27, 25, 10, 22, 5, 20] have been introduced to handle the issue. The multiscale model reduction methods include both local [22, 2, 1, 4, 21] and global [26, 7, 9, 8] approaches to reduce computational expenses. The idea is to construct reduced-order models that approximate the full fine-scale model and achieve efficient computation. Among these methodologies, the family of generalized multiscale finite element methods (GMsFEM) [20, 13, 16, 17] was proposed to effectively address multiscale problems with high-contrast parameters. It first formulates some local problems on coarse grid regions to obtain a snapshot basis that can capture the heterogeneous properties, and then designs appropriate spectral problems to extract important modes from the snapshot space. The GMsFEM approach shares some similarities with multi-continuum methods. The basis functions can recognize high-contrast features, such as channels, that need to be represented individually. The convergence of the GMsFEM depends on the eigenvalue decay, and the small eigenvalues correspond to the highly permeable channels.

To construct a multiscale basis such that the convergence of the method is independent of the contrast and decreases linearly with respect to the mesh size under suitable assumptions, the constraint energy minimizing GMsFEM (CEM-GMsFEM) was introduced [14, 11]. This approach begins with a suitable choice of auxiliary space, for which some local spectral problems in coarse blocks are solved. The auxiliary space includes the minimal number of basis functions needed to identify the essential information of the channelized media. It is then used to compute the solutions of constraint energy minimizing problems in some oversampling coarse regions to handle the non-decaying property. The resulting localized solutions form the multiscale space. To adapt the CEM-GMsFEM for flow-based upscaling, the nonlocal multicontinuum upscaling method (NLMC) [12] was proposed by modifying the above framework. The idea is to use a simplified auxiliary space by assuming that each separate fracture network within a coarse grid block is known. The auxiliary bases are piecewise constants corresponding to the fracture networks and the matrix, which are called continua. Then the local problems are formulated for each continuum by minimizing the local energy subject to appropriate constraints. This construction returns localized basis functions which can automatically identify each continuum. Further, due to the properties of the NLMC basis, this approach provides non-local transmissibilities which describe the transfer among coarse blocks in an oversampled region and among different continua.

For time-dependent problems with high-contrast coefficients, there have been various approaches to handle multiscale stiff systems [3, 6, 23, 24]. Recently, a temporal splitting method was combined with the spatial multiscale method [15] to produce a contrast-independent partially explicit time discretization scheme. It splits the solution of the problem into two subspaces which can be computed using implicit and explicit methods, instead of splitting the operator of the equation directly based on physics [32, 33, 34, 28, 29].
The multiscale subspaces are carefully constructed. The dominant basis functions stem from the CEM-GMsFEM, have very few degrees of freedom, and are treated implicitly. The additional space, as a complement, is treated explicitly. It was shown that with the designed spaces, the proposed implicit-explicit scheme is unconditionally stable in the sense that the time step size is independent of the contrast.

Following a similar idea to [15], in this work we propose a multirate time-stepping method for the multiscale flow problem. Multirate time integration methods have been studied extensively for flow problems [18, 19, 30, 31]. Based on different splittings of the target equation, multiple time steps are utilized in different parts of the spatial domain according to the computational cost or the complexity of the physics. In this paper, we integrate the multirate approach with the multiscale space construction. Due to the high-contrast property of the coefficients, the solution passes through different regions of the porous medium with different speeds in the flow problem. Different from the previous approach [15], where the multiscale basis functions are formulated for dominant features and missing information, we propose to design multiscale spaces that handle the fast and slow components of the flow separately. We use the simplified auxiliary space containing piecewise constant functions, as in the NLMC framework. We only keep the basis representing the high-diffusive component in the first space and adopt an implicit time discretization scheme. The second space consists of bases representing the remaining component; it takes care of the slow flow, and the corresponding degrees of freedom are solved explicitly. Next, we introduce a multirate approach where different time step sizes are employed in the partially explicit splitting scheme, such that different parts of the solution are sought with time steps in line with their dynamics. We start with a coarse step size for both equations and refine local coarse time blocks based on some error estimators. With a finer discretization, the accuracy of the approximation can be improved. We analyze the stability of the multirate methods for all four cases, where we use the coarse or fine time step size alternatively for the implicit and explicit parts of the splitting scheme. It shows that the scheme is stable as long as the coarse time step size satisfies some suitable conditions independent of the contrast. Moreover, we propose an adaptive algorithm for the implicit-explicit scheme by deriving error estimators based on the residuals. The two error estimators corresponding to the two components of the solution provide an upper bound of the errors. Compared with uniform refinement, an adaptive refining algorithm can enhance the efficiency significantly. Several numerical examples are presented to demonstrate the effectiveness of the proposed adaptive method.

The paper is organized as follows. In Section 2, we describe the problem setup and the partially explicit scheme. The construction of the multiscale spaces is discussed in Section 3. In Section 4, the multirate method is presented; Subsection 4.1 is devoted to the stability analysis and Subsection 4.3 presents the adaptive algorithm. Numerical tests are shown in Section 5. A conclusion is drawn in Section 6.
2 Problem Setup

Consider the parabolic equation

$$\frac{\partial u}{\partial t}-\nabla\cdot(\kappa\nabla u)=f \quad \text{on } \Omega\times(0,T],$$
$$u=0 \quad \text{on } \partial\Omega\times(0,T],$$
$$u=u_0 \quad \text{on } \Omega\times\{0\},$$

where $\kappa$ is a heterogeneous coefficient with high contrast; that is, the values of the conductivity/permeability in different regions of $\Omega$ can differ by orders of magnitude. For a finite element space $V$, the discretization in space leads to seeking $u(t,\cdot)\in V$ such that

$$\frac{d}{dt}(u,v)+a(u,v)=(f,v),\quad\forall v\in V,\ t\in(0,T],$$
$$u(0,\cdot)=u_0,$$

where $a(u,v)=\int_\Omega\kappa\nabla u\cdot\nabla v$. Now consider a coarse spatial partition $\mathcal{T}_H$ of the computational domain $\Omega$; we will construct suitable multiscale basis functions on $\mathcal{T}_H$ and form a multiscale space $V_H$ which is an approximation of the finite element space. Let $\tau$ be the time step size. The discretization in space with the implicit backward Euler scheme in time reads: find $u_H^{k+1}\in V_H$ such that

$$\left(\frac{u_H^{k+1}-u_H^{k}}{\tau},v\right)+a(u_H^{k+1},v)=(f^{k+1},v),\quad\forall v\in V_H, \tag{1}$$

for $k=0,\dots,N_T-1$, where $N_T$ is the number of time steps and $f^{k+1}=f(t_{k+1},\cdot)$. It is well known that this implicit scheme is unconditionally stable. Suppose the multiscale space can be decomposed into two subspaces, $V_H=V_{H,1}+V_{H,2}$; then a partially explicit temporal splitting scheme [15] is to find $u_{H,1}^{k+1}\in V_{H,1}$ and $u_{H,2}^{k+1}\in V_{H,2}$ satisfying

$$\left(\frac{u_{H,1}^{k+1}-u_{H,1}^{k}}{\tau},v_1\right)+\left(\frac{u_{H,2}^{k}-u_{H,2}^{k-1}}{\tau},v_1\right)+a(u_{H,1}^{k+1}+u_{H,2}^{k},v_1)=(f^{k+1},v_1), \tag{2}$$

$$\left(\frac{u_{H,2}^{k+1}-u_{H,2}^{k}}{\tau},v_2\right)+\left(\frac{u_{H,1}^{k}-u_{H,1}^{k-1}}{\tau},v_2\right)+a\big((1-\omega)u_{H,1}^{k}+\omega u_{H,1}^{k+1}+u_{H,2}^{k},v_2\big)=(f^{k+1},v_2), \tag{3}$$

for all $v_1\in V_{H,1}$ and $v_2\in V_{H,2}$, where $\omega\in\{0,1\}$ is a customized parameter. In the case $\omega=0$, the two equations are decoupled and can be solved simultaneously. In the case $\omega=1$, the second equation depends on the solution $u_{H,1}^{k+1}$; thus the two equations are solved sequentially. The solution at time step $k+1$ is $u_H^{k+1}=u_{H,1}^{k+1}+u_{H,2}^{k+1}$. It was shown in [15] that under appropriate choices of the multiscale spaces $V_{H,1}$ and $V_{H,2}$, the above implicit-explicit scheme resulting from the temporal splitting method is stable with a time step size independent of the contrast. In [15], the dimension of $V_{H,1}$ is low and it contains some dominant multiscale basis functions; the second space includes additional bases representing the missing information. In this paper, we will construct multiscale spaces corresponding to different time scales, where the fast and slow parts of the solution are treated separately.
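To make the time marching concrete, here is a schematic NumPy sketch of one step of the splitting (2)-(3) with $\omega=1$. This is our own illustration, not the authors' code: it assumes the mass and stiffness matrices have been assembled and partitioned into blocks $M_{ij}=(\phi_j,\phi_i)$ and $A_{ij}=a(\phi_j,\phi_i)$ over the two subspaces, so that only the $V_{H,1}$ update requires an implicit solve.

```python
# One step of the partially explicit scheme (2)-(3), omega = 1 (sequential).
# u1, u2: current coefficients in V_{H,1}, V_{H,2}; *_prev: previous step.
import numpy as np

def partially_explicit_step(M11, M12, M21, M22, A11, A12, A21, A22,
                            u1, u1_prev, u2, u2_prev, f1, f2, tau):
    # Implicit update of u1, from (2):
    # (M11/tau + A11) u1_new = f1 + M11 u1/tau - M12 (u2 - u2_prev)/tau - A12 u2
    rhs1 = f1 + M11 @ u1 / tau - M12 @ (u2 - u2_prev) / tau - A12 @ u2
    u1_new = np.linalg.solve(M11 / tau + A11, rhs1)

    # Explicit update of u2, from (3) with omega = 1 (a mass solve only):
    rhs2 = f2 - M21 @ (u1 - u1_prev) / tau - A21 @ u1_new - A22 @ u2
    u2_new = u2 + tau * np.linalg.solve(M22, rhs2)
    return u1_new, u2_new
```

If the $V_{H,2}$ basis is $L^2$-orthonormal, $M_{22}$ is the identity and the second update becomes a purely explicit evaluation.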
3 Construction of multiscale spaces

In this section, we will present the construction of the multiscale spaces.

3.1 Multiscale basis for $V_{H,1}$

We first discuss the basis construction for $V_{H,1}$, which is based on the constraint energy minimizing GMsFEM (CEM-GMsFEM) [14] and the nonlocal multicontinuum method (NLMC) [12]. To start with, we introduce some notation for the fine and coarse discretizations of the computational domain $\Omega$. Let $\mathcal{T}_H$ be a coarse partition with mesh size $H$, and let $\mathcal{T}_h$ be a conforming refinement of $\mathcal{T}_H$ with mesh size $h$, where $h\ll H$. Denote by $K_i$ ($1\le i\le N_c$) the coarse blocks in $\mathcal{T}_H$, and by $K_i^+$ an oversampled region with respect to each $K_i$, where the oversampling part contains a few layers of coarse blocks neighboring $K_i$. Let $V(K_i)$ be the restriction of $V$ on $K_i$.

Under the framework of CEM-GMsFEM, one first constructs an auxiliary space. Consider the spectral problem

$$a_i(\phi_{\text{aux},k}^{(i)},v)=\lambda_k^{(i)}s_i(\phi_{\text{aux},k}^{(i)},v),\quad\forall v\in V(K_i), \tag{4}$$

where $(\lambda_k^{(i)},\phi_{\text{aux},k}^{(i)})$ are the corresponding eigenpairs, and

$$a_i(u,v)=\int_{K_i}\kappa\nabla u\cdot\nabla v,\qquad s_i(u,v)=\int_{K_i}\tilde\kappa\,uv,$$

with $\tilde\kappa=\kappa\sum_i|\nabla\chi_i|^2$, where $\chi_i$ denotes the multiscale partition of unity function. Upon solving the spectral problem, we arrange the eigenvalues of (4) in ascending order and select the eigenfunctions corresponding to the first $l_i$ eigenvalues to form the auxiliary basis functions. Define $V_{\text{aux}}^{(i)}=\text{span}\{\phi_{\text{aux},k}^{(i)}:1\le k\le l_i\}$, where $1\le i\le N_c$ and $N_c$ is the number of coarse elements; the global auxiliary space is $V_{\text{aux}}=\bigoplus_{i=1}^{N_c}V_{\text{aux}}^{(i)}$. We note that the auxiliary space needs to be chosen appropriately in order to get good approximation results. That is, the first few basis functions corresponding to small eigenvalues (representing all the channels) have to be included in the space.

Define a projection operator $\pi_i$ satisfying

$$\pi_i(u)=\sum_{k=1}^{l_i}\frac{s_i(u,\phi_{\text{aux},k}^{(i)})}{s_i(\phi_{\text{aux},k}^{(i)},\phi_{\text{aux},k}^{(i)})}\,\phi_{\text{aux},k}^{(i)},\quad\forall u\in V.$$

Then we let $\pi=\sum_{i=1}^{N_c}\pi_i$ and $s(u,v)=\sum_{i=1}^{N_c}s_i(u,v)$. Define the null space of $\pi$ to be $\tilde V$:

$$\tilde V=\{v\in V\mid\pi(v)=0\}.$$

Let the global basis be the solution of the optimization problem

$$\psi_{\text{glo},j}^{(i)}=\operatorname{argmin}\big\{a(v,v)\ \big|\ v\in V,\ s(v,\phi_{\text{aux},j}^{(i)})=1\ \text{and}\ s(v,\phi_{\text{aux},k'}^{(i')})=0\ \forall (i',k')\ne(i,j)\big\}.$$

Define $V_{\text{glo}}=\text{span}\{\psi_{\text{glo},j}^{(i)}\}$. It can be seen that $V_{\text{glo}}$ is $a$-orthogonal to $\tilde V$, that is,

$$a(\psi_{\text{glo},j}^{(i)},v)=0,\quad\forall v\in\tilde V.$$

The CEM multiscale basis functions are then localizations of $\psi_{\text{glo},j}^{(i)}$, also computed using the auxiliary space. The idea is to solve the constraint energy minimization problem in a localized region:

$$a(\psi_{\text{cem},j}^{(i)},w)+s(w,\mu_j^{(i)})=0,\quad\forall w\in V(K_i^+), \tag{5}$$
$$s(\psi_{\text{cem},j}^{(i)},\nu)=s(\phi_{\text{aux},j}^{(i)},\nu),\quad\forall\nu\in V_{\text{aux}}(K_i^+),$$

where $\mu_j^{(i)}$ is an auxiliary (Lagrange multiplier) variable. The multiscale space is then $V_{\text{cem}}=\text{span}\{\psi_{\text{cem},j}^{(i)}\}$; it is an approximation to the global space $V_{\text{glo}}$.

Note that the construction of the CEM basis presented here is general and can handle complex heterogeneous permeability fields with high contrast. In fractured media where the media configuration is explicitly known (an assumption that is valid in many real applications), we can consider a simplified construction. The domain for media with fracture networks can be represented as

$$\Omega=\Omega_m\bigoplus_{l=1}^{L_f}d_l\,\Omega_{f,l},$$

where the subscripts $m$ and $f$ denote the matrix and fractures, respectively. In the fracture regions $\Omega_{f,l}$, the scalar $d_l$ denotes the aperture, and $L_f$ is the number of discrete fracture networks. The permeabilities of the matrix and the fractures usually have high contrast. In this setting, a simplified auxiliary basis can be adopted, and the constraint energy minimizing basis can be constructed via NLMC [12]; the resulting basis functions separate the continua, such as matrix and fractures, automatically. To be specific, for a given coarse block, we use constants for each separate fracture network, and then a constant for the matrix, in the simplified auxiliary space. Consider an oversampled region $K_i^+$; for a coarse element $K_j$, let $F_j$ be the set of discrete fractures in $K_j$, and let $m_j$ be the number of elements in $F_j$. The NLMC basis functions are obtained by solving the following localized constraint energy minimizing problem on the fine grid:

$$a(\psi_m^{(i)},v)+\sum_{K_j\subset K_i^+}\Big(\mu_0^{(j)}\int_{K_j}v+\sum_{1\le n\le m_j}\mu_n^{(j)}\int_{f_n^{(j)}}v\Big)=0,\quad\forall v\in V_0(K_i^+), \tag{6}$$
$$\int_{K_j}\psi_m^{(i)}=\delta_{ij}\delta_{m0},\quad\forall K_j\subset K_i^+,$$
$$\int_{f_n^{(j)}}\psi_m^{(i)}=\delta_{ij}\delta_{mn},\quad\forall f_n^{(j)}\in F_j,\ \forall K_j\subset K_i^+.$$

We remark that the resulting basis separates the matrix and the fractures automatically, and has a spatial decay property. The NLMC basis for fractured media is then $\{\psi_m^{(i)}:0\le m\le m_i,\ 1\le i\le N_c\}$. We denote the average of the NLMC basis functions representing the fractures by

$$\bar\psi:=\frac{1}{L}\sum_{i=1}^{N_c}\sum_{m=1}^{m_i}\psi_m^{(i)}, \tag{7}$$

where $L=\sum_{i=1}^{N_c}m_i$. Note that in this double summation, the subscript $m$ starts from $1$, which indicates that the basis functions corresponding to the matrix ($m=0$) are not included here. Let $\tilde\psi_m^{(i)}:=\psi_m^{(i)}-\bar\psi$. To simplify the notation, we omit the double scripts in $\tilde\psi_m^{(i)}$ and denote the set of basis functions by $\{\tilde\psi_k:1\le k\le L\}$. Finally, we define the space as follows:

$$V_{H,1}=\text{span}\{\tilde\psi_k,\ 1\le k\le L-1\}, \tag{8}$$

where we take away the last basis function to remove the linear dependency. We remark that $V_{H,1}$ contains basis functions representing the high-contrast regions only. The basis functions corresponding to the matrix and the average basis $\bar\psi$ will be included in the second subspace, constructed in the following section.
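As a rough illustration of how the local spectral problems above might be solved in practice, here is a SciPy sketch; the assembled matrices and the function name are our own assumptions, not the authors' implementation.

```python
# Local auxiliary eigenproblem, cf. (4): A_i phi = lambda S_i phi on a
# coarse block, with A_i the kappa-weighted stiffness matrix and S_i the
# kappa-tilde-weighted mass matrix; the eigenfunctions with the smallest
# eigenvalues encode the high-contrast channels.
import scipy.sparse.linalg as spla

def auxiliary_basis(Ai, Si, num_modes):
    vals, vecs = spla.eigsh(Ai, k=num_modes, M=Si, which='SM')
    order = vals.argsort()
    return vals[order], vecs[:, order]
```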
3.2 Multiscale basis for $V_{H,2}$

In this section, we present the basis construction for $V_{H,2}$. We first include in $V_{H,2}$ the matrix basis functions $\psi_0^{(i)}$ computed from equation (6), and the average basis $\bar\psi$ obtained from equation (7). Additionally, we can also conduct a spectral decomposition in the local coarse regions to construct more basis functions corresponding to the information in the non-high-contrast regions. Here $\omega_i$ is the coarse neighborhood with respect to the $i$-th coarse node. The bases in the spaces need to satisfy the stability conditions (16) and (20). The additional basis functions are obtained by finding the eigenvalues and corresponding eigenfunctions of the spectral problem

$$\int_{\omega_i}\kappa\nabla z_k^{(i)}\cdot\nabla v=\frac{\eta_k^{(i)}}{H^2}\int_{\omega_i}z_k^{(i)}v \tag{9}$$

for all $v\in V(\omega_i)$. Again, arranging the eigenvalues in ascending order and selecting the first eigenfunctions correspondingly, we get $\{z_k^{(i)},\ 0\le k\le L_i,\ 1\le i\le N\}$, where $N$ is the number of coarse nodes. The space is then defined to be

$$V_{H,2}=\text{span}\{\psi_0^{(i)},\ \bar\psi,\ z_k^{(i)}:\ 0\le k\le L_i,\ 1\le i\le N\}. \tag{10}$$

We remark that, for simplicity, we only use the basis functions $\psi_0^{(i)}$ and $\bar\psi$ for the second space in our numerical examples.

4 Multirate time stepping for partially explicit scheme

Based on the multiscale spaces constructed in Section 3, we introduce a multirate time-stepping partially explicit temporal splitting scheme. Consider a coarse time step size $\Delta T$ and a fine time step size $\Delta t$, where $\Delta t=\Delta T/m$ for an integer $m>1$. Denote the fine partition of the time domain by $0=t_0<t_1<\dots<t_{N_f}=T$, and the coarse partition by $0=T_0<T_1<\dots<T_{N_T}=T$. Further, we write each coarse time interval as $[T_k,T_{k+1}]=[t_{n_k},t_{n_{k+1}}]$, where $n_k=km$.

The multirate scheme is then defined as follows. For each coarse interval $[T_k,T_{k+1}]$, given $u_H^{n_k}$, we seek $u_H^{n_{k+1}}=u_{H,1}^{n_{k+1}}+u_{H,2}^{n_{k+1}}$ defined by one of the following four cases: using the coarse time step size in both (2) and (3) (coarse-coarse), using the coarse time step size in (2) and the fine time step size in (3) (coarse-fine), using the fine time step size in (2) and the coarse time step size in (3) (fine-coarse), or using the fine time step size in both (2) and (3) (fine-fine).

• Case 1 (coarse-coarse): Coarse time step size for (2), coarse time step size for (3). That is, take $\tau=\Delta T$ in both equations; for $v_1\in V_{H,1}$ and $v_2\in V_{H,2}$:

$$\left(\frac{u_{H,1}^{n_{k+1}}-u_{H,1}^{n_k}}{\Delta T},v_1\right)+\left(\frac{u_{H,2}^{n_k}-u_{H,2}^{n_{k-1}}}{\Delta T},v_1\right)+a(u_{H,1}^{n_{k+1}}+u_{H,2}^{n_k},v_1)=0, \tag{11}$$
$$\left(\frac{u_{H,2}^{n_{k+1}}-u_{H,2}^{n_k}}{\Delta T},v_2\right)+\left(\frac{u_{H,1}^{n_k}-u_{H,1}^{n_{k-1}}}{\Delta T},v_2\right)+a(\bar u_{H,1}^{n_{k+1}}+u_{H,2}^{n_k},v_2)=0,$$

where $\bar u_{H,1}^{n_{k+1}}:=(1-\omega)u_{H,1}^{n_k}+\omega u_{H,1}^{n_{k+1}}$.

• Case 2 (coarse-fine): Coarse time step size for (2), fine time step size for (3):

$$\left(\frac{u_{H,1}^{n_{k+1}}-u_{H,1}^{n_k}}{\Delta T},v_1\right)+\left(\frac{u_{H,2}^{n_k}-u_{H,2}^{n_{k-1}}}{\Delta T},v_1\right)+a(u_{H,1}^{n_{k+1}}+u_{H,2}^{n_k},v_1)=0, \tag{12}$$
$$\left(\frac{u_{H,2}^{n+1}-u_{H,2}^{n}}{\Delta t},v_2\right)+\left(\frac{u_{H,1}^{n_k}-u_{H,1}^{n_{k-1}}}{\Delta T},v_2\right)+a(\bar u_{H,1}^{n_{k+1}}+u_{H,2}^{n},v_2)=0,$$

for $v_1\in V_{H,1}$, $v_2\in V_{H,2}$, and $n_k\le n\le n_{k+1}-1$.

• Case 3 (fine-coarse): Fine time step size for (2), coarse time step size for (3):

$$\left(\frac{u_{H,1}^{n+1}-u_{H,1}^{n}}{\Delta t},v_1\right)+\left(\frac{u_{H,2}^{n_k}-u_{H,2}^{n_{k-1}}}{\Delta T},v_1\right)+a(u_{H,1}^{n+1}+u_{H,2}^{n_k},v_1)=0, \tag{13}$$
$$\left(\frac{u_{H,2}^{n_{k+1}}-u_{H,2}^{n_k}}{\Delta T},v_2\right)+\left(\frac{u_{H,1}^{n_k}-u_{H,1}^{n_{k-1}}}{\Delta T},v_2\right)+a(\bar u_{H,1}^{n_{k+1}}+u_{H,2}^{n_k},v_2)=0,$$

for $v_1\in V_{H,1}$, $v_2\in V_{H,2}$, and $n_k\le n\le n_{k+1}-1$.

• Case 4 (fine-fine): Fine time step size for (2), fine time step size for (3). That is, take $\tau=\Delta t$ in both equations:

$$\left(\frac{u_{H,1}^{n+1}-u_{H,1}^{n}}{\Delta t},v_1\right)+\left(\frac{u_{H,2}^{n}-u_{H,2}^{n-1}}{\Delta t},v_1\right)+a(u_{H,1}^{n+1}+u_{H,2}^{n},v_1)=0, \tag{14}$$
$$\left(\frac{u_{H,2}^{n+1}-u_{H,2}^{n}}{\Delta t},v_2\right)+\left(\frac{u_{H,1}^{n}-u_{H,1}^{n-1}}{\Delta t},v_2\right)+a(\bar u_{H,1}^{n+1}+u_{H,2}^{n},v_2)=0,$$

for $v_1\in V_{H,1}$, $v_2\in V_{H,2}$, and $n_k\le n\le n_{k+1}-1$, where $\bar u_{H,1}^{n+1}:=(1-\omega)u_{H,1}^{n}+\omega u_{H,1}^{n+1}$.

We remark that one of the two components may not be defined at some of the fine time steps. In this case, we use linear interpolation of the nearest two coarse time step solutions to define the intermediate time step values.

4.1 Stability for different cases

Consider a coarse time block $[T_k,T_{k+1}]$; the stability of the multirate method for the four cases above is proved in this subsection. Let $\gamma$ be a constant such that

$$\gamma=\sup_{v_1\in V_{H,1},\,v_2\in V_{H,2}}\frac{(v_1,v_2)}{\|v_1\|\,\|v_2\|}<1. \tag{15}$$

For case 1 and case 4 defined above, following a proof similar to that in [15], the partially explicit scheme (2)-(3) is stable if

$$\tau\sup_{v\in V_{H,2}}\frac{\|v\|_a^2}{\|v\|^2}\le\frac{1-\gamma^2}{2-\omega},$$

with $\tau=\Delta T$ for case 1 and $\tau=\Delta t$ for case 4. We will show the stability for cases 2 and 3 in the following.
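Before turning to the analysis, a schematic driver for one coarse block of case 2 (coarse-fine) could look as follows. The step callables are hypothetical placeholders for solvers of (2) and (3), and the linear interpolation of $u_{H,1}$ at intermediate fine times mirrors the remark above.

```python
# One coarse block [T_k, T_{k+1}] of multirate case 2: an implicit coarse
# step for u1 (eq. (2), step DT), then m explicit fine substeps for u2
# (eq. (3), step dt = DT/m), with u1 linearly interpolated at fine times.
def multirate_case2_block(step_u1_implicit, step_u2_explicit, u1, u2, DT, m):
    dt = DT / m
    u1_new = step_u1_implicit(u1, u2, DT)
    for n in range(m):
        theta = (n + 1) / m
        u1_interp = (1 - theta) * u1 + theta * u1_new
        u2 = step_u2_explicit(u1_interp, u2, dt)
    return u1_new, u2
```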
4.1.1 Stability for case 2

Here we use the coarse time step for $u_{H,1}$ and the fine time step for $u_{H,2}$.

Lemma 1. The multirate partially explicit scheme in (12) satisfies the stability estimate

$$\frac{\gamma^2\Delta T^2}{2}\sum_{j=1}^{2}\left\|\frac{u_{H,j}^{n_{k+1}}-u_{H,j}^{n_k}}{\Delta T}\right\|^2+\frac{1}{2}\|u_H^{n_{k+1}}\|_a^2\le\frac{\gamma^2\Delta T^2}{2}\sum_{j=1}^{2}\left\|\frac{u_{H,j}^{n_k}-u_{H,j}^{n_{k-1}}}{\Delta T}\right\|^2+\frac{1}{2}\|u_H^{n_k}\|_a^2,$$

if

$$\Delta T\sup_{v\in V_{H,2}}\frac{\|v\|_a^2}{\|v\|^2}\le\frac{(1-\gamma^2)m}{m+1-m\omega}. \tag{16}$$

Proof. The equations in (12) can be written as

$$\big(u_{H,1}^{n_{k+1}}-u_{H,1}^{n_k}+u_{H,2}^{n_k}-u_{H,2}^{n_{k-1}},\,v_1\big)=-\Delta T\,a\big(u_{H,1}^{n_{k+1}}+u_{H,2}^{n_k},\,v_1\big), \tag{17}$$

$$\big(m(u_{H,2}^{n+1}-u_{H,2}^{n})+u_{H,1}^{n_k}-u_{H,1}^{n_{k-1}},\,v_2\big)=-\Delta T\,a\big((1-\omega)u_{H,1}^{n_k}+\omega u_{H,1}^{n_{k+1}}+u_{H,2}^{n},\,v_2\big). \tag{18}$$

Take $v_1=u_{H,1}^{n_{k+1}}-u_{H,1}^{n_k}$ in (17), take $v_2=u_{H,2}^{n+1}-u_{H,2}^{n}$ in (18) and sum over $n_k\le n\le n_{k+1}-1$. Then for the left-hand side of (17), we get

$$\big(u_{H,1}^{n_{k+1}}-u_{H,1}^{n_k}+u_{H,2}^{n_k}-u_{H,2}^{n_{k-1}},\,u_{H,1}^{n_{k+1}}-u_{H,1}^{n_k}\big)\ge\|u_{H,1}^{n_{k+1}}-u_{H,1}^{n_k}\|^2-\gamma\|u_{H,2}^{n_k}-u_{H,2}^{n_{k-1}}\|\,\|u_{H,1}^{n_{k+1}}-u_{H,1}^{n_k}\|\ge\frac{1}{2}\|u_{H,1}^{n_{k+1}}-u_{H,1}^{n_k}\|^2-\frac{\gamma^2}{2}\|u_{H,2}^{n_k}-u_{H,2}^{n_{k-1}}\|^2.$$

For the left-hand side of (18), we have

$$\sum_{n=n_k}^{n_{k+1}-1}\big(m(u_{H,2}^{n+1}-u_{H,2}^{n})+u_{H,1}^{n_k}-u_{H,1}^{n_{k-1}},\,u_{H,2}^{n+1}-u_{H,2}^{n}\big)\ge m\sum_{n=n_k}^{n_{k+1}-1}\|u_{H,2}^{n+1}-u_{H,2}^{n}\|^2-\frac{\gamma^2}{2}\|u_{H,1}^{n_k}-u_{H,1}^{n_{k-1}}\|^2-\frac{1}{2}\|u_{H,2}^{n_{k+1}}-u_{H,2}^{n_k}\|^2$$
$$\ge\frac{m}{2}\sum_{n=n_k}^{n_{k+1}-1}\|u_{H,2}^{n+1}-u_{H,2}^{n}\|^2-\frac{\gamma^2}{2}\|u_{H,1}^{n_k}-u_{H,1}^{n_{k-1}}\|^2,$$

since $\|u_{H,2}^{n_{k+1}}-u_{H,2}^{n_k}\|^2\le m\sum_{n=n_k}^{n_{k+1}-1}\|u_{H,2}^{n+1}-u_{H,2}^{n}\|^2$. Summing up the right-hand sides of (17) and (18), we have

$$-\Delta T\,a(u_{H,1}^{n_{k+1}}+u_{H,2}^{n_k},\,u_{H,1}^{n_{k+1}}-u_{H,1}^{n_k})-(1-\omega)\Delta T\,a(u_{H,1}^{n_k},\,u_{H,2}^{n_{k+1}}-u_{H,2}^{n_k})-\omega\Delta T\,a(u_{H,1}^{n_{k+1}},\,u_{H,2}^{n_{k+1}}-u_{H,2}^{n_k})-\Delta T\sum_{n=n_k}^{n_{k+1}-1}a(u_{H,2}^{n},\,u_{H,2}^{n+1}-u_{H,2}^{n}) \tag{19}$$
$$=-\Delta T\,a(u_{H,1}^{n_{k+1}},\,u_{H,1}^{n_{k+1}}-u_{H,1}^{n_k})+\Delta T\,a(u_{H,2}^{n_k},\,u_{H,1}^{n_k})-\Delta T\,a(u_{H,1}^{n_{k+1}},\,u_{H,2}^{n_{k+1}})+(1-\omega)\Delta T\,a(u_{H,1}^{n_{k+1}}-u_{H,1}^{n_k},\,u_{H,2}^{n_{k+1}}-u_{H,2}^{n_k})-\Delta T\sum_{n=n_k}^{n_{k+1}-1}a(u_{H,2}^{n},\,u_{H,2}^{n+1}-u_{H,2}^{n})=:\mathrm{RHS}.$$

Note that for the terms in RHS in the above identities, we have

$$-a(u_{H,1}^{n_{k+1}},\,u_{H,1}^{n_{k+1}}-u_{H,1}^{n_k})\le-\frac{1}{2}\big(\|u_{H,1}^{n_{k+1}}\|_a^2+\|u_{H,1}^{n_{k+1}}-u_{H,1}^{n_k}\|_a^2-\|u_{H,1}^{n_k}\|_a^2\big),$$

$$\sum_{n=n_k}^{n_{k+1}-1}a(u_{H,2}^{n},\,u_{H,2}^{n+1}-u_{H,2}^{n})\le-\frac{1}{2}\sum_{n=n_k}^{n_{k+1}-1}\big(\|u_{H,2}^{n}\|_a^2+\|u_{H,2}^{n+1}-u_{H,2}^{n}\|_a^2-\|u_{H,2}^{n+1}\|_a^2\big)=-\frac{1}{2}\Big(\|u_{H,2}^{n_k}\|_a^2-\|u_{H,2}^{n_{k+1}}\|_a^2+\sum_{n=n_k}^{n_{k+1}-1}\|u_{H,2}^{n+1}-u_{H,2}^{n}\|_a^2\Big),$$

$$a(u_{H,1}^{n_{k+1}}-u_{H,1}^{n_k},\,u_{H,2}^{n_{k+1}}-u_{H,2}^{n_k})\le\frac{1}{2}\big(\|u_{H,1}^{n_{k+1}}-u_{H,1}^{n_k}\|_a^2+\|u_{H,2}^{n_{k+1}}-u_{H,2}^{n_k}\|_a^2\big).$$

Substituting these into (19) and regrouping terms, we get

$$\mathrm{RHS}\le-\frac{\Delta T}{2}\|u_H^{n_{k+1}}\|_a^2+\frac{\Delta T}{2}\|u_H^{n_k}\|_a^2+\frac{\Delta T}{2}\sum_{n=n_k}^{n_{k+1}-1}\|u_{H,2}^{n+1}-u_{H,2}^{n}\|_a^2-\frac{\omega\Delta T}{2}\cdots$$
2021-10-19 16:43:11
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8243075609207153, "perplexity": 655.224693279377}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585270.40/warc/CC-MAIN-20211019140046-20211019170046-00245.warc.gz"}
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-concepts-through-functions-a-unit-circle-approach-to-trigonometry-3rd-edition/chapter-4-exponential-and-logarithmic-functions-section-4-7-financial-models-4-7-assess-your-understanding-page-346/24
## Precalculus: Concepts Through Functions, A Unit Circle Approach to Trigonometry (3rd Edition)

$6.168 \%$

Recall: Effective Rate of Interest Formula

$$r_e = \left(1+\dfrac{r}{n} \right)^n - 1$$

where

$r_e:$ Effective Rate of Interest
$r:$ Annual Interest Rate
$n:$ Number of compoundings per year

The given problem has:

$r = 0.06$
$\text{Compounded monthly} \to n = 12$

Thus, using these values and the formula above gives:

$r_e = \left(1+\dfrac{0.06}{12} \right)^{12} - 1$

$r_e \approx 0.06168$

$r_e \approx 6.168 \%$
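A quick numeric check of this result (a one-off sketch, not part of the textbook solution):

```python
r, n = 0.06, 12
r_e = (1 + r / n)**n - 1
print(f"{r_e:.5f}")   # 0.06168, i.e. about 6.168%
```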
2021-10-26 02:56:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9008136987686157, "perplexity": 890.5908750986233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587794.19/warc/CC-MAIN-20211026011138-20211026041138-00302.warc.gz"}
https://scirate.com/arxiv/nlin
# Nonlinear Sciences (nlin)

• Quantum chaos can be characterized by an exponential growth of the thermal out-of-time-order four-point function up to a scrambling time $\hat{u}_*$. We discuss generalizations of this statement for certain higher-point correlation functions. For concreteness, we study the Schwarzian theory of a one-dimensional time reparametrization mode, which describes $AdS_2$ gravity and the low-energy dynamics of the SYK model. We identify a particular set of $2k$-point functions, characterized as being both "maximally braided" and "k-OTO", which exhibit exponential growth until progressively longer timescales $\hat{u}^{(k)}_* = (k-1)\hat{u}_*$. We suggest an interpretation as scrambling of increasingly fine-grained measures of quantum information, which correspondingly take progressively longer times to reach their thermal values.

• We consider free rotation of a body whose parts move slowly with respect to each other under the action of internal forces. This problem can be considered as a perturbation of the Euler-Poinsot problem. The dynamics has an approximate conservation law - an adiabatic invariant. This allows us to describe the evolution of the rotation in the adiabatic approximation. The evolution leads to an overturn in the rotation of the body: the vector of angular velocity crosses the separatrix of the Euler-Poinsot problem. This crossing leads to a quasi-random scattering in the body's dynamics. We obtain formulas for the probabilities of capture into different domains in the phase space at separatrix crossings.

• Despite recent progress, laminar-turbulent coexistence in transitional planar wall-bounded shear flows is still not well understood. Contrasting with the processes by which chaotic flow inside turbulent patches is sustained at the local (minimal flow unit) scale, the mechanisms controlling the obliqueness of laminar-turbulent interfaces typically observed all along the coexistence range are still mysterious. An extension of Waleffe's approach [Phys. Fluids 9 (1997) 883-900] is used to show that, already at the local scale, drift flows breaking the problem's spanwise symmetry are generated just by slightly detuning the modes involved in the self-sustainment process. This opens perspectives for theorizing the formation of laminar-turbulent patterns.

• Cellular Automata (CAs) are computational models that can capture the essential features of systems in which global behavior emerges from the collective effect of simple components which interact locally. During the last decades, CAs have been extensively used for mimicking several natural processes and systems, and for finding fine solutions to many complex, hard-to-solve computer science and engineering problems. Among them, the shortest path problem is one of the most pronounced and highly studied problems that scientists have been trying to tackle using a plethora of methodologies and even unconventional approaches. The proposed solutions are mainly justified by their ability to provide a correct solution with a better time complexity than the renowned Dijkstra's algorithm.
Although there is a wide variety in the algorithmic complexity of the suggested algorithms, spanning from simplistic graph traversal algorithms to complex nature-inspired and bio-mimicking algorithms, in this chapter we focus on the successful application of CAs to the shortest path problem as found in various diverse disciplines like computer science, swarm robotics, computer networks, decision science, and the biomimicking of biological organisms' behaviour. In particular, an introduction to the first CA-based algorithm tackling the shortest path problem is provided in detail. After a short presentation of shortest path algorithms arising from the relaxation of CA principles, the application of the CA-based shortest path definition to the coordinated motion of swarm robotics is also introduced. Moreover, the CA-based application of shortest path finding in computer networks is presented in brief. Finally, a CA that models exactly the behavior of a biological organism, namely Physarum's behavior, finding the minimum-length path between two points in a labyrinth, is given.

• In this paper, we first give the definition of the coupled Hall-Littlewood function and its realization in terms of vertex operators. Then we construct the representation of the two-site generalized $q$-boson model in the algebra of coupled Hall-Littlewood functions. Finally, we find that the vertex operators which generate coupled Hall-Littlewood functions can also be used to obtain the partition function of the A-model topological string on the conifold.

• We report for the first time the observation of bunching of monoatomic steps on vicinal W(110) surfaces induced by step-up or step-down currents across the steps. Measurements reveal that the size scaling exponent $\gamma$, connecting the maximal slope of a bunch with its height, differs depending on the current direction. We provide a numerical perspective by using an atomistic-scale model with a conserved surface flux to mimic experimental conditions, and also for the first time show that there is an interval of parameters in which the vicinal surface is unstable against step bunching for both directions of the adatom drift.

• Many biological and cognitive systems do not operate deep within one or another regime of activity. Instead, they are poised at critical points located at transitions of their parameter space. The pervasiveness of criticality suggests that there may be general principles inducing this behaviour, yet there is no well-founded theory for understanding how criticality is found at a wide range of levels and contexts. In this paper we present a general adaptive mechanism that maintains an internal organizational structure in order to drive a system towards critical points while it interacts with different environments. We implement the mechanism in artificial embodied agents controlled by a neural network maintaining a correlation structure randomly sampled from an Ising model at critical temperature. Agents are evaluated in two classical reinforcement learning scenarios: the Mountain Car and the Acrobot double pendulum. In both cases the neural controller reaches a point of criticality, which coincides with a transition point between two regimes of the agent's behaviour. These results suggest that adaptation to criticality could be used as a general adaptive mechanism in some circumstances, providing an alternative explanation for the pervasive presence of criticality in biological and cognitive systems.
Veaceslav Molodiuc Apr 19 2017 07:26 UTC

http://ibiblio.org/e-notes/Chaos/intermit.htm

Travis Scholten Oct 02 2015 03:25 UTC

No worries with regards to the code - when it does get released, would you mind pinging me? You can find me on [GitHub](https://github.com/Travis-S).

Nicola Pancotti Sep 23 2015 07:58 UTC

Hi Travis, yes, that code is related to the work we did, and that is my repo. However, it is quite outdated. I used that repo for sharing the code with my collaborators. Now we are working on providing a human-friendly version, commented and possibly optimized. If you would like to have a working ...(continued)

Travis Scholten Sep 21 2015 17:08 UTC

Has anyone found some source code for the SGD referenced in this paper? I came across a [GitHub repository](https://github.com/nicaiola/thesisproject) from Nicola Pancotti (at least, I think that is his username, and the code seems to fit with the kind of work described in the paper!). I am not sure ...(continued)
2017-12-16 13:07:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4511624574661255, "perplexity": 769.7373148050921}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948588072.75/warc/CC-MAIN-20171216123525-20171216145525-00209.warc.gz"}
https://askthetask.com/112320/observing-woodpecker-diagram-measure-elevation-rounded-nearest
48. Amari is observing a woodpecker near the top of a tree, as shown in the diagram below. What is the measure of angle x, the angle of elevation, rounded to the nearest degree? [diagram: bird near the top of the tree; the side opposite x is 20 ft, the side adjacent to x is 36 ft]

Here x is the angle of elevation, and the value of x is 29 degrees.

We are given that Amari is observing a woodpecker near the top of a tree, as shown in the diagram. We have to determine the measure of angle x, the angle of elevation, rounded to the nearest degree.

What is the formula for tan theta? $$\tan(\theta)=\frac{\text{Opposite side}}{\text{Adjacent side}}$$

From the diagram, the opposite side = 20 ft and the adjacent side = 36 ft. Using these values in the formula above, we get $$\tan(x)=\frac{20}{36}=\frac{5}{9}\approx 0.5556,\qquad x=\tan^{-1}(0.5556)\approx 29.05^\circ$$

Therefore the value of x, rounded to the nearest degree, is 29.
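As a quick numeric check of the corrected value (not part of the original answer):

```python
import math

# Angle of elevation with opposite = 20 ft and adjacent = 36 ft.
print(round(math.degrees(math.atan(20 / 36))))  # 29
```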
2023-03-21 18:29:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7958524227142334, "perplexity": 1134.5967653998207}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943704.21/warc/CC-MAIN-20230321162614-20230321192614-00260.warc.gz"}
https://math.stackexchange.com/questions/3558810/prove-that-e-pi-frac1-pi-pie1
# Prove that $e^\pi+\frac{1}{\pi} < \pi^e+1$

Prove that: $$e^\pi+\frac{1}{\pi}< \pi^{e}+1$$ Using Wolfram Alpha, $$\pi e^{\pi}+1 \approx 73.698\ldots$$ and $$\pi(\pi^{e}+1) \approx 73.699\ldots$$ Can this inequality be proven without brute-force estimations (anything of the sort $$e\approx 2.7182...$$ or $$\pi \approx 3.1415...$$)? I've just seen this, and I remembered that I've seen the question asked here in an older paper, but I don't remember the details. Note that this is sharper because it can be written as: $$e^{\pi}-\pi^e<1-\frac{1}{\pi}<1$$ I've tried, but none of the methods in the linked question (which study the function $$x^\frac{1}{x}$$) can be applied here.

• What about the function $f(x)=e^x-x^e$? Feb 24, 2020 at 20:50
• @WeierstraßRamirez, that is essentially the same as studying $x^{\frac{1}{x}}$. It has two critical points, at $1$ (maximum) and $e$ (minimum). I think it's only enough to show $e^{\pi} > \pi^e$. Or is there something I'm missing? – LHF Feb 24, 2020 at 20:58
• Maybe the next link is useful! Feb 24, 2020 at 21:38
• What exactly are "estimations"? What exactly may the solution involve? Feb 24, 2020 at 22:19
• Maybe interesting: another sharp bound for the expression $e^\pi-\pi^e$ is given by $$e^\pi-\pi^e \approx \frac{1}{6}\,\sqrt [3]{75+7\,\sqrt {449}}-\,{\frac {2}{\sqrt [3]{75+7\,\sqrt {449}}}}<1-\frac{1}{\pi}$$ In fact, I slightly changed the real root of the polynomial $x^3+x-1$! Feb 26, 2020 at 18:13

From the continued fraction expansion of $$\pi$$, we have $$\frac{333}{106}\lt\frac{103993}{33102}\lt\pi\lt\frac{355}{113}\;.$$ There are various ways of proving these inequalities without using decimal approximations. In the case of $$\mathrm e$$, the continued fraction expansion is regular and can be systematically derived (see e.g. A Short Proof of the Simple Continued Fraction Expansion of e by Henry Cohn, The American Mathematical Monthly, $$113(1)$$, $$57$$–$$62$$; The Simple Continued Fraction Expansion of e by C. D. Olds, The American Mathematical Monthly, $$77(9)$$, $$968$$–$$974$$; or Continued fraction for e at Topological Musings); it yields $$\frac{1264}{465}\lt\mathrm e\lt\frac{1457}{536}\;.$$ Thus it suffices to show that $$\left(\frac{1457}{536}\right)^\frac{355}{113}+\frac1{\frac{333}{106}}\lt\left(\frac{103993}{33102}\right)^\frac{1264}{465} + 1\;,$$ or $$\left(\frac{1457}{536}\right)^\frac{355}{113}\lt\left(\frac{103993}{33102}\right)^\frac{1264}{465} + \frac{227}{333}\;.$$ Since both sides contain fractional exponents, it's hard to compare them directly; but we can find a fraction that lies between them and compare them to it separately. Among the suitable fractions, the one with the lowest denominator is $$\frac{4767}{206}$$. The rational inequalities $$\left(\frac{1457}{536}\right)^{355}\lt\left(\frac{4767}{206}\right)^{113}$$ and $$\left(\frac{4767}{206}-\frac{227}{333}\right)^{465}\lt\left(\frac{103993}{33102}\right)^{1264}$$ are readily checked with integer arithmetic, and thus with $$\left(\frac{1457}{536}\right)^\frac{355}{113}\lt\frac{4767}{206}\lt\left(\frac{103993}{33102}\right)^\frac{1264}{465} + \frac{227}{333}$$ the result follows.

• I appreciate the great work and references. Given the sharpness of the inequality I doubt a better approach can be found, but I'll wait a little more time (a day or two) before accepting the answer.
– LHF Feb 25, 2020 at 13:19
• With the utmost respect, I appreciate your contribution to this post, but how can one deny that its underlying calculations are far more complicated than simply calculating both sides of the inequality? I would never give you a downvote, but in no way, given the type of problem posed, could I give an upvote. Best regards. Feb 25, 2020 at 14:07
• @Piquito: I think you'd need to explicate your concept of "simply calculating". The question asked for a proof, so you'd need to be able to control the errors in this "simple calculation". Your own answer works with approximations ($\approx$) without specifying any error bounds for them; thus it doesn't constitute a proof. Moreover, the question explicitly asked for answers that don't use this sort of approximation. Feb 25, 2020 at 14:11
• A kind of response to problems with parallel thinking. In this case: calculate everything you want outside the problem and apply it here. My best wishes for you. Feb 25, 2020 at 15:51
• @joriki, it seems like an epidemic of mysterious, senseless downvotes on MSE nowadays. – LHF Feb 25, 2020 at 17:14
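The accepted answer's two rational inequalities are "readily checked with integer arithmetic"; the sketch below (my own, with expected outputs reflecting the answer's claims) performs exactly that check using Python's arbitrary-precision integers:

```python
from fractions import Fraction

# (1457/536)^355 < (4767/206)^113, cleared of denominators:
lhs = 1457**355 * 206**113
rhs = 4767**113 * 536**355
print(lhs < rhs)  # expected: True

# (4767/206 - 227/333)^465 < (103993/33102)^1264, exactly:
d = Fraction(4767, 206) - Fraction(227, 333)
print(d**465 < Fraction(103993, 33102)**1264)  # expected: True
```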
2022-07-07 16:58:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 24, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6389413475990295, "perplexity": 384.0846118913898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104495692.77/warc/CC-MAIN-20220707154329-20220707184329-00050.warc.gz"}
https://solvedlib.com/n/94m-9-0f15ata-certain-temperature-0-4211-mol-ofnz-and-1,5691678
# At a certain temperature, 0.4211 mol of N2 and 1.681 mol of H2 are placed in a 2.400 L container, N2(g) + 3H2(g) ⇌ 2NH3(g)

###### Question: At a certain temperature, 0.4211 mol of N2 and 1.681 mol of H2 are placed in a 2.400 L container, N2(g) + 3H2(g) ⇌ 2NH3(g). At equilibrium, 0.1401 mol of N2 is present. Calculate the equilibrium constant Kc. (A worked numeric sketch follows the similar questions below.)

#### Similar Solved Questions

##### 3. a) Approximate $\int \frac{dx}{1+x}$ (the limits of integration are garbled in the original) using Simpson's Rule with n = 4. b) Is your answer to a) an overestimate or an underestimate? Justify your answer.

##### Question 6 (3.03 pts): [garbled OCR: a sample of methylene chloride with a given density; calculate its volume]

##### An inclined plane of angle 20.0° has a spring of force constant k = 460 N/m fastened securely at the bottom so that the spring is parallel to the surface, as shown in the figure below. A block of mass m = 2.45 kg is placed on the plane at a distance of 0.291 m from the spring. From this position, the block is projected downward toward the spring with speed 0.750 m/s. By what distance is the spring compressed when the block momentarily comes to rest?

##### W.T. Ginsburg Engine Company manufactures part ACT31107 used in several of its engine models. Monthly production costs for 1,000 units are as follows: Direct materials $43,000; Direct labor $10,500; Variable overhead costs $30,500; Fixed overhead costs $19,000; Total costs $103,000. It is estimated that 7% ...

##### How do you differentiate (4x^-2) - 8x + 1?

##### The operations vice president of Security Home Bank has been interested in investigating the efficiency of the bank's operations. She has been particularly concerned about the costs of handling routine transactions at the bank and would like to compare these costs at the bank's various branc...

##### Which bond is completely nonpolar? Group of answer choices: C–H, H–F, H–O, C–N, F–F

##### Tides are the periodic rise and fall of water in the ocean. A low tide of 3.2 metres in White Rock, BC occurs at 6:45 a.m., and the next high tide of 10.8 metres occurs at 1:45 p.m. on the same day. a) Write sinusoidal functions that describe the height of the tide, t hours after midnight, using BOTH "sine" and "cosine" (negative ones can be used if preferred). b) Graph the function. Please show at least two cycles and all the details needed.
##### The Haber process for production of ammonia is as follows: N2(g) + 3H2(g) → 2NH3(g). An experiment ran this process using 5.75 moles of N2 and excess hydrogen gas. The reaction produced 7.50 moles of NH3. Calculate the percent yield for this experiment. Round your answer to the nearest whole n...

##### X-ray absorption spectroscopy (XAS), sometimes called X-ray absorption fine structure spectroscopy (XAFS), can provide chemical and physical information about a target element. Explain.

##### Suppose that $f(x) = a_1 x + a_2 x^2 + \cdots + a_n x^n$, where n is any natural number (i.e., n = 0, 1, 2, 3, ...). What is $\frac{d^{2n}}{dx^{2n}}[f(x)]$? [the second part of the question is garbled in the original]

##### In a certain year, 89% of all Caucasians in the U.S., 77% of all African-Americans, 77% of all Hispanics, and 75% of residents not classified into one of these groups used the Internet for e-mail. At that time, the U.S. population was 64% Caucasian, 10% African-American, and 10% Hispanic. What percentage of U.S. residents who used the Internet for e-mail were Hispanic? (Round your answer to the nearest whole percent.)

##### Assume that the fracture resistance of a wire used in the manufacture of drapery is normally distributed with σ² = 2. A random sample of 25 specimens has been examined, and it yielded a mean resistance of x̄ = 98 psi. Give a 95% confidence interval for the mean resistance. A. [97.216, 98.784] B. (97.2...

##### Unclogging Arteries: Research done in 1930 by the French physiologist Jean Poiseuille showed that the resistance of a blood vessel of constant length and radius is [remainder of the problem garbled in the original]

##### Let x(t) and y(t) be measures of happiness for the husband and wife, respectively. Negative values indicate unhappiness. Let x₀ and y₀ be the "natural disposition" of the husband and wife, respectively.
This is how happy they would be if they were single. During marriage, the couple develops a style of interaction that is called "validating". A model of their marriage dynamics is: [equations garbled in the original] where $a_1$ measures how easily the husband is influenced by the wife's emotions...

##### I need help in writing a 3-page investment policy statement for Bill and Joyce Owens. It needs to include the client profile, a recommended investment strategy, and an allocation that is consistent with the strategy; it has to match the profile, with expectations (which have to get the same return fo...

##### Global Gaming: SesamWare is a Japanese software company responsible for the most popular open-source software available on the market today. In operation since the mid-1990s, SesamWare initially gained international acclaim with an online, multiplayer, fantasy dimension game called ParallelWorld (Paral...)

##### EX 15-1 Classifying costs as materials, labor, or factory overhead (Obj. 2): Indicate whether each of the following costs of an automobile manufacturer would be classified as direct materials cost, direct labor cost, or factory overhead cost. A. Depreciation of robotic...

##### Niles, an accountant, certifies several audit reports on Optimal Operational Processes, Inc., Niles's client, knowing that the company intends to use the reports to borrow money from Prime Business Lending Company to buy new equipment. Niles believes that the reports are true and does not inte...

##### 10. (8 points) A company has a marginal profit function given by P′(x) = 2x + 150, where P′(x) is in dollars per unit. This means that the rate of change of total profit with respect to the number of units produced, x, is P′(x). Find the total profit from the production and sale of the first 40 units.

##### An animal breeder can buy four types of food for Vietnamese pot-bellied pigs. Each case of Brand A contains 25 units of fiber, 40 units of protein, and 40 units of fat. Each case of Brand B contains 50 units of fiber, 50 units of protein, and 30 units of fat. Each case of Brand C contains 75 units o...

##### 7. (1 pt) If the haploid human genome contains 2 × 10⁹ base pairs, what is the total length of DNA found in a single hair cell?
##### The output approach to calculating GDP: The following chart provides production data for selected cars. All prices are quoted in U.S. dollars.

| | Ferrari F1 | Ford Focus | Honda CRV |
| --- | --- | --- | --- |
| | Origin / Price | Origin / Price | Origin / Price |
| Engine | Italy / 60,000 | ... | ... |
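Returning to the equilibrium question at the top of this page (as promised there), here is a short worked sketch of the standard ICE-table computation; the variable names are mine, and the numbers are the cleaned-up values from the question:

```python
# N2(g) + 3 H2(g) <=> 2 NH3(g) in a 2.400 L container.
V = 2.400                       # container volume, L
n_N2_0, n_H2_0 = 0.4211, 1.681  # initial moles
n_N2_eq = 0.1401                # moles of N2 left at equilibrium

x = n_N2_0 - n_N2_eq            # moles of N2 consumed
n_H2_eq = n_H2_0 - 3 * x        # stoichiometry: 3 H2 per N2
n_NH3_eq = 2 * x                # stoichiometry: 2 NH3 per N2

N2, H2, NH3 = n_N2_eq / V, n_H2_eq / V, n_NH3_eq / V
Kc = NH3**2 / (N2 * H2**3)
print(round(Kc, 1))             # approximately 22.1
```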
2022-07-06 01:19:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3515661656856537, "perplexity": 6814.26947771334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104655865.86/warc/CC-MAIN-20220705235755-20220706025755-00799.warc.gz"}
https://www.freemathhelp.com/forum/threads/109333-Derivatives-help-for-economics-(PED-using-calculus-dQ-dP-x-P-Q)
# Thread: Derivatives help for economics (PED using calculus = dQ/dP x P/Q)

1. ## Derivatives help for economics (PED using calculus = dQ/dP x P/Q)

Hi guys, I know this will be a really silly Q, but I just cannot understand it. For any of you that know some economics, I understand that PED using calculus = dQ/dP x P/Q. Q = 60 - 3P, and so am I right in thinking that the derivative of Q (dQ) is equal to -3? We are asked to find the PED when price (P) is equal to 15, so am I also right in thinking that the derivative of P (dP) is 0? According to the constant rule, the derivative of any constant is always 0. Therefore, surely dQ/dP is the same as -3/0? And I know you cannot divide by 0? This is really bugging me, and I know I'm probably making a very silly mistake, but I hope someone can help me out. Also, the book I'm using says that dQ/dP is -3, which is confusing, because does that mean they are just disregarding the fact that the derivative of 15 = 0? Many thanks, Student

2. Originally Posted by dylanpc Hi guys, I know this will be a really silly Q, but I just cannot understand it. For any of you that know some economics, I understand that PED using calculus = dQ/dP x P/Q. Q = 60 - 3P, and so am I right in thinking that the derivative of Q (dQ) is equal to -3? We are asked to find the PED when price (P) is equal to 15, so am I also right in thinking that the derivative of P (dP) is 0? According to the constant rule, the derivative of any constant is always 0. Therefore, surely dQ/dP is the same as -3/0? And I know you cannot divide by 0? This is really bugging me, and I know I'm probably making a very silly mistake, but I hope someone can help me out. Also, the book I'm using says that dQ/dP is -3, which is confusing, because does that mean they are just disregarding the fact that the derivative of 15 = 0? Many thanks, Student What is PED?

3. Originally Posted by Subhotosh Khan What is PED? PED is price elasticity of demand. It's an economics concept, but I don't think you really need to know it for the question.

4. ## hope this makes my question more simple If this is simpler: what is the derivative of the equation Q = 60 - 3P? It's -3, right? And then what is the derivative of 15? 0? So surely you can't work out dQ/dP, as this means dividing by 0?

5. Originally Posted by dylanpc For any of you that know some economics, I understand that PED using calculus = dQ/dP x P/Q. Q = 60 - 3P, and so am I right in thinking that the derivative of Q (dQ) is equal to -3? We are asked to find the PED when price (P) is equal to 15, so am I also right in thinking that the derivative of P (dP) is 0? According to the constant rule, the derivative of any constant is always 0. Therefore, surely dQ/dP is the same as -3/0? And I know you cannot divide by 0? This is really bugging me, and I know I'm probably making a very silly mistake, but I hope someone can help me out. Also, the book I'm using says that dQ/dP is -3, which is confusing, because does that mean they are just disregarding the fact that the derivative of 15 = 0? I think you are thoroughly misunderstanding the notation. When we write dQ/dP, it is not a division; it just means the derivative of Q with respect to P. You should think of it as a single symbol. Other books might avoid this notation and call it Q' instead. dQ is not the derivative of Q; it is called a "differential", and is not needed here. So if Q = 60 - 3P, then we say that dQ/dP = -3, not that dQ = -3.
(Note also that we take the derivative of a function, not of an equation, and you must always state with respect to what.) Furthermore, you must take a derivative before assigning a specific value to any variable, because once you assign a value, nothing is varying any more, and derivatives are all about how something varies. So if P were a function of something and you took the derivative "when P = 15", you would not take the derivative of the function P = 15, which is 0; you would find the derivative of that function, and then replace P with 15. But here P is the independent variable; it is not a function of anything, so you don't take its derivative. You take the derivative of Q with respect to P. So, yes, this is a silly question; has it been a long time since you took calculus, or have you not yet really studied it, and are trying to understand the economics with no real knowledge of what derivatives are? We might have suggestions for what you need to study. 6. Originally Posted by dylanpc Hi guys, I know this will be a really silly Q, but I just cannot understand it. For any of you that know some economics, I understand that PED using calculus = dQ/dP x P/Q. Q = 60 - 3P, and so am I right in thinking that the derivative of Q (dQ) is equal to -3? We are asked to find the PED when price (P) is equal to 15, so am I also right in thinking that the derivative of P (dP) is 0? According to the constant rule, the derivative of any constant is always 0. Therefore, surely dQ/dP is the same as -3/0? And I know you cannot divide by 0? This is really bugging me, and I know I'm probably making a very silly mistake, but I hope someone can help me out. Also, the book I'm using says that dQ/dP is -3, which is confusing, because does that mean they are just disregarding the fact that the derivative of 15 = 0? Many thanks, Student PED means, I guess, the price elasticity of demand. Are you too lazy to explain what an acronym means? There are many different types of elasticity. It is not true in general that Q = 60 - 3P. Who told you that it was? There is no reason to believe, and quite a few reasons to disbelieve, that demand curves are linear. What evidence do you have for the universal truth that all demand curves are linear with slope equal to -3? Or is this supposed to be a specific case? Yes, in the specific case when $Q = 60 - 3P \implies \dfrac{dQ}{dP} = -\ 3 \implies dQ = -\ 3 dP.$ The idea that you can apply calculus to economics in a way that makes sense to people who do not understand calculus is goofy. Of course you cannot divide by zero. But that says nothing about limits, which, according to standard analysis, is what calculus is all about. Of course, most economists are abysmally ignorant of mathematics. The POINT price elasticity of demand is defined as: $Q \ne 0, \text { price elasticity of demand } = \dfrac{dQ}{dP} \div \dfrac{Q}{P}.$ It is utter ignorance of modern mathematics that mis-states that as $\dfrac{Q}{dQ} \div \dfrac{P}{dP}.$ The average economist has no idea what the mathematical notation means. $P = 15 \text { and } \dfrac{dQ}{dP} \div \dfrac{Q}{P} = \dfrac{-\ 3}{\dfrac{60 - 3P}{P}} = \dfrac{-\ 3P}{60 - 3P} = \dfrac{- 3 * 15}{60 - 3 * 15} = -\ 3.$ 7. ## Thanks Originally Posted by JeffM PED means, I guess, the price elasticity of demand. Are you too lazy to explain what an acronym means? There are many different types of elasticity. It is not true in general that Q = 60 - 3P. Who told you that it was?
There is no reason to believe, and quite a few reasons to disbelieve, that demand curves are linear. What evidence do you have for the universal truth that all demand curves are linear with slope equal to -3? Or is this supposed to be a specific case? Yes, in the specific case when $Q = 60 - 3P \implies \dfrac{dQ}{dP} = -\ 3 \implies dQ = -\ 3 dP.$ The idea that you can apply calculus to economics in a way that makes sense to people who do not understand calculus is goofy. Of course you cannot divide by zero. But that says nothing about limits, which, according to standard analysis, is what calculus is all about. Of course, most economists are abysmally ignorant of mathematics. The POINT price elasticity of demand is defined as: $Q \ne 0, \text { price elasticity of demand } = \dfrac{dQ}{dP} \div \dfrac{Q}{P}.$ It is utter ignorance of modern mathematics that mis-states that as $\dfrac{Q}{dQ} \div \dfrac{P}{dP}.$ The average economist has no idea what the mathematical notation means. $P = 15 \text { and } \dfrac{dQ}{dP} \div \dfrac{Q}{P} = \dfrac{-\ 3}{\dfrac{60 - 3P}{P}} = \dfrac{-\ 3P}{60 - 3P} = \dfrac{- 3 * 15}{60 - 3 * 15} = -\ 3.$ I guess I was too lazy, and I just assumed that people would understand the acronym, but in future I will explain. And yes, I should have specified that this was a specific case where the demand curve is linear. Thanks very much for your time and help! 8. ## Thanks!! Originally Posted by Dr.Peterson I think you are thoroughly misunderstanding the notation. When we write dQ/dP, it is not a division; it just means the derivative of Q with respect to P. You should think of it as a single symbol. Other books might avoid this notation and call it Q' instead. dQ is not the derivative of Q; it is called a "differential", and is not needed here. So if Q = 60 - 3P, then we say that dQ/dP = -3, not that dQ = -3. (Note also that we take the derivative of a function, not of an equation, and you must always state with respect to what.) Furthermore, you must take a derivative before assigning a specific value to any variable, because once you assign a value, nothing is varying any more, and derivatives are all about how something varies. So if P were a function of something and you took the derivative "when P = 15", you would not take the derivative of the function P = 15, which is 0; you would find the derivative of that function, and then replace P with 15. But here P is the independent variable; it is not a function of anything, so you don't take its derivative. You take the derivative of Q with respect to P. So, yes, this is a silly question; has it been a long time since you took calculus, or have you not yet really studied it, and are trying to understand the economics with no real knowledge of what derivatives are? We might have suggestions for what you need to study. Thank you very much for your responses, and I understand the problem now. Yes, I knew it would be a silly question!
And yes, it has been 3 years since I've studied any maths at all, and then all of a sudden I am doing college/university-level maths for economics, so the jump has been very large, and I would be grateful to hear any suggestions on books you may have to help me with this. Many thanks, student 9. Originally Posted by dylanpc Thank you very much for your responses, and I understand the problem now. Yes, I knew it would be a silly question! And yes, it has been 3 years since I've studied any maths at all, and then all of a sudden I am doing college/university-level maths for economics, so the jump has been very large, and I would be grateful to hear any suggestions on books you may have to help me with this. Many thanks, student It isn't clear whether you are saying that you never studied calculus at all, or that you studied it a little but have forgotten it. A course that uses calculus should have a solid knowledge of calculus as a prerequisite; even with that, I would not be surprised if your textbook had an appendix reviewing the essentials of calculus that will be needed in the course. If so, you should start there. If I were you, I would be visiting your professor to ask what you will need to know, and how to get that knowledge. The answer will depend on the demands of the course, and on your level of preparation. You should be very open about what you do not understand, in order to get the best possible advice.
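As a postscript to the thread, a short symbolic check (a SymPy sketch, not part of the original posts) confirms both the derivative and the point elasticity at P = 15:

```python
import sympy as sp

P = sp.symbols('P', positive=True)
Q = 60 - 3 * P                  # the demand curve from the thread
PED = sp.diff(Q, P) * P / Q     # point price elasticity of demand
print(sp.diff(Q, P))            # -3
print(PED.subs(P, 15))          # -3
```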
2018-05-23 14:23:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8099956512451172, "perplexity": 377.1801009682048}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865679.51/warc/CC-MAIN-20180523141759-20180523161759-00477.warc.gz"}
http://stackoverflow.com/questions/1527071/tex-font-mapping/2702680
# TeX Font Mapping

I am using a package written on top of XeLaTeX. This package uses fontspec to specify fonts for different parts of your text: Latin, non-Latin, math mode, ... The package comes with several sample files. I was able to xelatex most of them that depend on regular ttf or otf files. However, one of them tries to set the font of digits in math mode to some font, say "NonLatin Digits". But the font doesn't seem to be a regular font. There are two files in the same directory called "nonlatindigits.map" and "nonlatindigits.tec". TECkit uses these mapping files to generate TeX fonts. However, for some reason it fails to create the files, and xelatex issues the following error message.

`````` kpathsea: Invalid fontname `NonLatin Digits', contains ' ' ! Font \zf@basefont="NonLatin Digits" at 10.0pt not loadable: Metric (TFM) file or ``````

The kpathsea program complains about the whitespace, but removing the whitespace does not solve the problem with loading the TFM file. Any clues what I am doing wrong? - What's the actual font file name? There have been discussions recently on the XeTeX mailing list about a bug that prevented loading font files with spaces in their names on Windows (look for it in the archives). If changing the file name works for you, you may have just run into this bug. The kpathsea invocation you see is only a side effect: it indicates that the font hasn't been found by the system libraries that XeTeX uses on top of TeX's default font lookup system, and XeTeX falls back to looking up a TFM file, the most basic file format. TECkit has nothing to do with fonts; it converts characters on the fly. In your case, I guess you could use a mapping to convert, say, Arabic numerals to Indic numerals (so that you don't need to input the latter in your source file directly). But it does not generate fonts in any way whatsoever. - The font I am trying to use is called "Parsi Digits". And the file I am trying to compile is a sample file provided by the xepersian (ctan.org/tex-archive/macros/xetex/latex/xepersian) package. The error occurs when processing the command \setdigitfont[Scale=1]{Parsi Digits}. Removing the space from the font name doesn't help. The package maintainer refused to answer my question, for he judged it to be a basic question from a newbie. Therefore, there should be something obvious going wrong. –  reprogrammer Oct 6 '09 at 18:34 OK, that's bad. Where can I download the font from? I could check if there is anything wrong with it. –  Arthur Reutenauer Oct 6 '09 at 18:54 The link to the package is provided in my comment above. I have it installed as part of my TeXLive distribution. –  reprogrammer Oct 6 '09 at 18:59 I have the package, too, but not any fonts. Some of them can be downloaded from the links provided in the package's README file, but not all, in particular not Parsi Digits. Where did you download yours from? –  Arthur Reutenauer Oct 6 '09 at 19:11 That's exactly the problem! I don't know where to get that Parsi Digits font from. –  reprogrammer Oct 6 '09 at 19:12 As others have mentioned, you should try XeTeX, and you should make sure you have the correct fonts installed. Use the command xelatex in place of pdflatex, to enable use of non-Latin characters in .tex files. You didn't say which font encoding you want, but the following two should work pretty well: Linux Libertine, and Computer Modern Unicode. The OpenSuSE package names are LinuxLibertine and cm-unicode; hopefully it's similar on other systems.
Add the following as the first imports in your document: ``````\usepackage{xunicode,fontspec,xltxtra} \usepackage[english]{polyglossia} % EXAMPLE: \setotherlanguages{russian} % set as "other" so English hyphenation active `````` and add the following after all other imports (so it won't be overridden by older package imports), ``````\defaultfontfeatures{Mapping=tex-text,Scale=MatchLowercase} \setromanfont{Linux Libertine O} \setsansfont{Linux Biolinum O} \setmonofont[Scale=0.9]{Courier New} `````` or, if you want Computer Modern fonts, ``````\setromanfont{CMU Serif} \setsansfont{CMU Sans Serif} \setmonofont{CMU Typewriter Text} `````` - With xetex or xelatex, the point is that you don't have to specify TeX fonts; you should use your system fonts. You should post the code and preamble of the parts where you are getting an error. Much like HTML+CSS, different TeX distros can render things slightly differently from one another. Minimally, your preamble should look something like this: `````` \documentclass[12pt,letterpaper]{article} \usepackage{fontspec}% provides font selecting commands \usepackage{xunicode}% provides unicode character macros \usepackage{xltxtra} % provides some fixes/extras \setromanfont[Mapping=tex-text]{Font Name} `````` What is the effect of `\defaultfontfeatures{Mapping=tex-text,Scale=MatchLowercase}`? –  Mark Richman Feb 14 at 19:57 Parsi Digits is a font that you currently do not have, and the error you are getting is because you do not have the font. Simply replace `Parsi Digits' with another font and it should all go fine. \setdigitfont is a command that makes digits in math mode Persian, and it can accept `Scale' as an option.
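For reference, here is a minimal sketch of the setup the last answer describes. The font name 'XB Zar' is only a stand-in for whatever Persian font is actually installed on your system:

``````\documentclass{article}
\usepackage{xepersian} % loads fontspec internally; compile with xelatex
% 'XB Zar' is an example font name; substitute any installed Persian font.
\settextfont{XB Zar}
\setdigitfont[Scale=1]{XB Zar}
\begin{document}
$1234 + 5678$
\end{document}
``````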
2014-09-17 14:26:46
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9414021372795105, "perplexity": 4559.055803388693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657123617.30/warc/CC-MAIN-20140914011203-00285-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://solvedlib.com/n/hw-do-the-proof-by-using-the-bilinear-property-of-inner,18733213
# HW: Do the proof by using the bilinear property of inner product

###### Question: HW: Use the bilinear property of the inner product to show that $\left\langle v-\sum_{i}\frac{\langle v,v_i\rangle}{\langle v_i,v_i\rangle}\,v_i,\;v_j\right\rangle=0$ for all $j = 1, \dots, n$. (A short numeric check follows the similar questions below.)

#### Similar Solved Questions

##### (5 points) A cell membrane receptor that binds growth hormones to promote cell division is mutated so that the intracellular domain is always in the active state, leading to excessive cell division even in the absence of signals. Which of the following describes this mutation at the cellular level? Loss-of-function mutation in a proto-oncogene that acts dominantly; gain-of-function mutation in a proto-oncogene that acts dominantly; gain-of-function mutation in a proto-oncogene that acts recessively; loss-of-func...

##### Q1.a)i An electromagnetic wave is travelling along a microstrip transmission line. Sketch the structure of the microstrip transmission line and its electric field and magnetic field distribution at a cross section of the microstrip line.

##### Question: A particular solution of a differential equation whose forcing term is of the form $A\cos(3t)$ [the equation and the answer choices are garbled in the original; the choices appear to be of the forms $y_p = A\sin(3t) + B\cos(3t)$ and $y_p = At\sin(3t) + Bt\cos(3t)$]

##### A. Which of the following shows a right-tailed test? Ha: p > 0.04, H0: p < 0.04; Ha: p < 0.04, H0: p > 0.04. B. 31% of people in a large country have sleepwalked at least once in their lives. A random sample of 200 people showed that 46 sleepwalked. Use a hypothesis test to determine if the proportion of people who have sleepwalked is different, and report the p-value in the context of the study at the level of significance α = 0.05. 1. Z test statistic = -2.6884; P-value = 0.0036; there is strong evidence...

##### The use of personal borrowing to change the overall amount of financial leverage to which an individual is exposed is called:

##### QUESTION 5: Evaluate the following limit. Enter your answer as a decimal rounded to four places. [the expression is garbled in the original; it involves $1-e^t$ and $\arctan$] QUESTION 6: Evaluate the following limit. Enter your answer as a decimal rounded to four places. $\lim\,[\ln(2 + x^2) - \ln(1 + x + x^2)]$
##### A simple random sample of 59 adults is obtained from a normally distributed population, and each person's red blood cell count (in cells per microliter) is measured. The sample mean is 5.27 and the sample standard deviation is 0.52. Use a 0.01 significance level and the given calculator d...

##### What would Vygotsky call a developmental level in a child's life when he or she is primed biologically and is socially ready to learn a new behavior?

##### Prove the statement using the $\varepsilon$, $\delta$ definition of limit. Given $\varepsilon > 0$, we need $\delta > 0$ such that if $0 < |x - 1| < \delta$, then $|f(x) - L| < \varepsilon$ [the specific function and the fill-in choices are garbled in the original]. So if we choose $\delta = \ldots$, then $|x - 1| < \delta$ implies $|f(x) - L| < \varepsilon$. Thus, the limit holds by the definition of limit.

##### Genetics - Question 4 [15 marks]: Sequence alignment is a critical tool for the analysis of genome data. Explain the purpose of alignment scores in identifying the best alignments. [5 marks] What is the general concept upon which all protein scoring matrices are based? [3 marks] What is the differen...

##### Describe the following: How is the heart arranged in terms of the pericardium anatomy, heart wall, and heart skeleton? Differentiate between semilunar and atrioventricular valves, as well as chambers, regarding structure and function. Describe the heart's blood vessels, great vessels and coronary vessels. Why are these significant? Relate the heart's electrophysiology to the conduction system. Draw/label the cardiac cell action potential and the heart's conduction system.

##### "(g)d jo pquS Jadlojidl 5 24 {y}d, wat '&; JC tasqus Jadord B 9 [ J! J! AO1J "Stas jq 4 pue [ 12] e81 T"z [garbled OCR; apparently an exercise about proper subsets and power sets]

##### Choose the sentence in which the italicized pronoun agrees in number with its italicized antecedent or antecedents.

##### Please solve this question from the book Principles of Lasers by Orazio Svelto, 5th edition, Chapter 2, Question 2.2: Instead of $\rho_\nu$, a spectral energy density $\rho_\lambda$ can be defined, $\rho_\lambda$ being such that $\rho_\lambda\,d\lambda$ gives the energy density for the e.m. waves of wavelength...
##### Discuss how a change initiative could be impacted by influence, control, innovation, and entrepreneurship.

##### Baby weights: The weight of male babies less than 2 months old in the United States is normally distributed with mean 12 pounds and standard deviation 3.7 pounds. Use the TI-84 calculator to answer the following; round the answers to four decimal places. Part 1: What proportion of babies weigh more than 13 pounds? The proportion of babies weighing more than 13 pounds is 0.3936. Part 2: What proportion of babies weigh more than 14 pounds? The proportion of babies weighing more than 14 pounds is 0.2946.

##### (e) $(P \to R) \land (Q \to R)$ is equivalent to $(P \land Q) \to R$.

##### Find the volume V of the solid obtained by rotating the region bounded by the given curves about the specified line: y = 4 + sec(x), [x-bounds garbled in the original], y = 6; about y = ... Sketch the region.

##### Problem 11 Intro: Your company's most recent income statement and balance sheet are given below. Income statement ($ million): Sales 25, Costs 20, Net income 5. Balance sheet ($ million): Current assets 10.8, Fixed assets 43.2, Total assets 54; Debt 16.2, Equity 37.8, Total 54. Sales, assets and costs are e...

##### CHEM 1405 Lab Take-home Quiz 2: Draw a separation scheme for a mixture that contains caffeine and KBr. Label the techniques utilized and the residues after each step. Use the information from the table at the end of this quiz. [remainder garbled in the original: a part about whether water would be expected to boil at a higher temperature in San Diego or the Himalaya]

##### The average growth of grass grown from a certain type of lawn seed using standard fertilizer is 2.6 inches per week. You wish to test the effects of several different fertilizers on the growth rate, and obtain the following results for patches of grass grown from that seed: Growth per week (inches) [data garbled in the original]. Use a two-tailed test with α = [value garbled]. Conduct the four steps for hypothesis testing and label each step. Calculate Cohen's d. Are the data sufficient to conclude that there is a significant difference betw...

##### Question 20: A bottle of krypton gas has a volume of 12.9 L at STP. How many moles of gas are in the bottle?
0.576 / 1.76 / 2.89 / 6.50

##### What is MCPB, and what is it used for?

##### Sixx AM Manufacturing has a target debt-equity ratio of 0.52. Its cost of equity is 20 percent, and its cost of debt is 10 percent. If the tax rate is 32 percent, the company's WACC is ______ percent. (Round your answer to 2 decimal places.)

##### Exercise 4-10 Entering data for closing entries and a post-closing trial balance (LO P2, P3): The adjusted trial balance for Salon Marketing Co. follows. Complete the four right-most columns of the table by (1) entering information for the four closing entries in the middle columns and (2) completing t...

##### A proton, starting from rest, is accelerated by a uniform electric field of 225 N/C. The distance (in m) the proton must travel to gain a kinetic energy of 14.4 × 10⁻¹⁶ J is: [answer choices garbled in the original]

##### If T = 298 K for all gases in separate containers with the volumes and pressures given, what is the final pressure when the valves are opened? (The connecting volume is negligible.) He: 3.00 L at 2.50 bar; Xe: 2.00 L at 1.50 bar; [third gas garbled in the original]: 1.00 L at 1.00 bar.

##### 16.18 Draw a structural formula for the product of each crossed aldol reaction and for the compound formed by dehydration of each aldol product. (See Examples 16.3 and 16.4.) (a) [structures garbled in the original: (CH3)3CCH..., CH3CCH3, ...CHO, CH2O]
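Back to the main question at the top of this page: a quick numeric sanity check of the orthogonality claim (my own sketch, assuming the $v_i$ form an orthogonal set, as in the standard version of this exercise):

```python
import numpy as np

rng = np.random.default_rng(0)
# An orthogonal set v_1, ..., v_3 in R^5 (columns of Q from a QR factorization).
Q, _ = np.linalg.qr(rng.standard_normal((5, 3)))
vs = [Q[:, i] for i in range(3)]
v = rng.standard_normal(5)

# Residual of v after subtracting its projection onto each v_i.
residual = v - sum((v @ vi) / (vi @ vi) * vi for vi in vs)
print([float(residual @ vj) for vj in vs])  # all ~0, up to rounding
```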
2022-08-13 05:54:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4293902814388275, "perplexity": 6179.7908790936}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571909.51/warc/CC-MAIN-20220813051311-20220813081311-00500.warc.gz"}
https://projecteuclid.org/euclid.adjm/1465472747
## African Diaspora Journal of Mathematics

### Rational Pairing Rank of a Map

Toshihiro Yamaguchi

#### Abstract

We define a rational homotopy invariant, the rational pairing rank $v_0(f)$ of a map $f:X\to Y$, which is a natural generalization of the rational pairing rank $v_0(X)$ of a space $X$ [16]. It is bounded above by the rational LS-category $cat_0(f)$ and below by an invariant $g_0(f)$ related to the rank of the Gottlieb group. It also admits a good estimate for a fibration $X\overset{j}{\to} E\overset{p}{\to} Y$, namely $v_0(E)\leq v_0(j) +v_0(p)\leq v_0(X) +v_0(Y)$.

#### Article information

Source: Afr. Diaspora J. Math. (N.S.), Volume 19, Number 1 (2016), 1-11.

Dates: First available in Project Euclid: 9 June 2016
2018-07-15 19:26:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8705297112464905, "perplexity": 1140.2636054174313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676588961.14/warc/CC-MAIN-20180715183800-20180715203800-00216.warc.gz"}
https://en.m.wikiquote.org/wiki/Howard_P._Robertson
# Howard P. Robertson

American mathematician and physicist

Howard Percy Robertson (January 27, 1903 – August 26, 1961) was an American mathematician and physicist known for contributions related to physical cosmology and the uncertainty principle. He was Professor of Mathematical Physics at the California Institute of Technology and Princeton University.

## Quotes

• We should, of course, expect that any universe which expands without limit will approach the empty de Sitter case, and that its ultimate fate is a state in which each physical unit—perhaps each nebula or intimate group of nebulae—is the only thing which exists within its own observable universe.
• As quoted by Gerald James Whitrow, The Structure of the Universe: An Introduction to Cosmology (1949)

### "On Relativistic Cosmology" (1928)

The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science (1928) Series 7, Vol. 5, Issue 31: Supplement

• The general theory of relativity considers physical space-time as a four-dimensional manifold whose line element coefficients $g_{\mu\nu}$ satisfy the differential equations $$G_{\mu\nu}=\lambda g_{\mu\nu}\qquad (1)$$ in all regions free from matter and electromagnetic field, where $G_{\mu\nu}$ is the contracted Riemann-Christoffel tensor associated with the fundamental tensor $g_{\mu\nu}$, and $\lambda$ is the cosmological constant.
• An "empty world," i.e., a homogeneous manifold at all points at which equations (1) are satisfied, has, according to the theory, a constant Riemann curvature, and any deviation from this fundamental solution is to be directly attributed to the influence of matter or energy.
• In considerations involving the nature of the world as a whole the irregularities caused by the aggregation of matter into stars and stellar systems may be ignored; and if we further assume that the total matter in the world has but little effect on its macroscopic properties, we may consider them as being determined by the solution of an empty world.
• The solution of (1), which represents a homogeneous manifold, may be written in the form: $$ds^{2}=\frac{d\rho^{2}}{1-\kappa^{2}\rho^{2}}-\rho^{2}(d\theta^{2}+\sin^{2}\theta\,d\phi^{2})+(1-\kappa^{2}\rho^{2})\,c^{2}d\tau^{2},\qquad (2)$$ where $\kappa=\sqrt{\lambda/3}$. If we consider $\rho$ as determining distance from the origin... and $\tau$ as measuring the proper-time of a clock at the origin, we are led to the de Sitter spherical world...

### Geometry as a Branch of Physics (1949)

included in Albert Einstein: Philosopher-Scientist, ed. Paul Arthur Schilpp.

• Euclidean geometry is a congruence geometry, or equivalently the space comprising its elements is homogeneous and isotropic; the intrinsic relations between... elements of a configuration are unaffected by the position or orientation of the configuration. ...[M]otions of Euclidean space are the familiar translations and rotations... made in proving the theorems of Euclid.
• [O]nly in a homogeneous and isotropic space can the traditional concept of a rigid body be maintained.
• That the existence of these motions (the "axiom of free mobility") is a desideratum, if not...
a necessity, for a geometry applicable to physical space, has been forcefully argued on a priori grounds by von Helmholtz, Whitehead, Russell and others; for only in a homogeneous and isotropic space can the traditional concept of a rigid body be maintained. • Euclidean geometry is only one of several congruence geometries... Each of these geometries is characterized by a real number ${\displaystyle K}$ , which for Euclidean geometry is 0, for the hyperbolic negative, and for the spherical and elliptic geometries, positive. In the case of 2-dimensional congruence spaces... ${\displaystyle K}$  may be interpreted as the curvature of the surface into the third dimension—whence it derives its name... • [W]e propose... to deal exclusively with properties intrinsic to the space... measured within the space itself... in terms of... inner properties. • Measurements which may be made on the surface of the earth... is an example of a 2-dimensional congruence space of positive curvature ${\displaystyle K={\frac {1}{R^{2}}}}$ ... [C]onsider... a "small circle" of radius ${\displaystyle r}$  (measured on the surface!)... its perimeter ${\displaystyle L}$  and area ${\displaystyle A}$ ... are clearly less than the corresponding measures ${\displaystyle 2\pi r}$  and ${\displaystyle \pi r^{2}}$ ... in the Euclidean plane. ...for sufficiently small ${\displaystyle r}$  (i.e., small compared with ${\displaystyle R}$ ) these quantities on the sphere are given by 1): ${\displaystyle L=2\pi r(1-{\frac {Kr^{2}}{6}}+...)}$ , ${\displaystyle A=\pi r^{2}(1-{\frac {Kr^{2}}{12}}+...)}$ • In spherical geometry the sum ${\displaystyle \sigma }$  of the three angles of a triangle (whose sides are arcs of great circles) is greater than two right angles [180°]; it can... be shown that this "spherical excess" is given by 2) ${\displaystyle \sigma -\pi =K\delta }$ where ${\displaystyle \delta }$  is the area of the spherical triangle and the angles are measured in radians (in which 180° = ${\displaystyle \pi }$  [radians]). Further, each full line (great circle) is of finite length ${\displaystyle 2\pi R}$ , and any two full lines meet in two points—there are no parallels! • [T]he space constant ${\displaystyle K}$ ... "curvature" may in principle at least be determined by measurement on the surface, without recourse to its embodiment in a higher dimensional space. • These formulae [in (1) and (2) above] may be shown to be valid for a circle or a triangle in the hyperbolic plane... for which ${\displaystyle K<0}$ . Accordingly here the perimeter and area of a circle are greater, and the sum of the three angles of a triangle are less, than the corresponding quantities in the Euclidean plane. It can also be shown that each full line is of infinite length, that through a given point outside a given line an infinity of full lines may be drawn which do not meet the given line (the two lines bounding the family are said to be "parallel" to the given line), and that two full lines which meet do so in but one point. • The value of the intrinsic approach is especially apparent in considering 3-dimensional congruence spaces... The intrinsic geometry of such a space of curvature ${\displaystyle K}$  provides formulae for the surface area ${\displaystyle S}$  and the volume ${\displaystyle V}$  of a "small sphere" of radius ${\displaystyle r}$ , whose leading terms are 3) ${\displaystyle S=4\pi r^{2}(1-{\frac {Kr^{2}}{3}}+...)}$ , ${\displaystyle V={\frac {4}{3}}\pi r^{3}(1-{\frac {Kr^{2}}{5}}+...)}$ . 
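The small-circle formulae quoted in 1) above make Robertson's point concrete: the space constant ${\displaystyle K}$ can be estimated from measurements made entirely within the space. As a brief editorial illustration (not part of Robertson's text), the Python sketch below inverts the leading-order perimeter formula $L=2\pi r(1-Kr^{2}/6+...)$ and tests it against a sphere of assumed radius:

```python
import math

def curvature_from_circle(r, perimeter):
    # Invert the leading-order expansion L = 2*pi*r*(1 - K*r**2/6 + ...)
    # to estimate the space constant K from intrinsic measurements.
    return 6.0 * (1.0 - perimeter / (2.0 * math.pi * r)) / r**2

R = 6371.0                               # assumed sphere radius, km (Earth-like)
r = 1000.0                               # geodesic radius of the measured circle, km
L = 2.0 * math.pi * R * math.sin(r / R)  # exact perimeter of such a circle on a sphere
print(curvature_from_circle(r, L), 1.0 / R**2)  # estimated K vs. true K = 1/R^2
```

The estimate agrees with $1/R^{2}$ up to the higher-order terms dropped from the expansion, echoing the remark above that the curvature may in principle be determined without recourse to an embedding space.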
• In all these congruence geometries, except the Euclidean, there is at hand a natural unit of length ${\displaystyle R={\frac {1}{K^{\frac {1}{2}}}}}$ ; this length we shall, without prejudice, call the "radius of curvature" of the space. • We have merely (!) to measure the volume ${\displaystyle V}$ of a sphere of radius ${\displaystyle r}$ or the sum ${\displaystyle \sigma }$ of the angles of a triangle of measured area ${\displaystyle \delta }$ , and from the results to compute the value of ${\displaystyle K}$ . • What is needed is a homely experiment which could be carried out in the basement with parts from an old sewing machine and an Ingersoll watch, with an old file of Popular Mechanics standing by for reference! This I am, alas, afraid we have not achieved, but I do believe that the following example... is adequate to expose the principles... • Let a thin, flat metal plate be heated... so that the temperature T is not uniform... clamp or otherwise constrain the plate to keep it from buckling... [and] remain [reasonably] flat... Make simple geometric measurements... with a short metal rule, which has a certain coefficient of expansion c... What is the geometry of the plate as revealed by the results of those measurements? ...[T]he geometry will not turn out to be Euclidean, for the rule will expand more in the hotter regions... [T]he plate will seem to have a negative curvature ${\displaystyle K}$ ... the kind of structure exhibited... in the neighborhood of a "saddle point." • What is the true geometry of the plate? ...Anyone examining the situation will prefer Poincaré's common-sense solution... to attribute to it a Euclidean geometry, and to consider the measured deviations... as due to the actions of a force (thermal stresses in the rule). ...On employing a brass rule in place of one of steel we would find that the local curvature is trebled—and an ideal rule (c = 0) would... lead to Euclidean geometry. • In what respect... does the general theory of relativity differ...? The answer is: in its universality; the force of gravitation in the geometrical structure acts equally on all matter. There is here a close analogy between the gravitational mass M...(Sun) and the inertial mass m... (Earth) on the one hand, and the heat conduction k of the field (plate)... and the coefficient of expansion c... on the other. ...The success of the general relativity theory... is attributable to the fact that the gravitational and inertial masses of any body are... rigorously proportional for all matter. • The field equation may... be given a geometrical foundation, at least to a first approximation, by replacing it with the requirement that the mean curvature of the space vanish at any point at which no heat is being applied to the medium—in complete analogy with... the general theory of relativity by which classical field equations are replaced by the requirement that the Ricci contracted curvature tensor vanish. • Footnote • Now it is the practice of astronomers to assume that brightness falls off inversely with the square of the "distance" of an object—as it would do in Euclidean space, if there were no absorption... We must therefore examine the relation between this astronomer's "distance" ${\displaystyle d}$ ... and the distance ${\displaystyle r}$ which appears as an element of the geometry. • All the light which is radiated...
will, after it has traveled a distance ${\displaystyle r}$ , lie on the surface of a sphere whose area ${\displaystyle S}$ is given by the first of the formulae (3). And since the practical procedure... in determining ${\displaystyle d}$ is equivalent to assuming that all this light lies on the surface of a Euclidean sphere of radius ${\displaystyle d}$ , it follows... ${\displaystyle 4\pi d^{2}=S=4\pi r^{2}(1-{\frac {Kr^{2}}{3}}+...);}$ whence, to our approximation 4) ${\displaystyle d=r(1-{\frac {Kr^{2}}{6}}+...),}$  or ${\displaystyle r=d(1+{\frac {Kd^{2}}{6}}+...).}$ • [T]he astronomical data give the number N of nebulae counted out to a given inferred "distance" ${\displaystyle d}$ , and in order to determine the curvature... we must express N, or equivalently ${\displaystyle V}$ , to which it is assumed proportional, in terms of ${\displaystyle d}$ . ...from the second of formulae (3) and... (4)... to the approximation here adopted, 5) ${\displaystyle V={\frac {4}{3}}\pi d^{3}(1+{\frac {3}{10}}Kd^{2}+...);}$ ...plotting N against... ${\displaystyle d}$  and comparing... with the formula (5), it should be possible operationally to determine the "curvature" ${\displaystyle K}$ . • This... is an outrageously over-simplified account of the assumptions and procedures... • Footnote • The search for the curvature ${\displaystyle K}$ indicates that, after making all known corrections, the number N seems to increase faster with ${\displaystyle d}$ than the third power, which would be expected in a Euclidean space, hence ${\displaystyle K}$ is positive. The space implied thereby is therefore bounded, of finite total volume, and of a present "radius of curvature" ${\displaystyle R={\frac {1}{K^{\frac {1}{2}}}}}$ which is found to be of the order of 500 million light years. Other observations, on the "red shift" of light from these distant objects, enable us to conclude with perhaps more assurance that this radius is increasing... • It was the work of... Friedmann, Robertson and Walker, which resulted in the general mathematical framework that is still used today when discussing relativistic cosmological models of a homogeneous and isotropic universe. • David J. Adams, Alan Cayless, Anthony W. Jones, Barrie W. Jones, Mark H. Jones, Robert J. A. Lambourne, Lesley I. Onuora, Sean G. Ryan, Elizabeth Swinbank, Andrew N. Taylor, An Introduction to Galaxies and Cosmology (2003) ed., Mark H. Jones, Robert J. Lambourne • [I]nvestigations of cosmological issues, whose only observational basis was the redshift observations. ...involved ...mostly of mathematicians like Howard Robertson and Cornelius Lanczos and astronomers with strong mathematical training such as Eddington, Lemaître, George McVittie, and William McCrea. They were mainly interested in how to apply general relativity to cosmological problems, which involved not only understanding cosmic dynamics but also solving the intricate problem of interpreting cosmological solutions to the Einstein equations, in particular separating time (which determined the evolution of the universe) from space (to which simplified assumptions concerning the structure of the universe, such as homogeneity and isotropy, were to be applied). • Alexander Blum, Roberto Lalli, Jürgen Renn, "The Reinvention of General Relativity: A Historiographical Framework for Assessing One Hundred Years of Curved Space-time," Isis, Vol. 106, No. 3 (Sept. 2015), pp. 598-620.
• Distinguished scientist, selfless servant of the national interest, courageous champion of the good and the right, warm human being, he gave richly to us and to all from his own great gifts. We are grateful for the years with him. We mourn the loss of his presence but rejoice in the legacy of his wisdom and strength. • Detlev Bronk (1961) as quoted by Jesse L. Greenstein, "Howard Percy Robertson 1903-1961: A Biographical Memoir," (1980) National Academy of Sciences. • Weyl published a third appendix to his Raum, Zeit, Materie, and an accompanying paper, where he calculated the redshift for the ‘de Sitter cosmology’, ${\displaystyle ds^{2}=-dt^{2}+e^{2{\sqrt {\frac {\Lambda }{3}}}\,t}(dx^{2}+dy^{2}+dz^{2})}$ , the explicit form of which would only be found later, independently by Lemaître and Robertson. • The basic cause and nature of cosmic expansion, along with its recently-observed acceleration, are significant problems of the standard model; so, considering the evidence that the acceleration is best described by pure ${\displaystyle \Lambda }$ , there is strong motivation to search for an alternative big bang model that would respect the pioneering concept of expansion, as a direct consequence of the ‘de Sitter effect’ in the modified Einstein field equations. It is therefore worth investigating the axiomatic basis of the Robertson-Walker (RW) line-element. • Daryl Janzen, "A Critical Look at the Standard Cosmological Picture" (Sept. 11, 2014) arXiv:1303.2549v3. • [I]n deriving the general line-element for the background geometry of FLRW [Friedmann–Lemaître–Robertson–Walker, sometimes called the Standard Model] cosmology, Robertson required four basic assumptions: i. a congruence of geodesics, ii. hypersurface orthogonality, iii. homogeneity, and iv. isotropy. i. and ii. are required to satisfy Weyl's postulate of a causal coherence amongst world lines in the entire Universe, by which every single event in the bundle of fundamental world lines is associated with a well-defined three-dimensional set of others with which it ‘really’ occurs simultaneously. However, it seems that ii. is therefore mostly required to satisfy the concept that synchronous events in a given inertial frame should have occurred simultaneously, against which I’ve argued... • Daryl Janzen, "A Critical Look at the Standard Cosmological Picture" (Sept. 11, 2014) arXiv:1303.2549v3. • Note.—The second part of this paper was considerably altered by me after the departure of Mr. Rosen for Russia since we had originally interpreted our results erroneously. I wish to thank my colleague Professor Robertson for his friendly assistance in the clarification of the original error. • Albert Einstein, "On Gravitational Waves," Journal of the Franklin Institute (1937) 223, pp. 43-54. • Important contributions to what would later appear as mainstream big bang cosmology were made by Americans Howard P. Robertson and Richard Tolman... Robertson (and independently, A. G. Walker in England) deduced in 1935 the most general form of the metric for a space-time satisfying the cosmological principle: the postulate that the universe is spatially homogeneous in its large scale appearance. This metric became generally known as the Robertson-Walker metric. Together with Tolman, Robertson pioneered the study of thermodynamics in the theory of the expanding universe. • Encyclopedia of Cosmology (Routledge Revivals): Historical, Philosophical, and Scientific Foundations of Modern Cosmology (2014) ed. Norriss S. Hetherington.
• Many of the theoretical investigations of cosmology in the 1920s were examinations of De Sitter's model, which is particular because it can be understood both as a static model (as De Sitter did) and as an expanding model (as became the view after 1930). It is clearly problematic to read the pre-1930 literature in light of later knowledge. 'Expanding' versions of the De Sitter universe were found by Cornelius Lanczos in 1922, Hermann Weyl in 1923, and Lemaître in 1925, but were not conceived as expanding in any real sense. These works, as well as later works by Howard Percy Robertson and Richard Chase Tolman in the United States, consisted in transforming De Sitter's line element in such a way that it formally became non-static, that is, included a term ${\displaystyle F(t)}$ referring to the time parameter. The metric, giving the distance in space-time between two neighboring points, would then be in the form ${\displaystyle ds^{2}=c^{2}dt^{2}-F(t)(dx^{2}+dy^{2}+dz^{2}).}$ • In 1928 Robertson found a non-static line element similar to the one Lemaître had found three years earlier. Also like Lemaître, he derived a linear relationship between apparent recessional velocities and distances, and he discussed it in relation to observation data. Within the same tradition was Tolman's 1929 derivation of a 'Hubble law', that is, a relationship of the form ${\displaystyle v=kr}$ , with ${\displaystyle v={\frac {d\lambda }{\lambda }}}$ . Robertson and Tolman generalized the De Sitter model to an arbitrary scale factor ${\displaystyle F(t)}$ , but they remained within the static paradigm and did not realize the significance of ${\displaystyle F(t)}$ . In a paper of 1929, Robertson wrote the general line element of what would later be known as the Robertson-Walker models and he even referred to Friedmann's work. And yet, although he had evidently studied Friedmann, he 'misread' him and failed to realize the significance of the expanding metric. • Helge Kragh, Robert W. Smith, "Who Discovered the Expanding Universe?" (2003) History of Science, Vol. 41, p.141-162. • Robertson and Tolman developed much of the mathematics of the expanding universe, but without concluding or predicting prior to 1930 that the universe actually expands. They were not discoverers or codiscoverers of the expanding universe (and never claimed that they were). • Helge Kragh, Robert W. Smith, "Who Discovered the Expanding Universe?" (2003) History of Science, Vol. 41, p.141-162. • Howard Percy Robertson was a postdoctorate student at Göttingen and Munich from 1925 to 1927. While in Göttingen he completed an important paper on relativistic cosmology in which he derived a relation between the velocity of nebulae and their distances. For the radius of the observable world he calculated ${\displaystyle R=2\times 10^{25}}$ m. Although he had a velocity-distance relation and referred to Slipher's redshifts, he did not conclude that the universe was in a state of expansion. • Helge Kragh, Masters of the Universe: Conversations with Cosmologists of the Past (2014) footnote, p. 83. • Robertson wrote an influential [1933] review of relativistic world models in which he specifically excluded those which have "arisen in finite time from the singular state R = 0." Although he included the Einstein-de Sitter paper in his bibliography, he did not mention it in the review. He also did not mention Lemaître's primeval-atom hypothesis. • Helge Kragh, Masters of the Universe: Conversations with Cosmologists of the Past (2014) footnote, p. 106.
Ref: Einstein, de Sitter, "On the relation between the expansion and the mean density of the universe" (1932) Proceedings of the National Academy of Sciences 18 (1): pp. 213–214. • As Daniel Kennefick explains in his article on the story behind the writing of the Einstein-Rosen gravitational paper, the singularity that Einstein and Rosen had encountered was an apparent singularity introduced by their choice of coordinate system, similar to the singularity one encounters when attempting to find the longitude of the North Pole. In fact, in his referee report Robertson had indicated that the singularity was removed by a change to a cylindrical coordinate system. • Silvan S. Schweber, Einstein and Oppenheimer: The Meaning of Genius (2008) p. 10. Referencing: Daniel Kennefick, "Einstein versus the Physical Review," Physics Today (2005) 58(9): pp. 43-46 & "On Gravitational Waves," Journal of the Franklin Institute (1937) 223, pp. 43-54. • In June 1936, Einstein and Rosen sent the paper... "Do Gravitational Waves Exist?" to The Physical Review... [which] rejected the paper, provoking Einstein's furious reaction. Einstein told the editor he... saw no reason to address the erroneous comments of his anonymous expert [Howard Percy Robertson] and... preferred to publish the paper elsewhere. ...Leopold Infeld arrived in Princeton to replace Rosen as... assistant. Einstein explained to him his proof of the non-existence of gravity waves. ...Infeld told Robertson [then professor of theoretical physics at Princeton] about Einstein's... paper[.] Robertson... found a trivial mistake [by] Infeld [and] clarified... the mistake in Einstein's explanation... The linearized approximation [led] to plane transverse gravitational waves... introduc[ing]... coordinate singularities... not real singularities. ...Robertson ...suggested ...the so-called Einstein-Rosen metric... be transformed... to cylindrical coordinates. ...the singularity can be regarded as describing a material source. The solution describe[s]...cylindrical... rather than plane gravitational waves. ...with Robertson's help (still not knowing it was Robertson who had [refereed] The Physical Review) ...Einstein ...revis[ed the] ...paper and added a section: "Rigorous Solution for Cylindrical Waves"... The new version of the paper was re-titled "On Gravitational Waves"... • Galina Weinstein, "Einstein and Gravitational Waves 1936-1938," (Feb. 16, 2016) arXiv:1602.04674 [physics.hist-ph], a summary of Section 1, Ch. 3 of General Relativity Conflict and Rivalries: Einstein's Polemics with Physicists (2016). Ref: "On Gravitational Waves," Journal of the Franklin Institute (1937) 223, pp. 43-54.
http://forums.devx.com/showthread.php?140834-need-help-with-ANT!!!!!&mode=hybrid
## need help with ANT!!!!!

Hi, I am using Ant through the Eclipse IDE. When I run the build script it gives me the following error:

BUILD FAILED: com.sun.tools.javac.Main is not on the classpath. Perhaps JAVA_HOME does not point to the JDK
Total time: 1 second

The problem is I have the JRE, and I have set the path as follows:

Java_Home = C:\Program Files\java\j2re1.4.2_01\bin
classpath = C:\Program Files\java\j2re1.4.2_01\bin;
Path = C:\Program Files\Java\j2re1.4.2_01\bin;

When I run Ant from the command prompt it even gives me an error saying "tools.jar not found!" I have a JRE; do I need to get the JDK, and if so, which one? Can I install it in the same folder C:\Program Files\java where the JRE is located, or should I store it somewhere different?
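A note on the conventional fix (an editorial addition, not part of the original thread): Ant needs a full JDK because the compiler class com.sun.tools.javac.Main lives in the JDK's tools.jar, which a JRE does not ship. JAVA_HOME should also point at the JDK's root folder, not at a bin directory. The layout below is a sketch with a hypothetical install path; the JDK can sit in its own folder next to the existing JRE:

```
JAVA_HOME = C:\Program Files\Java\j2sdk1.4.2_01    (JDK root - hypothetical path; note: no \bin)
Path      = %JAVA_HOME%\bin;...                    (prepend the JDK's bin directory)
tools.jar would then be found at %JAVA_HOME%\lib\tools.jar
```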
https://www.tutorialspoint.com/how-to-add-lifecycle-methods-to-a-class-in-reactjs
# How To Add Lifecycle Methods To a Class In ReactJS?

Every component in React has a lifecycle that incorporates several phases. Programmers usually think of this lifecycle as the component’s ‘lifetime’. The components experience the following events: Mounting, Updating, Unmounting, and Error handling. Mounting refers to the procedure of adding nodes to the DOM; updating requires programmers to alter and change these nodes in the DOM. Unmounting, on the other hand, removes nodes, and error handling tracks your code to ensure it works and is bug-free. These occurrences can be compared to a component's birth, development, and eventual demise. You can override multiple lifecycle methods in each React lifecycle phase to execute code at particular points in the process. With this in mind, let's shed some light on how to add the above lifecycle methods to a Class component in ReactJS.

## Detailed Insights On React Lifecycle Methods

As you know, mounting, updating, and unmounting are the primary React lifecycle methods. The methods used in each phase make it simpler to carry out common operations on the components. React developers can access these methods directly by extending React.Component in class-based components. Prior to React 16.8, managing lifecycle events required ES6 class-based components. In other words, if our code were already written using functional React components, we would need to rewrite those as classes that extend React.Component and include a render method. Only then could the three most popular lifecycle methods, componentDidMount, componentDidUpdate, and componentWillUnmount, be accessed.

### How To Use Local State And Extra Features With Ease?

In order to use local state along with extra features in React, you will first have to convert a functional component to a class component:

• Create an ES6 class of the same name that extends React.Component.
• Add an empty method render().
• Place the function's body in the render() method.
• Substitute props in the render() body with this.props.
• Delete any remaining empty function declarations.

render() {
   return (
      <div>
         <h1>Hello, world!</h1>
         <h2>It is {this.props.date.toLocaleTimeString()}.</h2>
      </div>
   );
}

The React component Clock is now defined as a class instead of a function. Every time an update occurs, the render method will be invoked, but as long as the <Clock/> element is rendered into the same DOM node, just one instance of the class will be used.

## Adding Lifecycle Methods to a Class Component

For applications that incorporate a multitude of components, it’s imperative to free up resources. When the clock is first rendered to the DOM, we want to start a timer. The React term for this is "mounting." Additionally, once the DOM created by the clock is deleted, we want to clear that timer. In React, this is referred to as "unmounting."

### Example

import React from 'react';

class Clock extends React.Component {
   constructor(props) {
      super(props);
      this.state = {date: new Date()};
   }
   componentDidMount() {
   }
   componentWillUnmount() {
   }
   render() {
      return (
         <div>
            <h1>Hello, world!</h1>
            <h2>It is {this.state.date.toLocaleTimeString()}.</h2>
         </div>
      );
   }
}

### Output

Hello, world!
It is 10:27:03 AM.

Once the component's output has been rendered, React invokes a particular function: componentDidMount(). Insert a timer there −

componentDidMount() {
   this.timerID = setInterval(
      () => this.tick(),
      1000
   );
}

It is further possible to insert more fields manually into the class component.
Programmers usually do this when they need to store something that isn't part of the data flow, such as a timer ID; while React sets up this.props itself and this.state has a special meaning, you are free to add such extra fields to the class manually. In the lifecycle function componentWillUnmount(), we shall deactivate the timer −

componentWillUnmount() {
   clearInterval(this.timerID);
}

The Clock component will execute the tick() method, which we will implement last, once every second. It will use this.setState() to schedule updates to the component's local state −

### Example

import React from 'react';
import ReactDOM from 'react-dom/client';

class Clock extends React.Component {
   constructor(props) {
      super(props);
      this.state = {date: new Date()};
   }
   componentDidMount() {
      this.timerID = setInterval(
         () => this.tick(),
         1000
      );
   }
   componentWillUnmount() {
      clearInterval(this.timerID);
   }
   tick() {
      this.setState({
         date: new Date()
      });
   }
   render() {
      return (
         <div>
            <h1>Hello, world!</h1>
            <h2>It is {this.state.date.toLocaleTimeString()}.</h2>
         </div>
      );
   }
}

const root = ReactDOM.createRoot(document.getElementById('root'));
root.render(<Clock/>);

### Output

Hello, world!
It is 10:30:12 AM.

The clock is now ticking every second.

## Using State Correctly

You should be aware of three aspects concerning setState().

## Do Not Modify State Directly

Modifying state directly won't re-render a component. For example −

// Wrong
this.state.comment = 'Hello';

// Correct
this.setState({comment: 'Hello'});

Only the constructor is capable of assigning this.state.

## State Updates May Be Asynchronous

You shouldn't rely on the values of this.props and this.state to compute the next state, because they may be updated asynchronously. For example, the following code may fail to update the counter −

// Wrong
this.setState({
   counter: this.state.counter + this.props.increment,
});

Instead, use the form of setState() that accepts a function rather than an object. The previous state will be passed as the first argument to that function, and the props present at the moment the update is applied will be passed as the second argument −

// Correct
this.setState((state, props) => ({
   counter: state.counter + props.increment
}));

Although we used an arrow function in the example above, normal functions also work −

// Correct
this.setState(function(state, props) {
   return {
      counter: state.counter + props.increment
   };
});

Your state might include a number of independent variables −

constructor(props) {
   super(props);
   this.state = {
      posts: [],
      comments: []
   };
}

After that, you may update each one separately using different setState() calls −

componentDidMount() {
   // fetchPosts() and fetchComments() stand for whatever data-loading
   // helpers the application provides
   fetchPosts().then(response => {
      this.setState({ posts: response.posts });
   });
   fetchComments().then(response => {
      this.setState({ comments: response.comments });
   });
}

The merging is shallow: this.setState({comments}) totally replaces this.state.comments but leaves this.state.posts unaltered. Thus, state is often considered local or encapsulated: other than the component that owns and controls it, no other component has access to it.

## Bottom Line

We have discussed everything you need to know to add lifecycle methods correctly to Class Components in React. Many programmers encounter difficulties while doing this because of the code and technicality involved. Therefore, make sure to follow the steps correctly and cross-check your code before running it.
https://chem.libretexts.org/Courses/Colorado_State_University_Pueblo/Elementary_Concepts_in_Physics_and_Chemistry/04%3A_Chapter_4_-_Molecules%2C_Bonding%2C_and_Forces/4.03%3A_Covalent_Bonding
# 4.3: Covalent Bonding

A second method by which atoms can achieve a filled valence shell is by sharing valence electrons with another atom. Thus fluorine, with one unpaired valence electron, can share that electron with an unshared electron on another fluorine to form the compound F2, in which the two shared electrons form a chemical bond holding the two fluorine atoms together. When you do this, each fluorine now has the equivalent of eight electrons in its valence shell: three unshared pairs and one pair that is shared between the two atoms. Note that when you are counting electrons, the electrons that are shared in the covalent bond are counted for each atom individually. A chemical bond formed by sharing electrons between atoms is called a covalent bond. When two or more atoms are bonded together utilizing covalent bonds, the compound is referred to as a molecule. There is a simple method, given below, that we can use to construct Lewis diagrams for diatomic and for polyatomic molecules:

• Begin by adding up all of the valence electrons in the molecule. For F2, each fluorine has seven, giving a total of 14 valence electrons.
• Next, draw your central atom. For a diatomic molecule like F2, both atoms are the same, but if several different atoms are present, the central atom will be the one to the left (or lower) in the periodic table.
• Next, draw the other atoms around the central atom, placing two electrons between the atoms to form a covalent bond.
• Distribute the remaining valence electrons, as pairs, around each of the outer atoms, so that they are all surrounded by eight electrons.
• Place any remaining electrons on the central atom.
• If the central atom is not surrounded by an octet of electrons, construct multiple bonds with the outer atoms until all atoms have a complete octet.
• If there is an odd number of valence electrons in the molecule, leave the remaining single electron on the central atom.

Let’s apply these rules to the Lewis diagram for chlorine gas, Cl2. There are 14 valence electrons in the molecule. Both atoms are the same, so we draw them next to each other and place two electrons between them to form the covalent bond. Of the twelve remaining electrons, we now place six around one chlorine (to give an octet) and then place the other six around the other chlorine (our central atom). Checking, we see that each atom is surrounded by an octet of valence electrons, and so our structure is complete. $:\underset{..}{\overset{..}{Cl}}\cdot \cdot \underset{..}{\overset{..}{Cl}}:$ All of the Group 7A elements (the halogens) have valence shells with seven electrons, and all of the common halogens exist in nature as diatomic molecules: fluorine, F2; chlorine, Cl2; bromine, Br2; and iodine, I2 (astatine, the halogen in the sixth period, is a short-lived radioactive element and its chemical properties are poorly understood). Nitrogen and oxygen, Group 5A and 6A elements, respectively, also exist in nature as diatomic molecules (N2 and O2). Let’s consider oxygen; oxygen has six valence electrons (a Group 6A element). Following the logic that we used for chlorine, we draw the two atoms and place one pair of electrons between them, leaving 10 valence electrons. We place three pairs on one oxygen atom, and the remaining two pairs on the second (our central atom). Because we only have six valence electrons surrounding the second oxygen atom, we must move one pair from the other oxygen and form a second covalent bond (a double bond) between the two atoms.
Doing this, each atom now has an octet of valence electrons. $:\underset{..}{O}::\overset{..}{O}:$ Nitrogen has five valence electrons. Sharing one on each atom gives the first intermediate, where each nitrogen is surrounded by six electrons (not enough!). Sharing another pair, each nitrogen is surrounded by seven electrons, and finally, sharing the third, we get a structure where each nitrogen is surrounded by eight electrons: a noble gas configuration (the “octet rule”). Nitrogen is a very stable molecule and relatively unreactive, being held together by a strong triple covalent bond. $:N\vdots \vdots N:$ As we have constructed Lewis diagrams, thus far, we have strived to achieve an octet of electrons around every element. In nature, however, there are many exceptions to the “octet rule”. Elements in the first row of the periodic table (hydrogen and helium) can only accommodate two valence electrons. Elements below the second row in the periodic table can accommodate 10, 12, or even 14 valence electrons (we will see an example of this in the next section). Finally, in many cases molecules exist with single unpaired electrons. A classic example of this is oxygen gas (O2). We have previously drawn the Lewis diagram for oxygen with an oxygen-oxygen double bond. Physical measurements on oxygen, however, suggest that this picture of bonding is not quite accurate. The magnetic properties of oxygen, O2, are most consistent with a structure having two unpaired electrons in the configuration shown below: $:\underset{.}{\overset{..}{O}}\cdot \cdot \underset{.}{\overset{..}{O}}:$ In this Lewis diagram, each oxygen atom is surrounded by seven electrons (not eight). This electronic configuration may explain why oxygen is such a reactive molecule (reacting with iron, for example, to form rust); the unpaired electrons on the oxygen molecule are readily available to interact with electrons on other elements to form new chemical compounds. Another notable exception to the “octet rule” is the molecule NO (nitrogen monoxide). Combining one nitrogen (Group 5A) with one oxygen (Group 6A) gives a molecule with eleven valence electrons. There is no way to arrange eleven electrons without leaving one electron unpaired. Nitric oxide is an extremely reactive molecule (by virtue of its unshared electron) and has been found to play a central role in biochemistry as a reactive, short-lived molecule involved in cellular communication. $\cdot \underset{.}{\overset{..}{N}}\cdot \cdot \underset{.}{\overset{..}{O}}:$ As useful as Lewis diagrams can be, chemists tire of drawing little dots, and, for a shorthand representation of a covalent bond, a short line (a line-bond) is often drawn between the two elements. Whenever you see atoms connected by a line-bond, you are expected to understand that this represents two shared electrons in a covalent bond. Further, the unshared pairs of electrons on the bonded atoms are sometimes shown, and sometimes they are omitted. If unshared pairs are omitted, the chemist reading the structure is assumed to understand that they are present. $:\underset{..}{\overset{..}{F}}\cdot \cdot \underset{..}{\overset{..}{F}}:\; or\; :\underset{..}{\overset{..}{F}}-\underset{..}{\overset{..}{F}}:\; or\; F-F$ $:N\vdots \vdots N:\; or\; :N\equiv N:\; or\; N\equiv N$
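The electron-counting step of the procedure above is easy to mechanize. As a brief editorial sketch (not part of the original text; the element table is a small assumed subset of the main groups), the following Python reproduces the counts used in this section: 14 for F2 and the odd count of 11 for NO:

```python
# Valence electrons contributed by some main-group elements (their group numbers).
valence = {"H": 1, "C": 4, "N": 5, "O": 6, "F": 7, "Cl": 7}

def total_valence_electrons(atoms):
    # Step 1 of the Lewis-diagram procedure: sum the valence electrons
    # contributed by every atom in the molecule.
    return sum(valence[a] for a in atoms)

print(total_valence_electrons(["F", "F"]))   # 14, as for F2 and Cl2 above
print(total_valence_electrons(["N", "O"]))   # 11 -> odd, so one electron stays unpaired
```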
https://www.physicsforums.com/threads/distance-between-energy-states.832455/
# Distance between energy states? 1. Sep 14, 2015 I'm doing some personal research on how matter interacts with radiation. Specifically, I am looking through the treatment of Bransden and Joachain. I've taken two semesters of quantum in the past (a while ago), but now I'm coming across something that I've either never seen or never stopped to question. In discussing Einstein coefficients, I keep seeing this $|\mathbf{r}_{ba}|^2$ term, where $b$ and $a$ both refer to energy states, with $E_b > E_a$. Now, I'm having some trouble visualizing any significance to this term. It is defined in the usual way, as the sum of $|x_i|^2$, but I just have no idea what this means, as energy levels don't ever really have definite locations, as far as I understand. Could anyone help shed some intuition on what this "distance" means? Also, sorry in advance if this belongs in the homework/coursework forum, I wasn't quite sure. 2. Sep 14, 2015 ### Jilang Do you have a page number, as there are nearly 700! 3. Sep 14, 2015 ### fzero That term is the matrix element for the dipole operator, which is proportional to the coordinate vector $\mathbf{r}$. It is not exactly a physical distance, but it is analogous to the classical idea that a dipole moment can be formed if a charge is displaced with respect to a neutral configuration according to $\mathbf{P} = \sum_i q_i (\mathbf{r}_i - \mathbf{r}_e)$. You can find a derivation of that matrix element at the chapter http://quantummechanics.ucsd.edu/ph130a/130_notes/node417.html. Briefly, the interaction of the atom with a photon is determined in terms of the electromagnetic field of the photon through an interaction Hamiltonian that is proportional to $\mathbf{A} \cdot \mathbf{p}$ where $\mathbf{A}$ is the EM vector potential and $\mathbf{p}$ is the momentum operator. For a photon of specific wavevector $\mathbf{k}$, the vector potential can be expanded as a plane wave $a e^{i\mathbf{k}\cdot \mathbf{r}}$, but the dipole approximation assumes that $\mathbf{k}\cdot \mathbf{r} \ll 1$ so that we can treat $e^{i\mathbf{k}\cdot \mathbf{r} }\sim 1$. Then the interaction operator is essentially the momentum operator, but we can write $$-\frac{i}{m} \mathbf{p} = [H_0, \mathbf{r}],$$ where $H_0$ is the unperturbed Hamiltonian (see the notes for a bit more justification of this). In an expectation value then $$\langle b| \mathbf{p} |a\rangle \sim \langle b| [H_0, \mathbf{r}]|a\rangle = \langle b| (E_b - E_a) \mathbf{r}|a\rangle ,$$ so the nontrivial data are the matrix elements $\mathbf{r}_{ab} = \langle b| \mathbf{r}|a\rangle$. 4. Sep 14, 2015 Jilang, yes, this stuff starts on page 166. Fzero, thank you for the extensive response! Is there a tie to the existence of an actual dipole anywhere, or is the dipole analogy just because of the dot product between the bra and ket vectors, forming a similar shape? 5. Sep 14, 2015 ### fzero In classical EM scattering, the plane wave $e^{i \mathbf{k}\cdot \mathbf{r}}$ is also expanded into spherical harmonics and this type of expansion is known as a multipole expansion. The lowest order term in the expansion for the vector potential is called the dipole term, so the same terminology is used here in the quantum case. This dipole nature is required because of the vector-nature of the potential. The initial and final states need not have electric dipole moments of their own. The transition matrix element is really a hybrid object that measures how the initial and final states are related. 6.
Sep 14, 2015 Ok, so the reference to a dipole is only mentioned to denote the level of approximation. And the term I originally brought up, $|\mathbf{r}_{ab}|^2$, is simply the result of applying this dipole-approximation operator to the original state. Is that right? 7. Sep 15, 2015 ### fzero Almost. The matrix element $\langle b|\mathbf{r} |a\rangle$ also depends on the final state $|b\rangle$. The way to think about it is the following. You will recall that the quantity $|\langle \phi | \psi \rangle|^2$ can be thought of as the probability that a measurement finds the system in the state $|\phi\rangle$ given that it was in the state $|\psi\rangle$ at some point before the measurement. Here we can think of applying the dipole operator to the initial state to give some intermediate state $|\psi \rangle = \mathbf{r} |a\rangle$. Then $|\langle b| \mathbf{r} | a\rangle|^2= |\langle b| \psi \rangle|^2$ is the probability to measure the final state given that intermediate state. Typically the initial and final states of interest are eigenstates of the unperturbed Hamiltonian, while the intermediate state can be expressed as a linear combination of essentially all energy eigenstates with the allowed angular momentum quantum numbers.
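To make the discussion above tangible: the matrix element $\langle b|\mathbf{r}|a\rangle$ is just an overlap integral between two eigenstates weighted by position, and it can be computed numerically for any model system. Here is a minimal Python sketch (an editorial illustration, not from the thread) using the particle in a box, for which the exact answer, $-16L/9\pi^2$, is known:

```python
import numpy as np

L = 1.0                              # box width (assumed units)
x = np.linspace(0.0, L, 200001)
dx = x[1] - x[0]

def psi(n):
    # Normalized particle-in-a-box eigenfunction psi_n(x).
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

# x_ba = <2|x|1>: the quantity whose square controls the strength of the
# dipole transition between the ground and first excited states.
x_ba = np.sum(psi(2) * x * psi(1)) * dx
print(x_ba, -16.0 * L / (9.0 * np.pi**2))   # numeric vs. analytic value
```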
https://math.stackexchange.com/questions/1362415/expected-value-and-the-standard-simple-regression-model
# Expected value and the standard simple regression model Given the standard simple regression model: $y_i = \beta_0 + \beta_1 x_i + u_i$ What is the expected value of the estimator $\hat\beta_1$ in terms of $x_i, \beta_0$ and $\beta_1$ when $\hat\beta_1=\sum x_i y_i/\sum x_i^2$? • What are your ideas? – João Ramos Jul 15 '15 at 18:12 • You must substitute $y_i$ into the expression to get $\hat\beta_1=\sum x_i(\beta_0 + \beta_1 x_i + u_i)/\sum x_i^2$ and then split up the equation into separate summations, but I'm unclear how to separately treat the expected value of $\sum x_i u_i / \sum x_i^2$ – Jan Aneill Jul 15 '15 at 18:19 \begin{align} \text{E} \left ( \frac{\sum_{i=1}^{n} x_i Y_i}{\sum_{j=1}^{n} x_j^2} \right ) &= \left ( \sum_{j=1}^{n} x_j^2 \right )^{-1} \sum_{i=1}^{n} x_i \text{E}(Y_i) \\ &= \left ( \sum_{j=1}^{n} x_j^2 \right )^{-1} \sum_{i=1}^{n} x_i (\beta_0 + \beta_1 x_i) \\ &= \beta_0 n\bar{x} \left ( \sum_{j=1}^{n} x_j^2 \right )^{-1} + \beta_1 . \end{align} • You used the right formula for $\hat \beta_1$, if there is no intercept. But here we have $\beta_0$. – callculus Jul 15 '15 at 18:24 • Correct, which is what the poster asked about. $\beta_0$ is an assumed part of the true model, not necessarily the model being estimated. The apparent point is that the estimator is unbiased if $\beta_0 = 0$ or $\bar{x} = 0$. – dsaxton Jul 15 '15 at 18:26 • If the estimators of the parameters are unbiased, then it doesn't imply that $\beta_0=0$; it only implies $E(\hat \beta_0)=\beta_0$. – callculus Jul 15 '15 at 18:37 • I don't understand your comment. – callculus Jul 15 '15 at 18:44 Denote $x=(x_1,\dots,x_n)$. Assuming that $\mathbb{E}[u_i|x]=0$ $$\mathbb{E}[\hat\beta_1|x]=\mathbb{E}\left[\frac{\sum x_i y_i}{\sum x_i^2}\mid x\right]=\mathbb{E}\left[\frac{\sum x_i (\beta_0 + \beta_1 x_i + u_i)}{\sum x_i^2}\mid x\right]=\frac{\beta_0\sum x_i+\beta_1\sum x_i^2+\sum x_i\mathbb{E}[u_i|x]}{\sum x_i^2}=\beta_1+\beta_0\frac{\sum x_i}{\sum x_i^2}$$ and $$\mathbb{E}[\hat\beta_1]=\beta_1+\beta_0\mathbb{E}\left[\frac{\sum x_i}{\sum x_i^2}\right]$$ • I want to clear up my understanding of this topic. From the result (which you have proved nicely), how would one derive $\mathbb{E}[\hat{\beta_1}]=\beta_1$, if that were true? – vbm Jun 26 '18 at 15:33 • @thevbm $\hat{\beta}_1$ is biased in this case (i.e. its expectation is not $\beta_1$). – d.k.o. Jun 27 '18 at 3:35
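As a numerical sanity check on the bias formula above, the estimator can be simulated directly. This is an editorial sketch (not part of the original thread); the regressor values, error distribution, and parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1 = 2.0, 3.0
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5])      # fixed regressors (assumed values)

estimates = []
for _ in range(100_000):
    u = rng.normal(0.0, 1.0, size=x.size)    # errors with E[u|x] = 0
    y = beta0 + beta1 * x + u
    estimates.append(np.sum(x * y) / np.sum(x**2))   # the no-intercept estimator

print(np.mean(estimates))                            # simulated E[beta1_hat]
print(beta1 + beta0 * np.sum(x) / np.sum(x**2))      # theoretical value, ~4.09
```

With $\beta_0\ne 0$ the sample mean of the estimates matches the biased value $\beta_1+\beta_0\sum x_i/\sum x_i^2$ rather than $\beta_1$, as the answers above conclude.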
https://electronics.stackexchange.com/questions/19230/how-do-i-determine-the-requirements-of-a-switching-fet
# How do I determine the requirements of a switching FET? A part we're using in a half-bridge LLC soft-switching DC-DC converter is getting hard to find (STW20NM60, 600 V, 20 A, 0.25 Ω, 192 W), and I need to find a replacement. What parameters do I need to care about? The first thing to check is the voltage rating. The voltage supplying the converter is nominally 400 V, but can temporarily rise to 460 V or so. 600 V parts should be plenty, right? According to Selection of MOSFETs in Switch Mode DC-DC Converters, it's probably dissipating around 10 W, so the >140 W power ratings of every TO-247 component I've found seem perfectly adequate. So how about the current rating? I'm not sure how to calculate this. The AN2492 example circuit is for a 400 W supply, and they use a 14 A FET. I think the actual peak current through them is more like 3 A, though. Why such a big margin? What other parameters are important? I think a "fast diode" version would be best for reliability and efficiency: This revolutionary Power MOSFET associates a new vertical structure to the company's strip layout and associates all advantages of reduced on-resistance and fast switching with an intrinsic fast-recovery body diode. It is therefore strongly recommended for bridge topologies, in ZVS phase-shift converters. The gate characteristics are important. The total gate charge will dictate the size of the gate resistor you'll need, and subsequently the power dissipation due to drive. It will also affect the switching speed of the MOSFET, which means if you choose a replacement with different gate characteristics, you'll also likely have to play with the gate resistor values in order to achieve similar switching characteristics. The gate threshold voltage will also play a role in the switching speed of the device. Make sure your replacement is in the same ballpark as the original. Other parasitics ($C_{iss}$, $C_{oss}$) are important in soft-switched topologies, but for an ordinary half-bridge shouldn't be too crucial. 600V should be fine for a 460V half-bridge, since the FETs only see Vin worst-case. Big-current MOSFETs tend to have low $R_{DS(on)}$ values, and are often chosen to minimize conduction losses. There may be high peak currents during abnormals (transformer short, etc.) which may make the part appear to be over-rated at first glance. Calculating the peak current without knowing things like the transformer inductance, switching frequency, etc. can be hard - it might be easier to just stick a current probe in there and measure it (or measure across any current-sense resistor you may find). Fast body diodes may improve robustness (assuming there aren't discrete diodes on the PCB in parallel with the body diodes) and wouldn't hurt in my estimation, so long as their current rating is sufficient. Again, this is more important for soft-switching topologies. • I don't have a current probe, but there is a current-sense resistor for overcurrent protection. What's the worst-case scenario for current testing? Sep 8 '11 at 16:01 • Use a differential probe if possible, to avoid inadvertently earthing the primary. It's much safer than floating a scope. You should be able to use limited bandwidth (20 MHz) and still get a meaningful signal. Sep 8 '11 at 17:18 • Well... now that I have a diff probe, this looks pretty ugly. At idle, according to the 0.12 ohm current sensing resistor, there are spikes reaching +9 A and -21 A during the rising edge of the low side gate driver, and smaller current spikes during the falling edge. 
Shoot-through current? If the scope graphs in app notes are to be believed, these spikes should not be here. imgur.com/a/mEIag Sep 9 '11 at 20:50 • I am skeptical of those spikes. There could be a lot of current freewheeling around during the bridge dead-times which could be picked up on your probe. I'd make sure that BWL is on and try to establish the maximum 'ramp' (non-spike) current across the resistor. Sep 9 '11 at 21:24 • The SOA is generally based on power dissipation, yes - it's a function of how well the channel is established (how 'on' the FET is), the drain current and the current characteristic (DC, pulsed, etc.). The drain-source voltage influences how 'on' the FET is for a given gate-source voltage (see electronics.stackexchange.com/questions/18885/…) but ultimately, it's a thermal situation - if the part doesn't overheat with the existing cooling, it's most likely fine. Sep 12 '11 at 15:48
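To attach rough numbers to the parameters discussed above, a first-cut loss estimate is often enough when vetting a candidate replacement. This is an editorial sketch (not from the original thread); the gate charge, drive voltage, switching frequency, and RMS current are assumed illustrative values, with only the on-resistance taken from the part quoted in the question:

```python
rds_on  = 0.25     # ohm, STW20NM60 on-resistance (from the question)
i_rms   = 3.0      # A, assumed RMS drain current
qg      = 75e-9    # C, assumed total gate charge
v_drive = 12.0     # V, assumed gate drive voltage
f_sw    = 100e3    # Hz, assumed switching frequency

p_conduction = i_rms**2 * rds_on     # I^2*R conduction loss per device
p_gate       = qg * v_drive * f_sw   # power spent charging/discharging the gate
print(p_conduction, p_gate)          # ~2.3 W and ~0.09 W
```

Comparing these figures (plus estimated switching losses) between the original part and a candidate replacement gives a quick read on whether the substitution is thermally safe.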
https://www.shaalaa.com/question-bank-solutions/the-speed-time-graph-car-shown-following-figure-graphical-representation-motion-distance-time-graphs_7703
# The Speed-time Graph for a Car is Shown in the Following Figure - CBSE Class 9 - Science

Concept: Graphical Representation of Motion - Distance-time Graphs

#### Question

The speed-time graph for a car is shown in the following figure.

(a) Find out how far the car travels in the first 4 seconds. Shade the area on the graph that represents the distance travelled by the car during that period.

(b) Which part of the graph represents uniform motion of the car?

#### Solution

(a) The shaded area, which is equal to $\frac{1}{2}\times 4\ \text{s}\times 6\ \text{m/s}=12\ \text{m}$, represents the distance travelled by the car in the first 4 s.

(b) The part of the graph in red colour between 6 s and 10 s represents uniform motion of the car.
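The same area-under-the-graph computation can be written as a general trapezoidal sum, which also handles speed-time graphs that are not simple triangles. A brief editorial sketch; the sample points are the values read off the figure:

```python
t = [0.0, 4.0]   # time samples, s
v = [0.0, 6.0]   # speed samples, m/s (rises linearly to 6 m/s at t = 4 s)

# Distance = area under the v-t curve, computed here as a trapezoidal sum.
distance = sum(0.5 * (v[i] + v[i + 1]) * (t[i + 1] - t[i]) for i in range(len(t) - 1))
print(distance)  # 12.0 m, matching the shaded triangle
```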
https://www.nature.com/articles/s41598-020-73007-1?error=cookies_not_supported&code=189c57e9-7747-4e3b-8e59-e6602592ba4b
## Introduction Soluble surfactants play a fundamental role in many microfluidic applications1. For instance, it is well-known that surfactants can stabilize both foams and emulsions due to Marangoni convection effects2,3,4. The surface viscosity of surfactant monolayers is also believed to play a significant role in such stabilization. In fact, the drainage time during the coalescence of two bubbles/droplets can considerably increase due to the monolayer viscosity5. However, there are serious doubts about whether small-molecule surfactants commonly used in microfluidic applications exhibit measurable surface viscosities. For instance, Zell et al.6 reported that the surface shear viscosity of Sodium Dodecyl Sulfate (SDS) was below the sensitivity limit of their experimental technique ($\sim 10^{-8}$ Pa s m). This raises doubts about the role played by surface shear rheology in the stability of foams and emulsions treated with soluble surfactants. The disparity among the reported values of shear and dilatational viscosities of both soluble and insoluble surfactants reflects the complexity of measuring such properties. The lack of precise information about these values, as well as the mathematical complexity of the calculation of the surface viscous stresses, has led most experimental and theoretical works in microfluidics to neglect those stresses. However, one can reasonably expect surface viscosity to considerably affect the dynamics of interfaces at sufficiently small spatiotemporal scales, even for nearly-inviscid surfactants7. A paradigmatic example of this is the pinch-off of an interface covered with surfactant7, where both the surface-to-volume ratio and the surface velocity can diverge for times and distances sufficiently close to this singularity. In the pinching of a Newtonian liquid free surface, the system spontaneously approaches a finite-time singularity, which offers a unique opportunity to observe the behavior of fluids at arbitrarily small length and time scales. This property and its universal character (insensitivity to both initial and boundary conditions) turn this problem into an ideal candidate for testing our knowledge of fundamental aspects of fluid dynamics. Both theoretical8,9,10,11,12 and experimental7,13,14,15 studies on free surface pinch-off have traditionally considered the dependence of the free surface minimum radius, $R_{\text{min}}$, on the time to the pinching, $\tau$, as an indicator of the relevant forces arising next to the pinching spatiotemporal coordinate. For small viscous effects, the thinning of the liquid thread passes through an inertio-capillary regime characterized by the power law \begin{aligned} R_{\text{min }}=A \left( \frac{\sigma }{\rho }\right) ^{1/3} \tau ^{2/3}, \end{aligned} (1) where $\sigma$ and $\rho$ are the liquid surface tension and density, respectively9,16. The dimensionless prefactor A can exhibit a complex, nonmonotonic behavior over many orders of magnitude in $\tau$. In fact, its asymptotic value $A\simeq 0.717$ is never reached because there are very long-lived transients, after which viscous effects take over17. The addition of surfactant confers a certain degree of complexity on Newtonian liquids, which may lead to unexpected behaviors during the pinch-off of their free surfaces. For instance, Marangoni stress can produce microthread cascades during the breakup of interfaces loaded with surfactants18. 
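For a feel for the scales Eq. (1) implies, the law can be evaluated directly. A minimal sketch, assuming water-like properties and the prefactor $A\simeq 0.55$ reported below for DIW:

# Inertio-capillary thinning law (1); all values are assumptions for
# illustration (water-like σ and ρ, prefactor A ≈ 0.55 as measured for DIW).
σ = 0.072                                 # surface tension, N/m
ρ = 1000.0                                # density, kg/m^3
A = 0.55
R_min(τ) = A * (σ / ρ)^(1/3) * τ^(2/3)    # τ in s, result in m

R_min(1e-6)   # ≈ 2.3e-6 m: a couple of microns 1 μs before pinch-off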
It is still a subject of debate whether surfactants are convected away from the pinching region. In that case, the system would follow the self-similar dynamics of clean interfaces at times sufficiently close to the breakup7,13,19,20,21,22,23,24,25. The persistence of a surfactant monolayer in the pinching of an interface potentially entails the appearance of several effects. The first and probably most obvious is so-called solutocapillarity, i.e., the local reduction of the surface tension due to the presence of surface-active molecules24,26. The other effect that has been accounted for is the Marangoni stress induced by the surface tension gradient due to the uneven distribution of surfactant along the free surface12,18,19,20,22,27,28,29,30,31,32. However, some other effects might be considered in the vicinity of the pinching region as well. Among them, the shear and dilatational surface viscosities have already been shown to considerably affect the breakup of pendant drops covered with insoluble (viscous) surfactants7. SDS is one of the most commonly used surfactants in microfluidic experiments. The adsorption/desorption times of SDS are several orders of magnitude larger than the characteristic time of the breakup of free surfaces enclosing low-viscosity liquids. This allows one to regard SDS as an insoluble surfactant, which considerably simplifies the problem. Under the insolubility condition, bulk diffusion and adsorption/desorption processes can be ruled out. Due to its small molecular size, the SDS monolayer is assumed to exhibit Newtonian behavior33. In addition, the sphere-to-rod transition of SDS micelles (and its associated viscoelastic behavior) does not take place unless some specific salt is added to the solution34. Therefore, viscoelastic effects are not expected to come up even for concentrations larger than the cmc. Surface viscosities of small-size surfactant molecules, such as SDS, are believed not to affect the breakage of a pendant drop due to their small values. However, as mentioned above, the surface-to-volume ratio diverges in the vicinity of the pinching region, and, therefore, surface viscous effects can eventually dominate both inertia and viscous dissipation in the bulk of that region. In addition, the surface tension is bounded between the values corresponding to the clean free surface and the maximum packing limit, while the surface velocity can diverge at the pinch-off singularity. This suggests that surface viscous stresses (which are proportional to the surface velocity gradient) can become comparable with, or even greater than, Marangoni stress (which is proportional to the surface tension gradient) in the pinching region for times sufficiently close to the breakup. One can hypothesize that surface viscous stresses can eventually have a measurable influence on the evolution of the free surface even for very low-viscosity surfactants. This work aims to test this hypothesis. The comparison between numerical simulations and experimental data will allow us to determine upper bounds for both the shear and dilatational viscosities of SDS. We will propose a scaling law which reflects the balance between the driving capillary force and the resistant surface viscous stresses in the last stage of the free surface breakup. ## Results and discussion In this work, experiments were conducted with unprecedented spatiotemporal resolution to determine the free surface minimum radius as a function of the time to the pinching. 
The experimental results were compared with a numerical solution of the full Navier–Stokes equations which includes the effects of the shear and dilatational viscosities. The experimental procedure, theoretical model, and numerical method are described in the "Methods" section. Figure 1 shows images of the pinch-off of a drop of deionized water (DIW), DIW+SDS 0.8cmc, and DIW+SDS 2cmc. A microthread forms next to the pinching point when the surfactant is added. The breakup of that microthread produces a tiny subsatellite droplet 1–2 $\upmu$m in diameter. This droplet is significantly smaller than that observed in previous experiments with 5-cSt silicone oil in the absence of surfactant, which seems to confirm that the silicone oil subsatellite droplet was formed by viscoelastic effects35. Figure 2 shows the free surface minimum radius, $R_{\text{min }}$, as a function of the time to the pinching, $\tau$, for experiments conducted with two feeding capillary radii $R_0$ (see "Methods" section). The agreement among the results obtained for the same liquid shows both the high reproducibility of the experiments and the universal character (independence of $R_0$) of $R_{\text{min }}(\tau )$ for the analyzed time interval. In fact, the differences between the results obtained with $R_0=115$ and 205 $\upmu$m are smaller than the effect attributed to the surface viscosities, as will be described below. The results for DIW follow the scaling law (1) with $A\simeq 0.55$. As can be seen in Fig. 3, there is a remarkable agreement between the experiments and numerical simulations for the pure DIW case for times to the pinching as small as $\sim 300$ ns, which constitutes a stringent validation of both experiments and simulations. When SDS is dissolved in water, it creates a monolayer which substantially alters the pinch-off dynamics. The function $R_{\text{min }}(\tau )$ takes smaller values than in the pure DIW case over the entire process due to the reduction of the surface tension. More interestingly, if only solutocapillarity and Marangoni convection are considered in the numerical simulations (blue solid lines), there is a measurable deviation with respect to the experimental results for $R_{\text{min }}(\tau )\lesssim 5$ $\upmu$m. Specifically, the free surface in the experiment evolves towards its pinching more slowly than in the numerical simulation. We added surface viscous stresses (see "Methods" section) to the simulation to reproduce the entire range of experimental data. To this end, we set one of the surface viscosities to zero and modulated the other. In this way, one can establish upper bounds on both the surface shear $\mu _1^{S*}$ and dilatational $\mu _2^{S*}$ viscosity at the cmc (see "Methods" section). The numerical results fit the experimental measurements for $\mu _1^{S*}=1.2 \times 10^{-9}$ Pa s m and $\mu _2^{S*}=0$ (Fig. 3-left) or $\mu _1^{S*}=0$ and $\mu _2^{S*}=6\times 10^{-7}$ Pa s m (Fig. 3-right). As can be observed, the optimum value of the dilatational viscosity $\mu _2^{S*}$ is more than two orders of magnitude larger than that of the shear viscosity $\mu _1^{S*}$. This means that the effect of the dilatational viscosity is much smaller than that of the shear viscosity. If one assumes that the values of both viscosities are commensurate with each other, the dilatational viscosity plays a negligible role in the filament thinning. 
This result has practical consequences because it means that the breakup of a pendant drop can be used to measure the shear surface viscosity of a nearly-inviscid surfactant monolayer. The value $\mu _1^{S*}=1.2 \times 10^{-9}$ Pa s m is consistent with the results obtained by Zell et al.6, who concluded that the shear viscosity of SDS in DIW must take values below $10^{-8}$ Pa s m (the sensitivity limit of their technique). Figure 4 shows the values of the axial distribution of the Marangoni stress M and tangential shear viscous stress SV, the surfactant surface concentration ${\widehat{\Gamma }},$ and the free surface radius $R/R_0$ for DIW+SDS 0.8cmc. Here, \begin{aligned} \text {M}\equiv \mathbf{t}\cdot \varvec{\nabla }^S{\hat{\sigma }},\quad \text {SV}\equiv \mathbf{t}\cdot \{\varvec{\nabla }^S[-\text {Oh}_1^S(\varvec{\nabla }^S\cdot \mathbf{v})]+2\varvec{\nabla }^S\cdot (\text {Oh}_1^S{{\mathbf {\mathsf{{D}}}}}^S)\}, \end{aligned} (2) where $\text {Oh}_{1}^S$ is the superficial Ohnesorge number defined in terms of the surface shear viscosity (see "Methods" section). The relative importance of the shear viscosity increases as the minimum radius decreases. The presence of shear viscosity slightly reduces the magnitude of the Marangoni stress. The viscous surface stress hardly alters the surfactant distribution and the free surface shape. As mentioned in the Introduction, there is still a certain controversy about whether surfactants are convected away from the pinching region7,13,19,20,21,22,23,24,25. Our results show that, when Marangoni and surface viscous stresses are taken into account, the surfactant is not swept away from the thread neck in the time interval analyzed (${\widehat{\Gamma }}\gtrsim {0.7}$ in this region). These stresses operate in different ways but cooperate to keep the surfactant in the vicinity of the pinching point. Marangoni stress tries to restore the initially uniform surfactant concentration, while surface viscosity opposes the variation of the surface velocity and, therefore, the extensional flow responsible for the surfactant depletion that would occur in the absence of Marangoni and viscous stresses. Interestingly, the free surface shape for $\mu _1^{S*}=\mu _2^{S*}=0$ is practically the same as that with the adjusted value of $\mu _2^{S*}$. This indicates that surface viscosity simply delays the time evolution of that shape. In fact, the values of the minimum radius obtained with and without surface viscosity significantly differ from each other when they are calculated at the same time to the pinching. For instance, $R_{\text{min }}=0.24$ and 0.42 $\upmu$m at $\tau \simeq 0.25$ $\upmu$s for $\{\mu _1^{S*}=1.2 \times 10^{-9}$ Pa s m, $\mu _2^{S*}=0\}$ and $\mu _1^{S*}=\mu _2^{S*}=0$, respectively. However, the free surface shapes are practically the same if they are compared when the same value $R_{\text{min }}=0.24$ $\upmu$m of the minimum radius is reached. In addition, the surfactant density distribution is not considerably affected by the surface viscosity. We can conclude that the surface viscosities of the SDS monolayer hardly alter the satellite droplet diameter and the amount of surfactant trapped in it. In this sense, solutocapillarity and Marangoni convection are the major factors associated with the surfactant12. These results differ from those obtained for a much more viscous surfactant7. We now study how the scaling of the minimum radius depends on the surfactant viscosities. 
In general, we have $$R_{\text{min }}=f(\tau ,\mu _{1,2}^S)$$. Assume that we can write this equation in the form $$R_{\text{min }}=R_s H(\tau /\tau _s)$$, where $$R_s$$ and $$\tau _s$$ are the length and time scales associated with the surface viscosities, respectively. We suppose that these scales depend on the viscosities as \begin{aligned} R_s=A (\mu _{1,2}^{S*})^{\alpha }, \quad \tau _s=B (\mu _{1,2}^{S*})^{\beta }. \end{aligned} (3) The cross-over function $$H(\xi )$$ behaves as $$H(\xi )\sim \xi ^{2/3}$$ for $$\xi \gg 1$$ (inviscid limit) and $$H(\xi )\sim \xi ^{\gamma }$$ for $$\xi \ll 1$$ (viscous regime), with a crossover at $$\xi \sim 1$$. Therefore, $$R_{\text{min }}=A B^{-2/3} (\mu _{1,2}^S)^{\alpha -2\beta /3}\tau ^{2/3}$$ in the inviscid limit. Assuming that $$R_{\text{min }}\sim \tau ^{2/3}$$ in that limit, we conclude that $$\alpha =2\beta /3$$. The value of the exponent $$\beta$$ can be guessed from the balance of forces. Both Marangoni and surface viscous stresses delay the free surface pinch-off (Fig. 3), acting against the driving capillary force. For sufficiently small values of $$R_{\text{min }}$$, the effect of surface viscous stresses becomes comparable to that caused by Marangoni stress (Fig. 4). The value of $$R_{\text{min }}$$ below which this occurs decreases as the surface viscosities decrease. Therefore, we expect surface viscous stresses to be commensurate with the driving capillary pressure in the pinch-off region for $$R_{\text{min }}\rightarrow 0$$. The balance between the capillary pressure and the normal surface viscous stresses in Eq. (10) yields $$\sigma _0/R_s\sim \mu _{1,2}^{S*}/(R_s\tau _s)$$, where we have taken into account that the variation of surface velocity scales as $$(R_s/\tau _s)/R_s$$ due to the continuity equation. The above balance allows us to conclude that $$\beta =1$$, and therefore $$\alpha =2/3$$. According to our analysis, \begin{aligned} \frac{R_{\text{min }}}{(\mu _{1,2}^{S*})^{2/3}}\sim \left( \frac{\tau }{\mu _{1,2}^{S*}}\right) ^{\gamma }, \end{aligned} (4) in the viscous regime. According to our previous results (Fig. 3), we can assume that the dilatational viscosity plays a negligible role. Then, we have \begin{aligned} \frac{R_{\text{min }}}{(\mu _{1}^{S*})^{2/3}}\sim \left( \frac{\tau }{\mu _{1}^{S*}}\right) ^{\gamma }, \end{aligned} (5) in the surface viscosity-dominated regime. Figure 5 shows the results scaled with those exponents. The simulations show the transition from the inertio-capillary regime $$R_{\text{min }}\sim \tau ^{2/3}$$ to the asymptotic behavior given by the power law $$\gamma =1$$. The asymptotic behavior $$R_{\text{min }}\sim \tau$$ coincides with that recently derived by Wee et al.36. Figure 6 shows the axial distribution of the capillary pressure Pc and normal shear viscous stress $$\widehat{\text {SV}}$$ for DIW+SDS 0.8cmc at three instants, as indicated by the value of $$R_{\text{min }}$$. Here, \begin{aligned} \text {Pc}=-(\varvec{\nabla }^S\cdot \mathbf{n}){\hat{\sigma }}, \quad \widehat{\text {SV}}=\text {Oh}_1^S(\varvec{\nabla }^S\cdot \mathbf{n})(\varvec{\nabla }^S\cdot \mathbf{v}). \end{aligned} (6) We consider the shear viscous stress $$\widehat{\text {SV}}$$ because the results indicate that shear viscosity plays a more significant role than the dilatational one. The normal shear viscous stress becomes comparable with the capillary pressure as $$R_{\text{min }}\rightarrow 0$$. To summarize, we studied both numerically and experimentally the breakup of a pendant water droplet loaded with SDS. 
We measured a delay of the droplet breakup with respect to that predicted when only solutocapillarity and Marangoni stress are accounted for. This delay is attributed to the role played by surface viscosities. When Marangoni and surface viscous stresses are accounted for, surface convection does not sweep away the surfactant from the thread neck, at least in the time interval analyzed. The results show that surface viscous stresses have little influence on both the free surface position and the surfactant distribution along the free surface. Therefore, the size of the satellite droplet and the amount of surfactant accumulated in it are hardly affected by the surface viscosities. These results differ from those obtained for a much more viscous surfactant7. As the free surface approaches its breakup, the inertio-capillary regime gives way to one in which surface viscous stresses become commensurate with the driving capillary pressure. We have proposed a scaling law to account for the effect of surface viscosities on $R_{\text{min }}(\tau )$ in this last regime. In the presence of surfactant, both the simulations and experiments show the formation of a quasi-cylindrical filament near the pinching point for $\tau \lesssim 0.1$ $\upmu$s (see Figs. 1, 4c). This filament is the precursor of the subsatellite droplet formed later on in the experiments. For $\tau \lesssim 0.1$ $\upmu$s, a bead seems to protrude from the filament in the experiments, which gives rise to the formation of the subsatellite droplet. The temporal resolution of the image acquisition system does not allow us to resolve this process or to determine the instant at which the filament bulges. In the simulations, we did not observe the filament protrusion preceding the formation of the subsatellite droplet. Therefore, discrepancies between the simulations and experiments associated with the growth of subsatellite droplets can arise for $\tau \lesssim 0.1$ $\upmu$s. The surface viscosities are estimated by fitting the numerical solution to the experiments for $\tau \gtrsim 1$ $\upmu$s. Therefore, this fitting is not expected to be affected by those discrepancies. However, Figs. 4, 5 and 6 show numerical results for times to the pinching down to 0.1–0.2 $\upmu$s. There can be differences between the experiments and simulations for those times. These differences can be attributed not only to the spatial resolution of the numerical method, but also to possible physical effects not accounted for in the governing equations and brought to light by the extremely small spatial and temporal scales, such as surface-active impurities on the free surface37, non-linear contributions to the dependency of the surface viscosities on the surfactant concentration, and interfacial rheology. The pinching of an interface is a singular phenomenon that allows us to test theoretical models under extreme conditions. The vanishing spatiotemporal scales reached by the system as the interface approaches its breakup unveil physical effects hidden in phenomena occurring on much larger scales. This work is an example of this. Surface viscous stresses become relevant in the vicinity of the pinching region long before thermal fluctuations become significant38,39, even for practically inviscid surfactants, such as SDS. Besides, the effect of the dilatational surface viscosity on the thinning has been shown to be negligible with respect to the shear viscosity. 
In this sense, the surfactant-laden pendant droplet can be seen as a very sensitive surfactometer to determine the values of the surface shear viscosity, which constitutes a difficult problem40. A series of experiments for different surfactant concentrations and needle radii may lead to accurate measurements of $$\mu _1^{S}(\Gamma )$$ characterizing the behavior of low-viscosity surfactants. ## Methods ### Theoretical model Consider a liquid drop of density $$\rho$$ and viscosity $$\mu$$ hanging on a vertical capillary (needle) of radius $$R_0$$ due to the action of the (equilibrium) surface tension $$\sigma _0$$ (Fig. 7a). In this section, all the variables are made dimensionless with the needle radius $$R_0$$, the inertio-capillary time $$t_0=(\rho R_0^3/\sigma _0)^{1/2}$$, the inertio-capillary velocity $$v_0=R_0/t_0$$, and the capillary pressure $$\sigma _0/R_0$$. The velocity $$\mathbf{v}(\mathbf{r},t)$$ and reduced pressure $$p(\mathbf{r},t)$$ fields are calculated from the continuity and Navier–Stokes equations \begin{aligned}&{\varvec{\nabla }}\cdot \mathbf{v}=0, \end{aligned} (7) \begin{aligned}&\frac{\partial \mathbf{v}}{\partial t}+\mathbf{v}\cdot {\varvec{\nabla }}\mathbf{v}=-{\varvec{\nabla }}p+{\varvec{\nabla }}\cdot \mathbf{T}, \end{aligned} (8) respectively, where $$\mathbf{T}=\text {Oh}[{\varvec{\nabla }}\mathbf{v}+({\varvec{\nabla }}{} \mathbf{v})^T]$$ is the viscous stress tensor, and $$\text {Oh}=\mu (\rho \sigma _0 R_0)^{-1/2}$$ is the volumetric Ohnesorge number. These equations are integrated over the liquid domain of (dimensionless) volume V considering the non-slip boundary condition at the solid surface, the anchorage condition at the needle edge, and the kinematic compatibility condition at the free surface. Neglecting the dynamic effects of the surrounding gas, the balance of normal and tangential stresses at the free surface yields \begin{aligned} -p+B\, z+\mathbf{n}\cdot \mathbf{T}\cdot \mathbf{n}=\mathbf{n}\cdot \varvec{\tau }^S, \quad \mathbf{t}\cdot \mathbf{T}\cdot \mathbf{n}=\mathbf{t}\cdot \varvec{\tau }^S, \end{aligned} (9) where $$B=\rho g R_0^2/\sigma _0$$ is the Bond number, g the gravitational acceleration, $$\mathbf{n}$$ the unit outward normal vector, $$\mathbf{t}$$ the unit vector tangential to the free surface meridians, and \begin{aligned} {\varvec{\tau }}^S=-\mathbf{n}(\varvec{\nabla }^S\cdot \mathbf{n}){\hat{\sigma }}+ \varvec{\nabla }^S{\hat{\sigma }} -\mathbf{n}(\varvec{\nabla }^S\cdot \mathbf{n})\left( \text {Oh}_2^S- \text {Oh}_1^S\right) (\varvec{\nabla }^S\cdot \mathbf{v}) +\varvec{\nabla }^S[\left( \text {Oh}_2^S-\text {Oh}_1^S \right) (\varvec{\nabla }^S\cdot \mathbf{v})]+2\varvec{\nabla }^S\cdot (\text {Oh}_1^S{{\mathbf {\mathsf{{D}}}}}^S), \end{aligned} (10) is the surface stress tensor41. 
Here, $${{\mathbf {\mathsf{{ D}}}}}^S=1/2\, [\varvec{\nabla }^S \mathbf{v}\cdot {{\mathbf {\mathsf{{ I}}}}}^S+{{\mathbf {\mathsf{{I}}}}}^S\cdot (\varvec{\nabla }^S \mathbf{v})^T]$$, $$\varvec{\nabla }^ S$$ is the tangential intrinsic gradient along the free surface, $$\mathbf{v}$$ the (3D) fluid velocity on the free surface, $${\mathbf {\mathsf{{I}}}}^S$$ is the tensor that projects any vector on that surface, $${\widehat{\sigma }}\equiv \sigma /\sigma _0$$ is the ratio of the local value $$\sigma$$ of the surface tension to its equilibrium value $$\sigma _0$$, $$\text {Oh}_{1,2}^S=\mu _{1,2}^S(\rho \sigma _0 R_0^3)^{-1/2}$$ are the superficial Ohnesorge numbers defined in terms of the surface shear and dilatational viscosities $$\mu _1^S$$ and $$\mu _2^S$$, respectively. The surface viscosities are expected to depend on the surfactant surface concentration. For the sake of simplicity, we assume the linear relationships $$\mu _{1,2}^S=\mu _{1,2}^{S*}{\widehat{\Gamma }}/{\widehat{\Gamma }}_{\text{cmc }}$$, where $$\mu _{1,2}^{S*}$$ are the surfactant viscosities at the cmc. In addition, $${\widehat{\Gamma }}\equiv \Gamma /\Gamma _0$$ and $${\widehat{\Gamma }}_{\text{cmc }}\equiv \Gamma _{\text{cmc }}/\Gamma _0$$, where $$\Gamma$$ and $$\Gamma _{\text{cmc }}$$ are the surfactant surface concentration and its value at the cmc, respectively, both in terms of the equilibrium value $$\Gamma _0$$. Therefore, \begin{aligned} \text {Oh}_{1,2}^S=\text {Oh}_{1,2}^{S*} \frac{{\widehat{\Gamma }}}{{\widehat{\Gamma }}_{\text{cmc }}}, \end{aligned} (11) where $$\text {Oh}_{1,2}^{S*}=\mu ^{S*}_{1,2}(\rho \sigma _0 R_0^3)^{-1/2}$$ are the superficial Ohnesorge numbers at the cmc. To calculate the surfactant surface concentration, we take into account that the droplet breakup time is much smaller than the characteristic adsorption–desorption times, and, therefore, surfactant solubility can be neglected over the breakup process. In this case, one must consider the equation governing the surfactant transport on the free surface: \begin{aligned} \frac{\partial {\widehat{\Gamma }}}{\partial t}+{\varvec{\nabla }}^S\cdot ({\widehat{\Gamma }}{\mathbf {v}}^S)+{\widehat{\Gamma }}{} \mathbf{n}\cdot ({\varvec{\nabla }}^{S}\cdot \mathbf{n})\mathbf{v}=\frac{1}{\text {Pe}^S}\, {\varvec{\nabla }}^{S2}{\widehat{\Gamma }}, \end{aligned} (12) where Pe$$^S$$ = $$R_0^2/(t_0 {{{\mathcal {D}}}}^S)$$ and $${{\mathcal {D}}}^S$$ are the surface Peclet number and diffusion coefficient, respectively. The equation of state $${\widehat{\sigma }}({\widehat{\Gamma }})$$ is obtained from experimental data as explained below. The free surface becomes saturated for $${\widehat{\Gamma }}\simeq {\widehat{\Gamma }}_{\text{cmc }}$$. To reproduce this effect in the simulations, if $${\widehat{\Gamma }}$$ exceeds $${\widehat{\Gamma }}_{\text{cmc }}$$ at some point and time, we set $${\widehat{\Gamma }}={\widehat{\Gamma }}_{\text{cmc }}$$ at that point and time. ### Numerical simulation The theoretical model is numerically solved by mapping the time-dependent liquid region onto a fixed numerical domain through a coordinate transformation. The transformed spatial domains were discretized using 11 Chebyshev spectral collocation points in the transformed radial direction and 5001 equally spaced collocation points in the transformed axial direction. The axial direction was discretized using fourth-order finite differences. Second-order backward finite differences were used to discretize the time domain42. 
The time step was adapted in the course of the simulation according to the formula $\Delta t=0.025 R_{\text{min }}/v_0$. To deal with the free surface overturning taking place right before the droplet breakup, a quasi-elliptic transformation43 was applied to generate the mesh. To trigger the pendant drop breakup process, a very small force was applied to a stable shape with a volume just below the critical one. This perturbation was expected to affect neither the pendant drop dynamics close to the free-surface pinch-off nor the formation of the satellite droplet. The time-dependent mapping of the physical domain does not allow the algorithm to surpass the free surface pinch-off, and therefore the evolution of the satellite droplet cannot be analyzed. The breakup time in the simulation was calculated from the linear extrapolation of the last $N_b=10$ values of $R_{\text{min }}(t)$. We verified that the results are practically the same for the time interval analyzed in this study when the total number of grid points is doubled (see Supplementary Information). We checked that the value of $N_b$ does not significantly affect the curve $R_{\text{min }}(\tau )$ over the time interval considered in our analysis (see Supplementary Information). ### Experimental method The experimental method is similar to that used by Rubio et al.44 to study the extensional flow of very weakly viscoelastic polymer solutions. In the experimental setup (Fig. 7b), a cylindrical feeding capillary (A), $R_0=115$ $\upmu$m in outer radius, was placed vertically. To analyze the role of the capillary size, we also conducted experiments with $R_0=205$ $\upmu$m. A pendant droplet was formed by injecting the liquid at a constant flow rate with a syringe pump (Harvard Apparatus PHD 4400) connected to a stepping motor. We used a high-precision orientation system and a translation stage to ensure the correct position and alignment of the feeding capillary. Digital images of the drop were taken using an ultra-high-speed video camera (kirana-5M) (B) equipped with optical lenses (an Optem HR $\times$ 50 magnification zoom-objective and a NAVITAR $\times$ 12 set of lenses) (C) (Fig. 7c). As explained below, the images were acquired either at $5 \times 10^6$ fps with a magnification of 101.7 nm/pixel or at $5 \times 10^5$ fps with a magnification of 156 nm/pixel. The camera could be displaced both horizontally and vertically using a triaxial translation stage (D) with one of its horizontal axes (axis x) motorized (THORLABS Z825B) and controlled by the computer, which allowed us to set the droplet-to-camera distance with an error smaller than 29 nm. The camera was illuminated with a laser (SI-LUX 640, specialised imaging) (E) synchronized with the camera, which reduced the effective exposure time down to 100 ns. The camera was triggered by an optical trigger (SI-OT3, specialised imaging) (F), equipped with optical lenses (G) and illuminated with cold white backlight (H). All these elements were mounted on an optical table with a pneumatic anti-vibration isolation system (I) to damp the vibrations coming from the building. In the experiment, a pendant droplet hanging on the feeding capillary was inflated by injecting the liquid at 1 ml/h. The triple contact line was anchored to the outer edge of the capillary. The drop reached its maximum volume stability limit after around 20 s. 
We analyzed images of the quasi-static process with the Theoretical Image Fitting Analysis (TIFA)46 method to verify that the surface tension right before the droplet breakup was the same (within the experimental uncertainty) as that measured at equilibrium. In this way, one can ensure that the surfactant surface concentration corresponded to the prescribed volumetric concentration at equilibrium. This conclusion can be anticipated from the fact that the characteristic surfactant adsorption time is much smaller than the droplet inflation time. When the maximum volume stability limit was reached, the droplet broke up spontaneously. We recorded 180 images at $5 \times 10^6$ fps of the final stage of the breakup process within a spatial window of $94 \times 78$ $\upmu$m. This experiment was repeated several times to assess the degree of reproducibility of the experimental results. The flow rate at which the pendant droplet is inflated was reduced to 0.1 ml/h to verify that this parameter did not affect the final stage of the breakup process. Besides, 180 images of a spatial window of $144 \times 120$ $\upmu$m were taken at $5 \times 10^5$ fps to describe the process on a larger scale. We selected SDS in deionized water (DIW) because it is a solution widely used in experiments and very well characterized. The dependence of the surface tension on the surface surfactant concentration $\Gamma$ has been determined from direct measurements (Fig. 7d)45. We use the fit \begin{aligned} \sigma =10^3\frac{-17.94\,\Gamma +60.76}{\Gamma ^2-240.9\,\Gamma +841.8}, \end{aligned} (13) to those experimental data in our simulations. In this equation, $\sigma$ and $\Gamma$ are measured in mN/m and $\upmu$mol/m$^2$, respectively. It should be noted that there is no theoretical justification for the above equation of state. It simply represents an accurate approximation for the numerical simulations. Other equations may be equally valid for our purposes. Table 1 shows some physical properties of SDS in DIW. The shear $\mu _1^{S*}$ and dilatational $\mu _2^{S*}$ surface viscosities of aqueous solutions of SDS at the cmc have been widely measured with different methods over the last decades. Zell et al.6 reported the surface shear viscosity to be below $10^{-8}$ Pa s m (the sensitivity limit of their technique). Other authors have measured values up to five orders of magnitude higher than that upper bound47,48. Table 2 shows the values of the superficial Ohnesorge numbers, Boussinesq numbers $\text {Bq}_{1,2}=\mu _{1,2}^S/(\mu \ell _c)$, and surface Peclet number calculated from the values shown in Table 1. The superficial Ohnesorge numbers are much smaller than the volumetric one, $\text {Oh}\simeq 0.02$, which indicates that the superficial viscosities play no significant role on a scale given by the feeding capillary radius $R_0$. The Boussinesq numbers are defined in terms of the characteristic length $\ell _c\equiv 1$ $\upmu$m of the pinching region (see "Results and discussion" section). Due to the smallness of this length, superficial viscous stresses may become comparable with the bulk ones, and, therefore, may produce a measurable effect on that scale. The value of the Peclet number indicates that surfactant surface diffusion is negligible at the beginning of the droplet breakup. The Peclet number defined in terms of $\ell _c$ and the corresponding capillary time $(\rho \ell _c^3/\sigma _0)^{1/2}$ takes values of the order of $10^3$–$10^4$. 
Therefore, one can expect surface diffusion to play a secondary role on that scale too.
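As a numerical companion to the definitions above, here is a minimal Julia sketch that evaluates the model's dimensionless groups and the equation of state (13). The values of ρ, σ0 and μ are assumed, water-like inputs (the text quotes Oh ≈ 0.02 with the lower, SDS-laden σ0); R0 and the fitted shear viscosity are taken from the text.

# Dimensionless groups of the model for illustrative, assumed inputs.
ρ   = 1000.0     # kg/m^3 (assumed for DIW)
σ0  = 0.072      # N/m (assumed clean-water value; lower with SDS)
μ   = 1.0e-3     # Pa s (assumed bulk viscosity of water)
R0  = 115e-6     # m, feeding capillary radius from the text
μ1S = 1.2e-9     # Pa s m, fitted shear surface viscosity at the cmc

t0   = sqrt(ρ * R0^3 / σ0)          # inertio-capillary time scale
Oh   = μ / sqrt(ρ * σ0 * R0)        # volumetric Ohnesorge number, ~0.01 here
Oh1S = μ1S / sqrt(ρ * σ0 * R0^3)    # superficial Ohnesorge number, ≪ Oh

# Equation of state (13): σ in mN/m, Γ in μmol/m^2.
σ_fit(Γ) = 1e3 * (-17.94Γ + 60.76) / (Γ^2 - 240.9Γ + 841.8)
σ_fit(0.0)    # ≈ 72.2 mN/m, the clean-water surface tension, as expected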
https://astronomy.stackexchange.com/questions/38686/are-binary-stars-with-only-one-visible-sibling-common-enough-to-contribute-notic/38689
Are binary stars with only one visible sibling common enough to contribute noticeably to dark matter?

A single-lined spectroscopic binary is a binary system in which only one star can be detected. Are such systems common enough to contribute noticeably to dark matter?

The luminosity of normal stars is a strongly increasing function of mass, e.g. $$L \propto M^3$$. If another star is "hidden" in a binary system, then it is of lower mass, so the amount of hidden mass is less than what is seen. Of course, this can be (and is) accounted for when estimating the mass present in luminous matter, because we know typical binary frequencies and properties. The luminous matter is only about 1% of that required to explain the properties of the universe as a whole, and less than 10% of the mass required to explain the dynamics of our own and other galaxies, so even a factor of 2 is not going to solve the problem. Further, from big bang nucleosynthesis calculations and estimates of the primordial abundances of helium and deuterium, we know the missing mass is not "baryonic": it is not normal matter made of protons and neutrons. So hidden binary companions are not an important contributor to the dark matter problem.

You ask for a quantitative answer. Well, a typical binary frequency for low-mass stars is 30-40%, and they have a flat mass ratio distribution (i.e. the binary companions have a uniform distribution of mass). Given that stars make up less than 1% of the critical density of the universe, then even if one ignored the "binary problem" (which astronomers don't), the additional mass in hidden binary systems is less than 0.15-0.2% of the critical density. This is to be compared with the ~30% that needs to be there to satisfy the dark matter problem (and, as mentioned, the vast majority of that cannot be in the form of normal matter like binary star companions in any case). A rough version of this arithmetic is sketched after this answer.

To be even more specific: hidden binary companions make no contribution to dark matter at all, since they are luminous. What they change is the average mass/luminosity ratios of stars that must be used in order to estimate a mass from a luminosity. The values used by astronomers to estimate the amount of luminous matter attempt to take account of a binary population.

Are the effects of binarity noticeable? Yes, of course: we have many means to show that stars have a less massive companion. Are they noticeable on large scales? No. The precision with which the dark matter content of an individual galaxy is estimated (e.g. from a rotation curve) will have uncertainties of at least 10%, and the contribution of hidden mass in stellar binary systems is tiny compared with that.

• Check the Dragonfly telescope dragonflytelescope.org results to see how important background noise level is for dark matter/energy detection. – David Jonsson Nov 9 '20 at 13:34
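A back-of-envelope version of the arithmetic above, with all inputs taken from the answer (a sketch, not a rigorous calculation):

# Hidden-companion mass as a fraction of the critical density, using the
# answer's numbers: stars < 1% of critical, 30–40% binary fraction, flat
# mass-ratio distribution (mean companion/primary ratio ~ 1/2).
Ω_stars  = 0.01
f_binary = 0.35                           # midpoint of 30–40%
mean_q   = 0.5                            # mean mass ratio, flat distribution
Ω_hidden = Ω_stars * f_binary * mean_q    # ≈ 0.00175, i.e. 0.15–0.2%
Ω_dm     = 0.30                           # density required by dark matter
Ω_hidden / Ω_dm                           # ≈ 0.006: a negligible fraction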
http://www.rnta.eu/CLDRMW210/ctalks.html
Leuca2022
Celebrating Claude Levesque's, Damien Roy's and Michel Waldschmidt's birthdays
Joint with the sixth mini symposium of the Roman Number Theory Association
May 16-21, 2022, Marina di San Gregorio, Patù (Lecce), Italy

Contributed talks

Ranjan Bera (Indian Statistical Institute) Friday 9:30 - 9:40

There is an absolute constant $D_0> 0$ such that if $f(x)$ is an integer polynomial, then there is an integer $\lambda$ with $|\lambda | \leq D_0$ such that $x^n +f(x) +\lambda$ is irreducible over the rationals for infinitely many integers $n\geq 1$. Furthermore, if $\deg f \leq 25$, then there is a $\lambda$ with $\lambda\in \{-2,-1,0,1,2,3\}$ such that $x^n +f(x) +\lambda$ is irreducible over the rationals for infinitely many integers $n\geq 1$. These problems arise in connection with an irreducibility theorem of Andrzej Schinzel associated with coverings of integers and an irreducibility conjecture of Pál Turán.

On primitivity and vanishing of Dirichlet Series
Abhishek Bharadwaj (Queens University) Friday 9:15 - 9:25

For a rational valued periodic function, we associate a Dirichlet series and provide a new necessary and sufficient condition for the vanishing of this Dirichlet series specialized at positive integers. This question was initiated by Chowla and implemented by Okada for a particular infinite sum. Our approach relies on the decomposition of Dirichlet characters in terms of primitive characters. Using this, we find some new families of natural numbers for which a conjecture of Erdős holds.

$\theta$-Congruent Number over Number Fields
Shamik Das (Harish-Chandra Institute) Thursday 12:35 - 12:45

The notion of a $\theta$-congruent number is a generalization of congruent number, where one considers the area of a triangle with all possible angles $\theta$ such that $\cos \theta$ is rational, rather than just $\theta=\frac{\pi}{2}$. In this talk we provide a criterion for a natural number to be a $\theta$-congruent number over certain classes of real number fields.

On generalized Diophantine m-tuples
Anup Dixit (Institute of Mathematical Sciences Chennai) Tuesday 12:35 - 12:45

A set of positive integers $\{a_1, a_2, \ldots, a_m\}$ is said to be a Diophantine m-tuple if $a_i a_j +1$ is a perfect square for all distinct $i$ and $j$. A natural question is how large a Diophantine tuple can be. In this context, the folklore Diophantine quintuple conjecture, recently settled by He, Togbé and Ziegler, states that there are no Diophantine quintuples. In this talk, we will discuss a generalization of this problem to k-th powers. This is joint work with Ram Murty and Seoyoung Kim.

Repdigits in various linear recurrent sequences
Bernadette Faye (University of Dakar) Thursday 12:05 - 12:15

The Lucas sequences $U_n(r,s)$ and $V_n(r,s)$ are integer sequences that satisfy the recurrence relation $$U_{n+2}= rU_{n+1}-sU_{n},$$ where $r$ and $s$ are fixed positive integers. More generally, the Lucas sequences $U_n(r,s)$ and $V_n(r,s)$ are sequences of polynomials in $r$ and $s$ with integer coefficients. Let $\alpha$ and $\beta$ denote the two roots of the equation $$x^2 -rx +s = 0,$$ which has discriminant $\Delta = r^2-4s$, so that $$\alpha=\frac{r + \sqrt{\Delta}}{2} \quad\text{and}\quad \beta=\frac{r - \sqrt{\Delta}}{2}.$$ Thus $$\alpha + \beta = r,\quad \alpha\beta= s,\quad \alpha - \beta = \sqrt{\Delta},$$ and $$U_{n}= a\alpha^n +b\beta^n \quad \text{for all } n\geq 0,$$
where $a$ and $b$ are two constants which can be determined. The binary recurrence sequence $(U_n)_{n\geq0}$ is called nondegenerate if $ab \neq0$ and $\alpha/\beta$ is not a root of unity.

Given an integer $g>1$, a base-$g$ repdigit is a number of the form $$N=a\cdot\frac{g^m-1}{g-1} \quad \text{for some } m\geq 1 \text{ and } a\in\{1,\ldots,g-1\}.$$ When $g=10$, such numbers are better known simply as repdigits. Recently, the investigation of repdigits in second-order linear recurrence sequences has been of interest to mathematicians. In this talk, we will survey the recent results obtained on this subject.

A density result for universal quadratic forms over number fields
Vitezslav Kala (Charles University, Prague) Monday 12:25 - 12:35

A quadratic form over a totally real number field $K$ is universal if it represents all totally positive integers in $K$. I'll start with a brief overview of the known results and main tools for studying universal forms, which prominently include continued fractions and good rational approximations. Then I'll explain our new result: for every fixed positive integer $r$, real quadratic fields $K$ that admit a universal quadratic form of rank $r$ have density $0$. Joint work with Dayoon Park, Pavlo Yatsyna, and Blazej Zmija.

Dedekind Zeta values at 1/2
Neelam Kandhil (IMSc, Chennai) Tuesday 12:25 - 12:35

Let $\zeta_K(s)$ denote the Dedekind zeta function of a number field $K$ and $\zeta_K'(s)$ denote its derivative. In this talk, we will discuss the non-vanishing of $\zeta_K(1/2)$ and $\zeta_K'(1/2)$ and their interrelation. We have a satisfactory answer for lower degree number fields. For abelian extensions, we improve a result of Murty and Tanabe, both qualitatively and quantitatively. We also extend our investigation to Galois as well as arbitrary number fields, borrowing tools from algebraic as well as transcendental number theory.

An infinite family of non-Pólya fields arising from Lehmer quintics
Nimish Kumar Mahapatra (Indian Institute of Science Education and Research Berhampur) Friday 9:00 - 9:10

A number field $K$, with ring of integers $O_K$, is said to be a Pólya field if the $O_K$-module formed by the ring of integer-valued polynomials on $O_K$ admits a regular basis. The Pólya group $Po(K)$ of $K$ is a particular subgroup of the ideal class group $cl(K)$ of $K$ that measures the failure of $K$ to be a Pólya field. In this talk we discuss a new family of quintic non-Pólya fields associated to Lehmer quintics. It is an interesting problem to study the embedding of a number field in a Pólya field. For this family, we will also explore bounds on the degree of the smallest Pólya fields containing them. Finally we show that such non-Pólya fields are non-monogenic number fields. This is a joint work with Prem Prakash Pandey.

On densities of multiplicative subgroups of rational numbers
Andam Mustafa (Salahaddin University & Università Roma Tre) Friday 8:45 - 8:55

For a given finitely generated multiplicative subgroup of the rationals, possibly containing negative numbers, we derive the densities of primes for which the index of the reduction group has a given value (under GRH). Likewise, we completely classify, in the rank one case, the torsion groups for which the density vanishes; moreover, we prove that the set of primes for which the index of the reduction group has a given value is finite. For higher rank groups we propose some partial results. 
Furthermore, we compute the density of the set of primes for which the order of the reduction group is divisible by a given integer.

On the density of visible lattice points along polynomials
Ashish Kumar Pandey (IIIT-Delhi) Tuesday 12:05 - 12:15

The notion of classical visibility from the origin has been generalized by viewing lattice points through curved lines of sight. In this talk, I will generalize the notion of visible lattice points to any polynomial family of curves passing through the origin. I will show that, except for the family of curves represented by the monomials $x^k$, the density of visible lattice points for any other polynomial family of curves passing through the origin is always one. This is joint work with Sneha Chaubey and Shvo Regavim.

Special values of L-functions
Siddhi Pathak (Chennai Mathematical Institute) Thursday 12:20 - 12:30

In this short talk, we will discuss the question of linear independence of special values of Dirichlet L-functions. This problem is intimately connected to relations among the special values of the Hurwitz zeta-function as well as the polylogarithms. We will mention recent progress in this context, in joint works with Ram Murty and Abhishek Bharadwaj.

On the abc Conjecture in Algebraic Number Fields
Andrew Scoones (University of York) Monday 12:05 - 12:15

While the abc conjecture remains open, much work has been done on weaker versions, and on generalising the conjecture to number fields. Stewart and Yu were able to give an exponential bound for $\max \{a, b, c\}$ in terms of the radical over the integers, while Győry was able to give an exponential bound for the projective height $H(a, b, c)$ in terms of the radical for algebraic integers. We generalise Stewart and Yu's method to give an improvement on Győry's bound for algebraic integers, before briefly discussing applications to the effective Skolem-Mahler-Lech problem and the $XYZ$ conjecture.

Reductions of Algebraic Numbers and Artin's Conjecture on primitive roots
Pietro Sgobba (Université du Luxembourg) Monday 12:35 - 12:45

In 1967 Hooley proved (under GRH) Artin's conjecture on primitive roots: for any $g\in\mathbb Z\setminus\{-1,0,1\}$ which is not a square, there are infinitely many primes $p$ such that $g$ is a primitive root modulo $p$ (i.e. the multiplicative order of $(g \bmod p)$ equals $p-1$). Several variations of this problem have been studied since then. We consider the condition that the multiplicative order of $(g \bmod p)$ is divisible by a given integer, or more generally that it lies in a given arithmetic progression. We address such questions for number fields and the reductions of algebraic numbers.
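As a toy illustration of the order condition in the last abstract, a short sketch (the helper mult_order is hypothetical, written for this page, not part of any talk):

# Multiplicative order of g modulo a prime p: the smallest k with g^k ≡ 1.
function mult_order(g::Integer, p::Integer)
    k, x = 1, mod(g, p)
    while x != 1
        x = mod(x * g, p)
        k += 1
    end
    return k
end

mult_order(2, 11)   # 10 == p - 1, so 2 is a primitive root modulo 11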
http://mathhelpforum.com/algebra/116170-inverse-functions-print.html
# inverse functions

• November 22nd 2009, 03:20 PM
katt

inverse functions

i need to find the inverse of f(x) = x^2-4x+3. how do you do it when you have x^2 and x?

• November 22nd 2009, 03:21 PM
skeeter

Quote: Originally Posted by katt
i need to find the inverse of f(x) = x^2-4x+3. how do you do it when you have x^2 and x?

complete the square.

• November 22nd 2009, 04:10 PM
mosta86

y = x^2-4x+3 => x^2-4x+(3-y) = 0. delta' = 4-(3-y) = 1+y => x1 = 2+squareroot(1+y) and x2 = 2-squareroot(1+y). this is the inverse; you choose x1 or x2 depending on the constraints given.

• November 22nd 2009, 04:58 PM
skeeter

$y = x^2 - 4x + 3$
$y = x^2 - 4x + 4 + 3 - 4$
$y = (x - 2)^2 - 1$
swap x and y ...
$x = (y - 2)^2 - 1$
$x + 1 = (y - 2)^2$
$\pm \sqrt{x+1} = y - 2$
$y = 2 \pm \sqrt{x+1}$
restrict the domain of the original function to $x \ge 2$, and the inverse function is $f^{-1}(x) = 2 + \sqrt{x+1}$
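A quick numerical sanity check of skeeter's result on the restricted domain (a sketch added here, not part of the original thread):

# f restricted to x ≥ 2 is one-to-one, so finv undoes it there.
f(x)    = x^2 - 4x + 3
finv(x) = 2 + sqrt(x + 1)

x = 3.7
finv(f(x)) ≈ x   # true: f(3.7) = 1.89 and finv(1.89) = 3.7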
https://web2.0calc.com/questions/im-confused_11
# im confused

A car has one driver's seat and four different passenger seats. A group of five students wants to use the car to go to school, but only two of those five students are legally allowed to drive the car. What is the number of ways that the five students can be seated in the car for the drive? Apr 22, 2019

$$\text{First choose the driver. There are }\dbinom{2}{1}=2 \text{ ways to do this.}\\ \text{Now arrange the remaining 4 students. As the seats are distinct, there are }4!=24 \text{ ways.}\\ \text{Thus we have }2\cdot 24 = 48 \text{ ways of arranging the students.}$$
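The same count in one line of Julia (a trivial check of the answer's reasoning):

binomial(2, 1) * factorial(4)   # choose the driver, seat the rest: 2 * 24 = 48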
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-10-radical-expressions-and-equations-10-2-simplifying-radicals-practice-and-problem-solving-exercises-page-624/72
## Algebra 1: Common Core (15th Edition)

$y=\frac{2 \pm \sqrt{10}}{3}$

$3y^2-4y-2=0$

Using the quadratic formula:

$y=\frac{-(-4) \pm \sqrt{(-4)^2-4\cdot 3\cdot(-2)}}{2\cdot 3}$

$y=\frac{4 \pm 2\sqrt{10}}{6}$

$y=\frac{2 \pm \sqrt{10}}{3}$
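A numerical check of the two roots (a sketch; the exact values are $(2\pm\sqrt{10})/3$):

# Roots of 3y^2 - 4y - 2 = 0 via the quadratic formula.
a, b, c = 3.0, -4.0, -2.0
Δ = b^2 - 4a*c                               # 16 + 24 = 40
((-b + sqrt(Δ)) / 2a, (-b - sqrt(Δ)) / 2a)   # ≈ (1.7208, -0.3875)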
https://julianlsolvers.github.io/LsqFit.jl/latest/tutorial/
# Tutorial

## Introduction to Nonlinear Regression

Assume that, for the $i$th observation, the relationship between independent variable $\mathbf{x_i}=\begin{bmatrix} x_{1i},\, x_{2i},\, \ldots\, x_{pi} \end{bmatrix}'$ and dependent variable $Y_i$ follows:

$Y_i = m(\mathbf{x_i}, \boldsymbol{\gamma}) + \epsilon_i$

where $m$ is a non-linear model function that depends on the independent variable $\mathbf{x_i}$ and the parameter vector $\boldsymbol{\gamma}$. In order to find the parameter $\boldsymbol{\gamma}$ that "best" fits our data, we choose the parameter ${\boldsymbol{\gamma}}$ which minimizes the sum of squared residuals from our data, i.e. solves the problem:

$\underset{\boldsymbol{\gamma}}{\mathrm{min}} \quad s(\boldsymbol{\gamma})= \sum_{i=1}^{n} [m(\mathbf{x_i}, \boldsymbol{\gamma}) - y_i]^2$

Given that the function $m$ is non-linear, there's no analytical solution for the best $\boldsymbol{\gamma}$. We have to use a computational tool, in this tutorial LsqFit.jl, to find the least squares solution.

One example of a non-linear model is the exponential model, which takes a one-element predictor variable $t$. The model function is:

$m(t, \boldsymbol{\gamma}) = \gamma_1 \exp(\gamma_2 t)$

and the model becomes:

$Y_i = \gamma_1 \exp(\gamma_2 t_i) + \epsilon_i$

To fit data using LsqFit.jl, pass the defined model function (m), data (tdata and ydata) and the initial parameter value (p0) to curve_fit(). For now, LsqFit.jl only supports the Levenberg-Marquardt algorithm.

julia> # t: array of independent variables
julia> # p: array of model parameters
julia> m(t, p) = p[1] * exp.(p[2] * t)
julia> p0 = [0.5, 0.5]
julia> fit = curve_fit(m, tdata, ydata, p0)

It will return a composite type LsqFitResult, with some interesting values:

* fit.dof: degrees of freedom
* fit.param: best fit parameters
* fit.resid: vector of residuals
* fit.jacobian: estimated Jacobian at the solution

## Jacobian Calculation

The Jacobian $J_f(\mathbf{x})$ of a vector function $f(\mathbf{x}): \mathbb{R}^m \to \mathbb{R}^n$ is defined as the matrix with elements:

$[J_f(\mathbf{x})]_{ij} = \frac{\partial f_i(\mathbf{x})}{\partial x_j}$

The matrix is therefore:

$J_f(\mathbf{x}) = \begin{bmatrix} \frac{\partial f_1}{\partial x_1}&\frac{\partial f_1}{\partial x_2}&\dots&\frac{\partial f_1}{\partial x_m}\\ \frac{\partial f_2}{\partial x_1}&\frac{\partial f_2}{\partial x_2}&\dots&\frac{\partial f_2}{\partial x_m}\\ \vdots&\vdots&\ddots&\vdots\\ \frac{\partial f_n}{\partial x_1}&\frac{\partial f_n}{\partial x_2}&\dots&\frac{\partial f_n}{\partial x_m}\\ \end{bmatrix}$

The Jacobian of the exponential model function with respect to $\boldsymbol{\gamma}$ is:

$J_m(t, \boldsymbol{\gamma}) = \begin{bmatrix} \frac{\partial m}{\partial \gamma_1} & \frac{\partial m}{\partial \gamma_2} \\ \end{bmatrix} = \begin{bmatrix} \exp(\gamma_2 t) & t \gamma_1 \exp(\gamma_2 t) \\ \end{bmatrix}$

By default, the finite difference method, Calculus.jacobian(), is used to approximate the Jacobian for the data fitting algorithm and covariance computation. Alternatively, a function which calculates the Jacobian can be supplied to curve_fit() for faster and/or more accurate results. 
    function j_m(t,p)
        J = Array{Float64}(undef, length(t), length(p))  # undef initializer (Julia >= 0.7)
        J[:,1] = exp.(p[2] .* t)       # df/dp[1]
        J[:,2] = t .* p[1] .* J[:,1]   # df/dp[2]
        J
    end
    fit = curve_fit(m, j_m, tdata, ydata, p0)

## Linear Approximation

The non-linear function $m$ can be approximated as a linear function by Taylor expansion:

$m(\mathbf{x_i}, \boldsymbol{\gamma}+\boldsymbol{h}) \approx m(\mathbf{x_i}, \boldsymbol{\gamma}) + \nabla m(\mathbf{x_i}, \boldsymbol{\gamma})'\boldsymbol{h}$

where $\boldsymbol{\gamma}$ is a fixed vector, $\boldsymbol{h}$ is a very small-valued vector and $\nabla m(\mathbf{x_i}, \boldsymbol{\gamma})$ is the gradient at $\mathbf{x_i}$.

Consider the residual vector function $r({\boldsymbol{\gamma}})=\begin{bmatrix} r_1({\boldsymbol{\gamma}}) \\ r_2({\boldsymbol{\gamma}}) \\ \vdots\\ r_n({\boldsymbol{\gamma}}) \end{bmatrix}$ with entries:

$r_i({\boldsymbol{\gamma}}) = m(\mathbf{x_i}, {\boldsymbol{\gamma}}) - Y_i$

Each entry's linear approximation can hence be written as:

\begin{align} r_i({\boldsymbol{\gamma}}+\boldsymbol{h}) &= m(\mathbf{x_i}, \boldsymbol{\gamma}+\boldsymbol{h}) - Y_i\\ &\approx m(\mathbf{x_i}, \boldsymbol{\gamma}) + \nabla m(\mathbf{x_i}, \boldsymbol{\gamma})'\boldsymbol{h} - Y_i\\ &= r_i({\boldsymbol{\gamma}}) + \nabla m(\mathbf{x_i}, \boldsymbol{\gamma})'\boldsymbol{h} \end{align}

Since the $i$th row of $J(\boldsymbol{\gamma})$ equals the transpose of the gradient of $m(\mathbf{x_i}, \boldsymbol{\gamma})$, the vector function $r({\boldsymbol{\gamma}}+\boldsymbol{h})$ can be approximated as:

$r({\boldsymbol{\gamma}}+\boldsymbol{h}) \approx r({\boldsymbol{\gamma}}) + J(\boldsymbol{\gamma})\boldsymbol{h}$

which is a linear function in $\boldsymbol{h}$, since ${\boldsymbol{\gamma}}$ is a fixed vector.

## Goodness of Fit

The linear approximation of the non-linear least squares problem leads to an approximation of the covariance matrix of the parameters, from which we can perform regression analysis.

Consider a least squares solution $\boldsymbol{\gamma}^*$, which is a local minimizer of the non-linear problem:

$\boldsymbol{\gamma}^* = \underset{\boldsymbol{\gamma}}{\mathrm{arg\,min}} \ \sum_{i=1}^{n} [m(\mathbf{x_i}, \boldsymbol{\gamma}) - y_i]^2$

Set $\boldsymbol{\gamma}^*$ as the fixed point in the linear approximation, with $r({\boldsymbol{\gamma^*}}) = r$ and $J(\boldsymbol{\gamma^*}) = J$. A parameter vector near $\boldsymbol{\gamma}^*$ can be expressed as $\boldsymbol{\gamma}=\boldsymbol{\gamma^*} + \boldsymbol{h}$. The local approximation of the least squares problem is:

$\underset{\boldsymbol{\gamma}}{\mathrm{min}} \quad s(\boldsymbol{\gamma})=s(\boldsymbol{\gamma}^*+\boldsymbol{h}) \approx [J\boldsymbol{h} + r]'[J\boldsymbol{h} + r]$

which is essentially the linear least squares problem:

$\underset{\boldsymbol{\beta}}{\mathrm{min}} \quad [X\beta-Y]'[X\beta-Y]$

where $X=J$, $\beta=\boldsymbol{h}$ and $Y=-r$. Setting the partial derivatives equal to $0$, the analytical solution is:

$\hat{\boldsymbol{h}}=\hat{\boldsymbol{\gamma}}-\boldsymbol{\gamma}^*\approx-[J'J]^{-1}J'r$

The covariance matrix for the analytical solution is:

$\mathbf{Cov}(\hat{\boldsymbol{\gamma}}) = \mathbf{Cov}(\hat{\boldsymbol{h}}) = [J'J]^{-1}J'\mathbf{E}(rr')J[J'J]^{-1}$

Note that $r$ is the residual vector at the best fit point $\boldsymbol{\gamma^*}$, with entries $r_i = Y_i - m(\mathbf{x_i}, \boldsymbol{\gamma^*})=\epsilon_i$. Since $\hat{\boldsymbol{\gamma}}$ is very close to $\boldsymbol{\gamma^*}$, it can be replaced by $\boldsymbol{\gamma^*}$:
$\mathbf{Cov}(\boldsymbol{\gamma}^*) \approx \mathbf{Cov}(\hat{\boldsymbol{\gamma}})$

Assume the errors in each sample are independent and normally distributed with zero mean and the same variance, i.e. $\epsilon \sim N(0, \sigma^2 I)$. The covariance matrix from the linear approximation is then:

$\mathbf{Cov}(\boldsymbol{\gamma}^*) = [J'J]^{-1}J'\mathbf{Cov}(\epsilon)J[J'J]^{-1} = \sigma^2[J'J]^{-1}$

where $\sigma^2$ can be estimated as the residual sum of squares divided by the degrees of freedom:

$\hat{\sigma}^2=\frac{s(\boldsymbol{\gamma}^*)}{n-p}$

In LsqFit.jl, the covariance matrix calculation uses QR decomposition to be more computationally stable; it has the form:

$\mathbf{Cov}(\boldsymbol{\gamma}^*) = \hat{\sigma}^2 \mathrm{R}^{-1}(\mathrm{R}^{-1})'$

estimate_covar() computes the covariance matrix of fit:

    julia> cov = estimate_covar(fit)
    2×2 Array{Float64,2}:
     0.000116545  0.000174633
     0.000174633  0.00258261

The standard error of each parameter is the square root of the corresponding diagonal element of the covariance matrix. standard_error() returns the standard error of each parameter:

    julia> se = standard_error(fit)
    2-element Array{Float64,1}:
     0.0114802
     0.0520416

margin_error() computes the product of the standard error and the critical value of each parameter at a certain significance level (default is 5%) from the t-distribution. The margin of error at the 10% significance level can be computed by:

    julia> margin_of_error = margin_error(fit, 0.1)
    2-element Array{Float64,1}:
     0.0199073
     0.0902435

confidence_interval() returns the confidence interval of each parameter at a certain significance level, which is essentially the estimated value ± the margin of error. To get the confidence intervals at the 10% significance level, run:

    julia> confidence_intervals = confidence_interval(fit, 0.1)
    2-element Array{Tuple{Float64,Float64},1}:
     (0.976316, 1.01613)
     (1.91047, 2.09096)

## Weighted Least Squares

curve_fit() also accepts a weight parameter (wt) to perform Weighted Least Squares and General Least Squares, where the parameter $\boldsymbol{\gamma}^*$ minimizes the weighted residual sum of squares. The weight parameter (wt) is an array or a matrix of weights for each sample. To perform Weighted Least Squares, pass the weight array [w_1, w_2, ..., w_n] or the weight matrix W:

$\mathbf{W} = \begin{bmatrix} w_1 & 0 & \cdots & 0\\ 0 & w_2 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & w_n\\ \end{bmatrix}$

The weighted least squares problem becomes:

$\underset{\boldsymbol{\gamma}}{\mathrm{min}} \quad s(\boldsymbol{\gamma})= \sum_{i=1}^{n} w_i[m(\mathbf{x_i}, \boldsymbol{\gamma}) - Y_i]^2$

or in matrix form:

$\underset{\boldsymbol{\gamma}}{\mathrm{min}} \quad s(\boldsymbol{\gamma})= r(\boldsymbol{\gamma})'Wr(\boldsymbol{\gamma})$

where $r({\boldsymbol{\gamma}})=\begin{bmatrix} r_1({\boldsymbol{\gamma}}) \\ r_2({\boldsymbol{\gamma}}) \\ \vdots\\ r_n({\boldsymbol{\gamma}}) \end{bmatrix}$ is the residual vector function with entries:

$r_i({\boldsymbol{\gamma}}) = m(\mathbf{x_i}, {\boldsymbol{\gamma}}) - Y_i$

The algorithm in LsqFit.jl will then provide a least squares solution $\boldsymbol{\gamma}^*$.

Note: In LsqFit.jl, the residual function passed to levenberg_marquardt() is in a different format. If the weight is a vector:

    r(p) = sqrt.(wt) .* ( model(xpts, p) - ydata )
    lmfit(r, g, p0, wt; kwargs...)
$r_i({\boldsymbol{\gamma}}) = \sqrt{w_i} \cdot [m(\mathbf{x_i}, {\boldsymbol{\gamma}}) - Y_i]$

If the weight is a matrix, a Cholesky decomposition, which is effectively a square root of a matrix, is performed:

    u = chol(wt)
    r(p) = u * ( model(xpts, p) - ydata )
    lmfit(r, p0, wt; kwargs...)

$r({\boldsymbol{\gamma}}) = U \cdot [m(\mathbf{x}, {\boldsymbol{\gamma}}) - Y], \quad \text{where } W = U'U$

The solution is the same as for the least squares problem described above. Set $r({\boldsymbol{\gamma^*}}) = r$ and $J(\boldsymbol{\gamma^*}) = J$; the linear approximation of the weighted least squares problem is then:

$\underset{\boldsymbol{\gamma}}{\mathrm{min}} \quad s(\boldsymbol{\gamma}) = s(\boldsymbol{\gamma}^* + \boldsymbol{h}) \approx [J\boldsymbol{h}+r]'W[J\boldsymbol{h}+r]$

The analytical solution to the linear approximation is:

$\hat{\boldsymbol{h}}=\hat{\boldsymbol{\gamma}}-\boldsymbol{\gamma}^*\approx-[J'WJ]^{-1}J'Wr$

Assume the errors in each sample are independent and normally distributed with zero mean and different variances (heteroskedastic errors), i.e. $\epsilon \sim N(0, \Sigma)$, where:

$\Sigma = \begin{bmatrix} \sigma_1^2 & 0 & \cdots & 0\\ 0 & \sigma_2^2 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \sigma_n^2\\ \end{bmatrix}$

If we know the error variances and set the weight as the inverse of the variance (the optimal weight), i.e. $W = \Sigma^{-1}$:

$\mathbf{W} = \begin{bmatrix} w_1 & 0 & \cdots & 0\\ 0 & w_2 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & w_n\\ \end{bmatrix} = \begin{bmatrix} \frac{1}{\sigma_1^2} & 0 & \cdots & 0\\ 0 & \frac{1}{\sigma_2^2} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \frac{1}{\sigma_n^2}\\ \end{bmatrix}$

the covariance matrix becomes:

$\mathbf{Cov}(\boldsymbol{\gamma}^*) \approx [J'WJ]^{-1}J'W \Sigma WJ[J'WJ]^{-1} = [J'WJ]^{-1}$

If we only know the relative ratios of the variances, i.e. $\epsilon \sim N(0, \sigma^2 W^{-1})$, the covariance matrix is:

$\mathbf{Cov}(\boldsymbol{\gamma}^*) = \sigma^2[J'WJ]^{-1}$

where $\sigma^2$ is estimated. In this case, setting $W = I$ recovers the unweighted result. However, curve_fit() currently does not support this implementation: it assumes the weight is the inverse of the error covariance matrix rather than a ratio of it, i.e. the covariance of the estimated parameters is calculated as covar = inv(J'*fit.wt*J).

Note: Passing a vector of ones as the weight vector will therefore give a mistaken covariance estimate. Pass the vector 1 ./ var(ε) or the matrix inv(covar(ε)) as the weight parameter (wt) to curve_fit():

    julia> wt = inv(cov_ε)
    julia> fit = curve_fit(m, tdata, ydata, wt, p0)
    julia> cov = estimate_covar(fit)

Note: If the weight matrix is not a diagonal matrix, General Least Squares will be performed.

## General Least Squares

Assume the errors in each sample are correlated and normally distributed with zero mean and different variances (heteroskedastic and autocorrelated errors), i.e. $\epsilon \sim N(0, \Sigma)$. Set the weight matrix as the inverse of the error covariance matrix (the optimal weight), i.e.
$W = \Sigma^{-1}$. We then get the parameter covariance matrix:

$\mathbf{Cov}(\boldsymbol{\gamma}^*) \approx [J'WJ]^{-1}J'W \Sigma WJ[J'WJ]^{-1} = [J'WJ]^{-1}$

Pass the matrix inv(covar(ε)) as the weight parameter (wt) to curve_fit():

    julia> wt = inv(cov_ε)
    julia> fit = curve_fit(m, tdata, ydata, wt, p0)
    julia> cov = estimate_covar(fit)

## Estimate the Optimal Weight

In most cases, the variances of the errors are unknown. To perform Weighted Least Squares, we first need to estimate the error variance of the $i$th sample, which is its squared residual:

$\widehat{\mathbf{Var}(\epsilon_i)} = \widehat{\mathbf{E}(\epsilon_i \epsilon_i)} = r_i(\boldsymbol{\gamma}^*)^2$

An unweighted fit (OLS) returns the residuals we need, since the OLS estimator is unbiased. Then pass the reciprocal of the squared residuals as the estimated optimal weight to perform Weighted Least Squares:

    julia> fit_OLS = curve_fit(m, tdata, ydata, p0)
    julia> wt = 1 ./ fit_OLS.resid.^2
    julia> fit_WLS = curve_fit(m, tdata, ydata, wt, p0)
    julia> cov = estimate_covar(fit_WLS)

## References

Hansen, P. C., Pereyra, V. and Scherer, G. (2013) Least Squares Data Fitting with Applications. Baltimore, MD: Johns Hopkins University Press, pp. 147-155.

Kutner, M. H. et al. (2005) Applied Linear Statistical Models.

Weisberg, S. (2014) Applied Linear Regression. Fourth edition. Hoboken, NJ: Wiley (Wiley Series in Probability and Statistics).
2019-01-21 04:27:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9071821570396423, "perplexity": 1423.4286470054121}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583755653.69/warc/CC-MAIN-20190121025613-20190121051613-00514.warc.gz"}
http://www.ck12.org/analysis/Composition-of-Functions/lesson/Function-Composition-PCALC/
<img src="https://d5nxst8fruw4z.cloudfront.net/atrk.gif?account=iA1Pi1a8Dy00ym" style="display:none" height="1" width="1" alt="" /> Due to system maintenance, CK-12 will be unavailable on 8/19/2016 from 6:00p.m to 10:00p.m. PT. # Composition of Functions ## Two or more functions where the range of the first becomes the domain of the second. Estimated11 minsto complete % Progress Practice Composition of Functions Progress Estimated11 minsto complete % Function Composition Functions can be added, subtracted, multiplied and divided creating new functions and graphs that are complicated combinations of the various original functions. One important way to transform functions is through function composition. Function composition allows you to line up two or more functions that act on an input in tandem. Is function composition essentially the same as multiplying the two functions together? ### Composition of Functions A common way to describe functions is a mapping from the domain space to the range space: Function composition means that you have two or more functions and the range of the first function becomes the domain of the second function. There are two notations used to describe function composition. In each case the order of the functions matters because arithmetically the outcomes will be different. Squaring a number and then doubling the result will be different from doubling a number and then squaring the result. In the diagram above, \begin{align*}f(x)\end{align*} occurs first and \begin{align*}g(x)\end{align*} occurs second. This can be written as: \begin{align*}g \big( f(x) \big)\end{align*} or \begin{align*}(g \circ f) (x)\end{align*} You should read this “\begin{align*}g\end{align*} of \begin{align*}f\end{align*} of \begin{align*}x\end{align*}.” In both cases notice that the \begin{align*}f\end{align*} is closer to the \begin{align*}x\end{align*} and operates on the \begin{align*}x\end{align*} values first. ### Examples #### Example 1 Earlier, you were asked if function composition is the same as multiplying two functions together. Function composition is not the same as multiplying two functions together. With function composition there is an outside function and an inside function. Suppose the two functions were doubling and squaring.  It is clear just by looking at the example input of the number 5 that 50 (squaring then doubling) is different from 100 (doubling then squaring). Both 50 and 100 are examples of function composition, while 250 (five doubled multiplied by five squared) is an example of the product of two separate functions happening simultaneously. For the next two examples, use the functions below: \begin{align*}f(x) = x^2 - 1 \end{align*} \begin{align*}h(x) = \frac{x - 1}{x + 5}\end{align*} \begin{align*}g(x) = 3e^x - x\end{align*} \begin{align*}j(x) = \sqrt{x + 1}\end{align*} #### Example 3 Show \begin{align*}f \big( h(x) \big) \neq h \big( f(x) \big)\end{align*} \begin{align*}f \big( h(x) \big) = f \left( \frac{x - 1}{x + 5} \right) = \left( \frac{x - 1}{x + 5} \right)^2 - 1\end{align*} \begin{align*}h \big( f(x) \big) = h(x^2 - 1) = \frac{(x^2 - 1) - 1}{(x^2 - 1) + 5} = \frac{x^2 - 2}{x^2 + 4}\end{align*} In order to truly show they are not equal it is best to find a specific counter example of a number where they differ.  Sometimes algebraic expressions may look different, but are actually the same. 
You should notice that $f \big( h(x) \big)$ is undefined when $x = -5$, because then there would be a zero in the denominator. $h \big( f(x) \big)$, on the other hand, is defined at $x = -5$. Since the two function compositions differ, you can conclude:

$f \big( h(x) \big) \neq h \big( f(x) \big)$

#### Example 3

What is $f \Big( j \big( h \big( g ( x ) \big) \big) \Big)$?

These functions are nested within the arguments of the other functions. Sometimes functions simplify significantly when composed together, as $f$ and $j$ do in this case. It makes sense to evaluate those two functions first together and keep them on the outside of the argument.

$f(x) = x^2 - 1; \quad h(x) = \frac{x - 1}{x + 5}; \quad g(x) = 3e^x - x; \quad j(x) = \sqrt{x + 1}$

$f \big( j(y) \big) = f \left ( \sqrt{y + 1} \right ) = \left ( \sqrt{y + 1} \right ) ^2 - 1 = y + 1 - 1 = y$

Notice how the composition of $f$ and $j$ produced just the argument itself? Thus,

\begin{align} f \Big( j \big( h \big( g(x) \big) \big) \Big) = h \big( g(x) \big) & = h(3e^x - x) \\ & = \frac{(3e^x - x) - 1}{(3e^x - x) + 5} \\ & = \frac{3e^x - x - 1}{3e^x - x + 5} \end{align}

For the next two examples, use the graphs shown below:

$f(x) = | x |$

$g(x)=e^x$

$h(x) = -x$

#### Example 4

Compose $g \big( f(x) \big)$ and graph the result. Describe the transformation.

$g \big( f(x) \big) = g (|x|) = e^{|x|}$

The positive portion of the exponential graph has been mirrored over the $y$-axis, and the negative portion of the exponential graph has been entirely truncated.

#### Example 5

Compose $h \big( g(x) \big)$ and graph the result. Describe the transformation.

$h \big( g(x) \big) = h(e^x) = -e^x$

The exponential graph has been reflected over the $x$-axis.

### Review

For questions 1-9, use the following three functions: $f(x) = |x|$, $h(x) = -x$, $g(x) = (x - 2)^2 -3$.

1. Graph $f(x)$, $h(x)$ and $g(x)$.
2. Find $f \big( g(x) \big)$ algebraically.
3. Graph $f \big( g(x) \big)$ and describe the transformation.
4. Find $g \big( f(x) \big)$ algebraically.
5. Graph $g \big( f(x) \big)$ and describe the transformation.
6. Find $h \big( g(x) \big)$ algebraically.
7. Graph $h \big( g(x) \big)$ and describe the transformation.
8. Find $g \big( h(x) \big)$ algebraically.
9. Graph $g \big( h(x) \big)$ and describe the transformation.

For 10-16, use the following three functions: $j(x) = x^2$, $k(x) = |x|$, $m(x) = \sqrt{x}$.

10. Graph $j(x)$, $k(x)$ and $m(x)$.
11. Find $j \big( k(x) \big)$ algebraically.
12. Graph $j \big( k(x) \big)$ and describe the transformation.
13. Find $k \big( m(x) \big)$ algebraically.
14. Graph $k \big( m(x) \big)$ and describe the transformation.
15. Find $m \big( k(x) \big)$ algebraically.
16. Graph $m \big( k(x) \big)$ and describe the transformation.

To see the Review answers, open this PDF file and look for section 1.11.

### Vocabulary

domain: The domain of a function is the set of $x$-values for which the function is defined.

function: A function is a relation where there is only one output for every input. In other words, for every value of $x$, there is only one value for $y$.

function composition: Function composition involves "nested functions," or functions within functions. Function composition is the application of one function to the result of another function.

range: The range of a function is the set of $y$-values for which the function is defined.
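As a supplement to Example 1, here is a minimal Python sketch (an illustration, not part of the lesson) of the doubling/squaring comparison, showing why order matters and why composition differs from multiplication:

```python
def double(x):
    return 2 * x

def square(x):
    return x ** 2

def compose(g, f):
    """Return the composite function g∘f, i.e. x -> g(f(x))."""
    return lambda x: g(f(x))

print(compose(double, square)(5))   # squaring then doubling: 50
print(compose(square, double)(5))   # doubling then squaring: 100
print(double(5) * square(5))        # product of the two functions: 250
```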
2016-08-27 07:16:15
{"extraction_info": {"found_math": true, "script_math_tex": 59, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 4, "texerror": 0, "math_score": 0.8843173384666443, "perplexity": 1730.1276839209797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982298551.3/warc/CC-MAIN-20160823195818-00209-ip-10-153-172-175.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/17740/is-there-a-version-of-inclusion-exclusion-for-vector-spaces/26735
# Is there a version of inclusion/exclusion for vector spaces?

I am asking for a way to compute the rank of the 'join' of a bunch of subspaces whose pairwise intersections might be non-zero. In the case n=2 this is just $\dim(A_1+A_2) = \dim(A_1) + \dim(A_2) - \dim(A_1\cap A_2)$. For general $n$, I don't know a good formula.

• Nice to see my words get a second airing... ;) Mar 10, 2010 at 18:08
• To those who haven't seen it, this is a redo of mathoverflow.net/questions/17702. Most of the wording here is copied from Yemon's comment there. Mar 10, 2010 at 20:04
• The example already kicking around (the spaces generated by v,w,v+w vs the spaces generated by e1,e2,e3) already shows that dim(A1+A2+A3) cannot be computed from the dimensions of sum_{i in I}A_i for I running through all the proper subsets of {1,2,3}, and cap_{i in I}A_i for all I. So what is left of this question? Isn't the answer "if you only allow dim(sum_{i in I}A_i) etc then there's no formula, and if you allow more general things then the answer is that it's dim(A1+A2+A3)". Mar 10, 2010 at 23:11
• Indeed it is one of the favourite common wrong beliefs on MO mathoverflow.net/questions/23478/… Jun 1, 2010 at 15:56

As the blog link points out, for any finite collection of subspaces U_1,...,U_n there's a chain complex
$$0 \to \cap U_i \to \ldots \to \bigoplus U_i \cap U_j \cap U_k \to \bigoplus U_i \cap U_j \to \bigoplus U_i \to \sum U_i \to 0$$
where the rth term after the left hand zero is the external direct sum of the (n-r+1)-fold intersections of the U_i's. The differential sends $$x \in U_{i_1} \cap \cdots \cap U_{i_r}$$ to $$\sum_j (-1)^j \big(x \in U_{i_1} \cap \cdots \cap \widehat{U_{i_j}} \cap \cdots \cap U_{i_r}\big)$$

Failure of "inclusion-exclusion for vector spaces" is failure of exactness of this sequence. For example, for three subspaces the only non-trivial homology is H^1, which is $(U \cap (V+W))/(U\cap V + U \cap W)$, i.e. it measures the failure of distributivity.

You can find out what the Euler characteristic of the sequence is by repeatedly using the formula for dim(U+V). What this gives for 4 subspaces is that the alternating sum of Betti numbers is
$$|((U \cap V) \cap (U\cap W + U \cap X)) / (U \cap V \cap W + U \cap V \cap X) |$$
minus the sum of
$$| (V \cap (W+X))/(V \cap W + V \cap X) |$$
and
$$| ( U \cap (V+W+X) )/ (U\cap V + U\cap W + U \cap X) |$$
where I've written $|\cdot|$ for "dim". First homology is the direct sum
$$( U \cap (V+W+X) )/ (U\cap V + U\cap W + U \cap X) \oplus (V \cap (W+X))/(V \cap W + V \cap X)$$

Example 1: 4 planes in R^3, with the intersection of any 3 being {0}. Then 2nd homology is 1-dimensional and 1st homology is zero. Euler char is 3-8+6=1.

Example 2: 4 planes in R^3, three of which meet in a line, the other in "general position". Euler char 3-8+6-1=0, no homology.

This ought to be part of some kind of homology theory for subspace configurations, but it doesn't seem to be in the literature.

Disclaimer: this is all from some ancient notes of mine, and I can't vouch for how reliable it is.

• One reference for this is Section 1.7 of the book Quadratic Algebras, by A. Polishchuk and L. Positselski. (ams.org/bookstore-getitem/item=ulect-37) (I think I've seen the second author on Math Overflow.) Jun 9, 2010 at 20:04
• For those who don't have access to this book: what they prove (amongst other things) is that if you have a finite set S of subspaces every proper subset of which is distributive (this fails in my Example 2), then S is distributive if and only if the complex above is exact.
– M T Jun 21, 2010 at 18:23
• This is an interesting application of chain complexes to linear algebra! I like it very much! Jan 30, 2019 at 12:26

What goes wrong with a proof of inclusion-exclusion, as was tried in the post that Qiaochu links to, is that $A \cap (B + C) \neq (A \cap B) + (A \cap C)$. You might be able to get a useful expression for the dimension of the join of a bunch of subspaces by looking at the dimension of more complicated spaces expressed using meets and joins. In the paper "A Quantum Lovasz Local Lemma" (arxiv link; don't be scared by the "quantum" in the title -- the motivation is quantum, but the theorems are just about dimensions of joins, meets, and complements of subspaces) there is a definitely non-trivial theorem proved about dimensions of spaces using this calculus of meets and joins. So depending on what you want it for, you might be able to find some interesting expressions for the dimension of the join of a set of spaces.

• I wasn't as clear as I should have been in my answer. Both the Lovasz Local Lemma and the inclusion-exclusion principle are theorems about probability. If you express the Lovasz Local Lemma properly, it generalizes to meets and joins of subspaces. (You have to replace some quantities in the usual formulation by their complements; the reformulated statement is equivalent for probability distributions.) We don't know how to do that for inclusion-exclusion. But there might be a more complicated, equivalent, statement which generalizes to meets, joins, and orthogonal complements of subspaces. Mar 14, 2010 at 17:14

One way to look at this question is via quiver representations. Two subspaces of a vector space form a representation of the quiver $$A_3$$ with orientations $$\bullet \rightarrow \bullet \leftarrow \bullet$$ with the additional condition that both maps are injective (that's a tautology). Now, every representation of $$A_3$$ is a sum of indecomposables, whose dimension vectors are (1,0,0), (0,1,0), (0,0,1), (1,1,0), (0,1,1), (1,1,1), where for the first and the third the maps are not injective, and for the remaining four the maps are injective. Thus, the dimension vector of a generic representation with injective maps is a(0,1,0)+b(1,1,0)+c(0,1,1)+d(1,1,1)=(b+d,a+b+c+d,c+d). Clearly, the dimension of the sum of the two subspaces is b+c+d (the complement is represented by the first summand a(0,1,0)), which is (b+d)+(c+d)-d, and d is the dimension of the intersection.

Now, for three subspaces we deal with representations of the quiver $$D_4$$ with injective maps:

\begin{align} & \bullet\\ & \big\downarrow\\ \bullet \longrightarrow &\bullet\longleftarrow \bullet \end{align}

Indecomposable representations have dimension vectors $$(d_1,d_2,d_3,d)$$ (note the different ordering of dimensions - the largest one is last) being (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1), (1,0,0,1), (0,1,0,1), (0,0,1,1), (1,1,0,1), (1,0,1,1), (0,1,1,1), (1,1,1,1), (1,1,1,2) - altogether 12 vectors. Among them, the first three have non-injective maps, and the fourth captures the complement of the sum of our three subspaces. Thus, there are 8 numbers through which the dimension can be expressed (not 7, as in the inclusion-exclusion formula), and what remains is to choose the 8th number, in addition to the dimensions of all possible intersections, reasonably for your needs. For $$k>3$$ subspaces the classification problem stops being of finite type, so it becomes a bit nastier...

• I like your line of proof (+1).
Cannot this be reformulated as "In order that a collection of subspaces in a f.d. space satisfy the dimension formula, it is necessary and sufficient that they generate a distributive sublattice of that of subspaces" i.e. "that the set of atoms of the sublattice generated be in direct sum position" ? Nov 21, 2021 at 9:00 • I put a depiction of the quiver $D_4$ in. I messed around with alignment until it looked about right.
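The v, w, v+w counterexample from the comments is easy to check numerically. Here is a minimal numpy sketch (the dim and dim_sum helpers are ad hoc illustrations, not from any library beyond numpy) showing that naive inclusion-exclusion overcounts:

```python
import numpy as np

# Row vectors spanning three lines in R^2: the v, w, v+w configuration.
U = np.array([[1.0, 0.0]])   # span{v}
V = np.array([[0.0, 1.0]])   # span{w}
W = np.array([[1.0, 1.0]])   # span{v+w}

def dim(A):
    return np.linalg.matrix_rank(A)

def dim_sum(*spaces):
    # dim of the join = rank of all spanning vectors stacked together
    return np.linalg.matrix_rank(np.vstack(spaces))

# Pairwise intersections via dim(A∩B) = dim A + dim B - dim(A+B); all 0 here.
d_UV = dim(U) + dim(V) - dim_sum(U, V)
d_UW = dim(U) + dim(W) - dim_sum(U, W)
d_VW = dim(V) + dim(W) - dim_sum(V, W)
d_UVW = 0  # the triple intersection sits inside each pairwise one, hence 0

naive = dim(U) + dim(V) + dim(W) - d_UV - d_UW - d_VW + d_UVW
print(naive, dim_sum(U, V, W))   # prints "3 2": inclusion-exclusion fails
```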
2022-12-05 21:10:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8398760557174683, "perplexity": 426.7994991225691}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711045.18/warc/CC-MAIN-20221205200634-20221205230634-00278.warc.gz"}
https://blog.logancyang.com/note/python/2020/07/02/python-basics-iii.html
## Generators

### Iteration protocol

Many objects support iteration:

    a = 'hello'
    for c in a:      # Loop over characters in a
        ...
    b = { 'name': 'Dave', 'password':'foo'}
    for k in b:      # Loop over keys in dictionary
        ...
    c = [1,2,3,4]
    for i in c:      # Loop over items in a list/tuple
        ...
    f = open('foo.txt')
    for x in f:      # Loop over lines in a file
        ...

What happens under the hood of a for loop?

    for x in obj:
        # statements

    # Is equivalent to
    _iter = obj.__iter__()        # Get iterator object
    while True:
        try:
            x = _iter.__next__()  # Get next item
        except StopIteration:     # No more items
            break
        # statements ...

All objects that support the for loop implement this low-level iteration protocol. This is a manual iteration through a list:

    >>> x = [1,2,3]
    >>> it = x.__iter__()
    >>> it
    <listiterator object at 0x590b0>
    >>> it.__next__()
    1
    >>> it.__next__()
    2
    >>> it.__next__()
    3
    >>> it.__next__()
    Traceback (most recent call last):
    File "<stdin>", line 1, in ?
    StopIteration

#### Support iteration in your custom objects

Knowing about iteration is useful if you want to add it to your own objects, for example when making a custom container:

    class Portfolio:
        def __init__(self):
            self.holdings = []

        def __iter__(self):
            return self.holdings.__iter__()
        ...

    port = Portfolio()
    for s in port:
        ...

For container objects, supporting iteration, indexing, containment, and other kinds of operators is an important part of being Pythonic. Side note: __contains__() is the method behind the in check, for example:

    def __contains__(self, name):
        return any([s.name == name for s in self._holdings])

#### next() built-in function

The next() built-in function is a shortcut for calling the __next__() method of an iterator. Try using it on a file:

    >>> f = open('Data/portfolio.csv')
    >>> f.__iter__()    # Note: This returns the file itself
    <_io.TextIOWrapper name='Data/portfolio.csv' mode='r' encoding='UTF-8'>
    >>> next(f)
    'name,shares,price\n'
    >>> next(f)
    '"AA",100,32.20\n'
    >>> next(f)
    '"IBM",50,91.10\n'

### Customizing iteration

Now we look at how we can generalize iteration using a generator function. Suppose you wanted to create your own custom iteration pattern, for example a countdown:

    >>> for x in countdown(10):
    ...   print(x, end=' ')
    ...
    10 9 8 7 6 5 4 3 2 1

There is an easy way to do this.

#### Generators

A generator is a function that defines iteration:

    def countdown(n):
        while n > 0:
            yield n
            n -= 1

    >>> for x in countdown(10):
    ...   print(x, end=' ')
    ...
    10 9 8 7 6 5 4 3 2 1

Definition: a generator is any function that uses the yield statement. The behavior of generators is different from that of a normal function. Calling a generator function creates a generator object; it does not immediately execute the function.

    def countdown(n):
        print('Counting down from', n)
        while n > 0:
            yield n
            n -= 1

    >>> x = countdown(10)   # There is NO PRINT STATEMENT
    >>> x                   # x is a generator object
    <generator object at 0x58490>

The function only executes on the __next__() call:

    >>> x = countdown(10)
    >>> x
    <generator object at 0x58490>
    >>> x.__next__()
    Counting down from 10
    10

yield produces a value but suspends the function's execution. The function resumes on the next call to __next__():

    >>> x.__next__()
    9
    >>> x.__next__()
    8

When the generator finally returns, the iteration raises StopIteration:

    >>> x.__next__()
    1
    >>> x.__next__()
    Traceback (most recent call last):
    File "<stdin>", line 1, in ?
    StopIteration

This means a generator function implements the same low-level protocol that the for statement uses on lists, tuples, dicts, files, etc.
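To connect generators back to the iteration protocol above, here is a sketch (an illustration, not part of the original notes) of countdown written out as an explicit iterator class; the generator version replaces all of this bookkeeping:

```python
class Countdown:
    """Hand-written iterator equivalent to the countdown() generator."""
    def __init__(self, n):
        self.n = n

    def __iter__(self):
        return self              # an iterator returns itself

    def __next__(self):
        if self.n <= 0:
            raise StopIteration  # signals the for loop to stop
        value = self.n
        self.n -= 1
        return value

for x in Countdown(10):
    print(x, end=' ')            # 10 9 8 7 6 5 4 3 2 1
```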
#### Generator example: find matching substrings in lines of a file

    >>> def filematch(filename, substr):
            with open(filename, 'r') as f:
                for line in f:
                    if substr in line:
                        yield line

    >>> for line in open('Data/portfolio.csv'):
            print(line, end='')

    name,shares,price
    "AA",100,32.20
    "IBM",50,91.10
    "CAT",150,83.44
    "MSFT",200,51.23
    "GE",95,40.37
    "MSFT",50,65.10
    "IBM",100,70.44

    >>> for line in filematch('Data/portfolio.csv', 'IBM'):
            print(line, end='')

    "IBM",50,91.10
    "IBM",100,70.44
    >>>

#### Generator example: monitoring a streaming data source

Suppose there is a running program that keeps writing to Data/stocklog.csv in real time. We use the code below to monitor the stream.

    # follow.py
    import os
    import time

    f = open('Data/stocklog.csv')
    f.seek(0, os.SEEK_END)      # Move file pointer 0 bytes from end of file

    while True:
        line = f.readline()     # Returns new data or an empty string
        if line == '':
            time.sleep(0.1)     # Sleep briefly and retry
            continue
        fields = line.split(',')
        name = fields[0].strip('"')
        price = float(fields[1])
        change = float(fields[4])
        if change < 0:
            print(f'{name:>10s} {price:>10.2f} {change:>10.2f}')

This while True loop, with its empty-string check and short sleep, keeps polling the end of the file. readline() either returns new data or an empty string, so on an empty string we continue to the next retry. It works just like the Unix tail -f command that is used to watch a log file.

### Producers, consumers and pipelines

Generators are a useful tool for setting up various kinds of producer/consumer problems and dataflow pipelines.

#### Producer-consumer problems

    # Producer
    def follow(f):
        ...
        while True:
            ...
            yield line        # Produces value in line below
        ...

    # Consumer
    for line in follow(f):    # Consumes value from yield above
        ...

yield produces the values that for consumes.

#### Generator pipelines

You can use this aspect of generators to set up processing pipelines (like Unix pipes):

producer → processing → processing → consumer

Processing pipelines have an initial data producer, some set of intermediate processing stages and a final consumer.

* The producer is typically a generator, although it could also be a list or some other sequence. yield feeds data into the pipeline.
* Intermediate processing stages simultaneously consume and produce items. They might modify the data stream. They can also filter items (discarding them).
* The consumer is a for-loop. It gets items and does something with them.

    """ producer → processing → processing → consumer """

    def producer():
        ...
        yield item          # yields the item that is received by the processing
        ...

    def processing(s):
        for item in s:      # Comes from the producer
            ...
            yield newitem   # yields a new item
            ...

    def consumer(s):
        for item in s:      # Comes from the processing
            ...

    """ To actually use it and set up the pipeline """
    a = producer()
    b = processing(a)
    c = consumer(b)

You can create various generator functions and chain them together to perform processing involving dataflow pipelines. In addition, you can create functions that package a series of pipeline stages into a single function call.

#### Generator expressions

Generator expressions are like list comprehensions, except that generator expressions use () instead of []:

    >>> a = [1, 2, 3, 4]
    >>> b = (2*x for x in a)
    >>> b
    <generator object at 0x58760>
    >>> for i in b:
    ...   print(i, end=' ')
    ...
    2 4 6 8

Differences with list comprehensions:

* Does not construct a list.
* Its only useful purpose is iteration.
* Once consumed, it can't be reused.

General syntax: (<expression> for i in s if <conditional>). It can also serve as a function argument: sum(x*x for x in a). It can be applied to any iterable.
    >>> a = [1,2,3,4]
    >>> b = (x*x for x in a)
    >>> c = (-x for x in b)
    >>> for i in c:
    ...   print(i, end=' ')
    ...
    -1 -4 -9 -16

The main use of generator expressions is in code that performs some calculation on a sequence but only uses the result once. For example, stripping all comments from a file:

    f = open('somefile.txt')
    lines = (line for line in f if not line.startswith('#'))
    for line in lines:
        ...
    f.close()

With generators, the code runs faster and uses little memory. It's like a filter applied to a stream.

#### Why generators

* Many problems are much more clearly expressed in terms of iteration:
  * Looping over a collection of items and performing some kind of operation (searching, replacing, modifying, etc.).
  * Processing pipelines can be applied to a wide range of data processing problems.
* Better memory efficiency:
  * Values are produced only when needed.
  * Contrast with constructing giant lists.
  * Generators can operate on streaming data.
* Generators encourage code reuse:
  * They separate the iteration from the code that uses the iteration.
  * You can build a toolbox of interesting iteration functions and mix-n-match.

#### itertools module

itertools is a library module with various functions designed to help with iterators/generators:

    itertools.chain(s1, s2)
    itertools.count(n)
    itertools.cycle(s)
    itertools.dropwhile(predicate, s)
    itertools.groupby(s)
    itertools.islice(s, start, stop)
    itertools.starmap(function, s)
    itertools.repeat(s, n)
    itertools.tee(s, ncopies)
    itertools.zip_longest(s1, ..., sN)

All of these functions process data iteratively and implement various kinds of iteration patterns.

These are some useful advanced topics that you will use day-to-day.

### Variable arguments

A function that accepts any number of arguments is said to use variable arguments. For example, *args is a tuple that collects any number of extra positional arguments:

    def f(x, *args):
        ...

    f(1,2,3,4,5)

    def f(x, *args):
        # x -> 1
        # args -> (2,3,4,5), a tuple
        ...

A function can also accept any number of keyword arguments. For example:

    def f(x, y, **kwargs):
        ...

    f(2, 3, flag=True, mode='fast', header='debug')

    def f(x, y, **kwargs):
        # x -> 2
        # y -> 3
        # kwargs -> { 'flag': True, 'mode': 'fast', 'header': 'debug' }, a dict
        ...

Combining both, we have:

    def f(*args, **kwargs):
        ...

    f(2, 3, flag=True, mode='fast', header='debug')

    def f(*args, **kwargs):
        # args -> (2, 3)
        # kwargs -> { 'flag': True, 'mode': 'fast', 'header': 'debug' }
        ...

This function takes any combination of positional or keyword arguments. It is sometimes used when writing wrappers, or when you want to pass arguments through to another function.

#### Passing tuples and dicts

We can also use * to expand a tuple and ** to expand a dict when passing them into a function:

    numbers = (2,3,4)
    f(1, *numbers)      # Same as f(1,2,3,4)

    options = {
        'color' : 'red',
        'delimiter' : ',',
        'width' : 400
    }
    f(data, **options)  # Same as f(data, color='red', delimiter=',', width=400)

### Callback functions and lambda (anonymous) functions

If we want to sort a list of dictionaries in place, we do:

    def stock_name(s):
        return s['name']

    # stock_name is a callback
    portfolio.sort(key=stock_name)

    """
    # Check how the dictionaries are sorted by the name key
    [
      {'name': 'AA', 'price': 32.2, 'shares': 100},
      {'name': 'CAT', 'price': 83.44, 'shares': 150},
      {'name': 'GE', 'price': 40.37, 'shares': 95},
      {'name': 'IBM', 'price': 91.1, 'shares': 50},
      {'name': 'IBM', 'price': 70.44, 'shares': 100},
      {'name': 'MSFT', 'price': 51.23, 'shares': 200},
      {'name': 'MSFT', 'price': 65.1, 'shares': 50}
    ]
    """

The key function is an example of a callback function. The sort() method "calls back" to a function you supply.
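Because key= is just a callback protocol, the same callback works with other built-ins as well. A small sketch (an illustration; the sample data here is made up, not the course's dataset):

```python
portfolio = [
    {'name': 'IBM', 'shares': 50, 'price': 91.1},
    {'name': 'AA', 'shares': 100, 'price': 32.2},
    {'name': 'GE', 'shares': 95, 'price': 40.37},
]

def stock_name(s):
    return s['name']

def stock_price(s):
    return s['price']

print(sorted(portfolio, key=stock_name))         # sorted copy; portfolio unchanged
print(min(portfolio, key=stock_price)['name'])   # 'AA', the cheapest stock
print(max(portfolio, key=stock_price)['name'])   # 'IBM', the priciest stock
```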
Callback functions are often short one-line functions that are only used for that one operation. Programmers often ask for a shortcut for specifying this extra processing: use a lambda instead of creating a named function. In our previous sorting example:

    portfolio.sort(key=lambda s: s['name'])

This creates an unnamed function that evaluates a single expression.

Using lambda:

* lambda is highly restricted.
* Only a single expression is allowed.
* No statements like if, while, etc.
* The most common use is with functions like sort().

### Returning functions

We can use functions to create other functions. Consider this example:

    def add(x, y):
        def do_add():
            # x and y are defined outside do_add()
            print('Adding', x, y)
            return x + y
        return do_add

x and y are defined outside do_add(). Further observe that those variables are somehow kept alive after add() has finished!

    >>> a = add(3,4)
    >>> a
    <function add.<locals>.do_add at 0x...>
    >>> a()
    Adding 3 4      # Where are these values coming from?
    7

#### Closures

When an inner function is returned as a result, that inner function is known as a closure.

    def add(x, y):
        # do_add is a closure
        def do_add():
            return x + y
        return do_add

Essential feature: a closure retains the values of all variables needed for the function to run properly later on. Think of a closure as a function plus an extra environment that holds the values of the variables it depends on.

#### Using closures in callback functions

Closures are an essential feature of Python, but their use is often subtle. Common applications:

* Use in callback functions
* Delayed evaluation
* Decorator functions

Consider a function like this:

    def after(seconds, func):
        time.sleep(seconds)
        func()

Usage example:

    def greeting():
        print('Hello Guido')

    after(30, greeting)

after() executes the supplied function... later. Closures carry extra information around:

    import time

    def add(x, y):
        def do_add():
            print(f'Adding {x} + {y} -> {x+y}')
        return do_add

    def after(seconds, func):
        time.sleep(seconds)
        func()

    after(30, add(2, 3))
    # do_add has the references x -> 2 and y -> 3

#### Using closures to avoid code repetition

Closures can also be used as a technique for avoiding excessive code repetition: you can write functions that make code. Consider this code:

    class Stock:
        def __init__(self, name, shares, price):
            self.name = name
            self.shares = shares
            self.price = price
        ...
        @property
        def shares(self):
            return self._shares

        @shares.setter
        def shares(self, value):
            if not isinstance(value, int):
                raise TypeError('Expected int')
            self._shares = value
        ...

You want the type check to apply not just to shares but to all the other attributes, and you want to avoid typing this code again and again. What do you do?

    # typedproperty.py
    def typedproperty(name, expected_type):
        private_name = '_' + name

        @property
        def prop(self):
            return getattr(self, private_name)

        @prop.setter
        def prop(self, value):
            if not isinstance(value, expected_type):
                raise TypeError(f'Expected {expected_type}')
            setattr(self, private_name, value)

        return prop

    # stock.py
    from typedproperty import typedproperty

    class Stock:
        name = typedproperty('name', str)
        shares = typedproperty('shares', int)
        price = typedproperty('price', float)

        def __init__(self, name, shares, price):
            self.name = name
            self.shares = shares
            self.price = price

    >>> s = Stock('IBM', 50, 91.1)
    >>> s.name
    'IBM'
    >>> s.shares = '100'
    ... should get a TypeError ...
    >>>

### Decorators

A decorator is a function that wraps the decorated function with some additional processing. Say you want to do logging for add and sub:

    def add(x, y):
        print('Calling add')
        return x + y

    def sub(x, y):
        print('Calling sub')
        return x - y

This is repetitive.
I could have:

    def logged(func):
        def wrapper(*args, **kwargs):
            print('Calling', func.__name__)
            return func(*args, **kwargs)
        return wrapper

    def add(x, y):
        return x + y

    logged_add = logged(add)
    logged_add(3, 4)   # You see the logging message appear

This example illustrates the process of creating a so-called wrapper function. A wrapper is a function that wraps around another function with some extra bits of processing, but otherwise works in exactly the same way as the original function. The logged() function creates the wrapper and returns it as a result.

Putting wrappers around functions is extremely common in Python. So common, in fact, that there is special syntax for it: the decorator.

    def add(x, y):
        return x + y

    # Special syntax
    @logged
    def add(x, y):
        return x + y

A decorator is just syntactic sugar; it's exactly the same as the first approach. There are many more subtle details to decorators than what has been presented here, for example using them in classes, or using multiple decorators on one function. However, the previous example is a good illustration of how their use tends to arise. Usually, it's in response to repetitive code appearing across a wide range of function definitions. A decorator can move that code to a central definition.

### Static and class methods

There are a few built-in decorators that are used in combination with method definitions:

    class Foo:
        def bar(self, a):
            ...
        @staticmethod
        def spam(a):
            ...
        @classmethod
        def grok(cls, a):
            ...
        @property
        def name(self):
            ...

#### Static methods: for generic functionality or design patterns

@staticmethod is used to define a so-called static method (from C++/Java). A static method is a function that is part of the class but does not operate on instances.

    class Foo(object):
        @staticmethod
        def bar(x):
            print('x =', x)

    >>> Foo.bar(2)
    x = 2

Static methods are sometimes used to implement internal supporting code for a class, for example code to help manage created instances (memory management, system resources, persistence, locking, etc.). They're also used by certain design patterns (not discussed here).

#### Class methods: for alternative constructors

@classmethod is used to define class methods. A class method is a method that receives the class object as the first parameter instead of the instance.

    class Foo:
        def bar(self):
            print(self)

        @classmethod
        def spam(cls):
            print(cls)

    >>> f = Foo()
    >>> f.bar()
    <__main__.Foo object at 0x971690>   # The instance f
    >>> Foo.spam()
    <class '__main__.Foo'>              # The class Foo

Class methods are most often used as a tool for defining alternate constructors:

    import time

    class Date:
        def __init__(self, year, month, day):
            self.year = year
            self.month = month
            self.day = day

        @classmethod
        def today(cls):
            # Notice how the class is passed as an argument
            tm = time.localtime()
            # And used to create a new instance
            return cls(tm.tm_year, tm.tm_mon, tm.tm_mday)

    d = Date.today()

Class methods solve some tricky problems with features like inheritance:

    class Date:
        ...
        @classmethod
        def today(cls):
            # Gets the correct class (e.g. NewDate)
            tm = time.localtime()
            return cls(tm.tm_year, tm.tm_mon, tm.tm_mday)

    class NewDate(Date):
        ...

    d = NewDate.today()

## Reference

https://dabeaz-course.github.io/practical-python/Notes
2020-08-04 02:29:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1849653422832489, "perplexity": 10365.250274703872}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735851.15/warc/CC-MAIN-20200804014340-20200804044340-00492.warc.gz"}
http://ideas.repec.org/p/hal/wpaper/halshs-00648884.html
# Entropy and the value of information for investors

## Author Info

• Antonio Cabrales
• Olivier Gossner (PSE - Paris-Jourdan Sciences Economiques - CNRS : UMR8545 - École des Hautes Études en Sciences Sociales (EHESS) - École des Ponts ParisTech (ENPC) - École normale supérieure [ENS] - Paris - Institut national de la recherche agronomique (INRA); EEP-PSE - Paris School of Economics; LSE - London School of Economics and Political Science)
• Roberto Serrano

## Abstract

Consider any investor who fears ruin when facing any set of investments that satisfy no-arbitrage. Before investing, he can purchase information about the state of nature in the form of an information structure. Given his prior, information structure $\alpha$ is more informative than information structure $\beta$ if, whenever he is willing to buy $\beta$ at some price, he is also willing to buy $\alpha$ at that price. We show that this informativeness ordering is complete and is represented by the decrease in entropy of his beliefs, regardless of his preferences, initial wealth, or investment problem. We also show that no prior-independent informativeness ordering based on similar premises exists.

File URL: http://halshs.archives-ouvertes.fr/docs/00/64/88/84/PDF/wp201140.pdf

## Bibliographic Info

Paper provided by HAL in its series Working Papers with number halshs-00648884.

Handle: RePEc:hal:wpaper:halshs-00648884

Note: View the original document on the HAL open archive server: http://halshs.archives-ouvertes.fr/halshs-00648884

Contact details of provider: Web page: http://hal.archives-ouvertes.fr/

## Related research

Keywords: Informativeness; Information structures; Entropy; Decision under uncertainty; Investment; Blackwell ordering
2014-03-10 07:39:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4606819450855255, "perplexity": 8223.05103050218}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010693428/warc/CC-MAIN-20140305091133-00083-ip-10-183-142-35.ec2.internal.warc.gz"}
https://chemistry.stackexchange.com/questions/42274/reaction-of-alcohols-with-bromine
# Reaction of alcohols with bromine In a reaction between $\ce{R-OH}$ and $\ce{Br2}$, what will the product be? Initially I thought it would be $\ce{R-Br}$, but that doesn't seem right. I'm oscillating between the product being either $\ce{R-OBr}$ or no reaction occurring at all. I've searched a bit on the web too, but couldn't get an answer. What is the product?
2019-09-16 11:04:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7520972490310669, "perplexity": 315.80182924668804}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572517.50/warc/CC-MAIN-20190916100041-20190916122041-00438.warc.gz"}
https://www.whitman.edu/mathematics/calculus_late_online/section14.06.html
Coordinate systems are tools that let us use algebraic methods to understand geometry. While the rectangular (also called Cartesian) coordinates that we have been discussing are the most common, some problems are easier to analyze in alternate coordinate systems.

A coordinate system is a scheme that allows us to identify any point in the plane or in three-dimensional space by a set of numbers. In rectangular coordinates these numbers are interpreted, roughly speaking, as the lengths of the sides of a rectangular "box."

In two dimensions you may already be familiar with an alternative, called polar coordinates. In this system, each point in the plane is identified by a pair of numbers $(r,\theta)$. The number $\theta$ measures the angle between the positive $x$-axis and a vector with tail at the origin and head at the point, as shown in figure 14.6.1; the number $r$ measures the distance from the origin to the point. Either of these may be negative; a negative $\theta$ indicates the angle is measured clockwise from the positive $x$-axis instead of counter-clockwise, and a negative $r$ indicates the point at distance $|r|$ in the opposite of the direction given by $\theta$. Figure 14.6.1 also shows the point with rectangular coordinates $\ds (1,\sqrt3)$ and polar coordinates $(2,\pi/3)$, 2 units from the origin and $\pi/3$ radians from the positive $x$-axis.

Figure 14.6.1. Polar coordinates: the general case and the point with rectangular coordinates $\ds (1,\sqrt3)$.

We can extend polar coordinates to three dimensions simply by adding a $z$ coordinate; this is called cylindrical coordinates. Each point in three-dimensional space is represented by three coordinates $(r,\theta,z)$ in the obvious way: this point is $z$ units above or below the point $(r,\theta)$ in the $x$-$y$ plane, as shown in figure 14.6.2. The point with rectangular coordinates $\ds (1,\sqrt3, 3)$ and cylindrical coordinates $(2,\pi/3,3)$ is also indicated in figure 14.6.2.

Figure 14.6.2. Cylindrical coordinates: the general case and the point with rectangular coordinates $\ds (1,\sqrt3, 3)$.

Some figures with relatively complicated equations in rectangular coordinates will be represented by simpler equations in cylindrical coordinates. For example, the cylinder in figure 14.6.3 has equation $\ds x^2+y^2=4$ in rectangular coordinates, but equation $r=2$ in cylindrical coordinates.

Given a point $(r,\theta)$ in polar coordinates, it is easy to see (as in figure 14.6.1) that the rectangular coordinates of the same point are $(r\cos\theta,r\sin\theta)$, and so the point $(r,\theta,z)$ in cylindrical coordinates is $(r\cos\theta,r\sin\theta,z)$ in rectangular coordinates. This means it is usually easy to convert any equation from rectangular to cylindrical coordinates: simply substitute $$\eqalign{ x&=r\cos\theta\cr y&=r\sin\theta\cr}$$ and leave $z$ alone. For example, starting with $\ds x^2+y^2=4$ and substituting $x=r\cos\theta$, $y=r\sin\theta$ gives $$\eqalign{ r^2\cos^2\theta+r^2\sin^2\theta&=4\cr r^2(\cos^2\theta+\sin^2\theta)&=4\cr r^2&=4\cr r&=2.\cr }$$ Of course, it's easy to see directly that this defines a cylinder as mentioned above.

Cylindrical coordinates are an obvious extension of polar coordinates to three dimensions, but the use of the $z$ coordinate means they are not as closely analogous to polar coordinates as another standard coordinate system. In polar coordinates, we identify a point by a direction and distance from the origin; in three dimensions we can do the same thing, in a variety of ways.
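The substitutions above are easy to sanity-check numerically. Here is a minimal Python sketch (an editorial addition, not part of the original text; the function names are invented for illustration) that converts between rectangular and cylindrical coordinates and reproduces the example point $(1,\sqrt3,3) \leftrightarrow (2,\pi/3,3)$:

```python
import math

def rect_to_cylindrical(x, y, z):
    """Convert rectangular (x, y, z) to cylindrical (r, theta, z)."""
    r = math.hypot(x, y)      # r = sqrt(x^2 + y^2)
    theta = math.atan2(y, x)  # angle from the positive x-axis, in radians
    return r, theta, z

def cylindrical_to_rect(r, theta, z):
    """Substitute x = r cos(theta), y = r sin(theta); z is left alone."""
    return r * math.cos(theta), r * math.sin(theta), z

# The point with rectangular coordinates (1, sqrt(3), 3) from the text:
print(rect_to_cylindrical(1, math.sqrt(3), 3))  # (2.0, 1.0471..., 3), i.e. (2, pi/3, 3)
```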
The question is: how do we represent a direction? One way is to give the angle of rotation, $\theta$, from the positive $x$ axis, just as in cylindrical coordinates, and also an angle of rotation, $\phi$, from the positive $z$ axis. Roughly speaking, $\theta$ is like longitude and $\phi$ is like latitude. (Earth longitude is measured as a positive or negative angle from the prime meridian, and is always between 0 and 180 degrees, east or west; $\theta$ can be any positive or negative angle, and we use radians except in informal circumstances. Earth latitude is measured north or south from the equator; $\phi$ is measured from the north pole down.) This system is called spherical coordinates; the coordinates are listed in the order $(\rho,\theta,\phi)$, where $\rho$ is the distance from the origin, and like $r$ in cylindrical coordinates it may be negative. The general case and an example are pictured in figure 14.6.4; the length marked $r$ is the $r$ of cylindrical coordinates.

Figure 14.6.4. Spherical coordinates: the general case and the point with rectangular coordinates $\ds (1,\sqrt3 , 3)$.

As with cylindrical coordinates, we can easily convert equations in rectangular coordinates to the equivalent in spherical coordinates, though it is a bit more difficult to discover the proper substitutions. Figure 14.6.5 shows the typical point in spherical coordinates from figure 14.6.4, viewed now so that the arrow marked $r$ in the original graph appears as the horizontal "axis" in the left hand graph. From this diagram it is easy to see that the $z$ coordinate is $\rho\cos\phi$, and that $r=\rho\sin\phi$, as shown. Thus, in converting from rectangular to spherical coordinates we will replace $z$ by $\rho\cos\phi$. To see the substitutions for $x$ and $y$ we now view the same point from above, as shown in the right hand graph. The hypotenuse of the triangle in the right hand graph is $r=\rho\sin\phi$, so the sides of the triangle, as shown, are $x=r\cos\theta=\rho\sin\phi\cos\theta$ and $y=r\sin\theta=\rho\sin\phi\sin\theta$. So the upshot is that to convert from rectangular to spherical coordinates, we make these substitutions: $$\eqalign{ x&=\rho\sin\phi\cos\theta\cr y&=\rho\sin\phi\sin\theta\cr z&=\rho\cos\phi.\cr}$$

Figure 14.6.5. Converting from rectangular to spherical coordinates.

Example 14.6.1 As the cylinder had a simple equation in cylindrical coordinates, so does the sphere in spherical coordinates: $\rho=2$ is the sphere of radius 2. If we start with the Cartesian equation of the sphere and substitute, we get the spherical equation: $$\eqalign{ x^2+y^2+z^2&=2^2\cr \rho^2\sin^2\phi\cos^2\theta+ \rho^2\sin^2\phi\sin^2\theta+\rho^2\cos^2\phi&=2^2\cr \rho^2\sin^2\phi(\cos^2\theta+\sin^2\theta)+\rho^2\cos^2\phi&=2^2\cr \rho^2\sin^2\phi+\rho^2\cos^2\phi&=2^2\cr \rho^2(\sin^2\phi+\cos^2\phi)&=2^2\cr \rho^2&=2^2\cr \rho&=2\cr }$$

Example 14.6.2 Find an equation for the cylinder $\ds x^2+y^2=4$ in spherical coordinates. Proceeding as in the previous example: $$\eqalign{ x^2+y^2&=4\cr \rho^2\sin^2\phi\cos^2\theta+ \rho^2\sin^2\phi\sin^2\theta&=4\cr \rho^2\sin^2\phi(\cos^2\theta+\sin^2\theta)&=4\cr \rho^2\sin^2\phi&=4\cr \rho\sin\phi&=2\cr \rho&={2\over\sin\phi}\cr }$$

## Exercises 14.6

Ex 14.6.1 Convert the following points in rectangular coordinates to cylindrical and spherical coordinates: a. $(1,1,1)$ b. $(7,-7,5)$ c. $(\cos(1),\sin(1),1)$ d. $(0,0,-\pi)$ (answer)

Ex 14.6.2 Find an equation for the sphere $\ds x^2+y^2+z^2=4$ in cylindrical coordinates. (answer)
Ex 14.6.3 Find an equation for the $y$-$z$ plane in cylindrical coordinates. (answer)

Ex 14.6.4 Find an equation equivalent to $\ds x^2+y^2+2z^2+2z-5=0$ in cylindrical coordinates. (answer)

Ex 14.6.5 Suppose the curve $\ds z=e^{-x^2}$ in the $x$-$z$ plane is rotated around the $z$ axis. Find an equation for the resulting surface in cylindrical coordinates. (answer)

Ex 14.6.6 Suppose the curve $\ds z=x$ in the $x$-$z$ plane is rotated around the $z$ axis. Find an equation for the resulting surface in cylindrical coordinates. (answer)

Ex 14.6.7 Find an equation for the plane $y=0$ in spherical coordinates. (answer)

Ex 14.6.8 Find an equation for the plane $z=1$ in spherical coordinates. (answer)

Ex 14.6.9 Find an equation for the sphere with radius 1 and center at $(0,1,0)$ in spherical coordinates. (answer)

Ex 14.6.10 Find an equation for the cylinder $\ds x^2+y^2=9$ in spherical coordinates. (answer)

Ex 14.6.11 Suppose the curve $\ds z=x$ in the $x$-$z$ plane is rotated around the $z$ axis. Find an equation for the resulting surface in spherical coordinates. (answer)

Ex 14.6.12 Plot the polar equations $r=\sin(\theta)$ and $r=\cos(\theta)$ and comment on their similarities. (If you get stuck on how to plot these, you can multiply both sides of each equation by $r$ and convert back to rectangular coordinates.)

Ex 14.6.13 Extend exercises 6 and 11 by rotating the curve $z=mx$ around the $z$ axis and converting to both cylindrical and spherical coordinates. (answer)

Ex 14.6.14 Convert the spherical formula $\rho=\sin \theta \sin \phi$ to rectangular coordinates and describe the surface defined by the formula. (Hint: Multiply both sides by $\rho$.) (answer)

Ex 14.6.15 We can describe points in the first octant by $x >0$, $y>0$ and $z>0$. Give similar inequalities for the first octant in cylindrical and spherical coordinates. (answer)
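As with the cylindrical sketch above, the spherical substitutions can be checked in a few lines of Python (again an editorial illustration with invented helper names, not part of the text); this is also a convenient way to verify answers to exercise 14.6.1:

```python
import math

def rect_to_spherical(x, y, z):
    """Convert rectangular (x, y, z) to spherical (rho, theta, phi), for rho > 0."""
    rho = math.sqrt(x * x + y * y + z * z)  # distance from the origin
    theta = math.atan2(y, x)                # rotation from the positive x-axis
    phi = math.acos(z / rho)                # angle measured down from the positive z-axis
    return rho, theta, phi

def spherical_to_rect(rho, theta, phi):
    """Apply x = rho sin(phi) cos(theta), y = rho sin(phi) sin(theta), z = rho cos(phi)."""
    return (rho * math.sin(phi) * math.cos(theta),
            rho * math.sin(phi) * math.sin(theta),
            rho * math.cos(phi))

# Exercise 14.6.1(a): the point (1, 1, 1).
print(rect_to_spherical(1, 1, 1))  # (sqrt(3), pi/4, acos(1/sqrt(3))) = (1.732..., 0.785..., 0.955...)
```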
2019-10-14 02:27:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9564377665519714, "perplexity": 214.98353609749879}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986648481.7/warc/CC-MAIN-20191014003258-20191014030258-00430.warc.gz"}
https://www.gamedev.net/forums/topic/551873-what-are-derivatives-in-hlslglsl/
What are derivatives in HLSL/GLSL?

Recommended Posts

A recent thread on GDNet sparked my interest once again. I've been taking calculus classes for 3 years now and I completely understand what a derivative calculates, and I know what partial derivatives calculate, but for some reason I fail to understand the point and application of the derivative functions in HLSL and GLSL. Specifically ddx(), ddy(), dFdx(), and dFdy(). Can someone explain what exactly these do? I've looked at the MSDN documentation and that doesn't explain much. I've also searched google, which turned up minimal results that I still could not understand. Also, could you use these functions any place special for shaders in a 2D quad-based graphics engine?

I recently had use of a derivative, and while I didn't use those specific functions, I think the concept still applies maybe (: I was simulating waves in a pool of lava (could have been an ocean) in a vertex shader by taking sin(position.x + CurrentTime) + sin(position.z + CurrentTime). That made it so there were waves on both the X and Z axis (X-Z describes the "flat plane" of my world, Y is up). Anyhow, I needed to get the normal of the points to be able to do lighting. Well, the slope of sine is cosine, so I was able to use that to get the slope vector (in 2d) per axis, and was able to flip X and Y and negate one to get the normal vector (in 2d) per axis, then combine them together to get the 3d normal. I'll bet that maybe you'd use those functions for similar reasons? I could be totally wrong though.

That's a good code example, but no, I don't think that's what they are used for exactly. That's more of a conventional analysis of the derivative that I'm familiar with: d/dx[sin x] = cos x. But here's the definition for ddx().

It tells you how fast a value is changing at the current pixel. It can be used for mip-maps for example, since if the value is changing more than one texel per pixel, then a lower mip-map level should be used.

I'm sorry, but I'm just not understanding what you mean by that. Do you have any diagrams that might be able to help? I don't really understand how it measures "how fast a value is changing" at an instant when you only have the data from one frame. And it takes scalars, vectors, and matrices as input? I'm looking for some good and simple example usages of these functions.

It returns how fast the value is changing compared to that value in the next/previous pixel in screen space. So if you're rendering the pixel at screen coordinates (x, y) then ddx(n) will tell you how fast the value n has been changing between (x-1,y) and (x+1,y). Similarly with ddy.

Quote: Original post by Codeka: It returns how fast the value is changing compared to that value in the next/previous pixel in screen space. So if you're rendering the pixel at screen coordinates (x, y) then ddx(n) will tell you how fast the value n has been changing between (x-1,y) and (x+1,y). Similarly with ddy.

Alright, it's starting to make a little sense. The last thing I need clarification on is where 'n' is coming from. Is that texture coordinates, a color? And could you provide a simple example of ddx with a common input and then give the output.

Yeah, the docs (Cg & MSDN) aren't good at explaining what they do.
Here's an excellent thread even with examples of input & output. Another excellent one (see mattnewport's post).

If I understood it correctly (no one seems to explain it correctly), those functions have little to do with calculus, though their main purpose is to compute deltas. What I DO know is that those two functions are very, very useful when doing dynamic conditional branching. Since the derivatives are used to calculate the mipmap & anisotropic filtering, if one of the adjacent pixels in the 2x2 block (note: not all video cards use a 2x2 grid) follows a different code path from the rest, the deltas can't be correctly calculated; therefore you can't use the tex2D instruction inside a conditional, because tex2D can't calculate the right lod or axis of anisotropy. Example: the following code won't work in PS 3.0 (where 'x' isn't constant):

    if( x < 5 )
    {
        tex2D( mySampler, position ); // Error: some of the pixels in the 2x2 block
                                      // may have x >= 5; the delta can't be computed
    }

To solve this, you can calculate the derivative outside the conditional (as the deltas will be the same for all the pixels in the block), and provide the deltas manually later in the conditional branch:

    float2 derivX = ddx( position );
    float2 derivY = ddy( position );
    if( x < 5 )
    {
        // Whether x < 5 or not, derivX & derivY will stay consistent
        // in all the pixels from the grid
        tex2Dgrad( mySampler, position, derivX, derivY );
    }

Though, if you don't care about mips/lod and all that stuff (which often happens with PSSM shadows where you use conditional branching and you have just one lod) you may just use tex2Dlod and manually select the top mip (i.e. tex2Dlod( mySampler, position.xyyy )).

Cheers
Dark Sylinc

Thanks Goldberg, that helped a lot! That covers everything I needed.
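To make the "neighbouring pixels" idea from the thread concrete, here is a CPU-side analogy in Python/NumPy rather than shader code (an editorial illustration, not from the thread): the GPU shades pixels in small blocks and, roughly speaking, ddx/ddy report the difference of a value between a pixel and its horizontal or vertical neighbour.

```python
import numpy as np

# A toy "screen" where each pixel carries a value n (think: an interpolated
# texture coordinate). Here n = 0.5*x + 0.25*y as a function of pixel position.
n = np.fromfunction(lambda y, x: 0.5 * x + 0.25 * y, (4, 6))

# Differences between horizontally / vertically adjacent pixels: this is the
# kind of per-pixel delta that ddx(n) / ddy(n) report in a pixel shader.
ddx_n = np.diff(n, axis=1)
ddy_n = np.diff(n, axis=0)

print(ddx_n[0, 0])  # 0.5  -> n changes by 0.5 per pixel step in x
print(ddy_n[0, 0])  # 0.25 -> n changes by 0.25 per pixel step in y
```

A delta like 0.5 tells the hardware that the sampled value sweeps across roughly half a unit per screen pixel, which is exactly the information it needs to pick a mip level.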
2018-10-19 15:09:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3874782621860504, "perplexity": 1611.0347514285838}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512411.13/warc/CC-MAIN-20181019145850-20181019171350-00190.warc.gz"}
https://byjus.com/questions/who-stated-the-law-of-conservation-of-mass/
# Who stated the Law of conservation of mass?

Antoine Laurent Lavoisier stated the law of conservation of mass.

## Law of Conservation of Mass

The law of conservation of mass states that the mass in an isolated system can neither be created nor destroyed, but can be transformed from one form to another. The law of conservation of mass can be represented in differential form using the continuity equation of fluid mechanics and continuum mechanics as:

$$\frac{\partial \rho }{\partial t}+\nabla \cdot (\rho \mathbf{v})=0$$

Where,

• $$\rho$$ is the density
• $$t$$ is the time
• $$\mathbf{v}$$ is the velocity
• $$\nabla \cdot$$ is the divergence operator
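The differential form can be illustrated numerically. The sketch below (an editorial addition; a rough one-dimensional discretisation, not from the source) advances $\rho$ by the continuity equation on a periodic grid and confirms that the total mass $\sum \rho \, \Delta x$ stays constant:

```python
import numpy as np

# 1-D continuity equation: d(rho)/dt + d(rho*v)/dx = 0, forward-Euler in time,
# centred differences in space, periodic boundary conditions.
nx, dx, dt = 100, 0.1, 0.01
x = np.arange(nx) * dx
rho = 1.0 + 0.1 * np.sin(2 * np.pi * x / (nx * dx))  # initial density profile
v = np.full(nx, 0.5)                                  # uniform velocity field

mass_before = rho.sum() * dx
for _ in range(500):
    flux = rho * v
    div = (np.roll(flux, -1) - np.roll(flux, 1)) / (2 * dx)  # d(rho*v)/dx
    rho = rho - dt * div

mass_after = rho.sum() * dx
print(mass_before, mass_after)  # equal up to rounding: mass is conserved
```

With periodic boundaries the spatial differences telescope to zero when summed, so the total mass is preserved exactly (up to floating-point rounding) at every step.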
2021-09-25 11:45:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6516505479812622, "perplexity": 367.7460942467331}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057622.15/warc/CC-MAIN-20210925112158-20210925142158-00212.warc.gz"}
https://nrich.maths.org/152/solution
##### Stage: 1 Challenge Level

We had over 60 correct answers to this challenge. Here are just some of them that also say a bit about what they did.

Nicholas from Congleton wrote:

I used six lego bricks to pretend they were beads on an abacus. I started with six bricks on one side (the units) to make $6$ then swapped it to the tens side to make $60$. I then took one brick and moved it over to the units side to make $51$, then swapped them over (making $15$). I moved another from the $5$ to the $1$. Then I repeated the process until I got to three on each side. Answers: $6, 60, 15, 51, 42, 24$ and $33$. P.S.: I wrote down the answers on a jotter as I went along.

Emily from Mount School wrote:

With six beads I made $6, 15, 24, 33, 42, 51, 60$. I started with all the beads on the units and then just kept moving one across. I got seven numbers. They were $6, 15, 24, 33, 42, 51$ and $60$.
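The children's bead-moving strategy amounts to listing every way of splitting six beads between a tens wire and a units wire. A few lines of Python (an editorial addition, not part of the solutions) confirm that exactly seven numbers arise:

```python
# Six beads split between a tens wire and a units wire: the resulting numbers
# are exactly those whose digits sum to 6.
numbers = sorted(10 * tens + units
                 for tens in range(7)
                 for units in range(7)
                 if tens + units == 6)
print(numbers)       # [6, 15, 24, 33, 42, 51, 60]
print(len(numbers))  # 7, matching Nicholas's and Emily's answers
```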
2016-08-27 04:41:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5590519309043884, "perplexity": 1206.5473458742388}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982297973.29/warc/CC-MAIN-20160823195817-00050-ip-10-153-172-175.ec2.internal.warc.gz"}
https://muellersandra.github.io/news/
# My latest posts • 16 Aug 2022 » 10th Indian Conference on Logic and its Applications (ICLA) - TBA I am invited to give an invited talk at the 10th Indian Conference on Logic and its Applications (ICLA) in Indore, India, March 3-5, 2023. Title and abstract of this talk will be announced in due course. • 02 May 2022 » (with P. Schlicht, D. Schrittesser, and T. Weinert) Lebesgue's density theorem and definable selectors for ideals Israel Journal of Mathematics. Appeared online. DOI: 10.1007/s11856-022-2312-8. PDF. arXiv. Bibtex. • 16 Feb 2022 » Winter Meeting of the ASL with the JMM - Universally Baire Sets, Determinacy, and Inner Models I am invited to give an invited address at the 2023 Winter Meeting of the Association for Symbolic Logic with the JMM in Boston, USA, January 4-7, 2023. Universally Baire Sets, Determinacy, and Inner Models Computing the large cardinal strength of a given statement is one of the key research directions in set theory. Fruitful tools to tackle such questions are given by inner model theory. The study of inner models was initiated by Gödel’s analysis of the constructible universe $L$. Later, it was extended to canonical inner models with large cardinals, e.g., measurable cardinals, strong cardinals or Woodin cardinals, which were introduced and studied by Jensen, Mitchell, Steel, Woodin, Sargsyan, and others. We will outline the role universally Baire sets play in the study of inner models and their connections to determinacy axioms. In particular, we will discuss recent results on $\mathsf{Sealing}$, a formalization due to Woodin of the statement that the theory of the universally Baire sets cannot be changed by forcing. • 15 Jan 2022 » Baltic Set Theory Seminar - A stationary-tower-free proof of Woodin's Sealing Theorem I am invited to give a series of talks in the Baltic Set Theory Seminar, an online seminar organized by Grigor Sargsyan. This will be after the summer break, probably in September 2022. A stationary-tower-free proof of Woodin’s Sealing Theorem I will present a proof of $\mathsf{Sealing}$ from a supercompact and a class of Woodin cardinals using genericity iterations. The proof is joint work with Sargsyan and Wcisło, and builds on the work of Sargsyan-Trang. • 03 Jan 2022 » PhDs in Logic XIII, Turin - Highlights from infinite games, mice, and their connection I am invited to give a talk at the PhDs in Logic XIII conference in Turin, Italy, September 5 - 7, 2022. Highlights from infinite games, mice, and their connection The study of inner models was initiated by Gödel’s analysis of the constructible universe. Later, the study of canonical inner models with large cardinals, e.g., measurable cardinals, strong cardinals or Woodin cardinals, was pioneered by Jensen, Mitchell, Steel, and others. Around the same time, the study of infinite two-player games was driven forward by Martin’s proof of analytic determinacy from a measurable cardinal, Borel determinacy from ZFC, and Martin and Steel’s proof of levels of projective determinacy from Woodin cardinals with a measurable cardinal on top. First Woodin and later Neeman improved the result in the projective hierarchy by showing that in fact the existence of a countable iterable model, a mouse, with Woodin cardinals and a top measure suffices to prove determinacy in the projective hierarchy. This opened up the possibility for an optimal result stating the equivalence between local determinacy hypotheses and the existence of mice in the projective hierarchy. 
In this talk, we will outline the main concepts and results connecting determinacy hypotheses with the existence of mice with large cardinals as well as recent progress in the area. • 02 Jan 2022 » European Set Theory Conference, Turin - Universally Baire Sets and the Inner Model Program I am invited to give a plenary talk at the European Set Theory Conference 2022 in Turin, Italy, August 29 - September 2, 2022. Universally Baire Sets and the Inner Model Program Universally Baire sets originate in work of Schilling and Vaught, and they were first systematically studied by Feng, Magidor, and Woodin. Since then they play a prominent role in many areas of set theory. We will discuss recent progress on their relationship to the Inner Model Program. First, we will outline the resolution of Sargsyan’s Conjecture on the large cardinal strength of determinacy when all sets are universally Baire. The second part of the talk will focus on sealing the theory of the universally Baire sets. Woodin showed in his famous Sealing Theorem that in the presence of a proper class of Woodin cardinals $\mathsf{Sealing}$, a generic absoluteness principle for the theory of the universally Baire sets of reals, holds after collapsing a supercompact cardinal. We will outline the importance of $\mathsf{Sealing}$ and discuss a new and stationary-tower-free proof of Woodin’s Sealing Theorem that is based on Sargsyan’s and Trang’s proof of $\mathsf{Sealing}$ from iterability. The second part is joint work with Grigor Sargsyan and Bartosz Wcisło. • 05 Oct 2021 » (with P. Lücke) Sigma_1-definability at higher cardinals: Thin sets, almost disjoint families and long well-orders Submitted. PDF. arXiv. Bibtex. • 23 Aug 2021 » (with P. Schlicht) Uniformization and Internal Absoluteness Accepted for publication in the Proceedings of the AMS. PDF. arXiv. Bibtex. • 21 Aug 2021 » Set Theory Conference in Jerusalem - Inner Models, Determinacy, and Sealing I am invited to give a talk at the conference Advances in Set Theory that will take place at the Hebrew University of Jerusalem, July 10-14, 2022. Inner Models, Determinacy, and Sealing Inner model theory has been very successful in connecting determinacy axioms to the existence of inner models with large cardinals and other natural hypotheses. Recent results of Larson, Sargsyan, and Trang suggest that a Woodin limit of Woodin cardinals is a natural barrier for our current methods to prove these connections. One reason for this comes from Sealing, a generic absoluteness principle for the theory of the universally Baire sets of reals introduced by Woodin. Woodin showed in his famous Sealing Theorem that in the presence of a proper class of Woodin cardinals Sealing holds after collapsing a supercompact cardinal. I will outline the importance of Sealing and discuss a new and stationary-tower-free proof of Woodin’s Sealing Theorem that is based on Sargsyan’s and Trang’s proof of Sealing from iterability. This is joint work with Grigor Sargsyan and Bartosz Wcisło. • 20 Aug 2021 » Set Theory Workshop at the Erwin Schrödinger Institute, Vienna - Preserving universally Baire sets and Sealing I am invited to give a talk at a Set Theory Workshop at the Erwin Schrödinger Institute that will take place in Vienna, July 4-8, 2022. Preserving universally Baire sets and Sealing Universally Baire sets play a central role in many areas of set theory. In inner model theory many objects we construct are universally Baire and this is crucial as it allows us to extend them onto generic extensions. 
Sealing, a generic absoluteness principle for the theory of the universally Baire sets introduced by Woodin, is therefore an obstruction to constructing canonical inner models. In his famous Sealing Theorem, Woodin showed that in the presence of a proper class of Woodin cardinals Sealing holds after collapsing $2^{2^\kappa}$ for a supercompact cardinal $\kappa$. We will outline a new and stationary-tower-free proof of Woodin's Sealing Theorem that is based on Sargsyan's and Trang's proof of Sealing from iterability. A key new technical concept in our proof is the preservation of universally Baire sets in ultrapowers by extenders. This is joint work with Grigor Sargsyan and Bartosz Wcisło. • 19 Aug 2021 » Logic Colloquium Reykjavik, Iceland - A stationary-tower-free proof of Sealing from a supercompact I am invited to give a talk in the special session on set theory at the Logic Colloquium 2022 taking place in Reykjavik, Iceland, June 27 - July 1, 2022. A stationary-tower-free proof of $\mathsf{Sealing}$ from a supercompact $\mathsf{Sealing}$ is a generic absoluteness principle for the theory of the universally Baire sets of reals introduced by Woodin. It is deeply connected to the Inner Model Program and plays a prominent role in recent advances in inner model theory. Woodin showed in his famous Sealing Theorem that in the presence of a proper class of Woodin cardinals $\mathsf{Sealing}$ holds after collapsing a supercompact cardinal. I will outline the importance of $\mathsf{Sealing}$ and discuss a new and stationary-tower-free proof of Woodin's Sealing Theorem that is based on Sargsyan's and Trang's proof of $\mathsf{Sealing}$ from iterability. This is joint work with Grigor Sargsyan and Bartosz Wcisło. • 18 Aug 2021 » Münster conference on inner model theory - A stationary-tower-free proof of Woodin's Sealing Theorem I am invited to give a talk at the Münster conference on inner model theory that will take place June 20 - July 1, 2022. A stationary-tower-free proof of Woodin's Sealing Theorem I will present a proof of $\mathsf{Sealing}$ from a supercompact and a class of Woodin cardinals using genericity iterations. The proof is joint work with Sargsyan and Wcisło, and builds on the work of Sargsyan-Trang. • 16 Aug 2021 » Set Theory Seminar, University of Vienna - Inner Models, Determinacy, and Sealing I was invited to give a talk in the Set Theory Seminar of the University of Vienna on May 24, 2022. Inner Models, Determinacy, and Sealing Inner model theory has been very successful in connecting determinacy axioms to the existence of inner models with large cardinals and other natural hypotheses. Recent results of Larson, Sargsyan, and Trang suggest that a Woodin limit of Woodin cardinals is a natural barrier for our current methods to prove these connections. One reason for this comes from Sealing, a generic absoluteness principle for the theory of the universally Baire sets of reals introduced by Woodin. Woodin showed in his famous Sealing Theorem that in the presence of a proper class of Woodin cardinals Sealing holds after collapsing a supercompact cardinal. I will outline the importance of Sealing and discuss a new and stationary-tower-free proof of Woodin's Sealing Theorem that is based on Sargsyan's and Trang's proof of Sealing from iterability. This is joint work with Grigor Sargsyan and Bartosz Wcisło.
• 15 Aug 2021 » Set Theory Seminar, University of Barcelona - Inner Models, Determinacy, and Sealing I was invited to give a talk in the Barcelona Set Theory Seminar on May 11, 2022. Inner Models, Determinacy, and Sealing Inner model theory has been very successful in connecting determinacy axioms to the existence of inner models with large cardinals and other natural hypotheses. Recent results of Larson, Sargsyan, and Trang suggest that a Woodin limit of Woodin cardinals is a natural barrier for our current methods to prove these connections. One reason for this comes from Sealing, a generic absoluteness principle for the theory of the universally Baire sets of reals introduced by Woodin. Woodin showed in his famous Sealing Theorem that in the presence of a proper class of Woodin cardinals Sealing holds after collapsing a supercompact cardinal. I will outline the importance of Sealing and discuss a new and stationary-tower-free proof of Woodin's Sealing Theorem that is based on Sargsyan's and Trang's proof of Sealing from iterability. This is joint work with Grigor Sargsyan and Bartosz Wcisło. • 14 Aug 2021 » Algebra Seminar, TU Wien - A journey through the world of mice and games I was invited to give a talk in the TU Wien FG1 Seminar, the Research Seminars of the Set Theory and the Universal Algebra groups, on April 29, 2022. A journey through the world of mice and games This talk will be an informal and gentle introduction to the research area called "inner model theory". I will introduce determinacy of infinite two player games and outline classical as well as recent results connecting this notion with canonical models of set theory (called "mice"). We will discuss versions of the inner model problem and why they are central to set theory, together with recent results suggesting that new methods are required for major advances on these problems. • 13 Aug 2021 » Logic Colloquium, University of Vienna - The Interplay of Determinacy, Large Cardinals, and Inner Models I was invited to give a talk in the Logic Colloquium of the University of Vienna on March 10, 2022. The Interplay of Determinacy, Large Cardinals, and Inner Models The standard axioms of set theory, Zermelo-Fraenkel set theory with Choice (ZFC), do not suffice to answer all questions in mathematics. While this follows abstractly from Kurt Gödel's famous incompleteness theorems, we nowadays know numerous concrete examples of such questions. In addition to a large number of problems in set theory, even many problems outside of set theory have been shown to be unsolvable, meaning neither their truth nor their failure can be proven from ZFC. A major part of set theory is devoted to attacking this problem by studying various extensions of ZFC and their properties with the overall goal of identifying the "right" axioms for mathematics that settle these problems. Determinacy assumptions are canonical extensions of ZFC that postulate the existence of winning strategies in natural infinite two-player games. Such assumptions are known to enhance sets of real numbers with a great deal of canonical structure. Other natural and well-studied extensions of ZFC are given by the hierarchy of large cardinal axioms. Inner model theory provides canonical models for many large cardinal axioms. Determinacy assumptions, large cardinal axioms, and their consequences are widely used and have many fruitful implications in set theory and even in other areas of mathematics.
Many applications, in particular, proofs of consistency strength lower bounds, exploit the interplay of determinacy axioms, large cardinals, and inner models. In this talk I will survey recent developments as well as my contribution to this flourishing area. • 13 Aug 2021 » (with G. Sargsyan) HOD in inner models with Woodin cardinals The Journal of Symbolic Logic. Volume 86, Issue 3, September 2021. Pages 871-896. DOI: 10.1017/jsl.2021.61. PDF. arXiv. Bibtex. • 12 Aug 2021 » Leeds University Models and Sets seminar - The Interplay of Determinacy, Large Cardinals, and Inner Models I was invited to give a virtual talk in the Leeds University Models and Sets seminar on March 8, 2022. The talk will be at 13:45 local time, i.e., 14:45 Vienna time. The Interplay of Determinacy, Large Cardinals, and Inner Models The standard axioms of set theory, Zermelo-Fraenkel set theory with Choice (ZFC), do not suffice to answer all questions in mathematics. While this follows abstractly from Kurt Gödel's famous incompleteness theorems, we nowadays know numerous concrete examples of such questions. In addition to a large number of problems in set theory, even many problems outside of set theory have been shown to be unsolvable, meaning neither their truth nor their failure can be proven from ZFC. A major part of set theory is devoted to attacking this problem by studying various extensions of ZFC and their properties with the overall goal of identifying the "right" axioms for mathematics that settle these problems. Determinacy assumptions are canonical extensions of ZFC that postulate the existence of winning strategies in natural infinite two-player games. Such assumptions are known to enhance sets of real numbers with a great deal of canonical structure. Other natural and well-studied extensions of ZFC are given by the hierarchy of large cardinal axioms. Inner model theory provides canonical models for many large cardinal axioms. Determinacy assumptions, large cardinal axioms, and their consequences are widely used and have many fruitful implications in set theory and even in other areas of mathematics. Many applications, in particular, proofs of consistency strength lower bounds, exploit the interplay of determinacy axioms, large cardinals, and inner models. In this talk I will survey recent developments as well as my contribution to this flourishing area. • 22 Jul 2021 » Winter Meeting of the ASL with the JMM - Lower bounds in set theory (cancelled) I was invited to give an invited address at the 2022 Winter Meeting of the Association for Symbolic Logic with the JMM in Seattle, Washington, January 5-8, 2022. Due to the pandemic situation the conference is postponed and will be held as a virtual meeting later in 2022. The ASL part of the meeting is cancelled and I have been invited to speak at the 2023 JMM in Boston instead. Lower bounds in set theory Computing the large cardinal strength of a given statement is one of the key research directions in set theory. Fruitful tools to tackle such questions are given by inner model theory. The study of inner models was initiated by Gödel's analysis of the constructible universe $L$. Later, it was extended to canonical inner models with large cardinals, e.g. measurable cardinals, strong cardinals or Woodin cardinals, which were introduced and studied by Jensen, Mitchell, Steel, Woodin, Sargsyan, and others.
We will outline two recent applications where inner model theory is used to obtain lower bounds in large cardinal strength for statements that do not involve inner models. The first result, joint with Y. Hayut, involves combinatorics of infinite trees and the perfect subtree property for weakly compact cardinals $\kappa$. The second result studies the strength of a model of determinacy in which all sets of reals are universally Baire. Sargsyan conjectured that the existence of such a model is as strong as the existence of a cardinal that is both a limit of Woodin cardinals and a limit of strong cardinals. Larson, Sargsyan and Wilson showed that this would be optimal via a generalization of Woodin's derived model construction. We will discuss a new translation procedure for hybrid mice extending work of Steel, Zhu and Sargsyan and use this to prove Sargsyan's conjecture. • 21 Jul 2021 » Vorstellungsvortrag (introductory talk) an der Fakultät für Mathematik und Geoinformation - The Interplay of Determinacy, Large Cardinals, and Inner Models On November 17th, 2021, I will give a talk at the Faculty of Mathematics and Geoinformation at TU Wien. If you want to attend this zoom talk, please send me an e-mail. The Interplay of Determinacy, Large Cardinals, and Inner Models The standard axioms of set theory, Zermelo-Fraenkel set theory with Choice (ZFC), do not suffice to answer all questions in mathematics. While this follows abstractly from Kurt Gödel's famous incompleteness theorems, we nowadays know numerous concrete examples of such questions. In addition to a large number of problems in set theory, even many problems outside of set theory have been shown to be unsolvable, meaning neither their truth nor their failure can be proven from ZFC. A major part of set theory is devoted to attacking this problem by studying various extensions of ZFC and their properties with the overall goal of identifying the "right" axioms for mathematics that settle these problems. Determinacy assumptions are canonical extensions of ZFC that postulate the existence of winning strategies in natural infinite two-player games. Such assumptions are known to enhance sets of real numbers with a great deal of canonical structure. Other natural and well-studied extensions of ZFC are given by the hierarchy of large cardinal axioms. Inner model theory provides canonical models for many large cardinal axioms. Determinacy assumptions, large cardinal axioms, and their consequences are widely used and have many fruitful implications in set theory and even in other areas of mathematics. Many applications, in particular, proofs of consistency strength lower bounds, exploit the interplay of determinacy axioms, large cardinals, and inner models. In this talk I will survey my contribution to this flourishing area. This, in particular, includes results on connecting the determinacy of longer games to canonical inner models with many Woodin cardinals, a new lower bound for a combinatorial statement about infinite trees, as well as an application of determinacy answering a question in general topology. • 20 Jul 2021 » Cross-Alps Logic Seminar - Large Cardinals and Determinacy I am invited to give a talk on Nov 12, 2021, in the Cross-Alps Logic Seminar, an online seminar organized by the logic groups of Udine, Turin, Genoa and Lausanne. Large Cardinals and Determinacy Determinacy assumptions, large cardinal axioms, and their consequences are widely used and have many fruitful implications in set theory and even in other areas of mathematics.
Many applications, in particular, proofs of consistency strength lower bounds, exploit the interplay of determinacy axioms, large cardinals, and inner models. I will survey some recent results in this flourishing area. This, in particular, includes results on connecting the determinacy of longer games to canonical inner models with many Woodin cardinals, a new lower bound for a combinatorial statement about infinite trees, as well as an application of determinacy answering a question in general topology. A recording of this talk is available here. • 20 Jul 2021 » (with C. J. Eagle, C. Hamel, and F. D. Tall) An undecidable extension of Morley's theorem on the number of countable models Submitted. PDF. arXiv. Bibtex. • 20 Jul 2021 » DMV-ÖMG Annual Conference 2021 - Uniformization and internal absoluteness I was invited to give a talk in the Sektion Logik at the DMV-ÖMG Annual Conference 2021 taking place virtually from Passau, Germany, Sep 27 - Oct 1, 2021. Uniformization and internal absoluteness Measurability with respect to ideals is known to be tightly connected to absoluteness principles for certain forcing notions. In joint work with Philipp Schlicht we study a uniformization principle that postulates the existence of a uniformizing function on a large set, relative to a given ideal. We prove that for all ideals I such that Borel/I is proper, this uniformization principle is equivalent to an absoluteness principle for projective formulas that we call internal absoluteness. In addition, we show that it is equivalent to measurability with respect to I together with 1-step absoluteness for the poset Borel/I. These equivalences are new even for Cohen and random forcing and are, to the best of our knowledge, the first precise equivalences between regularity and absoluteness beyond the second level of the projective hierarchy. Slides • 18 Jul 2021 » 16th International Luminy Workshop in Set Theory - The strength of determinacy when all sets are universally Baire During the week of September 13 - 17, 2021 I virtually attended the 16th International Luminy Workshop in Set Theory and gave a talk on September 15, 2021. The strength of determinacy when all sets are universally Baire Abstract: The large cardinal strength of the Axiom of Determinacy when enhanced with the hypothesis that all sets of reals are universally Baire is known to be much stronger than the Axiom of Determinacy itself. In fact, Sargsyan conjectured it to be as strong as the existence of a cardinal that is both a limit of Woodin cardinals and a limit of strong cardinals. Larson, Sargsyan and Wilson showed that this would be optimal via a generalization of Woodin’s derived model construction. We will discuss a new translation procedure for hybrid mice extending work of Steel, Zhu and Sargsyan and use this to prove Sargsyan’s conjecture. • 23 Jun 2021 » Logic Colloquium Poznan - The strength of determinacy when all sets are universally Baire I was invited to give a talk in the special session on set theory at the Logic Colloquium 2021 taking place at the Adam Mickiewicz University, Poznan, Poland, July 19-24, 2021. The strength of determinacy when all sets are universally Baire The large cardinal strength of the Axiom of Determinacy when enhanced with the hypothesis that all sets of reals are universally Baire is known to be much stronger than the Axiom of Determinacy itself. In fact, Sargsyan conjectured it to be as strong as the existence of a cardinal that is both a limit of Woodin cardinals and a limit of strong cardinals. 
Larson, Sargsyan and Wilson showed that this would be optimal via a generalization of Woodin's derived model construction. We will discuss a new translation procedure for hybrid mice extending work of Steel, Zhu and Sargsyan and use this to prove Sargsyan's conjecture. Slides • 21 Jun 2021 » A short talk series: Research in Set Theory - Determinacy Axioms I gave a virtual talk in the short talk series: Research in Set Theory on June 18, 2021. The talk was at 9:30pm. Determinacy Axioms Set theory can be understood as the search for the right axioms in mathematics, i.e., axioms that answer the questions left open by the standard axioms for set theory introduced by Zermelo and Fraenkel with the Axiom of Choice (ZFC). Canonical candidates for these axioms are large cardinals, determinacy axioms, and forcing axioms. We will focus on the second one, determinacy axioms, and say a bit about connections between large cardinals and determinacy. Slides A recording of this talk is available on request. • 30 May 2021 » The consistency strength of determinacy when all sets are universally Baire Submitted. PDF. arXiv. Bibtex. • 22 Feb 2021 » ASL North American Annual Meeting - The strength of determinacy when all sets are universally Baire I was invited to give a talk on June 23, 2021 in the special session on set theory at the 2021 ASL North American Annual Meeting, June 22-25, 2021. This conference took place online. The strength of determinacy when all sets are universally Baire The large cardinal strength of the Axiom of Determinacy when enhanced with the hypothesis that all sets of reals are universally Baire is known to be much stronger than the Axiom of Determinacy itself. In fact, Sargsyan conjectured it to be as strong as the existence of a cardinal that is both a limit of Woodin cardinals and a limit of strong cardinals. Larson, Sargsyan and Wilson showed that this would be optimal via a generalization of Woodin's derived model construction. We will discuss a new translation procedure for hybrid mice extending work of Steel, Zhu and Sargsyan and use this to prove Sargsyan's conjecture. Slides • 21 Feb 2021 » Bristol/Oxford Set Theory seminar - The strength of determinacy when all sets are universally Baire I gave a virtual talk in the Bristol/Oxford Set Theory seminar on May 26, 2021. The talk was at 4:30pm local time, i.e., 5:30pm Vienna time. The strength of determinacy when all sets are universally Baire The large cardinal strength of the Axiom of Determinacy when enhanced with the hypothesis that all sets of reals are universally Baire is known to be much stronger than the Axiom of Determinacy itself. In fact, Sargsyan conjectured it to be as strong as the existence of a cardinal that is both a limit of Woodin cardinals and a limit of strong cardinals. Larson, Sargsyan and Wilson showed that this would be optimal via a generalization of Woodin's derived model construction. We will discuss a new translation procedure for hybrid mice extending work of Steel, Zhu and Sargsyan and use this to prove Sargsyan's conjecture. • 20 Feb 2021 » CMU Logic Seminar, Pittsburgh - Large cardinals and determinacy when all sets are universally Baire I am invited to give a talk in the Logic Seminar at CMU on April 20th, 2021 at 3:30pm Pittsburgh time (which is 9:30pm Vienna time). This one hour talk will be for a general audience. Afterwards I will give a 90min talk in the reading group where I will give more technical details. This seminar will take place virtually via zoom.
Large cardinals and determinacy when all sets are universally Baire The large cardinal strength of the Axiom of Determinacy when enhanced with the hypothesis that all sets of reals are universally Baire is known to be much stronger than the Axiom of Determinacy itself. In fact, Sargsyan conjectured it to be as strong as the existence of a cardinal that is both a limit of Woodin cardinals and a limit of strong cardinals. Larson, Sargsyan and Wilson showed that this would be optimal via a generalization of Woodin’s derived model construction. After a gentle introduction to the connection between determinacy axioms and large cardinals we will sketch a proof of Sargsyan’s conjecture. Slides The exact consistency strength of “AD + all sets are universally Baire” In this second talk, we will outline the proof of Sargsyan’s conjecture with more details. In particular, we will discuss a new translation procedure for hybrid mice extending work of Steel, Zhu and Sargsyan that is crucial in the construction of a model with a cardinal that is both a limit of Woodin cardinals and a limit of strong cardinals from a model of the Axiom of Determinacy in which all sets of reals are universally Baire. Slides • 19 Feb 2021 » CUNY Set Theory Seminar, New York - The exact consistency strength of AD + all sets are universally Baire I am invited to give a talk in the CUNY Set Theory Seminar on April 9th, 2021. The seminar will take place via zoom. The exact consistency strength of “AD + all sets are universally Baire” The large cardinal strength of the Axiom of Determinacy when enhanced with the hypothesis that all sets of reals are universally Baire is known to be much stronger than the Axiom of Determinacy itself. In fact, Sargsyan conjectured it to be as strong as the existence of a cardinal that is both a limit of Woodin cardinals and a limit of strong cardinals. Larson, Sargsyan and Wilson showed that this would be optimal via a generalization of Woodin’s derived model construction. We will discuss a new translation procedure for hybrid mice extending work of Steel, Zhu and Sargsyan and use this to prove Sargsyan’s conjecture. • 19 Feb 2021 » KGRC Research Seminar, Vienna - The exact consistency strength of $AD^+$ + all sets are universally Baire I gave a talk in the KGRC Research Seminar at University of Vienna on Mar 11th, 2021 at 3pm Vienna time. The seminar took place via zoom. The exact consistency strength of “$AD^+$ + all sets are universally Baire” The large cardinal strength of the Axiom of Determinacy when enhanced with the hypothesis that all sets of reals are universally Baire is known to be much stronger than the Axiom of Determinacy itself. In fact, Sargsyan conjectured it to be as strong as the existence of a cardinal that is both a limit of Woodin cardinals and a limit of strong cardinals. Larson, Sargsyan and Wilson showed in 2014 that this would be optimal via a generalization of Woodin’s derived model construction. We will discuss a new translation procedure for hybrid mice extending work of Steel, Zhu and Sargsyan and use this to prove Sargsyan’s conjecture. • 16 Jan 2021 » IMU DMV meeting Jerusalem - Cancelled I was invited to give a talk in the special session on set theory at the joint meeting of the Israeli Mathematical Union (IMU) and the German Mathematical Society (DMV) that was supposed to take place at the Hebrew University, Jerusalem, Israel, Mar 08-10, 2021. This meeting got cancelled due to the ongoing Covid-19 pandemic. 
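Several abstracts above use the Axiom of Determinacy without stating it. For reference (standard textbook definitions, not taken from the talks), the underlying notions can be written as follows:

```latex
% Two players I and II alternately choose natural numbers,
%   I:  n_0      n_2      n_4  ...
%   II:      n_1      n_3      ...
% producing a sequence x = (n_0, n_1, n_2, ...) in \omega^\omega.
% For a payoff set A \subseteq \omega^\omega, player I wins the game G(A)
% iff x \in A; the set A is called determined iff one of the two players
% has a winning strategy in G(A).
\[
  \mathsf{AD}\colon \quad \text{every } A \subseteq \omega^{\omega}
  \text{ is determined.}
\]
```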
• 15 Jan 2021 » World Logic Day, Hamburg - Unendliche Spiele und die Grenzen der Mathematik I was invited to give a talk for a general audience at the celebration of the World Logic Day taking place virtually in Hamburg, Germany on January 15, 2021. This talk was in German. Unendliche Spiele und die Grenzen der Mathematik (Infinite games and the limits of mathematics) • 27 Nov 2020 » Set Theory Seminar, The Fields Institute, Toronto - The Large Cardinal Strength of Determinacy Axioms I gave a talk in the Set Theory Seminar at the Fields Institute in Toronto on Nov 27th, 2020 at 1:30pm Toronto time (which is 7:30pm Vienna time). This seminar took place virtually via zoom. The Large Cardinal Strength of Determinacy Axioms Abstract: The study of inner models was initiated by Gödel's analysis of the constructible universe $L$. Later, it became necessary to study canonical inner models with large cardinals, e.g. measurable cardinals, strong cardinals or Woodin cardinals, which were introduced by Jensen, Mitchell, Steel, and others. Around the same time, the study of infinite two-player games was driven forward by Martin's proof of analytic determinacy from a measurable cardinal, Borel determinacy from ZFC, and Martin and Steel's proof of levels of projective determinacy from Woodin cardinals with a measurable cardinal on top. First Woodin and later Neeman improved the result in the projective hierarchy by showing that in fact the existence of a countable iterable model, a mouse, with Woodin cardinals and a top measure suffices to prove determinacy in the projective hierarchy. This opened up the possibility for an optimal result stating the equivalence between local determinacy hypotheses and the existence of mice in the projective hierarchy, just like the equivalence of analytic determinacy and the existence of $x^\sharp$ for every real $x$ which was shown by Martin and Harrington in the 70s. The existence of mice with Woodin cardinals and a top measure from levels of projective determinacy was shown by Woodin in the 90s. Together with his earlier and Neeman's results this establishes a tight connection between descriptive set theory in the projective hierarchy and inner model theory. In this talk, we will outline some of the main results connecting determinacy hypotheses with the existence of mice with large cardinals and discuss a number of more recent results in this area. Slides. A recording of this talk is available here. • 05 Nov 2020 » Ghent–Leeds Virtual Logic Seminar - Determinacy and inner models I gave a talk in the Ghent–Leeds Virtual Logic Seminar on Nov 05, 2020. This seminar took place virtually via zoom. Determinacy and inner models Abstract: The study of inner models was initiated by Gödel's analysis of the constructible universe $L$. Later, it became necessary to study canonical inner models with large cardinals, e.g. measurable cardinals, strong cardinals or Woodin cardinals, which were introduced by Jensen, Mitchell, Steel, and others. Around the same time, the study of infinite two-player games was driven forward by Martin's proof of analytic determinacy from a measurable cardinal, Borel determinacy from ZFC, and Martin and Steel's proof of levels of projective determinacy from Woodin cardinals with a measurable cardinal on top. First Woodin and later Neeman improved the result in the projective hierarchy by showing that in fact the existence of a countable iterable model, a mouse, with Woodin cardinals and a top measure suffices to prove determinacy in the projective hierarchy.
This opened up the possibility for an optimal result stating the equivalence between local determinacy hypotheses and the existence of mice in the projective hierarchy, just like the equivalence of analytic determinacy and the existence of $x^\sharp$ for every real $x$ which was shown by Martin and Harrington in the 70s. The existence of mice with Woodin cardinals and a top measure from levels of projective determinacy was shown by Woodin in the 90s. Together with his earlier and Neeman's results this establishes a tight connection between descriptive set theory in the projective hierarchy and inner model theory. In this talk, we will outline some of the main results connecting determinacy hypotheses with the existence of mice with large cardinals and discuss a number of more recent results in this area, some of which are joint work with Juan Aguilera. • 29 Sep 2020 » (with P. Lücke) Closure properties of measurable ultrapowers The Journal of Symbolic Logic. Volume 86, Issue 2, June 2021. Pages 762-784. DOI: 10.1017/jsl.2021.29. PDF. arXiv. Bibtex. • 08 May 2020 » CUNY Set Theory Seminar, New York - How to obtain lower bounds in set theory I gave a talk in the CUNY Set Theory Seminar at CUNY, New York, on May 08, 2020. This seminar took place virtually via zoom. How to obtain lower bounds in set theory Abstract: Computing the large cardinal strength of a given statement is one of the key research directions in set theory. Fruitful tools to tackle such questions are given by inner model theory. The study of inner models was initiated by Gödel's analysis of the constructible universe $L$. Later, it was extended to canonical inner models with large cardinals, e.g. measurable cardinals, strong cardinals or Woodin cardinals, which were introduced by Jensen, Mitchell, Steel, and others. We will outline two recent applications where inner model theory is used to obtain lower bounds in large cardinal strength for statements that do not involve inner models. The first result, in part joint with J. Aguilera, is an analysis of the strength of determinacy for certain infinite two player games of fixed countable length, and the second result, joint with Y. Hayut, involves combinatorics of infinite trees and the perfect subtree property for weakly compact cardinals $\kappa$. Finally, we will comment on obstacles, questions, and conjectures for lifting these results higher up in the large cardinal hierarchy. • 30 Apr 2020 » Research Seminar, Vienna - How to obtain lower bounds in set theory I gave a talk in the Research Seminar at the University of Vienna on April 30, 2020. This seminar took place virtually via zoom. How to obtain lower bounds in set theory Abstract: Computing the large cardinal strength of a given statement is one of the key research directions in set theory. Fruitful tools to tackle such questions are given by inner model theory. The study of inner models was initiated by Gödel's analysis of the constructible universe $L$. Later, it was extended to canonical inner models with large cardinals, e.g. measurable cardinals, strong cardinals or Woodin cardinals, which were introduced by Jensen, Mitchell, Steel, and others. We will outline two recent applications where inner model theory is used to obtain lower bounds in large cardinal strength for statements that do not involve inner models. The first result, in part joint with J. Aguilera, is an analysis of the strength of determinacy for certain infinite two player games of fixed countable length, and the second result, joint with Y.
Hayut, involves combinatorics of infinite trees and the perfect subtree property for weakly compact cardinals $\kappa$.

• 06 Mar 2020 » Consistency strength lower bounds for the proper forcing axiom via the core model induction
Bulletin of Symbolic Logic. Volume 26, Issue 1, December 2020. Pages 89-92. DOI: 10.1017/bsl.2020.6. PDF. Bibtex.

• 05 Mar 2020 » North American Annual Meeting of the ASL - How to obtain lower bounds in set theory
I was invited to give a plenary talk at the 2020 North American Annual Meeting of the Association for Symbolic Logic taking place at UC Irvine March 25-28, 2020. Due to the public health concerns over COVID-19, this meeting was cancelled and instead held as a virtual meeting.
How to obtain lower bounds in set theory
Computing the large cardinal strength of a given statement is one of the key research directions in set theory. Fruitful tools to tackle such questions are given by inner model theory. The study of inner models was initiated by Gödel's analysis of the constructible universe $L$. Later, it was extended to canonical inner models with large cardinals, e.g. measurable cardinals, strong cardinals or Woodin cardinals, which were introduced by Jensen, Mitchell, Steel, and others. We will outline three recent applications where inner model theory is used to obtain lower bounds in large cardinal strength for statements that do not involve inner models. The first result, in part joint with J. Aguilera, is an analysis of the strength of determinacy for certain infinite two-player games of fixed countable length, the second result studies the strength of a model of determinacy in which all sets of reals are universally Baire, and the third result, joint with Y. Hayut, involves combinatorics of infinite trees and the perfect subtree property for weakly compact cardinals $\kappa$.

• 04 Mar 2020 » International Day of Mathematics, Vienna - Das Unbegreifliche verstehen - die Faszination Unendlichkeit
I was invited to give a talk for a general audience at the celebration of the International Day of Mathematics taking place at the TU Wien on March 14, 2020. The talk was to be in German. This meeting was cancelled due to the COVID-19 restrictions.
Das Unbegreifliche verstehen - die Faszination Unendlichkeit (Understanding the Incomprehensible - the Fascination of Infinity)
Everyone knows it, it is always there, and yet never quite graspable - infinity. But what do we actually mean when we say that something is infinitely large? Is there only one infinity, or two, or perhaps even a great many? We will take a closer look at these questions and, of course, at the answers that go with them. In doing so, we will not only see what mathematics is capable of, but also set out towards the limits of mathematics, as they were revealed, for example, by Cantor and Gödel. Understanding these limits better - true to Oliver Tietze's saying "Whoever wants to play with fire has to know where the water is." - is still a subject of cutting-edge mathematical research today.

• 19 Nov 2019 » Oberseminar mathematische Logik, Bonn - Infinite decreasing chains in the Mitchell order
I gave a talk in the Oberseminar mathematische Logik at the University of Bonn on January 14, 2020.
Infinite decreasing chains in the Mitchell order
Abstract: It is known that the behavior of the Mitchell order substantially changes at the level of rank-to-rank extenders, as it ceases to be well-founded.
While the possible partial order structure of the Mitchell order below rank-to-rank extenders is considered to be well understood, little is known about the structure in the ill-founded case. We make a first step in understanding this case by studying the extent to which the Mitchell order can be ill-founded. Our main results are (i) in the presence of a rank-to-rank extender there is a transitive Mitchell order decreasing sequence of extenders of any countable length, and (ii) there is no such sequence of length $\omega_1$. This is joint work with Omer Ben-Neria. As this is a blackboard talk there are no slides available, but you can find a preprint related to this talk here.

• 11 Oct 2019 » (with Y. Hayut) Perfect Subtree Property for Weakly Compact Cardinals
Accepted for publication in the Israel Journal of Mathematics. PDF. arXiv. Bibtex.

• 05 Oct 2019 » (with S.-D. Friedman and V. Gitman) Structural Properties of the Stable Core
Accepted for publication in the Journal of Symbolic Logic. PDF. arXiv. Bibtex.

• 28 Aug 2019 » CUNY Logic Workshop - Infinite decreasing chains in the Mitchell order
I was invited to give a talk in the CUNY Logic Workshop on Nov 15, 2019.
Infinite decreasing chains in the Mitchell order
Abstract: It is known that the behavior of the Mitchell order substantially changes at the level of rank-to-rank extenders, as it ceases to be well-founded. While the possible partial order structure of the Mitchell order below rank-to-rank extenders is considered to be well understood, little is known about the structure in the ill-founded case. We make a first step in understanding this case by studying the extent to which the Mitchell order can be ill-founded. Our main results are (i) in the presence of a rank-to-rank extender there is a transitive Mitchell order decreasing sequence of extenders of any countable length, and (ii) there is no such sequence of length $\omega_1$. This is joint work with Omer Ben-Neria. As this is a blackboard talk there are no slides available, but you can find a preprint related to this talk here.

• 28 Aug 2019 » (with O. Ben-Neria) Infinite decreasing chains in the Mitchell order
Archive for Mathematical Logic. Volume 60, March 2021. Pages 771-781. DOI: 10.1007/s00153-021-00762-x. PDF. arXiv. Bibtex.

• 15 Jul 2019 » Rutgers MAMLS 2019 - Sealed Trees and the Perfect Subtree Property for Weakly Compact Cardinals
I was invited to give a talk at the 2019 edition of Rutgers MAMLS taking place Nov 1-3, 2019 at Rutgers University, USA.
Sealed Trees and the Perfect Subtree Property for Weakly Compact Cardinals
Abstract: We investigate the consistency strength of the statement: $\kappa$ is weakly compact and there is no tree on $\kappa$ with exactly $\kappa^{+}$ many branches. We show that this property fails strongly (there is a sealed tree) if there is no inner model with a Woodin cardinal. On the other hand, we show that this property, as well as the related Perfect Subtree Property for $\kappa$, implies the consistency of $\operatorname{AD}_{\mathbb{R}}$. This is joint work with Yair Hayut. Slides.

• 14 Jul 2019 » 15th International Luminy Workshop in Set Theory - Lower bounds for the perfect subtree property at weakly compact cardinals
During the week of September 23 - 27, 2019 I attended the 15th International Luminy Workshop in Set Theory and gave a talk.
Lower bounds for the perfect subtree property at weakly compact cardinals
Abstract: By the Cantor-Bendixson theorem, subtrees of the binary tree on $\omega$ satisfy a dichotomy - either the tree has countably many branches or there is a perfect subtree (and in particular, the tree has continuum many branches, regardless of the size of the continuum). We generalize this to arbitrary regular cardinals $\kappa$ and ask whether every $\kappa$-tree with more than $\kappa$ branches has a perfect subtree. From large cardinals, this statement is consistent at a weakly compact cardinal $\kappa$. We show using stacking mice that the existence of a non-domestic mouse (which yields a model with a proper class of Woodin cardinals and strong cardinals) is a lower bound. Moreover, we study variants of this statement involving sealed trees, i.e. trees with the property that their set of branches cannot be changed by certain forcings, and obtain lower bounds for these as well. This is joint work with Yair Hayut.

• 12 Jul 2019 » (with R. Carroy and A. Medini) Constructing Wadge classes
Accepted for publication in the Bulletin of Symbolic Logic. PDF. arXiv. Bibtex.

• 08 Jul 2019 » (with J. Aguilera) Projective Games on the Reals
Notre Dame Journal of Formal Logic. Volume 61, Issue 4, November 2020. Pages 573-589. DOI: 10.1215/00294527-2020-0027. PDF. arXiv. Bibtex.

• 01 Mar 2019 » (with Niklas Mueller, Joachim Streis, Hermann Pavenstädt, Thomas Felderhoff, Stefan Reuter and Veit Busch) Pulse Wave Analysis and Pulse Wave Velocity for Fistula Assessment
Kidney & Blood Pressure Research. Volume 45, Issue 4, July 2020. Pages 576-588. DOI: 10.1159/000506741. PDF. Bibtex.

• 09 Feb 2019 » The Core Model Induction and Other Inner Model Theoretic Tools Rutgers - Tutorial: HOD Computations
As a part of the workshop on "The Core Model Induction and Other Inner Model Theoretic Tools" in Rutgers I gave a tutorial on HOD Computations.
Abstract: An essential question regarding the theory of inner models is the analysis of the class of all hereditarily ordinal definable sets HOD inside various inner models $M$ of the set theoretic universe $V$ under appropriate determinacy hypotheses. Examples for such inner models $M$ are $L(\mathbb{R})$ or $L[x]$ on a cone of reals $x$. We will outline Steel's and Woodin's analysis of $HOD^{L(\mathbb{R})}$. Moreover, we will discuss their analysis of $HOD^{L[x,G]}$ on a cone of reals $x$, where $G$ is $Col(\omega,\kappa)$-generic and $\kappa$ is the least inaccessible cardinal in $L[x]$. We will point out where the problems are when trying to adapt this to analyze $HOD^{L[x]}$.
• (Steel) An outline of inner model theory, Handbook of Set Theory, Section 8.
• (Steel, Woodin) HOD as a core model, Cabal III.
Necessary requirements: A good understanding of mice, the comparison process and genericity iterations, e.g. the fine structure tutorial given in the first week or the relevant parts of Steel's handbook chapter (Sections 1-3 and 7). See here for more information about the meeting and here for lecture notes typed by James Holland.

• 08 Feb 2019 » Logic Fest in the Windy City - The interplay between inner model theory and descriptive set theory in a nutshell
On June 01, 2019 I was invited to give a talk at the Logic Fest in the Windy City in Chicago, USA.
The interplay between inner model theory and descriptive set theory in a nutshell
Abstract: The study of inner models was initiated by Gödel's analysis of the constructible universe $L$.
Later, it became necessary to study canonical inner models with large cardinals, e.g. measurable cardinals, strong cardinals or Woodin cardinals, which were introduced by Jensen, Mitchell, Steel, and others. Around the same time, the study of infinite two-player games was driven forward by Martin's proof of analytic determinacy from a measurable cardinal, Borel determinacy from ZFC, and Martin and Steel's proof of levels of projective determinacy from Woodin cardinals with a measurable cardinal on top. First Woodin and later Neeman improved the result in the projective hierarchy by showing that in fact the existence of a countable iterable model, a mouse, with Woodin cardinals and a top measure suffices to prove determinacy in the projective hierarchy. This opened up the possibility for an optimal result stating the equivalence between local determinacy hypotheses and the existence of mice in the projective hierarchy, just like the equivalence of analytic determinacy and the existence of $x^\sharp$ for every real $x$, which was shown by Martin and Harrington in the 70's. The existence of mice with Woodin cardinals and a top measure from levels of projective determinacy was shown by Woodin in the 90's. Together with his earlier and Neeman's results this establishes a tight connection between descriptive set theory in the projective hierarchy and inner model theory. In this talk, we will outline the main concepts and results connecting determinacy hypotheses with the existence of mice with large cardinals. Neeman's methods mentioned above extend to show determinacy of projective games of arbitrary countable length from the existence of inner models with many Woodin cardinals. We will discuss a number of more recent results, some of which are joint work with Juan Aguilera, showing that inner models with many Woodin cardinals can be obtained from the determinacy of countable projective games. Slides.

• 25 Jan 2019 » Set Theory Seminar Bar-Ilan University - Projective determinacy for games of length omega^2 and longer
On February 25, 2019 I was invited to give a talk in the Set Theory Seminar at Bar-Ilan University, Israel.
Projective determinacy for games of length $\omega^2$ and longer
Abstract: We will study infinite two-player games and the large cardinal strength corresponding to their determinacy. For games of length $\omega$ this is well understood and there is a tight connection between the determinacy of projective games and the existence of canonical inner models with Woodin cardinals. For games of arbitrary countable length, Itay Neeman proved the determinacy of analytic games of length $\omega \cdot \theta$ for countable $\theta > \omega$ from a sharp for $\theta$ Woodin cardinals. We aim for a converse at successor ordinals. In joint work with Juan P. Aguilera we showed that determinacy of $\boldsymbol\Pi^1_{n+1}$ games of length $\omega^2$ implies the existence of a premouse with $\omega+n$ Woodin cardinals. This generalizes to a premouse with $\omega+\omega$ Woodin cardinals from the determinacy of games of length $\omega^2$ with $\Game^{\mathbb{R}}\boldsymbol\Pi^1_1$ payoff. If time allows, we will also sketch how these methods can be adapted to, in combination with results of Nam Trang, obtain $\omega^\alpha+n$ Woodin cardinals for countable ordinals $\alpha$ and natural numbers $n$ from the determinacy of sufficiently long projective games.
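The abstracts above refer throughout to the determinacy of infinite two-player games; for orientation, here is the standard length-$\omega$ formulation (added background, not part of the original entries). Given a payoff set $A \subseteq {}^\omega\omega$, in the game $G(A)$ players I and II alternate choosing natural numbers $x_0, x_1, x_2, \ldots$, with I playing the even-indexed and II the odd-indexed entries. Player I wins iff the resulting sequence $x = (x_n)_{n<\omega}$ belongs to $A$; the set $A$ is determined iff one of the two players has a winning strategy. For a pointclass such as $\boldsymbol\Pi^1_{n+1}$, determinacy for that pointclass asserts that every payoff set in it is determined, and games of length $\omega \cdot \theta$ or $\omega^2$, as in the abstracts, are defined analogously, with the players alternating for correspondingly many rounds.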
• 07 Nov 2018 » Logic and Set Theory Seminar Bristol - The consistency strength of long projective determinacy
On February 05, 2019 I was invited to give a talk in the Logic and Set Theory Seminar in Bristol.
The consistency strength of long projective determinacy
Abstract: We will study infinite two-player games and the large cardinal strength corresponding to their determinacy. For games of length $\omega$ this is well understood and there is a tight connection between the determinacy of projective games and the existence of canonical inner models with Woodin cardinals. For games of arbitrary countable length, Itay Neeman proved the determinacy of analytic games of length $\omega \cdot \theta$ for countable $\theta > \omega$ from a sharp for $\theta$ Woodin cardinals. We aim for a converse at successor ordinals. In joint work with Juan P. Aguilera we showed that determinacy of $\boldsymbol\Pi^1_{n+1}$ games of length $\omega^2$ implies the existence of a premouse with $\omega+n$ Woodin cardinals. This generalizes to a premouse with $\omega+\omega$ Woodin cardinals from the determinacy of games of length $\omega^2$ with $\Game^{\mathbb{R}}\boldsymbol\Pi^1_1$ payoff. If time allows, we will also sketch how these methods can be adapted to, in combination with results of Nam Trang, obtain $\omega^\alpha+n$ Woodin cardinals for countable ordinals $\alpha$ and natural numbers $n$ from the determinacy of sufficiently long projective games.

• 07 Nov 2018 » (with J. Aguilera) The consistency strength of long projective determinacy
The Journal of Symbolic Logic. Volume 85, Issue 1, March 2020. Pages 338-366. DOI: 10.1017/jsl.2019.78. PDF. arXiv. Bibtex.

• 06 Nov 2018 » Arctic Set Theory Workshop 4 - Homogeneous Spaces and Wadge Theory
On January 22, 2019 I gave a talk at the Arctic Set Theory Workshop 4 in Kilpisjärvi, Finland.
Abstract: In his PhD thesis Wadge characterized the notion of continuous reducibility on the Baire space ${}^\omega\omega$ in the form of a game and analyzed it in a systematic way. He defined a refinement of the Borel hierarchy, called the Wadge hierarchy, showed that it is well-founded, and (assuming determinacy for Borel sets) proved that every Borel pointclass appears in this classification. Later Louveau found a description of all levels in the Borel Wadge hierarchy using Boolean operations on sets. Fons van Engelen used this description to analyze Borel homogeneous spaces and show that every homogeneous Borel space is in fact strongly homogeneous. In this talk, we will show how to generalize these results under the Axiom of Determinacy. In particular, we will outline that under AD every homogeneous space is in fact strongly homogeneous. This is joint work with Raphaël Carroy and Andrea Medini. Slides can be found here.

• 06 Nov 2018 » The Axiom of Determinacy implies Dependent Choice in mice
Mathematical Logic Quarterly. Volume 65, Issue 3, October 2019. Pages 370-375. DOI: 10.1002/malq.201800077. PDF. arXiv. Bibtex.

• 05 Nov 2018 » Rutgers Logic Seminar - The consistency strength of long projective determinacy
On December 10, 2018 I gave a talk in the Rutgers Logic Seminar.
The consistency strength of long projective determinacy
Abstract: We will study infinite two-player games and the large cardinal strength corresponding to their determinacy. For games of length $\omega$ this is well understood and there is a tight connection between the determinacy of projective games and the existence of canonical inner models with Woodin cardinals.
For games of arbitrary countable length, Itay Neeman proved the determinacy of analytic games of length $\omega \cdot \theta$ for countable $\theta > \omega$ from a sharp for $\theta$ Woodin cardinals. We aim for a converse at successor ordinals. In joint work with Juan P. Aguilera we showed that determinacy of $\boldsymbol\Pi^1_{n+1}$ games of length $\omega^2$ implies the existence of a premouse with $\omega+n$ Woodin cardinals. This generalizes to a premouse with $\omega+\omega$ Woodin cardinals from the determinacy of games of length $\omega^2$ with $\Game^{\mathbb{R}}\boldsymbol\Pi^1_1$ payoff. If time allows, we will also sketch how these methods can be adapted to obtain for example $\omega^2+n$ Woodin cardinals from the determinacy of sufficiently long projective games.

• 04 Nov 2018 » Oberseminar mathematische Logik Bonn - Structural properties of the Stable Core
On December 04, 2018 I gave a talk in the Oberseminar mathematische Logik in Bonn.
Structural properties of the Stable Core
Abstract: The Stable Core $\mathbb{S}$, introduced by Sy Friedman in 2012, is a proper class model of the form $(L[S],S)$ for a simply definable predicate $S$. He showed that $V$ is generic over the Stable Core (for $\mathbb{S}$-definable dense classes) and that the Stable Core can be properly contained in HOD. These remarkable results motivate the study of the Stable Core itself. In light of other canonical inner models, the questions of whether the Stable Core satisfies GCH or whether large cardinals in $V$ imply their existence in the Stable Core naturally arise. In joint work with Sy Friedman and Victoria Gitman we give some answers to these questions and show that GCH can fail at all regular cardinals in the Stable Core. Moreover, we show that measurable cardinals in general need not be downward absolute to the Stable Core, but in the special case where $V = L[\mu]$ is the canonical inner model for one measurable cardinal, the Stable Core is in fact equal to $L[\mu]$.

• 01 Sep 2018 » (with J. Aguilera and P. Schlicht) Long games and sigma-projective sets
Annals of Pure and Applied Logic. Volume 172, Issue 4, April 2021. 102939. DOI: 10.1016/j.apal.2020.102939. PDF. arXiv. Bibtex.

• 21 Aug 2018 » UMI-SIMAI-PTM, Wroclaw - Large Cardinals in the Stable Core
On September 19, 2018 I was invited to give a talk in the Thematic Session in Set Theory and Topology at the joint meeting of the Italian Mathematical Union, the Italian Society of Industrial and Applied Mathematics and the Polish Mathematical Society (UMI-SIMAI-PTM) in Wrocław.
Large Cardinals in the Stable Core
Abstract: The Stable Core $\mathbb{S}$, introduced by Sy Friedman in 2012, is a proper class model of the form $(L[S],S)$ for a simply definable predicate $S$. He showed that $V$ is generic over the Stable Core (for $\mathbb{S}$-definable dense classes) and that the Stable Core can be properly contained in HOD. These remarkable results motivate the study of the Stable Core itself. In light of other canonical inner models, the questions of whether the Stable Core satisfies GCH or whether large cardinals in $V$ imply their existence in the Stable Core naturally arise. We answer these questions and show that GCH can fail at all regular cardinals in the Stable Core. Moreover, we show that measurable cardinals in general need not be downward absolute to the Stable Core, but in the special case where $V = L[\mu]$ is the canonical inner model for one measurable cardinal, the Stable Core is in fact equal to $L[\mu]$.
This is joint work with Sy Friedman and Victoria Gitman. Slides for this talk are available on request.

• 20 Aug 2018 » CUNY Set Theory Seminar - How to obtain Woodin cardinals from the determinacy of long games
On September 7, 2018 I gave a talk in the CUNY Set Theory Seminar in New York.
How to obtain Woodin cardinals from the determinacy of long games
Abstract: We will study infinite two-player games and the large cardinal strength corresponding to their determinacy. For games of length $\omega$ this is well understood and there is a tight connection between the determinacy of projective games and the existence of canonical inner models with Woodin cardinals. For games of arbitrary countable length, Itay Neeman proved the determinacy of analytic games of length $\omega \cdot \theta$ for countable $\theta > \omega$ from a sharp for $\theta$ Woodin cardinals. We aim for a converse at successor ordinals and sketch how to obtain $\omega+n$ Woodin cardinals from the determinacy of $\boldsymbol\Pi^1_{n+1}$ games of length $\omega^2$. Moreover, we outline how to generalize this to construct a model with $\omega+\omega$ Woodin cardinals from the determinacy of games of length $\omega^2$ with $\Game^{\mathbb{R}}\boldsymbol\Pi^1_1$ payoff. This is joint work with Juan P. Aguilera.

• 08 Jun 2018 » KNAW Academy Colloquium on Generalised Baire Spaces - Lebesgue's Density Theorem for tree forcing ideals
On August 24th, 2018 I gave a short talk at the KNAW Academy Colloquium on Generalised Baire Spaces taking place August 23rd and 24th in Amsterdam, The Netherlands.
Abstract: Lebesgue introduced a notion of density point of a set of reals and proved that any Borel set of reals has the density property, i.e. it is equal to the set of its density points up to a null set. We introduce alternative definitions of density points in Cantor space (or Baire space) which coincide with the usual definition of density points for the uniform measure on ${}^{\omega}2$ up to a set of measure $0$, and which depend only on the ideal of measure $0$ sets but not on the measure itself. This allows us to define the density property for the ideals associated to tree forcings analogous to the Lebesgue density theorem for the uniform measure on ${}^{\omega}2$. The main results show that among the ideals associated to well-known tree forcings, the density property holds for all such ccc forcings and fails for the remaining forcings. In fact we introduce the notion of being stem-linked and show that every stem-linked tree forcing has the density property. This is joint work with Philipp Schlicht, David Schrittesser and Thilo Weinert. Slides are available here.

• 07 Jun 2018 » 1st Girona conference on inner model theory - Long games and Woodin cardinals
On July 17, 2018 I gave a talk at the 1st Girona conference on inner model theory in Girona.
Long games and Woodin cardinals
Abstract: Itay Neeman proved the determinacy of analytic games of length $\omega \cdot \theta$ for countable $\theta > \omega$ from a sharp for $\theta$ Woodin cardinals. We aim for a converse at successor ordinals and show how to obtain $\omega+1$ Woodin cardinals from the determinacy of analytic games of length $\omega \cdot (\omega+1)$. This is joint work with Juan P. Aguilera. Notes for this talk are available here.

• 01 Jun 2018 » (with R. Carroy and A. Medini) Every zero-dimensional homogeneous space is strongly homogeneous under determinacy
Journal of Mathematical Logic. Volume 20, Issue 3, March 2020. 2050015. DOI: 10.1142/S0219061320500154. PDF.
arXiv. Bibtex.

• 22 Mar 2018 » Oberseminar Konstanz - Large cardinals from the determinacy of games
On July 9th, 2018 I gave a 90min talk in the Oberseminar Mathematical Logic at the University of Konstanz, Germany.
Large cardinals from the determinacy of games
Abstract: We will study infinite two-player games and the large cardinal strength corresponding to their determinacy. In particular, we will consider mice, which are sufficiently iterable models of set theory, and outline how they play an important role in measuring the exact strength of determinacy hypotheses. After summarizing the situation within the projective hierarchy for games of length $\omega$, we will go beyond that and consider the determinacy of even longer games. In particular, we will sketch the argument that determinacy of analytic games of length $\omega \cdot (\omega+1)$ implies the consistency of $\omega+1$ Woodin cardinals. This part is joint work with Juan P. Aguilera.

• 15 Mar 2018 » Turin - Combinatorial Variants of Lebesgue's Density Theorem
In the first week of April 2018 I visited the Mathematical Logic group of Turin and gave a 2h talk on April 6.
Abstract: Lebesgue introduced a notion of density point of a set of reals and proved that any Borel set of reals has the density property, i.e. it is equal to the set of its density points up to a null set. We introduce alternative definitions of density points in Cantor space (or Baire space) which coincide with the usual definition of density points for the uniform measure on ${}^{\omega}2$ up to a set of measure $0$, and which depend only on the ideal of measure $0$ sets but not on the measure itself. This allows us to define the density property for the ideals associated to tree forcings analogous to the Lebesgue density theorem for the uniform measure on ${}^{\omega}2$. The main results show that among the ideals associated to well-known tree forcings, the density property holds for all such ccc forcings and fails for the remaining forcings. In fact we introduce the notion of being stem-linked and show that every stem-linked tree forcing has the density property. This is joint work with Philipp Schlicht, David Schrittesser and Thilo Weinert. Slides are available on request.

• 06 Mar 2018 » GDMV Jahrestagung Paderborn - Projective homogeneous spaces and the Wadge hierarchy
On March 06, 2018 I gave an invited talk in the Section in Logic at the annual meeting GDMV of the German Mathematical Society (DMV) joint with GDM taking place March 5th to 9th in Paderborn, Germany.
Abstract: In his PhD thesis Wadge characterized the notion of continuous reducibility on the Baire space ${}^\omega\omega$ in the form of a game and analyzed it in a systematic way. He defined a refinement of the Borel hierarchy, called the Wadge hierarchy, showed that it is well-founded, and (assuming determinacy for Borel sets) proved that every Borel pointclass appears in this classification. Later Louveau found a description of all levels in the Borel Wadge hierarchy using Boolean operations on sets. Fons van Engelen used this description to analyze Borel homogeneous spaces. In this talk, we will discuss the basics behind these results and show the first steps towards generalizing them to the projective hierarchy, assuming projective determinacy (PD). In particular, we will outline that under PD every homogeneous projective space is in fact strongly homogeneous. This is joint work with Raphaël Carroy and Andrea Medini.
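For orientation, the continuous reducibility underlying the Wadge hierarchy in the two preceding abstracts is the following (standard background, not part of the original entries): for $A, B \subseteq {}^\omega\omega$,

$$A \le_W B \quad\Longleftrightarrow\quad A = f^{-1}[B] \text{ for some continuous } f : {}^\omega\omega \to {}^\omega\omega,$$

and Wadge's Lemma states that, assuming determinacy for the relevant sets, for all $A, B \subseteq {}^\omega\omega$ either $A \le_W B$ or $B \le_W {}^\omega\omega \setminus A$; this semi-linearity is what makes the refinement of the Borel hierarchy into a well-founded hierarchy possible.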
• 01 Dec 2017 » CUNY Set Theory Seminar - Canonical inner models and their HODs
On Dec 1st, 2017, at 10:00am I gave a talk in the CUNY Set Theory Seminar.
Abstract: An essential question regarding the theory of inner models is the analysis of the class of all hereditarily ordinal definable sets $\operatorname{HOD}$ inside various inner models $M$ of the set theoretic universe $V$ under appropriate determinacy hypotheses. Examples for such inner models $M$ are $L(\mathbb{R})$, $L[x]$ and $M_n(x)$. Woodin showed that under determinacy hypotheses these models of the form $\operatorname{HOD}^M$ contain large cardinals, which motivates the question whether they are fine-structural as for example the models $L(\mathbb{R})$, $L[x]$ and $M_n(x)$ are. A positive answer to this question would yield that they are models of $\operatorname{CH}, \Diamond$, and other combinatorial principles. The first model which was analyzed in this sense was $\operatorname{HOD}^{L(\mathbb{R})}$ under the assumption that every set of reals in $L(\mathbb{R})$ is determined. In the 1990's Steel and Woodin were able to show that $\operatorname{HOD}^{L(\mathbb{R})} = L[M_\infty, \Lambda]$, where $M_\infty$ is a direct limit of iterates of the canonical mouse $M_\omega$ and $\Lambda$ is a partial iteration strategy for $M_\infty$. Moreover Woodin obtained a similar result for the model $\operatorname{HOD}^{L[x,G]}$ assuming $\Delta^1_2$ determinacy, where $x$ is a real of sufficiently high Turing degree, $G$ is $\operatorname{Col}(\omega, {<}\kappa_x)$-generic over $L[x]$ and $\kappa_x$ is the least inaccessible cardinal in $L[x]$. In this talk I will give an overview of these results (including some background on inner model theory) and outline how they can be extended to the model $\operatorname{HOD}^{M_n(x,g)}$ assuming $\boldsymbol\Pi^1_{n+2}$ determinacy, where $x$ again is a real of sufficiently high Turing degree, $g$ is $\operatorname{Col}(\omega, {<}\kappa_x)$-generic over $M_n(x)$ and $\kappa_x$ is the least inaccessible cardinal in $M_n(x)$. This is joint work with Grigor Sargsyan.

• 14 Aug 2017 » Logic Colloquium Stockholm - The hereditarily ordinal definable sets in inner models with finitely many Woodin cardinals
On August 14th, 2017 I gave a talk in the special session on set theory at the Logic Colloquium 2017 (August 14-20, 2017).
Abstract: An essential question regarding the theory of inner models is the analysis of the class of all hereditarily ordinal definable sets $\operatorname{HOD}$ inside various inner models $M$ of the set theoretic universe $V$ under appropriate determinacy hypotheses. Examples for such inner models $M$ are $L(\mathbb{R})$, $L[x]$ and $M_n(x)$. Woodin showed that under determinacy hypotheses these models of the form $\operatorname{HOD}^M$ contain large cardinals, which motivates the question whether they are fine-structural as for example the models $L(\mathbb{R})$, $L[x]$ and $M_n(x)$ are. A positive answer to this question would yield that they are models of $\operatorname{CH}, \Diamond$, and other combinatorial principles. The first model which was analyzed in this sense was $\operatorname{HOD}^{L(\mathbb{R})}$ under the assumption that every set of reals in $L(\mathbb{R})$ is determined. In the 1990's Steel and Woodin were able to show that $\operatorname{HOD}^{L(\mathbb{R})} = L[M_\infty, \Lambda]$, where $M_\infty$ is a direct limit of iterates of the canonical mouse $M_\omega$ and $\Lambda$ is a partial iteration strategy for $M_\infty$.
Moreover, Woodin obtained a similar result for the model $\operatorname{HOD}^{L[x,G]}$ assuming $\Delta^1_2$ determinacy, where $x$ is a real of sufficiently high Turing degree, $G$ is $\operatorname{Col}(\omega, {<}\kappa_x)$-generic over $L[x]$ and $\kappa_x$ is the least inaccessible cardinal in $L[x]$. In this talk I will give an overview of these results and outline how they can be extended to the model $\operatorname{HOD}^{M_n(x,g)}$ assuming $\boldsymbol\Pi^1_{n+2}$ determinacy, where $x$ again is a real of sufficiently high Turing degree, $g$ is $\operatorname{Col}(\omega, {<}\kappa_x)$-generic over $M_n(x)$ and $\kappa_x$ is the least inaccessible cutpoint in $M_n(x)$ which is a limit of cutpoints in $M_n(x)$. This is joint work with Grigor Sargsyan. This abstract will be published in the Bulletin of Symbolic Logic (BSL). My slides can be found here. A preprint containing these results will be uploaded on my webpage soon.

• 25 Jul 2017 » Münster conference on inner model theory - HOD in inner models with Woodin cardinals
On July 25th I gave a talk at the 4th Münster conference on inner model theory.
Abstract: We analyze $\operatorname{HOD}$ in the inner model $M_n(x,g)$ for reals $x$ of sufficiently high Turing degree and suitable generics $g$. Our analysis generalizes to other canonical minimal mice with Woodin and strong cardinals. This is joint work with Grigor Sargsyan. Notes taken by Ralf Schindler during my talk can be found here. These notes include a sketch of the proof of our main result; the corresponding preprint will be uploaded on my webpage soon.

• 03 Jul 2017 » European Set Theory Conference - Combinatorial Variants of Lebesgue's Density Theorem
On July 3rd 2017 I gave a contributed talk at the 6th European Set Theory Conference in Budapest.
Abstract: We introduce alternative definitions of density points in Cantor space (or Baire space) which coincide with the usual definition of density points for the uniform measure on ${}^{\omega}2$ up to a set of measure $0$, and which depend only on the ideal of measure $0$ sets but not on the measure itself. This allows us to define the density property for the ideals associated to tree forcings analogous to the Lebesgue density theorem for the uniform measure on ${}^{\omega}2$. The main results show that among the ideals associated to well-known tree forcings, the density property holds for all such ccc forcings and fails for the remaining forcings. In fact we introduce the notion of being stem-linked and show that every stem-linked tree forcing has the density property. This is joint work with Philipp Schlicht, David Schrittesser and Thilo Weinert.

• 26 Jan 2017 » Arctic Set Theory Workshop - HOD in $M_n(x,g)$
On January 26th 2017 I gave a talk at the Arctic Set Theory Workshop 3 in Kilpisjärvi, Finland, about $\operatorname{HOD}$ in $M_n(x,g)$. Here are my (very sketchy!) slides. The following pictures were taken by Andrés Villaveces. Thank you Andrés!

• 01 Dec 2016 » KGRC Research Seminar - $HOD^{M_n(x,g)}$ is a core model
On December 1st 2016 I gave a talk in the KGRC Research Seminar.
Abstract: Let $x$ be a real of sufficiently high Turing degree, let $\kappa_x$ be the least inaccessible cardinal in $L[x]$ and let $G$ be $Col(\omega, {<}\kappa_x)$-generic over $L[x]$. Then Woodin has shown that $\operatorname{HOD}^{L[x,G]}$ is a core model, together with a fragment of its own iteration strategy. Our plan is to extend this result to mice which have finitely many Woodin cardinals.
We will introduce a direct limit system of mice due to Grigor Sargsyan and sketch a scenario to show the following result. Let $n \geq 1$ and let $x$ again be a real of sufficiently high Turing degree. Let $\kappa_x$ be the least inaccessible strong cutpoint cardinal of $M_n(x)$ such that $\kappa_x$ is a limit of strong cutpoint cardinals in $M_n(x)$ and let $g$ be $Col(\omega, {<}\kappa_x)$-generic over $M_n(x)$. Then $\operatorname{HOD}^{M_n(x,g)}$ is again a core model, together with a fragment of its own iteration strategy. This is joint work in progress with Grigor Sargsyan. Many thanks to Richard again for the great pictures!

• 21 Oct 2016 » (with R. Schindler and W. H. Woodin) Mice with Finitely many Woodin Cardinals from Optimal Determinacy Hypotheses
Journal of Mathematical Logic. Volume 20, Issue Supp01, October 2020. 1950013. DOI: 10.1142/S0219061319500132. PDF. arXiv. Bibtex.

• 21 Oct 2016 » Graduation from the University of Münster
Last week I defended my PhD thesis and finally graduated from the University of Münster. Many thanks to everybody who was there to celebrate with me and especially to Anna, Dorothea, Fabiana and Svenja for the amazing graduation hat.

• 20 Oct 2016 » Pure and Hybrid Mice with Finitely Many Woodin Cardinals from Levels of Determinacy
Dissertation. PDF. Bibtex.

• 19 Jul 2016 » 1st Irvine Conference on Descriptive Inner Model Theory and HOD Mice - Producing $M_n^\sharp(x)$ from optimal determinacy hypotheses
On July 19th and 21st I gave talks at the 1st IRVINE CONFERENCE on DESCRIPTIVE INNER MODEL THEORY and HOD MICE.
Abstract: In this talk we will outline a proof of Woodin's result that boldface $\boldsymbol\Sigma^1_{n+1}$ determinacy yields the existence and $\omega_1$-iterability of the premouse $M_n^\sharp(x)$ for all reals $x$. This involves first generalizing a result of Kechris and Solovay concerning OD determinacy in $L[x]$ for a cone of reals $x$ to the context of mice with finitely many Woodin cardinals. We will focus on using this result to prove the existence and $\omega_1$-iterability of $M_n^\sharp$ from a suitable hypothesis. Note that this argument is different for the even and odd levels of the projective hierarchy. This is joint work with Ralf Schindler and W. Hugh Woodin. You can find notes taken by Martin Zeman here and here. More pictures and notes for the other talks can be found on the conference webpage.

• 13 Jun 2016 » YSTW 2016 Copenhagen - A Journey Through the World of Mice and Games - Projective and Beyond
On June 13th, 2016 I gave a talk at the Young Set Theory Workshop in Copenhagen. For more information see the webpage of the YSTW 2016.
Abstract: This talk will be an introduction to inner model theory at the level of the projective hierarchy and the $L(\mathbb{R})$-hierarchy. It will focus on results connecting inner model theory to the determinacy of certain games. Mice are sufficiently iterable models of set theory. Martin and Steel showed in 1989 that the existence of finitely many Woodin cardinals with a measurable cardinal above them implies that projective determinacy holds. Neeman and Woodin proved a level-by-level connection between mice and projective determinacy. They showed that boldface $\boldsymbol\Pi^1_{n+1}$ determinacy is equivalent to the fact that the mouse $M_n^\sharp(x)$ exists and is $\omega_1$-iterable for all reals $x$.
Following this, we will consider pointclasses in the $L(\mathbb{R})$-hierarchy and show that determinacy for them implies the existence and $\omega_1$-iterability of certain hybrid mice with finitely many Woodin cardinals, which we call $M_k^{\Sigma, \sharp}$. These hybrid mice are like ordinary mice, but equipped with an iteration strategy for a mouse they contain, which enables them to capture certain sets of reals. We will discuss what it means for a mouse to capture a set of reals and outline why hybrid mice fulfill this task. Slides.

• 09 Jun 2016 » KGRC Research Seminar - Hybrid Mice and Determinacy in the $L(\mathbb{R})$-hierarchy
On June 9th 2016 I gave a talk in the KGRC Research Seminar in Vienna.
Abstract: This talk will be an introduction to inner model theory at the level of the $L(\mathbb{R})$-hierarchy. It will focus on results connecting inner model theory to the determinacy of certain games. Mice are sufficiently iterable models of set theory. Martin and Steel showed in 1989 that the existence of finitely many Woodin cardinals with a measurable cardinal above them implies that projective determinacy holds. Neeman and Woodin proved a level-by-level connection between mice and projective determinacy. They showed that boldface $\boldsymbol\Pi^1_{n+1}$ determinacy is equivalent to the fact that the mouse $M_n^\sharp(x)$ exists and is $\omega_1$-iterable for all reals $x$. Following this, we will consider pointclasses in the $L(\mathbb{R})$-hierarchy and show that determinacy for them implies the existence and $\omega_1$-iterability of certain hybrid mice with finitely many Woodin cardinals, which we call $M_k^{\Sigma, \sharp}$. These hybrid mice are like ordinary mice, but equipped with an iteration strategy for a mouse they contain, which enables them to capture certain sets of reals. We will discuss what it means for a mouse to capture a set of reals and outline why hybrid mice fulfill this task. If time allows we will sketch a proof that determinacy for sets of reals in the $L(\mathbb{R})$-hierarchy implies the existence of hybrid mice. Many thanks to Richard for the pictures!

• 18 Jan 2016 » Hamburg Set Theory Workshop 2016 - Hybrid Mice with Finitely Many Woodin Cardinals from Determinacy
On January 18, 2016, I gave a talk at the Hamburg Set Theory Workshop 2016.
Hybrid Mice with Finitely Many Woodin Cardinals from Determinacy
Abstract: Mice are countable sufficiently iterable models of set theory. Hybrid mice are mice which are equipped with some iteration strategy for a mouse they contain. This allows them to capture stronger sets of reals. W. Hugh Woodin has shown in as yet unpublished work that boldface $\Pi^1_{n+1}$-determinacy implies that $M_n^\sharp(x)$ exists and is $\omega_1$-iterable for all reals $x$. We will generalize parts of this to hybrid mice and levels of determinacy in the $L(\mathbb{R})$-hierarchy.

• 23 Sep 2015 » Minisymposium Set Theory, DMV-Jahrestagung, Hamburg - Mice with finitely many Woodin cardinals from optimal determinacy hypotheses
On September 23, 2015, I gave a talk in the Minisymposium Set Theory at the DMV-Jahrestagung in Hamburg.
Mice with finitely many Woodin cardinals from optimal determinacy hypotheses
Abstract: Mice are countable sufficiently iterable models of set theory. Itay Neeman has shown that the existence of such mice with finitely many Woodin cardinals implies that projective determinacy holds.
In fact he proved that the existence and $\omega_1$-iterability of $M^{\sharp}_n(x)$ for all reals $x$ implies that boldface $\Pi^1_{n+1}$-determinacy holds. We prove the converse of this result, that is, boldface $\Pi^1_{n+1}$-determinacy implies that $M^{\sharp}_n(x)$ exists and is $\omega_1$-iterable for all reals $x$. This level-wise connection between mice and projective determinacy is an old, so far unpublished, result of W. Hugh Woodin. As a consequence we can obtain the determinacy transfer theorem for all levels $n$. These results connect the areas of inner model theory and descriptive set theory, so we will give an overview of the relevant topics in both fields and briefly sketch a proof of the result mentioned above. The first goal is to show how to derive a model of set theory with Woodin cardinals from a determinacy hypothesis. The second goal is to prove that there is such a model which is iterable. For this part the odd and even levels of the projective hierarchy are treated differently. This is joint work with Ralf Schindler and W. Hugh Woodin.

• 06 Jul 2015 » Oberseminar Bonn - Mice with finitely many Woodin cardinals from optimal determinacy hypotheses
On July 6th, 2015, I gave a talk in the Logik Oberseminar in Bonn.
Title: Mice with finitely many Woodin cardinals from optimal determinacy hypotheses

• 05 Mar 2015 » KGRC Research Seminar, Vienna - Mice with finitely many Woodin cardinals from optimal determinacy hypotheses
I gave a talk in the KGRC Research Seminar at the University of Vienna on Mar 3rd, 2015.
Title: Mice with finitely many Woodin cardinals from optimal determinacy hypotheses

• 12 Dec 2014 » CUNY Set Theory Seminar - Producing $M_n^\sharp$ from Boldface Level-wise Projective Determinacy
On Dec 12th, 2014, I gave a talk in the CUNY Set Theory Seminar.
Title: Producing $M_n^\sharp$ from Boldface Level-wise Projective Determinacy

• 21 Oct 2014 » Oberseminar Münster - Woodin's HOD Conjecture
On October 21 and 28, 2014, I gave talks in the Mengenlehre Oberseminar in Münster.
Title: Woodin's HOD Conjecture

• 16 Sep 2014 » Workshop in Set Theory, Bedlewo, Poland - Producing Iterable Inner Models with Finitely many Woodin Cardinals from Optimal Determinacy Hypotheses
On September 16th, 2014, I gave a talk in the Workshop in Set Theory in Bedlewo, Poland.
Title: Producing Iterable Inner Models with Finitely many Woodin Cardinals from Optimal Determinacy Hypotheses Abstract

• 05 Sep 2014 » Colloquium Logicum 2014, Neubiberg - Producing Iterable Inner Models with Finitely many Woodin Cardinals from Optimal Determinacy Hypotheses
On September 5th, 2014, I gave a talk at the Colloquium Logicum in Neubiberg.
Title: Producing Iterable Inner Models with Finitely many Woodin Cardinals from Optimal Determinacy Hypotheses

• 19 May 2014 » Oberseminar Münster - Producing $M_n^\sharp$ from optimal determinacy hypotheses
On May 19, 2014, I gave a talk in the Mengenlehre Oberseminar in Münster.
Title: Producing $M_n^\sharp$ from optimal determinacy hypotheses

• 07 Mar 2014 » INFTY Final Conference, Bonn - Producing Inner Models with Woodin Cardinals from Projective Determinacy
On March 07, 2014, I gave a talk at the INFTY Final Conference in Bonn.
Title: Producing Inner Models with Woodin Cardinals from Projective Determinacy

• 12 Nov 2013 » Doktorandenforum der Studienstiftung des deutschen Volkes, Köln - Nicht-Beweisbarkeit und unendliche Spiele
On November 12th, 2013, I gave a talk (in German) in Köln at the Doktorandenforum of the Studienstiftung des deutschen Volkes (German National Academic Foundation).
Title: Nicht-Beweisbarkeit und unendliche Spiele (Non-provability and Infinite Games) Slides

• 28 Oct 2013 » Oberseminar Münster - Producing $M_n^\sharp$ from Projective Determinacy
On October 28 and November 04, 2013, I gave talks in the Mengenlehre Oberseminar in Münster.
Title: Producing $M_n^\sharp$ from Projective Determinacy

• 12 Jul 2013 » Graduate Summer School in Set Theory, Irvine - On a Generalization of a Lemma by Kechris and Solovay to an Inner Model Theoretic Context
On July 12th, 2013, I gave a talk at the Graduate Summer School in Set Theory in Irvine.
Title: On a Generalization of a Lemma by Kechris and Solovay to an Inner Model Theoretic Context

• 02 May 2013 » Tecklenburg - Games in Set Theory
On May 2nd, 2013, I gave a talk in Tecklenburg at a meeting for PhD students in mathematics at the University of Münster.
Title: Games in Set Theory Slides

• 22 May 2012 » Writeup of an account of Woodin's HOD conjecture
Master's thesis (in German). PDF
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-11th-edition/chapter-1-section-1-1-linear-equations-1-1-exercises-page-84/22
# Chapter 1 - Section 1.1 - Linear Equations - 1.1 Exercises: 22

$3$

#### Work Step by Step

Add $2.96$ to both sides and subtract $0.01x$ from both sides so that the $x$-terms are on the same side of the equation. $6.06=2.02x$ Now we divide both sides by $2.02$. $(6.06/2.02)=(2.02x/2.02)$ $x=3$
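The textbook's original equation is not reproduced on this page; a reconstruction consistent with the steps described (an inference, not shown in the source) is:

$$\begin{aligned} 0.01x+3.10 &= 2.03x-2.96 \\ 0.01x+6.06 &= 2.03x && \text{(add } 2.96 \text{ to both sides)} \\ 6.06 &= 2.02x && \text{(subtract } 0.01x \text{ from both sides)} \\ x &= \frac{6.06}{2.02} = 3. \end{aligned}$$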
https://clanlu.net/tag/image
# Image5

- The ring homomorphic image of an ideal is an ideal
- Exhibit a group homomorphism from Z/(8) to Z/(4)
- Describe the kernel and fibers of a given group homomorphism
- Absolute value is a group homomorphism on the multiplicative real numbers
- The image of a group homomorphism is a subgroup
http://math.stackexchange.com/questions/38836/devise-system-of-equations-to-solve-age-problem
# Devise System of Equations to Solve Age Problem

Hi, this may be a simple silly problem, but it is bugging me as I am not able to devise a system of equations to solve it.

"My husband's age," remarked a lady the other day, "is represented by the figures of my own age reversed. He is my senior, and the difference between our ages is one-eleventh of their sum."

The answer is 54 and 45, but I am not able to find a way to get it.

Let the lady's age be 10a+b, and the husband's age be 10b+a... – J. M. May 13 '11 at 6:40
Also, (594,495), (5454,4545), (5994,4995), etc... – user4143 May 13 '11 at 8:51

## 1 Answer

Not sure if this totally solves what you want, but here's a way to get to a stage where the numbers should be easier to guess and check.

I assume the ages of the two people are two-digit numbers. So we can represent the husband's age as $10x+y$, where $x$ and $y$ are the decimal digits, and the wife's age is then $10y+x$, with $10x+y\gt 10y+x$. Since the difference of their ages is one eleventh of the sum, this translates to $$(10x+y)-(10y+x)=\frac{1}{11}(10x+y+10y+x)$$ but this implies $$9x-9y=\frac{1}{11}(11x+11y)=x+y.$$ So you have $9(x-y)=x+y$. Since $0\leq x,y\leq 9$, it's not hard to experiment with these numbers to find that $x=5$ and $y=4$.

I have reached up to this point, but I think some piece is missing that is not allowing us to form a strict set of equations. – TheVillageIdiot May 13 '11 at 6:43
@Village: Sure, but you only have a few digits to try... so that "missing piece" doesn't really matter in this case. – J. M. May 13 '11 at 6:45
The next step is $8x =10y$, i.e. $4x=5y$, so $x$ is a multiple of $5$ which means it is $0$ or $5$ ($10$ would be too big for a decimal digit) so $y=0$ or $4$, but $0$ would make them the same age and unable to talk. – Henry May 13 '11 at 6:52
Actually, you don't need to do much trial and error, since you know that $x+y$ must be divisible by $9$, so either $x+y=9$ or $x+y=18.$ But $x+y=18$ only happens when $x=9, y=9$ which is obviously not a solution. If $x+y=9$, then $x-y=1$, and you get $45/54$ – Thomas Andrews May 13 '11 at 6:52
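Putting the answer and the comments together, the finishing steps read:

$$9(x-y)=x+y \;\Longrightarrow\; 8x=10y \;\Longrightarrow\; 4x=5y,$$

so with $0\leq x,y\leq 9$ and $x>y$ the only solution is $x=5$, $y=4$: the ages are $54$ and $45$, and indeed $54-45=9=\frac{54+45}{11}$.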
http://www.last.fm/music/%D0%9C%D0%B0%D0%BA%D1%81%D0%B8%D0%BC+%D0%9C%D0%B0%D1%82%D0%B2%D0%B5%D0%B5%D0%B2/+similar
# Similar Artists

Плед (pronounced "Pl'ed", literally means "Throw blanket") is a contemporary Russian band whose style can be described as a mixture of punk,…
https://socratic.org/questions/it-took-david-an-hour-to-ride-20-km-from-his-house-to-the-nearest-town-he-then-s
# It took David an hour to ride 20 km from his house to the nearest town. He then spent 40 minutes on the return journey. What was his average speed?

Sep 6, 2017

$24\ \text{km h}^{-1}$

#### Explanation:

The average speed is simply the rate at which the distance travelled by David varies per unit of time.

$$\text{average speed} = \frac{\text{distance covered}}{\text{unit of time}}$$

In your case, you can take a unit of time to mean $1$ hour. Since you know that $1\ \text{h} = 60\ \text{min}$, you can say that David needed

$$40\ \text{min} \cdot \frac{1\ \text{h}}{60\ \text{min}} = \frac{2}{3}\ \text{h}$$

to make the return trip. Now, notice that on his way from his house to the town, David travels $20\ \text{km}$ in exactly $1$ hour. This means that his average speed for the first part of the journey will be

$$\text{average speed}_1 = \frac{20\ \text{km}}{1\ \text{h}} = 20\ \text{km h}^{-1}$$

Since it takes less than an hour for David to complete the return trip, you can say that his average speed for the return trip will be higher $\to$ he will cover more distance per unit of time on his return trip. More specifically, David will cover

$$1\ \text{h} \cdot \frac{20\ \text{km}}{\frac{2}{3}\ \text{h}} = 30\ \text{km}$$

in $1$ hour on his return trip, so his average speed will be

$$\text{average speed}_2 = 30\ \text{km h}^{-1}$$

So, you know the average speed for the first trip and the average speed for the return trip, so you can simply take the average of these two values, right? Wrong! It is absolutely crucial to avoid going

$$\text{average speed} = \frac{20\ \text{km h}^{-1} + 30\ \text{km h}^{-1}}{2} = 25\ \text{km h}^{-1} \quad (\text{incorrect!})$$

because you will get an incorrect answer $\to$ that's not how the average speed works! Instead, focus on the definition of average speed, which tells you that you must find the total distance covered by David per unit of time. You know that you have

• total distance $= 20\ \text{km} + 20\ \text{km} = 40\ \text{km}$
• total time $= 1\ \text{h} + \frac{2}{3}\ \text{h} = \frac{5}{3}\ \text{h}$

So if David covers $40\ \text{km}$ in $\frac{5}{3}$ hours, how many kilometers does he cover in $1$ hour?

$$1\ \text{h} \cdot \frac{40\ \text{km}}{\frac{5}{3}\ \text{h}} = 24\ \text{km}$$

Therefore, you can say that David has an average speed of

$$\text{average speed} = 24\ \text{km h}^{-1}$$

I'll leave the answer rounded to two sig figs, but don't forget that your values justify only one significant figure for the answer. This is why the equation for average speed is given as

$$\text{average speed} = \frac{\text{total distance}}{\text{total time}} = \frac{40\ \text{km}}{\frac{5}{3}\ \text{h}} = 24\ \text{km h}^{-1}$$
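As a general rule behind the warning above (an added note, not part of the original answer): when two legs cover the same distance $d$ at speeds $v_1$ and $v_2$, the average speed is the harmonic mean of the two speeds rather than their arithmetic mean,

$$\bar{v} = \frac{2d}{\frac{d}{v_1}+\frac{d}{v_2}} = \frac{2v_1v_2}{v_1+v_2} = \frac{2\cdot 20\cdot 30}{20+30}\ \text{km h}^{-1} = 24\ \text{km h}^{-1},$$

which agrees with the total-distance-over-total-time computation.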
2019-03-24 06:41:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 23, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8950977325439453, "perplexity": 684.3322855866484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203378.92/warc/CC-MAIN-20190324063449-20190324085449-00210.warc.gz"}
https://tex.stackexchange.com/questions/371743/a-parbox-inside-a-textblock
# A parbox inside a textblock

I would like to have absolute positioning with centered vertical alignment. I am trying to achieve this by using a `\parbox` inside a `\textblock` (from textpos):

```latex
\documentclass{article}
\usepackage[absolute,overlay,showboxes]{textpos}
\begin{document}
\setlength{\fboxsep}{0pt}
\begin{textblock*}{1in}(1in,1in)\noindent
\fbox{\parbox[c][1in][c]{1in}{Hallo}}
\end{textblock*}
\end{document}
```

However, the widths of the textblock and the parbox do not match. Does anybody know where this unwanted extra spacing comes from and how to remove it?
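No answer is attached to this record, but a plausible explanation, assuming standard LaTeX box behaviour, is that `\fbox` adds its rule thickness (`\fboxrule`, 0.4pt per side by default) on top of the content width even when `\fboxsep` is zero, so the framed box ends up `2\fboxrule` wider than the 1in textblock. A minimal sketch of a compensating fix:

```latex
% Sketch: subtract the frame rule from the parbox dimensions so that
% \fbox{...} comes out exactly 1in square (assumes e-TeX's \dimexpr).
\documentclass{article}
\usepackage[absolute,overlay,showboxes]{textpos}
\begin{document}
\setlength{\fboxsep}{0pt}
\begin{textblock*}{1in}(1in,1in)\noindent
\fbox{\parbox[c][\dimexpr1in-2\fboxrule\relax][c]%
      {\dimexpr1in-2\fboxrule\relax}{Hallo}}
\end{textblock*}
\end{document}
```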
2019-07-18 04:42:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8232900500297546, "perplexity": 2224.944802312576}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525500.21/warc/CC-MAIN-20190718042531-20190718064531-00286.warc.gz"}
https://everything.explained.today/Smooth_morphism/
Smooth morphism explained

In algebraic geometry, a morphism $f : X \to S$ between schemes is said to be smooth if

• (i) it is locally of finite presentation,
• (ii) it is flat, and
• (iii) for every geometric point $\overline{s} \to S$, the fiber $X_{\overline{s}} = X \times_S \overline{s}$ is regular.

Condition (iii) means that each geometric fiber of $f$ is a nonsingular variety (if it is separated). Thus, intuitively speaking, a smooth morphism gives a flat family of nonsingular varieties. If $S$ is the spectrum of an algebraically closed field and $f$ is of finite type, then one recovers the definition of a nonsingular variety.

Equivalent definitions

There are many equivalent definitions of a smooth morphism. Let $f : X \to S$ be locally of finite presentation. Then the following are equivalent.

1. $f$ is smooth.
2. $f$ is formally smooth (see below).
3. $\Omega_{X/S}$ is locally free of rank equal to the relative dimension of $X/S$.
4. For any $x \in X$, there exists a neighborhood $\operatorname{Spec} B$ of $x$ and a neighborhood $\operatorname{Spec} A$ of $f(x)$ such that $B = A[t_1, \dots, t_n]/(P_1, \dots, P_m)$ and the ideal generated by the $m$-by-$m$ minors of $(\partial P_i / \partial t_j)$ is $B$.
5. Locally, $f$ factors into $X \xrightarrow{g} \mathbb{A}^n_S \to S$ where $g$ is étale.
6. Locally, $f$ factors into $X \xrightarrow{g} \mathbb{A}^n_S \to \mathbb{A}^{n-1}_S \to \cdots \to \mathbb{A}^1_S \to S$ where $g$ is étale.

A morphism of finite type is étale if and only if it is smooth and quasi-finite.

A smooth morphism is stable under base change and composition. A smooth morphism is universally locally acyclic.

Examples

Smooth morphisms are supposed to geometrically correspond to smooth submersions in differential geometry; that is, they are smooth locally trivial fibrations over some base space (by Ehresmann's theorem).

Smooth morphism to a point

Let $f$ be the morphism of schemes

$$\operatorname{Spec}\left( \mathbb{C}[x, y]/(f) \right) \to \operatorname{Spec}(\mathbb{C}), \qquad f = y^2 - x^3 - x - 1.$$

It is smooth because of the Jacobian condition: the Jacobian matrix $\begin{bmatrix} 3x^2 - 1 & y \end{bmatrix}$ vanishes at the points $(1/\sqrt{3}, 0)$ and $(-1/\sqrt{3}, 0)$, which have empty intersection with the vanishing locus of the polynomial, since

$$f(1/\sqrt{3}, 0) = -\frac{1}{3\sqrt{3}} - \frac{1}{\sqrt{3}} - 1, \qquad f(-1/\sqrt{3}, 0) = \frac{1}{3\sqrt{3}} + \frac{1}{\sqrt{3}} - 1,$$

which are both non-zero.

Trivial fibrations

Given a smooth scheme $Y$, the projection morphism $Y \times X \to X$ is smooth.

Vector bundles

Every vector bundle $E \to X$ over a scheme is a smooth morphism. For example, it can be shown that the associated vector bundle of $\mathcal{O}(k)$ over $\mathbb{P}^n$ is the weighted projective space minus a point,

$$\mathcal{O}(k) = \mathbb{P}(1, \ldots, 1, k) - \{[0 : \cdots : 0 : 1]\} \to \mathbb{P}^n,$$

sending $[x_0 : \cdots : x_n : x_{n+1}] \to [x_0 : \cdots : x_n]$. Notice that the direct sum bundles $\mathcal{O}(k) \oplus \mathcal{O}(l)$ can be constructed using the fiber product $\mathcal{O}(k) \oplus \mathcal{O}(l) = \mathcal{O}(k) \times_X \mathcal{O}(l)$.

Separable field extensions

Recall that a field extension $K \to L$ is called separable iff given a presentation $L = K[x]/(f(x))$ we have $\gcd(f(x), f'(x)) = 1$. We can reinterpret this definition in terms of Kähler differentials as follows: the field extension is separable iff $\Omega_{L/K} = 0$. Notice that this includes every perfect field: finite fields and fields of characteristic 0.

Non-examples

Singular varieties

If we consider $\operatorname{Spec}$ of the underlying algebra $R$ of a projective variety $X$, called the affine cone of $X$, then the point at the origin is always singular. For example, consider the affine cone of a quintic $3$-fold given by

$$x_0^5 + x_1^5 + x_2^5 + x_3^5 + x_4^5.$$

Then the Jacobian matrix is given by

$$\begin{bmatrix} 5x_0^4 & 5x_1^4 & 5x_2^4 & 5x_3^4 & 5x_4^4 \end{bmatrix},$$

which vanishes at the origin, hence the cone is singular. Affine hypersurfaces like these are popular in singularity theory because of their relatively simple algebra but rich underlying structures.
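As a quick numerical cross-check of the two Jacobian arguments above (my addition, not part of the original article), the following C++ sketch evaluates the curve polynomial at the two points where the article says its Jacobian row vanishes, and the quintic cone's gradient at the origin:

```cpp
// Sanity check (mine): the curve f = y^2 - x^3 - x - 1 is non-zero at the
// points where its Jacobian row vanishes, so the scheme misses them; the
// quintic cone contains the origin, where every partial derivative 5*xi^4
// vanishes, so the origin is a singular point of the cone.
#include <cmath>
#include <iostream>

int main() {
    auto f = [](double x, double y) { return y * y - x * x * x - x - 1.0; };

    const double r = 1.0 / std::sqrt(3.0);
    std::cout << "f( 1/sqrt(3), 0) = " << f(r, 0.0)  << '\n'   // != 0
              << "f(-1/sqrt(3), 0) = " << f(-r, 0.0) << '\n';  // != 0

    double origin[5] = {0, 0, 0, 0, 0};
    double g = 0, grad_norm = 0;
    for (double xi : origin) {
        g += std::pow(xi, 5);                  // g(0) = 0: origin on cone
        grad_norm += std::abs(5 * std::pow(xi, 4));
    }
    std::cout << "g(0) = " << g << ", |grad g(0)| = " << grad_norm << '\n';
}
```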
Another example of a singular variety is the projective cone of a smooth variety: given a smooth projective variety $X \subset \mathbb{P}^n$, its projective cone is the union of all lines in $\mathbb{P}^{n+1}$ intersecting $X$. For example, the projective cone of the points

$$\operatorname{Proj}\left( \mathbb{C}[x, y]/(x^4 + y^4) \right)$$

is the scheme

$$\operatorname{Proj}\left( \mathbb{C}[x, y, z]/(x^4 + y^4) \right).$$

If we look in the $z \neq 0$ chart, this is the scheme

$$\operatorname{Spec}\left( \mathbb{C}[X, Y]/(X^4 + Y^4) \right),$$

and if we project it down to the affine line $\mathbb{A}^1_Y$, this is a family of four points degenerating at the origin. The non-singularity of this scheme can also be checked using the Jacobian condition.

Degenerating families

Consider the flat family

$$\operatorname{Spec}\left( \mathbb{C}[t, x, y]/(xy - t) \right) \to \mathbb{A}^1_t.$$

Then the fibers are all smooth except for the point at the origin. Since smoothness is stable under base change, this family is not smooth.

Non-separable field extensions

For example, the field extension $\mathbb{F}_p(t^p) \to \mathbb{F}_p(t)$ is non-separable, hence the associated morphism of schemes is not smooth. If we look at the minimal polynomial of the field extension, $f(x) = x^p - t^p$, then $df = 0$, hence the Kähler differentials will be non-zero.

Formally smooth morphism

See also: Formally smooth map and Geometrically regular ring.

One can define smoothness without reference to geometry. We say that an $S$-scheme $X$ is formally smooth if, for any affine $S$-scheme $T$ and a subscheme $T_0$ of $T$ given by a nilpotent ideal, $X(T) \to X(T_0)$ is surjective, where we write $X(T) = \operatorname{Hom}_S(T, X)$. Then a morphism locally of finite type is smooth if and only if it is formally smooth. In the definition of "formally smooth", if we replace surjective by "bijective" (resp. "injective"), then we get the definition of formally étale (resp. formally unramified).

Smooth base change

Let $S$ be a scheme and let $\operatorname{char}(S)$ denote the image of the structure map $S \to \operatorname{Spec} \mathbb{Z}$. The smooth base change theorem states the following: let $f : X \to S$ be a quasi-compact morphism, $g : S' \to S$ a smooth morphism, and $\mathcal{F}$ a torsion sheaf on $X_{\text{ét}}$. If for every $0 \neq p$ in $\operatorname{char}(S)$, $p : \mathcal{F} \to \mathcal{F}$ is injective, then the base change morphism

$$g^*(R^i f_* \mathcal{F}) \to R^i f'_*(g'^* \mathcal{F})$$

is an isomorphism.
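The degenerating-family example above can likewise be checked mechanically: the gradient of $h_t = xy - t$ is $(y, x)$, which vanishes only at the origin, so a fiber is singular exactly when the origin lies on it, i.e. when $h_t(0,0) = -t = 0$. A tiny sketch (mine, with illustrative values of $t$):

```cpp
// Check (mine) that in the family xy = t, the only singular fiber is the
// one over t = 0: the gradient (y, x) of h_t = xy - t vanishes only at
// (0,0), and that point lies on the fiber iff h_t(0,0) = -t = 0.
#include <iostream>

int main() {
    for (double t : {0.0, 0.5, 1.0, -2.0}) {
        double h_at_origin = 0.0 * 0.0 - t;  // h_t(0,0) = -t
        bool singular_fiber = (h_at_origin == 0.0);
        std::cout << "t = " << t << ": fiber is "
                  << (singular_fiber ? "singular (node at origin)" : "smooth")
                  << '\n';
    }
}
```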
2023-03-20 20:14:35
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9885546565055847, "perplexity": 1602.49046057188}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943555.25/warc/CC-MAIN-20230320175948-20230320205948-00212.warc.gz"}
https://math.stackexchange.com/questions/3281074/can-every-maximal-planar-graph-be-obtained-as-a-minor-of-a-planar-graph-with-onl
# Can every maximal planar graph be obtained as a minor of a planar graph with only even vertex degrees? This question is about simple undirected planar graphs without loops. Starting with a planar graph $${\cal G}_0$$ in which all vertex degrees are even we perform edge contractions, thus obtaining new graphs that are minors of $${\cal G}_0$$ (and that are therefore also planar). The question is whether every maximal planar graph $${\cal G}_1$$ (with arbitrary even and/or odd vertex degrees) can be obtained in that way as a minor of some appropriate $${\cal G}_0$$ (clearly $${\cal G}_0$$ will in general be different for different $${\cal G}_1$$)? Also, if every $${\cal G}_1$$ can be obtained in that way, then given a $${\cal G}_1$$ how do I actually construct a $${\cal G}_0$$ and a corresponding contraction? Trivially, if $${\cal G}_1$$ itself only has even vertex degrees, then we can just choose $${\cal G}_0={\cal G}_1$$ (a graph is a minor of itself). The question is really about $${\cal G}_1$$ that also have odd vertex degrees. (I am only interested in maximal planar $${\cal G}_1$$, but maximality may not actually be important for this question - but I am not sure) Inspired by your correction of my earlier answer, define the operation of uncontracting an edge $$(u, v)$$ as inserting a new vertex $$w$$ in the interior of one of the faces of the edge and adding edges $$(u, w)$$ and $$(v, w)$$. This inserts one vertex of even degree and toggles the parity of $$u$$ and $$v$$. Contracting $$(u, w)$$ or $$(v, w)$$ restores the original graph. By the handshake lemma, the number of odd degree vertices in a connected component is even. Therefore the following algorithm gives a graph which can be edge-contracted to $$G$$: If $$G$$ has no odd vertices, we're done. Otherwise uncontract each edge along a path between two odd vertices, reducing the number of odd vertices by two, and recurse.
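To make the uncontraction algorithm concrete, here is a small self-contained C++ sketch (mine, not the answerer's): it represents the graph with adjacency sets, repeatedly picks two odd-degree vertices, finds a path between them by BFS, and uncontracts every edge along that path, exactly as described above. K4 (a maximal planar graph with all degrees odd) serves as the test input; planarity is preserved by construction, since each new vertex can be drawn inside a face of its edge, but the code does not verify that.

```cpp
#include <iostream>
#include <map>
#include <queue>
#include <set>
#include <vector>

using Graph = std::map<int, std::set<int>>;

void add_edge(Graph& g, int u, int v) { g[u].insert(v); g[v].insert(u); }

// BFS path from s to t (assumes they are in the same component).
std::vector<int> path(const Graph& g, int s, int t) {
    std::map<int, int> parent{{s, s}};
    std::queue<int> q; q.push(s);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        if (u == t) break;
        for (int w : g.at(u))
            if (!parent.count(w)) { parent[w] = u; q.push(w); }
    }
    std::vector<int> p{t};
    while (p.back() != s) p.push_back(parent[p.back()]);
    return p;
}

int main() {
    // K4: maximal planar, all four degrees odd (3).
    Graph g;
    for (int u = 0; u < 4; ++u)
        for (int v = u + 1; v < 4; ++v) add_edge(g, u, v);

    int next = 4; // names for the new "uncontracted" vertices
    for (;;) {
        std::vector<int> odd;
        for (const auto& [v, nbrs] : g)
            if (nbrs.size() % 2) odd.push_back(v);
        if (odd.empty()) break;
        // Uncontract every edge along a path between two odd vertices:
        // interior vertices are toggled twice, the endpoints once.
        auto p = path(g, odd[0], odd[1]);
        for (size_t i = 0; i + 1 < p.size(); ++i) {
            int w = next++;
            add_edge(g, p[i], w);
            add_edge(g, p[i + 1], w); // contracting (p[i], w) restores g
        }
    }
    for (const auto& [v, nbrs] : g)
        std::cout << v << ": degree " << nbrs.size() << '\n'; // all even
}
```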
2022-01-26 09:12:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 23, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.761139988899231, "perplexity": 217.81175215775875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304928.27/warc/CC-MAIN-20220126071320-20220126101320-00128.warc.gz"}
https://easierwithpractice.com/what-is-budget-variance/
# What is budget variance?

## What is budget variance?

A budget variance is the difference between the amount you budgeted for and the actual amount spent. When preparing energy budgets, it is practically impossible to be "right on the money," so the result is a budget surplus or deficit.

What is the difference between budget and actual? Budget – an estimate of revenues and expenses for an account for a fiscal year. Actuals – the actuals reflect how much revenue an account has actually generated or how much money an account has paid out in expenditures at a given point in time during a fiscal year.

What is the difference between a positive budget variance and a negative budget variance? A favorable budget variance refers to positive variances or gains; an unfavorable budget variance describes negative variance, indicating losses or shortfalls. Budget variances occur because forecasters are unable to predict future costs and revenue with complete accuracy.

### How do you find the actual variance of a budget?

Start by finding the difference between the actual total expenses and the total budgeted amount. In this case, that's $34. Next, divide by the total original budget and multiply by 100, yielding a percentage over budget of 4%.

What are the four main reasons budget deviations occur? There are four common reasons why actual expenditure or income will show a variance against the budget.

• The cost is more (or less) than budgeted. Budgets are prepared in advance and can only ever estimate income and expenditure.
• Planned activity did not occur when expected.
• Change in planned activity.
• Error/Omission.

How do you find a variance? How to Calculate Variance

1. Find the mean of the data set. Add all data values and divide by the sample size n.
2. Find the squared difference from the mean for each data value. Subtract the mean from each data value and square the result.
3. Find the sum of all the squared differences.
4. Calculate the variance.

## What exactly is variance?

The variance is a measure of variability. It is calculated by taking the average of squared deviations from the mean. Variance tells you the degree of spread in your data set.

What is the relationship between standard deviation and variance? Variance is the average squared deviation from the mean, while standard deviation is the square root of this number. Both measures reflect variability in a distribution, but their units differ: standard deviation is expressed in the same units as the original values (e.g., minutes or meters).

What is σ in statistics? The symbol 'σ' represents the population standard deviation. The term 'sqrt' used in this statistical formula denotes square root. The term 'Σ(Xᵢ − μ)²' used in the statistical formula represents the sum of the squared deviations of the scores from their population mean.

## What is this symbol Σ?

Σ This symbol (called Sigma) means "sum up." I love Sigma; it is fun to use, and can do many clever things.

What is σ called? Sigma /ˈsɪɡmʌ/ (uppercase Σ, lowercase σ, lowercase in word-final position ς; Greek: σίγμα) is the eighteenth letter of the Greek alphabet. In general mathematics, uppercase ∑ is used as an operator for summation.

What percentage is 4 sigma? Five-sigma corresponds to a p-value, or probability, of 3×10⁻⁷, or about 1 in 3.5 million. Don't be so sure:

| σ | Confidence that result is real |
| --- | --- |
| 3 σ | 99.87% |
| 3.5 σ | 99.98% |
| > 4 σ | 100% (almost) |

## How many Sigma is 1.67 Cpk?
Sigma level table (two-sided):

| Cpk / Ppk | Sigma level | PPM out of tolerance |
| --- | --- | --- |
| 1.33 | 4.0 | 63.342 |
| 1.50 | 4.5 | 6.795 |
| 1.67 | 5.0 | 0.573 |

Is 7 Sigma possible? Given where the world is right now, many followers of Six Sigma (including myself) would say that a capability of 7-sigma is pessimistically possible, but not pragmatically probable. This would be a 5-sigma level of performance. A capability of 6-sigma would be 1 argument every 298,048 days or 805 years!

What is the best sigma level? A process with 50% defects (DPMO = 500,000) would have a Sigma Level of 0. Usually, a process with a Sigma Level of 6 or greater is considered an excellent process.

### What is a good sigma value?

A Three Sigma quality level of performance produces roughly 66,800 defects per million opportunities. The goal companies should reach for is Six Sigma, meaning 3.4 defects for every one million opportunities.

What percent is 3 sigma? 99.7 percent.

What's the difference between S and Sigma? The distinction between sigma (σ) and 's' as representing the standard deviation of a normal distribution is simply that sigma (σ) signifies the idealised population standard deviation derived from an infinite number of measurements, whereas 's' represents the sample standard deviation derived from a finite number of …

## What is S in statistic?

s refers to the standard deviation of a sample. s² refers to the variance of a sample. p refers to the proportion of sample elements that have a particular attribute.

How do you find Sigma? The symbol for Standard Deviation is σ (the Greek letter sigma). Say what?

1. Work out the Mean (the simple average of the numbers)
2. Then for each number: subtract the Mean and square the result.
3. Then work out the mean of those squared differences.
4. Take the square root of that and we are done!

What is Sigma XBAR bar? calculated test statistic. μ and σ can take subscripts to show what you are taking the mean or standard deviation of. For instance, σx̅ ("sigma sub x-bar") is the standard deviation of sample means, or standard error of the mean.

### What is XBAR?

The x-bar is the symbol (or expression) used to represent the sample mean, a statistic, and that mean is used to estimate the true population parameter, mu.

What does XI mean in standard deviation? The individual data point.

What does the standard deviation tell you? The standard deviation is the average amount of variability in your data set. It tells you, on average, how far each score lies from the mean.

## How is deviation calculated?

1. The standard deviation formula may look confusing, but it will make sense after we break it down.
2. Step 1: Find the mean.
3. Step 2: For each data point, find the square of its distance to the mean.
4. Step 3: Sum the values from Step 2.
5. Step 4: Divide by the number of data points.
6. Step 5: Take the square root.

How do you interpret the standard deviation? More precisely, it is a measure of the average distance between the values of the data in the set and the mean. A low standard deviation indicates that the data points tend to be very close to the mean; a high standard deviation indicates that the data points are spread out over a large range of values.

What is acceptable standard deviation? For an approximate answer, please estimate your coefficient of variation (CV = standard deviation / mean). As a rule of thumb, a CV >= 1 indicates a relatively high variation, while a CV < 1 can be considered low. A "good" SD depends on whether you expect your distribution to be centered or spread out around the mean.
### How do you tell if a standard deviation is high or low?

Low standard deviation means data are clustered around the mean, and high standard deviation indicates data are more spread out. A standard deviation close to zero indicates that data points are close to the mean, whereas a high standard deviation indicates that they are spread out over a wider range of values.

What is 2 standard deviations from the mean? For an approximately normal data set, the values within one standard deviation of the mean account for about 68% of the set; while within two standard deviations account for about 95%; and within three standard deviations account for about 99.7%.

What is two standard deviations above IQ? 13.59% of the population is between the first and second standard deviation below the mean (IQ 70-85), and 13.59% is between the first and second standard deviation above the mean (IQ 115-130).
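Here is a small C++ sketch (mine) tying together two recipes from this page: the percent-over-budget calculation from "How do you find the actual variance of a budget?" (the $34 / 4% example implies a budget of $850, used below), and the step-by-step standard deviation recipe under "How is deviation calculated?", plus the CV rule of thumb. The sample data values are illustrative, not from the original article:

```cpp
// Percent over budget, population variance/standard deviation (following
// the listed steps), and the coefficient of variation CV = sd / mean.
#include <cmath>
#include <iostream>
#include <vector>

int main() {
    // Budget variance: (actual - budgeted) / budgeted * 100.
    double budgeted = 850.0, actual = 884.0;   // difference of $34
    double pct_over = (actual - budgeted) / budgeted * 100.0;
    std::cout << "percent over budget: " << pct_over << "%\n";  // 4%

    std::vector<double> data{4.0, 8.0, 6.0, 5.0, 7.0};
    double mean = 0.0;
    for (double v : data) mean += v;
    mean /= data.size();                        // Step 1: find the mean

    double sum_sq = 0.0;                        // Steps 2-3: squared
    for (double v : data)                       // distances, summed
        sum_sq += (v - mean) * (v - mean);

    double variance = sum_sq / data.size();     // Step 4: divide by n
    double sd = std::sqrt(variance);            // Step 5: square root
    double cv = sd / mean;                      // rule of thumb: CV < 1 is low

    std::cout << "mean = " << mean << ", variance = " << variance
              << ", sd = " << sd << ", CV = " << cv << '\n';
}
```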
2022-07-03 02:02:07
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8152459263801575, "perplexity": 1272.7658837918193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104209449.64/warc/CC-MAIN-20220703013155-20220703043155-00700.warc.gz"}
https://www.gamedev.net/blogs/entry/842431-improvements/
# Improvements

This weekend, I've been working on a few optimizations. As I suspected a huge CPU bottleneck, I added a CProfile class to my code, which works hierarchically. You call a CProfile::begin(title) function before the piece of code you want to profile, and a CProfile::end() function after. These calls can be nested. Two pieces of information are gathered: the time (in milliseconds) elapsed between begin() and end(), and the percentage of the total frame spent in that block of code. In the log it might look like this:

```
Frame (20 ms, 100%)
{
    Setup (10 ms, 50%)
    {
        UpdateScene (5 ms, 25%)
        DoFrustumCulling (3 ms, 15%)
        SortObjects (2 ms, 10%)
    }
    Render (10 ms, 50%)
    {
        Planet1 (5 ms, 25%)
        {
            Atmosphere (2 ms, 10%)
            Terrain (3 ms, 15%)
        }
        Planet2 (5 ms, 25%)
        {
            Atmosphere (2 ms, 10%)
            Terrain (3 ms, 15%)
        }
    }
}
```

After I added a couple of begin() and end() calls in the important parts of the engine, I ran it, and without too much surprise, discovered it spent:

- around 25% of its time updating the planet
- around 75% of its time rendering the planet (in that case, "rendering" means setting up the scene, sorting the objects, and sending them to the GPU)

I also displayed the number of terrain patches at ground level: around 700. That's an interesting piece of information, because I've got a unique texture assigned to each of these. I was testing with a resolution of 512², which means it was using 700 * 512 * 512 * 3 bytes of texture memory: that's 550 MB! Despite this, it was running pretty well on my ATI X800 256 MB, which means the driver is doing its job of paging textures between video and system memory, but I still decreased the standard resolution to 256².

Finally, I discovered that a piece of code in the planet setup was called twice (while once was enough). Fixing this saved 15% on the total framerate.

On that X800, I'm now getting around 85 fps at ground level, but I'm not totally happy with that number (especially since it'll decrease when I add additional effects, clouds, vegetation, the user interface, etc.). My goal is to reach at least 120 fps. To get there, my next step will be to review the material sorting code, and especially the way shaders and their constants are bound, since at the moment I set tens of constants (which never change) for each of the 700 objects, and enable/disable the same shaders.

## 1 Comment

I saw a nice implementation of a profiler like you describe, and it had a couple of nice code-related characteristics. I forget the finer points, but something like:

```cpp
#define PROFILE_BLOCK CProfiler p( __FUNCTION__, __LINE__, __FILE__ );
```

and you'd then have:

```cpp
void myFunc( ... )
{
    PROFILE_BLOCK
    // other stuff
}
```

All you needed to do was add a PROFILE_BLOCK at the beginning of a block. Not perfect, but it avoids forgetting to "end" a profiling fragment [smile]

Are you profiling your OpenGL (?) usage as well as your CPU usage? I've taken to using PIX in a big way (OpenGL has 'GlDEBugger' or something like that, right?) and the information can be amazing. When viewed in a flat/sequential way, the things the pipeline is being asked to do look very different from the way the algorithms in your code might appear. At the very least it'd allow you to confirm whether any rendering-related optimizations (esp. state-change related) were at all effective [smile]

Jack
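Since the post describes the profiler only through its begin()/end() API and its log output, here is a minimal sketch of how such a hierarchical profiler could look, combined with the RAII PROFILE_BLOCK idea from the comment. The class internals and names below are my own guesses, not the author's actual implementation (requires C++17):

```cpp
#include <chrono>
#include <cstdio>
#include <memory>
#include <string>
#include <vector>

class CProfile {
    using Clock = std::chrono::steady_clock;
    struct Node {
        std::string title;
        Clock::time_point start;
        double ms = 0.0;
        Node* parent = nullptr;
        std::vector<std::unique_ptr<Node>> children;
    };
public:
    static void begin(std::string title) {
        auto node = std::make_unique<Node>();
        node->title = std::move(title);
        node->start = Clock::now();
        node->parent = s_current;
        Node* raw = node.get();
        if (s_current) s_current->children.push_back(std::move(node));
        else s_root = std::move(node);
        s_current = raw;
    }
    static void end() {
        s_current->ms = std::chrono::duration<double, std::milli>(
                            Clock::now() - s_current->start).count();
        s_current = s_current->parent;
        if (!s_current) { report(s_root.get(), 0); s_root.reset(); }
    }
private:
    static void report(const Node* n, int depth) {
        // Percentages are relative to the whole frame (the root block).
        std::printf("%*s%s (%.2f ms, %.0f%%)\n", depth * 2, "",
                    n->title.c_str(), n->ms, 100.0 * n->ms / s_root->ms);
        for (const auto& c : n->children) report(c.get(), depth + 1);
    }
    static inline std::unique_ptr<Node> s_root;
    static inline Node* s_current = nullptr;
};

// RAII wrapper in the spirit of the comment's PROFILE_BLOCK macro.
struct ProfileScope {
    explicit ProfileScope(const char* title) { CProfile::begin(title); }
    ~ProfileScope() { CProfile::end(); }
};
#define PROFILE_BLOCK ProfileScope _prof(__FUNCTION__);

int main() {
    CProfile::begin("Frame");
    { PROFILE_BLOCK /* work being timed */ }
    CProfile::end();
}
```

The tree is only printed when the outermost block closes, because the per-block percentages are relative to the whole frame, and the frame total is not known until then.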
2018-09-18 17:50:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4182860553264618, "perplexity": 3545.0010997912377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155634.45/warc/CC-MAIN-20180918170042-20180918190042-00480.warc.gz"}
http://skepticsplay.blogspot.com/2009/07/minesweeper-solution.html?showComment=1247455321418
## Friday, July 10, 2009 ### Minesweeper solution See the original puzzle There are five different mine arrangements which are consistent with the given minesweeper board. See here: Spoiler alert! Each of five arrangements is equally likely.* So if we want to maximize our chances of winning, we should first try the square which is least likely to contain a mine. The upper left unknown has only a 1/5 chance of containing a mine, so we might pick that. If it turns out to be a 3 or a 5, then we can figure out the rest of the mines from there. If it turns out to be a 4, then we'll have to guess one more square. Let's just guess the square below it. In total, we have a 3/5 chance of surviving. *The reason for this is that we are assuming that all arrangements are equally likely at the beginning of the game. As we click more squares, we prune the possibilities, but never do we make any of the remaining possibilities more likely than others. We know there is no way to improve on this survival rate, because if we picked any other square first, we would already have at least a 2/5 chance of dying. However, there is one alternative strategy which matches the 3/5 survival rate. If we pick the lower left right unknown, there is a 2/5 chance of dying. If we survive and it is a 3, then we can deduce the rest of the mines. If we survive and it is a 2, then we know that the upper left unknown is clear, and we can use the number there to deduce the rest of the mines.
2014-10-26 00:15:08
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8373611569404602, "perplexity": 263.6096042116518}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119652530.43/warc/CC-MAIN-20141024030052-00099-ip-10-16-133-185.ec2.internal.warc.gz"}
http://umj.imath.kiev.ua/article/?lang=en&article=8976
# Automorphisms of a finitary factor power of an infinite symmetric group

Hudzenko S. V.

Abstract

We consider a semigroup $FP^{+}_{\text{fin}}(\mathfrak{S}_{\text{fin}}(\mathbb{N}))$ defined as a finitary factor power of a finitary symmetric group of countable order. It is proved that all automorphisms of $FP^{+}_{\text{fin}}(\mathfrak{S}_{\text{fin}}(\mathbb{N}))$ are induced by permutations from $\mathfrak{S}_{\text{fin}}(\mathbb{N})$.

English version (Springer): Ukrainian Mathematical Journal 62 (2010), no. 7, pp. 1158–1162.

Citation Example: Hudzenko S. V. Automorphisms of a finitary factor power of an infinite symmetric group // Ukr. Mat. Zh. - 2010. - 62, № 7. - pp. 997–1001.
2019-06-26 22:49:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6030821204185486, "perplexity": 1078.3831780438209}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000575.75/warc/CC-MAIN-20190626214837-20190627000837-00253.warc.gz"}
https://lakejason0.wordpress.com/2019/01/
Merriam-Webster’s Word of the Day for January 31, 2019 is:

raddled • adjective

1 : being in a state of confusion : lacking composure

2 : broken-down, worn

Examples:

We were met at the door by a raddled old man who turned out to be the actor’s father, and who in his day had also been an estimable presence on the London stage.

“The real skill of Swan Song is the kaleidoscopic portrait it paints of its raddled hero. The narrative moves through time from Capote’s tawdry childhood and friendship with Harper Lee to his withered end in Fu Manchu pyjamas.” — Alex Preston, The Observer (London), 22 July 2018

Did you know?

Lake桑 January 31, 2019 at 01:00PM

## Word of the day: proliferate (from Merriam-Webster)

Merriam-Webster’s Word of the Day for January 30, 2019 is:

proliferate • \pruh-LIF-uh-rayt\ • verb

1 : to grow or cause to grow by rapid production of new parts, cells, buds, or offspring

2 : to increase or cause to increase in number as if by proliferating : multiply

Examples:

“Muskies in Lake St. Clair are a world-class presence because local folks 30 years ago got smart. They agreed on a catch-and-release ethic. Catch the muskie. Put it back into the water. And watch a species proliferate.” — Lynn Henning, The Detroit News, 26 December 2018

“The surge in the price of bitcoin, and of other cryptocurrencies, which proliferated amid a craze for initial coin offerings, prompted a commensurate explosion in the number of stories and conversations about this new kind of money….” — Nicholas Paumgarten, The New Yorker, 22 Oct. 2018

Did you know?

Proliferate is a back-formation of proliferation. That means that proliferation came first (we borrowed it from French in the 18th century) and was later shortened to form the verb proliferate. Ultimately these terms come from Latin. The French adjective prolifère (“reproducing freely”) comes from the Latin noun proles and the Latin combining form -fer. Proles means “offspring” or “descendants,” and -fer means “bearing.” Both of these Latin forms gave rise to numerous other English words. Prolific and proletarian ultimately come from proles; aquifer and words ending in -ferous have their roots in -fer.

Lake桑 January 30, 2019 at 01:00PM

## Word of the day: charisma (from Merriam-Webster)

Merriam-Webster’s Word of the Day for January 29, 2019 is:

charisma • \kuh-RIZ-muh\ • noun

1 : a personal magic of leadership arousing special popular loyalty or enthusiasm for a public figure (such as a political leader)

2 : a special magnetic charm or appeal

Examples:

The young singer had the kind of charisma that turns a performer into a star.

“Winner of seven Tony Awards including Best Musical, ‘Evita’ is the story of Eva Peron who used her charisma and charms to rise from her penniless origins to political power as the first lady of Argentina at the age of 27.” — Oscar Sales, The Press Journal (Vero Beach, Florida), 19 Dec. 2018

Did you know?

The Greek word charisma means “favor” or “gift.” It is derived from the verb charizesthai (“to favor”), which in turn comes from the noun charis, meaning “grace.” In English, charisma has been used in Christian contexts since the mid-1500s to refer to a gift or power bestowed upon an individual by the Holy Spirit for the good of the Church, a sense that is now very rare. The earliest nonreligious use of charisma that we know of occurred in a German text, a 1922 publication by sociologist Max Weber. The sense began appearing in English contexts shortly after Weber’s work was published.
Lake桑 January 29, 2019 at 01:00PM

## Word of the day: sleuth (from Merriam-Webster)

Merriam-Webster’s Word of the Day for January 28, 2019 is:

sleuth • \SLOOTH\ • verb

1 : to act as a detective : search for information

2 : to search for and discover

Examples:

“Farmer would go sleuthing in the archives of Arizona State University’s Center for Meteorite Studies to find evidence of an undiscovered landfall in Canada, and Ward could build a rig that trailed an 11-foot metal detector behind a combine, which is how they unearthed $1 million in pallasite fragments from several square miles of Alberta farmland.” — Joshuah Bearman and Allison Keeley, Wired, January 2019

“For more than five decades, Morse has sleuthed out long-lost family trees for a living. From his home base here in Haywood, Morse travels the world tracking down missing heirs.” — Becky Johnson, The Mountaineer (Haywood County, North Carolina), 20 Nov. 2018

Did you know?

“They were the footprints of a gigantic hound!” Those canine tracks in Arthur Conan Doyle’s The Hound of the Baskervilles set the great Sherlock Holmes sleuthing on the trail of a murderer. It was a case of art imitating etymology. When Middle English speakers first borrowed sleuth from Old Norse, the term referred to “the track of an animal or person.” In Scotland, sleuthhound referred to a bloodhound used to hunt game or track down fugitives from justice. In 19th-century U.S. English, sleuthhound became an epithet for a detective and was soon shortened to sleuth. From there, it was only a short leap to turning sleuth into a verb describing what a sleuth does.

Lake桑 January 28, 2019 at 01:00PM

## Word of the day: foray (from Merriam-Webster)

Merriam-Webster’s Word of the Day for January 27, 2019 is:

foray • \FOR-ay\ • noun

1 : a sudden or irregular invasion or attack for war or spoils : raid

2 : an initial and often tentative attempt to do something in a new or different field or area of activity

Examples:

“Although she debuted a line of jewelry last year, this is her first foray into creating her own makeup line.” — Hayley Schueneman, The New York Magazine, 28 Nov. 2018

“Edgardo Defortuna has been flying high for years, … erecting a string of ultra-luxury condo and hotel towers on his way to becoming one of Miami’s most prominent developers. He recently announced his first foray outside South Florida, unveiling a design for a trio of luxury towers in Paraguay.” — Andres Viglucci and Rene Rodriguez, The Miami Herald, 16 Dec. 2018

Did you know?

Foray comes from Middle English forrayen and probably traces back to an Anglo-French word that meant “raider” or “forager.” It’s related to the word forage, which commonly means “to wander in search of food (or forage).” Foray, in its earliest sense, referred to a raid for plunder. Relatively recently, foray began to take on a broader meaning. In a sense, foray still refers to a trip into a foreign territory. These days, though, looting and plundering needn’t be involved in a foray. When you take a foray, you dabble in an area, occupation, or pastime that’s new to you.
Lake桑 January 27, 2019 at 01:00PM

## Word of the day: doldrums (from Merriam-Webster)

Merriam-Webster’s Word of the Day for January 26, 2019 is:

doldrums • \DOHL-drumz\ • plural noun

1 : a spell of listlessness or despondency

2 often capitalized Doldrums : a part of the ocean near the equator abounding in calms, squalls, and light shifting winds

3 : a state or period of inactivity, stagnation, or slump

Examples:

“A vacation on a tropical island could be just the thing you need to fight against the winter doldrums,” said Christine as she handed me the resort’s brochure.

“At the time, the bourbon industry was in the process of emerging from a lengthy period of doldrums and rebranding itself as not just something old men drank.” — The Kentucky Standard, 21 Nov. 2018

Did you know?

Almost everyone gets the doldrums—a feeling of low spirits and lack of energy—every once in a while. The doldrums experienced by sailors, however, are usually of a different variety. In the early-19th century, the word once reserved for a feeling of despondency came to be applied to certain tropical regions of the ocean marked by the absence of strong winds. Sailing vessels, reliant on wind propulsion, struggled to make headway in these regions, leading to long, arduous journeys. The exact etymology of doldrums is not certain, though it is believed to be related to the Old English dol, meaning “foolish”—a history it shares with our adjective dull.

Lake桑 January 26, 2019 at 01:00PM

## Word of the day: myopic (from Merriam-Webster)

Merriam-Webster’s Word of the Day for January 25, 2019 is:

myopic • adjective

1 : affected by myopia : of, relating to, or exhibiting myopia : nearsighted

2 : lacking in foresight or discernment : narrow in perspective and without concern for broader implications

Examples:

“This is, on the whole, an encouraging finding. If children became myopic due to looking at objects too closely, then we’d be stuck with an unsolvable dilemma: choosing between teaching children to read and protecting their eyesight.” — Brian Palmer, Slate, 16 Oct. 2013

“But even the most myopic seer can foretell with near certainty that our traditional use of privately owned vehicles running on fossil fuels is going to be giving way to new mobility options, and soon.” — John Gallagher, The Detroit Free Press, 9 Dec. 2018

Did you know?

Myopia is a condition in which visual images come to a focus in front of the retina of the eye, resulting in defective vision of distant objects. Those with myopia can be referred to as “myopic” (or, less formally, “nearsighted”). Myopic has extended meanings, too. Someone myopic might have trouble seeing things from a different perspective or considering the future consequences before acting. Myopic and myopia have a lesser-known relative, myope, meaning “a myopic person.” All of these words ultimately derive from the Greek myōps, which comes from myein (meaning “to be closed”) and ōps (meaning “eye, face”).

Lake桑 January 25, 2019 at 01:00PM

## Word of the day: adjudicate (from Merriam-Webster)

Merriam-Webster’s Word of the Day for January 24, 2019 is:

adjudicate • verb

1 : to make an official decision about who is right in (a dispute) : to settle judicially

2 : to act as judge

Examples:

“… Nichols said in addition to the nine dogs brought to the shelter, it is housing 31 dogs that were confiscated in animal cruelty or neglect cases. She said the shelter has to board the dogs, feed them and care for them until the cases are adjudicated.” — Russ Coreyemp, The Times Daily (Florence, Alabama), 16 Dec. 2018
“To qualify as a couture house, which is an official designation like champagne, a brand must maintain an atelier of a certain number of artisans full time and produce a specific number of garments twice a year for a show. There are only a very few that can fulfill the requirements…. A lot have dropped out over the years …, and the governing organization that adjudicates this has relaxed some of its rules to admit younger, less resourced and guest designers….” — Vanessa Friedman, The New York Times, 17 Dec. 2018

Did you know?

Adjudicate is one of several terms that give testimony to the influence of jus, the Latin word for “law,” on our legal language. Adjudicate is from the Latin verb adjudicare, from judicare, meaning “to judge,” which, in turn, traces to the Latin noun judex, meaning “judge.” English has other judex words, such as judgment, judicial, judiciary, and prejudice. If we admit further evidence, we discover that the root of judex is jus. What’s the verdict? Latin “law” words frequently preside in English-speaking courtrooms. In addition to the judex words, jury, justice, injury, and perjury are all ultimately from Latin jus.

Lake桑 January 24, 2019 at 01:00PM

## Word of the day: imbroglio (from Merriam-Webster)

Merriam-Webster’s Word of the Day for January 23, 2019 is:

imbroglio • \im-BROHL-yoh\ • noun

1 a : an acutely painful or embarrassing misunderstanding

b : a circumstance or action that offends propriety or established moral conceptions or disgraces those associated with it : scandal

c : a violently confused or bitterly complicated altercation : embroilment

d : an intricate or complicated situation (as in a drama or novel)

2 : a confused mass

Examples:

“He was close to scandal—GOP chairman during the Watergate years, vice president during the Iran-Contra imbroglio—yet was not tainted by it.” — David M. Shribman, The Boston Globe, 1 Dec. 2018

“The present imbroglio follows protracted struggles over the budget of the sheriff’s office, the fate of the 911 system, the county role in reducing blight and who should pay what for animal control.” — Rockford (Illinois) Register Star, 13 Dec. 2018

Did you know?

Imbroglio and embroilment are more than just synonyms; they’re also linked through etymology. Both descend from the Middle French verb embrouiller (which has the same meaning as embroil), from the prefix em-, meaning “thoroughly,” plus brouiller, meaning “to mix” or “to confuse.” (Brouiller is itself a descendant of an Old French word for “broth.”) Early in the 17th century, English speakers began using embroil, a direct adaptation of embrouiller, as well as the noun embroilment. Meanwhile, the Italians were using their own alteration of embrouiller: imbrogliare, meaning “to entangle.” In the mid-18th century, English speakers embraced the Italian noun imbroglio as well.

Lake桑 January 23, 2019 at 01:00PM

## Word of the day: cumulate (from Merriam-Webster)

Merriam-Webster’s Word of the Day for January 22, 2019 is:

cumulate • \KYOO-myuh-layt\ • verb

1 : to gather or pile in a heap

2 : to combine into one

3 : to build up by addition of new material

Examples:

“In the alternative, the company may provide greater input to minority shareholders by allowing shareholders to cumulate their votes and cast them all for one director.” — Gregory Monday, The Milwaukee Business Journal, 5 Mar. 2018
“The report … compares various income estimates and reaches a similar conclusion: Most Americans have realized small annual increases that ultimately cumulated into meaningful gains.” — Robert Samuelson, The Sun Journal (Lewiston, Maine), 12 Dec. 2018

Did you know?

Cumulate and its far more common relative accumulate both come from the Latin word cumulare, meaning “to heap up.” Cumulare, in turn, comes from cumulus, meaning “mass.” (Cumulus functions as an English word in its own right as well. It can mean “heap” or “accumulation,” or it can refer to a kind of dense puffy cloud with a flat base and rounded outlines.) Cumulate and accumulate overlap in meaning, but you’re likely to find cumulate mostly in technical contexts. The word’s related adjective, cumulative, however, is used more widely.

Lake桑 January 22, 2019 at 01:00PM

## Word of the day: substantive (from Merriam-Webster)

Merriam-Webster’s Word of the Day for January 21, 2019 is:

substantive • adjective

1 : having substance : involving matters of major or practical importance to all concerned

2 : considerable in amount or numbers : substantial

3 a : real rather than apparent : firm; also : permanent, enduring

b : belonging to the substance of a thing : essential

c : expressing existence

4 a : having the nature or function of a noun

b : relating to or having the character of a noun or pronominal term in logic

5 : creating and defining rights and duties

Examples:

“How many more carefully researched reports will need to be released before we finally act in a substantive way to protect our only home, planet Earth?” — Edwin Andrews, The New York Times, 14 Dec. 2018

“These are the moments—funny, yet substantive and cuttingly insightful—that will remain in the collective memory long after Ralph Breaks the Internet leaves cinemas and many of its meme jokes lose their relevance.” — Jim Vejvoda, IGN (ign.com), 20 Nov. 2018

Did you know?

Substantive was borrowed into Middle English from the Anglo-French adjective sustentif, meaning “having or expressing substance,” and can be traced back to the Latin verb substare, which literally means “to stand under.” Figuratively, the meaning of substare is best understood as “to stand firm” or “to hold out.” Since the 14th century, we have used substantive to speak of that which is of enough “substance” to stand alone, or be independent. By the 19th century, the word evolved related meanings, such as “enduring” and “essential.” It also shares some senses with substantial, such as “considerable in quantity.”

Lake桑 January 21, 2019 at 01:00PM

## Word of the day: wherewithal (from Merriam-Webster)

Merriam-Webster’s Word of the Day for January 20, 2019 is:

wherewithal • \WAIR-wih-thawl\ • noun

: means or resources for purchasing or doing something; specifically : financial resources : money

Examples:

If I had the wherewithal, I’d buy that empty lot next door and put in a garden.

“Typically, when a person makes more money and has more savings, they add credit such as signing up for a new card or taking on a car loan. That’s because they’re confident they have the financial wherewithal to pay back the debt.” — Janna Herron, USA Today, 5 Dec. 2018

Did you know?

Wherewithal has been with us in one form or another since the 16th century.
It comes from our still-familiar word where, and withal, a Middle English combination of with and all, meaning “with.” Wherewithal has been used as a conjunction meaning “with or by means of which” and as a pronoun meaning “that with or by which.” These days, however, it is almost always used as a noun referring to the means or resources—especially financial resources—one has at one’s disposal.

Lake桑 January 20, 2019 at 01:00PM

## Word of the day: gargantuan (from Merriam-Webster)

Merriam-Webster’s Word of the Day for January 19, 2019 is:

gargantuan • adjective

: tremendous in size, volume, or degree : gigantic, colossal

Examples:

“In 1920, the town council of Chamonix … decided to change the municipality’s name to Chamonix-Mont-Blanc, thus forging an official link to the mountain … with a summit that soars 12,000 feet above the town center. The council’s goal was to prevent their Swiss neighbors from claiming the mountain’s glory, but there was really no need: It’s impossible when you’re in Chamonix to ignore the gargantuan, icy beauty that looms overhead.” — Paige McClanahan, The New York Times, 13 Dec. 2018

“Due to our gargantuan scope, Houston is a haven for live music. As the nation’s fourth largest city, we have become a destination for touring acts by default—it certainly isn’t because of our collective reputation as an audience….” — Matthew Keever, The Houston Press, 17 Dec. 2018

Did you know?

Gargantua is the name of a giant king in François Rabelais’s 16th-century satiric novel Gargantua, the second part of a five-volume series about the giant and his son Pantagruel. All of the details of Gargantua’s life befit a giant. He rides a colossal mare whose tail switches so violently that it fells the entire forest of Orleans. He has an enormous appetite: in one memorable incident, he inadvertently swallows five pilgrims while eating a salad. The scale of everything connected with Gargantua gave rise to the adjective gargantuan, which since William Shakespeare’s time has been used of anything of tremendous size or volume.
Lake桑 January 19, 2019 at 01:00PM

## I. Investigating the law of conservation of mass

### 2. Via the reaction of iron with copper sulfate solution

$\rm Fe+CuSO_4\longrightarrow Cu+FeSO_4$

### Via the reaction of hydrochloric acid with sodium carbonate powder

$\rm HCl+Na_2CO_3\longrightarrow NaCl+H_2O+CO_2$

## II. Chemical equations

A chemical equation must:

• be based on objective facts
• obey the law of conservation of mass

### Steps for writing one (using the electrolysis of water as an example)

1. Write the skeleton equation:

$\rm H_2O -\!\!\!-\!\!\!-\!\!\!-\!\!\!- H_2+O_2$

2. Balance it:

$\rm 2H_2O =\!=\!=\!=2H_2+O_2$

3. Mark the reaction conditions and the gas arrows:

$\rm 2H_2O\overset{\text{Electrify}}{=\!=\!=\!=\!=}2H_2\uparrow +O_2\uparrow$

Further worked equations:

$\rm 2NO+O_2+4CO\overset{Catalyst}{=\!=\!=\!=\!=\!=}N_2+4CO_2$

$\rm 3Fe+4H_2O\overset{High\;temperature}{=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=}Fe_3O_4+4H_2$

$\rm 3Fe+4H_2O(g)\overset{High\;temperature}{=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=}Fe_3O_4+4H_2$

$\rm CO_2+Ca(OH)_{2}=\!=\!=CaCO_3 \downarrow +H_2O$

$\rm CuSO_4+2NaOH=\!=\!=Na_2SO_4+Cu(OH)_2$

$\rm TiCl_4+Mg\overset{High\;temperature}{-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-}Ti+MgCl_2$ (skeleton)

$\rm TiCl_4+2Mg\overset{High\;temperature}{=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=}Ti+2MgCl_2$ (balanced)

## III. Simple calculations using chemical equations

$\rm \underset{2\times\left(1\times 2\right)}{2H_2}+\underset{16\times 2}{O_2}\overset{\text{Fire}}{=\!=\!=}\underset{2\times\left(1\times 2+16\right)}{2H_2O}$

Lake桑 2019.1.19

## I. Water resources and water purification

[1]: A mixture formed when small insoluble solid particles (on the order of 100 nm or larger) are suspended in a liquid is called a suspension. This state is unstable: left standing, the solid particles settle out, i.e. the mixture separates into layers.

## II. The composition of water

### 1. Via the combustion of hydrogen

$\rm H_2+O_2\overset{\text{Fire}}{\longrightarrow}H_2O$

### 2. Via the electrolysis of water

$\rm H_2O\overset{\text{Electrify}}{-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightarrow}H_2+O_2$

## IV. Chemical formulas and valence

### 2. Valence

Mnemonic for common valences: potassium, sodium, hydrogen and silver are +1; calcium, magnesium, barium and zinc are +2. Fluorine, chlorine, bromine and iodine are −1, and oxygen is usually −2. Copper is +1 or +2; iron is +2 or +3; aluminium is +3, silicon +4 and phosphorus +5. Do not forget that elements in the free state have a valence of zero.

Lake桑 2019.1.18

## Word of the day: teetotaler (from Merriam-Webster)

Merriam-Webster’s Word of the Day for January 18, 2019 is:

teetotaler • \TEE-TOH-tuh-ler\ • noun

: one who practices or advocates teetotalism : one who abstains completely from alcoholic drinks

Examples:

“… he is one of those fit older people who have redefined what 74 can look like. It probably helps that he is a teetotaler, a choice he made as a young man, having been disturbed by the effect that alcohol had on members of his family.” — David Kamp, Vanity Fair, December 2017

“The names Rockefeller and Diego Rivera are forever intertwined thanks to the Mexican artist’s infamous mural at Rockefeller Center, which the family commissioned in 1932 and had demolished two years later—due in part to its depiction of the teetotaler John D. Rockefeller Jr. sipping a martini.” — Adam Rathe, Town & Country, May 2018

Did you know?

A person who abstains from alcohol might choose tea as his or her alternative beverage, but the word teetotaler has nothing to do with tea. More likely, the “tee” that begins the word teetotal is a reduplication of the letter “t” that begins total, emphasizing that one has pledged total abstinence. In the early 1800s, tee-total and tee-totally were used to intensify total and totally, much the way we now might say, “I’m tired with a capital T.” “I am now … wholly, solely, and teetotally absorbed in Wayne’s business,” wrote the folklorist Parson Weems in an 1807 letter. Teetotal and teetotaler first appeared with their current meanings in 1834, eight years after the formation of the American Temperance Society.
Lake桑 January 18, 2019 at 01:00PM

## Word of the day: farouche (from Merriam-Webster)

Merriam-Webster’s Word of the Day for January 17, 2019 is:

farouche • adjective

1 : unruly or disorderly : wild

2 : marked by shyness and lack of social graces

Examples:

“Though she wrote three ‘novels’ (more extended free associations than novels as we know them), she is best thought of as a poet of small, farouche poems illustrated with doodles….” — Rosemary Dinnage, The New York Review of Books, 25 June 1987

“Jeremy Irons’s natural mode as an actor is fastidious rather than farouche, but he perfectly captures James Tyrone’s professional extravagance and personal meanness.” — Michael Arditti, The Sunday Express, 11 Feb. 2018

Did you know?

In French, farouche can mean “wild” or “shy,” just as it does in English. It is an alteration of the Old French word forasche, which derives via Late Latin forasticus (“living outside”) from Latin foras, meaning “outdoors.” In its earliest English uses, in the middle of the 18th century, farouche was used to describe someone who was awkward in social situations, perhaps as one who has lived apart from groups of people. The word can also mean “disorderly,” as in “farouche ruffians out to cause trouble.”

Lake桑 January 17, 2019 at 01:00PM

## Word of the day: nomothetic (from Merriam-Webster)

Merriam-Webster’s Word of the Day for January 16, 2019 is:

nomothetic • adjective

: relating to, involving, or dealing with abstract, general, or universal statements or laws

Examples:

“Moreover, there is the often-incorrect assumption that crimes and offenders are sufficiently similar to be lumped together for aggregate study. In such cases the resulting nomothetic knowledge is not just diluted, it is inaccurate and ultimately misleading.” — Brent E. Turvey, Criminal Profiling, 2011

“First, they can expect to find an investigation of the ways in which males and females differ universally: that is, of the nomothetic principles grounded in biology and evolutionary psychology that govern sex-differentiated human development.” — Frank Dumont, A History of Personality Psychology, 2010

Did you know?

Nomothetic is often contrasted with idiographic, a word meaning “relating to or dealing with something concrete, individual, or unique.” Where idiographic points to the specific and unique, nomothetic points to the general and consistent. The immediate Greek parent of nomothetic is a word meaning “of legislation”; the word has its roots in nomos, meaning “law,” and thetēs, meaning “one who establishes.” Nomos has played a part in the histories of words as varied as metronome, autonomous, and Deuteronomy. The English contributions of thetēs are meager, but thetēs itself comes from tithenai, meaning “to put,” and tithenai is the ancestor of many common words ending in -thesis—hypothesis, parenthesis, prosthesis, synthesis, and thesis itself—as well as theme, epithet, and apothecary.
Lake桑 January 16, 2019 at 01:00PM

## Word of the day: liaison (from Merriam-Webster)

Merriam-Webster’s Word of the Day for January 15, 2019 is:

liaison • \LEE-uh-zahn\ • noun

1 : a binding or thickening agent used in cooking

2 a : a close bond or connection : interrelationship

b : an illicit sexual relationship : affair

3 a : communication for establishing and maintaining mutual understanding and cooperation (as between parts of an armed force)

b : a person who establishes and maintains communication for mutual understanding and cooperation

4 : the pronunciation of an otherwise absent consonant sound at the end of the first of two consecutive words the second of which begins with a vowel sound and follows without pause

Examples:

“Brennan and Alejandro Castro agreed on a series of steps to build confidence. One called for the Cubans to post an officer in Washington to act as a formal liaison between the two countries’ intelligence agencies.” — Adam Entous, The New Yorker, 19 Nov. 2018

“… the book offers vignettes that describe Smith’s childhood as the youngest of seven Irish-American kids in Chicago; his sister’s short liaison with a married British man who shared the surname Smith; and a panicked hashish trip in Amsterdam.” — Kirkus Reviews, 1 Dec. 2018

Did you know?

If you took French in school, you might remember that liaison is the term for the phenomenon that causes a silent consonant at the end of one word to sound like it begins the next word when that word begins with a vowel, so that a phrase like beaux arts sounds like \boh zahr\. We can thank French for the origin of the term, as well. Liaison derives from the Middle French lier, meaning “to bind or tie,” and is related to our word liable. Our various English senses of liaison apply it to all kinds of bonds—from people who work to connect different groups to the kind of relationship sometimes entered into by two people who are attracted to one another.

Lake桑 January 15, 2019 at 01:00PM

## Chinese usage note: 觊觎 (jìyú, “to covet”)

1. (verb) to scheme to obtain something one is not entitled to

2. (noun) an improper desire or ambition

Examples:

- 祖国的领土,岂容列强觊觎! — How could the great powers be allowed to covet the territory of our motherland!
- 觊觎大位 (to covet the throne) | 心怀觊觎 (to harbor covetous designs)

Lake桑 2019.1.14

## Word of the day: mea culpa (from Merriam-Webster)

Merriam-Webster’s Word of the Day for January 14, 2019 is:

mea culpa • \may-uh-KOOL-puh\ • noun

: a formal acknowledgment of personal fault or error

Examples:

The mayor’s public mea culpa for his involvement in the scandal didn’t satisfy his critics.

“The internal investigation ended with a mea culpa from the sheriff’s department and a reprimand and reassignment for a deputy overseeing the property room.” — Allie Morris, The Houston Chronicle, 15 Nov. 2018

Did you know?

Mea culpa, which means “through my fault” in Latin, comes from a prayer of confession in the Catholic Church. Said by itself, it’s an exclamation of apology or remorse that is used to mean “It was my fault” or “I apologize.” Mea culpa is also a noun, however. A newspaper might issue a mea culpa for printing inaccurate information, or a politician might give a speech making mea culpas for past wrongdoings. Mea culpa is one of many English terms that derive from the Latin culpa, meaning “guilt.” Some other examples are culpable (“meriting condemnation or blame especially as wrong or harmful”), culprit (“one guilty of a crime or a fault”), and exculpate (“to clear from alleged fault or guilt”).
Lake桑 January 14, 2019 at 01:00PM

## Word of the Day: clement (from Merriam-Webster)

Merriam-Webster's Word of the Day for January 13, 2019 is:

clement • adjective

1 : inclined to be merciful : lenient

2 : not severe : mild

Examples: The judge decided to be clement and said she would forgive the young defendants so long as they paid back the money they stole from the fundraiser.

"Eagle Scout Michael Eliason completed his project by literally blazing a trail: he created a half-mile-long trail along a Heights park still being developed along the Yellowstone River, Dover Park. 'We rototilled and used pickaxes on it, and we had to wait until the weather was clement,' he said." — Mike Ferguson, The Billings Gazette, 24 Nov. 2014

Did you know? Defendants in court cases probably don't spend much time worrying about inclement weather. They're too busy hoping to meet a clement judge so they will be granted clemency. They should hope they don't meet an inclement judge! Clement, inclement, and clemency all derive from the Latin clemens, which means "mild" or "calm." All three terms can refer to an individual's degree of mercy or to the relative pleasantness of the weather.

Lake桑 January 13, 2019 at 01:00PM

## Word of the Day: boycott (from Merriam-Webster)

Merriam-Webster's Word of the Day for January 12, 2019 is:

boycott • \BOY-kaht\ • verb

: to engage in a concerted refusal to have dealings with (a person, a store, an organization, etc.) usually to express disapproval or to force acceptance of certain conditions

Examples: "Chinese boycotted Norwegian salmon over the awarding of the Nobel Peace Prize to the late dissident writer Liu Xiaobo. They stopped buying fruit from the Philippines amid a dispute over territory in the South China Sea." — Associated Press, 13 Dec. 2018

"[Saul] Bellow … showed up at President Johnson's White House Festival of the Arts in the summer of 1965, which other writers, such as Philip Roth (a friend and follower) and Robert Lowell, boycotted to protest against the war in Vietnam." — Benjamin Markovits, The Spectator, 17 Nov. 2018

Did you know? In the 1870s, Irish farmers faced an agricultural crisis that threatened to result in a repeat of the terrible famine and mass evictions of the 1840s. Anticipating financial ruin, they formed a Land League to campaign against the rent increases and evictions landlords were imposing as a result of the crisis. Retired British army captain Charles Boycott had the misfortune to be acting as an agent for an absentee landlord at the time, and when he tried to evict tenant farmers for refusing to pay their rent, he was ostracized by the League and community. His laborers and servants quit, and his crops began to rot. Boycott's fate was soon well known, and his name became a byword for that particular protest strategy.

Lake桑 January 12, 2019 at 01:00PM
2021-09-17 01:04:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 25, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20802512764930725, "perplexity": 8590.123071711441}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053918.46/warc/CC-MAIN-20210916234514-20210917024514-00114.warc.gz"}
https://indico.cern.ch/event/181055/contributions/308538/
# Quark Matter 2012

12-18 August 2012, US/Eastern timezone

## A detailed study of open heavy flavor production, enhancement, and suppression at RHIC

16 Aug 2012, 16:00, 2h, Poster, Heavy flavor and quarkonium production

### Speaker

J. Matthew Durham (Los Alamos National Laboratory)

### Description

The flexibility of the beam species available at the Relativistic Heavy Ion Collider has enabled the PHENIX Collaboration to examine open heavy flavor production across a wide range of temperature, energy density, and system size. Charm and bottom production in $p+p$ collisions, which is dominated by gluon fusion, is largely consistent with FONLL pQCD calculations. New analysis techniques have extended the momentum coverage and provide constraints on the bottom cross section. Measurements in $d$+Au collisions exhibit a strong cold nuclear matter Cronin enhancement of electrons from $D$-mesons, which is roughly consistent with the mass ordering observed for the lighter $\pi$, $K$, and $p$ families. This also shows that the nuclear baseline for interpreting Au+Au data could be significantly modified from the $p+p$ shape. Collisions of Cu nuclei provide a crucial intermediary testing ground between the small $d$+Au collision system and the large Au+Au system, and show how the cold nuclear matter enhancement is overtaken by competing hot nuclear matter suppression as the system size increases towards the most central Au+Au collisions. The status of finalizing these results and others will be discussed, in the context of recent measurements at RHIC and the LHC.

### Primary author

J. Matthew Durham (Los Alamos National Laboratory)

Poster
2020-03-31 12:55:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.707940936088562, "perplexity": 4514.692461666727}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370500482.27/warc/CC-MAIN-20200331115844-20200331145844-00338.warc.gz"}
https://www.physicsforums.com/threads/atomic-and-nuclear-transitions.171714/
# Atomic and Nuclear Transitions

1. May 25, 2007

### popffabrik1

Can anybody tell me why atomic decays usually proceed only through electric dipole transitions, while nuclear decay often shows many different multipoles? I think that the transition rate for electric dipole transitions is much larger than for magnetic ones, but that doesn't really explain it. I would appreciate any help!

2. May 31, 2007

### Meir Achuz

Nuclear states are generally more complicated, so lower L values may be forbidden.
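A rough way to see the scale of the effect: each step up in multipole order suppresses the emission rate by roughly a factor of $(kR)^2$, where $k$ is the photon wavenumber and $R$ the size of the emitter. A back-of-the-envelope sketch in Python, with illustrative assumed energies and sizes rather than numbers from this thread:

```python
HBAR_C = 197.327  # hbar*c: 197.327 MeV*fm, and equally 197.327 eV*nm

def multipole_suppression(E_photon: float, R: float) -> float:
    """Rough (kR)^2 factor that each extra multipole order costs,
    with k = E/(hbar*c); use matching units (eV & nm, or MeV & fm)."""
    k = E_photon / HBAR_C
    return (k * R) ** 2

# Atomic case: ~2 eV optical photon emitted by an atom ~0.1 nm across
print(f"atomic  (kR)^2 ~ {multipole_suppression(2.0, 0.1):.1e}")  # ~1e-6
# Nuclear case: ~1 MeV gamma emitted by a nucleus ~5 fm across
print(f"nuclear (kR)^2 ~ {multipole_suppression(1.0, 5.0):.1e}")  # ~6e-4
```

Higher multipoles are crippling in atoms but only mildly suppressed in nuclei, which, together with the selection rules mentioned in the reply, is one way to see why nuclear spectra show many multipoles.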
2016-10-21 22:09:19
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.835212767124176, "perplexity": 1638.0385803065624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718309.70/warc/CC-MAIN-20161020183838-00293-ip-10-171-6-4.ec2.internal.warc.gz"}
https://codereview.stackexchange.com/questions/259685/sum-of-fractions
# Sum of fractions

### Objective:

• Create a function to sum a list of fractions (represented as pairs)

### Rules

• Only Prelude functions allowed

### Notes

• I was debating if I should create a Fraction data type. Is it worth "upgrading" from a simple pair?
• Is the code easy to follow and well-structured? How are my function names?
• Is a simple error call a good way to inform the caller of incorrect arguments?

### Code

```haskell
sumOfFractions :: [(Integer, Integer)] -> (Integer, Integer)
sumOfFractions [] = error "empty list not allowed"
sumOfFractions fractions = reduce (numerator, lcd)
  where
    reduce (n, d) = (n `div` gcd_, d `div` gcd_)
    numerator     = sum . map (\(x, y) -> x * (lcd `div` y)) $ fractions
    lcd           = foldr1 lcm (map snd fractions)
    gcd_          = gcd numerator lcd
```

There are two things that in my opinion would make your code easier to follow.

1. The sum of zero numbers can be (and usually is) defined as zero, which is the identity element 0 of addition in the sense that $$x = 0 + x = x + 0$$ holds for all x.

2. The sum of more than two numbers is perhaps best defined recursively, using the associativity property $$x + y + z = (x + y) + z = x + (y + z).$$

One way to use these properties in code is the following.

```haskell
type MyFraction = (Integer, Integer)

sumOfFractions :: [MyFraction] -> MyFraction
sumOfFractions = foldl plus (0, 1)

plus :: MyFraction -> MyFraction -> MyFraction
plus (a, u) (b, v) = (c `div` x, w `div` x)
  where
    c = a*v + b*u
    w = u*v
    x = gcd c w
```

It should be a fun exercise to define a datatype so that (+), and by extension sum, works on them. Type `:t (+)` into an interpreter and go from there!

• Your function names are fine. Perhaps reduce should be simplify.
• There's no need for an exception here, but I believe error is fine for this purpose in general.
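For comparison, here is a rough transliteration of the answer's fold-with-identity approach into Python (the function names are mine; Python's math.gcd plays the role of Haskell's gcd):

```python
from functools import reduce
from math import gcd

def plus(f, g):
    """Add two (numerator, denominator) pairs, reducing by the gcd."""
    (a, u), (b, v) = f, g
    c, w = a * v + b * u, u * v
    x = gcd(c, w)
    return (c // x, w // x)

def sum_of_fractions(fractions):
    # (0, 1) is the additive identity, so the empty list sums to 0/1
    return reduce(plus, fractions, (0, 1))

print(sum_of_fractions([(1, 2), (1, 3), (1, 6)]))  # (1, 1)
```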
2021-10-16 00:02:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5263499021530151, "perplexity": 4100.985105700582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583087.95/warc/CC-MAIN-20211015222918-20211016012918-00547.warc.gz"}
http://www.ams.org/mathscinet-getitem?mr=1296357
MathSciNet bibliographic data

MR1296357 32E30 (32E20 32H02)

Forstnerič, Franc; Rosay, Jean-Pierre. Erratum: "Approximation of biholomorphic mappings by automorphisms of $\bold C^n$" [Invent. Math. 112 (1993), no. 2, 323–349; MR1213106]. Invent. Math. 118 (1994), no. 3, 573–574.

For users without a MathSciNet license, Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews.
2017-02-22 22:00:42
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9978245496749878, "perplexity": 7622.799896276154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00152-ip-10-171-10-108.ec2.internal.warc.gz"}
https://www.marksei.com/bitcoin-reaches-10kusd/
# Bitcoin reaches 10K USD, is it going to pop soon?

Bitcoin (BTC), the first cryptocurrency based on the revolutionary Blockchain technology, is now over 10K USD. A new milestone for the most popular cryptocurrency and also a good sign for all the other cryptocurrencies such as Ethereum. But is it only a bubble that's going to pop soon? Is this the start of another climb?

## A look at Bitcoin's past, present… and future

Bitcoin was born in 2008 from the mind of the mysterious Satoshi Nakamoto. In the early days of the cryptocurrency the coin was worth no more than a few cents; a notable transaction involved using 10000 Bitcoins to purchase two pizzas. Those pizzas, if paid for today, would be worth 100,000,000 USD! After a few spikes the price of the crypto stabilized, in order, around $1, $20, $120 and $200, each period culminating with a spike and a correction. But it wasn't until this year, which we may very well call "The year of Bitcoin", that the most popular cryptocurrency really reached its highest value to date: 10K. So it was when I was writing this article, but in a matter of hours, a new high was reached: 11K. Although Bitcoin started the year with a value around $900, it is now over $11,000. Not bad considering all the events that took place this year. Let's have a quick look:

• Ethereum's rise.
• Concerns about the scalability of the Bitcoin network.
• SegWit/SegWit2x proposals.
• Birth of Bitcoin Cash (BCH).
• JP Morgan calling it a "fraud".
• China banning ICOs and subsequently closing some important exchanges.

## Has the new climb started?

When people started saying that Bitcoin would reach 10K by the end of 2017, there was a lot of skepticism around the claim. Many people wouldn't even think it would be possible at the time. But here we are, with the goal reached and the claim become truth. There's not much to say about it: it was unthinkable, but it happened. Thanks to speculation, belief or whatever, it actually happened. The question will naturally arise: "Is it the end? Is this the beginning of a new climb?" And here we start with the next round of predictions! The first in line is none other than John McAfee, founder of McAfee, Inc, who tweeted about drastic actions he would take if Bitcoin failed to reach 500K USD in three years. Some say 5, others say at least 10 years. Although we're still not sure whether the most popular cryptocurrency is the evolution of money or just a bubble, one thing is for sure: people won't stop making predictions.

## Is it just another bubble? Is it going to pop soon?

Every Bitcoin investor has thought this at least once: what if Bitcoin isn't the "Next Big Thing"; what if it is just a bubble? From time to time, fear and panic selling undermine the credibility and the stability of the most famous cryptocurrency. This year some important figures publicly attacked it; Jamie Dimon, CEO of JP Morgan, called Bitcoin a "fraud", even saying that he would fire any employee trading Bitcoin. That's not very encouraging for something as volatile as a cryptocurrency. Whether it is just another bubble, or whether it is going to pop soon, we can't really know for sure. But one thing we can steadily observe: today is a great day for the non-bubble Bitcoin.

Image courtesy of Jason Benjamin
2022-11-30 14:17:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20680876076221466, "perplexity": 2418.002712394317}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710764.12/warc/CC-MAIN-20221130124353-20221130154353-00017.warc.gz"}
https://brilliant.org/problems/logarithm-of-the-factorial/
# Logarithm of the factorial

$\large \lfloor \log_x x! \rfloor = 11814375113$

What is the value of the integer $$x$$ that satisfies the equation above?

Details and Assumptions

• $$\lfloor x \rfloor$$ is the floor function, i.e. $$\lfloor 123456.789 \rfloor = 123456$$.
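Not part of the original problem page, but one way to hunt for such an $x$ numerically is to note that $\log_x x! = \ln(x!)/\ln x$ with $\ln(x!) = \mathrm{lgamma}(x+1)$, then binary-search over integers. A sketch (double precision can misbehave right at a floor boundary, so a careful solution would re-check the candidate with arbitrary-precision arithmetic such as mpmath):

```python
import math

TARGET = 11814375113

def f(x: int) -> float:
    # log_x(x!) = ln(x!)/ln(x), and ln(x!) = lgamma(x + 1)
    return math.lgamma(x + 1) / math.log(x)

# f is increasing in x, so binary-search for the smallest integer x
# with floor(f(x)) >= TARGET, then confirm it actually hits TARGET.
lo, hi = 2, 10**12
while lo < hi:
    mid = (lo + hi) // 2
    if math.floor(f(mid)) < TARGET:
        lo = mid + 1
    else:
        hi = mid
print(lo, math.floor(f(lo)) == TARGET)
```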
2017-05-23 14:55:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8029142022132874, "perplexity": 1503.6789017879828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607647.16/warc/CC-MAIN-20170523143045-20170523163045-00052.warc.gz"}
http://physics.stackexchange.com/questions/119729/why-doesnt-de-broglies-wave-equation-work-for-photons
# Why doesn't De Broglie's wave equation work for photons?

Well, as I am learning about quantum physics, one of the first topics I came across was De Broglie's wave equation. $$\frac{h}{mc} = \lambda$$ As is obvious, it relates the wavelength to the mass of an object. However, what came to my mind is the photon. Doesn't the photon have zero mass? Therefore, won't the wavelength be infinite and the particle nature of the particle non-existent? I'm pretty sure there is a flaw in my thinking; please point it out to me!

- –  John Rennie Jun 18 at 5:43
- I've linked an existing question that explains the de Broglie wavelength for photons. Basically $\lambda = h/p$ and photons have a non-zero momentum. –  John Rennie Jun 18 at 5:44
- Yeah... I read the post, but I don't think it answers my question. –  Gummy bears Jun 18 at 5:45
- @JohnRennie I don't think it's a duplicate either, since the root of the confusion here is the use of $h/mc$ rather than $h/p$. –  David Z Jun 18 at 5:46
- @DavidZ I knew that it is momentum that is supposed to be in the equation, however I thought that momentum can be replaced with $mv$ for every object. However, I didn't know that there is a different equation for the momentum of a photon. –  Gummy bears Jun 18 at 5:59

The de Broglie relation is properly written as $$\lambda = \frac{h}{p}$$ And although photons have zero mass, they do have nonzero momentum $p = E/c$. So the wavelength relation works for photons too, you just have to use their momentum. As a side effect you can derive that $\lambda = hc/E$ for photons.

The equation you included in your question is something different: it gives the Compton wavelength of a particle, which is the wavelength of a photon that has the same electromagnetic energy as the particle's mass energy. In other words, a particle of mass $m$ has mass energy $mc^2$, and according to the formulas in my first paragraph, a photon of energy $mc^2$ will have a wavelength $\lambda = hc/mc^2 = h/mc$. The Compton wavelength is not the actual wavelength of the particle; it just shows up in the math of scattering calculations.
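As a quick numerical illustration of the $\lambda = hc/E$ relation from the answer (the example energy is my choice, not from the thread):

```python
# lambda = h*c / E for a photon, using CODATA constants
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # 1 electronvolt in joules

def photon_wavelength(E_in_eV: float) -> float:
    """Wavelength in metres of a photon of energy E (in eV)."""
    return h * c / (E_in_eV * eV)

print(photon_wavelength(2.0))  # ~6.2e-7 m, i.e. ~620 nm: red light
```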
2014-12-22 20:41:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8902466297149658, "perplexity": 248.21715824151102}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802776563.13/warc/CC-MAIN-20141217075256-00028-ip-10-231-17-201.ec2.internal.warc.gz"}
https://gis.stackexchange.com/questions/242545/how-can-epsg3857-be-in-meters
# How can EPSG:3857 be in meters?

It is said that the unit of measure for EPSG:3857 is the metre. How can it be, if the range of these coordinates is

Projected bounds: -20,026,376.39 -20,048,966.10 20,026,376.39 20,048,966.10

(from the same source). This means approximately a square of 40,000 x 40,000 kilometers. This length is correct for the equator, but how can it be in general if the circumference at high latitudes is smaller?

UPDATE

The length of the parallel at latitude 85.06 is $R_{\text{earth}} \cdot \cos(85.06^\circ) \cdot 2\pi \approx 3454$ kilometers, and it is $\ll 40000$.

My question is: is this parallel projected onto the entire upper edge of the 40,000 x 40,000 square or onto a part of it 3454 km long? In the first case the unit of measure IS NOT METRES, since it varies from a metre at the origin to 3454/40000 ≈ 8.6 centimetres per projected metre at parallel 85.06.

• Do you doubt that the unit of measure is meters, or the extent of the projected bounds? Jun 2 '17 at 10:34
• I don't understand how it is projected. – Dims Jun 2 '17 at 11:37
• 3454 kilometres are projected onto the full width as if it were 40000 kilometres. Because of the projection distortion those projected metres are very far from ground-truth metres. Jun 2 '17 at 12:01
• @user30184 which implies that the statement that EPSG:3857 is in meters is just FALSE. – Dims Jun 2 '17 at 12:26
• It is true that near the poles length measurements in EPSG:3857 give results which do not match ground truth at all. That is because of distortion in the transformation. However, projections have well-defined mathematics for forward/backward transformations and you can trust that the metre is the right unit and that the pixel is not a unit at all. All projected coordinate systems show the same behaviour, with distance measured from the map not being the same as distance along the ellipsoid's surface, but usually the difference is not that big. Jun 2 '17 at 15:03
It is a projection of a spheroid on a flat surface. Every projection has strengths and weaknesses and will preserve some elements of direction, distance or area better or worse than others (which is why careful selection of a suitable, that is local where possible, projection is so important). So, while the unit of measurement in EPSG:3857 is indeed meters, as you have correctly spotted, distance measurements become increasingly inaccurate away from the equator. All Mercator-style projections have the same defect as they move away from their reference point. But, where they are not global, the error can be minimized by appropriate positioning of this point. It is for this reason there are so many UTM zones, for instance. Other projections have different strengths and weaknesses depending on use (e.g. navigation vs cartography).

So what's the point of EPSG:3857? To understand that, you must understand that it was designed for web mapping and is square so that the map tiles fit nicely into a powers-of-two schema as you progress through each successive zoom level. This is EPSG:3857's particular strength. It is not designed for distance and area calculations. There are many much better alternatives. If you are not using the data for web mapping, I would strongly encourage you to consider some other projection that is more suitable to your use case, especially if you intend to do distance calculations (or, in that event, cast your data in geographic coordinates, not Cartesian coordinates, and use the Haversine formula to calculate distances on a sphere).

The diagram added to the OP's question is basically a diagram of my first paragraph, viz. that the projection introduces distortion. For more information on how the Earth is projected in this case see here. The unit of measurement for Web Mercator is NOT pixels but meters. The OP's question results from a misunderstanding. Vector data can also be projected in Web Mercator, and in this case there are no pixels. So you see, the concept of a unit of measurement as pixels has no relationship to the real world. However, pixels-per-meter is relevant for a raster, as this tells us the resolution of the image. BUT the unit of measurement is still meters (in this case) and not pixels. Where the confusion for the OP possibly arises is the use of Web Mercator at various zoom levels, where tiles are usually set to be 256x256 pixels and a given zoom level has a certain nominal ground resolution, so some web mapping applications use screen pixels as a means of calculating distance. But the pixels are interpreted as a meters-distance in relation to the zoom level (and possibly latitude).

There is a question in a comment: "what is the distance between points (0; 20,000,000) and (0; 20,000,001)? Hint: it is one unit of x. Is it also 1 meter?" The answer is yes, the Cartesian distance between those points is 1 meter. Do you still consider that wrong? Do you expect that the correct answer is about 8.7 centimeters? That result you can get with this procedure.

Find the long/lat coordinates for the EPSG:3857 coordinates:

```
gdaltransform -s_srs epsg:3857 -t_srs epsg:4326
0 20000000
0 85.0219764762329 0
0 20000001
0 85.0219772557337 0
```

Insert the long/lat coordinates into the ST_Length example that is using the geography type in http://postgis.net/docs/ST_Length.html:

```sql
SELECT ST_Length(the_geog) As length_spheroid, ST_Length(the_geog, false) As length_sphere
FROM (SELECT ST_GeographyFromText(
  'SRID=4326;LINESTRING(0 85.0219764762329, 0 85.0219772557337)') As the_geog) as Foo;

 length_spheroid    | length_sphere
 0.0870589221287747 | 0.0866766564153779
```

The Cartesian distance for the points is 1 also for PostGIS:

```sql
SELECT ST_Length(the_geom) As length
FROM (SELECT ST_GeomFromText(
  'SRID=3857;LINESTRING(0 20000000, 0 20000001)') As the_geom) as Foo;

 length
 1
```
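The scale distortion both answers describe is easy to reproduce without any GIS library: on the spherical model behind Web Mercator, one projected metre at latitude $\varphi$ corresponds to about $\cos\varphi$ ground metres. A small Python sketch (my illustration, consistent with the ~8.7 cm PostGIS result above):

```python
import math

def ground_m_per_projected_m(lat_deg: float) -> float:
    """On the spherical model used by EPSG:3857, one projected metre
    at latitude lat corresponds to cos(lat) metres on the ground."""
    return math.cos(math.radians(lat_deg))

for lat in (0.0, 45.0, 85.022):
    print(f"{lat:7.3f} deg: {ground_m_per_projected_m(lat):.4f} m")
# 85.022 deg -> ~0.0868, i.e. about 8.7 ground centimetres per
# projected metre, matching the PostGIS measurement above
```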
2022-01-28 17:38:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7131770849227905, "perplexity": 1177.867557480905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320306301.52/warc/CC-MAIN-20220128152530-20220128182530-00204.warc.gz"}
https://excelatfinance.com/xlf19/xlf-yahoo-data-dst-adjustment.php
# Yahoo data - daylight saving time (DST) date adjustment

The DST error in Yahoo data was resolved by Yahoo Finance in late 2019. This file includes material relating to the GetData formula used on the Summary WS of the Stock Analyser project.

## 0. Background

Australian stock price historical data sourced from Yahoo Finance has a systematic date error during the Australian daylight saving time period. The error first appeared at about the time Yahoo shut down its Stock Data API service in November 2017. Since then, Yahoo data record dates lag Australian Stock Exchange (ASX) trading dates by one day in six-monthly cycles. Figure 1 compares a list of ASX trading dates to a list of data dates for BHP.AX stored as Yahoo historical data records. As an example of the one-day lag, trading day data for 2 October 2017, the first trading day of the DST period, is indexed as 1 October 2017 in the historical data file (Figure 1).

This module follows the familiar technique for stock market data of a Summary WS with a list of trading days linked to data ranges for individual companies. Mapping the Summary WS to companies is performed with a WS VLOOKUP function. The next section develops an adjustment to the VLOOKUP argument lookup_value for the Yahoo DST error.

## 1. Correcting the DST date error

### 1.1 List of DST start and end dates

Daylight saving dates for Melbourne are sourced from www.timeanddate.com. See Figure 2 for the DST sample dates, 3 April 2016 to 6 October 2019, used in this development module. The dates are listed in ascending order; TRUE marks the start date of DST, and FALSE the end date.

### 1.2 The DST switch routine

The DST switch procedure is item 1 in Figure 3 (DST switch (0,1) testing).

• 1 Formula: =--VLOOKUP($R5, DST, 2, TRUE)
Look up the date 22-Sep-17 in the DST range, and return the approximate match from column 2: FALSE. Then convert the Boolean value FALSE to its numeric equivalent (0) with the double negation operator (--).

• Formulas 2 and 3 are designed for assignment to defined names.

• 2 Trading day ref Formula: =INDIRECT("R" & ROW() & "C" & COLUMN(W:W), FALSE)

• 3 Trading day to Data date testing Formula (combining 1 and 2): =INDIRECT("R" & ROW() & "C" & COLUMN(W:W),FALSE) - (--VLOOKUP(INDIRECT("R" & ROW() & "C" & COLUMN(W:W), FALSE), DST, 2, TRUE))

## 2. The DST adjusted lookup procedure

The original lookup procedure is named GetData (Figure 4), with table_array set to BHP_AX_1.

### 2.1 Assigning DST switch routines to defined names

• Assign modified formula 3 to the defined name Dte_2
• Assign the modified term (--VLOOKUP(INDIRECT("R" & ROW() & "C" & COLUMN(W:W), FALSE), DST, 2, TRUE)) to the defined name Dte_Adjust

### 2.2 Correct mapping

With the VLOOKUP lookup_value set to Dte - Dte_Adjust (Figure 5), the trading dates and DST data are now aligned.
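The same one-day correction is easy to express outside Excel as well; here is a minimal Python sketch of the idea, assuming a DST table like the worksheet's (the boundary dates below are real Melbourne DST dates, but the function and variable names are mine):

```python
from datetime import date, timedelta
from bisect import bisect_right

# (boundary_date, DST_active_from_this_date) in ascending order,
# mirroring the TRUE/FALSE column of the worksheet's DST range
DST = [
    (date(2016, 10, 2), True),   # DST starts
    (date(2017, 4, 2), False),   # DST ends
    (date(2017, 10, 1), True),
    (date(2018, 4, 1), False),
]

def data_date(trading_date: date) -> date:
    """Yahoo record date for an ASX trading date: shift back one day
    while DST is active (the closest table entry at or before the
    date, like VLOOKUP with range_lookup = TRUE)."""
    i = bisect_right(DST, (trading_date, True)) - 1
    dst_active = DST[i][1] if i >= 0 else False
    return trading_date - timedelta(days=int(dst_active))

print(data_date(date(2017, 10, 2)))  # 2017-10-01, the lagged record date
```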
2021-07-30 17:26:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20577852427959442, "perplexity": 9291.494484496916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153971.20/warc/CC-MAIN-20210730154005-20210730184005-00089.warc.gz"}
https://bookvea.com/what-happened-to-saladins-sword/
# What happened to Saladin's sword?

## What happened to Saladin's sword?

Since then, the Damascus blade has fought on battlefields around the world. But around 250 years ago, the manufacturing technique was lost to the generations of artisans that followed, and since then, it has not been recovered.

## What weapons did Saladin use?

Damascus swords sharp enough to slice a falling piece of silk in half, strong enough to split stones without dulling owe their legendary qualities to carbon nanotubes, says chemist and Nobel laureate Robert Curl.

## Why did eastern cultures prefer curved swords?

Curved blades became so popular in Eastern cultures simply because the Middle East, Central Asia and India were famous for their wide expanses of land, which were ideal for cavalry charges.

## Where is the sword of Islam?

The sword is now in the Topkapi Museum, Istanbul.

## Did Crusaders use Damascus steel?

The remarkable characteristics of Damascus steel became known to Europe when the Crusaders reached the Middle East, beginning in the 11th century. They discovered that swords of this metal could split a feather in midair, yet retain their edge through many a battle with the Saracens.

## What is the sword of Saladin?

The legendary sword of Saladin is known to be one of the sharpest blades in history. This 12th-century sword belonged to Saladin, a powerful Muslim leader and the first sultan of Egypt and Syria. … Featuring a thin edge geometry, the lightweight curved blade is designed to effortlessly deliver deep cuts and lethal chops.

## What was the name of Saladin's army?

Under Saladin's command, the Ayyubid army defeated the Crusaders at the decisive Battle of Hattin in 1187, and thereafter wrested control of Palestine, including the city of Jerusalem, from the Crusaders, who had conquered the area 88 years earlier.

## What weapons did the Saracens use?

The remarkable characteristics of Damascus steel became known to Europe when the Crusaders reached the Middle East, beginning in the 11th century. They discovered that swords of this metal could split a feather in midair, yet retain their edge through many a battle with the Saracens.

## Which is the sharpest sword in the world?

The Honjō Masamune represented the Tokugawa shogunate during most of the Edo period and was passed down from one shōgun to another. It is one of the best known of the swords created by Masamune and is believed to be among the finest Japanese swords ever made.

## Who makes the sharpest sword in the world?

Former engineer turned master swordsmith makes the world's sharpest sword. The sharpest swords in the world are being forged in Texas, where a former bored engineer has stunned Japanese experts with his handiwork. Daniel Watson runs Angel Sword, creating artistic weapons which sell from $2,000 to $20,000.

## What is the deadliest sword in history?

• The claymore, the longsword, and William Wallace.
• The katana and Masamune: Japan's greatest sword smith.
• Para 3: Saladin's singing scimitar.

## Where is the sword of Islam now?

After 1937, it was no longer used and was guarded in a small glass reliquary at Rocca delle Caminate, summer residence of Mussolini.

## Where is the real Zulfiqar?

The Topkapi museum.

## What does Sword of Islam add?

It allows the player to play as Muslim characters within the main game for the first time, and introduces a number of events and gameplay elements for those characters, such as decadence and polygamy.

## What happened to Zulfiqar?
But Zulfiqar was not found anywhere. A few believe that after Imam Hussain it was passed on to his sons, and that Imam Mehdi may now have it; or, according to some, it was returned to Allah.

## What is so special about Damascus steel?

Once prized for centuries, Damascus steel lost prominence by the 18th century, but today it has made a resurgence.

## When was Damascus steel used?

Damascus steel is a type of steel easily recognisable by its wavy patterned design. Aside from its sleek look and beautiful aesthetics, Damascus steel is highly valued as it is hard and flexible while maintaining a sharp edge. Weapons forged from Damascus steel were far superior to those formed from just iron.

## Who used the Damascus sword?

The original Damascus steel swords may have been made in the vicinity of Damascus, Syria, in the period from 900 AD to as late as 1750 AD. Damascus steel is a type of steel alloy that is both hard and flexible, a combination that made it ideal for the building of swords.

## What's the sharpest sword in the world?

Damascus swords sharp enough to slice a falling piece of silk in half, strong enough to split stones without dulling owe their legendary qualities to carbon nanotubes, says chemist and Nobel laureate Robert Curl.

## Is Valyrian steel real?

What's amazing is that there is real-life Valyrian steel, also known as Damascus steel. Its ability to flex and hold an edge is unparalleled. The remarkable characteristics of Damascus steel became known to Europe when the Crusaders reached the Middle East, beginning in the 11th century.

## What is an Arabic sword called?

A scimitar is a short, curved sword that comes from the Middle East. It was commonly used back in the days of horse warfare.
2023-04-01 17:31:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2415003776550293, "perplexity": 8142.369708507232}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950110.72/warc/CC-MAIN-20230401160259-20230401190259-00710.warc.gz"}
https://tex.stackexchange.com/questions/188094/how-to-use-poetrytex-with-cyrillic-letters
# How to use poetrytex with cyrillic letters

I am trying to use poetrytex with cyrillic letters like this:

```latex
\documentclass[11pt]{article}
\usepackage{poetrytex}
\usepackage[paperwidth=140mm,paperheight=210mm]{geometry}
\begin{document}
\thispagestyle{empty}
\begin{poem}{Title}{Author\\2014}
Мороз и солнце, день чудесный\\
The sea is calm to-night.\\
\end{poem}
\end{document}
```

But pdflatex generates garbled output for the cyrillic line. How do I tell the poetrytex package what I want?

You can load the T2A font encoding in addition to T1 and babel with options russian, english, plus a font that contains cyrillic glyphs. One of these, on CTAN, is Heuristica, an addition to Adobe Utopia that also contains oldstyle and superior figures:

```latex
\documentclass[11pt]{article}
\usepackage[utf8]{inputenc}
\usepackage[TS1,T2A,T1]{fontenc}%
\usepackage{heuristica}%
\usepackage[russian, english]{babel}
\usepackage{poetrytex}
\usepackage[paperwidth=140mm,paperheight=210mm]{geometry}
\newcommand*\Russian{\selectlanguage{russian}}
\newcommand*\English{\selectlanguage{english}}
\begin{document}
\thispagestyle{empty}%
\begin{poem}{Title}{Author\\2014}
{\Russian Мороз и солнце, день чудесный} \\
{\itshape The sea is calm to-night.}\\
\end{poem}
\end{document}
```

While with pdflatex you rely on LaTeX support for a given font, the choice of a font is much easier with XeLaTeX or LuaLaTeX and the fontspec package, since you can use any Opentype font known to the operating system (at least for ordinary text). Also, you don't have to choose a font encoding, nor an input encoding. Here is an example compiled with XeLaTeX that uses Minion Pro (available with Adobe Reader):

```latex
\documentclass[11pt]{article}
\usepackage{fontspec}
\setmainfont{Minion Pro}%
\usepackage[english, russian]{babel}
\usepackage{poetrytex}
\usepackage[paperwidth=140mm,paperheight=210mm]{geometry}
\newcommand*\Russian{\selectlanguage{russian}}
\newcommand*\English{\selectlanguage{english}}
\begin{document}
\thispagestyle{empty}
\begin{poem}{Title}{Author\\2014}
\textup{\Russian Мороз и солнце, день чудесный}\\
\textit{The sea is calm to-night.}
\end{poem}
\end{document}
```

Another beautiful font, available on CTAN, is ebgaramond. Unfortunately it doesn't have (yet) a bold version, and it works well with LuaLaTeX but, for some reason, has problems with XeLaTeX (if you ask for a part of the text in italic, the whole text is in italic).

Last News: The very latest version of ebgaramond.sty (2014/07/02, not yet updated in MiKTeX) now works fine with XeLaTeX.

```latex
\documentclass[11pt]{article}
\usepackage{fontspec}
\setmainfont[ItalicFont = EBGaramond12-Italic]{EBGaramond12}%
\usepackage[english, russian]{babel}
\usepackage{poetrytex}
\usepackage[paperwidth=140mm,paperheight=210mm]{geometry}
\newcommand*\English{\selectlanguage{english}}
\begin{document}
\thispagestyle{empty}
```

• @Pavel Morozkin: I forgot to include Paratype fonts as another font group that is designed with cyrillic characters (actually specifically designed for them) and has support in LaTeX. – Bernard Jul 6 '14 at 9:40
2019-10-23 07:22:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9232203960418701, "perplexity": 10029.162045157836}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987829507.97/warc/CC-MAIN-20191023071040-20191023094540-00115.warc.gz"}
http://nrich.maths.org/5616
Given any right-angled triangle $\Delta ABC$ on a sphere of unit radius, right angled at $A$, and with lengths of sides $a, b$ and $c$, then Pythagoras' Theorem in Spherical Geometry is $$\cos a = \cos b \cos c.$$ Prove this result.

Find a triangle containing three right angles on the surface of a sphere of unit radius. What are the lengths of the sides of your triangle? Use the Pythagoras' Theorem result above to prove that all spherical triangles with three right angles on the unit sphere are congruent to the one you found.
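A quick way to see why the identity holds (my sketch, not part of the original problem page): place the right angle $A$ at the north pole with the two legs running down meridians $90^\circ$ apart; the dot product of the unit vectors to $B$ and $C$ equals $\cos a$, and it comes out as exactly $\cos b \cos c$:

```python
import math

def check(b: float, c: float) -> None:
    # Right angle at A = north pole; B and C lie at colatitudes c and b
    # on meridians 90 degrees apart (so the angle at A is pi/2).
    B = (math.sin(c), 0.0, math.cos(c))
    C = (0.0, math.sin(b), math.cos(b))
    cos_a = sum(x * y for x, y in zip(B, C))  # dot product = cos(side a)
    print(cos_a, math.cos(b) * math.cos(c))

check(0.7, 1.1)                  # the two printed values agree
check(math.pi / 2, math.pi / 2)  # octant triangle: both values are 0
```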
2015-09-03 19:18:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5419381856918335, "perplexity": 182.0248379492405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645323734.78/warc/CC-MAIN-20150827031523-00316-ip-10-171-96-226.ec2.internal.warc.gz"}
http://www.gradesaver.com/the-boy-in-the-striped-pajamas/q-and-a/did-bruno-cheat-pavel--239848
# Did Bruno cheat Pavel?

Bruno told Shmuel that when he grows up he is going to be a soldier, but he told Pavel that when he grows up he is going to be an explorer. Why did he say something different to Pavel?
2017-07-24 19:02:36
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9624018669128418, "perplexity": 7274.920876288826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424909.50/warc/CC-MAIN-20170724182233-20170724202233-00146.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/introductory-algebra-for-college-students-7th-edition/chapter-2-section-2-2-the-multiplication-property-of-equality-exercise-set-page-130/13
# Chapter 2 - Section 2.2 - The Multiplication Property of Equality - Exercise Set: 13

$x=-\frac{3}{4}$

#### Work Step by Step

Divide both sides by $-8$ to isolate the variable.

$-8x=6$

$\frac{-8x}{-8}=\frac{6}{-8}$

$x=-\frac{6}{8}=-\frac{6\div2}{8\div2}$

$x=-\frac{3}{4}$

Check the answer:

$-8\left(-\frac{3}{4}\right)=6$

$\frac{8(3)}{4}=6$

$\frac{24}{4}=6$

$6=6$
2018-07-22 09:17:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8064322471618652, "perplexity": 3174.1004624483985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593142.83/warc/CC-MAIN-20180722080925-20180722100925-00109.warc.gz"}
https://www.dlubal.com/en/support-and-learning/support/knowledge-base/001482
# Displaying Curtailment of Longitudinal Reinforcement and Reinforcement Covering Line

### Technical Article 001482

4 October 2017

In the case of a large amount of reinforcement, it can be useful to curtail (grade) the longitudinal reinforcement of a beam. The grading corresponds to the tensile force distribution. Using RF-CONCRETE Members and CONCRETE, you can specify the curtailment of the reinforcement, which is considered in the automatically proposed longitudinal reinforcement. When determining this reinforcement proposal, it is necessary to ensure that the envelope of the acting tensile force can be absorbed.

#### Curtailment of Longitudinal Tension Reinforcement

The design of the curtailment of the longitudinal tension reinforcement ensures that the envelope of the acting tensile force $F_{sd}$ can be absorbed by the provided reinforcement. Clause 9.2.1.3(2) of EN 1992-1-1 [1] requires that the additional tensile force $\Delta F_{td}$ due to the shear force also be considered. For structural components with shear reinforcement, this additional tensile force $\Delta F_{td}$ can be calculated according to 6.2.3(7) of EN 1992-1-1 [1]. The tensile force $F_{sd}$ is calculated from the following equation:

$$F_{sd} = \left(\frac{M_{Eds}}{z} + N_{Ed}\right) + \Delta F_{td}, \quad \text{where} \quad \Delta F_{td} = 0.5 \cdot V_{Ed} \cdot (\cot\theta - \cot\alpha)$$

For structural components without shear reinforcement, the tensile force $\Delta F_{td}$ may be taken into account by shifting the tensile force curve $\left(\frac{M_{Eds}}{z} + N_{Ed}\right)$ by a distance of $a_l = d$ in the unfavourable direction. This approach may also be used as an alternative to the approach for structural components with shear reinforcement mentioned above. In this case, the 'shift rule' is determined by the formula $a_l = 0.5 \cdot z \cdot (\cot\theta - \cot\alpha)$.

The resistant tensile force $F_{Rs}$ of the provided reinforcement is determined within the anchorage lengths $l_{bd}$.

#### Graphical Display of Reinforcement Covering Line / Curtailment of Longitudinal Reinforcement

If the tensile force of the curtailment of longitudinal reinforcement is divided by the design yield strength $f_{yd}$ of the reinforcing steel, a reinforcement covering line is obtained. In RF-CONCRETE Members and CONCRETE, the distribution of the required tension reinforcement $A_s$ (the first result diagram in Figure 02) is shifted by the shift rule $a_l$ in order to obtain the envelope of the required reinforcement (the second result diagram in Figure 02). You can display these shifted required reinforcement envelopes graphically by selecting 'Detailed results' under 'Provided Reinforcement' in the result navigator (see Figure 02). If the envelope of the required reinforcement (red), the shifted envelope of the required reinforcement (green), and the provided reinforcement (blue) are shown graphically together, a reinforcement covering line is obtained (the third result diagram in Figure 02). This diagram corresponds to the curtailment of longitudinal reinforcement in Figure 9.2 of EN 1992-1-1.

#### Reference

[1] Eurocode 2: Design of concrete structures - Part 1-1: General rules and rules for buildings; EN 1992-1-1:2011-01
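As a numerical illustration of the tensile-force formula above (the input values are invented for the example, not taken from the article):

```python
import math

def tensile_force(M_Eds, z, N_Ed, V_Ed, theta_deg, alpha_deg=90.0):
    """F_sd = M_Eds/z + N_Ed + 0.5*V_Ed*(cot(theta) - cot(alpha)),
    per EN 1992-1-1, 6.2.3(7). Units: kN, kNm and m throughout."""
    cot = lambda deg: 1.0 / math.tan(math.radians(deg))
    dF_td = 0.5 * V_Ed * (cot(theta_deg) - cot(alpha_deg))
    return M_Eds / z + N_Ed + dF_td

# e.g. M_Eds = 300 kNm, z = 0.45 m, N_Ed = 0, V_Ed = 200 kN, theta = 40 deg,
# vertical stirrups (alpha = 90 deg)
print(tensile_force(300, 0.45, 0, 200, 40))  # ~785.8 kN
```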
2018-06-18 11:44:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7519736289978027, "perplexity": 2221.207155566494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267859766.6/warc/CC-MAIN-20180618105733-20180618125539-00000.warc.gz"}
http://papers.neurips.cc/paper/5600-projective-dictionary-pair-learning-for-pattern-classification
# NIPS Proceedings

## Projective dictionary pair learning for pattern classification

### Abstract

Discriminative dictionary learning (DL) has been widely studied in various pattern classification problems. Most of the existing DL methods aim to learn a synthesis dictionary to represent the input signal while enforcing the representation coefficients and/or representation residual to be discriminative. However, the $\ell_0$ or $\ell_1$-norm sparsity constraint on the representation coefficients adopted in many DL methods makes the training and testing phases time consuming. We propose a new discriminative DL framework, namely projective dictionary pair learning (DPL), which learns a synthesis dictionary and an analysis dictionary jointly to achieve the goal of signal representation and discrimination. Compared with conventional DL methods, the proposed DPL method can not only greatly reduce the time complexity in the training and testing phases, but also lead to very competitive accuracies in a variety of visual classification tasks.
2020-10-29 12:49:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24000264704227448, "perplexity": 1593.8733930005799}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107904287.88/warc/CC-MAIN-20201029124628-20201029154628-00258.warc.gz"}
https://socratic.org/questions/how-does-star-formation-occur-in-the-interstellar-medium
# How does star formation occur in the interstellar medium? Stars form inside relatively dense concentrations of interstellar gas and dust known as molecular clouds. These regions are extremely cold (temperature about $10 K$ to $20 K$, just above absolute zero). At these temperatures, gases become molecular, meaning that atoms bind together.
2022-12-08 13:40:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 2, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7885237336158752, "perplexity": 2296.9599741809548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711336.41/warc/CC-MAIN-20221208114402-20221208144402-00073.warc.gz"}
https://zulfahmed.wordpress.com/2014/05/10/maximizing-parsimony/
Comparing scientific theories on a specific data set is, at the fundamental level, the statistics of model selection. These ideas can in principle be extended to total theories, although I know of no solid framework for doing so. The difficulty of producing even a single model that fits all known empirical facts in physics might suggest that methods for comparing total theories are a mere curiosity. But the effort may still be justified, given the explosion of widely differing theories unifying gravity with the other forces, including monumental efforts such as quantum gravity and string theory. One attractive quality of efforts toward an 'S4 physics', in which the radius of the universe is fixed to $1/h$ (where $h$ is Planck's constant), is that identifying the radius of the universe with the observed quantum spacing of energy removes free parameters from any theory that must describe subatomic and macroscopic phenomena together. This is an old note; its validity rests on the observation that Einstein's equation with cosmological constant $h^2$ arises as the Euler-Lagrange equation of the Einstein-Hilbert action over hypersurfaces.
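The model-selection point lends itself to a small illustration. Here is a minimal, purely hypothetical sketch (the likelihoods and parameter counts are invented) of how the Bayesian information criterion rewards a theory that fixes a parameter, as the $1/h$ identification would:

```python
import math

def bic(log_likelihood: float, n_params: int, n_obs: int) -> float:
    """Bayesian information criterion: lower is better.
    Each free parameter is penalized by log(n_obs)."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# Hypothetical numbers: two theories fit the data equally well, but
# theory B has one parameter fixed (as fixing the radius to 1/h would),
# so it wins the comparison on parsimony alone.
print(bic(-1234.5, n_params=6, n_obs=1000))  # theory A: ~2510.4
print(bic(-1234.5, n_params=5, n_obs=1000))  # theory B: ~2503.5 (lower)
```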
2017-05-27 00:39:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.642989456653595, "perplexity": 522.203333291396}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608726.4/warc/CC-MAIN-20170527001952-20170527021952-00300.warc.gz"}
https://gmatclub.com/forum/two-different-primes-may-be-said-to-rhyme-around-an-integer-107290.html?fl=similar
# Two different primes may be said to "rhyme" around an integer

gmatpapa wrote:

Two different primes may be said to "rhyme" around an integer if they are the same distance from the integer on the number line. For instance, 3 and 7 rhyme around 5. What integer between 1 and 20, inclusive, has the greatest number of distinct rhyming primes around it?

A. 12
B. 15
C. 17
D. 18
E. 20

Source: MGMAT

Heaven knows what I'll do if I encounter such a question on the GMAT! It is solvable, no doubt, but very time consuming. Please do post the time you take to solve this question. I took 1.4 minutes just to grasp the question, then left it, as I thought it would eat away valuable remaining time on the test.

Bunuel (Math Expert) replied:

By definition, two different primes $p_1$ and $p_2$ rhyme around an integer $n$ if $n - p_1 = p_2 - n$, i.e. $2n = p_1 + p_2$. So twice the number $n$ must equal the sum of two different primes, one less than $n$ and another greater than $n$. Test each option (start from the least prime and check whether adding a prime greater than $n$ gives the required sum):

A. 12 --> 2*12 = 24 --> 24 = 5+19 = 7+17 = 11+13: 6 rhyming primes (3 pairs);
B. 15 --> 2*15 = 30 --> 30 = 7+23 = 11+19 = 13+17: 6 rhyming primes (3 pairs);
C. 17 --> 2*17 = 34 --> 34 = 3+31 = 5+29 = 11+23: 6 rhyming primes (3 pairs);
D. 18 --> 2*18 = 36 --> 36 = 5+31 = 7+29 = 13+23 = 17+19: 8 rhyming primes (4 pairs);
E. 20 --> 2*20 = 40 --> 40 = 3+37 = 11+29 = 17+23: 6 rhyming primes (3 pairs).

Answer: D.
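Bunuel's $2n = p_1 + p_2$ test is mechanical enough to script. A small sketch (mine, not from the thread) that reproduces the counts above:

```python
def is_prime(k: int) -> bool:
    """Trial division; ample for numbers below 40."""
    return k >= 2 and all(k % d for d in range(2, int(k ** 0.5) + 1))

def rhyming_pairs(n: int):
    """Distinct primes (p, q) with p < n < q and p + q = 2n,
    i.e. primes equidistant from n on the number line."""
    return [(p, 2 * n - p) for p in range(2, n)
            if is_prime(p) and is_prime(2 * n - p)]

for n in (12, 15, 17, 18, 20):
    print(n, rhyming_pairs(n))
# 18 is the only choice with four pairs (eight rhyming primes): answer D.
```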
A member replied:

Great explanation, Bunuel, thanks a lot, and a nice question.

VeritasPrepKarishma (Veritas Prep GMAT Instructor) replied:

Alternative solution: since we are concerned with integers between 1 and 20, write down the primes up to 40 (you should be very comfortable with the first few primes):

2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37

Now place each answer choice in the list and look for prime pairs equidistant from it:

2, 3, 5, 7, 11, [12], 13, 17, 19, 23, 29, 31, 37 - three pairs: (11, 13), (7, 17), (5, 19)
2, 3, 5, 7, 11, 13, [15], 17, 19, 23, 29, 31, 37 - three pairs: (13, 17), (11, 19), (7, 23)
2, 3, 5, 7, 11, 13, [17], 19, 23, 29, 31, 37 - three pairs: (11, 23), (5, 29), (3, 31)
2, 3, 5, 7, 11, 13, 17, [18], 19, 23, 29, 31, 37 - four pairs: (17, 19), (13, 23), (7, 29), (5, 31)
2, 3, 5, 7, 11, 13, 17, 19, [20], 23, 29, 31, 37 - only four primes exceed 20, so 20 cannot have more than four pairs, and in fact it has fewer (31 would need 9, which is not prime). Ignore.

Answer (D). It doesn't take too much time to look for equidistant pairs.
royal wrote:

Why are we considering primes up to 40? I did not get it.

Another member replied:

Because the highest integer for which rhyming pairs must be found is 20, we need to consider an equal range below and above 20. In fact, the range (2, 38) suffices: the lowest prime is 2, and its partner around 20 would be 2*20 - 2 = 38.

anon1111 wrote:

Bunuel and Karishma, 17 has four sets of rhyming primes. You both haven't considered (3, 31) as a possible pair.

VeritasPrepKarishma replied:

Both Bunuel and I did consider 3 and 31 as rhyming primes for 17 in our solutions above:

2, 3, 5, 7, 11, 13, [17], 19, 23, 29, 31, 37 - three pairs: (11, 23), (5, 29), (3, 31)

anon1111 wrote:

My apologies! I had accidentally counted 15 and 19 as a rhyming pair and didn't see the (3, 31) pair. Thanks for the correction.

A later member shared a guessing approach:

Hello, I wanted to share how I ended up with the correct answer. It is probably a lucky choice, but just in case, I wanted to share. I didn't see the connection with the mean (even though statistics is my biggest strength).
What I did was first to find the primes up to 20, just to see whether there was a pattern that made sense. I lined them up, smaller to larger, and tried to find a number between 1 and 20; for me this meant 1 < x < 20, so one of 2, 3, 4, ..., 19. Then I realised that there is no upper limit to the primes, so there is no reason why they should stop at 19. What I realised next is that the number with the most primes around it should be the highest possible in the given range, and 19 being the highest such value, it seemed logical that it would have the most primes around it. I rejected 20 because of the range, so I chose 18 (D) because it was the second highest. Does it make any sense?

GMATinsight (Bhoopendra Singh and Dr. Sushma Jha) replied:

The solution is in the attached image (sol2.jpg), not reproduced here.
2018-01-22 00:44:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6827670335769653, "perplexity": 2818.3696359098503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890928.82/warc/CC-MAIN-20180121234728-20180122014728-00675.warc.gz"}