Dataset columns: url (string, 14-2.42k characters), text (string, 100-1.02M characters), date (string, 19 characters), metadata (string, 1.06k-1.1k characters).
https://www.nengo.ai/nengo-dl/v0.5.1/extra_objects.html
# Extra Nengo objects¶ NengoDL adds some new Nengo objects that can be used during model construction. These could be used with any Simulator, not just nengo_dl, but they tend to be useful for deep learning applications. ## Neuron types¶ Additions to the neuron types included with Nengo. class nengo_dl.neurons.SoftLIFRate(sigma=1.0, **lif_args)[source] LIF neuron with smoothing around the firing threshold. This is a rate version of the LIF neuron whose tuning curve has a continuous first derivative, due to the smoothing around the firing threshold. It can be used as a substitute for LIF neurons in deep networks during training, and then replaced with LIF neurons when running the network [1]. Parameters: sigma : float Amount of smoothing around the firing threshold. Larger values mean more smoothing. tau_rc : float Membrane RC time constant, in seconds. Affects how quickly the membrane voltage decays to zero in the absence of input (larger = slower decay). tau_ref : float Absolute refractory period, in seconds. This is how long the membrane voltage is held at zero after a spike. Notes References [1] (1, 2) Eric Hunsberger and Chris Eliasmith (2015): Spiking deep networks with LIF neurons. https://arxiv.org/abs/1510.08829. rates(x, gain, bias)[source] Always use LIFRate to determine rates. step_math(dt, J, output)[source] Compute rates in Hz for input current (incl. bias). ## Distributions¶ Additions to the distributions included with Nengo. These distributions are usually used to initialize weight matrices, e.g. nengo.Connection(a.neurons, b.neurons, transform=nengo_dl.dists.Glorot()). class nengo_dl.dists.TruncatedNormal(mean=0, stddev=1, limit=None)[source] Normal distribution where any values more than some distance from the mean are resampled. Parameters: mean : float, optional Mean of the normal distribution. stddev : float, optional Standard deviation of the normal distribution. limit : float, optional Resample any values more than this distance from the mean. If None, then limit will be set to 2 standard deviations. sample(n, d=None, rng=None)[source] Samples the distribution. Parameters: n : int Number of samples to take. d : int or None, optional The number of dimensions to return. If this is an int, the return value will be of shape (n, d). If None, the return value will be of shape (n,). rng : numpy.random.RandomState, optional Random number generator state (if None, will use the default numpy random number generator). samples : (n,) or (n, d) array_like Samples as a 1d or 2d array depending on d. The second dimension enumerates the dimensions of the process. class nengo_dl.dists.VarianceScaling(scale=1, mode='fan_avg', distribution='uniform')[source] Variance scaling distribution for weight initialization (analogous to TensorFlow init_ops.VarianceScaling). Parameters: scale : float, optional Overall scale on values. mode : “fan_in” or “fan_out” or “fan_avg”, optional Whether to scale based on input or output dimensionality, or the average of the two. distribution : “uniform” or “normal”, optional Whether to use a uniform or normal distribution for weights. sample(n, d=None, rng=None)[source] Samples the distribution. Parameters: n : int Number of samples to take. d : int or None, optional The number of dimensions to return. If this is an int, the return value will be of shape (n, d). If None, the return value will be of shape (n,). rng : numpy.random.RandomState, optional Random number generator state (if None, will use the default numpy random number generator).
samples : (n,) or (n, d) array_like Samples as a 1d or 2d array depending on d. The second dimension enumerates the dimensions of the process. class nengo_dl.dists.Glorot(scale=1, distribution='uniform')[source] Weight initialization method from [1] (also known as Xavier initialization). Parameters: scale : float, optional scale on weight distribution. for rectified linear units this should be sqrt(2), otherwise usually 1 distribution: “uniform” or “normal”, optional whether to use a uniform or normal distribution for weights References [1] (1, 2) Xavier Glorot and Yoshua Bengio (2010): Understanding the difficulty of training deep feedforward neural networks. International conference on artificial intelligence and statistics. http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf. class nengo_dl.dists.He(scale=1, distribution='normal')[source] Weight initialization method from [1]. Parameters: scale : float, optional scale on weight distribution. for rectified linear units this should be sqrt(2), otherwise usually 1 distribution: “uniform” or “normal”, optional whether to use a uniform or normal distribution for weights References [1] (1, 2) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. (2015): Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. https://arxiv.org/abs/1502.01852.
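The constructor signatures documented above are enough to sketch how these objects slot into a model. The following is a minimal, illustrative sketch (not from the documentation itself); it assumes nengo and nengo_dl (around v0.5.x) are installed, and the ensemble sizes, sigma, stddev and limit values are arbitrary choices of mine.

```python
import nengo
import nengo_dl

with nengo.Network() as net:
    # SoftLIFRate: a smoothed, differentiable stand-in for LIF during training
    a = nengo.Ensemble(100, 1, neuron_type=nengo_dl.neurons.SoftLIFRate(sigma=0.1))
    b = nengo.Ensemble(100, 1, neuron_type=nengo_dl.neurons.SoftLIFRate(sigma=0.1))

    # Initialize neuron-to-neuron weights with the Glorot distribution,
    # as in the documentation example above
    nengo.Connection(a.neurons, b.neurons, transform=nengo_dl.dists.Glorot())

    # TruncatedNormal: values farther than `limit` from the mean are resampled
    weights = nengo_dl.dists.TruncatedNormal(mean=0, stddev=0.05, limit=0.1)
    nengo.Connection(b.neurons, a.neurons, transform=weights)

with nengo_dl.Simulator(net) as sim:
    sim.run(0.1)   # standard nengo simulator call; details may differ by version
```

Because SoftLIFRate has a continuous first derivative, a network built this way can be trained with gradient-based methods and the neuron type later swapped for nengo.LIF when running the spiking version, as described in the reference above.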
2022-11-27 05:37:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27173224091529846, "perplexity": 4722.283222396597}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710192.90/warc/CC-MAIN-20221127041342-20221127071342-00455.warc.gz"}
https://www.physicsforums.com/threads/double-tic-tac-toe-strategy.577924/
# Double Tic-Tac-Toe Strategy 1. Feb 15, 2012 ### hammonjj 1. The problem statement, all variables and given/known data Consider the game of 'double move tic-tac-toe', played by the usual rules of tic-tac-toe, except that each player makes two marks in succession before relinquishing his turn to the other player (you may know tic-tac-toe by the name 'noughts and crosses'). Prove that there exists a strategy by which the first player always wins. 2. Relevant equations None that I can think of. 3. The attempt at a solution I have no clue how to prove this. The obvious strategy is that player one places an X at one of the corners of the board and then one in the center. Player two can't block all the winning strategies with their two moves. The question is, how do I show this in "math speak"? Thanks! Sorry for all the posting lately, I'm just terrible at Discrete Math. 2. Feb 15, 2012 ### rasmhop After the first turn the board looks like
+-+-+-+
|X| | |
+-+-+-+
| |X| |
+-+-+-+
| | | |
+-+-+-+
Where does player 2 need to place O's for you not to be able to win on your turn? Can this be done with just 2 O's? If you can find three disjoint sets of spots that must each be blocked, then you are finished. In other words, can you partition the last 7 empty spaces into 3 disjoint subsets such that if any of the 3 subsets is left untouched by player 2, then you can win on your turn? 3. Feb 15, 2012 ### Deveno Obviously, player 2 cannot win in 1 move (he can only place two marks). Since player 2 cannot win on their first move, their best strategy is to prevent player 1 from winning on player 1's second move. Player 2 must place a mark at (3,3), or else player 1 will take it on the next move and win. There are 4 possible ways player 1 might place two marks and win on their next move, given that (3,3) is taken: complete the center row, the center column, the left column, or the top row. Player 2 must block all 4 of these possibilities with a single move. Show that player 2 can block at most 2 of these. A slightly more challenging question is: suppose player 1 allows player 2 to make his first move for him (still using X's for player 1 and O's for player 2; player 2 does NOT get to place 4 O's). Does player 1 still always have a winning strategy? 4. Feb 16, 2012 ### dirk_mec1
+-+-+-+
|x| | |
+-+-+-+
| | | |
+-+-+-+
| | |x|
+-+-+-+
In this case player two can never win because you have three rows to win, right?
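Not part of the thread, but the claim is small enough to verify by brute force before writing the pencil-and-paper argument. The sketch below assumes a win is checked after every individual mark (so a player can win with the first of their two marks); the function and variable names are mine.

```python
from functools import lru_cache
from itertools import combinations

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def has_win(board, player):
    return any(all(board[i] == player for i in line) for line in LINES)

@lru_cache(maxsize=None)
def to_move_wins(board, player):
    """True if `player`, about to place two marks on `board`, can force a win."""
    empties = [i for i, c in enumerate(board) if c == '.']
    opponent = 'O' if player == 'X' else 'X'

    # Win immediately with the first of the two marks?
    for i in empties:
        if has_win(board[:i] + player + board[i + 1:], player):
            return True

    if len(empties) == 1:      # one non-winning mark left: the game ends in a draw
        return False

    # Otherwise try every unordered pair of placements.
    for i, j in combinations(empties, 2):
        cells = list(board)
        cells[i] = cells[j] = player
        new_board = ''.join(cells)
        if has_win(new_board, player):
            return True        # the pair completes a line
        if len(empties) == 2:
            continue           # board is now full with no winner: draw
        if not to_move_wins(new_board, opponent):
            return True        # opponent cannot force a win from here
    return False

print(to_move_wins('.' * 9, 'X'))   # expected output: True
```

The search confirms the first player has a forced win; the corner-plus-center opening discussed above is one of the winning first moves.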
2018-01-20 05:56:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4765099883079529, "perplexity": 912.685748053402}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889325.32/warc/CC-MAIN-20180120043530-20180120063530-00051.warc.gz"}
https://trac-hacks.org/ticket/7591
Opened 7 years ago Closed 7 years ago API update in [8493] breaks old syntax Description Hi: After changeset [8493] the markers options like color are missing. Now address and coordinates work fine together: [[GoogleStaticMap(center="Providencia Santiago Chile",zoom=15,size=400x400,markers="Providencia Santiago Chile"|-33.432749:-70.615852)]] But if I add an optional argument like color, the markers don't show up: [[GoogleStaticMap(center=-33.432749:-70.615852,zoom=15,size=400x400,markers=-33.432749:-70.615852:bluea)]] The actual features are OK for me, but I thought it would be helpful to report this issue. Regards, Javier comment:1 Changed 7 years ago by Martin Scharrer Field changes: normal → high; normal → critical; new → assigned; summary changed from "After changeset [8493] markers options are missing" to "API update in [8493] breaks old syntax". Thanks for reporting this. Since [8493] the new API 2.0 of Google is used, which changes the markers syntax (and most likely other syntax as well). See http://code.google.com/apis/maps/documentation/staticmaps/#MarkerStyles for details. I will have to add an api option to select between the old and new API version in order to support old code. comment:2 Changed 7 years ago by Martin Scharrer Resolution: → fixed; assigned → closed (In [8557]) tracgooglestaticmap/macro.py: Finished support for old (1) and new (2) Google Static Map API. This fixes #7591. comment:3 in reply to: description ; follow-up: 4 Changed 7 years ago by Martin Scharrer [...] But if I add an optional argument like color, the markers don't show up: [[GoogleStaticMap(center=-33.432749:-70.615852,zoom=15,size=400x400,markers=-33.432749:-70.615852:bluea)]] The actual features are OK for me, but I thought it would be helpful to report this issue. You can get this to work in the following way with the new syntax: [[GoogleStaticMap(center="-33.432749,-70.615852",zoom=15,size=400x400,markers="color:blue|label:A|-33.432749,-70.615852")]] This gives you this image. comment:4 in reply to: 3 ; follow-up: 5 Changed 7 years ago by tatadeluxe@… [...] But if I add an optional argument like color, the markers don't show up: [[GoogleStaticMap(center=-33.432749:-70.615852,zoom=15,size=400x400,markers=-33.432749:-70.615852:bluea)]] The old syntax for markers doesn't work anymore; backward compatibility is missing, but for me it is not a problem. The actual features are OK for me, but I thought it would be helpful to report this issue. You can get this to work in the following way with the new syntax: [[GoogleStaticMap(center="-33.432749,-70.615852",zoom=15,size=400x400,markers="color:blue|label:A|-33.432749,-70.615852")]] This gives you this image. The new syntax works OK, :D Regards, Javier comment:5 in reply to: 4 Changed 7 years ago by Martin Scharrer The old syntax for markers doesn't work anymore; backward compatibility is missing, but for me it is not a problem. Yes, I saw that. This is now fixed in [8571]. The new syntax works OK, :D Thanks for verifying this.
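For readers who want to reproduce the working example outside the Trac macro, the new (API 2.0) markers string quoted in comment 3 can be assembled and URL-encoded in a few lines. This is only an illustration of the markers syntax discussed above; the endpoint is a current-looking guess and modern Google Static Maps deployments also require an API key, neither of which is taken from the ticket.

```python
from urllib.parse import urlencode

# Parameters taken from the working example in comment 3
params = {
    "center": "-33.432749,-70.615852",
    "zoom": 15,
    "size": "400x400",
    "markers": "color:blue|label:A|-33.432749,-70.615852",  # API 2.0 marker style
}

# Hypothetical endpoint for illustration; a real request would also need &key=...
url = "https://maps.googleapis.com/maps/api/staticmap?" + urlencode(params)
print(url)
```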
2017-04-26 07:05:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1858430951833725, "perplexity": 4421.500461598437}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121165.73/warc/CC-MAIN-20170423031201-00432-ip-10-145-167-34.ec2.internal.warc.gz"}
https://www.bartleby.com/questions-and-answers/linear-approximate-the-change-in-the-lateral-surface-area-excluding-the-area-of-the-base-of-a-right-/4e71af11-ee18-4844-b23d-a6bb6693417a
# Linear approximation: approximate the change in the lateral surface area (excluding the area of the base) of a right circular cone with a fixed height h = 6 m when its radius decreases from r = 10 m to r = 9.9 m. (The lateral surface area is $S = \pi r \sqrt{r^2 + h^2}$.) Question Approximate the change in the lateral surface area (excluding the area of the base) of a right circular cone with a fixed height h = 6 m when its radius decreases from r = 10 m to r = 9.9 m, using a linear approximation. ($S = \pi r \sqrt{r^2 + h^2}$)
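The page only states the formula, so here is one way to carry out the requested linear approximation, ΔS ≈ S'(r)Δr with Δr = -0.1 m; this worked check is mine, not part of the original post.

```python
import sympy as sp

r, h = sp.symbols('r h', positive=True)
S = sp.pi * r * sp.sqrt(r**2 + h**2)          # lateral surface area of the cone

dS_dr = sp.diff(S, r)                         # derivative with respect to the radius
approx = dS_dr.subs({r: 10, h: 6}) * (-0.1)   # dr = 9.9 m - 10 m = -0.1 m

print(sp.simplify(dS_dr))   # a form equivalent to pi*(2*r**2 + h**2)/sqrt(r**2 + h**2)
print(float(approx))        # roughly -6.36, i.e. the area shrinks by about 6.36 m^2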
2021-04-20 03:48:47
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8528634309768677, "perplexity": 921.971982673217}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039375537.73/warc/CC-MAIN-20210420025739-20210420055739-00316.warc.gz"}
https://socratic.org/questions/how-do-you-solve-x-2-10x-1575-by-completing-the-square
# How do you solve x^2 - 10x = 1575 by completing the square? $\implies {x}^{2} - 2 \times 5 \times x + {5}^{2} - 25 = 1575$ $\implies {\left(x - 5\right)}^{2} = 1575 + 25$ $\implies {\left(x - 5\right)}^{2} = 1600$ $\implies \left(x - 5\right) = \pm \sqrt{1600} = \pm 40$ $\therefore x = 45 \text{ or } x = -35$
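A two-line check of the corrected roots (my addition, not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.solve(sp.Eq(x**2 - 10*x, 1575), x))   # roots: -35 and 45
```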
2021-06-19 17:46:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49647510051727295, "perplexity": 848.8449625335805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487649688.44/warc/CC-MAIN-20210619172612-20210619202612-00275.warc.gz"}
https://kullabs.com/classes/subjects/units/lessons/notes/note-detail/2317
Rational and Irrational Number | kullabs.com Note on Rational and Irrational Number

Rational Numbers

Numbers can take different forms: some can be written as a fraction, a ratio, a root, or a decimal. A number that can be written in the form $$\frac{p}{q}$$ (a fraction) of two integers p and q, where p is the numerator and q ≠ 0, is called a rational number. 5, $$\frac{2}{3}$$, $$\frac{7}{4}$$, $$\frac{3}{4}$$, $$\frac{3}{5}$$, etc. are examples of rational numbers. Rational numbers include: • All natural numbers • All whole numbers • All integers • All fractions

Irrational Numbers

Numbers which cannot be expressed as a ratio (as a fraction of integers) are known as irrational numbers; their decimal expansions neither terminate nor repeat. For example, √7 = 2.64575131... and √5 = 2.2360679... are irrational numbers. √2, √3, √5, √6, √7, etc. are examples of irrational numbers, since their decimal expansions are non-terminating and non-repeating.

Some Results on Irrational Numbers
1. The negative of an irrational number is always an irrational number. For example, -√5.
2. The sum of a rational number and an irrational number is always an irrational number. For example, 2 + √3 is irrational.
3. The product of a non-zero rational number and an irrational number is always an irrational number. For example, 5√3 is an irrational number.
4. The sum of two irrational numbers is not always an irrational number. For example, (2 + √3) + (2 - √3) = 4, which is rational.
5. The product of two irrational numbers is not always an irrational number. For example, (2 + √3) × (2 - √3) = 4 - 3 = 1, which is rational.

• A number in the form $$\frac{p}{q}$$, where p and q are integers and q ≠ 0, is called a rational number.
• A rational number is a number that can be written as a ratio.
• An irrational number is a real number that cannot be expressed as a ratio of integers.
• Irrational numbers cannot be represented as terminating or repeating decimals.

Very Short Questions
Solution: 1. 0.5 2. 0 3. -100 4. $$\frac{3}{5}$$
Solution: 1. √5 and 5-√5 2. √3+2 and 3-√3
Solution: 1. √3 and -√3 2. √5 and -√5
a) π is an irrational number. (True)
b) -√3 is an irrational number. (True)
c) Irrational numbers cannot be represented by points on the number line. (False)
d) All real numbers are rational. (False)
e) Not every real number is a rational number. (True)
Solution: √2 = 1.41421356...
Solution: 1) 0.75 2) -100 3) $$\frac{7}{20}$$ 4) 0
Solution: -6/25 ÷ 3/5 = -6/25 × 5/3 = {(-6) × 5}/(25 × 3) = -30/75 = -2/5
Solution: 11/24 ÷ (-5)/8 = 11/24 × 8/(-5) = (11 × 8)/{24 × (-5)} = 88/-120 = -11/15
Solution: (-25/9) × (-18/15) = {(-25) × (-18)}/(9 × 15) = 450/135 = 10/3
Solution: (-11)/3 is not a positive rational, since the numerator and denominator have opposite signs.
Solution: 25/(-27) is not a positive rational, since the numerator and denominator have opposite signs.
• Change the following decimal number into a fraction: 0.$$\overline{5}$$. Options: $$\frac{9}{4}$$, $$\frac{5}{9}$$, $$\frac{6}{29}$$, $$\frac{3}{9}$$
• Change the following decimal number into a fraction: 0.$$\overline{24}$$. Options: $$\frac{7}{16}$$, $$\frac{9}{41}$$, $$\frac{6}{10}$$, $$\frac{8}{33}$$
• Change the following decimal number into a fraction: 0.$$\overline{132}$$. Options: $$\frac{44}{333}$$, $$\frac{55}{444}$$, $$\frac{14}{153}$$, $$\frac{36}{863}$$
• Change the following decimal number into a fraction: 0.$$\overline{27}$$. Options: $$\frac{3}{11}$$, $$\frac{6}{18}$$, $$\frac{4}{20}$$, $$\frac{1}{10}$$
• Change the following decimal number into a fraction: 1.$$\overline{57}$$. Options: $$\frac{43}{12}$$, $$\frac{52}{33}$$, $$\frac{12}{32}$$, $$\frac{2}{19}$$
• Change the following decimal number into a fraction: 0.$$\overline{365}$$. Options: $$\frac{162}{222}$$, $$\frac{365}{999}$$, $$\frac{305}{125}$$, $$\frac{265}{888}$$
• Change the following decimal number into a fraction: 4.$$\overline{78}$$. Options: $$\frac{189}{41}$$, $$\frac{111}{91}$$, $$\frac{158}{33}$$, $$\frac{135}{12}$$
• Change the following decimal number into a fraction: 0.$$\overline{445}$$. Options: $$\frac{142}{669}$$, $$\frac{565}{555}$$, $$\frac{325}{189}$$, $$\frac{445}{999}$$
• Change the following decimal number into a fraction: 1.$$\overline{525}$$. Options: $$\frac{500}{554}$$, $$\frac{508}{333}$$, $$\frac{458}{444}$$, $$\frac{226}{289}$$
• The sum of the rational numbers $$\frac{-8}{19}$$ and $$\frac{-4}{57}$$ is? Options: $$\frac{7}{22}$$, $$\frac{-5}{57}$$, $$\frac{4}{27}$$, $$\frac{-28}{57}$$
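The quiz items above all convert purely repeating decimals to fractions: a repeating block of k digits equals that block over k nines, plus any integer part. This is easy to sanity-check with Python's fractions module; the helper below is my own and only handles the n.(repeating block) form used in the quiz.

```python
from fractions import Fraction

def repeating_to_fraction(integer_part, repeating_digits):
    """Return integer_part followed by the repeating block, as an exact Fraction."""
    k = len(repeating_digits)
    return integer_part + Fraction(int(repeating_digits), 10**k - 1)

print(repeating_to_fraction(0, "5"))     # 5/9
print(repeating_to_fraction(0, "24"))    # 8/33
print(repeating_to_fraction(0, "132"))   # 44/333
print(repeating_to_fraction(0, "27"))    # 3/11
print(repeating_to_fraction(1, "57"))    # 52/33
print(repeating_to_fraction(4, "78"))    # 158/33
```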
2019-04-20 13:21:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9198760390281677, "perplexity": 1701.6410226010971}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578529813.23/warc/CC-MAIN-20190420120902-20190420142902-00151.warc.gz"}
https://tex.stackexchange.com/questions/183331/greek-description-for-custom-variables-newkomavar-in-lco-file
# Greek description for custom variables (\newkomavar*) in lco-file Based on the templates/files asymTypB.lco, brieftemplate.tex, briefwbk.tex found at http://www.komascript.de/files/KOMA-Script-3-Buch-Beispielcode.zip, I derived a modified template for an invoice (initially as posted at https://tex.stackexchange.com/a/152558/8272). Attempting to translate this into Greek (available at https://github.com/NikosAlexandris/invoice_el), I've added the following instructions, as per KOMA's guide recommendations: \AtBeginDocument{% \providecaptionname{greek}{\datename}{Ημερομηνία}% \providecaptionname{greek}\subjectname{Θέμα}% \providecaptionname{greek}{\customername}{Πελάτης (Κωδικός, Αριθμός)}% \providecaptionname{greek}\yourmailname{Εγγραφή Πελάτη}% \providecaptionname{greek}{\yourrefname}{Διακριτικός τίτλος έργου}% \providecaptionname{greek}\emailname{η-Ταχυδρομείο}% \providecaptionname{greek}\wwwname{Url}% \providecaptionname{greek}\phonename{Τηλέφωνο}% \providecaptionname{greek}\faxname{Τηλεομοιότυπο}% \providecaptionname{greek}{\myrefname}{Εσωτερική εγγραφή}% \providecaptionname{greek}{\invoicename}{Τιμολόγιο No.}% \providecaptionname{greek}{\bankname}{Τραπεζικός Λογαριασμός}% \providecaptionname{greek}\ccname{cc}% \providecaptionname{greek}\enclname{Επισυναπτόμενα}% \providecaptionname{greek}\pagename{Σελίδα}% } In addition, I added new variables in the respective .lco file. As explained in (this) KOMA-Script guide (English version, page 371), a new variable accepts a pre-defined description, i.e. \newkomavar*[description]{name}. For example, % New variable(s) here! \newkomavar{company}% \newkomavar{professiona}% \newkomavar{professionb}% \newkomavar{fromvatin}% So far, all is fine. However, adding a Greek description for a custom variable won't work as expected. To exemplify, the following \newkomavar*[ΑΦΜ Πελάτη]{yourvatin}% appears in the compiled pdf (PDFLaTeX) as ὐἇὐᾔὐῂ ὐήὐᾡὐὢὐῇὓᾲὐᾣ. How should a Greek description for custom KOMA variables be realised (with PDFLaTeX) from inside an .lco file? • Kind of a minimal working example available at github.com/NikosAlexandris/invoice_el/blob/master/… – Nikos Alexandris Jun 5 '14 at 15:04 • I think refname and refvalue must also be declared with \newkomavar. I get "Class scrlttr2 Error: KOMA-Script variable not defined." – mvkorpel Jun 6 '14 at 19:51 • @mvkorpel Thanks for your attention. I don't know how to deal with this. Looking at it. Anyhow, I get a PDF, despite errors, and my main problem is the "Greek" description of the custom variable. I will try to fix all errors in time. Ideas on where to look are, of course, welcome. – Nikos Alexandris Jun 7 '14 at 19:41 • I was able to compile the example without errors by applying the following changes: 1. In custom_invoice_asymTypB_el.lco, I moved \raggedright to be the last command inside the \parbox titled "Main block of Info-Column". I don't know why this helps. 2. In custom_invoice_template_el.tex, I replaced \smallskip with \smallskipamount: a length is required. 3. Also in the template file, I commented out \setkomavar lines where \includegraphics points to a non-existent file. Bonus: I moved \makeatletter down to just before \@setplength and used \makeatother to reset the change. – mvkorpel Jun 16 '14 at 14:53 • I made a pull request on github. – mvkorpel Jun 17 '14 at 7:04 Your example files seem to use UTF-8 encoding. The sequence of UTF-8 hex codes from the individual letters of "ΑΦΜΠελάτη" is ce91 cea6 ce9c cea0 ceb5 cebb ceac cf84 ceb7 (note: space was removed).
I am not familiar with the intricate details of Greek font encodings in LaTeX, but the table of the LGR encoding in the LaTeX font encodings manual maps the bytes of the UTF-8 sequence (ce, 91, ce, a6, ...) to the wrong output you are seeing. ## A new example As I am not able to compile your example document, I must demonstrate the issue and proposed solutions with minimal examples of my own. First, the problem occurs when no inputenc has been defined: \documentclass{standalone} \usepackage[LGR]{fontenc} \begin{document} ΑΦΜ Πελάτη \end{document} The problem can be solved by: A. defining a suitable input encoding \documentclass{standalone} \usepackage[LGR]{fontenc} \usepackage[utf8]{inputenc} \begin{document} ΑΦΜ Πελάτη \end{document} or B. using character codes found in the LGR table. The mapping from "ΑΦΜΠελάτη" to decimal codes is 65, 70, 77, 80, 101, 108, 136, 116, 104. \documentclass{standalone} \usepackage[LGR]{fontenc} \begin{document} \char65\char70\char77{} \char80\char101\char108\char136\char116\char104 \end{document} Both A and B give the same result: ## Applying this to the original example I guess the character code solution (B) would work as such. For the input encoding solution to work, I think you would need to move the \inputenc declaration to an earlier location in your document, before any Greek text. Note that in your example the .lco template containing \newkomavar*[ΑΦΜ Πελάτη]{yourvatin} is included straight from \documentclass, before the declaration of an \inputenc. • Your answer (especially the very last paragrah), lead me to an answer :-). I modified the .lco file so as to be, regarding the custom variable in question, \newkomavar*[\yourvatinname]{yourvatin} and \providecaptionname{english}\yourvatinname{Customer's VATin}. It's a step forward. – Nikos Alexandris Jun 12 '14 at 11:37 • Appears that I wasn't persistent enough when trying to compile your example document. The current version requires 18 presses of <return> to pdflatex, but finally a pdf is produced. The previous version only requires 10 <return>s. Interesting. Every error happens on the \opening line. – mvkorpel Jun 14 '14 at 21:48 • I will get back on this once I have some more free time. I certainly want to have a clean template, after eliminating all errors, one by one. Your invaluable time is helping me getting there. – Nikos Alexandris Jun 15 '14 at 10:20
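As a side note (mine, not part of the original answer or comments), the byte sequence quoted at the start of the answer is easy to reproduce, which makes the mechanism of the garbling concrete:

```python
text = "ΑΦΜ Πελάτη"

# UTF-8 encodes each Greek letter here as two bytes (0xCE/0xCF plus a continuation byte);
# a font expecting single-byte LGR codes then misreads each byte as its own glyph.
print(" ".join(f"{b:02x}" for b in text.encode("utf-8")))
# ce 91 ce a6 ce 9c 20 ce a0 ce b5 ce bb ce ac cf 84 ce b7
```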
2018-12-15 13:14:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4883098602294922, "perplexity": 4270.742564298027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826856.91/warc/CC-MAIN-20181215131038-20181215153038-00267.warc.gz"}
https://support.mozilla.org/mk/questions/1330946
## Firefox fails to use some local fonts After installing the Native MathML extension (to get MathML-typeset equations on Wikipedia, as suggested on mediawiki.org), all equations set by MathJax/MathML turned to using STIX instead of Latin Modern. I'd like them to stick to Latin Modern instead. The only possibly related setting I've found is font.name-list.serif.x-math in about:config, whose value is Latin Modern Math, STIX Two Math, […]. I tried to manually change the font family specification of some text to see if Firefox could use Latin Modern Math to begin with, but apparently it can't (see first screenshot attached), while it loads STIX all right (second screenshot). Latin Modern is not the first locally installed typeface I haven't been able to use, but it's the first one I have really tried to (I don't remember which one the other typeface was). My Latin Modern font comes from TeX Live 2019 installed using the TUG installer, not Fedora's package manager. fc-list | grep -i "latin modern math" returns /usr/local/texlive/2019/texmf-dist/fonts/opentype/public/lm-math/latinmodern-math.otf: Latin Modern Math:style=Regular. I'm running Firefox 87 on Fedora 33. Attached screenshots #### Chosen solution I had to add the font files' directory to security.sandbox.content.read_path_whitelist in about:config. See https://www.reddit.com/r/firefox/comments/mhtw38/fedora_i_cant_use_some_of_my_local_fonts_on/ and https://wiki.mozilla.org/Security/Sandbox#Customization_Settings
2021-05-16 07:18:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8597117066383362, "perplexity": 9956.250249066436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989690.55/warc/CC-MAIN-20210516044552-20210516074552-00065.warc.gz"}
https://www.physicsforums.com/threads/pointwise-convergence-of-integral-of-fourier-series.403902/
# Homework Help: Pointwise convergence of integral of Fourier series 1. May 17, 2010 ### twizzy 1. The problem statement, all variables and given/known data If $$f(x)$$ is a piecewise-continuous function in $$[-L,L]$$, show that its indefinite integral $$F(x) = \int_{-L}^x f(s) ds$$ has a full Fourier series that converges pointwise. 2. Relevant equations Full Fourier series: $$f(x)=\frac{1}{2}A_0 + \sum_{n=1}^\infty A_n \cos (\frac{n \pi }{L}x) + B_n \sin (\frac{n \pi}{L}x)$$ Definition: $$\sum_{n=1}^\infty f_n (x)$$ converges to $$f(x)$$ pointwise in $$(a,b)$$ if for each $$a<x<b$$ we have $$\Big| f(x) - \displaystyle{\sum_{n=1}^N f_n (x)} \Big| \to 0$$ as $$N\to\infty$$. 3. The attempt at a solution I think I need to somehow justify integrating term-by-term, but am not sure how to proceed. Any ideas? 2. May 17, 2010 ### ninty If you want to integrate term by term, you need uniform convergence. Haven't really looked at this, so not saying that term-by-term integration is the solution here.
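Not part of the thread, but a small numerical illustration of the statement being proved can help build intuition: take the piecewise-continuous f(x) = sign(x) on [-1, 1], so F(x) = |x| - 1, compute the full Fourier coefficients of F by quadrature, and watch the partial sums approach F at individual points. The choice of f and all names below are mine.

```python
import numpy as np

L = 1.0
xs = np.linspace(-L, L, 4001)
F = np.abs(xs) - L                      # F(x) = integral of sign(s) from -L to x

def coeff(n, kind):
    """Fourier coefficient A_n or B_n of F on [-L, L], estimated by the trapezoid rule."""
    basis = np.cos(n * np.pi * xs / L) if kind == "cos" else np.sin(n * np.pi * xs / L)
    return np.trapz(F * basis, xs) / L

def partial_sum(x, N):
    s = 0.5 * coeff(0, "cos")
    for n in range(1, N + 1):
        s += coeff(n, "cos") * np.cos(n * np.pi * x / L)
        s += coeff(n, "sin") * np.sin(n * np.pi * x / L)
    return s

for N in (2, 8, 32):
    errs = [abs(partial_sum(x, N) - (abs(x) - L)) for x in (-0.5, 0.0, 0.7)]
    print(N, [f"{e:.4f}" for e in errs])   # the pointwise errors shrink as N grows
```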
2018-06-23 09:00:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9577224254608154, "perplexity": 378.1905257906096}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864953.36/warc/CC-MAIN-20180623074142-20180623094142-00343.warc.gz"}
https://studyqas.com/conjunction-or-disjunction-x-5-5-please-answer-this-or-my/
# Conjunction or disjunction: x + 5 5 PLEASE ANSWER THIS OR MY OTHER QUESTION PLEEAAASE WILL GIVE BRAINLIEST ## This Post Has 8 Comments
1. balka75 says: -0.8 Step-by-step explanation: Hope this helps ya! Have a great day! -Camila
2. stephany739 says: 1.7 Step-by-step explanation: As the illustration shows, the hypotenuse of one of the smallest triangles is 2 cm, and the base of this triangle is 1. Our job is to determine the triangle height marked "?" in the drawing. We apply the Pythagorean Theorem, obtaining 1² + ?² = 2². Then ?² = 4 - 1, and ? = +√3. This corresponds to Answer d: √3, which is approx. 1.7.
3. 1031kylepoe03 says: Remove the cube root operation by raising all the terms to the power 1/3: (1/100)^(1/3), (c^9)^(1/3), (a^12)^(1/3). Simplify: (1/10)c^3a^4
4. cupkakekawaii45 says: C, 1/10 c^3d^4
5. tasjanayroberts says: Step-by-step explanation: The answer is 45 square root of 110. Solved in the attached picture.
6. Dallas3506 says: -0.8 or 991/10 (also it can be 10%) Step-by-step explanation: 991/10 = 99.1. Step 1: simplify the fraction 9/10. Step 2: subtract it from the whole number by rewriting 100 with denominator 10, so that both terms share a common denominator: 100 = (100 × 10)/10. Then (100 × 10 - 9)/10 = 991/10. Final result: 991/10 = 99.1.
7. acrespo3425 says: ~Hello There!~ You can use Pythagoras' theorem: a^2 = c^2 - b^2. Substitute the values in: a^2 = 2^2 - 1^2, so a^2 = 3. Square root it: a = $\sqrt{3}$
2023-02-08 07:57:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.535039484500885, "perplexity": 3224.2320787131016}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500719.31/warc/CC-MAIN-20230208060523-20230208090523-00875.warc.gz"}
https://plainmath.net/93599/joel-spends-1-2-of-his-monthly-income-on
# Joel spends 1/2 of his monthly income on food and rent, 1/4 of his income on clothing and 1/12 on his entertainment. He saves the rest, which is $500. What is his monthly income? oopsteekwe 2022-10-12 Answered
Jimena Torres
Let his income be x. Then
$\frac{x}{2}+\frac{x}{4}+\frac{x}{12}+500=x$
$x-\frac{x}{2}-\frac{x}{4}-\frac{x}{12}=500$
$\frac{12x-6x-3x-x}{12}=500$
$\frac{2x}{12}=500$
$x=3000$
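A one-line exact-arithmetic check of the answer (my addition):

```python
from fractions import Fraction

income = 3000
spent = (Fraction(1, 2) + Fraction(1, 4) + Fraction(1, 12)) * income
print(income - spent)   # 500, matching the stated savings
```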
2022-11-30 17:58:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 22, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6120607852935791, "perplexity": 3890.6920312352686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710765.76/warc/CC-MAIN-20221130160457-20221130190457-00658.warc.gz"}
http://pangea-model.org/index.php?section=description,glossary
The Pangea Model Glossary of terms and concepts • ArcGIS geoprocessor: ESRI/ArcGIS is distributed with a Python library (class) called geoprocessor in ArcGIS 9.x and arcpy in ArcGIS 10.x. It essentially provides all the tools from the toolbox (nested red folders in e.g. ArcMap), available as methods. This provided Pangea v1.x with all the tools that a GIS analyst would use to e.g. build grids and summarize data, allowing Pangea to automate these processes. Pangea v2.x is progressively implementing all relevant tools in MATLAB. • Cell (grid -) : In Pangea, a grid cell is a geometric entity that is a component of a grid. It has a geographical "reality", often irregular boundaries, complex connections to other grid cells, and delineates a region of space with heterogeneous content (in terms of environmental media, in the general case). • Compartment : In Pangea, a compartment is an abstract component of the virtual system. It has a simple geometry, simple connections to other compartments, and it represents one homogeneous component of one grid cell (e.g. the medium fresh water of some grid cell south of Chicago). • Computation engine : Set of tools for solving large systems of equations (ODEs, linear systems, etc.). In Pangea v1.x, the computation engine was implemented in Python with an external module implemented in MATLAB (for the computational part involving large sparse matrices). In Pangea v2.x it is fully implemented in MATLAB. • Environmental Model (EM) : Environmental Models (EMs, for lack of a better terminology) are models and data sets that parameterize/characterize environmental media (e.g. fresh water hydrology) or types of regions (e.g. land cover). They are substance-independent. The atmospheric model, for example, provides tools for creating 3D atmospheric grids, projecting wind speeds, computing horizontal and vertical flows, and processing rain data sets. EMs are explicitly spatial; they are based on the geometry and geography of the features that they cover. By contrast, EPMs (see below) describe substance-related processes, e.g. diffusion or degradation. They are spatial in an abstract way: the atmospheric advection EPM, for example, receives flows, compartment dimensions, etc. from the atmospheric EM, but it uses them without knowing e.g. their "true" location or geometry (which do not exist, as compartments are abstract/virtual). EMs therefore characterize part of the geometric system, whereas EPMs work in the virtual system. Pangea provides the following set of EMs by default: • PAM: Pangea Atmospheric Model, parameterized using a reference year (2005) of GEOS-Chem (GEOS-4, 2°$$\times$$2.5° global 55-layer grid) wind fields, or higher resolution years (2013 on, supporting 72-layer 0.25°$$\times$$0.3125° continental grids nested in a global 2°$$\times$$2.5° grid) defined by GEOS-FP. • PHM: Pangea Hydrological Model. Two versions are available: the first/historical one is based on the WWDRII 0.5°$$\times$$0.5° fresh water model, and the second (in development) is based on the HydroBASINS model/database. • PTM: Pangea Terrestrial Model, parameterized using the GlobCover data set. It uses the hydrological grid as defined by PHM. At this stage, it defines the sediments grid as well. • POM: Pangea Oceans Model. In development. Not the focus currently.
• Environmental Process Model (EPM) : Environmental Process Models (EPMs, for lack of a better terminology) are models that characterize substance-specific environmental processes such as advection and degradation. The difference between EMs and EPMs is described in the EMs section. Pangea comes with several sets of EPMs: a first/historical set based on IMPACT2002 and USEtox 1.x, a second based on USEtox 2.0 (the consensus model endorsed by UNEP/SETAC), and a third (the default) that updates the USEtox set with sediment-related EPMs from SimpleBox. • GIS engine : Set of tools and resources relevant for performing all GIS tasks necessary for the functioning of Pangea, mainly the creation of global 3D multi-scale grids and the projection of geo-referenced data sets. The GIS engine is a cascade of functions that use the MATLAB Mapping toolbox, ArcGIS, and Quantum GIS depending on their availability. Currently, MATLAB Mapping and ArcGIS are mandatory, but the objective is to build an engine that can work with any library and take advantage of e.g. ArcGIS when available. • Local to global : The model works with grids whose cell surface areas typically range from a few square kilometers to the size of a continent. This makes it possible to design grids with a high/local resolution at locations of interest while keeping the ability to obtain results globally, which allows e.g. comparing local versus global results. This is made possible by the GIS engine, which can build project-specific multi-scale grids and project spatial data onto them at run time. • Medium : Environmental medium, e.g. air, fresh water, or agricultural soil. • Multi-pathways exposure : Six or seven pathways are currently considered important for studying an environmentally mediated multi-pathway exposure: inhalation and ingestion, with the latter through drinking water, fish, beef, eggs, above-ground vegetation (e.g. cereals, fruits, and vegetables) and below-ground vegetation (e.g. carrots and potatoes). • Multi-scale (spatial) : Multi-scale grids are grids composed of cells whose sizes span multiple orders of magnitude, e.g. 10km×10km for the smallest (highest resolution) and 1000km×1000km for the largest (lowest resolution). Multi-scale approaches are relevant in contexts where the spatial extent of the modeling domain is large (here global), yet we need a high-enough spatial resolution for capturing e.g. the specifics of the direct vicinity of emission sources (of pollutants) and receptors (e.g. people). The basis of the multi-scale approach implemented in Pangea is what we call a refinement potential (RP): a scalar field which defines at each location the "need for having a high resolution". The RP is integrated into an iterative procedure that refines a low resolution background grid until the integral over each cell is below a given threshold (or until a given refinement depth is reached); a toy sketch of this refinement loop is given after this glossary. The outcome of this process is a multi-scale grid called the results grid, the grid onto which all results are projected ultimately. Other grids are built based on the results grid and/or the refinement potential. • Multimedia fate (elimination) and transport : Transport and elimination of substances in the environment that involve multiple media, e.g. accounting for exchanges between air, various soil types and water, and for the degradation within each one of these media. • Spatial : Pangea is spatially explicit, which means that it uses real geo-referenced spatial data.
It creates grids that cover real geographic spaces, uses watershed delineations that correspond to real streams and watersheds, is parameterized using geo-referenced data (e.g. longitude and latitude of emission sources, and rasters of population densities), etc. This is opposed to (non-spatial) generic models (like SimpleBox) or to abstract spatial models (like USEtox™) that use abstract[1] nested regions whose features emulate local, continental, and global extents. [1] USEtox uses a continental scale which is parameterized based on the IMPACTWorld model. This scale therefore corresponds to specific geographic regions.
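The refinement loop referenced from the Multi-scale entry above is described in the glossary only in words. The sketch below is not Pangea code: the real GIS engine refines geographic grid cells in MATLAB/ArcGIS, whereas this toy version subdivides axis-aligned squares on a unit domain, with a made-up refinement potential.

```python
def integral_of_rp(cell, rp, samples=4):
    """Crude estimate of the integral of the refinement potential over a square cell."""
    x0, y0, size = cell
    step = size / samples
    total = 0.0
    for i in range(samples):
        for j in range(samples):
            total += rp(x0 + (i + 0.5) * step, y0 + (j + 0.5) * step)
    return total * step * step

def refine(rp, threshold=0.02, max_depth=6):
    """Split any cell whose integrated refinement potential exceeds the threshold."""
    cells = [(0.0, 0.0, 1.0)]                      # one coarse background cell
    for _ in range(max_depth):
        next_cells = []
        for (x0, y0, size) in cells:
            if integral_of_rp((x0, y0, size), rp) > threshold:
                half = size / 2                    # subdivide into four children
                next_cells += [(x0, y0, half), (x0 + half, y0, half),
                               (x0, y0 + half, half), (x0 + half, y0 + half, half)]
            else:
                next_cells.append((x0, y0, size))  # coarse cell is already good enough
        cells = next_cells
    return cells

# Made-up refinement potential: high "need for resolution" near a source at (0.2, 0.7)
rp = lambda x, y: 1.0 / (0.01 + (x - 0.2) ** 2 + (y - 0.7) ** 2)
grid = refine(rp)
print(len(grid), "cells; smallest cell size:", min(c[2] for c in grid))
```

Running this produces small cells near the assumed source and much larger cells far from it, which is the multi-scale behavior the glossary describes.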
2019-07-24 00:04:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41310232877731323, "perplexity": 4756.427121916121}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195530246.91/warc/CC-MAIN-20190723235815-20190724021815-00031.warc.gz"}
https://www.physicsforums.com/threads/harmonic-oscillator-in-3d-different-values-on-x-y-and-z.885042/
# I Harmonic Oscillator in 3D, different values on x, y and z Tags: 1. Sep 12, 2016 ### Ofinns Hi, For a harmonic oscillator in 3D the energy level becomes En = hw(n + 3/2) (Note: h = h_bar and n = nx + ny + nz). If I then want the 1st excited state it could be (1,0,0), (0,1,0) and (0,0,1) for x, y and z. But what happens if for example y has a different value from the beginning? Like this: V(x,y,z) = (1/2)mw²(x² + 4y² + z²), and for this decide the energy level AND degeneracy for the 1st excited state. I can only find simple examples when x, y and z are equal and 1. Best regards 2. Sep 12, 2016 ### Demystifier In such a more general case you have $$E_{n_1n_2n_3}=\hbar \omega_1 \left( n_1+\frac{1}{2} \right) + \hbar \omega_2 \left( n_2+\frac{1}{2} \right) + \hbar \omega_3 \left( n_3+\frac{1}{2} \right)$$ 3. Sep 12, 2016 ### Ofinns Can you elaborate on that? Is 4y² just n2 here? And in that case you will get three different energy values: E100 = 3hw1/2, E010 = 6hw2, E001 = 3hw3/2. Which one is the 1st excited state? Is it E010? 4. Sep 12, 2016 Staff Emeritus No, it enters in as the frequency. 5. Sep 12, 2016 ### Demystifier $$\omega_1=w$$ $$\omega_2=2w$$ $$\omega_3=w$$ Therefore $$E_{000}=2\hbar w$$ $$E_{100}=E_{001}=3\hbar w$$ $$E_{010}=4\hbar w$$ Hence the first excited states are $E_{100}=E_{001}$. 6. Sep 12, 2016 ### Ofinns Thank you, now I understand that part. What will the degeneracy become for the 1st excited state then? Can I use the same formula gn = (1/2)(n+1)(n+2) for this case? 7. Sep 12, 2016 ### Demystifier It's 2. No. 8. Sep 12, 2016 ### Ofinns Why is it 2? What formula do you use to calculate that? (Sorry for all the questions..) 9. Sep 12, 2016 ### Demystifier It follows from the last line of post #5. There you see that there are 2 "first excited states" with equal energies. Hence the degeneracy of the first excited state is 2. 10. Sep 12, 2016 ### Ofinns Oh! Thank you so much for the answers, this has been bugging me for a while now. Best regards 11. Sep 22, 2016 ### Ofinns Late questions.. but why is w2 = 2w and not 4w? 12. Sep 22, 2016 ### Demystifier Because, by definition, $$V(x)=\frac{1}{2}m\omega^2 x^2$$ 13. Sep 22, 2016 ### Ofinns Right, of course. Thank you.
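Not part of the thread, but the counting can also be automated, which is a handy cross-check for anisotropic potentials like this one, where the isotropic degeneracy formula gn = (1/2)(n+1)(n+2) no longer applies. The sketch enumerates E in units of ħw for ω1 = ω3 = w and ω2 = 2w; the code and names are my own.

```python
from collections import Counter
from itertools import product

def energy(n1, n2, n3, w1=1.0, w2=2.0, w3=1.0):
    """E/(hbar*w) for the anisotropic oscillator V = (1/2) m w^2 (x^2 + 4 y^2 + z^2)."""
    return w1 * (n1 + 0.5) + w2 * (n2 + 0.5) + w3 * (n3 + 0.5)

levels = Counter(energy(*n) for n in product(range(6), repeat=3))
for E in sorted(levels)[:4]:
    print(f"E = {E} hbar*w, degeneracy {levels[E]}")
# ground state 2 hbar*w (degeneracy 1), first excited level 3 hbar*w (degeneracy 2), ...
```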
2017-12-17 01:29:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6812430620193481, "perplexity": 2395.8871618580956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948592202.83/warc/CC-MAIN-20171217000422-20171217022422-00795.warc.gz"}
http://blog.benw.xyz/category/physics/
# Minecraft Physics: Steve in Drag An object falling under constant acceleration $g$ travels a distance $h$ in an amount of time $t$ given by $h = v_ot+\frac{1}{2}gt^2$ where $v_o$ is the object's starting velocity.  For an object dropped from rest, $v_o=0$.  Plugging that in and solving for $g$, we find $g=\frac{2h}{t^2}$   (1) Using this equation, we can approximate the acceleration due to gravity in Minecraft by timing how long it takes something to fall some distance.  This model assumes constant acceleration, which means it ignores things like drag forces from air resistance.  It turns out, Minecraft actually does have air resistance, but we will get there a bit later. Youtuber nopefully used the above approximation to determine the gravitational acceleration of sand blocks (further analyzed here).  Since then, though, the addition of command blocks and scoreboards provides a simple way to time things in game, so that's the approach I'll take. ## Experiment Figure 1: The dropper platform. Figure 2: Timer stop and reset. The clock circuit is in the background. The experiment is simple: jump off of stuff, time it with command blocks, and plug the result into the above equation (1) to figure out the gravitational acceleration of a player.  For timing, I used Sethbling's stopwatch design.  The timer-starting command block (figure 1) is activated by a lever which also triggers a trapdoor, causing you to fall onto a pressure plate below, stopping the timer (figure 2). I repeated this experiment five times at three different heights.  The data is in the table below (remember, a Minecraft block is 1 meter on each side):

Table 1: Results

| Height (m) | Average Fall Time (s) | Acceleration (m/s²) |
|---|---|---|
| 10 | 0.94 | 22.63 |
| 20 | 1.34 | 22.28 |
| 40 | 1.88 | 22.63 |

So using model (1), Minecraft's gravitational acceleration is around 23 m/s².  But as I mentioned above, we're neglecting air resistance.  There isn't an easy way to experimentally measure the air resistance, but luckily a video game provides us with something that nature does not: the source code.  So let's cheat a little bit and take a look under the hood. ## Cheating In the EntityLivingBase class, there's a method named moveEntityWithHeading that is called 20 times per second (each "tick"), updating the entity's velocity.  If there isn't a block under the living entity (in other words, it's falling), the downward velocity is increased by 0.08 and decreased by 2% each tick.  This means there's a constant acceleration component that is 0.08 blocks/tick² = 32 m/s² and a drag force that is directly proportional to the velocity.  32 m/s² is a lot different than our measured 23 m/s², so clearly the drag force is not something that can be ignored.  Also, 32 m/s² is over 3 times greater than the gravity on Earth!  (An inventory of cobble is seriously heavy.) On Earth, the gravitational acceleration is about 9.8 m/s² near the surface, and is the same for all objects regardless of how much they weigh.  This isn't the case in Minecraft, as you can see in the more detailed table on the Minecraft wiki.  The "drag" contribution is also different for different entities (which is slightly more realistic since air resistance depends on the size and shape of the object). ## Fluid Dynamics The existence of a non-negligible drag force complicates the task of experimentally determining Minecraft's gravitational acceleration.  But it also means that we have a more fun differential equation to play with!  We can start out by writing down the equations of motion for the object falling.
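Before doing that, a quick numerical aside (mine, not from the original post): the per-tick rule quoted from EntityLivingBase can be simulated directly and fed back through the constant-acceleration estimate g = 2h/t² used in Table 1. The tick ordering assumed below (add 0.08, then multiply by 0.98) follows the description above; anything beyond that, including how in-game timing interacts with the first tick, is not modeled. With these assumptions the implied 2h/t² comes out in the mid-to-high 20s m/s², between the measured ~23 m/s² and the bare 32 m/s² constant.

```python
def fall_time(height_m, gravity=0.08, drag=0.02, dt=1 / 20):
    """Tick-by-tick simulation of the velocity update described in the Cheating section."""
    v = 0.0          # blocks per tick, downward
    y = 0.0          # blocks fallen (one block = one meter)
    t = 0.0
    while y < height_m:
        v = (v + gravity) * (1 - drag)   # per-tick update: +0.08, then -2%
        y += v
        t += dt
    return t

for h in (10, 20, 40):
    t = fall_time(h)
    print(f"h = {h:2d} m:  t = {t:.2f} s,  2h/t^2 = {2 * h / t**2:.1f} m/s^2")
```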
From the source code, we know that the drag force is proportional to the velocity, so we can use a linear drag model for the forces: $ma=mg - kv$ where $m$ is the object's mass, $a$ is the total acceleration, $g$ is the acceleration due to gravity, $k$ is some sort of drag coefficient and $v$ is the object's velocity.  Remembering from physics class that acceleration is the first derivative of velocity with respect to time (denoted $\dot{v}$), we can substitute $a=\dot{v}$.  Doing that and dividing both sides by $m$ gives us an equation for the total acceleration: $\dot{v}=g-\frac{k}{m}v$ The value of $\frac{k}{m}$ is what's in the "Drag" column of the Minecraft wiki table.  Solving for the velocity as a function of time, we find $v(t) = \frac{mg}{k}(1 - e^{-\frac{k}{m}t})$   (2) We can take some values of $g$ and $\frac{k}{m}$ from the table and graph the velocity (equation 2) of each of the different entities as they fall: The velocity increases for a bit, but the rate at which it increases (the acceleration) slows with time until the entity travels at a constant velocity.  This is called terminal velocity, and it's reached when the gravitational force and the force due to air resistance balance out.  You can see that a falling player can catch up to most other entities, except for fired arrows.* You can see the raw data in a Google spreadsheet here.  I encourage you to try out this setup and other Minecraft physics experiments.  Please share your findings! *So if, for example, you're engaging in a PvP fight on Overcast Network and someone tries to jump out of the world to deny you a kill, look over the edge and shoot!  Your arrow has a chance of catching up to them. # From the notes: Two masses on a string going through a hole Two masses on a string going through a hole.  Yeah, that's the stuff we learn about in physics class.  Anyway, this is the first in a (possible) series of posts where I work out some problem from my lecture notes or homework.  The reason for doing this is because 1) there's a (small) chance it will be useful or interesting to someone else, and 2) the drawn-out process of typing this up and thinking about how to explain it helps me study. Figure 1, two masses on a string going through a hole. Consider two masses connected by a string (figure 1).  The ideal string passes through a hole in a plate which is parallel to the x-y plane.  One mass, $m$, is sitting on the plate and is free to rotate about the hole with no friction.  The second mass, $M$, is hanging below the plate and moves only in the vertical z direction under the influence of gravity.  The total length of the string is $l$, and we define $r$ and $s$ to be the distances between the hole and $m$ and $M$, respectively, so that $l=r+s$. We can use Lagrangian mechanics to explore the system.  In Cartesian coordinates, the total kinetic energy is the sum of the kinetic energy of each mass: $T=\frac{1}{2}m(\dot{x_{m}}^2+\dot{y_{m}}^2+\dot{z_{m}}^2)+\frac{1}{2}M(\dot{x_{M}}^2+\dot{y_{M}}^2+\dot{z_{M}}^2)$      (1) where we've used the dot notation for time derivatives. Considering that $m$ is only rotating about the $z$ axis and $M$ is moving only vertically, it makes sense to consider switching to a more reasonable coordinate system.
We can transform to cylindrical coordinates with:

$x_{m}=r\cos\theta$
$y_{m}=r\sin\theta$
$z_{m}=0$
$x_{M}=y_{M}=0$
$z_{M}=-s=-(l-r)$

Taking derivatives of these with respect to time and inserting them into (1), we can rewrite the kinetic energy in cylindrical coordinates as

$T=\frac{1}{2}m(\dot{r}^2+(r\dot{\theta})^2)+\frac{1}{2}M\dot{r}^2$

where $r$ and $\theta$ are now the generalized coordinates of the system. The potential energy depends only on the height of $M$ since $m$ is confined to sit on the plate, so

$V=-Mg(l-r)$

With $T$ and $V$ we can write the Lagrangian $L=T-V$:

$L=\frac{1}{2}m(\dot{r}^2+(r\dot{\theta})^2)+\frac{1}{2}M\dot{r}^2+Mg(l-r)$

We see that $\theta$ does not appear in the Lagrangian, which means that the generalized momentum corresponding to $\theta$ is a conserved quantity:

$p_{\theta}=\frac{\partial L}{\partial \dot{\theta}}=mr^2\dot{\theta}=rmv_m=a$     (2)

$a$ can be identified as the angular momentum of mass $m$. We can also look at the total energy $E=T+V$:

$E=\frac{1}{2}m(\dot{r}^2+(r\dot{\theta})^2)+\frac{1}{2}M\dot{r}^2-Mg(l-r)$     (3)

From (2) we see that $\dot{\theta}=\frac{a}{mr^2}$. Substituting this in (3) gives

$E=\frac{1}{2}m\dot{r}^2+\frac{a^2}{2mr^2}+\frac{1}{2}M\dot{r}^2-Mg(l-r)$

We can separate the constants on the left hand side and use the remaining terms to define an effective potential $V'(r)$:

$\frac{E+Mgl}{m+M}=\frac{1}{2}\dot{r}^2+\frac{1}{2}\frac{a^2}{m(m+M)r^2}+\frac{Mgr}{m+M}$

$V'(r)=\frac{1}{2}\frac{a^2}{m(m+M)r^2}+\frac{Mgr}{m+M}$

Imagine $m$ is rotating about the hole. This means there will be some centripetal (or centrifugal, depending on your reference system) acceleration which will pull the hanging mass $M$ up. We want to find the $r$ at which the effective potential has an extremum, i.e. where the net radial force vanishes. In other words:

$\frac{\partial V'}{\partial r}=0$

Setting the derivative equal to zero gives

$\frac{-a^2}{m(m+M)r^3}+\frac{Mg}{m+M}=0$

And solving for $r$:

$r=\sqrt[3]{\frac{a^2}{mMg}}$

which gives the radius of "orbit" of mass $m$ about the hole for a given angular momentum $a$ such that $M$ remains suspended in the air.

# The physics of Jake Brown's ollie 720

I'm a couple weeks late on this, but recently Jake Brown landed the first ever ollie 720. To put that in layman's terms, he did two full rotations without holding the board to his feet with his hands. Check it out:

So this seems crazy. How did he do that? How did he keep the board on his feet? Magnets? It turns out, it's actually very simple physics:

#### The Conservation of Energy

That's it! Jake Brown doesn't really have to do anything because the conservation of energy won't be violated (it's a law!). So yeah, thanks physics! Skateboarding is so easy.

# Popularizing Science One Operator at a Time

One thing I'm very interested in is the popularization of science and other STEM related fields. I feel that it's very important to maintain a society that is both scientifically literate and scientifically enthusiastic. This has motivated me to propose a new notation for the total angular momentum operator in quantum mechanics that will be more appealing to the general public and more specifically the "cool youth" of today:

It's like flipping through a celebrity gossip magazine while you compute matrix elements!

# Monday Exams

If you do a series expansion of $studying(t)$ around $t = Sunday,$ you'll find that all the terms drop out and I don't study.
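Returning to the two masses problem above: here is a quick numeric check of the last result. It is only a sketch with made-up values for $m$, $M$, $g$ and the angular momentum $a$; it evaluates the effective potential $V'(r)$ defined above and checks that its derivative vanishes at $r=\sqrt[3]{a^2/(mMg)}$.

```python
# Numeric check of the equilibrium radius, using arbitrary illustrative values.
m, M, g, a = 0.5, 2.0, 9.81, 3.0   # kg, kg, m/s^2, kg m^2/s (made up)

def V_eff(r):
    """Effective potential V'(r) from the derivation above."""
    return 0.5 * a**2 / (m * (m + M) * r**2) + M * g * r / (m + M)

r_star = (a**2 / (m * M * g)) ** (1.0 / 3.0)

# central finite difference of dV'/dr at r_star; should come out ~0
h = 1e-6
dV = (V_eff(r_star + h) - V_eff(r_star - h)) / (2 * h)
print(f"r* = {r_star:.4f} m, dV'/dr at r* = {dV:.2e}")
```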
# Fun with Diffraction Gratings

A laser beam passing through a transmission diffraction grating straight on gives the standard diffraction pattern we all know and love. It's a bit more interesting, though, if the beam hits the grating at an angle:

figure 1

Most intro optics books cover this situation, and the result is (equation 1):

$a[\sin(\theta_{m}) - \sin(\theta_{i})] = m\lambda$

where $a$ is the grating spacing, $\theta_{m}$ is the angle of the mth maximum, $\theta_{i}$ is the incident angle, $m$ is the order of the maximum, and $\lambda$ is the wavelength. We can rewrite this (homework) in a more useful way as (equation 2):

$\theta_{m}=\arcsin[\frac{m\lambda}{a} + \sin(\theta_{i})]$

A similar, but slightly more complicated situation happens when you rotate the grating instead of the laser:

figure 2

With a rotated grating (figure 2), the laser is still hitting the grating at an angle as it is in figure 1. So, starting with equation 2 and using some geometry we get the angles that satisfy the maxima condition:

$\theta_{m'}=\arcsin[\frac{m\lambda}{a} + \sin(\theta_{g})] - \theta_{g}$

This seemed like a fun thing to model in Mathematica, especially since I had never played with any of the graphics features before. You can view the source here. You need Wolfram's CDF player, but it's totally worth it because then you can look through all of the awesome demonstrations they have online.
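As a rough numeric counterpart to the Mathematica model, here is a small sketch that evaluates the rotated-grating expression above. The wavelength, grating pitch and rotation angle are made-up values for illustration, not anything taken from the original post.

```python
# Evaluate the rotated-grating maxima condition for a few orders m.
import math

wavelength = 650e-9          # m (assumed red laser)
a = 1e-3 / 600               # grating spacing for an assumed 600 lines/mm grating
theta_g = math.radians(20)   # assumed grating rotation angle

for m in range(-2, 3):
    arg = m * wavelength / a + math.sin(theta_g)
    if abs(arg) <= 1:
        theta = math.degrees(math.asin(arg)) - math.degrees(theta_g)
        print(f"m = {m:+d}: maximum at {theta:6.1f} degrees")
    else:
        print(f"m = {m:+d}: no propagating order (arcsin argument > 1)")
```

As expected, the m = 0 beam comes out undeviated and higher orders disappear once the arcsin argument exceeds one.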
2017-08-20 15:30:03
{"extraction_info": {"found_math": true, "script_math_tex": 81, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 81, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7696092128753662, "perplexity": 603.2404009801239}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106779.68/warc/CC-MAIN-20170820150632-20170820170632-00044.warc.gz"}
https://math.stackexchange.com/questions/1419938/example-of-topological-spaces-where-sequential-continuity-does-not-imply-continu
Example of topological spaces where sequential continuity does not imply continuity

Please give an example of a function $f : X \to Y$ where $X,Y$ are topological spaces, such that there exists $x \in X$ such that for every sequence $\{x_n\}$ in $X$ converging to $x$, $\{f(x_n)\}$ converges to $f(x)$, but $f$ is not continuous at $x$; also, please give an example where $f$ is not continuous anywhere in the domain, but for every $x \in X$ and sequence $\{x_n\}$ in $X$ converging to $x$, $\{f(x_n)\}$ converges to $f(x)$.

Let $X=(\Bbb R,\tau_{cc})$ be the real line with the cocountable topology, i.e. closed sets are the countable sets in $\Bbb R$. Note that any subset $A$ of $X$ is sequentially closed since $A$ contains the limit of every convergent sequence in $A$, as convergence in $X$ means that a sequence is eventually constant. Let $Y$ be the discrete real line, and let $f:X\to Y$ be given by the identity. Clearly $f$ is sequentially continuous; however, it is not continuous at any point $x$, since continuity at $x$ would mean that $\{x\}$ is a neighborhood of $x$ in $X$.

Here is an example where the function $f$ is everywhere sequentially continuous but nowhere continuous, the space $X$ is completely regular (in particular Hausdorff) and the space $Y$ is finite discrete. Let $I$ be an uncountable set and consider the space $2^I$ with its product topology, considered as the set of all functions $x : I \to \{0,1\}$. Note that $2^I$ is compact Hausdorff by Tychonoff's theorem; in particular $2^I$ is completely regular. Let $X_0$ be the set of all $x \in 2^I$ such that $x^{-1}(\{1\})$ is countable (so an element of $X_0$ "mostly" takes the value 0). By definition of the product topology, $X_0$ is dense in $2^I$. Also, $X_0$ is sequentially closed. To see this, suppose $x_1, x_2, \dots \in X_0$ and $x_n \to x$, so that $x_n(i) \to x(i)$ for every $i \in I$. If we let $A_n = x_n^{-1}(\{1\})$, which is countable, then it's clear that $x^{-1}(\{1\}) \subset \bigcup_n A_n$ which is thus also countable. So $x \in X_0$. Likewise let $X_1$ be the set of all $x \in 2^I$ such that $x^{-1}(\{0\})$ is countable. Then $X_1$ is also dense and sequentially closed, and $X_0 \cap X_1 = \emptyset$. Set $X = X_0 \cup X_1$ with the subspace topology inherited from $2^I$. Then $X$ is also completely regular. Let $Y = \{0,1\}$ with the discrete topology, and define $f : X \to Y$ by $$f(x) = \begin{cases} 0, & x \in X_0 \\ 1, & x \in X_1. \end{cases}$$ Now $f$ is sequentially continuous everywhere, because $X_0, X_1$ are sequentially closed. But for any nonempty open $U \subset X$, since $X_0, X_1$ are both dense in $X$, we see that $f$ takes both the values $0$ and $1$ on $U$. Therefore $f$ is nowhere continuous. For (locally) compact Hausdorff examples, see A function on an LCH space that is sequentially continuous but nowhere continuous.

Let $\omega_1$ be the first uncountable ordinal and take $X=[0, \omega_1]$ (so really $\omega_1\cup\{\omega_1\}$) with the order topology. Every sequence in $[0,\omega_1)$ is bounded, hence every sequence in $X$ which converges to $\omega_1$ must be eventually constant. Therefore any $f\colon X\to Y$ is sequentially continuous at $\omega_1$. Now take $f(x)=0$ if $x<\omega_1$ and $f(\omega_1)=1$. This function is not continuous at $\omega_1$, because $\{\omega_1\}$ is not open, since $\omega_1$ is not a successor.
2019-10-17 15:51:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9785241484642029, "perplexity": 42.56886799443715}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986675409.61/warc/CC-MAIN-20191017145741-20191017173241-00469.warc.gz"}
http://openmath.org/pipermail/om/2003-September/000663.html
# [om] semantics of n-ary xor?

Andreas Strotmann Strotmann at rrz.uni-koeln.de
Wed Sep 17 13:07:27 CEST 2003

What exactly is the semantics of an n-ary xor? I'm not kidding -- I really don't know. Let me explain why.

At first glance, an obvious definition is xor(a,b,c):=xor(a,xor(b,c)) -- i.e. n-ary xor is something like a parity predicate.

It struck me that for infinite index sets I, xor_{i\in I}(P(i)) is always undefined and thus not very useful, which of course bugged me since I was the one who brought up the topic of "big" versions of n-ary operators, and I fell to wondering if the "obvious" semantics of n-ary xor is really the correct one.

And I realized that textbooks tend to explain the meaning of binary xor as "either...or...(but not both)" -- and that the n-ary version of that phrase (either ... or... or... or...) does *not* mean parity -- it means "only one of these choices". Thus, a "natural" (as opposed to "obvious") semantics of n-ary xor is "true if exactly one of the arguments is true, false otherwise" -- which very nicely generalizes to a well-known "big-xor" operator, namely the "exists-uniquely" quantifier, which I suspect retains a well-defined semantics even in arbitrary transfinite contexts.

Now I ask you: what exactly *is* the meaning of n-ary xor in MathML (or OpenMath, for that matter)?

-- Andreas
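For what it's worth, the difference between the two readings is easy to see in a couple of lines of code. The snippet below merely illustrates the distinction described above; it says nothing normative about OpenMath or MathML.

```python
# Two candidate readings of n-ary xor: a fold of binary xor (parity) versus
# "exactly one argument is true".
from functools import reduce
from operator import xor

def xor_parity(*args):
    """n-ary xor as a left fold of binary xor: true iff an odd number of args are true."""
    return reduce(xor, map(bool, args), False)

def xor_exactly_one(*args):
    """n-ary xor read as 'either ... or ... (but not both)': true iff exactly one arg is true."""
    return sum(map(bool, args)) == 1

# The two readings agree for two arguments but diverge for three or more:
print(xor_parity(True, True, True), xor_exactly_one(True, True, True))   # True False
```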
2015-10-05 03:57:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8282360434532166, "perplexity": 7039.261134394465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736676622.16/warc/CC-MAIN-20151001215756-00071-ip-10-137-6-227.ec2.internal.warc.gz"}
https://answers.gazebosim.org/answers/12047/revisions/
# Revision history

Hi, depending on how you installed gazebo:

a) If from source, then the file is where you downloaded the repository: /MyPath/gazebo/plugins/RandomVelocityPlugin.cc

b) If from debian (sudo apt-get install gazebo..): you can find only the header file by running $ locate RandomVelocityPlugin.hh; it should give you a path similar to: /usr/include/gazebo-5.1/gazebo/plugins/
2021-05-17 17:12:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2009340226650238, "perplexity": 9933.350132136184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991258.68/warc/CC-MAIN-20210517150020-20210517180020-00587.warc.gz"}
https://studyadda.com/sample-papers/jee-main-sample-paper-47_q61/301/303682
Let $f(x)=x^{2}-5x+6$, $g(x)=f(|x|)$, $h(x)=|g(x)|$ and $\phi(x)=h(x)-(x)$ be four functions, where $(x)$ is the least integer $\ge x$. Then, the number of solutions of the equation $g(x)=0$ is

A) 0    B) 2    C) 4    D) 6

Given $g(x)=0$
$\Rightarrow$ $f(|x|)=0$
$\Rightarrow$ $x^{2}-5|x|+6=0$
$\Rightarrow$ $\left\{ \begin{matrix} x^{2}-5x+6=0,\,\,x\ge 0 \\ x^{2}+5x+6=0,\,\,x<0 \\ \end{matrix} \right.$
$\Rightarrow$ $\left\{ \begin{matrix} x=2,\,3,\,\,x\ge 0 \\ x=-3,\,-2,\,\,x<0 \\ \end{matrix} \right.$
$\therefore$ Number of solutions = 4
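A quick numeric check of this count (added here, not part of the original solution): treat the equation as a quadratic in $|x|$ and verify each candidate root.

```python
# x^2 - 5|x| + 6 = 0 is a quadratic in |x| with roots |x| = 2 and |x| = 3,
# so the candidate solutions are x = -3, -2, 2, 3. Verify and count them.
def g(x):
    return abs(x) ** 2 - 5 * abs(x) + 6

candidates = [-3, -2, 2, 3]
assert all(g(x) == 0 for x in candidates)
print(len(candidates))   # 4 -> option C
```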
2022-01-19 16:21:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9689487218856812, "perplexity": 6790.079316607148}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301475.82/warc/CC-MAIN-20220119155216-20220119185216-00153.warc.gz"}
http://kenshi.miyabe.name/wordpress/?p=1194
# Unified Characterizations of Lowness Properties via Kolmogorov Complexity

News: 19 Jan 2014 submitted, 24 Mar 2015 published
Title: Unified Characterizations of Lowness Properties via Kolmogorov Complexity (with T. Kihara)
Type: Full paper
Journal: Archive for Mathematical Logic, Volume 54, Issue 3 (2015), Pages 329-358. DOI: 10.1007/s00153-014-0413-8

Abstract: Consider a randomness notion $\mathcal C$. A uniform test in the sense of $\mathcal C$ is a total computable procedure that for each oracle $X$ produces a test relative to $X$ in the sense of $\mathcal C$. We say that a binary sequence $Y$ is $\mathcal C$-random uniformly relative to $X$ if $Y$ passes all uniform $\mathcal C$ tests relative to $X$. Suppose now we have a pair of randomness notions $\mathcal C$ and $\mathcal D$ where $\mathcal{C}\subseteq \mathcal{D}$, for instance Martin-Löf randomness and Schnorr randomness. Several authors have characterized classes of the form Low($\mathcal C, \mathcal D$) which consist of the oracles $X$ that are so feeble that $\mathcal C \subseteq \mathcal D^X$. Our goal is to do the same when the randomness notion $\mathcal D$ is relativized uniformly: denote by Low$^\star$($\mathcal C, \mathcal D$) the class of oracles $X$ such that every $\mathcal C$-random is uniformly $\mathcal D$-random relative to $X$. (1) We show that $X\in{\rm Low}^\star({\rm MLR},{\rm SR})$ if and only if $X$ is c.e. tt-traceable if and only if $X$ is anticomplex if and only if $X$ is Martin-Löf packing measure zero with respect to all computable dimension functions. (2) We also show that $X\in{\rm Low}^\star({\rm SR},{\rm WR})$ if and only if $X$ is computably i.o. tt-traceable if and only if $X$ is not totally complex if and only if $X$ is Schnorr Hausdorff measure zero with respect to all computable dimension functions.
2018-12-16 09:35:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.896921694278717, "perplexity": 443.48287493174587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827596.48/warc/CC-MAIN-20181216073608-20181216095608-00183.warc.gz"}
https://docs.duckietown.org/daffy/opmanual_autolab/draft/light_sensors.html
# DEMO - Light sensors

A fully operational Duckietown (see opmanual_duckietown/duckietowns) with watchtowers built to the right specs.

TODO: add the name of the right specs

Accurate measurement of the light field of the Autolab.

This container will show you how to mount the Adafruit TCS34725 sensor on a watchtower and how to correctly plug it in.

## Hardware setup

### Requirements

The upper plate of the watchtower needs the appropriate holes to fit the sensor.

### Soldering

The first thing to do is to solder the pin headers to the actual sensor. To do so, the easiest way is to use a breadboard and cut a few pin headers in order to get a nice 90-degree angle between the sensor and the pins.

### Fix the sensor to the watchtower plate

To do so, put the pin headers into the big hole and fix it with the screws.

### Wire the sensor to the watchtower

The best way to figure out how to do this is to always take the sensor and turn it until you can read what is written on it; that will be our basic position.

Pins on the Raspberry Pi:

o o 1 o o 2 3 4 o 5

Pins on the TCS34725:

o 1 2 4 3 o 5

This is the pin layout; every number corresponds to a wire color (it doesn't matter which color is which number, just make sure that the wire is connected to the right pin header). Now the hardware construction part should be done.

## Software setup

### Requirements

* A sensor which is correctly set up on a running watchtower. Ideally also an external light sensor that measures illuminance in lux that can be used as a reference.
* An environment where you can control the light intensity; it can also be very small (try to use two light intensities below and above the operating point).

### Introduction

To calibrate the sensor we need to measure the light at two different intensities in order to be able to make a linear approximation. To do so, we measure two different light intensities, input the expected values to the sensor, and then make a linear approximation. Measure the two intensities by running the container and following the steps of the instructions provided on the console (don't forget that you need an external reference sensor as well).

### Run the calibration

You can pull the image of the docker container to the agent

### Check calibration of sensor

You can check the calibration by going to the calibration folder, TODO

## Run

To run the sensor container first you need to pull the image:

```
$ docker -H HOSTNAME.local pull gian1717/sensor:p2
```

And then start the container:

```
$ docker -H HOSTNAME.local run -it --net host --privileged --name light-sensor -v /data:/data gian1717/sensor:p2
```

If you want to know more: the whole sensor software is on my GitHub account, in the repositories light sensor and sensor calibration.
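For reference, the two-point calibration described in the introduction amounts to fitting a straight line through two (raw reading, reference lux) pairs. The sketch below is illustrative only; the variable names and numbers are made up and it is not taken from the Duckietown calibration code.

```python
# Two-point linear calibration: pair raw sensor readings at a low and a high
# light level with the reference lux values, fit a line, then map any raw
# reading to lux.
raw_low, raw_high = 120.0, 860.0   # raw TCS34725 readings (assumed)
lux_low, lux_high = 50.0, 400.0    # reference meter readings in lux (assumed)

# linear model: lux = gain * raw + offset
gain = (lux_high - lux_low) / (raw_high - raw_low)
offset = lux_low - gain * raw_low

def raw_to_lux(raw):
    """Convert a raw sensor value to lux using the two-point calibration."""
    return gain * raw + offset

print(raw_to_lux(500.0))   # interpolated estimate between the two operating points
```

Using two intensities below and above the operating point, as suggested above, keeps this linear interpolation inside the range where it was fitted.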
2021-04-22 16:53:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22902515530586243, "perplexity": 2969.368585956923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039594341.91/warc/CC-MAIN-20210422160833-20210422190833-00286.warc.gz"}
http://electronics.stackexchange.com/questions/8365/beginning-arm-cortex-ax-hardware-development/8367
# Beginning ARM Cortex-Ax hardware development

Where can one find information on how to put together a minimal Linux-bootable board based on the Cortex-A family (like the BeagleBoard)? Programming information is plentiful, but hardware knowledge appears more arcane. I'm especially curious about:

• What external components are needed and why.
• Why dev boards seem to end up around $200, even though I could get an OMAP3517 for $15?
• What kind of equipment is needed to create prototypes around a chip such as the OMAP3517?

I've been blogging about the Linux board I've been working on starting here and continuing here. I started thinking I would use a Cortex A8, but eventually settled on the Atmel AT91SAM9G20.

• You can see the components I used in my schematic, but I don't know of a more general explanation of why each is necessary.
• The processor is only about 10-20% of the total cost of parts. Assembly is another $4-40, depending on the quantity built. I suspect the profit margin is 30-50%.
• Depending on the package, the OMAP3517 BGA package has either a 1 mm or 0.65 mm ball pitch. Generally, below 0.8 mm pitch, you need to X-ray at least some fraction of the finished boards to check for errors. (Just for the record, the OMAP3517 1 mm packages aren't actually available yet.)

If you have other, more specific questions, I'd be glad to try to answer them.

Very concise and informative answer. Thank you. – Imbrondir Dec 28 '10 at 13:56

People don't build their own OMAP boards because you need either 1) an x-ray machine, or 2) extreme patience and experience with re-balling BGAs and hot-air rework in order to successfully solder high-density BGA packages. Also, asking why the board costs $200 while the chip costs $15 is like complaining about why software costs money even if you use a free and open source compiler and libraries.

Not complaining. Just want to understand. Thank you. – Imbrondir Dec 28 '10 at 13:53

For a Linux system your essential components will be a processor, RAM (probably at least DDR) and flash. Then you'll need all the extra stuff: power, LCD connectors, USB, Ethernet... any peripherals you want, etc. The BeagleBoard uses a special PoP (package on package) technology where the flash and RAM are literally mounted onto the top of the OMAP. This is how they can get the board so small. But... it costs some serious dough, like way more than $15.

As to why the board will cost 200 USD+ when the chip is only 15 USD, I can give you an answer. You need to check out the other required components:

• OMAP: Alright, you'll certainly need that, but how can I power it?
• PMIC: Power Management IC. Alright, now I have my required 4 voltages; now for some memory.
• SDRAM: Depending on what you will need, you need an appropriate amount of SDRAM.
• NAND flash: Definitely you want some non-volatile memory.
• SD card: Maybe some exchangeable memory.
• Connectors: Count in the cost of connectors for USB, LAN, and possibly WLAN, Bluetooth or GPS.

Alright, now you have the components, but what do you put them on? Designing such a system usually leads to 10 or 12 layer PCBs, possibly including micro-vias, which are not cheap themselves. Finally, add testing and bring-up of the board, account for a few prototypes before the production can actually start, and you have your USD 200 :)
2013-05-25 11:22:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29207390546798706, "perplexity": 2287.4845144710403}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705939136/warc/CC-MAIN-20130516120539-00008-ip-10-60-113-184.ec2.internal.warc.gz"}
http://mymathforum.com/algebra/12196-index-fibonacci-number.html
Index of a Fibonacci number

April 9th, 2010, 08:56 PM, #1 (Newbie, Joined: Nov 2009, Posts: 6)

Given a Fibonacci number, is there any efficient way to compute its position in the sequence? I found this on Wikipedia:

[image of the index formula from Wikipedia]

But when F is very large (more than 10000 digits) there won't be sufficient precision to get the answer correctly. Is there any way, like matrix exponentiation or something, that will get me the answer?

PS: if A = [ [0,1],[1,1] ], computing A^n (which can be done in O(log n)) will give the nth Fibonacci number.

Thank you

April 10th, 2010, 03:21 AM, #2 (Math Team)

Re: Index of a fibonacci number

Hello abhijith,

To find Fibonacci numbers, you can use this:

$f_n=\frac{(1+\sqrt{5})^n-(1-\sqrt{5})^n}{2^n\cdot\sqrt{5}}$

See here for a Dutch page, quite easily explained, or here for an English page in more detail. Will this do?

Hoempa

April 10th, 2010, 01:17 PM, #3 (Senior Member)

Re: Index of a fibonacci number

I think you were trying to find the index of the Fibonacci number. Yes, there are a few ways of doing this. Perhaps the easiest way is: if $A$ is a Fibonacci number, then its index can be found by

$n = \left\lfloor \log_\phi (A \sqrt{5}) + 1/2 \right\rfloor$
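Since the question is really about avoiding floating-point precision limits, one workable approach (sketched below, not taken from the thread) is to combine exact integer fast doubling, which is O(log n) like the matrix method mentioned in the question, with a binary search on the index.

```python
# Exact-index lookup for huge Fibonacci numbers using only integer arithmetic.
def fib(n):
    """Return (F(n), F(n+1)) by fast doubling, with exact integers."""
    if n == 0:
        return (0, 1)
    a, b = fib(n // 2)
    c = a * (2 * b - a)        # F(2k)
    d = a * a + b * b          # F(2k+1)
    return (d, c + d) if n % 2 else (c, d)

def fib_index(F):
    """Return n such that F(n) == F, or None if F is not a Fibonacci number."""
    hi = 1
    while fib(hi)[0] < F:      # exponential search for an upper bound on the index
        hi *= 2
    lo = 0
    while lo < hi:             # binary search on the index
        mid = (lo + hi) // 2
        if fib(mid)[0] < F:
            lo = mid + 1
        else:
            hi = mid
    return lo if fib(lo)[0] == F else None

print(fib_index(fib(1000)[0]))   # 1000
```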
2018-08-15 00:50:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 2, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6866596937179565, "perplexity": 2885.178532575132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221209755.32/warc/CC-MAIN-20180815004637-20180815024637-00279.warc.gz"}
http://debasishg.blogspot.it/2013/03/an-exercise-in-refactoring-playing.html
## Monday, March 04, 2013

### An exercise in Refactoring - Playing around with Monoids and Endomorphisms

A language is powerful when it offers sufficient building blocks for library design and adequate syntactic sugar that helps build expressive syntax on top of the lower level APIs that the library publishes. In this post I will discuss an exercise in refactoring while trying to raise the level of abstraction of a modeling problem.

Consider the following modeling problem that I recently discussed in one of the Scala training sessions. It's simple but offers ample opportunities to explore how we can raise the level of abstraction in designing the solution model. We will start with an imperative solution and then incrementally work on raising the level of abstraction to make the final code functional and more composable.

A Problem of Transformation ..

The problem is to compute the salary of a person through composition of multiple salary components calculated based on some percentage of other components. It's a problem of applying repeated transformations to a pipeline of successive computations - hence it can be generalized as a case study in function composition. But with some constraints as we will see shortly.

Let's say that the salary of a person is computed as per the following algorithm:

1. basic = the basic component of his salary
2. allowances = 20% of basic
3. bonus = 10% of (basic + allowances)
4. tax = 30% of (basic + allowances + bonus)
5. surcharge = 10% of (basic + allowances + bonus - tax)

Note that the computation process starts with a basic salary and computes successive components, each taking its input from the previous computation of the pipeline. But there's a catch, which makes the problem a bit more interesting from the modeling perspective. Not all components of the salary are mandatory - of course the basic is mandatory. Hence the final components of the salary will be determined by a configuration object which can be like the following ..

// an item = true means the component should be activated in the computation
case class SalaryConfig(
  surcharge: Boolean = true,
  tax: Boolean = true,
  bonus: Boolean = true,
  allowance: Boolean = true
)

So when we compute the salary we need to take care of this configuration object and activate the relevant components for calculation.

A Function defines a Transformation ..

Let's first translate the above components into separate Scala functions ..

// B = basic + 20%
val plusAllowance = (b: Double) => b * 1.2

// C = B + 10%
val plusBonus = (b: Double) => b * 1.1

// D = C - 30%
val plusTax = (b: Double) => 0.7 * b

// E = D - 10%
val plusSurcharge = (b: Double) => 0.9 * b

Note that every function computes the salary up to the stage which will be fed to the next component computation. So the final salary is really the chained composition of all of these functions in a specific order as determined by the above stated algorithm.

But we need to selectively activate and deactivate the components depending on the SalaryConfig passed. Here's the version that comes straight from the imperative mindset ..

The Imperative Solution ..

// no abstraction, imperative, using var
def computeSalary(sc: SalaryConfig, basic: Double) = {
  var salary = basic
  if (sc.allowance) salary = plusAllowance(salary)
  if (sc.bonus) salary = plusBonus(salary)
  if (sc.tax) salary = plusTax(salary)
  if (sc.surcharge) salary = plusSurcharge(salary)
  salary
}

Straight, imperative, mutating (using var) and finally rejected by our functional mindset.
Thinking in terms of Expressions and Composition .. Think in terms of expressions (not statements) that compose. We have functions defined above that we could compose together and get the result. But, but .. the config, which we somehow need to incorporate as part of our composable expressions. So direct composition of functions won't work because we need some conditional support to take care of the config. How else can we have a chain of functions to compose ? Note that all of the above functions for computing the components are of type (Double => Double). Hmm .. this means they are endomorphisms, which are functions that have the same argument and return type - "endo" means "inside" and "morphism" means "transformation". So an endomorphism maps a type on to itself. Scalaz defines it as .. sealed trait Endo[A] { /** The captured function. */ def run: A => A //.. } But the interesting part is that there's a monoid instance for Endo and the associative append operation of the monoid for Endo is function composition. That seems mouthful .. so let's dissect what we just said .. As you all know, a monoid is defined as "a semigroup with an identity", i.e. trait Monoid[A] { def append(m1: A, m2: A): A def zero: A } and append has to be associative. Endo forms a monoid where zero is the identity endomorphism and append composes the underlying functions. Isn't that what we need ? Of course we need to figure out how to sneak in those conditionals .. implicit def endoInstance[A]: Monoid[Endo[A]] = new Monoid[Endo[A]] { def append(f1: Endo[A], f2: => Endo[A]) = f1 compose f2 def zero = Endo.idEndo } But we need to append the Endo only if the corresponding bit in SalaryConfig is true. Scala allows extending a class with custom methods and scalaz gives us the following as an extension method on Boolean .. /** * Returns the given argument if this is true, otherwise, the zero element * for the type of the given argument. */ final def ??[A](a: => A)(implicit z: Monoid[A]): A = b.valueOrZero(self)(a) That's exactly what we need to have the following implementation of a functional computeSalary that uses monoids on Endomorphisms to compose our functions of computing the salary components .. // compose using mappend of endomorphism def computeSalary(sc: SalaryConfig, basic: Double) = { val e = sc.surcharge ?? plusSurcharge.endo |+| sc.tax ?? plusTax.endo |+| sc.bonus ?? plusBonus.endo |+| sc.allowance ?? plusAllowance.endo e run basic } More Generalization - Abstracting over Types .. We can generalize the solution further and abstract upon the type that represents the collection of component functions. In the above implementation we are picking each function individually and doing an append on the monoid. Instead we can abstract over a type constructor that allows us to fold the append operation over a collection of elements. Foldable[] is an abstraction which allows its elements to be folded over. Scalaz defines instances of Foldable[] typeclass for List, Vector etc. so we don't care about the underlying type as long as it has an instance of Foldable[]. And Foldable[] has a method foldMap that makes a Monoid out of every element of the Foldable[] using a supplied function and then folds over the structure using the append function of the Monoid. trait Foldable[F[_]] { self => def foldMap[A,B](fa: F[A])(f: A => B)(implicit F: Monoid[B]): B //.. } In our example, f: A => B is the endo function and the append is the append of Endo which composes all the functions that form the Foldable[] structure. 
Here's the version using foldMap .. def computeSalary(sc: SalaryConfig, basic: Double) = { val components = List((sc.surcharge, plusSurcharge), (sc.tax, plusTax), (sc.bonus, plusBonus), (sc.allowance, plusAllowance) ) val e = components.foldMap(e => e._1 ?? e._2.endo) e run basic } This is an exercise which discusses how to apply transformations on values when you need to model endomorphisms. Instead of thinking in terms of generic composition of functions, we exploited the types more, discovered that our tranformations are actually endomorphisms. And then applied the properties of endomorphism to model function composition as monoidal appends. The moment we modeled at a higher level of abstraction (endomorphism rather than native functions), we could use the zero element of the monoid as the composable null object in the sequence of function transformations. In case you are interested I have the whole working example in my github repo. Unknown said... Great post! The endomorphism monoid is often handy. Here's a haskell version for fun: https://gist.github.com/jhickner/5081100 Anonymous said... Nice article! I have a question as I'm new to Scala and I'm looking for a pattern similar to the one in the article, yet different. I'd like to define a map that has a key as an Action, and a list of methods to be applied as a value. I have this prototype, sorry for dumping this code here: def doOne(user: String): List[String] = List("one" + "*" + user) def doTwo(user: String): List[String] = List("two" + "*" + user) def doThree(user: String): List[String] = List("three" + "*" + user) def collectAll(funs: List[String => List[String]])(user: String): List[String] = (List[String]() /: funs)((a, b) => a ::: b(user)) val actMap = Map( "one" -> List(doOne _), "one+two" -> List(doOne _, doTwo _) ) def act(action: String, user: String): List[String] = { collectAll(actMap(action))(user) } So I can do something like this with partially applied functions but they all take the same type and number of arguments. In my case I need to pass various args to functions, and I want it to happen without explicitly providing them. I can't think of a proper way of doing it. Method 'act' will be called by client which will provide args in some way. I'm suspecting that either continuations, closures or something else :) should allow me to do this. I would appreciate to be pointed in a right direction. Thank you Debasish Ghosh said... Dear Anonymous - Your doOne, dotwo all seem to be String => String .. at least the implemented ones. Are they really String => String or String => List[String] ? Debasish Ghosh said... Dear Anonymous - Making the do* as vals will let u do away with the partial applications .. have a look at https://gist.github.com/debasishg/5097410 .. Let me know what u think. Thanks. Anonymous said... Dear Debasish Regarding the first question each do* function returns List[String]. Alternatively each function could be called as continuation if possible. Anonymous said... 
The best I could come up with is: trait ActionMapper { def id: Int def active: Boolean val doOne = (user: String) => List("one" + "*" + user) val doTwo = (user: String) => List("two" + "*" + user + ":" + id) val doThree = (user: String) => List("three" + "*" + user + ":" + active) def collectAll(funs: List[String => List[String]])(user: String): List[String] = (List[String]() /: funs)((a, b) => a ::: b(user)) val actMap = Map( "one" -> List(doOne), "one+two" -> List(doOne, doTwo), "one+two+three" -> List(doOne, doTwo, doThree)) def act(action: String, user: String): List[String] = { collectAll(actMap(action))(user) } } object ActionExecutioner extends App with ActionMapper { override val id = 321 override val active = false println(act("one+two+three", "void")) } Using either val of def for do* functions. I wonder if there is a more elegant way of doing it. The problem is that I have to implement/override both 'id' and 'active' in order to use the trait and thus I can't just do this: object Fails extends ActionMapper { act("one", "void") } although those args are not technically required for 'doOne'. Alex Anonymous said... Dear Debasish If you let me rephrase my question to make it more general... What is a pattern for implementing a map class that maps a key (action) to a list of methods and how would a client go about calling those methods and passing arguments to them? Debasish Ghosh said... You can try Kleisli >>= .. Will chalk out an implementation tonight .. Anonymous said... Dear Debasish, Thank you for your advice. I read up on Kleisli. It looks very cool indeed and similar to what I'm trying to do. I'll try to rework my data representation to fit that pattern. I'll share my code if it will have anything interesting in it. Once again big thanks, Alex xpmatteo said... Hi Debasish, you should not represent money with a Double. You are going to get inexact results. (See e.g. http://stackoverflow.com/questions/3730019/why-not-use-double-or-float-to-represent-currency)
2017-03-23 04:11:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41417548060417175, "perplexity": 3438.8716542273737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186774.43/warc/CC-MAIN-20170322212946-00302-ip-10-233-31-227.ec2.internal.warc.gz"}
http://molmod.github.io/yaff/ug_sampling.html
# 9. Exploring the phase space

## 9.1. Introduction

This section assumes that one has defined a force-field model as explained in the previous section, Force-field models. The tools discussed in this section allow one to explore the phase space of a system (and derive its thermodynamic properties) using a force field model.

All algorithms are implemented such that they assume very little about the internals of the force field models. The force field takes atomic positions and cell vectors as input, and returns the energy (and optionally forces and a virial tensor). All algorithms below rely only on this basic interface.

Most of the algorithms are extensible through so-called hooks. These hooks are pieces of code that can be plugged into a basic algorithm (like a Verlet integrator) to add functionality like writing trajectory files, sampling other ensembles or computing statistical properties on the fly.

One important aspect of yaff.analysis is that trajectory data can be written to an HDF5 file. In short, HDF5 is a cross-platform format to store efficiently any type of binary array data. An HDF5 file stores arrays in a tree structure, which is similar to files and directories in a regular file system. More details about HDF5 can be found on wikipedia and on the non-profit HDF Group website. This format is designed to handle huge amounts of binary data and it greatly facilitates post-processing analysis of the trajectory data. By convention, Yaff stores all data in HDF5 files in atomic units.

## 9.2. Molecular Dynamics

### 9.2.1. Overview of the Verlet algorithms

The equations of motion in the NVE ensemble can be integrated as follows:

verlet = VerletIntegrator(ff, 1*femtosecond, temp0=300)
verlet.run(5000)

This example just propagates the system with 5000 steps of 1 fs, but does nearly nothing else. After calling the run method, one can inspect atomic positions and velocities of the final time step:

print verlet.vel
print verlet.pos
print ff.system.pos    # equivalent to the previous line
print verlet.ekin/kjmol  # the kinetic energy in kJ/mol.

By default all information from past steps is discarded. If one is interested in writing a trajectory file, one must add a hook to do so. The following example writes an HDF5 trajectory file:

hdf5_writer = HDF5Writer(h5.File('output.h5', mode='w'))
verlet = VerletIntegrator(ff, 1*femtosecond, hooks=hdf5_writer, temp0=300)
verlet.run(5000)

The parameters of the integrator can be tuned with several optional arguments of the VerletIntegrator constructor. See yaff.sampling.verlet.VerletIntegrator for more details. The exact contents of the HDF5 file depend on the integrator used and the optional arguments of the integrator and the yaff.sampling.io.HDF5Writer. The typical tree structure of a trajectory HDF5 file is as follows. (Comments were added manually to the output of h5dump to describe all the arrays.):

$ h5dump -n production.h5
HDF5 "production.h5" {
FILE_CONTENTS {
 group /
 group /system                # The 'system' group contains most attributes of the System class.
 dataset /system/bonds
 dataset /system/charges
 dataset /system/ffatype_ids
 dataset /system/ffatypes
 dataset /system/masses
 dataset /system/numbers
 dataset /system/pos
 dataset /system/rvecs
 group /trajectory            # The 'trajectory' group contains the time-dependent data.
 dataset /trajectory/cell          # cell vectors
 dataset /trajectory/cons_err      # the root of the ratio of the variance on the conserved quantity
                                   # and the variance on the kinetic energy
 dataset /trajectory/counter       # an integer counter for the integrator steps
 dataset /trajectory/dipole        # the dipole moment
 dataset /trajectory/dipole_vel    # the time derivative of the dipole moment
 dataset /trajectory/econs         # the conserved quantity
 dataset /trajectory/ekin          # the kinetic energy
 dataset /trajectory/epot          # the potential energy
 dataset /trajectory/epot_contribs # the contributions to the potential energy from the force field parts
 dataset /trajectory/etot          # the total energy (kinetic + potential)
 dataset /trajectory/pos           # the atomic positions
 dataset /trajectory/rmsd_delta    # the RMSD change of the atomic positions
 dataset /trajectory/rmsd_gpos     # the RMSD value of the Cartesian energy gradient (forces if you like)
 dataset /trajectory/temp          # the instantaneous temperature
 dataset /trajectory/time          # the time
 dataset /trajectory/vel           # the atomic velocities
 dataset /trajectory/volume        # the (generalized) volume of the unit cell
 }
}

The hooks argument may also be a list of hook objects. For example, one may include the yaff.sampling.nvt.AndersenThermostat to reset the velocities every 200 steps. The yaff.sampling.io.XYZWriter can be added to write a trajectory of the atomic positions in XYZ format:

hooks=[
    HDF5Writer(h5.File('output.h5', mode='w')),
    AndersenThermostat(temp=300, step=200),
    XYZWriter('trajectory.xyz'),
]

By default a screen logging hook is added (if not yet present) to print one line per iteration with some critical integrator parameters. The output of the VerletIntegrator is as follows:

VERLET ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
VERLET Cons.Err. = the root of the ratio of the variance on the conserved
VERLET             quantity and the variance on the kinetic energy.
VERLET d-rmsd    = the root-mean-square displacement of the atoms.
VERLET g-rmsd    = the root-mean-square gradient of the energy.
VERLET counter  Cons.Err.       Temp     d-RMSD     g-RMSD   Walltime
VERLET ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
VERLET       0   0.00000      299.5     0.0000       93.7        0.0
VERLET       1   0.15231      286.4     0.0133      100.1        0.0
VERLET       2   0.17392      297.8     0.0132       90.6        0.0
VERLET       3   0.19803      306.8     0.0137       82.1        0.0

The screen output is geared towards detecting simulation errors. The parameters Cons.Err., Temp, d-RMSD and g-RMSD should exhibit only minor fluctuations in a proper MD run, except when the system consists of just a few atoms. The wall time should increase at a somewhat constant rate.

It is often desirable to control the amount of data processed by the hooks, e.g. to limit the size of the trajectory files and the amount of screen output. Most hooks have start and step arguments for this purpose. Consider the following example:

hooks=[
    VerletScreenLog(step=100),
    HDF5Writer(h5.File('output.h5', mode='w'), start=5000, step=10),
    XYZWriter('trajectory.xyz', step=50),
    AndersenThermostat(temp=300, step=1000),
]

In this example, the screen output contains only one line per 100 NVE iterations. The HDF5 trajectory only contains trajectory data starting from step 5000 with intervals of 10 steps. The XYZWriter only contains the positions of the atoms every 50 steps. The Andersen thermostat only resets the atomic velocities every 1000 steps.

For a detailed description of all options of the VerletIntegrator and the supported hooks, we refer to the reference documentation:
### 9.2.2. Initial atomic velocities

When no initial velocities are given to the VerletIntegrator constructor, these velocities are randomly sampled from a Maxwell-Boltzmann distribution. The temperature of the distribution is controlled by the temp0 argument and, if needed, the velocities can be rescaled by using the scalevel0=True argument. The default behavior is to not remove center-of-mass and global angular momenta. However, for the Nose-Hoover thermostat, this is mandatory and done automatically.

For the computation of the instantaneous temperature, one must know the number of degrees of freedom (ndof) in which the kinetic energy is distributed. The default value for ndof is in line with the default initial velocities. ndof is always set to 3N, except for the Nose-Hoover thermostat, where ndof is set to the number of internal degrees of freedom. One may specify custom initial velocities and ndof by using the vel0 and ndof arguments of the VerletIntegrator constructor. The module yaff.sampling.utils contains various functions to set up initial velocities.

## 9.3. Geometry optimization

A basic geometry optimization (with trajectory output in an HDF5 file) is implemented as follows:

hdf5 = HDF5Writer(h5.File('output.h5', mode='w'))
opt = CGOptimizer(CartesianDOF(ff), hooks=hdf5)
opt.run(5000)

The CartesianDOF() argument indicates that only the positions of the nuclei will be optimized. The convergence criteria are controlled through optional arguments of the yaff.sampling.dof.CartesianDOF class. The run method has the maximum number of iterations as the only optional argument. If run is called without arguments, the optimization continues until convergence is reached.

One may also perform an optimization of the nuclei and the cell parameters as follows:

hdf5 = HDF5Writer(h5.File('output.h5', mode='w'))
opt = CGOptimizer(FullCellDOF(ff), hooks=hdf5)
opt.run(5000)

This will transform the degrees of freedom (DOFs) of the system (cell vectors and Cartesian coordinates) into a new set of DOFs (scaled cell vectors and reduced coordinates) to allow an efficient optimization of both the cell parameters and the atomic positions. One may replace yaff.sampling.dof.FullCellDOF by any of the following:

The optional arguments of any CellDOF variant include convergence criteria for the cell parameters and the do_frozen option to freeze the fractional coordinates of the atoms.

## 9.4. Harmonic approximations

Yaff can compute matrices of second order derivatives of the energy based on symmetric finite differences of analytic gradients for an arbitrary DOF object. This is the most general approach to compute such a generic Hessian:

hessian = estimate_hessian(dof)

where dof is a DOF object like CellDOF and others discussed in the previous section. The routines discussed in the following subsections are based on this generic Hessian routine. See yaff.sampling.harmonic for a description of the harmonic approximation routines.

### 9.4.1. Vibrational analysis

The Cartesian Hessian is computed as follows:

hessian = estimate_cart_hessian(ff)

This function uses the symmetric finite difference approximation to estimate the Hessian using many analytic gradient computations.
Further vibrational analysis based on this Hessian can be carried out with TAMkin:

hessian = estimate_cart_hessian(ff)
gpos = np.zeros(ff.system.pos.shape, float)
epot = ff.compute(gpos)

import tamkin
mol = tamkin.Molecule(system.numbers, system.pos, system.masses, epot, gpos, hessian)
nma = tamkin.NMA(mol)
invcm = lightspeed/centimeter
print nma.freqs/invcm

One may also compute the Hessian of a subsystem, e.g. for the first three atoms, as follows:

hessian = estimate_cart_hessian(ff, select=[0, 1, 2])

### 9.4.2. Elastic constants

Yaff can estimate the elastic constants of a system at zero Kelvin. Just like the computation of the Hessian, the elastic constants are obtained from symmetric finite differences of analytic gradient computations. The standard approach is:

elastic = estimate_elastic(ff)

where elastic is a symmetric 6 by 6 matrix with the elastic constants stored in Voigt notation. If the system under scrutiny does not change its relative coordinates when the cell is deformed, one may use a faster approach:

elastic = estimate_elastic(ff, do_frozen=True)

A detailed description of this routine can be found here: yaff.sampling.harmonic.estimate_elastic().
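A small end-to-end sketch that strings the snippets above together: relax the geometry first, then run a short NVE trajectory with screen, HDF5 and XYZ output. It is not taken from the manual; it assumes a force field ff built as in the force-field section and that the imports below provide the classes and the femtosecond unit used in the examples above.

```python
# Sketch combining the snippets above (assumes `ff` is already constructed).
import h5py as h5
from yaff import *

# 1. Relax the atomic positions (Cartesian degrees of freedom only).
opt = CGOptimizer(CartesianDOF(ff))
opt.run(500)

# 2. Short NVE run; write every 10th step to HDF5 and every 50th step to XYZ.
hooks = [
    VerletScreenLog(step=100),
    HDF5Writer(h5.File('output.h5', mode='w'), step=10),
    XYZWriter('trajectory.xyz', step=50),
]
verlet = VerletIntegrator(ff, 1*femtosecond, hooks=hooks, temp0=300)
verlet.run(5000)
```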
2019-05-22 23:13:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6063348054885864, "perplexity": 2759.731503029666}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256980.46/warc/CC-MAIN-20190522223411-20190523005411-00265.warc.gz"}
http://www.cliffsnotes.com/math/basic-math/basic-math-and-pre-algebra/integers-and-rationals/rationals-signed-numbers-including-fractions
# Rationals (Signed Numbers Including Fractions)

Recall that integers are positive and negative whole numbers and zero. When fractions and terminating or repeating decimals between the integers are included, the complete group of numbers is referred to as rational numbers. They are signed numbers including fractions. A more technical definition of a rational number is any number that can be written as a fraction with the numerator being a whole number or integer and the denominator being a natural number. Notice that fractions can be placed on the number line, as shown in Figure 1. Fractions may be negative as well as positive. Negative fractions can be written with the minus sign in front of the fraction, in the numerator, or in the denominator, for example $-\frac{3}{4}$, $\frac{-3}{4}$, and $\frac{3}{-4}$, although they are all equal.

The rules for signs when adding integers apply to fractions as well. Remember: To add fractions, you must first get a common denominator. The rules for signs when adding integers apply to mixed numbers as well.

The rules for signs when subtracting integers apply to fractions as well. Remember: To subtract fractions, you must first get a common denominator. Subtract the following. The rules for signs when subtracting integers apply to mixed numbers as well. Remember: To subtract mixed numbers, you must first get a common denominator. If borrowing from a column is necessary, be cautious of simple mistakes. Subtract the following. Problems, such as the preceding ones, are usually most easily done by stacking the number with the larger absolute value on top, subtracting, and keeping the sign of the number with the larger absolute value.

The rules for signs when multiplying integers apply to fractions as well. Remember: To multiply fractions, multiply the numerators and then multiply the denominators. Always simplify to lowest terms if possible. Multiply the following. You can cancel when multiplying positive and negative fractions. Simply cancel as you do when multiplying positive fractions, but pay special attention to the signs involved. Follow the rules for signs when multiplying integers to obtain the proper sign. Remember: No sign means that a positive sign is understood. Multiply the following. Follow the rules for signs when multiplying integers to get the proper sign. Remember: Before multiplying mixed numbers, you must first change them to improper fractions. Multiply the following.

Follow the rules for signs when dividing integers to get the proper sign. Remember: When dividing fractions, first invert the divisor and then multiply. Divide the following. Follow the rules for signs when dividing integers to get the proper sign. Remember: Before dividing mixed numbers, you must first change them to improper fractions. Then you must invert the divisor and multiply. Divide the following.
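For example, to combine signed fractions, first rewrite them over a common denominator and then apply the integer sign rules:

$$-\frac{1}{2}+\frac{2}{3}=-\frac{3}{6}+\frac{4}{6}=\frac{1}{6}, \qquad -\frac{1}{2}-\frac{2}{3}=-\frac{3}{6}-\frac{4}{6}=-\frac{7}{6}.$$

For multiplication, apply the sign rule to the product: $\left(-\frac{2}{3}\right)\times\frac{3}{4}=-\frac{6}{12}=-\frac{1}{2}$.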
2014-09-22 18:44:43
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9130091667175293, "perplexity": 650.3238132458844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657137145.1/warc/CC-MAIN-20140914011217-00259-ip-10-234-18-248.ec2.internal.warc.gz"}
https://cs.stackexchange.com/questions/19518/which-data-structure-to-use-for-accessing-min-max-in-constant-time
# Which data structure to use for accessing min/max in constant-time? I need a data structure which can include millions of elements, the minimum and maximum must be accessible in constant time, and the time complexity of inserting and erasing elements must be better than linear. • What are your elements, integers in a range, strings, something else? You could use say a red-black tree; simply have two separate pointers for min and max that you update as you go. – Juho Jan 5 '14 at 23:26 • Why? What have you tried? What data structures do you know and why have you discounted them? – Raphael Jan 6 '14 at 6:26

A basic data structure that allows insertion and deletion in time $\Theta(\log n)$ is the balanced binary search tree. Its memory overhead is reasonable (in the case of AVL trees, two pointers and three bits per entry), so millions of entries are no problem at all on modern machines. Note that in a search tree, finding the minimum (or maximum) is conceptually easy by descending always left (right) starting in the root. This works in time $\Theta(\log n)$, too, which is too slow for you. However, we can certainly store pointers to these tree nodes, similar to front and end pointers in double-linked linear lists. But what happens when the elements are deleted? In this case, we have to find the in-order successor (predecessor) and update the pointer to the minimum (maximum). Finding this node works in time $O(\log n)$ so it does not hurt deletion time, asymptotically. You can, however, enable time $O(1)$ deletion of minimum and maximum by threading the tree, that is maintaining -- in addition to the binary search tree -- a double-linked list in in-order. Then, finding the new minimum/maximum is possible in time $O(1)$. This list requires additional space (two pointers per entry) and has to be maintained during insertions and deletions; this does not make the asymptotics worse but certainly slows down every such operation (I leave the details to you). So you have to trade off the options given your application, that is which operations occur more often and which you want to be fastest. Note that trees, as all linked structures, tend to be bad for memory hierarchies since they don't necessarily preserve data locality. If your sets are so large that they don't fit into cache completely, you should check out B-trees which are designed to minimise page loads. The above works with them, too. • @Raphael - Thanks for the details. Would you please elaborate on the effect of balancing rotations in case a doubly-linked list is maintained. Would the insertion/deletion still be $O(\log n)$? – KGhatak May 27 '17 at 7:52 • @KGhatak Yes. You only need to touch constantly many pointers on nodes you already have at hand. Try implementing it to see the details. – Raphael May 28 '17 at 19:05

The name for the abstract data structure that you're interested in is a "double-ended priority queue" or sometimes "priority deque". A min-priority queue, as you probably know, is an abstract data structure which supports the following set of operations: • findMin (find the item with the smallest value) • deleteMin (remove the item with the smallest value) This is the minimal set; other typical operations may include: • delete (remove any item) • decreaseKey (alter an item so that its key is smaller) • merge (merge two priority queues into one) For the purpose of time analysis, it is usually assumed that all you have to compare keys is a binary comparison operator.
You can also dually define a max-priority queue, where you're interested in the largest value rather than the smallest, by simply inverting the sense of the comparison operator. A double-ended priority queue is one that supports querying and efficiently removing the minimum or maximum value. If I'm reading you correctly, this is the set of operations that you definitely want, along with their time complexities: • insert - better than O(n) • findMin - O(1) • findMax - O(1) • deleteMin - better than O(n) • deleteMax - better than O(n) and there is one operation that you possibly want: • delete - better than O(n) I'm going to ignore this operation because it complicates things. To delete an arbitrary item, you must locate an arbitrary item. Some priority queue data structures (e.g. Fibonacci heaps) support the concept of a "location" (like an iterator in C++) which stays valid no matter what modifications you do to the queue (apart from deleting the item in question, obviously), but many do not, because items can move around in the data structure. If you really need this operation, then a variant of binary search trees which supports findMin and findMax in constant time is probably what you need. This turns out to be a very simple and pleasant exercise in algebra; see [1]‎ for details, including Haskell source code. There are a few obvious ways to do this if you already have a priority queue data structure available by maintaining a min-queue and a max-queue, and maintaining correspondences between them. See [2] for some details on how you might go about this. Most of the other interesting options are based on binary heaps, but combine min-heaps and max-heaps in one data structure, such as min-max heaps [3] and interval heaps [4]. By the way, if your keys are integers (not just binary-comparable blobs) then you can probably do better. vEB trees, for example, generalise to double-ended priority queues in a straightforward manner. 1. A fresh look at binary search trees by R. Hinze (2002) 2. Correspondence based data structures for double ended priority queues by K.-R. Chong and S. Sahni (1998) 3. Min-max heaps and generalized priority queues by M. D. Atkinson et al. (1986) 4. Data Structures, Algorithms, and Applications in C++ (Chapter 9.7) by S. Sahni (1998) • The question does not need such "strong" structures. Additionally, I'd like to see (not hidden behind links) how you delete in $o(n)$ (note how the expression "better than O(n)" is meaningless) in heaps -- this one the OP explicitly requests. – Raphael Jan 6 '14 at 6:28 • The phrase "better than O(n)" isn't meaningless, merely informal. You and I both understood what the questioner meant by that, no? Good point on the details of delete, though. I clarified why it complicates things so much. – Pseudonym Jan 6 '14 at 6:40 • Even though many people use "O" in this libearal fashion, it's still (mathematically) meaningless. So why not use $\Omega$ or $\Theta$, or even $o$ and $\omega$? That's what they are there for. (Note that the OP does not use Landau notation.) – Raphael Jan 6 '14 at 6:47 • Since the OP seems to want a dictionary with extras, not a priority queue, I think most of your answer does not relate to the question. – Raphael Jan 6 '14 at 8:41 • I think the question is unclear on that point. Thanks for the edits, though. – Pseudonym Jan 7 '14 at 0:00 You should look into https://en.wikipedia.org/wiki/Van_Emde_Boas_tree. 
It comes with some compromises, mostly your elements need to be integers and memory consumption may be high (but may be way lower than for binary trees for dense keys). Min and max are constant time, insert/delete/successor are O(log log M), M being key space. Careful implementation may outperform a binary tree by a factor of 10 for millions of keys (mostly if they are dense).

One of the best heaps to use for that purpose is the Fibonacci heap. It has O(1) insert and O(1) findMin, together with O(1) decreaseKey, if you need it. If you really need deleteMin and findMin consecutively (meaning you find multiple minimums) then I would not recommend using a heap. QuickSelect algorithm (which is O(n)) for searching all the minimums has worked faster for me. http://en.wikipedia.org/wiki/Quickselect • Extending Fibonacci heaps to support max operations as well as min operations turns out to be nontrivial. – Pseudonym Jan 6 '14 at 6:56 • How do you delete in time $o(n)$? – Raphael Jan 6 '14 at 7:19 • For Fibonacci heaps, you insert in $O(1)$ and extractMin in $O(1)$, but deleteMin in $O(log(n))$. So if you need multiple minimums, you end up with calling deleteMin over and over again, which kills the performance a bit (actually converges to heap sort in long-run). Of course, what I describe here is for applications where you actually need deleteMin. Quickselect however would give you k-min (or k-max) in linear time, which doesn't depend on k. Please check: en.wikipedia.org/wiki/Partial_sorting, where sorting k-max is not a requirement. An application I can just think of is pruning – Tolga Birdal Jan 6 '14 at 8:28 • You are stating true things, but few of them relate to the question. Based on their phrasing, the OP seems to want a dictionary with extras, not a priority queue. – Raphael Jan 6 '14 at 8:40 • From personal experience, Quickselect was a viable alternative, when I was wondering: "minimum and maximum must be accessible in constant time and inserting and erasing element time complexity must be better than linear" Of course, as I mentioned, the question demands further application details. – Tolga Birdal Jan 6 '14 at 9:19
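As an added sketch (not from the original answers) of the "min-queue plus max-queue in correspondence" idea mentioned above: a double-ended priority queue can be built from two binary heaps that share their elements, with lazy deletion keeping the two sides consistent. Insertions and deletions run in O(log n) amortized time; peeking at the minimum or maximum is O(1) amortized.

    import heapq
    import itertools

    class MinMaxQueue:
        """Double-ended priority queue: a min-heap and a max-heap with lazy deletion."""

        def __init__(self):
            self._min, self._max = [], []   # heap entries are (key, id) / (-key, id)
            self._dead = set()              # ids already removed via the other heap
            self._ids = itertools.count()
            self._size = 0

        def __len__(self):
            return self._size

        def insert(self, key):
            i = next(self._ids)
            heapq.heappush(self._min, (key, i))
            heapq.heappush(self._max, (-key, i))
            self._size += 1

        def _prune(self, heap):
            # Drop entries whose twin was already deleted from the other heap.
            while heap and heap[0][1] in self._dead:
                self._dead.discard(heapq.heappop(heap)[1])

        def find_min(self):
            self._prune(self._min)
            return self._min[0][0]

        def find_max(self):
            self._prune(self._max)
            return -self._max[0][0]

        def delete_min(self):
            self._prune(self._min)
            key, i = heapq.heappop(self._min)
            self._dead.add(i)               # lazily remove the twin from the max-heap
            self._size -= 1
            return key

        def delete_max(self):
            self._prune(self._max)
            negkey, i = heapq.heappop(self._max)
            self._dead.add(i)
            self._size -= 1
            return -negkey

For instance, after inserting 5, 1 and 9, find_min() returns 1 and find_max() returns 9, and deleting the maximum leaves the minimum untouched.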
2020-07-13 19:13:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4344107508659363, "perplexity": 1437.341322788587}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657146247.90/warc/CC-MAIN-20200713162746-20200713192746-00300.warc.gz"}
http://mymathforum.com/elementary-math/38383-can-you-guys-solve-2.html
Can you guys solve this? April 9th, 2015, 08:09 PM #11 Newbie Joined: Apr 2015 From: Brazil Posts: 1 Thanks: 0 n*n-n: 8 = 56 -> 8*8-8= 56 7 = 42 -> 7*7-7= 42 6 = 30 -> 6*6-6= 30 5 = 20 -> 5*5-5= 20 4 = 12 -> 4*4-4= 12 3 = 6 -> 3*3-3= 6 April 10th, 2015, 03:00 AM #12 Senior Member Joined: Apr 2014 From: Glasgow Posts: 1,838 Thanks: 592 Math Focus: Physics, mathematical modelling, numerical and computational solutions My name isn't "Genius", so I don't have to solve it Thanks from topsquark April 24th, 2015, 02:19 AM #13 Newbie Joined: Feb 2014 Posts: 12 Thanks: 0 Cherubic Cube Always got this query Its about Cherubic Cube. was it easy to any of you? Never solved it. Generally what duration it takes to complete it. Did any one try solving it before And got one and solve it now and then. May 9th, 2015, 05:34 PM #14 Senior Member Joined: Aug 2014 From: United States Posts: 134 Thanks: 21 Math Focus: Learning Hmm... This is curious. I am getting $\pi$ as the answer. My formula was this: You just need the first equation. If $8=56$ then $8(3-\pi)=56(3-\pi)$ then $24-8\pi=168-56\pi$ Now move stuff around and get $56\pi-8\pi=168-24$ so $48\pi=144$ thus $\boxed{3=\pi}$ The same thing seems to happen if I do it to the other equations, thus the others are just extra unnecessary information. $\square$ May 9th, 2015, 06:17 PM #15 Math Team Joined: Dec 2013 From: Colombia Posts: 6,394 Thanks: 2101 Math Focus: Mainly analysis and algebra It's not all that curious. Your work gives $$8(3 - a) = 56(3-a) \implies 48(3-a)=0$$ and so clearly $48 = 0$ or $a=3$. The method you use can therefore prove that $3$ is equal to anything you want it to be. Except that $$48(3-a)=0 \implies (56 - 8)(3-a)=0$$ and the first line gives us that $56 = 8$, so we get no information about the value of $3-a$. Last edited by greg1313; May 10th, 2015 at 07:04 PM. May 10th, 2015, 12:44 PM #16 Newbie Joined: May 2015 From: Rio de Janeiro Posts: 3 Thanks: 0 The answer is 6 3*2 = 6 Last edited by SamirD; May 10th, 2015 at 12:52 PM. May 10th, 2015, 12:47 PM #17 Math Team Joined: Oct 2011 From: Ottawa Ontario, Canada Posts: 8,156 Thanks: 552 Answer to what? June 1st, 2015, 01:28 AM #18 Newbie Joined: Jun 2015 From: los angeles Posts: 1 Thanks: 0 Yes, 3*2=6, if we follow the scenario which is being applied here. Then we will get to know that it's only multiplication of 1 preceding number. Last edited by skipjack; July 2nd, 2015 at 12:12 PM. July 2nd, 2015, 09:16 AM #19 Newbie Joined: May 2015 From: INDIA Posts: 28 Thanks: 1 Let us denote it in the form a x b= c k x m= n Now you notice that. c - (k x 2) - 2 = n Using this on 3 x 4= 12 2 x m= n 12 - (2 x 2) -2 = n = 6. sir Einstien I am also genius, hoo hoo! Last edited by skipjack; July 2nd, 2015 at 12:13 PM. July 2nd, 2015, 10:14 AM #20 Math Team Joined: Oct 2011 From: Ottawa Ontario, Canada Posts: 8,156 Thanks: 552 Geezzz...can someone pleeezzzze close this thread...
2017-02-22 21:58:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7338199019432068, "perplexity": 4646.743158432892}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00593-ip-10-171-10-108.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/122391-factorising.html
# Math Help - factorising 1. ## factorising Factorise $m^2+9m+20$ I thought it would be $(m+10)(m-1)$ but that wouldn't work 2. Originally Posted by Mukilab Factorise $m^2+9m+20$ I thought it would be $(m+10)(m-1)$ but that wouldn't work Both your signs will be + since 9 and 20 are both greater than 0. Think about what other numbers multiply to make 20. $20 = 1 \times 20 \: , 2 \times 10 , 4 \times 5$ From the above pick a pair that add up to 9 The answer is $(m+4)(m+5)$ 3. $m^2+9m+20=(m+5)(m+4)$ 4. Lol I am such an idiot >.<
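As a quick check of the factorisation: expanding gives $(m+4)(m+5)=m^2+5m+4m+20=m^2+9m+20$, confirming that $4$ and $5$ are the pair that multiplies to $20$ and adds to $9$.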
2014-04-20 01:09:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7744023203849792, "perplexity": 755.6808549819729}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
https://par.nsf.gov/biblio/10278875
The bursty origin of the Milky Way thick disc ABSTRACT We investigate thin and thick stellar disc formation in Milky Way-mass galaxies using 12 FIRE-2 cosmological zoom-in simulations. All simulated galaxies experience an early period of bursty star formation that transitions to a late-time steady phase of near-constant star formation. Stars formed during the late-time steady phase have more circular orbits and thin-disc-like morphology at z = 0, while stars born during the bursty phase have more radial orbits and thick-disc structure. The median age of thick-disc stars at z = 0 correlates strongly with this transition time. We also find that galaxies with an earlier transition from bursty to steady star formation have a higher thin-disc fractions at z = 0. Three of our systems have minor mergers with Large Magellanic Cloud-size satellites during the thin-disc phase. These mergers trigger short starbursts but do not destroy the thin disc nor alter broad trends between the star formation transition time and thin/thick-disc properties. If our simulations are representative of the Universe, then stellar archaeological studies of the Milky Way (or M31) provide a window into past star formation modes in the Galaxy. Current age estimates of the Galactic thick disc would suggest that the Milky Way transitioned from bursty to steady phase more » Authors: ; ; ; ; ; ; ; ; ; ; ; ; ; Award ID(s): Publication Date: NSF-PAR ID: 10278875 Journal Name: Monthly Notices of the Royal Astronomical Society Volume: 505 Issue: 1 Page Range or eLocation-ID: 889 to 902 ISSN: 0035-8711 1. ABSTRACT We study the growth of stellar discs of Milky Way-sized galaxies using a suite of cosmological simulations. We calculate the half-mass axis lengths and axis ratios of stellar populations split by age in galaxies with stellar mass $M_{*}=10^7\!-\!10^{10}\, \mathrm{M}_{\odot }$ at redshifts z > 1.5. We find that in our simulations stars always form in relatively thin discs, and at ages below 100 Myr are contained within half-mass height z1/2 ∼ 0.1 kpc and short-to-long axial ratio z1/2/x1/2 ∼ 0.15. Disc thickness increases with the age of stellar population, reaching median z1/2 ∼ 0.8 kpc and z1/2/x1/2 ∼ 0.6 for stars older than 500 Myr. We trace the same group of stars over the simulation snapshots and show explicitly that their intrinsic shape grows more spheroidal over time. We identify a new mechanism that contributes to the observed disc thickness: rapid changes in the orientation of the galactic plane mix the configuration of young stars. The frequently mentioned ‘upside-down’ formation scenario of galactic discs, which posits that young stars form in already thick discs at high redshift, may be missing this additional mechanism of quick disc inflation. The actual formation of stars within a fairly thin plane is consistent with the correspondingly flatmore » We use FIRE simulations to study disc formation in z ∼ 0, Milky Way-mass galaxies, and conclude that a key ingredient for the formation of thin stellar discs is the ability for accreting gas to develop an aligned angular momentum distribution via internal cancellation prior to joining the galaxy. 
Among galaxies with a high fraction ($\gt 70{{\ \rm per\ cent}}$) of their young stars in a thin disc (h/R ∼ 0.1), we find that: (i) hot, virial-temperature gas dominates the inflowing gas mass on halo scales (≳20 kpc), with radiative losses offset by compression heating; (ii) this hot accretion proceeds until angular momentum support slows inward motion, at which point the gas cools to $\lesssim 10^4\, {\rm K}$; (iii) prior to cooling, the accreting gas develops an angular momentum distribution that is aligned with the galaxy disc, and while cooling transitions from a quasi-spherical spatial configuration to a more-flattened, disc-like configuration. We show that the existence of this ‘rotating cooling flow’ accretion mode is strongly correlated with the fraction of stars forming in a thin disc, using a sample of 17 z ∼ 0 galaxies spanning a halo mass range of 1010.5 M⊙ ≲ Mh ≲ 1012 M⊙ and stellarmore » 3. ABSTRACT In hierarchical structure formation, metal-poor stars in and around the Milky Way (MW) originate primarily from mergers of lower mass galaxies. A common expectation is therefore that metal-poor stars should have isotropic, dispersion-dominated orbits that do not correlate strongly with the MW disc. However, recent observations of stars in the MW show that metal-poor ($\rm {[Fe/H]}\lesssim -2$) stars are preferentially on prograde orbits with respect to the disc. Using the Feedback In Realistic Environments 2 (FIRE-2) suite of cosmological zoom-in simulations of MW/M31-mass galaxies, we investigate the prevalence and origin of prograde metal-poor stars. Almost all (11 of 12) of our simulations have metal-poor stars on preferentially prograde orbits today and throughout most of their history: we thus predict that this is a generic feature of MW/M31-mass galaxies. The typical prograde-to-retrograde ratio is ∼2:1, which depends weakly on stellar metallicity at $\rm {[Fe/H]}\lesssim -1$. These trends predicted by our simulations agree well with MW observations. Prograde metal-poor stars originate largely from a single Large/Small Magellanic Cloud (LMC/SMC)-mass gas-rich merger $7\!-\!12.5\, \rm {Gyr}$ ago, which deposited existing metal-poor stars and significant gas on an orbital vector that sparked the formation of and/or shaped the orientation of a long-lived stellar disc, givingmore »
2022-12-01 12:30:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6961564421653748, "perplexity": 3308.9551476509314}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710813.48/warc/CC-MAIN-20221201121601-20221201151601-00127.warc.gz"}
https://cstheory.stackexchange.com/questions?tab=newest&page=241
# All Questions 12,298 questions Filter by Sorted by Tagged with 1 vote 1k views ### Removing all but a few cycles in a graph Let problem $S$ be defined as Given undirected graph $G$ and a set of cycles $C_1,C_2, \ldots, C_n$ in G, find minimum number of vertices that need to be deleted to remove all cycles in the ... • 467 472 views ### Visualizing Unique Games How would you design a picture to illustrate the unique games conjecture? This is for a "Current Events" presentation on unique games at the next AMS Joint Meeting and for the booklet that will be ... • 4,902 672 views ### Using Kolmogorov complexity to establish proof complexity lower bounds? The motivation for this question is the fact that most n-bit strings are incompressible. Intuitively, we can propose by analogy that most proofs for Tautologies are incompressible to polynomial size. ... 4k views ### Semantic vs. Syntactic Complexity Classes In his "Computational Complexity" book, Papadimitriou writes: RP is in some sense a new and unusual kind of complexity class. Not any polynomially bounded nondeterministic Turing machine can be the ... • 16.3k 4k views ### Optimal greedy algorithms for NP-hard problems Greed, for lack of a better word, is good. One of the first algorithmic paradigms taught in introductory algorithms course is the greedy approach. Greedy approach results in simple and intuitive ... • 10.5k 9k views ### NP-hard problems on trees Several optimization problems that are known to be NP-hard on general graphs are trivially solvable in polynomial time (some even in linear time) when the input graph is a tree. Examples include ... • 10.5k 1k views ### NP-complete variants of undecidable problems? Examples of bounded $NP$-complete variants of undecidable sets: Bounded Halting problem={ $(M, x, 1^t)$| NTM machine $M$ halts and accepts $x$ within $t$ steps} Bounded Tiling={ $(T, 1^t)$| there is ... 703 views ### Hardness Guarantees for AES Many public-key cryptosystems have some kind of provable security. For example, the Rabin cryptosystem is provably as hard as factoring. I wonder whether such kind of provable security exists for ... • 16.3k 46k views ### What videos should everybody watch? Stanford University now has a Youtube channel, with free access to HD video of full courses on everything from dynamical systems to quantum entanglement. More conferences and workshops are ... 1 vote 16k views ### What is the k-SAT problem? [closed] First of all I am of course aware of the wikipedia article: http://en.wikipedia.org/wiki/Boolean_satisfiability_problem However I still do not understand exactly what the problem is. To demonstrate ... • 145 966 views ### Graph Theory Fun Problem Show that in any graph $G$ with min-degree $k$ ($k \geq 1$ duh!) you can find as its subgraph any tree on $k+1$ vertices. I have not been able to solve the question so far. However, I would like if ... • 1,953 180k views ### What papers should everyone read? This question is (inspired by)/(shamefully stolen from) a similar question at MathOverflow, but I expect the answers here will be quite different. We all have favorite papers in our own respective ... 873 views ### Universal Turing Machines in "Computational Complexity" by Papadimitriou The first part of this question has been solved (see comments). In the book "computational complexity" by Papadimitriou, a Universal Turing Machine is given. But this machine is not concrete, in the ... 
1k views ### Projective Plane of Order 12 Objective: Settle the conjecture that there is no projective plane of order 12. In 1989, using computer search on a Cray, Lam proved that no projective plane of order 10 exists. Now that God's ... • 6,974 351 views ### How can I model this usage scenario mathematically? I want to create a fairly simple mathematical model that describes usage patterns and performance trade-offs in a system. The system behaves as follows: clients periodically issue multi-cast packets ... • 153 2k views ### How do I formally describe a rooted, directed, acyclic graph? I need a formalism to describe the following requirements: I have a graph comprised of nodes and transitions between nodes Nodes maybe one of three types, all are sub-classes of a base abstract node ... • 153 2k views ### Introduction to spectral graph theory What are the basic references? Are there any good, high-level surveys of SGT and its applications to CS in general and machine learning more specifically? 440 views ### Is this problem mappable to 3SAT or is it weaker than 3SAT? Consider a variant of a satisifiability problem. Given n dimensions (n >= 3, n < 10,000 think of n as large but finite) The range of each dimension is either an interval over the integers or an ... • 141 530 views
2023-03-26 00:27:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6374027132987976, "perplexity": 1142.9171997601557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945376.29/warc/CC-MAIN-20230325222822-20230326012822-00141.warc.gz"}
http://www.mitp.ru/en/publ/abs_vol2/abs14.html
ON MAGNETIC FIELD GENERATION BY A CONDUCTING FLUID MOTION WITH INTERNAL SCALING. II V. A. Zheligovsky Abstract The paper is the continuation of a previous paper with the same title. A complete asymptotic decomposition of induction operator eigenvalues and eigenfunctions is constructed in the kinematic dynamo problem of magnetic field generation with the $\alpha$ effect by a motion of a conducting fluid with an internal scaling along three spatial variables in a sphere. Formulas derived here describe the mean value of the magnetic field excited in the hydromagnetic system under study. In some cases these formulas are the same as Braginsky's generation formulas, though the underlying dynamo mechanism is quite different. The theory developed here can be used to interpret magnetic fields of cosmic bodies having liquid electrically conducting cores or spherical shells in which a velocity with a specific internal scaling is assumed. Back to Computational Seismology, Vol. 2.
2017-12-18 14:33:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7824937701225281, "perplexity": 772.3858998973014}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948617816.91/warc/CC-MAIN-20171218141805-20171218163805-00431.warc.gz"}
https://cs.stackexchange.com/questions/57930/computational-complexity-of-logistic-map
# Computational complexity of logistic map My question is pretty simple and to the point. Is there a known way to efficiently compute logistic maps to within a specified precision? In other words, the input is a value $x$ and integers $d,n$; the desired output is the result of $n$ iterations of the logistic map applied to $x$, to $d$ bits of precision. I know of a way to do this using exact real arithmetic but the representations of real numbers that I know and the algorithms I know all take exponential time with respect to the requested number of bits of precision. Using fixed-point arithmetic doesn't work because each multiplication doubles the number of bits of precision needed, so the number of bits needed is exponential in the number of iterations. Is there a known efficient way to compute logistic maps to with specified precision?
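An added illustration of the blow-up described in the last paragraph (assuming the usual parametrised form of the map, $x \mapsto rx(1-x)$): iterating exactly with rational arithmetic makes the number of bits roughly double at every step, so the naive exact approach costs time exponential in the number of iterations $n$.

    from fractions import Fraction

    def logistic_exact(x0, r, n):
        # Iterate x -> r*x*(1-x) exactly; the numerator and denominator roughly
        # square (i.e. their bit lengths double) at every step.
        x = Fraction(x0)
        r = Fraction(r)
        for _ in range(n):
            x = r * x * (1 - x)
        return x

    x = logistic_exact(Fraction(1, 3), 4, 20)
    print(x.denominator.bit_length())   # about 1.7 million bits after only 20 steps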
2019-10-20 21:28:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8902329206466675, "perplexity": 160.37944154770085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986726836.64/warc/CC-MAIN-20191020210506-20191020234006-00493.warc.gz"}
http://www.emathzone.com/tutorials/basic-statistics/perfect-correlation.html
# Perfect Correlation Perfect Correlation: If any change in the value of one variable is accompanied by a change in the value of the other variable in a fixed proportion, the correlation between them is said to be perfect. It is indicated numerically as $+1$ or $-1$. Perfect Positive Correlation: If the values of both variables move in the same direction in a fixed proportion, the correlation is called perfect positive correlation. It is indicated numerically as $+1$. Perfect Negative Correlation: If the values of both variables move in opposite directions in a fixed proportion, the correlation is called perfect negative correlation. It is indicated numerically as $-1$.
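For example, if every pair of observations satisfies the exact linear relation $y = 3x + 2$, the correlation coefficient is $r = +1$ (perfect positive correlation); if instead $y = -3x + 2$, then $r = -1$ (perfect negative correlation). Any exact linear relation with a nonzero slope gives one of these two extreme values.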
2016-05-02 21:14:59
{"extraction_info": {"found_math": true, "script_math_tex": 4, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4865815341472626, "perplexity": 584.7114730283554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860117783.16/warc/CC-MAIN-20160428161517-00067-ip-10-239-7-51.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-6-section-6-6-vectors-exercise-set-page-782/32
Precalculus (6th Edition) Blitzer $3u+4v=-6i+13j$ We are given the two vectors $u$ and $v$ as follows: $v=-3i+7j$ and $u=2i-5j$ Now, $3u+4v=3(2i-5j)+4(-3i+7j)=(6-12)i+(-15+28)j$ Hence, $3u+4v=-6i+13j$
2021-04-21 09:44:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9969391226768494, "perplexity": 154.10162907196136}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039526421.82/warc/CC-MAIN-20210421065303-20210421095303-00628.warc.gz"}
https://www.lmfdb.org/L/rational/2/40%5E2/1.1/c1-0
## Results (25 matches) Label $\alpha$ $A$ $d$ $N$ $\chi$ $\mu$ $\nu$ $w$ prim $\epsilon$ $r$ First zero Origin 2-40e2-1.1-c1-0-11 $3.57$ $12.7$ $2$ $2^{6} \cdot 5^{2}$ 1.1 $$1.0 1 1 0 1.16753 Elliptic curve 1600.s Modular form 1600.2.a.s Modular form 1600.2.a.s.1.1 2-40e2-1.1-c1-0-12 3.57 12.7 2 2^{6} \cdot 5^{2} 1.1$$ $1.0$ $1$ $1$ $0$ $1.17328$ Elliptic curve 1600.v Modular form 1600.2.a.v Modular form 1600.2.a.v.1.1 2-40e2-1.1-c1-0-14 $3.57$ $12.7$ $2$ $2^{6} \cdot 5^{2}$ 1.1 $$1.0 1 1 0 1.26575 Elliptic curve 1600.w Modular form 1600.2.a.w Modular form 1600.2.a.w.1.1 2-40e2-1.1-c1-0-15 3.57 12.7 2 2^{6} \cdot 5^{2} 1.1$$ $1.0$ $1$ $-1$ $1$ $1.26607$ Elliptic curve 1600.b Modular form 1600.2.a.b Modular form 1600.2.a.b.1.1 2-40e2-1.1-c1-0-16 $3.57$ $12.7$ $2$ $2^{6} \cdot 5^{2}$ 1.1 $$1.0 1 -1 1 1.27038 Elliptic curve 1600.a Modular form 1600.2.a.a Modular form 1600.2.a.a.1.1 2-40e2-1.1-c1-0-18 3.57 12.7 2 2^{6} \cdot 5^{2} 1.1$$ $1.0$ $1$ $-1$ $1$ $1.38798$ Elliptic curve 1600.c Modular form 1600.2.a.c Modular form 1600.2.a.c.1.1 2-40e2-1.1-c1-0-19 $3.57$ $12.7$ $2$ $2^{6} \cdot 5^{2}$ 1.1 $$1.0 1 1 0 1.39439 Elliptic curve 1600.x Modular form 1600.2.a.x Modular form 1600.2.a.x.1.1 2-40e2-1.1-c1-0-2 3.57 12.7 2 2^{6} \cdot 5^{2} 1.1$$ $1.0$ $1$ $1$ $0$ $0.647411$ Elliptic curve 1600.g Modular form 1600.2.a.g Modular form 1600.2.a.g.1.1 2-40e2-1.1-c1-0-20 $3.57$ $12.7$ $2$ $2^{6} \cdot 5^{2}$ 1.1 $$1.0 1 1 0 1.48624 Elliptic curve 1600.y Modular form 1600.2.a.y Modular form 1600.2.a.y.1.1 2-40e2-1.1-c1-0-21 3.57 12.7 2 2^{6} \cdot 5^{2} 1.1$$ $1.0$ $1$ $-1$ $1$ $1.48860$ Elliptic curve 1600.d Modular form 1600.2.a.d Modular form 1600.2.a.d.1.1 2-40e2-1.1-c1-0-22 $3.57$ $12.7$ $2$ $2^{6} \cdot 5^{2}$ 1.1 $$1.0 1 -1 1 1.51149 Elliptic curve 1600.e Modular form 1600.2.a.e Modular form 1600.2.a.e.1.1 2-40e2-1.1-c1-0-25 3.57 12.7 2 2^{6} \cdot 5^{2} 1.1$$ $1.0$ $1$ $-1$ $1$ $1.54134$ Elliptic curve 1600.h Modular form 1600.2.a.h Modular form 1600.2.a.h.1.1 2-40e2-1.1-c1-0-26 $3.57$ $12.7$ $2$ $2^{6} \cdot 5^{2}$ 1.1 $$1.0 1 -1 1 1.73719 Elliptic curve 1600.l Modular form 1600.2.a.l Modular form 1600.2.a.l.1.1 2-40e2-1.1-c1-0-27 3.57 12.7 2 2^{6} \cdot 5^{2} 1.1$$ $1.0$ $1$ $-1$ $1$ $1.76058$ Elliptic curve 1600.m Modular form 1600.2.a.m Modular form 1600.2.a.m.1.1 2-40e2-1.1-c1-0-28 $3.57$ $12.7$ $2$ $2^{6} \cdot 5^{2}$ 1.1 $$1.0 1 -1 1 1.87089 Elliptic curve 1600.p Modular form 1600.2.a.p Modular form 1600.2.a.p.1.1 2-40e2-1.1-c1-0-29 3.57 12.7 2 2^{6} \cdot 5^{2} 1.1$$ $1.0$ $1$ $-1$ $1$ $1.89767$ Elliptic curve 1600.o Modular form 1600.2.a.o Modular form 1600.2.a.o.1.1 2-40e2-1.1-c1-0-30 $3.57$ $12.7$ $2$ $2^{6} \cdot 5^{2}$ 1.1 $$1.0 1 -1 1 1.90088 Elliptic curve 1600.q Modular form 1600.2.a.q Modular form 1600.2.a.q.1.1 2-40e2-1.1-c1-0-31 3.57 12.7 2 2^{6} \cdot 5^{2} 1.1$$ $1.0$ $1$ $-1$ $1$ $2.07698$ Elliptic curve 1600.r Modular form 1600.2.a.r Modular form 1600.2.a.r.1.1 2-40e2-1.1-c1-0-32 $3.57$ $12.7$ $2$ $2^{6} \cdot 5^{2}$ 1.1 $$1.0 1 -1 1 2.13213 Elliptic curve 1600.t Modular form 1600.2.a.t Modular form 1600.2.a.t.1.1 2-40e2-1.1-c1-0-33 3.57 12.7 2 2^{6} \cdot 5^{2} 1.1$$ $1.0$ $1$ $-1$ $1$ $2.15062$ Elliptic curve 1600.u Modular form 1600.2.a.u Modular form 1600.2.a.u.1.1 2-40e2-1.1-c1-0-4 $3.57$ $12.7$ $2$ $2^{6} \cdot 5^{2}$ 1.1 $$1.0 1 1 0 0.778156 Elliptic curve 1600.k Modular form 1600.2.a.k Modular form 1600.2.a.k.1.1 2-40e2-1.1-c1-0-5 3.57 12.7 2 2^{6} \cdot 5^{2} 1.1$$ $1.0$ $1$ $1$ $0$ $0.806837$ Elliptic curve 1600.i Modular form 1600.2.a.i Modular form 
1600.2.a.i.1.1 2-40e2-1.1-c1-0-6 $3.57$ $12.7$ $2$ $2^{6} \cdot 5^{2}$ 1.1 $$1.0 1 1 0 0.820603 Elliptic curve 1600.f Modular form 1600.2.a.f Modular form 1600.2.a.f.1.1 2-40e2-1.1-c1-0-7 3.57 12.7 2 2^{6} \cdot 5^{2} 1.1$$ $1.0$ $1$ $1$ $0$ $0.919526$ Elliptic curve 1600.j Modular form 1600.2.a.j Modular form 1600.2.a.j.1.1 2-40e2-1.1-c1-0-8 $3.57$ $12.7$ $2$ $2^{6} \cdot 5^{2}$ 1.1 $1.0$ $1$ $1$ $0$ $0.923744$ Elliptic curve 1600.n Modular form 1600.2.a.n Modular form 1600.2.a.n.1.1
2021-05-13 16:12:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9748189449310303, "perplexity": 1237.5620283958447}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989814.35/warc/CC-MAIN-20210513142421-20210513172421-00164.warc.gz"}
https://homework.cpm.org/category/CCI_CT/textbook/Calc3rd/chapter/Ch9/lesson/9.2.2/problem/9-69
9-69. Convert the following sets of parametric equations into rectangular form (in terms of $x$ and $y$). 1. $x = \cos(t) \text{ and } y = \sin(t)$ 2. $x = \cos(2t) \text{ and } y = \sin(2t)$ 3. $x = t^4 - 3t^2 \text{ and } y = t^2$ Hint: $\sin^2(x) + \cos^2(x) = 1$
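A short worked sketch of the substitution for parts (1) and (3): in part (1), squaring and adding gives $x^2 + y^2 = \cos^2(t) + \sin^2(t) = 1$, the unit circle (part (2) traces the same circle, just twice as fast). In part (3), since $y = t^2$, substituting into $x = t^4 - 3t^2$ gives $x = y^2 - 3y$.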
2020-10-21 01:47:41
{"extraction_info": {"found_math": true, "script_math_tex": 6, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8504459857940674, "perplexity": 3915.233360793275}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107874637.23/warc/CC-MAIN-20201021010156-20201021040156-00149.warc.gz"}
http://timjones.io/blog/archive/2014/01/19/dryrunner-isolated-integration-testing-for-aspnet
# DryRunner: Isolated integration testing for ASP.NET ### TL/DR DryRunner is an open source library for .NET that enables isolated integration testing for ASP.NET websites. ### The problem ASP.NET might not have such a rich ecosystem of testing frameworks as, say, Ruby on Rails, but the situation is improving. Tools like SpecFlow, a .NET port of Cucumber, make integration testing much easier than it used to be. I like SpecFlow a lot. It’s a great way to do end-to-end testing of an ASP.NET website. Combined with a browser automation tool like Selenium, it lets you programmatically simulate a user clicking around in a real web browser, performing a sequence of steps - perhaps creating an account or logging in. The trouble is: on which instance of your site should you run these tests? If the test involves creating an account, for example, then you don’t want to clutter your production database with all those test accounts. You probably also don’t want to use your local development site - the one running in IIS or IIS Express on your own computer - partly to avoid clutter, and partly so that you can treat test data as disposable, in case you want to empty the database before a test run. I imagine a common solution is to have a test instance of the site running on a server somewhere, perhaps in combination with a continuous integration server like TeamCity. TeamCity could grab the latest code, deploy it to the test instance, empty the test database, and run the SpecFlow tests against the test instance. Which is all well and good - and it is good - but what if you want to run this type of test on your own computer, before checking in? Testing is supposed to be all about a quick feedback loop, after all. ### Introducing DryRunner I couldn’t find an existing solution that I was happy with, so I wrote DryRunner. DryRunner allows you to do isolated integration testing for ASP.NET websites - and by isolated, I mean that the test website is separate, uses its own database, exists in a separate folder structure, and can be deleted when you’ve finished with it. DryRunner itself is actually quite simple, because it’s built on a few existing Lego blocks: ASP.NET deployment packages, web.config transforms, and IIS Express. So what does DryRunner actually do? It will: • deploy a test version of your website to a temporary location, • host the test version of your website using IIS Express, and • clean up afterwards by deleting the test version. DryRunner requires you to create a Test build configuration (alongside the usual Debug, Release and any other build configurations you may already have). You can use Web.Test.config to configure test-specific database connection strings, and other test-specific settings. DryRunner is open source, and you’ll find the source code on GitHub. More usefully, there’s a DryRunner package on NuGet. ### For example… I think a concrete example will help more than an abstract explanation would, so here goes. (For more concise usage instructions, see the GitHub page.) First, we’ll create a new ASP.NET MVC 4 Web Application. Choose the Internet Application template, and don’t create a unit test project. We’ll create one ourselves later. If we start the website now, we’ll see this in the browser. Later on, we’ll write an integration test that ensures the phrase “To learn more” is present on the page. Before using DryRunner, we need to create a Test configuration for the website you want to test. 
DryRunner will build the website using the Test configuration, including the relevant web.config transform, if you have one. Right-click on the solution name in Solution Explorer, and click “Configuration Manager…”. Find your web project, and in the Configuration column, choose “<New…>”. In the Name textbox, enter "Test". Uncheck the box to create new solution configurations. Now add a new Class Library project named [ProjectName].AcceptanceTests. Install the NUnit and DryRunner NuGet packages (skip NUnit if you prefer MSTest or another test framework). Add a test class named HomePageTests.cs to the test project:

    using System.Net;
    using DryRunner;
    using NUnit.Framework;

    namespace DryRunnerSample.AcceptanceTests
    {
        [TestFixture]
        public class HomePageTests
        {
            private const int Port = 9000;
            private TestSiteManager _testSiteManager;

            [SetUp]
            public void SetUp()
            {
                const string websiteProjectName = "DryRunnerSample";
                _testSiteManager = new TestSiteManager(websiteProjectName, Port);
                _testSiteManager.Start();
            }

            [TearDown]
            public void TearDown()
            {
                _testSiteManager.Stop();
            }

            [Test]
            public void HomePageContainsCorrectContent()
            {
                using (var webClient = new WebClient())
                {
                    // The body of this test did not survive extraction; it fetches the
                    // home page from the test site and checks the raw HTML, roughly:
                    var html = webClient.DownloadString("http://localhost:" + Port);
                    StringAssert.Contains("To learn more", html);
                }
            }
        }
    }

I'd normally use frameworks like SpecFlow and Selenium to avoid matching against the raw HTML, but I wanted to keep this example simple. Run this test - it should pass. And that's pretty much it - we're now able to run integration tests against an isolated copy of an ASP.NET website. So far, there's not really any benefit over running integration tests in-place on your development website. But you will almost certainly want to manipulate a database as part of your tests. Here's where DryRunner comes into its own: it lets you use test-specific settings, such as a test-specific database connection string. This is best done using web.config transforms. In your web project, right-click on Web.config, and choose Add Config Transform. You'll see a Web.Test.config file is added to the project. Refer to the web.config transformation syntax to see how you can insert new settings or replace settings inherited from the base Web.config.
2019-03-23 23:16:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2099410593509674, "perplexity": 3132.465023672179}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203093.63/warc/CC-MAIN-20190323221914-20190324003914-00467.warc.gz"}
http://mathhelpforum.com/calculus/112263-need-help-solving-trig-limit.html
# Math Help - Need help solving a trig limit 1. ## Need help solving a trig limit Evaluate the following limit without the use of L'Hopital's rule: $\lim_{x\to0}\frac{\sin^2 x}{\sqrt{1+x\sin x}-\cos x}$ My first instinct was to multiply by the conjugate, leaving me with: $\lim_{x\to0}\frac{\sin^2 x(\sqrt{1+x\sin x}+\cos x)}{1+x\sin x-\cos^2 x}$ And from here I tried rearranging as best I could but could not end up with any form without zero in the denominator. 2. Originally Posted by xxlvh Evaluate the following limit without the use of L'Hopital's rule: $\lim_{x\to0}\frac{\sin^2 x}{\sqrt{1+x\sin x}-\cos x}$ My first instinct was to multiply by the conjugate, leaving me with: $\lim_{x\to0}\frac{\sin^2 x(\sqrt{1+x\sin x}+\cos x)}{1+x\sin x-\cos^2 x}$ And from here I tried rearranging as best I could but could not end up with any form without zero in the denominator. $\lim_{x\to0}\frac{\sin^2 x}{\sqrt{1+x\sin x}-\cos x}$ = $\lim_{x\to0}\frac{\sin^2 x(\sqrt{1+x\sin x}+\cos x)}{1+x\sin x-\cos^2 x}$ = $\lim_{x\to0}\frac{\sin^2 x(\sqrt{1+x\sin x}+\cos x)}{\sin^2 x+x\sin x}$ = $\lim_{x\to0}\frac{\sqrt{1+x\sin x}+\cos x}{1+\frac{x}{\sin x}}$ = $\frac{\sqrt{1+0}+1}{1+1}$ = $\frac{2}{2}$ = $1$
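Note that the only nonobvious ingredient in the final step is the standard limit $\lim_{x\to0}\frac{\sin x}{x}=1$, which gives $\frac{x}{\sin x}\to 1$, so the denominator tends to $2$ while the numerator tends to $\sqrt{1}+1=2$.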
2016-07-31 00:27:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.938620924949646, "perplexity": 343.42901515475717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469258944256.88/warc/CC-MAIN-20160723072904-00090-ip-10-185-27-174.ec2.internal.warc.gz"}
http://physics.stackexchange.com/questions/23154/how-does-matter-transform-into-energy-and-vice-versa
How does matter transform into energy and vice versa? In what ways can energy transform into matter and vice versa? Annihilation is one way to transform matter to energy. Fission is another (when splitting an atom, what happens to its two parts?) Are quantum fluctuations one way to transform energy to matter? - There's nothing special about nuclear reactions. Chemical reactions also result in a change in mass due to the energy released. It's just that the energy scale for chemical reactions is about $10^6$ times smaller. – Ben Crowell May 29 '13 at 14:59

In what ways can energy transform into matter and vice versa? Energy and matter are connected according to special relativity and this has been experimentally demonstrated. It is expressed by the famous formula: $E=mc^2$, where $m$ is the relativistic mass and $c$ the velocity of light, or $E^2=m_0^2c^4+p^2c^2$ for a particle with rest mass $m_0$ moving with momentum $p$. The rules of transformation follow quantum mechanical solutions of kinematic and potential problem equations. Annihilation is one way to transform matter to energy. Yes. Fission is another (when splitting an atom, what happens to its two parts?) In the quantum mechanical description of nuclei they are represented by potential wells with energy levels, some filled. The number of baryons (protons and neutrons) bound in this potential well characterizes the nucleus. Nucleus A that is struck by a neutron (for example) becomes a nucleus B higher up in baryons by absorbing it into an energy level of this potential well. In fission this higher up nucleus is unstable and falls into a lower energy state, giving up part of its mass as energy according to the relativistic formulae, and breaking into smaller nuclei and free neutrons which go on to sustain the fission on another original nucleus. Generally a form of fission happens if a nucleus is unstable. There is also fusion, two deuterium nuclei adhering at a lower energy level and giving up energy. The binding energy curve shows whether nucleons can fuse or fission and give up as energy a part of their mass. Are quantum fluctuations one way to transform energy to matter? No, quantum fluctuations are virtual. If you mean tunneling, yes. - Thank you for your answer! I was thinking about splitting up things, is it theoretically possible to split a neutron, electron and other particles smaller than an atom? What would happen? Even more energy? – Rox Apr 2 '12 at 7:43 At the moment one cannot split elementary particles, and I think this will always be true. A neutron is not elementary, and it decays into a proton, an electron and an electron antineutrino, giving up the difference in mass with the proton also as energy. A proton, though not elementary because it is composed of quarks, might decay in some theories, but cannot be split in the sense of separating the quarks because quarks are bound strongly - the further their distance from the center of mass of the proton, the stronger the binding. I know of no mainstream theory that allows electrons to be composite. – anna v Apr 2 '12 at 7:50 @Rox you can 'accept' an answer that you're satisfied with by clicking the green tick next to it. You don't have to if you feel that the answers are incomplete, though. Oh, and don't accept my answer above, it only addresses half the question. – Manishearth Apr 2 '12 at 16:44 Related note: Fission isn't exactly turning matter into energy. It just releases the binding energy of the nucleus.
This binding energy is part of the measured mass pf the nucleus, but if you want to separate "matter" and "energy" (not really possible), then it counts as energy. $\newcommand{\a}[3]{\mathrm{^{#1}_{#2}#3}}$ $$\a{235}{92}{U}+\a10n\to\a{236}{92}{U}^*\to\a{144}{56}{Ba}+\a{89}{36}{Kr}+3\a10n+|\Delta H|\approx177\:\rm{MeV}$$ Note that initially, we have 93 protons and 142 neutrons; and in the end this number does not change. From this POV, where particles count as "mass", we can say that no mass was created or destroyed, and the nuclear binding energy was released. Why do we call this a conversion from mass to energy if its just a converseion of types of energy? Well, that's because mass is energy. The fact is, if you "weighed" $\a{235}{92}{U}+\a10n$, it would weigh more than $\a{144}{56}{Ba}+\a{89}{36}{Kr}+3\a10n$. Actually, $\a{235}{92}{U}$ weighs less than $92\a11p+141\a10n$. That's because the binding energy of the nucleus is "negative" energy, and thus "annihilates" some mass (since mass is energy). It turns out that due to this, the fission products are lighter than the reactants, even if the number of nucleons is the same. And this loss of "mass" is converted into energy. So really, there's a bit of fuzziness on the border of "energy" and "mass". Anything with an energy density will have extra mass, and you won't be able to tell the difference between a body with mass $m$ and a body with mass $m-\frac{U}{c^2}$ and internal energy $U$. - Wood burning is an example of converting mass into energy. Another is a seed which takes energy from the sun (and water, air etc.) and converts it into matter. - I'm not at all sure of the downvote. Maybe Rohit has to write a little more detail, but his example is exactly in the spirit of Ben Crowell's comment, which I have expanded my answer around. (+1 BTW Rohit, but maybe you should write a little more to explain precisely what you mean). – WetSavannaAnimal aka Rod Vance Nov 18 '14 at 3:02 As well as the other answers, in particular, Anna V's comprehensive Answer, I would like to capture Ben Crowell's comment for permanence in this dicusssion: There's nothing special about nuclear reactions. Chemical reactions also result in a change in mass due to the energy released. It's just that the energy scale for chemical reactions is about $10^6$ times smaller. [My italics] and urge you to think of matter and energy to be different states of the same essential thing. People still get overwrought by the conversion of one into the other, but now physics and the physics culture has moved on to such a degree that the word "matter" has well and truly passed its use-by date. We (physicists) have kept the word "energy" for meaning the quantity that is conserved by dint of Noether's theorem applied to time-shift invariance of physical laws - and this word comprises everything that might be considered "stuff", i.e. all matter and energy in the old usage - more precisely: it comprises anything that constributes to the $T_{0\,0}$ term of the relativistic stress energy tensor. You might, for example, want to use the word "matter" for anything that has nonzero rest mass, but even this doesn't work properly as my writeup of the light-in-a-box thought experiment here shows that confined light has a rest mass. 
So, at the risk of sounding too colloquial for scientific discussion, I think simply of the word "stuff" for meaning anything that contributes to the stress energy tensor in the way described above, the word "energy" for quantifying the amount of "stuff" and if you need more precision than this, then you must specify the exact class of "stuff" in terms of the precise particle / quantum field names from the standard model and chemical reactants / products. To try to otherwise partition "stuff" into matter and energy is to grope for what is now a thoroughly artificial, imprecise and outdated dichotomy. - Are quantum fluctuations one way to transform energy to matter? Yes, at least in theory. This is how Hawking Radiation is predicted to work. In this case the gravitational energy of a black hole 'boosts' the energy of a quantum fluctuation to create an actual particle/antiparticle pair, one of which gets sucked into the black hole and one of which escapes. Have a look at this excerpt from the Wikipedia page on Gamma rays talking about a gamma ray turning into an electron-position pair. By interaction with the electric field of a nucleus, the energy of the incident photon is converted into the mass of an electron-positron pair. Any gamma energy in excess of the equivalent rest mass of the two particles (totaling at least 1.02 MeV) appears as the kinetic energy of the pair and in the recoil of the emitting nucleus. Essentially creation of matter from energy (and vice versa) needs to follow Einstein's famous equation... E=mc2 - In what ways can energy transform into matter and vice versa? I am sure in special relativistic theory there is no such transformation. Why? Energy is an abstract mathematical quantity obeying local conservation law. Matter is a basic thing the world is made of. It is not a mathematical concept. One can quantify one aspect of it, say introduce inertial mass $m$. That ignores all the other things about matter we know - much of chemistry and physics. Obviously, the basic thing the world is made of does not change into abstract mathematical quantities. There is a real idea behind that quoted statement, but it needs to be stated differently. The idea is Einstein's conclusion that loss of energy $L$ from a body is accompanied by decrease of body's mass by $L/c^2$ (and vice versa). Based on this conclusion, he introduced the formula $$E = mc^2$$ as definition of total energy of body at rest. When someone says "mass can change into radiation energy " he really means "part of energy of massive body associated with mass $m$ has changed form and location from energy in the body to energy in the EM field". - One cannot obtain "clean" energy which is completely free off momentum, and cannot obtain "clean" matter which is free off momentum and potential energy. So question is ill-posed, there is no "clean" states which can be described as "energy into matter". That just cannot happen. When we consider reactions of elementary particles, the most common scenario is fission of one big particle, which is unstable by interactions which govern its stability. In this "one into many" scenario, you have energy released, because your momentum could be easily preserved. But mostly discussed electron-positron annihilation is very unprobable in "common random occurence". Because momentums of motion should satisfy $p_1+p_2<\delta$. In common scenario these two particles will just scatter, without any annihilation! 
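As a rough worked example of the numbers quoted above (this uses only the standard conversion $1\ \mathrm{u} \approx 931.5\ \mathrm{MeV}/c^2$ and the $\approx 177\ \mathrm{MeV}$ figure from the fission equation quoted earlier; it is an editorial illustration, not part of the original answers): the released energy corresponds to a mass defect of $$\Delta m = \frac{E}{c^2} \approx \frac{177\ \mathrm{MeV}}{931.5\ \mathrm{MeV}/\mathrm{u}} \approx 0.19\ \mathrm{u},$$ which is only about $0.19/236 \approx 0.08\%$ of the roughly $236\ \mathrm{u}$ of reactants. The nucleon count is unchanged, but the fission products really do end up lighter by this small amount.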
2016-02-08 14:37:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6105009317398071, "perplexity": 496.20373750280197}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701153585.76/warc/CC-MAIN-20160205193913-00326-ip-10-236-182-209.ec2.internal.warc.gz"}
https://physics.stackexchange.com/tags/newtonian-mechanics/hot
# Tag Info 45 I've confirmed the experiment, using a McD_n_lds paper drinks cup and a beer can hollow plastic ball of about $5\mathrm{g}$, of about the same diameter as a ping pong ball (PPB): The observed effect depends largely on the cup being soft and permanently deformable (like an object made of blutack or playdough), so its collision with Earth is inelastic. A ... 20 I don't know if there's a beautiful solution for this. I'd love to see it, if it exists. What I can do is show you how I slogged my way through it. All praise to the mighty Mathematica. Part I: Obtaining the Equations of Motion First, we can dispense with the cylinder and rod and consider only a point mass $M$ on a ring of mass $m$ and radius $R$. Define $... 17 As mentioned in the comments above, the ball in the cup is similar to Galilean Cannon. The maximum height to which the ball can bounce$h_{max}$can be estimated using the law of energy conservation: $$(m+M)gH=mgh+E_{cup}+E_{water}+E_{heat},$$ where$m$is the mass of the ball,$M$is the mass of cup+water,$H$is the initial height from which the ball was ... 14 Potential energy is still a scalar quantity even in more than one dimension. This is because it only has a magnitude, there is no direction of potential energy. You can think of it as similar to the temperature in a room. Even though it varies with position, it still does not have a direction. 11 Turns out the tube does jump if the rod is no less than 13 times the mass of the tube. My previous answer had a couple of mistakes that yielded the wrong result, here is the updated one. Let$M$be the mass of the tube,$m$the mass of the rod and$R$the radius of the tube. Let$\theta$be the angle between the vertical and the direction of the rod from the ... 9 No, just because a value changes over space doesn't mean it is a vector quantity. Potential energy is not a vector. It is a scalar quantity related to its corresponding conservative force by $$\mathbf F=-\nabla U=-\left(\frac{\partial U}{\partial x}\hat x+\frac{\partial U}{\partial y}\hat y+\frac{\partial U}{\partial z}\hat z\right)$$ So the force components ... 8 Recall that if a ball normally hits a wall elastically, its velocity will be exactly reversed. Suppose the whole system hits the ground with speed$v$. Now, as the cup and the water hits the soft mat, their speed quickly reduces, and may start moving upward (depending on how soft the mat is) before the ping-pong ball is affected by a reaction force. Suppose ... 6 Since potential energy is function of position, ... hence can be considered as vector quantity? You make a conceptual mistake: "$n$-dimensional vector quantity" does not mean that a quantity depends on the position in an$n$-dimensional coordinate system. "$n$-dimensional vector quantity" means that a quantity requires$n$different real ... 6 My hypothesis why the ping pong ball receives a large upward impulse: The floating ping pong ball is displacing some water. The amount of displacement does not change much during the fall. As the cup hits the floor the deceleration of the quantity of water gives a short pressure peak. Because of that pressure peak the water that is in contact with the ping ... 5 The force on object$1$due to object$2$can be computed by doing a six-dimensional integral, $$-G\int_{V_1}d^3\mathbf{r}_1\rho_1(\mathbf{r}_1)\int_{V_2}d^3\mathbf{r}_2\rho_2(\mathbf{r}_2)\frac{\mathbf{r}_1-\mathbf{r}_2}{|\mathbf{r}_1-\mathbf{r}_2|^3},$$ where$\rho_1(\mathbf{r}_1)$is the mass density of object$1$and$\rho_2(\mathbf{r}_2)$that of object ... 
4 Definition: The change in potential energy of the system is defined as the negative of work done by the internal conservative forces of the system. Potential energy may vary with space just like mass of a non uniform rod which may be represented as$f(x,y,z)$. After all potential energy is basically negative of work done by internal conservative forces which ... 4 The answer is that the moment of inertia is changing not only due to the instantaneous rotation about the contact point, but also because of the horizontal motion of the cylinder. The position of the rod (I won't write the z component of vectors) is $$r=R(\sin \theta, 1+\cos\theta)+\int_{t_0}^t (R\omega(t'),0)dt'$$ where at$t=t_0$, the cylinder is above the ... 3 Mathematically, moving between inertial and non-inertial frames correspond to moving terms from one side of Newton's second law to the other side. So, in your non-inertial frame accelerating with the incline you have for Newton's second law along the incline (using your notation) $$Mg\sin\theta+Ma_0\sin\theta=Ma_\text{net}$$ Moving to the inertial frame we ... 3 I don't understand why reaction force would decrease at the start of a countermovement jump. The reaction force changes because of changes in the vertical acceleration of the center of gravity of the persons body throughout the counter movement, as follows. At position A, before the person body starts dropping, the net force on the person is zero, so that$... 3 The article specifies the equation dealing with kinetic energy is looking at the relative kinetic energy. For a perfectly inelastic collision, the bodies are not moving relative to each other, so the relative kinetic energy is $0$. Thus there is no contradiction. To add more detail to this, the best thing to do is to work in the center of momentum frame, ... 3 Torque here is not external, you can tell because the total angular momentum in the system is the sum of the angular momentum of the two disks. Therefore the two disks are what makes up the system, neither of them are an external object. They only exchange momentum between each other, as they have both applied torques to each other. It is the same concept as ... 3 Both. Suppose that a particle of mass $m$ is in uniform circular motion with radius $r$ and tangential velocity $v_T$. We know that there must be a centripetal force maintaining this motion $$F_c = m \frac{v_T^2}{r}.$$ We also know that the system has an associated angular momentum whose magnitude is $$L = rmv_T.$$ Now if we only increase the centripetal ... 2 The slowing-down of your car is a function of all the resistive forces acting upon it: that is, all forces that are trying to dissipate the car's kinetic energy relative to the surface of the Earth. Hence, depending on the exact nature of those forces, the slow-down time, and the deceleration profile, can vary. That said, we can nonetheless come up with some ... 2 If the system is the two discs then the frictional forces apply internal torques which have a net value of zero - the internal torques are opposite in direction and equal in magnitude. If no external torques are applied then angular momentum is conserved. 2 Let's assume that tension increases down the rope then for this section of rope to be in equilibrium $$T-(T+\Delta T)=\Delta mg$$ As rope is massless, $\Delta m=0$ So, $\Delta T=0$ Therefore the magnitude of tension is constant throughout the massless rope. 2 Why is the tension dependent on the acceleration of the trolley? 
The acceleration of the large block $M$ and the attached pulley does put a force on the string because the string must accelerate with the pulley. This increases the tension of the string and so creates a larger force on both blocks $m_1$ and $m_2$. How can acceleration of the trolley prevent ... 2 To calculate the equation of motion we obtain the sum of the torques about point A, because we don't have to take care about the contact force. first I obtain the vector u from point B to A $$\vec{u}=R\,\begin{bmatrix} 0 \\ -1 \\ 0 \\ \end{bmatrix}- R\,\begin{bmatrix} \sin{\theta} \\ \cos(\theta) \\ 0 \\ \end{bmatrix}=-R\,\begin{bmatrix} \... 2 Assume that the water in the cup is compressible and inviscid, experiencing one-dimensional flow and thereby satisfying the one-dimensional Euler equations. Initial conditions, velocity =\sqrt{gh} downward and pressure =1 atm, are both uniform. The bottom of the cup is struck from below in such a way that the velocity of the water is reduced and the ... 2 In general, you can only apply \tau_C = \tfrac{\rm d}{{\rm d}t} L_C about the center of mass C. The expression about a different point is quite more complex. You can see that taking the torque about another point A (not the center of mass C), and the derivative of angular momentum about A isn't enough to solve the problem. Using the standard ... 2 If you define the gravitational potential energy between two bodies to be zero when the bodies are infinitely far apart, then naturally the gravitational potential is always negative. However, the potential energy can increase in a closed system (while remaining negative). Consider two bodies that are receding from each other. As they get farther apart, ... 2 The up and down movement of each individual point visualised only the fact that the wave does not transfer mass. However, the points are not independent, but coupled: If the position of a specific point x_i is x_i - x_0, where x_0 is the equilibrium position, the neighbouring points have similar positions. The energy is transferred due to this ... 1 The law of conservation of angular momentum states that when no external torque acts on an object, no change of angular momentum will occur. Yes there is friction between the discs,when they come into contact . Consider the resultant of the friction forces acting on the discs to be F. As shown above they are an action-reaction pair.They are internal forces. ... 1 The EOM's with x'=v(\tau)\,\tau+x where v(\tau) is the velocity between x' and x you obtain the kinetic energy and the potential energy of a pendulum that move in the prime system. Pendulum position vector$$\vec{R}=\left[ \begin {array}{c} v \left( \tau \right) \tau+L\sin \left( \varphi \right) \\L\cos \left( \varphi \right) \end {array} \right] ... 1 loose sand has no shear strength. This is why bike wheels skid on sand: the sand adheres to the tire, but that sand shears loose from the rest of the sand. Then you fall down go BOOM. 1 It seems I've figured out why the tube will jump if the mass of the rod is large enough, but I can't calculate the exact threshold. My proof of the jump is below. Let's suppose that the mass of the original tube (i.e., without the rod) is infinitesimal, whilst the mass of the rod is finite. Let's also assume that the tube won't jump. I'm going to prove the ... Only top voted, non community-wiki answers of a minimum length are eligible
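As a small illustration of the relation $\mathbf F = -\nabla U$ used in the potential-energy answers above, here is a minimal sketch (Python with SymPy; the sample potential is an arbitrary choice for illustration and is not taken from any of the answers):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Sample scalar potential U(x, y, z); any differentiable expression works here.
U = x**2 * y + sp.sin(z)

# The conservative force is minus the gradient of the potential: F = -grad U.
F = [-sp.diff(U, var) for var in (x, y, z)]

print(F)  # [-2*x*y, -x**2, -cos(z)]
```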
2020-07-03 14:41:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8341646194458008, "perplexity": 331.82757416717425}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655882051.19/warc/CC-MAIN-20200703122347-20200703152347-00013.warc.gz"}
https://gmatclub.com/forum/thurston-wrote-an-important-seven-digit-phone-number-on-a-na-159622.html?sort_by_oldest=true
GMAT Question of the Day: Daily via email | Daily via Instagram New to GMAT Club? Watch this Video It is currently 14 Jul 2020, 15:44 ### GMAT Club Daily Prep #### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track Your Progress every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History # Thurston wrote an important seven-digit phone number on a na new topic post reply Question banks Downloads My Bookmarks Reviews Important topics Author Message TAGS: ### Hide Tags Manager Joined: 29 Aug 2013 Posts: 64 Location: United States Concentration: Finance, International Business GMAT 1: 590 Q41 V29 GMAT 2: 540 Q44 V20 GPA: 3.5 WE: Programming (Computer Software) Thurston wrote an important seven-digit phone number on a na  [#permalink] ### Show Tags Updated on: 12 Sep 2013, 03:31 4 29 00:00 Difficulty: 95% (hard) Question Stats: 50% (02:44) correct 50% (02:43) wrong based on 212 sessions ### HideShow timer Statistics Thurston wrote an important seven-digit phone number on a napkin, but the last three numbers got smudged. Thurston remembers only that the last three digits contained at least one zero and at least one non-zero integer. If Thurston dials 10 phone numbers by using the readable digits followed by 10 different random combinations of three digits, each with at least one zero and at least one non-zero integer, what is the probability that he will dial the original number correctly? A. 1/9 B. 10/243 C. 1/27 D. 10/271 E. 1/1000000 Originally posted by shameekv on 12 Sep 2013, 03:25. Last edited by Bunuel on 12 Sep 2013, 03:31, edited 1 time in total. Renamed the topic and edited the question. ##### Most Helpful Community Reply Intern Joined: 03 Sep 2013 Posts: 1 Re: Thurston wrote an important seven-digit phone number on a na  [#permalink] ### Show Tags 12 Sep 2013, 03:33 1 1 5 The answer is 1/27. Our first step is determining how many possible three-digit numbers there are with at least one zero and one nonzero. Treat this like a permutations question in which you could have any of the following six sequences, where N = non-zero integer. 0NN, N0N, NN0, N00, 00N, 0N0 There are 9 numbers that could appear in the N-slots and 1 number (zero) that could appear in the zero slots. Each sequence with two nonzero numbers will have 81 possible outcomes (1 * 9 * 9, or 9 * 1 * 9, or 9 * 9 * 1), while each sequence with one nonzero will have 9 possible outcomes (9 * 1 * 1, or 1 * 1 * 9, or 1 * 9 * 1). The total number of possible three-digit numbers here is 81 * 3 + 9 * 3 = 270. Thurston calls 10 of these numbers, so the odds of dialing the right one are 10/270 = 1/27. ##### General Discussion Math Expert Joined: 02 Sep 2009 Posts: 65290 Re: Thurston wrote an important seven-digit phone number on a na  [#permalink] ### Show Tags 12 Sep 2013, 03:36 1 2 shameekv wrote: Thurston wrote an important seven-digit phone number on a napkin, but the last three numbers got smudged. Thurston remembers only that the last three digits contained at least one zero and at least one non-zero integer. 
If Thurston dials 10 phone numbers by using the readable digits followed by 10 different random combinations of three digits, each with at least one zero and at least one non-zero integer, what is the probability that he will dial the original number correctly? A. 1/9 B. 10/243 C. 1/27 D. 10/271 E. 1/1000000 If the last three digits have 1 zero (XX0), the total # of numbers possible is 9*9*3 (multiply by 3 since XX0 can be arranged in 3 ways: XX0, X0X, or 0XX). If the last three digits have 2 zeros (X00), the total # of numbers possible is 9*3 (multiply by 3 since X00 can be arranged in 3 ways: X00, 00X, or X0X). P = 10/(9*9*3+9*3) = 1/27. Answer: C. P.S. Please read carefully and follow: rules-for-posting-please-read-this-before-posting-133935.html Pay attention to the rule #3. Thank you. _________________ Math Expert Joined: 02 Sep 2009 Posts: 65290 Re: Thurston wrote an important seven-digit phone number on a na  [#permalink] ### Show Tags 12 Sep 2013, 03:39 Bunuel wrote: shameekv wrote: Thurston wrote an important seven-digit phone number on a napkin, but the last three numbers got smudged. Thurston remembers only that the last three digits contained at least one zero and at least one non-zero integer. If Thurston dials 10 phone numbers by using the readable digits followed by 10 different random combinations of three digits, each with at least one zero and at least one non-zero integer, what is the probability that he will dial the original number correctly? A. 1/9 B. 10/243 C. 1/27 D. 10/271 E. 1/1000000 If the last three digits have 1 zero (XX0), the total # of numerous possible is 9*9*3 (multiply by 3 since XX0 can be arranged in 3 ways: XX0, X0X, or 0XX). If the last three digits have 2 zeros (X00), the total # of numerous possible is 9*3 (multiply by 3 since X00 can be arranged in 3 ways: X00, 00X, or X0X). P=10/(9*9*3+9*3)=1/27. Answer: C. P.S. Please read carefully and follow: rules-for-posting-please-read-this-before-posting-133935.html Pay attention to the rule #3. Thank you. Similar question to practice: john-wrote-a-phone-number-on-a-note-that-was-later-lost-94787.html _________________ Intern Joined: 21 Mar 2013 Posts: 36 GMAT Date: 03-20-2014 Re: Thurston wrote an important seven-digit phone number on a na  [#permalink] ### Show Tags 11 Mar 2014, 20:09 1 We know that atleast one digit is Zero and atleast one digit is non-zero. The third digit can be any single digit integer (zero or non-zero). Total # of combinations should be [One zero] * [One Non-zero] * [Any single digit integer] * $$\frac{3!}{2!}$$ = 1*9*10*3 = 270 P=10/270 = 1/27 Hence C Director Joined: 03 Aug 2012 Posts: 648 Concentration: General Management, General Management GMAT 1: 630 Q47 V29 GMAT 2: 680 Q50 V32 GPA: 3.7 WE: Information Technology (Investment Banking) Re: Thurston wrote an important seven-digit phone number on a na  [#permalink] ### Show Tags 16 Mar 2014, 21:05 If the last three digits have 1 zero (XX0), the total # of numbers possible is 9*9*3 (multiply by 3 since XX0 can be arranged in 3 ways: XX0, X0X, or 0XX). If the last three digits have 2 zeros (X00), the total # of numbers possible is 9*3 (multiply by 3 since X00 can be arranged in 3 ways: X00, 00X, or X0X). P = 10/(9*9*3+9*3) = 1/27. Answer: C. Hi Bunuel, Since I got this question wrong, I need insights on this. We have two options of using either (1).two zeros and a non-zero or (2). two non-zero and a zero. 
In the above solution when you say XX0 can be arranged in 3 ways, since the problem is that you are considering XX as a unique single digit non-zero. However, there can be a case where 450 and 540 can be the numbers in which case the permutation will come out different. We can consider permutations in N00 as 3 since 0 is a unique number and we have 9 possibilities for 'N'.So, we have 9 possibilities for N and arrangement of NOO which would be !3/!2 (Divide by !2 since 0 are unique) =27 NN0 9 possibilities for each N and arrangement of NNO which would be !3 (Not divide by !2 since N is not unique) =9*9*6 Please suggest where I am going wrong in this one Rgds, TGC! Director Joined: 19 Apr 2013 Posts: 506 Concentration: Strategy, Healthcare Schools: Sloan '18 (A) GMAT 1: 730 Q48 V41 GPA: 4 Re: Thurston wrote an important seven-digit phone number on a na  [#permalink] ### Show Tags 26 Mar 2014, 10:03 Can someone please explain why we divide 10 to 270. I know that the probability means dividing desired outcome to possible outcomes. Here desired outcome is just one number not ten. Math Expert Joined: 02 Sep 2009 Posts: 65290 Re: Thurston wrote an important seven-digit phone number on a na  [#permalink] ### Show Tags 26 Mar 2014, 10:15 Ergenekon wrote: Can someone please explain why we divide 10 to 270. I know that the probability means dividing desired outcome to possible outcomes. Here desired outcome is just one number not ten. But Thurston tries 10 times not just 1: "If Thurston dials 10 phone numbers by using the readable digits followed by 10 different random combinations of three digits, each with at least one zero and at least one non-zero integer, what is the probability that he will dial the original number correctly?" _________________ Intern Joined: 06 May 2013 Posts: 10 Location: United States GMAT 1: 700 Q49 V36 Re: Thurston wrote an important seven-digit phone number on a na  [#permalink] ### Show Tags 30 Mar 2014, 01:38 Hi. Please explain why after find the total possible number of the telephone numbers, we have 10 divided by 270? I have thought that the chance that there is one correct phone numbers and 9 incorrect phone numbers is: (1/270)*[(269/270)^9]*10! The correct answer choice seems to indicate that each pick does not relate to the later picks, but the chance to pick the correct phone numbers increases after each pick, it isn't? That is why I multiply the chance to get correct phone numbers and the chance to get incorrect phone numbers. What is wrong with my answer? Intern Joined: 24 Jun 2013 Posts: 30 Location: India Schools: Stanford '22 (S) GRE 1: Q170 V159 Thurston wrote an important seven-digit phone number on a na  [#permalink] ### Show Tags 21 Jul 2014, 07:16 Bunuel wrote: If the last three digits have 1 zero (XX0), the total # of numbers possible is 9*9*3 (multiply by 3 since XX0 can be arranged in 3 ways: XX0, X0X, or 0XX). If the last three digits have 2 zeros (X00), the total # of numbers possible is 9*3 (multiply by 3 since X00 can be arranged in 3 ways: X00, 00X, or X0X). P = 10/(9*9*3+9*3) = 1/27. Answer: C. Hi Bunuel, I have a Query. In case 1 where there is only one zero, XX0 can also be XY0, in that case should it not be multiplied by 3! (i.e. 6)? For. example 3,2,0 can be written in 6 ways. Thanks in advance for your clarification. 
Math Expert Joined: 02 Sep 2009 Posts: 65290 Re: Thurston wrote an important seven-digit phone number on a na  [#permalink] ### Show Tags 21 Jul 2014, 09:20 arichinna wrote: Bunuel wrote: If the last three digits have 1 zero (XX0), the total # of numbers possible is 9*9*3 (multiply by 3 since XX0 can be arranged in 3 ways: XX0, X0X, or 0XX). If the last three digits have 2 zeros (X00), the total # of numbers possible is 9*3 (multiply by 3 since X00 can be arranged in 3 ways: X00, 00X, or X0X). P = 10/(9*9*3+9*3) = 1/27. Answer: C. Hi Bunuel, I have a Query. In case 1 where there is only one zero, XX0 can also be XY0, in that case should it not be multiplied by 3! (i.e. 6)? For. example 3,2,0 can be written in 6 ways. Thanks in advance for your clarification. The point is that 9*9 gives all possible ordered pairs of the remaining two digits: 11 12 13 14 15 16 17 18 19 21 ... 99 Now, 0, in three digits can take either first, second or third place, hence multiplying by 3: XX0, X0X, 0XX. Hope it's clear. _________________ Manager Joined: 02 Jul 2012 Posts: 180 Location: India Schools: IIMC (A) GMAT 1: 720 Q50 V38 GPA: 2.6 WE: Information Technology (Consulting) Thurston wrote an important seven-digit phone number on a na  [#permalink] ### Show Tags 15 Oct 2014, 10:01 Bunuel wrote: arichinna wrote: Bunuel wrote: If the last three digits have 1 zero (XX0), the total # of numbers possible is 9*9*3 (multiply by 3 since XX0 can be arranged in 3 ways: XX0, X0X, or 0XX). If the last three digits have 2 zeros (X00), the total # of numbers possible is 9*3 (multiply by 3 since X00 can be arranged in 3 ways: X00, 00X, or X0X). P = 10/(9*9*3+9*3) = 1/27. Answer: C. Hi Bunuel, I have a Query. In case 1 where there is only one zero, XX0 can also be XY0, in that case should it not be multiplied by 3! (i.e. 6)? For. example 3,2,0 can be written in 6 ways. Thanks in advance for your clarification. The point is that 9*9 gives all possible ordered pairs of the remaining two digits: 11 12 13 14 15 16 17 18 19 21 ... 99 Now, 0, in three digits can take either first, second or third place, hence multiplying by 3: XX0, X0X, 0XX. Hope it's clear. Dear Bunuel, I didn't get this explanation. Why are we taking XX0 and not XY0, because the non-zero numbers can also be different. Such as 120 102 210 201 012 021 Which should lead to 6 combinations - $$3*2*1 = 6$$ Thanks Math Expert Joined: 02 Sep 2009 Posts: 65290 Re: Thurston wrote an important seven-digit phone number on a na  [#permalink] ### Show Tags 15 Oct 2014, 10:16 Thoughtosphere wrote: Bunuel wrote: arichinna wrote: [ Hi Bunuel, I have a Query. In case 1 where there is only one zero, XX0 can also be XY0, in that case should it not be multiplied by 3! (i.e. 6)? For. example 3,2,0 can be written in 6 ways. Thanks in advance for your clarification. The point is that 9*9 gives all possible ordered pairs of the remaining two digits: 11 12 13 14 15 16 17 18 19 21 ... 99 Now, 0, in three digits can take either first, second or third place, hence multiplying by 3: XX0, X0X, 0XX. Hope it's clear. Dear Bunuel, I didn't get this explanation. Why are we taking XX0 and not XY0, because the non-zero numbers can also be different. Such as 120 102 210 201 012 021 Which should lead to 6 combinations - $$3*2*1 = 6$$ Thanks 12 and 21 in your example are treated as two different numbers in my explanation. So, when I multiply by 3 I get the same result as you when you multiply by 6. Sorry, cannot explain any better than this: 11 12 13 14 15 16 17 18 19 21 ... 99 Total of 81 numbers. 
0 in three digits can take either first, second or third place, hence multiplying by 3: XX0, X0X, 0XX. _________________ Intern Joined: 01 Sep 2015 Posts: 3 Re: Thurston wrote an important seven-digit phone number on a na  [#permalink] ### Show Tags 18 Sep 2016, 07:37 Please help me understand this - We need to find - If Thurston dials 10 phone numbers by using the readable digits followed by 10 different random combinations of three digits, each with at least one zero and at least one non-zero integer, what is the probability that he will dial the original number correctly?. Please consider this while counting possible outcomes. Remember, logically he will stop trying once he gets the original number. When Thurston starts dialing 10 numbers, he - ->gets the original number in 1st attempt. So he tries just 1 out of 10 number. ->gets the original number in 2nd attempt. So he tries just 2 out of 10 number. .... ... .. gets the original number in 10th attempt. So he tries just 10 out of 10 number. But all the explanation seems to focus on finding numbers that fit in criteria - at least one 0 and at least one non-zero for counting favorable outcomes, and not on the number that is original and ONLY ONE. I think probability has to be calculated at two levels - Choosing 10 numbers from all favorable outcome i.e. from 270 X (original number found at 1st attempt + original number found at 2nd attempt +......+original number found at 10th attempt). Can somebody help where I am going wrong. Current Student Joined: 03 Apr 2013 Posts: 258 Location: India Concentration: Marketing, Finance GMAT 1: 740 Q50 V41 GPA: 3 Re: Thurston wrote an important seven-digit phone number on a na  [#permalink] ### Show Tags 16 Jul 2017, 22:25 1 Bunuel wrote: shameekv wrote: Thurston wrote an important seven-digit phone number on a napkin, but the last three numbers got smudged. Thurston remembers only that the last three digits contained at least one zero and at least one non-zero integer. If Thurston dials 10 phone numbers by using the readable digits followed by 10 different random combinations of three digits, each with at least one zero and at least one non-zero integer, what is the probability that he will dial the original number correctly? A. 1/9 B. 10/243 C. 1/27 D. 10/271 E. 1/1000000 If the last three digits have 1 zero (XX0), the total # of numbers possible is 9*9*3 (multiply by 3 since XX0 can be arranged in 3 ways: XX0, X0X, or 0XX). If the last three digits have 2 zeros (X00), the total # of numbers possible is 9*3 (multiply by 3 since X00 can be arranged in 3 ways: X00, 00X, or X0X). P = 10/(9*9*3+9*3) = 1/27. Answer: C. P.S. Please read carefully and follow: http://gmatclub.com/forum/rules-for-pos ... 33935.html Pay attention to the rule #3. Thank you. How did you simply write 10/270? This is how I did it. Total possibilities for the numbers = 270 (found this one exactly how you did) Of these only 1 is correct and the other 269 are incorrect. Final probability = Probability of selecting 1 correct and 9 incorrect / probability of selecting any 10 out of 270 This will also give the same answer. I just want to know your "exact mathematical logic" why you wrote 10/270. Thank you for your help Intern Joined: 07 Apr 2020 Posts: 8 Re: Thurston wrote an important seven-digit phone number on a na  [#permalink] ### Show Tags 24 Apr 2020, 19:16 The combination of the last 3 numbers could be 0ZZ (where I assume Z is the non-zero integer), which will be 3C1 x 81, or 00Z, which will be 3x9. 
In total we will have 270 combinations. Indeed, out of those 270 available combinations there is only 1 correct combination. However, given that he attempted 10 phone calls, it is possible that he will get the correct combination on the 1st attempt, the 2nd attempt, ... or even the 10th attempt. Each attempt has a probability of 1/270, so the total probability of him getting the right combination in 10 tries is 10 x 1/270 = 1/27. Target Test Prep Representative Status: Founder & CEO Affiliations: Target Test Prep Joined: 14 Oct 2015 Posts: 11117 Location: United States (CA) Re: Thurston wrote an important seven-digit phone number on a na  [#permalink] ### Show Tags 28 May 2020, 15:44 shameekv wrote: Thurston wrote an important seven-digit phone number on a napkin, but the last three numbers got smudged. Thurston remembers only that the last three digits contained at least one zero and at least one non-zero integer. If Thurston dials 10 phone numbers by using the readable digits followed by 10 different random combinations of three digits, each with at least one zero and at least one non-zero integer, what is the probability that he will dial the original number correctly? A. 1/9 B. 10/243 C. 1/27 D. 10/271 E. 1/1000000 We can divide the last 3 digits of the phone number into 2 cases: 1) exactly 1 zero and 2 non-zero digits. 2) exactly 2 zeros and 1 non-zero digit. Case 1: ZNN, NZN, NNZ (where Z is the 0 digit and N is a nonzero digit) (1 x 9 x 9) x 3 = 243 Case 2: ZZN, ZNZ, NZZ (1 x 1 x 9) x 3 = 27 Therefore, the total number of ways the last 3 digits of the phone number can be formed given that there is at least one zero and at least one non-zero digit is 243 + 27 = 270. Since Thurston tries 10 of them, the probability he dials the correct phone number is 10/270 = 1/27. Answer: C
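For anyone who wants to sanity-check the count of 270 valid three-digit endings used throughout this thread, here is a small brute-force sketch (Python; my addition, not part of the original discussion):

```python
# Count three-digit endings 000-999 that contain at least one zero
# and at least one non-zero digit.
count = 0
for n in range(1000):
    digits = f"{n:03d}"
    if "0" in digits and any(d != "0" for d in digits):
        count += 1

print(count)       # 270
print(10 / count)  # 0.037037... = 1/27
```

It prints 270, and 10/270 indeed reduces to 1/27, answer C.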
2020-07-14 23:44:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7190185785293579, "perplexity": 2492.8652503309067}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657151761.87/warc/CC-MAIN-20200714212401-20200715002401-00435.warc.gz"}
http://ncatlab.org/nlab/show/algebraic+lattice
# Contents

## Definition

###### Definition

An algebraic lattice is a complete lattice (equivalently, a suplattice, or in different words a poset with the property of having arbitrary colimits but with the structure of directed colimits/directed joins) in which every element is the supremum of the compact elements below it (an element $e$ is compact if, for every subset $S$ of the lattice, $e$ is less than or equal to the supremum of $S$ just in case $e$ is less than or equal to the supremum of some finite subset of $S$). For instance, the power set of any set, ordered by inclusion, is an algebraic lattice: its compact elements are precisely the finite subsets, and every subset is the union of the finite subsets it contains.

Here is an alternative formulation:

###### Definition

An algebraic lattice is a poset which is locally finitely presentable as a category.

This formulation suggests a useful way of viewing algebraic lattices in terms of Gabriel-Ulmer duality (but with regard to enrichment in truth values, instead of in $Set$).

As this last formulation suggests, algebraic lattices typically arise as subobject lattices for objects in locally finitely presentable categories. As an example, for any (finitary) Lawvere theory $T$, the subobject lattice of an object in $T$-$Alg$ is an algebraic lattice (this class of examples explains the origin of the term “algebraic lattice”, which is due to Garrett Birkhoff).

## Properties

### The category of algebraic lattices

The morphisms most commonly considered between algebraic lattices are the finitary functors between them, which is to say, the Scott-continuous functions between them; i.e., those functions which preserve directed joins (hence the parenthetical remarks above). The resulting category AlgLat is cartesian closed and is dually equivalent to the category whose objects are meet semilattices (construed as categories with finite limits enriched over truth values) and whose morphisms are meet-preserving profunctors between them (using the convention that a $V$-enriched profunctor from $C$ to $D$ is a functor $D^{op} \times C \rightarrow V$; of course, with an opposite convention, one could similarly state a covariant equivalence).

There is a full embedding $i \colon AlgLat \to Top_0$ to the category of $T_0$-spaces, taking an algebraic lattice $L$ to the space whose points are elements of $L$, and whose open sets $U$ are defined by the property that their characteristic maps $\chi_U: L \to \mathbf{2}$ ($\chi_U(a) = 1$ if $a \in U$, else $\chi_U(a) = 0$) are poset maps that preserve directed colimits. The specialization order of $i(L)$ is $L$ again.

Every $T_0$-space $X$ occurs as a subspace of some space $i(L)$ associated with an algebraic lattice. Explicitly, let $L(X)$ be the power set of the underlying set of the topology, $P{|\mathcal{O}(X)|}$, and define $X \to (i\circ L)(X)$ to take $x$ to $N(x) \coloneqq \{U \in \mathcal{O}(X): x \in U\}$. This gives a topological embedding of $X$ in $i(L(X))$.

###### Remark

On similar grounds, if $U \colon AlgLat \to Set$ is the forgetful functor, then the 2-image of the projection functor $\pi \colon Set\downarrow U \to Set$ is the category of topological spaces $Top$. In more nuts-and-bolts terms, an object $(S, L, f \colon S \to U(L))$ gives a space with underlying set $S$ and open sets those of the form $f^{-1}(O)$, where $O$ ranges over the Scott topology on $L$. Notice that if $(f \colon S \to S', g \colon L \to L')$ is a morphism in $Set \downarrow U$, then $f$ is continuous with respect to these topologies.
Therefore the projection $\pi \colon Set \downarrow U \to Set$ factors through the faithful forgetful functor $Top \to Set$. Thus, working in the factorization system (eso+full, faithful) on $Cat$, we have a faithful functor $2$-$im(\pi) \to Top$ filling in as the diagonal $\array{ Set \downarrow U & \to & Top \\ \downarrow & \nearrow & \downarrow \\ 2\text{-}im(\pi) & \to & Set. }$ But notice also that $Set \downarrow U \to Top$ is eso and full. It is eso because any topology $\mathcal{O}(S)$ on $S$ can be reconstituted from the triple $(S, P{|\mathcal{O}(S)|}, x \mapsto N(x) \colon S \to P{|\mathcal{O}(S)|})$. We claim it is full as well. For, every continuous map $X \to X'$ between topological spaces induces a continuous map between their $T_0$ reflections $X_0 \to X_{0}'$, and since algebraic lattices like $P{|\mathcal{O}(X)|}$ (being continuous lattices) are injective objects in the category of $T_0$ spaces, we are able to complete to a diagram $\array{ X & \to & X_0 & \to & P{|\mathcal{O}(X)|} \\ \downarrow & & \downarrow & & \downarrow \\ X' & \to & X_{0}' & \to & P{|\mathcal{O}(X')|} }$ where the rightmost vertical arrow is Scott-continuous (and the horizontal composites are of the form $x \mapsto N(x)$). Finally, since $Set \downarrow U \to Top$ is eso and full, it follows that $2$-$im(\pi) \to Top$ is eso, full, and faithful, and therefore an equivalence of categories. This connection is explored in more depth with the category of equilogical spaces, which can be seen either as a category of (set-theoretic) partial equivalence relations over $AlgLat$, or equivalently of (set-theoretic) total equivalence relations on $T_0$ topological spaces. ### Relation to locally finitely presentable categories One of our definitions of algebraic lattice is: a poset $L$ which is locally finitely presentable when viewed as a category. The completeness of $L$ means that right adjoints $L \to Set$ are representable, given by $L(p, -) \colon L \to Set$, and we are particularly interested in those representable functors that preserve filtered colimits. These correspond precisely to finitely presentable objects $p$, which in lattice theory are usually called compact elements. These compact elements are closed under finite joins. By Gabriel-Ulmer duality, $L$ is determined from the join-semilattice of compact elements $K$ by $L \cong Lex(K^{op}, Set)$. Since the elements of $K^{op}$ are subterminal, we can also write $L \cong Lex(K^{op}, 2)$ where $2 = Sub(1)$. ###### Theorem (Porst) If $C$ is a locally finitely presentable category and $X$ is an object of $C$, then • The lattice of subobjects $Sub(X)$, • the lattice of quotient objects (equivalence classes of epis sourced at $X$) $Quot(X)$, • the lattice of congruences (internal equivalence relations) on $X$ are all algebraic lattices. This is due to Porst. ### Completely distributive lattices ###### Proposition The category of Alexandroff locales is equivalent to that of completely distributive algebraic lattices. This appears as (Caramello, remark 4.3). The completely distributive algebraic lattices form a reflective subcategory of that of all distributive lattices. The reflector is called canonical extension. Locally presentable categories: Large categories whose objects arise from small generators under small relations. 
| (n,r)-categories | satisfying Giraud’s axioms | inclusion of left exact localizations | generated under colimits from small objects | localization of free cocompletion | generated under filtered colimits from small objects |
| --- | --- | --- | --- | --- | --- |
| (0,1)-category theory | (0,1)-toposes | $\hookrightarrow$ | algebraic lattices | $\simeq$ Porst’s theorem | subobject lattices in accessible reflective subcategories of presheaf categories |
| category theory | toposes | $\hookrightarrow$ | locally presentable categories | $\simeq$ Adámek-Rosický’s theorem | accessible reflective subcategories of presheaf categories $\hookrightarrow$ accessible categories |
| model category theory | model toposes | $\hookrightarrow$ | combinatorial model categories | $\simeq$ Dugger’s theorem | left Bousfield localization of global model structures on simplicial presheaves |
| (∞,1)-topos theory | (∞,1)-toposes | $\hookrightarrow$ | locally presentable (∞,1)-categories | $\simeq$ Simpson’s theorem | accessible reflective sub-(∞,1)-categories of (∞,1)-presheaf (∞,1)-categories $\hookrightarrow$ accessible (∞,1)-categories |

## References

The relation to locally finitely presentable categories is discussed in

• Hans Porst, Algebraic lattices and locally finitely presentable categories (pdf)
2014-11-27 04:28:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 97, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9167525768280029, "perplexity": 568.8125546629598}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931007797.72/warc/CC-MAIN-20141125155647-00182-ip-10-235-23-156.ec2.internal.warc.gz"}
https://www.jiskha.com/users?name=Mariel
# Mariel Popular questions and responses by Mariel 1. ## physics a light string 4 meters long is wrapped around a solid cylindrical spool with a radius of 0.075 m and a mass of .5 kg.a 5kg mass is then attached to the end of the string, causing the string to unwind from the spool. a. what is the angular acceleration of 2. ## Geometry Find the sum of the measures of the exterior angles of a convex 39-gon? 3. ## math From a square piece of cartolina with a side of 60cm, liza cut the biggest cross. What is the area of the cross? 4. ## Calculus (math) A group of engineers is building a parabolic satellite dish whose shape will be formed by rotating the curve y=ax2 about the y-axis. If the dish is to have a 8-foot diameter and a maximum depth of 2 feet, find the value of a and the surface area (in square 5. ## Calculus (math) A painting in an art gallery has height h and is hung so its lower edge is a distance d above the eye of an observer (as in the figure). How far from the wall should the observer stand to get the best view? (In other words, where should the observer stand 6. ## Calculus (math) A boat leaves a dock at 2:00 P.M. and travels due south at a speed of 15 km/h. Another boat has been heading due east at 20 km/h and reaches the same dock at 3:00 P.M. How many minutes past 2:00 P.M. were the boats closest together? 7. ## science Consider the following reaction: CH3X + Y --> CH3Y + X At 25ºC, the following two experiments were run, yielding the following data: Experiment 1: [Y]0 = 3.0 M [CH3X] Time (hr) 7.08 x 10-3 M 1.0 4.52 x 10-3 M 1.5 2.23 x 10-3 M 2.3 4.76 x 10-4 M 4.0 8.44 x 8. ## Thermo Calculate the equilibrium constant for the reaction NiO(s) + H2(g) → Ni(s) + H2O(g) at 1023K from the following data: Ni(s) + ½ O2(g) → NiO(s) ΔG0 = -244,555 + 98.53 T [J] H2(g) + ½ O2(g) → H2O(g) ΔG0 = -246,438 + 54.81 T [J] Could a pure nickel 9. ## Physics Find th resultant and speed direction of the airplane relative to the ground.... AN airplane is heading west at velocity 950km/hr and wind is blowing northward at velocity of 55.o km/hr 10. ## Calculus (math) A conical water tank with vertex down has a radius of 12 feet at the top and is 23 feet high. If water flows into the tank at a rate of 20 {\rm ft}^3{\rm /min}, how fast is the depth of the water increasing when the water is 12 feet deep? 11. ## Math degrees A Ferris wheel has spokes that divide the wheel into 9 equal sections. What is the measure of the angle of the angle for each sections 12. ## Physics The Angle of depression of Boat A from the top of a cliff which is 32 m high is 24 degree 15 minutes .The angle of depression of Boat B from the same point is 18 degree 12 minutes .Find distance between the two side with diagram 13. ## math Solve the following algebraically. Trial and error is not an appropriate method of solution. You must show all of your work and write answers in rational form. 3 x + 7 = 9 Answer: x = 2/3 Show your work here: 3 x + 7 = 9 -7 - 7 3/3 x = 2/3 X = 2/3 b) 14. ## math is one half greater than three fourths 15. ## Geometry ABCD is a rectangle with B(-4,2) andD (10, 6). find the coordinates of A. Please help and tell me how you got it so I can understand 16. ## agebra Sam and Chris went to “Lots O Fun” to play laser tag and video games for Chris’s birthday. Sam played 3 games of laser tag, 5 video games, and spent $17 total. Chris played 4 games of laser tag, 7 video games, and spent$23 total. How much does one 17. ## science true or false? 
the graph distance versus time for n object moving at consatant speed is a curve. 18. ## Geometry ABCD is a rectangle with B(-4,2) andD (10, 6). find the coordinates of A. Please help and tell me how you got it so I can understand 1. ## calculus A lamp post 3m high is 6m from a wall. A 2m man tall is walking directly from the post toward at 2.5m/s. How fast is his 1.5 from the wall posted on May 22, 2017 2. ## HELLLPPPPPPPP!!!!!! 48 pieces of apples posted on February 12, 2017 3. ## Calculus 64π posted on January 24, 2016 4. ## Math(angle of depression) i need help please.. hey can you explain please step by step i'm really sorry but i really need your help posted on July 30, 2014 5. ## Geometry Okay thank you do much!:) posted on March 21, 2013 6. ## Geometry I'm sorry I put it wrong B(-4,6) c(-4,2) d(10,2) posted on March 21, 2013 7. ## Geometry Thank you so much, I understand this now! :) posted on March 21, 2013 8. ## Geometry Show* posted on March 21, 2013 9. ## Geometry Thank you but also how would I sow my work for this? posted on March 21, 2013 10. ## Statistics I do not understand what I need to do her I need some help. posted on February 10, 2012 11. ## precal find positive and negative coterminal angle for 55(pi)/18 posted on January 12, 2012 12. ## algebra Solve the following algebraically. Trial and error is not an appropriate method of solution. You must show all of your work and write answers in rational form. 3 x + 7 = 9 Answer: x = 2/3 Show your work here: 3 x + 7 = 9 -7 - 7 3/3 x = 2/3 X = 2/3 b) posted on July 27, 2008 13. ## Please Check My Math Work Solve the following algebraically. Trial and error is not an appropriate method of solution. You must show all of your work and write answers in rational form. 3 x + 7 = 9 Answer: x = 2/3 Show your work here: 3 x + 7 = 9 -7 - 7 3/3 x = 2/3 X = 2/3 b) posted on July 27, 2008
2020-04-04 15:45:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47357428073883057, "perplexity": 1422.3106647479397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370524043.56/warc/CC-MAIN-20200404134723-20200404164723-00152.warc.gz"}
https://www.physicsforums.com/threads/qm-variation-method.288873/
QM Variation Method 1. Jan 31, 2009 Old Guy 1. The problem statement, all variables and given/known data Show that variation principle (parameters ci) leads to equations $$\sum\limits_{i = 1}^n {\left\langle i \right|H\left| j \right\rangle c_j = Ec_i {\rm{ where }}} \left\langle j \right|H\left| i \right\rangle = \int {d\textbf{r}^3 \chi _j^* \left( \textbf{r} \right)\left( {H\chi _i \left( \textbf{r} \right)} \right)}$$ 2. Relevant equations I've got $$\psi \left( \textbf{r} \right) = \sum\limits_{i = 1}^n {c_i \chi _i \left( \textbf{r} \right)}$$ , but 3. The attempt at a solution I'm confused about what's being asked, and what the expected result means. Indices as kets and bras? If they're different vectors within the Hilbert space, won't they be orthonormal implying all i not equal j would be zero? I have a nagging feeling I'm either confused by notation or overlooking something basic. 2. Feb 1, 2009 projektMayhem The left hand side is just the matrix elements of the hamiltonian in some orthonormal basis. If they are eigenstates of the Hamiltonian, this matrix will be diagonal. The expression they give you for the matrix elements is the position space representation 3. Feb 1, 2009 projektMayhem Beyond that, I don't know what you're asking. 4. Feb 1, 2009 Old Guy Well, thanks for your response. First, let me say that the problem statement as shown was exactly as presented, and we were given no other information. The only thing I can think of is that the indices in the bras and kets are really intended to be $$\chi_{i}$$ and $$\chi_{j}$$, but this seems to be trivial, because in that case wouldn't $$\left\langle j \right|H\left| i \right\rangle = \int {d\textbf{r}^3 \chi _j^* \left( \textbf{r} \right)\left( {H\chi _i \left( \textbf{r} \right)} \right)}$$ imply $$\left\langle i \right|H\left| j \right\rangle = \int {d\textbf{r}^3 \chi _i^* \left( \textbf{r} \right)\left( {H\chi _j \left( \textbf{r} \right)} \right)}$$ ? And if that's the case, doesn't that simply mean that the $$\chi$$'s are simply the eigenstates of the Hamiltonian?
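For completeness, here is the standard textbook sketch of the result being asked about (my addition, not part of the original thread; it assumes the basis functions are orthonormal, $\left\langle i | j \right\rangle = \delta_{ij}$, which the quoted result implicitly requires). Write the trial state as $$\psi = \sum\limits_{j=1}^n c_j \chi_j$$ and extremize the energy functional $$E[\psi] = \frac{\left\langle \psi \right|H\left| \psi \right\rangle}{\left\langle \psi | \psi \right\rangle} = \frac{\sum_{i,j} c_i^* \left\langle i \right|H\left| j \right\rangle c_j}{\sum_i |c_i|^2}.$$ Setting $\partial E / \partial c_i^* = 0$ gives $$\sum\limits_{j=1}^n \left\langle i \right|H\left| j \right\rangle c_j = E\, c_i,$$ an eigenvalue problem for the coefficient vector. Note that the $\chi_i$ only need to be orthonormal, not eigenstates of $H$: orthonormality makes $\left\langle i | j \right\rangle$ vanish for $i \neq j$, but it does not make $\left\langle i \right|H\left| j \right\rangle$ vanish, so the Hamiltonian matrix is generally not diagonal.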
2018-01-21 03:14:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7681217193603516, "perplexity": 547.2001815927483}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889917.49/warc/CC-MAIN-20180121021136-20180121041136-00539.warc.gz"}
https://www.asvabtestbank.com/arithmetic-reasoning/practice-test/920454/5
## ASVAB Arithmetic Reasoning Practice Test 920454 Questions 5 Topics Least Common Multiple, PEMDAS, Percentages, Rates, Sequence #### Study Guide ###### Least Common Multiple The least common multiple (LCM) is the smallest positive integer that is a multiple of two or more integers. ###### PEMDAS Arithmetic operations must be performed in the following specific order: 1. Parentheses 2. Exponents 3. Multiplication and Division (from L to R) 4. Addition and Subtraction (from L to R) The acronym PEMDAS can help remind you of the order. ###### Percentages Percentages are ratios of an amount compared to 100. The percent change of an old to new value is equal to 100% x $${ new - old \over old }$$. ###### Rates A rate is a ratio that compares two related quantities. Common rates are speed = $${distance \over time}$$, flow = $${amount \over time}$$, and defect = $${errors \over units}$$. ###### Sequence A sequence is a group of ordered numbers. An arithmetic sequence is a sequence in which each successive number is equal to the number before it plus some constant number.
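A quick worked example of the percent-change formula above (the numbers are made up for illustration): if a price rises from 40 to 50, the percent change is $100\% \times \frac{50 - 40}{40} = 25\%$. Likewise, for the least common multiple, LCM(4, 6) = 12, the smallest positive integer that both 4 and 6 divide.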
2019-01-20 09:59:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5025811195373535, "perplexity": 1708.4069435041995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583705091.62/warc/CC-MAIN-20190120082608-20190120104608-00081.warc.gz"}
https://robertfilter.net/blog/webtech/how-to-solve-mathjax-cumulative-layout-shift.html
Recently I got a message from Google Search Console about a problem on photonics101.com. For a lot of pages, cumulative layout shift (CLS) has been observed and I should fix it. What is that? Looking at a page by Google I got the idea. Some content on the pages is loaded in a way that pushes other elements "dynamically" to the bottom. This causes a layout shift. And since quite a few elements can do that, the effect is cumulative. Ok, that's easy. On the above mentioned site I teach some educational material in electromagnetism. The site uses quite some math. These formulas are rendered using MathJax. I cannot express how I love MathJax. It's a piece of art to have TeX formatting on a website. I fell in love quite some time ago and my admiration continues. However, I have maybe a few thousand formulas on the site. And the formulas cause cumulative layout shift. I need to find a solution such that Google likes my page again. In turn, this little contribution is about search engine optimization (SEO). Not easy.
## Visualization of cumulative layout shift
At first I was trying to understand the problem. I made some investigations and built a test case on this site. The test is fairly simple: a website containing Maxwell's equations and an image below to show the shift. Here's what it looks like to load the equations in real time (ok, maybe a bit slower):
## Solution to MathJax cumulative layout shift
I have tried a couple of solutions on the actual MathJax code. There are lazy load options and (my favorite) you can set a fixed height of div's around MathJax.
### Half-Way Solution: fixed height div's
This initial solution goes something like this: <div style="height: 120px;"> $E = mc^2$ </div> Nevertheless, this solution still causes some cumulative layout shift, presumably because larger formulas render at a height quite unlike plain text. But this is a quick and dirty solution and I can recommend it. For purists, however, there is the ultimate solution.
### The ultimate solution: vector graphics
The ultimate solution without any cumulative layout shift is of course the most involved one. No pain, no gain. In this solution, we replace the actual TeX / MathJax code by an svg image that is pre-rendered from the formula. The workflow is easy: 1. Locate TeX math formulas on your website 2. Convert the formulas from TeX to svg using, for example, Thomas Lochmatter's LaTeX to SVG tool 3. Replace the formulas on your site with the svg images
Why vector graphics and not pixel ones? Simply zoom in on a png or jpg and you will see the difference. Wikipedia used to render (if I'm not mistaken) pixelized graphics and they looked horrible when zoomed in. Now they also use svg's for the above reasons. Note: You don't want to cause cumulative layout shift with your newly established vector graphics! Therefore, set the height attribute of the images explicitly! Here's an example of the aforementioned Maxwell equations, as svg vector graphics: And here's the associated code, that you can also look up in the html: <figure> <img src="/imagesblog/22/maxwells-equations.svg" height="130"> <figcaption>Maxwell's equations as vector graphics: a fixed height eliminates cumulative layout shift entirely.</figcaption> </figure> This method also has the advantage that you may use an "alt" attribute for the image. This attribute might give you a minimal advantage in terms of search engine optimization (SEO). However, most SEO advantage comes from no cumulative layout shift.
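If you have a few thousand formulas, doing the fixed-height wrapping by hand gets tedious. Here is a rough sketch of how the wrapping could be scripted; this is not the exact tooling I used, and the ./pages directory, the \[...\] display-math delimiters, and the 120 px default height are assumptions you would adapt to your own site.

```python
# Rough sketch of automating the "half-way" fix: wrap every display-math
# block in a fixed-height div so MathJax rendering cannot shift the layout.
# Assumptions: pages live under ./pages, display math is delimited by \[ ... \],
# and 120 px is an acceptable default height.
import re
from pathlib import Path

DISPLAY_MATH = re.compile(r"\\\[(.+?)\\\]", re.DOTALL)

def wrap_display_math(html, height_px=120):
    """Wrap each \[...\] block in a div with an explicit height."""
    def repl(match):
        return (f'<div style="height: {height_px}px;">'
                f'\\[{match.group(1)}\\]</div>')
    return DISPLAY_MATH.sub(repl, html)

if __name__ == "__main__":
    for page in Path("pages").glob("*.html"):
        page.write_text(wrap_display_math(page.read_text(encoding="utf-8")),
                        encoding="utf-8")
```

Running it twice would wrap the divs twice, and long equation arrays still deserve individually tuned heights, so treat it as a starting point only.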
## The test: 99% pagespeed on my math-heavy page
It took quite some time until I was able to update a number of pages on the mentioned site. The reason is simple: setting the height attribute or replacing formulas with svg graphics is quite the effort. Anyhow, I think I made some good progress. Here are the results of my MathJax adaptations! First of all it is very important to load a lightweight installation. I use the tex to chtml configuration, loaded from the jsdelivr cdn. The implementation is simple: <script type="text/javascript" id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js"> </script> Note that I do not use lazy loading or anything like that. Lazy loading gives you a faster initial page load. However, once you scroll down, non-optimized math expressions will cause cumulative layout shift. So lazy load was not an option. The file of the tex to chtml module is only a little larger than 100kB. That is after automatic gzip compression by the cdn. This is quite small given the power of MathJax! Note that the tex to svg module is about twice the size, which gives you a bit of punishment for mobile pagespeed. For the site in the test below, I simply implemented the "half-way" method with fixed height. This is absolutely OK for my purposes. Ok, let's see the results from https://pagespeed.web.dev/ This is not bad! To be fair I am a bit proud here. It took me about a month to come back to the basic configuration after trying out a myriad of different ideas.
## Conclusions
Google has introduced a number of measures to analyze the Core Web Vitals of a website. One of these measures is the cumulative layout shift. MathJax can cause considerable cumulative layout shifts, especially for longer formulas, e.g. equation arrays. The only way around MathJax-caused cumulative layout shift is to either
- hard-code the height of the respective formula, or
- use an svg image with given height and width.
It is imperative not to use MathJax's lazy load module. Furthermore, an appropriate configuration, for example the tex-to-chtml one, reduces loading times, especially on mobile devices. The file is best loaded from a fast cdn.
2022-05-29 01:19:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5759816765785217, "perplexity": 2263.3154421194363}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663035797.93/warc/CC-MAIN-20220529011010-20220529041010-00155.warc.gz"}
https://www.jobilize.com/course/section/chapter-review-measures-of-the-spread-of-the-data-rrc-by-openstax?qcr=www.quizover.com
# 2.7 Measures of the spread of the data -- rrc math 1020  (Page 7/25)
## For any data set, no matter what the distribution of the data is:
• At least 75% of the data is within two standard deviations of the mean.
• At least 89% of the data is within three standard deviations of the mean.
• At least 95% of the data is within 4.5 standard deviations of the mean.
• This is known as Chebyshev's Rule.
## For data having a distribution that is bell-shaped and symmetric:
• Approximately 68% of the data is within one standard deviation of the mean.
• Approximately 95% of the data is within two standard deviations of the mean.
• More than 99% of the data is within three standard deviations of the mean.
• This is known as the Empirical Rule.
• It is important to note that this rule only applies when the shape of the distribution of the data is bell-shaped and symmetric. We will learn more about this when studying the "Normal" or "Gaussian" probability distribution in later chapters.
## References
Data from Microsoft Bookshelf.
King, Bill. "Graphically Speaking." Institutional Research, Lake Tahoe Community College. Available online at http://www.ltcc.edu/web/about/institutional-research (accessed April 3, 2013).
## Chapter review
The standard deviation can help you calculate the spread of data. There are different equations to use if you are calculating the standard deviation of a sample or of a population.
• The standard deviation allows us to compare individual data or classes to the data set mean numerically.
• s = $\sqrt{\frac{\sum {\left(x-\overline{x}\right)}^{2}}{n-1}}$ or s = $\sqrt{\frac{\sum f{\left(x-\overline{x}\right)}^{2}}{n-1}}$ is the formula for calculating the standard deviation of a sample. To calculate the standard deviation of a population, we would use the population mean, μ, and the formula σ = $\sqrt{\frac{\sum {\left(x-\mu \right)}^{2}}{N}}$ or σ = $\sqrt{\frac{\sum f{\left(x-\mu \right)}^{2}}{N}}$.
## Formula review
${s}_{x}=\sqrt{\frac{\sum f{m}^{2}}{n}-{\overline{x}}^{2}}$ where $f$ = interval frequency, $m$ = interval midpoint, $n$ = sample size, and $\overline{x}$ = sample mean.
## Practice
Use the following information to answer the next two exercises: The following data are the distances between 20 retail stores and a large distribution center. The distances are in miles. 29; 37; 38; 40; 58; 67; 68; 69; 76; 86; 87; 95; 96; 96; 99; 106; 112; 127; 145; 150
Use a graphing calculator or computer to find the standard deviation and round to the nearest tenth.
s = 34.5
Find the value that is one standard deviation below the mean.
Two baseball players, Fredo and Karl, on different teams wanted to find out who had the higher batting average when compared to his team. Which baseball player had the higher batting average when compared to his team?
Baseball Player | Batting Average | Team Batting Average | Team Standard Deviation
Fredo | 0.158 | 0.166 | 0.012
Karl | 0.177 | 0.189 | 0.015
For Fredo: z = (0.158 - 0.166)/0.012 = –0.67. For Karl: z = (0.177 - 0.189)/0.015 = –0.8. Fredo's z-score of –0.67 is higher than Karl's z-score of –0.8. For batting average, higher values are better, so Fredo has a better batting average compared to his team.
Use [link] to find the value that is three standard deviations:
• above the mean
• below the mean
Find the standard deviation for the following frequency tables using the formula. Check the calculations with the TI 83/84.
Find the standard deviation for the following frequency tables using the formula. Check the calculations with the TI 83/84.
1. Frequency table: 49.5–59.5: 2; 59.5–69.5: 3; 69.5–79.5: 8; 79.5–89.5: 12; 89.5–99.5: 5
2. Daily Low Temperature frequency table: 49.5–59.5: 53; 59.5–69.5: 32; 69.5–79.5: 15; 79.5–89.5: 1; 89.5–99.5: 0
3. Points per Game frequency table: 49.5–59.5: 14; 59.5–69.5: 32; 69.5–79.5: 15; 79.5–89.5: 23; 89.5–99.5: 2
1. ${s}_{x}=\sqrt{\frac{\sum f{m}^{2}}{n}-{\overline{x}}^{2}}=\sqrt{\frac{193157.45}{30}-{79.5}^{2}}=10.88$
2. ${s}_{x}=\sqrt{\frac{\sum f{m}^{2}}{n}-{\overline{x}}^{2}}=\sqrt{\frac{380945.3}{101}-{60.94}^{2}}=7.62$
3. ${s}_{x}=\sqrt{\frac{\sum f{m}^{2}}{n}-{\overline{x}}^{2}}=\sqrt{\frac{440051.5}{86}-{70.66}^{2}}=11.14$
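For anyone who wants to check the worked answers above in code, here is a small sketch of the same grouped-data computation; the helper name is made up, and the tiny difference from the printed 11.14 comes from the text rounding the mean to 70.66 before squaring.

```python
# Minimal sketch of the grouped-data standard deviation formula used above,
#   s_x = sqrt( sum(f * m^2) / n - xbar^2 ),
# where m is the midpoint of each class interval and f its frequency.
# The table is the "Points per Game" frequency table from the exercise.
from math import sqrt

def grouped_std(table):
    """table: list of ((lower, upper), frequency) pairs."""
    n = sum(f for _, f in table)
    midpoints = [(lo + hi) / 2 for (lo, hi), _ in table]
    freqs = [f for _, f in table]
    xbar = sum(f * m for f, m in zip(freqs, midpoints)) / n
    sum_fm2 = sum(f * m * m for f, m in zip(freqs, midpoints))
    return sqrt(sum_fm2 / n - xbar ** 2)

points_per_game = [((49.5, 59.5), 14), ((59.5, 69.5), 32), ((69.5, 79.5), 15),
                   ((79.5, 89.5), 23), ((89.5, 99.5), 2)]
print(round(grouped_std(points_per_game), 2))  # ~11.1, matching 11.14 up to rounding of the mean
```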
2020-08-07 16:28:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5680137276649475, "perplexity": 2214.4455868425657}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737204.32/warc/CC-MAIN-20200807143225-20200807173225-00066.warc.gz"}
https://www.gamedev.net/forums/topic/643804-interoperability-opencl-directx11/
# Interoperability openCL DirectX11
Hi, my problem is at the compilation stage, or even before: I don't have the functions clGetDeviceIDsFromD3D11KHR, etc. in my <CL/cl_d3d11.h>; instead I have this function pointer type clGetDeviceIDsFromD3D11KHR_fn. I have the "cl_khr_d3d11_sharing" string in my platform extension information. I tried to add #pragma OPENCL EXTENSION cl_khr_d3d11_sharing : enable, but that doesn't work. So if you have any idea, or if you have already used it, I am really interested. Thanks,
Have a look at the OpenCL Programming Guide book samples, D3D10 interop (chapter 11), to get the picture. clGetDeviceIDsFromD3D11KHR_fn is not a function pointer yet, but the type (signature) of your function pointer. Declare one like so: clGetDeviceIDsFromD3D11KHR_fn clGetDeviceIDsFromD3D11KHR = NULL; To get the function you use clGetExtensionFunctionAddress. The sample uses some macro magic:
#define INITPFN(x) \
  x = (x ## _fn)clGetExtensionFunctionAddress(#x);\
  if(!x) { printf("failed getting %s", #x); }
... to ease the call, so in this case: INITPFN(clGetDeviceIDsFromD3D11KHR);
Thanks a lot!! This stuff is amazing: I was simulating 1 million force-field particles at 30 FPS, now at more than 400 FPS :) I was expecting a big gain because of all the round trips I was paying between CPU and GPU; now there are none!
Glad to hear. You got some nice screenshots? Out of curiosity: why not use compute shaders? Limitations because of D3D10 hardware (cs_4_0)?
I don't know exactly why I used OpenCL; I wanted to learn this tech as a goal in itself. I was thinking of porting my code from OpenCL to DirectCompute, but then I thought it would be easier to integrate the interoperability. And I think there is something really good about OpenCL: it is scalable even on the CPU, so I don't have to do the multi-threading myself, OpenCL will do it for me. The other good thing about OpenCL is that it is portable. I also wanted to integrate my stuff into a friend's engine, which is in OpenGL on the i platform. So maybe now you'd like to know why I used DirectX11 :) I also wanted to use this tech because it was used in my job, and I knew some HLSL. I don't have any screenshot, but I created a YouTube channel: http://www.youtube.com/user/uuuq78/videos Galaxy is the one I was talking about. In the video description I put the benchmarks in ms. I will update my videos and benchmarks with the interoperability when I have integrated it correctly.
Videos are even better. Quite some nice GPGPU samples you got there. Congrats. I just wanted to hear if you had any trouble with compute shaders at all, because I did half a year ago, so be warned: the compiler sometimes took minutes or crashed. If it compiled, the shader sometimes produced silly results. Then again, I probably did something blatantly wrong design-wise and also still used the June 2010 SDK compiler (which also has troubles with tessellation). If you go compute shader, make sure to use the newest one (coming with the Windows 8 Kit). I was fed up enough to give OpenCL a shot. Played with Cloo (I'm using C#) and was positively surprised. Compilation took seconds at most (subsequent compilation even seems to be cached by the NVidia driver), and the results were fine. Edit: That performance number of your particles is quite impressive. Smells like the interop isn't that bad a staller.
Or do you have such beefy hardware? If you do transliterate that sample to a compute shader, don't forget to post a comparison, please.
2017-07-20 12:45:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.354495108127594, "perplexity": 3139.089737179984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423183.57/warc/CC-MAIN-20170720121902-20170720141902-00090.warc.gz"}
https://brilliant.org/problems/a-classical-mechanics-problem-by-ken-osako/
# A classical mechanics problem by Ken Osako When the three blocks in the figure are released from rest, they accelerate with a magnitude of $1 \text{ m/s}^2$. Block 1 has mass $M$, block 2 has $2M$, and block 3 has $2M$. What is the coefficient of kinetic friction between block 2 and the table? Take $g=9.8 \text{ m/s}^2$.
2020-10-21 08:26:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 5, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4764026999473572, "perplexity": 376.23535696586396}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107876136.24/warc/CC-MAIN-20201021064154-20201021094154-00124.warc.gz"}
https://www.josephkirwin.com/srp-talk/
## Secure Remote Password Protocol (SRP)
### What is SRP?
A "strong" password authentication protocol created by Tom Wu to address the case where both sides only share a weak, human-readable passphrase. Heavy details of the technical material I'm about to get into below can be found here.
### The Setup
1. Client chooses $q$ and $N = 2q+1$ (both primes)
2. Client chooses a small random salt and computes:
• $x = Hash(salt, password)$
• $v = g^x$
Note that $g$ is a generator of the cyclic group $\mathbb{Z}_N$, see this slide for more details.
3. The server then stores:
• $v$ (the verifier)
• $s$ (the salt)
• $i$ (the username to index this stuff)
### Subsequent authentications
1. Client sends its username to the server
2. Server looks up the verifier ($v$) and salt ($s$)
3. Server sends the salt back to the client
4. Client computes $x = Hash(s, password)$
5. Client generates a random number ($a$)
6. Client sends $A = g^a$ to the server
7. Server generates its own ephemeral key in the form $B = v + g^b$, where $b$ is a random number
8. Server returns $B$ and another random number ($u$) called a random scrambling parameter
Both sides now compute a common value $S = g^{ab+bux}$
Server side derivation: $g^{ab+bux} = (g^{a+ux})^b = (g^a \cdot g^{ux})^b = (A \cdot (g^x)^u)^b = (A \cdot v^u)^b$
Client side derivation: $g^{ab+bux} = (g^b)^{a+ux} = (B-v)^{a+ux} = (B-g^x)^{a+ux}$
Using their respective computed $S$ from the previous slide:
• Client sends the server: $message1 = Hash(A, B, S)$
• Server sends the client: $message2 = Hash(A, message1, S)$
### What are its strengths? or what threats does it mitigate?
• Client side: you can keep your password in key-derivation-function output form.
• In transport the raw password is never used.
• Server side: no actual passwords need to be stored, just the verifier and salt.
• You can go further and derive a secure session key from the initial exchange.
see references page for more details
### Primitive Root Modulo N
aka, what does this fracking mean?!
In specific relation to the generator set $\mathbb{Z}_n$: coprimes of $n$ are all the numbers $k$, less than $n$, that satisfy $\gcd(n, k) == 1$. The primitive root(s) are the select few of those that have the same period as the set. e.g. $\mathbb{Z}_{11}$ has coprimes {1,2,3,4,5,6,7,8,9,10}

| $x$ | $x^1$ | $x^2$ | $x^3$ | $x^4$ | $x^5$ | $x^6$ | $x^7$ | $x^8$ | $x^9$ | $x^{10}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1 | | | | | | | | | |
| 2 | 2 | 4 | 8 | 5 | 10 | 9 | 7 | 3 | 6 | 1 |
| 3 | 3 | 9 | 5 | 4 | 1 | | | | | |
| 4 | 4 | 5 | 9 | 3 | 1 | | | | | |
| 5 | 5 | 3 | 4 | 9 | 1 | | | | | |
| 6 | 6 | 3 | 7 | 9 | 10 | 5 | 8 | 4 | 2 | 1 |
| 7 | 7 | 5 | 2 | 3 | 10 | 4 | 6 | 9 | 8 | 1 |
| 8 | 8 | 9 | 6 | 4 | 10 | 3 | 2 | 5 | 7 | 1 |
| 9 | 9 | 4 | 3 | 5 | 1 | | | | | |
| 10 | 10 | 1 | | | | | | | | |

(powers of $x$ mod 11, shown up to the first 1)

So we can see that {2,6,7,8} are the primitive roots of $\mathbb{Z}_{11}$. This is important from the crypto side to ensure the inversion is as difficult as possible. Here's the thing an attacker would need to solve: $g^x \bmod 11 = result$, where $x$ is the hashed password. The fact that $N$ is prime is also of relevance, as it ensures that the size of the congruence class is $N-1$. If it wasn't prime the congruence class could be small, and in some cases no primitive roots would exist, e.g. N=15.
back to SRP Setup page
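To make the message flow above concrete, here is a toy sketch that follows the slides' formulas directly; the tiny modulus, the sample password, and the helper hash are illustrative assumptions only, and a real deployment should use a vetted SRP-6a implementation rather than hand-rolled arithmetic like this.

```python
# Toy sketch of the exchange described in these slides (NOT production code:
# real SRP uses large safe primes, padding, and the standardised SRP-6a
# k multiplier; the tiny N here only makes the arithmetic easy to follow).
import hashlib
from random import randrange

N = 23   # small safe prime (2*11 + 1), for illustration only
g = 5    # generator of the multiplicative group mod N

def H(*parts):
    """Illustrative hash: digest the parts, reduced mod N for toy-sized math."""
    data = "|".join(str(p) for p in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

# --- enrolment (client side) ---
salt = randrange(1, N)
x = H(salt, "correct horse battery staple")   # x = Hash(salt, password)
v = pow(g, x, N)                              # verifier the server stores

# --- one authentication run ---
a = randrange(1, N)
A = pow(g, a, N)                              # client ephemeral
b = randrange(1, N)
B = (v + pow(g, b, N)) % N                    # server ephemeral, B = v + g^b
u = randrange(1, N)                           # random scrambling parameter

S_server = pow(A * pow(v, u, N), b, N)                 # (A * v^u)^b
S_client = pow((B - pow(g, x, N)) % N, a + u * x, N)   # (B - g^x)^(a+ux)
assert S_server == S_client                   # both equal g^(ab+bux) mod N

M1 = H(A, B, S_client)                        # client -> server
M2 = H(A, M1, S_server)                       # server -> client
print(M1, M2)
```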
2021-01-27 04:38:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3703795075416565, "perplexity": 991.6860316020609}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704820894.84/warc/CC-MAIN-20210127024104-20210127054104-00073.warc.gz"}
https://socratic.org/questions/what-is-the-boiling-point-of-water-in-kelvins
# What is the boiling point of water in kelvins?
$\text{Degrees Kelvin} = {}^{\circ}\text{Celsius} + 273$
And thus the normal boiling point of water $= \left(100 + 273\right)\ K = ??\ K$
2020-11-29 08:30:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43337807059288025, "perplexity": 750.853504041205}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141197278.54/warc/CC-MAIN-20201129063812-20201129093812-00413.warc.gz"}
https://msp.org/gt/2020/24-2/gt-v24-n2-p05-p.pdf
#### Volume 24, issue 2 (2020)
A construction of the quantum Steenrod squares and their algebraic relations
### Nicholas Wilkins
Geometry & Topology 24 (2020) 885–970
##### Abstract
We construct a quantum deformation of the Steenrod square construction on closed monotone symplectic manifolds, based on the work of Fukaya, Betz and Cohen. We prove quantum versions of the Cartan and Adem relations. We compute the quantum Steenrod squares for all $\mathbb{CP}^{n}$ and give the means of computation for all toric varieties. As an application, we also describe two examples of blowups along a subvariety, in which a quantum correction of the Steenrod square on the blowup is determined by the classical Steenrod square on the subvariety.
##### Keywords
Gromov–Witten theory, quantum cohomology, Steenrod squares, symplectic geometry, symplectic topology
##### Mathematical Subject Classification 2010
Primary: 53D45
Secondary: 14N35, 55S10
2023-03-31 02:19:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26066887378692627, "perplexity": 2477.5189413975268}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949533.16/warc/CC-MAIN-20230331020535-20230331050535-00592.warc.gz"}
https://www.quizover.com/calculus/course/4-1-related-rates-applications-of-derivatives-by-openstax?page=4
# 4.1 Related rates  (Page 5/7)
Two airplanes are flying in the air at the same height: airplane A is flying east at 250 mi/h and airplane B is flying north at $300\ \text{mi/h}.$ If they are both heading to the same airport, located 30 miles east of airplane A and 40 miles north of airplane B, at what rate is the distance between the airplanes changing?
The distance is decreasing at $390\ \text{mi/h}.$
You and a friend are riding your bikes to a restaurant that you think is east; your friend thinks the restaurant is north. You both leave from the same point, with you riding at 16 mph east and your friend riding $12\ \text{mph}$ north. After you traveled $4\ \text{mi},$ at what rate is the distance between you changing?
Two buses are driving along parallel freeways that are $5\ \text{mi}$ apart, one heading east and the other heading west. Assuming that each bus drives a constant $55\ \text{mph},$ find the rate at which the distance between the buses is changing when they are $13\ \text{mi}$ apart, heading toward each other.
The distance between them shrinks at a rate of $\frac{1320}{13}\approx 101.5\ \text{mph}.$
A 6-ft-tall person walks away from a 10-ft lamppost at a constant rate of $3\ \text{ft/sec}.$ What is the rate that the tip of the shadow moves away from the pole when the person is $10\ \text{ft}$ away from the pole?
Using the previous problem, what is the rate at which the tip of the shadow moves away from the person when the person is 10 ft from the pole?
$\frac{9}{2}$ ft/sec
A 5-ft-tall person walks toward a wall at a rate of 2 ft/sec. A spotlight is located on the ground 40 ft from the wall. How fast does the height of the person’s shadow on the wall change when the person is 10 ft from the wall?
Using the previous problem, what is the rate at which the shadow changes when the person is 10 ft from the wall, if the person is walking away from the wall at a rate of 2 ft/sec?
It grows at a rate of $\frac{4}{9}$ ft/sec
A helicopter starting on the ground is rising directly into the air at a rate of 25 ft/sec. You are running on the ground starting directly under the helicopter at a rate of 10 ft/sec. Find the rate of change of the distance between the helicopter and yourself after 5 sec.
Using the previous problem, what is the rate at which the distance between you and the helicopter is changing when the helicopter has risen to a height of 60 ft in the air, assuming that, initially, it was 30 ft above you?
The distance is increasing at $\frac{135\sqrt{26}}{26}$ ft/sec
For the following exercises, draw and label diagrams to help solve the related-rates problems.
The side of a cube increases at a rate of $\frac{1}{2}$ m/sec. Find the rate at which the volume of the cube increases when the side of the cube is 4 m.
The volume of a cube decreases at a rate of $10\ \text{m}^3\text{/sec}.$ Find the rate at which the side of the cube changes when the side of the cube is 2 m.
$-\frac{5}{6}$ m/sec
The radius of a circle increases at a rate of $2$ m/sec. Find the rate at which the area of the circle increases when the radius is 5 m.
The radius of a sphere decreases at a rate of $3$ m/sec. Find the rate at which the surface area decreases when the radius is 10 m.
$240\pi\ \text{m}^2\text{/sec}$
The radius of a sphere increases at a rate of $1$ m/sec.
Find the rate at which the volume increases when the radius is $20$ m.
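To illustrate the setup these exercises expect, here is the last one worked out; the final number is my own computation rather than a value from the answer key. With $V = \frac{4}{3}\pi r^{3}$,

$$\frac{dV}{dt} = 4\pi r^{2}\,\frac{dr}{dt} = 4\pi (20\ \text{m})^{2}(1\ \text{m/sec}) = 1600\pi\ \text{m}^{3}\text{/sec}.$$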
Source: OpenStax, Calculus volume 1. OpenStax CNX. Feb 05, 2016. Download for free at http://cnx.org/content/col11964/1.2
2018-05-26 06:16:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 21, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7198310494422913, "perplexity": 873.0367742397227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867311.83/warc/CC-MAIN-20180526053929-20180526073929-00038.warc.gz"}
https://mathoverflow.net/questions/442891/are-there-any-other-examples-where-weak-and-strong-are-confused-in-mathemati
# Are there any other examples where "weak" and "strong" are confused in mathematics? What are some examples of logically weak (when there are more objects satisfying) but being called 'strong' or logically strong (when there are fewer objects satisfying) being called 'weak'? • There are two partial orders on a Coxeter group $W$, often called the weak Bruhat order and the strong Bruhat order. Every relation in the weak order is a relation in the strong order, but not conversely. So, to say $u \leq v$ in weak order is a stronger statement than to say $u \leq v$ in strong order. However, note that nowadays it is more common to refer to "weak Bruhat order" as just "weak order" and "strong Bruhat order" as just "Bruhat order." Mar 17 at 2:10 • I always tell myself the strong Bruhat order is stronger in the architectural sense, as the extra edges in the Hasse diagram hold it together better. Mar 17 at 2:14 • @SamHopkins Interesting; I have never found this terminology confusing. It seems natural to me that a "partial" ordering is "weaker" than a "total" ordering. So the closer you are to a total ordering, the stronger it is, and the closer you are to an antichain, the weaker the ordering is. Mar 17 at 13:00 • Strong induction is stronger than weak induction because it has the same conclusion, but weaker hypothesis. So what makes you say that it is a “weaker condition” than weak induction? (What do you mean by “condition” here, anyway?) Mar 17 at 13:55 • ... theory such as Robinson’s arithmetic Q (or better, $\mathrm{PA}^-$). Over such a background theory: (1) strong induction for any particular formula $P$ implies weak induction for the same formula, but (2) in general, weak induction for $P$ does not imply strong induction for $P$. Thus strong induction is strictly stronger than weak induction in this sense. However, strong induction for $P$ follows from weak induction for a sutable different formula, hence if you postulate both schemata for all formulas, they are equivalent. Now, what happens if you drop the background theory? ... Mar 18 at 7:03 Munkres's book Topology (p. 78) contains the following warning about coarser and finer topologies: Many mathematicians use the words "weaker" and "stronger" in this context. Unfortunately, some of them (particularly analysts) are apt to say that $$\mathcal T'$$ is stronger than $$\mathcal T$$ if $$\mathcal T' \supset \mathcal T$$, while others (particularly topologists) are apt to say that $$\mathcal T'$$ is weaker than $$\mathcal T$$ in the same situation! If you run across the terms "strong topology" or "weak topology" in some book, you will have to decide from the context which inclusion is meant. We shall not use these terms in this book. For instance, the 'weak topology' in the definition of a CW complex is the finest topology for which all cell inclusions are continuous. But weak topologies on a topological vector space $$V$$ are usually the coarsest topology for which certain maps $$V \to \mathbf R$$ are continuous. The difference here comes from whether you think a topology is weaker if it has more opens or if more sequences converge. (Like Munkres, I have adapted the convention to stick to the unambiguous terms coarser and finer.) 
• 'Weak topology' occurs when the situation is at the lower edge of continuity (with the minimal number of continuous maps): For a topological space $X$, the finer the topology on $X$ is, the less possibility of continuity a map $f$ with target $X$ will have; Conversely, the coarser the topology on $X$ is, the less possibility of continuity a map $f$ with source $X$ will have. Similarly, 'strong topology' occurs when the situation is at the upper edge of continuity. In this sense, the use of 'strong' and 'weak' is subtle, but not quite 'confused'. Mar 17 at 19:05 • but I wonder how many people would really call "stronger" a coarser topology. My guess is very few. I think the situation is not symmetric. Mar 17 at 19:56 I believe that strong monoidal functors were originally called "weak monoidal functors". The newer name indicates that it's a stronger condition than being a lax monoidal functor, while the original name indicates that it's a weaker condition than being a strict monoidal functor. There are certainly examples where the terms "strong" and "weak" can be confusing, though I'm not sure I would call any of them "mistakes" per se. Even in your example of "strong induction" versus "weak induction," I am not sure what you mean when you say that "strong induction is a weaker condition." Are you saying that if an induction is weak then it is also strong, but not necessarily vice versa? That doesn't seem to be true, at least not according to how I've heard the terms "strong induction" and "weak induction" used. But the example that comes to mind is the "weak law of large numbers" and the "strong law of large numbers." The strong law is so called because its conclusion (almost sure convergence) is stronger than the conclusion of the weak law (convergence in probability). However, the terminology can give the impression that the weak law is a straightforward corollary of the strong law, since that's what we normally mean by saying that Theorem A is "stronger" than Theorem B. But the weak law does not follow straightforwardly from the strong law, because the hypotheses of the weak law are different. The type of example alluded to by Sam Hopkins in a comment also arises frequently. A structure on a set may be called "weaker" than another if there is less structure. So for example, a topology $$\mathscr{U}$$ on a set $$X$$ is weaker than a topology $$\mathscr{V}$$ on $$X$$ if $$\mathscr{U} \subseteq \mathscr{V}$$. But this means that for a given subset $$U \subseteq X$$, the condition $$U\in \mathscr{U}$$ is stronger than the condition $$U\in \mathscr{V}$$. I wouldn't call this an "error" but some people might find it confusing. • In the topology example, I thought the usual terms were "finer" and "coarser"? EDIT: Nevermind, you are right, "stronger" and "weaker" are also used for this... Mar 17 at 13:35 • @SamHopkins That terminology is used, but certainly "weak" is used as well. Certainly the terminology "weak-* topology" is pretty entrenched; I've never heard the Banach-Alaoglu theorem ever stated without using the term "weak-* topology." Mar 17 at 13:38 • In what sense are the hypotheses of the weak law of large numbers different to those of the strong law of large numbers? Obviously, one can come up with situations where the conclusion of the weak law holds and that of the strong law doesn't. But aren't the names both generally understood to refer to the case of a sequence of i.i.d. random variables with finite mean? 
Mar 17 at 17:29 • @JamesMartin Wikipedia lists a few examples where the weak law holds but the strong law doesn't. In particular, finite mean isn't always necessary for the weak law to hold. Mar 17 at 21:49 • Maybe it would have been clearer if I had said that the weak law does not always follow straightforwardly from the strong law, because the hypotheses can be different? In any case, the point is that it can be confusing: if the weak law is just an easy corollary of the strong law, then why do people bother talking about the weak law? Mar 18 at 1:54 As an example where the adjectives "weak" and "strong" refer to hypotheses, rather than conclusions, a favorite of mine is (Gelfand-Pettis) "weak" vector-valued integrals, as juxtaposed to (Bochner) "strong" vector-valued integrals. Apart from other hypotheses, such as that the vector space is locally convex and quasi-complete, the "weak" integral $$\int_X f(x)\,d\mu(x)$$ of a $$V$$-valued function $$f$$ is characterized by $$\lambda\Big(\int_X f(x)\,d\mu(x)\Big) \;=\; \int_X \lambda(f(x))\,d\mu(x)$$ for all $$\lambda$$ in the continuous dual of $$V$$, noting that the right-hand side is just a scalar-valued integral. By Hahn-Banach, there is at most one such. Then, under various (useful-in-practice) hypotheses, one proves existence. To me, it's fairly amazing that the "weak" (because it refers to the dual) condition gives so much. In contrast, the "strong" (Bochner) integral constructs an "integral", in analogy with Riemann and Lebesgue integrals, and then proves it has the desired properties (which would/do also follow from the "weak" condition!) • But in this case isn't at least the logical strength correct? In that a Bochner integrable function is Gelfand-Pettis integrable? Mar 17 at 19:12 • @WillieWong, ah, well, yes, the hypotheses' implications run in that direction, for sure. (Though some traditional formulations of Bochner integrals are too restrictive... I think Anton Deitmar showed that quasi-complete, locally convex, which is needed for "weak", also works for "strong"...) Mar 17 at 19:21 It is sometimes said that the axioms of a group can be "weakened" by merely requiring the existence of a left identity element, and left inverses. A priori it seems that replacing the two-sided axioms with their one-sided counterparts could lead to more models. However, it turns out that in a semigroup with both a left identity and left inverses, the left identity must also be a right identity, and the left inverses are also right inverses. Therefore, the "weakened" axioms have exactly the same models as the usual group axioms, meaning that the term "weakened" is perhaps misleading. On the other hand, in a general algebraic structure, the existence of a left identity is still a strictly weaker condition than a two-sided identity. Thus, a collection of individually weaker axioms might still yield the same models as a collection of their individually stronger counterparts. This is an elementary observation, but an important one, since it reinforces how the terms "stronger" and "weaker" are sensitive to context. • Perhaps more striking is that commutativity of addition is superfluous for commutative rings, but obviously very strong in general. Mar 17 at 22:02 • @Carl-FredrikNybergBrodda: Did you mean to say that it is superfluous for unital rings? – Joe Mar 17 at 23:57 • Well, all my rings have an identity element (otherwise they are rngs).
Mar 18 at 0:51 • @Carl-FredrikNybergBrodda: Assuming that all rings are unital, I think that commutativity of addition is a superfluous axiom even for noncommutative rings. It follows by expanding the product $(1+1)(x+y)$ (see Bill Dubuque's answer here). – Joe Mar 18 at 1:01 • Yes, that was what I wanted to refer to — but I added an extra commutative in my comment. Mar 18 at 6:46
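For readers following the last comment thread, here is the standard calculation sketched in full (assuming a unital ring, with addition a group but not assumed commutative): expanding $(1+1)(x+y)$ with each distributive law in turn gives

$$(1+1)(x+y) = (x+y)+(x+y) = x+y+x+y, \qquad (1+1)(x+y) = (1+1)x+(1+1)y = x+x+y+y,$$

so $x+y+x+y = x+x+y+y$; cancelling the leading $x$ and the trailing $y$ (possible because addition is a group) leaves $y+x = x+y$.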
2023-03-27 13:32:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 21, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8370002508163452, "perplexity": 435.62606649222425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948632.20/warc/CC-MAIN-20230327123514-20230327153514-00197.warc.gz"}
https://stats.stackexchange.com/questions/214600/moderator-analyses-in-a-meta-analysis
# 'moderator' analyses in a meta-analysis Sorry if I use the wrong terms here, I am still learning how to do this. I am conducting a meta-analysis comparing two treatment types, using Cohen's d. I have completed the primary analyses of the two groups. So I know the overall effects, and I know which ones need random-effects models instead of fixed-effects models. I know the q-, z-, and c-statistics as well for each group. Now, however, I want to look at how a particular component of treatment influenced the overall mean effect size for each group. In particular, I want to see if treatments that used exposure techniques influenced the overall mean effect size for each group. How do I do this? Do I conduct another 'meta-analysis' on the subgroups and compare the overall mean effect size and z-score to the other sub-group (i.e., treatment group 1 with exposure versus treatment group 1 without exposure), do I compare the sub-group to the overall treatment group (i.e., treatment group 1 with exposure versus all of treatment group 1), or do I use some other statistical test? EDIT Or do I compute an overall Q statistic comparing each group? And if so, what do I do when one group, according to its Q statistic, calls for a random-effects model and the other for a fixed-effects model? Thank you. The issues have been discussed at some length here http://www.metafor-project.org/doku.php/tips:comp_two_independent_estimates. You may need to come back with some other issues; in particular, using a fixed-effect model for one group and a random-effects model for the other would not in general be a good idea, since they are different models with different goals. Many authors suggest you choose on the basis of observed heterogeneity, but you should really pick the one which corresponds to your scientific question. More detail about that issue is available in this answer Meta-analysis of standard deviation using the metafor package in R: can we distinguish between the different types of variability?
2019-08-23 20:00:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7315989136695862, "perplexity": 782.8902074732321}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318986.84/warc/CC-MAIN-20190823192831-20190823214831-00553.warc.gz"}
https://www.vedantu.com/question-answer/a-train-leaves-delhi-at-10-am-and-reaches-jaipur-class-8-maths-cbse-5f6027918f2fe2491828c545
# A train leaves Delhi at 10 a.m. and reaches Jaipur at 4 p.m. on the same day. Another train leaves Jaipur at 12 p.m. and reaches Delhi at 5 p.m. on the same day. What is the time of day (approximately) when the two trains will meet? (A) 1.42 pm (B) 1.27 pm (C) 04 pm (D) 1.49 pm

Hint: Distance - the length of the line joining two given points is called the distance between them. Units of distance: metre (m), kilometre (km), centimetre (cm), etc. Speed - the distance covered per unit time is called speed; that is, the ratio of distance and time is known as speed. Units of speed: m/s (metres per second), km/s (kilometres per second), km/hr (kilometres per hour), etc. The formulas for speed, distance and time are:

Speed $= \dfrac{\text{Distance}}{\text{Time}}$, Distance $=$ Speed $\times$ Time, Time $= \dfrac{\text{Distance}}{\text{Speed}}$

We can easily learn the above formulas with the help of the speed-distance-time triangle. The positions of the words in the triangle will help us to learn these formulas. To find the speed, distance is over time in the triangle, so speed is distance divided by time. To find distance, speed is beside time, so distance is speed multiplied by time.

Average speed: If an object covers different distances with different speeds, then the average speed of that object is an indication of the average rate at which it covers that distance, i.e. Average Speed $=$ (total distance covered throughout the journey) $/$ (total time taken for the journey), or Average speed $= \left( D_1 + D_2 + D_3 + \ldots \right)/\left( t_1 + t_2 + t_3 + \ldots \right)$.

Complete step by step solution:

Time at which train 1 leaves Delhi $=$ 10 AM; time at which train 1 reaches Jaipur $=$ 4 PM. Time at which train 2 leaves Jaipur $=$ 12 PM; time at which train 2 reaches Delhi $=$ 5 PM. We need to find the time when both trains meet.

For train 1: total time taken to reach Jaipur $=$ 4 PM $-$ 10 AM $=$ 6 hours. Let the distance between the two stations be x km. Then the speed of train 1 is ${S_1} = \dfrac{x}{6}\ \text{km/hr}$ …..(1)

For train 2: total time taken to reach Delhi $=$ 5 PM $-$ 12 PM $=$ 5 hours. Since the distance is again x km, the speed of train 2 is ${S_2} = \dfrac{x}{5}\ \text{km/hr}$ ….. (2)

Let both trains meet $t$ hours after 10 AM. The distance covered by train 1 in that time is ${D_1} = {S_1} \times t = \dfrac{x}{6}t$ …..(3)

Since train 2 leaves its station 2 hours later than train 1, the distance covered by train 2 is ${D_2} = {S_2} \times (t - 2) = \dfrac{x}{5}(t - 2)$ …..(4)

We know that the total distance between the two stations is x, and also $x = {D_1} + {D_2}$. So

$x = \dfrac{xt}{6} + \dfrac{x}{5}(t - 2)$

$1 = \dfrac{t}{6} + \dfrac{t}{5} - \dfrac{2}{5}$

$30 = 11t - 12$

$11t = 42$

$t = \dfrac{42}{11} = 3\dfrac{9}{11}$ hours, i.e. $t \approx$ 3 hours 49 minutes.

Therefore, 10 AM $+\ t$ is the time when both trains will meet, i.e. 10 AM $+$ 3 hours 49 minutes, which is approximately 1:49 PM.

Therefore, option (D), 1.49 pm, is the correct option.

Note:
1. If two trains start at the same time from two points A and B towards each other and after crossing they take x and y hours to reach B and A respectively, then A's speed : B's speed $= \sqrt{y} : \sqrt{x}$.
2. If two trains of length $L_1$ km and $L_2$ km are moving in opposite directions at $S_1$ km/h and $S_2$ km/h, then the time taken by the trains to cross each other is $\left( L_1 + L_2 \right)/\left( S_1 + S_2 \right)$ hours.
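(A quick numeric cross-check of the algebra above, not part of the original solution; the distance x cancels, so any positive value can be used.)

```r
# Numeric check of the meeting time; x is the Delhi-Jaipur distance and cancels out.
x  <- 1
s1 <- x / 6                  # train 1: 10 am to 4 pm, 6 hours
s2 <- x / 5                  # train 2: 12 pm to 5 pm, 5 hours

# t = hours after 10 am at which s1*t + s2*(t - 2) = x
t <- (x + 2 * s2) / (s1 + s2)
t                            # 3.8181... hours, i.e. about 3 h 49 min
# meeting time = 10 am + t, which is approximately 1:49 pm
```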
https://www.bot-thoughts.com/2012/03/avc-is-3d-compass-necessary.html
## Friday, March 9, 2012

### AVC: Is a 3D Compass Necessary?

As you may know, I'm working on Data Bus, my Sparkfun AVC robot (in case you missed it, I'm now officially on the active participant list!). Having evaluated a couple of AHRS (attitude heading reference system) solutions, and after wondering if a compass was even necessary at all, another fundamental question arose. Suppose I do use a compass... Do I really need a 3D, tilt-compensated compass in the first place? Or can I get away with a much simpler 2D compass? How much heading error can I expect from a 2D versus a 3D solution?

Scott (Team Tobor), AVC winner for the last two years, employed a 2D compass heading calculation. Of course he also fed this value into an Extended Kalman Filter (EKF) along with odometry heading calculations. What should I do? Time for more math, Octave (Matlab), and Gnuplot fun. Let's figure this out...

#### Rationale

I'm not building an airplane, I'm building a car that will drive on relatively flat ground. That's important because it's substantially easier to compute a 2D compass heading than a 3D one for a moving vehicle. A 2D heading is calculated by taking the arctangent of the ratio of the y-axis and x-axis readings: atan2(mag_y, mag_x). That's really easy. Computing an accurate 3D compass heading for a moving vehicle requires an orientation estimate, which relies on an IMU and complex algorithms to fuse gyro and accelerometer signals. In some cases one must consider the effects of acceleration on the accelerometers. Is it worth the effort? I mean, how bad can the error of a 2D compass be? Let's say the maximum tilt of the sensor with respect to "level" is 10 degrees of pitch. What's the error of a 2D heading for a range of true headings from 0 to 180 degrees?

#### Calculations

I wrote an Octave script that iterated from 0 to 180 degrees of actual heading and computed the error between the actual heading and the 2D heading for a range of vehicle tilt angles. How to model this? Start with a magnetic field vector of magnitude 1 pointing north and rotate it by 50° of pitch to represent magnetic inclination. Then rotate it counterclockwise about the z-axis by the true heading. The result is the normalized vector that would be read by our simulated magnetometer. For example, if the magnetometer is at a 45° heading, the vector will point approximately 50° down and 45° counterclockwise. In other words, this new vector is the magnetic vector expressed in the sensor frame of reference. To simulate a pitching magnetometer, I rotated the magnetic vector about the y-axis, then calculated the heading using only the x and y components of the magnetic field vector. Finally, I determined the error between the actual heading and the 2D heading.

The result: a maximum error of less than half a degree at multiples of 45° heading. That's not too bad. How about 10° of pitch? Not surprisingly, a similar plot, shifted by 180 degrees. Suppose we look at 10° of roll and 10° of pitch? In Octave, I multiplied the magnetometer matrix by a y-axis rotation matrix and then an x-axis rotation matrix. Here the error is about 1.8° and the maximum error occurs at 90° and 270°.

Naturally, I'm curious about the relationship between tilt and error. How much tilt can I get away with before errors get too high? Below is a plot of heading error versus heading versus tilt, with tilt values of 0-10 where roll=pitch=tilt, and heading from 0 to 360 degrees. Let's focus in on just pitch+roll angles versus error for the maximum error at 90 degrees heading.
You can see that above about 10 degrees, error starts to grow pretty quickly.

#### Conclusion

Somewhere around 5° is probably a safe limit, and that's a lot of tilt. It even looks steep. A 6% grade is only about 3.4°, while 5° would be an 8.7% grade. In fact, I surveyed the slopes at the Sparkfun building parking lot; they looked terrible, but the angles are generally 3° or less.

Informal SFE site survey: most angles are 0-3°

Even if the combination of ground tilt and chassis pitch and roll stays under 10°, the heading error will still be less than 2°. Most digital compasses already have an accuracy of only about 1-2° under ideal conditions. But that's not the whole story, because the compass isn't the only possible heading sensor anyway. In fact, relying on the compass as the only heading sensor is probably unwise, as it's subject to local field disturbances from ferrous objects like the drainage covers on the sidewalk around the SFE building.

Source of compass distortion. Don't drive too close!

So one can use odometry and/or GPS heading information. Typically, a gyro measures heading changes and the compass and/or GPS provides an absolute reference to correct gyro drift very gradually. In such a scenario, tilt will have a less significant and less immediate effect on heading error so that, for brief maneuvers or short stretches of uneven ground, the error may be acceptably low.

Here's the Octave script if you're interested: TiltCompTest.m

1. Great thorough analysis. Certainly seems like tilt-compensation isn't needed for this application. But there's also what looks like a neat product that does all the math on-board to provide tilt-compensated output for about $35: http://www.robotshop.com/devantech-tilt-magnetic-compass.html That way you don't spend $100's and you don't add the additional trig processing to your main controller. I've never used one, but I've bought one to play with soon. I'd love to know if anyone's used one of these and what their experience has been.
http://clay6.com/qa/48385/if-a-and-b-are-any-two-sets-then-a-cup-a-cap-b-is-equal-to
# If A and B are any two sets, then A $\cup$ (A $\cap$ B) is equal to

## 1 Answer

A. Hence (A) is the correct answer.

answered Jun 24, 2014
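(The original page omits the one-step justification; for completeness, it is the absorption law:)

$$A \cap B \subseteq A \;\Longrightarrow\; A \;\subseteq\; A \cup (A \cap B) \;\subseteq\; A \cup A = A, \qquad \text{hence } A \cup (A \cap B) = A.$$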
http://mathoverflow.net/questions/49885/comparing-algebraic-group-orbits-over-big-and-small-algebraically-closed-fields/49899
# Comparing algebraic group orbits over big and small algebraically closed fields For an affine algebraic group $G$ it's often convenient (and harmless) to work concretely over an algebraically closed field of definition $k$ while identifying $G$ with its group of rational points over $k$. Once in a while, however, results obtained over $k$ need to be compared with results over a bigger algebraically closed field $K$. For example, in his 1976 Inventiones paper completing the proof that a semisimple algebraic group always has a finite number of unipotent conjugacy classes, Lusztig observed that in characteristic $p>0$ it suffices to assume that $k$ is an algebraic closure of the prime field. This in turn allowed him to apply indirectly the Deligne-Lusztig construction of characters for subgroups of $G$ over finite subfields of $k$. Here the number of unipotent classes is moreover the same for any algebraically closed field. To justify his reduction, he cites a "simple argument" shown to him by Deligne (which he later told me he viewed in retrospect as "obvious"). Independently, a formal statement of the principle was written down and proved as Proposition 1.1 in a 1997 Journal of Algebra paper: MR1474171 (98j:20058), Guralnick, Robert M. (1-SCA); Liebeck, MartinW. (4-LNDIC); Macpherson, Dugald (4-LEED); Seitz, Gary M. (1-OR), Modules for algebraic groups with finitely many orbits on subspaces. J. Algebra 196 (1997), no. 1, 211–250. Here the proof is fairly elementary, but requires more than a journal page to write down and involves an induction step left to the reader. Is there a short and transparent proof (perhaps from the scheme viewpoint) that finiteness of the number of orbits of a semisimple group acting on an affine variety is the same when an algebraically closed field of definition is extended to another such field, while the number of orbits is unchanged? It would also be interesting to know of other situations in which such a comparison occurs. (Historical remark: In the older version of foundations for algebraic geometry developed by Weil and others it was standard procedure to work over a "universal domain" having infinite transcendence degree over its prime field, to permit for instance the use of "generic points". Then it was usually troublesome to descend to a countable field.) ADDED: To be more precise about "the number of orbits is unchanged", implicit in Lusztig's work and explicit in the 1997 paper cited is the natural requirement on such a bijection that each orbit over $K$ should contain a point over $k$. (It's hard to visualize a proof that gives a bijection without this refinement.) On the other hand, it's unclear to me whether special assumptions on $G$ such as "reductive" and "connected" are essential for the proof of a general comparison principle. - An analogous situation where such a comparison occurs is this: if $X$ is finite type sept'd scheme over sep. closed field $k$ and if $K/k$ is sep. closed extn field, the natural pullback map of etale cohom. ${\rm{H}}^i(X,\mathbf{Q}_{\ell}) \rightarrow {\rm{H}}^i(X_K,\mathbf{Q}_{\ell})$ is an isom for any prime $\ell \ne {\rm{char}}(k)$ (and vast generalizations thereof). This is especially important when $k = \overline{\mathbf{Q}}$ and $K = \mathbf{C}$, since the former is where Galois gps act (when $X$ begins life over a number field) and the latter is topological (Artin comparison isom). 
–  BCnrd Dec 19 '10 at 20:24

Jim, in the argument I give below in extensive detail (& generality), I directly prove $X(k)/G(k) \rightarrow X(K)/G(K)$ is bijective. But can be seen a-posteriori (& is "formal"; reductivity irrelevant). Indeed, if you know same size, just need injectivity. And that injectivity I prove early in my long answer, in a concrete manner (using nothing beyond Nullstellensatz). Here it is in other terms: if $x, x' \in X(k)$, form the "transporter variety" $T_{x,x'}$ inside of $G$. You want that this has a $k$-pt iff it has a $K$-point. Each says variety is non-empty, again by Nullstellensatz... –  BCnrd Dec 21 '10 at 5:27

Thanks for these interesting ways of working out the answer. It's hard to single out one "correct" one, but Torsten has the edge on brevity plus transparency in the classical setting of the question. Brian has provided the most fascinating extended discussion and best community-wiki answer one can imagine. George's careful, detailed answer improves on the published 1997 proof (like Torsten working within the classical setting). Thomas gives the most interesting meta-proof though it involves for a person of my background something of a black box. Again, thanks for all the insights. –  Jim Humphreys Dec 21 '10 at 14:37

I think this will work. There are a finite number of orbits of the action of $G$ on $X$ precisely when there is an open orbit and a finite number of orbits on the complement of that orbit. Hence, it is enough to show that there is an open orbit of a point over the smaller field precisely when there is an open orbit over the larger field. One direction is clear, so it is enough to show that if there is an open orbit over the larger field, then there is one over the smaller field. However, consider the closed subscheme $S:=\{(g,x)\in G\times X | gx=x\}$ and the projection $S\to X$ on the second variable. A point of $X$ has an open orbit precisely when the fibre has the smallest possible dimension (when $X$ is irreducible, which we may assume). However, there is an open subset (defined over the smaller field) of $X$ with fibres of minimal dimension.

Addendum: As for the bijection between the orbits, this is proved the same way: we have an open orbit, which gives one orbit over each field, and then we use Noetherian induction.

-

While still in a fairly classical setting, this way of approaching the finiteness appeals a lot to me compared with the more opaque method used in the 1997 paper. But as in the case of Thomas Scanlon's very different approach, I'm uncertain about how the bijection between orbits over the two fields in the refined formulation of my added paragraph fits here. –  Jim Humphreys Dec 20 '10 at 16:19

Thanks for the Addendum. I could see the Noetherian induction here but was less sure about the orbit representatives over $k$. –  Jim Humphreys Dec 21 '10 at 14:27
I will work with quite weak hypotheses to emphasize the general applicability and flexibility of the basic idea. Then you'll also see that in a discussion between two experts, this would all be disposed of in a few sentences (so the length of what follows may create the wrong impression about the complexity). Setup: Suppose a finite type group scheme $G$ over an algebraically closed field $k$ acts on a finite type $k$-scheme $X$ (assume $G$ and $X$ are affine if you wish), and $K$ is a nonzero $k$-algebra (perhaps not an algebraically closed field). I claim $j:X(k)/G(k) \rightarrow X(K)/G(K)$ is injective (so the source is finite whenever the target is finite), and (more interestingly) that it is also surjective if $X(k)/G(k)$ is finite and $K$ is an algebraically closed field. That will answer the original equivalence question. First let's do injectivity (which will be easy, and so correspondingly not so interesting). Since $K$ exhausted by finite type $k$-subalgebras $K_i$ (definitely not fields in general), we have $X(K)= \varinjlim X(K_i)$ and $G(K)= \varinjlim G(K_i)$ (as $X$ and $G$ are finite type, or alternatively it is clear in the affine case). Thus, $X(K)/G(K) = \varinjlim X(K_i)/G(K_i)$, so it enough to treat the $K_i$ in place of $K$. So we can assume $K$ is finitely generated as a $k$-algebra. [This is a powerful idea, even when the original $K$ is a field.] By the Nullstellensatz there is a $k$-algebra map $s:K \rightarrow k$ (quotient by any maximal ideal) with $k \rightarrow K$ as section; this is the "specialization" trick. It defines a map of sets $X(K)/G(K) \rightarrow X(k)/G(k)$ with the original map $j$ as a section (as $A \rightsquigarrow X(A)/G(A)$ is a functor on $k$-algebras $A$), so $j$ is injective. That was a more or less a formal kind of silliness (despite neat use of the Nullstellensatz), so now we come to the interesting part: assuming $K$ is an algebraically closed field and $X(K)/G(K)$ has at least $n$ points then so does $X(k)/G(k)$ (so surjectivity follows when $X(k)/G(k)$ is finite). The basic principle is this: whatever finite amount of stuff happens over an algebraically closed extension of an algebraically closed field already happens over the ground field via well-chosen specialization. (Kind of like those ads about Las Vegas.) Say $x_1,\dots,x_n$ in $X(K)$ lie in distinct orbits. Exhausting $K$ by finitely generated $k$-subalgebras $K_i$ as above, we can find a big enough $K_i$, call it $A$, so that $x_1,\dots,x_n \in X(A)$. We want to show that for a "sufficiently generic" specialization map $A \rightarrow k$, their images in $X(k)/G(k)$ remains distinct. Here, the valuable geometric intuition (which makes sense even within classical algebraic geometry, since $k$ is algebraically closed and $S :=$ Spec($A$) is basically a classical irreducible variety, as $A$ is a domain of finite type over $k$) is that the $x_i \in X(A)$ are sections to the projection $X \times S \rightarrow S$ such that on the geometric generic fiber over $S$ (i.e., pullbacks along $A \rightarrow K$) they are in pairwise distinct $G$-orbits, and we want to claim that under specialization over some dense open in $S$ they remain in pairwise distinct $G$-orbits. 
In other words, we aim to "verify" an instance of the Principle of the Geometric Generic Fiber: for a finite collection of finite type schemes over an irreducible noetherian scheme $S$, and any "finite information" structure involving them (maps among them, coherent sheaves on them, etc.), any reasonable property of this structure that holds over a geometric generic point of $S$ also holds on fibers over the geometric points supported in some dense open in $S$. [In practice it isn't always obvious that certain properties are "finite information", such as flatness or surjectivity of $S$-maps, but EGA IV$_3$ lays out the whole story on this principle.] Since any intersection of finitely many non-empty opens in the irreducible $S$ contains a $k$-point (Nullstellensatz once again), it suffices to prove a more general fact for a pair of points $x, x' \in X(A)$ (to then be applied to each of the finitely many pairs $x_i, x_{i'} \in X(A)$ with $i \ne i'$): I claim that if their images in $X(K)$ (the "geometric generic fiber") are in distinct $G(K)$-orbits, then there's a dense open $U$ in $S$ such that for any $u \in U$ (e.g., a $k$-point!) the specializations $x(u), x'(u) \in X(k(u))$ have disjoint orbits under the action of $G_{k(u)}$ on $X_{k(u)}$ (in the sense of their orbit subvarieties over $k(u)$, or geometric points thereof, which comes to the same thing). This will clearly do the job. OK, now comes the step where we make an actual group-theoretic construction (akin to what Torsten did more efficiently) to produce the required open: we view $G \times S$ as an $S$-group acting on the $S$-scheme $X \times S$ and form a "transporter scheme". That is, consider the closed subscheme $$T_{x,x'} = \{g \in G \times S\,|\,g(x) = x'\} \subset G \times S$$ over $S$. In more precise terms, writing $G_S$ and $X_S$ as shorthand for $G \times S$ and $X \times S$ to save space, we have the action map $G_S \rightarrow X_S$ over $S$ defined functorially by $g \mapsto g.x$, and $T_{x,x'}$ is the preimage of the closed subscheme of the target given by the (closed immersion) section $x':S \rightarrow X_S$ over $S$. For example, if $s \in S(F)$ for a field $F/k$ then the $s$-fiber of $T_{x,x'}$ is the closed subscheme of $G_F$ defined by the condition "$g.x(s) = x'(s)$" for the points $x(s), x'(s) \in X(F)$. In effect, $T_{x,x'}$ is just the relative version of this latter classical transporter construction as we vary across the pairs $(x(s),x'(s))$ for $s$ wandering in $S$. Finally we have assembled enough to finish. Consider the structural morphism $q:T_{x,x'} \rightarrow S$. This is a map between finite type $k$-schemes. What is its fiber (i.e., pullback) over a point $s:{\rm{Spec}}(F) \rightarrow S$ (such as a $k$-point, or more importantly the "geometric generic point" ${\rm{Spec}}(K) \rightarrow S$)? Well, we just saw what this is: it is the "classical" transporter for $x(s), x'(s) \in X(F)$ inside of $G_F$. So the fiber of $q$ over a physical point $s \in S$ is empty precisely when the corresponding transporter (a finite type $k(s)$-scheme) is empty, which is to say that $x(s), x'(s) \in X(k(s))$ have disjoint orbits under $G_{k(s)}$ acting on $X_{k(s)}$ (i.e., in distinct orbits under $G(\overline{k(s)})$ acting on $X(\overline{k(s)})$, not merely under $G(k(s))$ acting on $X(k(s))$, since emptiness of a finite type $k(s)$-scheme amounts to the absence of $\overline{k(s)}$-points and not merely of $k(s)$-points). 
Excellent, so if the image of $q:T_{x,x'} \rightarrow S$ misses a dense open $U$, that open will do the job (i.e., for all $u \in U$, the points $x(u), x'(u) \in X(k(u))$ lie in distinct $G_{k(u)}$-orbits in $X_{k(u)}$). Aha, but by (the scheme version of!) Chevalley we know that the image of $q$ is a constructible set even at the level of schemes, so if it misses the generic point then it misses a dense open as desired. So we are reduced to proving that $q$ has empty fiber over the generic point of $S$. But that in turn is exactly the original hypothesis that on the geometric generic fiber over ${\rm{Spec}}(K) \rightarrow S$ our points $x, x' \in X(K)$ lie in distinct orbits under the $G(K)$-action. Voila. QED Now you can see the one serious ingredient that uses schemes (going beyond classical algebraic geometry) in an essential way: the validity of Chevalley's theorem on constructible images in the scheme framework, and the ability to apply it in conjunction with the literal generic point (and geometric points over that). Hopefully you can see that (together with specialization) this is a broadly useful technique for propogating results from an algebraically closed extension of an algebraically closed field back down to the ground field (such as surjectivity on points valued in an algebraically closed field). And that once one realizes this idea, it is sort of simple in the end. In effect, the Principle of the Geometric Generic Fiber above (which is made precise in EGA IV$_3$) is the scheme-theoretic replacement for Weil's "universal domain" concept. - This scheme viewpoint looks natural, though of course it's always a problem to combine it with the limited goals of many papers that deal with fairly concrete questions about linear groups, etc. My own scheme involvement has been sparse, so I have to ponder your answer further. –  Jim Humphreys Dec 20 '10 at 16:14 Schemes can be avoided. First, G, X, and S can be viewed as varieties. The transporter $T_{x,x′}$ is a scheme-theoretic pullback of closed subvariety under a morphism, but could use its underlying variety (classical preimage of Zariski-closed set); fibers over geometric pts of S (inside G viewed over the corresponding alg. closed field) still have the "expected" geometric pts. And Chevalley is overkill: just need that if a map $Y\rightarrow Z$ of affine k-varieties with irreducible Z localizes to empty over k(Z), then factors through a proper closed subvariety, which is elementary –  BCnrd Dec 21 '10 at 6:08 So to continue my comment above, we have removed the scheme-theoretic apparatus in the end. However, certain ways of thinking which inspire the argument are very natural from the scheme viewpoint, and may be less likely to jump out of the page from a more classical viewpoint (even if we succeed to express the final argument in entirely classical terms, as we have just seen we can do). –  BCnrd Dec 21 '10 at 6:16 I think that the simplest explanation has nothing to do with schemes: the first-order theory of algebraically closed fields of a fixed characteristic is complete. The assertion that the group of $k$-points $G(k)$ of the algebraic group $G$ has $n$ orbits on the set of $k$-points $X(k)$ of the algebraic variety $X$ may be naturally expressed as a first-order logical formula and hence is true in one algebraically closed field of characteristic $p \geq 0$ just in case it is true in every such algebraically closed field. - Dear Thomas: The theorem is for connected semisimple groups, not all linear algebraic groups. 
Is that still first-order? Or if we fix a specific $G$ over some $k$ and only consider that single $G$ over all algebraically closed extensions of $k$ then is one in a setting where "completeness" applies (whatever it means; sorry, it is unfamiliar material to me)? That aside, "simple" is probably in the eye of the beholder. :) –  BCnrd Dec 20 '10 at 6:35 The Encyclopedia of Mathematics entry (eom.springer.de/T/t110050.htm ) on this transfer principle, a weak formalized version of the Lefshetz Principle, explains what I mean by complete. –  Thomas Scanlon Dec 20 '10 at 6:52 For a fixed G and fixed action of G on a variety X, it is fairly routine to formalize the assertion that G has n obits on X. For more sophisticated assertions, the coding can be more complicated. For instance, in the problem under consideration, we would apply completeness to the assertions that for every semisimple group G and action of G on a variety X for which G, X and the action are described by polynomials in at most n variables of degree at most d there are finitely many unipotent orbits. Finiteness is usually not a first-order condition but is for algebraically closed fields. –  Thomas Scanlon Dec 20 '10 at 6:55 Thomas, thanks for the clarifications. –  BCnrd Dec 20 '10 at 15:57 @Thomas This way of looking at the question is intriguing, though from a pedagogical viewpoint it adds extra prerequisites to the papers I cited. Something like a Lefschetz principle did seem to me to be lurking here. Certainly not all the specifics of the group actions in these papers can be needed for a comparison principle. At the same time, I wonder whether one can build into your approach the refined version in my added paragraph? In the applications, one wants orbit representatives to be compatible over the two fields beyond just counting numbers of orbits. –  Jim Humphreys Dec 20 '10 at 16:10 In section 1 of the following, I wrote down a proof of the result in question: McNinch, George "On the centralizer of the sum of commuting nilpotent elements." J. Pure Appl. Algebra 206 (2006), no. 1-2, 123–140. [arXiv version] The argument I gave is a less powerful application of some of the tools used in the nice answer given by BCnrd -- it is really just an application of Chevalley's Theorem (I used the form found in Springer's "Linear Algebraic Groups" which is good enough for varieties but not for application to schemes in general). And (again in contrast to BCnrd's answer) my argument used the fact that orbits are locally closed. I am glad to have read what is probably the "right" level of generality for this argument found in BCnrd's answer. If $k \subset K$ is an extension of algebraically closed fields, Prop. 4 of loc. cit. show that each $G_{/K}$ orbit has a point rational over $k$ (when $G= G_{/k}$ has finitely many orbits). This gives the bijection between orbits over $k$ and over $K$ -- which of course is already a consequence of BCnrd's answer -- as in Scanlon's answer. - Thanks, I had overlooked this treatment of the proof. It's much more transparent from my viewpoint than the one in the 1997 paper, though of course it still takes some space to write down precisely. –  Jim Humphreys Dec 21 '10 at 14:25 The key point is to show the following: let $f:V\rightarrow W$ be a regular map of varieties over an algebraically closed field $k$; if $V(k)\rightarrow W(k)$ is surjective, then $V(K)\rightarrow W(K)$ is surjective for every algebraically closed field $K$ containing $k$. We prove this by induction on the dimension of $W$. 
We may suppose that $W$ is an irreducible closed subvariety of some affine space. Let $P\in W(K)$ be not in the image. We know that $tr.deg.k(P)\leq dimW$. If equality holds, then $P$ and its conjugates under $Aut(K/k)$ are Zariski dense in $W_{K}$, contradicting the fact that $f$ is (obviously) dominant. Hence $P\in Z(K)$ for some proper closed irreducible subvariety $Z$ of $W$. Now apply induction to $f^{-1}(Z)\rightarrow Z$ to get a contradiction. Edited: I add the rest of the argument. No hypotheses on $G$ are needed. We prove: Let $G\times V\rightarrow V$ be an action of the group variety $G$ on the variety $V$, and let $K$ be an algebraically closed field containing $k$. Then $G$ has finitely many orbits on $V$ if and only if $G_{K}$ has finitely many orbits on $V_{K}$, in which case the numbers of orbits are the same and each $K$-orbit has a $k$-point. For the proof, we first remark that if $v_{1},v_{2}\in V(k)$ lie in distinct $G$-orbits, then they lie in distinct $G_{K}$-orbits. To see this, let $Z$ be the inverse image of $v_{2}$ under the regular map $g\mapsto gv_{1}\colon G\rightarrow V$. Then $Z(k)$ is empty if and only if $Z$ is the empty variety if and only if $Z(K)$ is empty. Suppose that $G$ has only finitely many orbits on $V$, and let $v_{1} ,\ldots,v_{m}\in V(k)$ represent the different orbits. The regular map $(g,v_{i})\mapsto gv_{i}\colon G(k)\times\{v_{1},\ldots,v_{m}\}\rightarrow V(k)$ is surjective, and hence remains surjective with $K$ for $k$. Together with the first remark, this shows that $v_{1},\ldots,v_{m}$ represent the different orbits of $G_{K}$ on $V_{K}$. Finally, suppose that $G_{K}$ has only finitely many orbits on $V_{K}$. Then the first remark shows that $G$ has only finitely many orbits on $V$, and the previous argument applies. -
https://stats.stackexchange.com/questions/24975/logistic-regression-getting-pearson-standardized-residuals-in-r-vs-stata
# Logistic Regression - Getting Pearson Standardized Residuals in R vs Stata

I am working on an assignment involving a logistic regression model, where I need to plot the Pearson standardized residuals against one of the predictors. Here's the basic setup:

```r
model <- glm(outcome ~ predictor1 + predictor2, family = binomial(logit))
res <- residuals(model, "pearson")
```

When looking at the residuals' distribution, I see something totally different than my colleagues who use Stata (using predict and rstandard). Their residuals are more or less normal, whereas in mine there is a gap in the values (not a single residual is between -0.05 and 1.15). That does make sense in the context of logistic regression, especially since the maximum predicted probability is not so high (38%). I'd like to understand what's happening here... What is Stata doing that R isn't, with those residuals?

For logistic regression, Stata defines residuals and related quantities to be those you'd get if you grouped all the observations with the same values for all the predictor variables, counted up the successes and failures for those observations, and fitted a logistic regression model to the resulting binomial data instead of the original Bernoulli data. This is a useful thing to do as (if there are multiple observations with the same covariate pattern) the resulting residuals behave more like those you're used to from least squares. To get the same residuals from R, I suspect you will need to group the data and fit the model to the grouped data. But I'm not clear whether R is using the same definition of 'standardized residuals' as Stata, as I don't presently have access to the numerous textbooks that the R documentation references.

Here's an excerpt from the 'Methods and formulas' section of the Stata manual entry for 'logistic postestimation' (one thing I like about Stata is that the manuals provide the full formulas for everything):

Define $M_j$ for each observation as the total number of observations sharing $j$'s covariate pattern. Define $Y_j$ as the total number of positive responses among observations sharing $j$'s covariate pattern. The Pearson residual for the $j$th observation is defined as $$r_j = \frac{Y_j - M_j p_j}{\sqrt{M_j p_j(1 - p_j)}}$$ ... The unadjusted diagonal elements of the hat matrix $h_{Uj}$ are given by $h_{Uj} = (\mathbf{XVX}')_{jj}$, where $\mathbf{V}$ is the estimated covariance matrix of parameters. The adjusted diagonal elements $h_j$ created by hat are then $h_j = M_j p_j(1 - p_j)h_{Uj}$. The standardized Pearson residual $r_{Sj}$ is $r_j / \sqrt{1 - h_j}.$

Pearson residuals are obtained by dividing each observation's raw residual by the square root of the corresponding variance. The idea is to get something that has variance 1, approximately. In your example, try this:

```r
set.seed(3141)
x1 <- rnorm(100)
x2 <- rnorm(100)
y <- rbinom(100, 1, 0.25)
glm1 <- glm(y ~ x1 + x2, family = binomial)
f1 <- fitted(glm1)  # the fitted probability of y=1, for each observation
plot(residuals(glm1, "pearson"), (y - f1) / sqrt(f1 * (1 - f1)))
abline(0, 1)  # they match
```

The 'gap' occurs because the residuals where $Y=1$ are on one side, and those with $Y=0$ are on the other. Standardized residuals are a different animal; they divide by the estimated standard deviation of the residual; you can obtain them in R using rstandard(), though for non-linear GLMs it uses a linear approximation in the calculation. NB residuals of any form tend not to be terribly helpful in logistic regression.
With independent binary data, the only real concern is whether we've specified the mean correctly - and with modest sample sizes, plots of residuals typically provide little power to assess that. • Right. But my question was: why do I get this gap in R, but not in Stata... I gather the latter has a different way of calculating the residuals, but can't see why or what it would be. – Dominic Comtois Mar 21 '12 at 14:22 • Sorry, but there are too many options in Stata to diagnose what it's doing in your case, without full code, and descriptions of your covariates. See this discussion thread for more; stata.com/statalist/archive/2004-04/msg00205.html – guest Mar 22 '12 at 4:09
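(Not from the original thread: to make the grouping suggestion in the first answer concrete, here is a rough R sketch that collapses the data to covariate patterns and refits the model on binomial counts. The data frame `dat` is a placeholder, and its columns reuse the names from the question; with continuous predictors there may be few repeated covariate patterns, so this mainly pays off when the predictors are discrete.)

```r
# Sketch: Stata-style residuals via covariate patterns.
# Assumes a data frame `dat` with columns outcome (0/1), predictor1, predictor2.
library(dplyr)

grouped <- dat %>%
  group_by(predictor1, predictor2) %>%
  summarize(successes = sum(outcome),
            failures  = n() - sum(outcome),
            .groups   = "drop")

# Binomial fit with one row per covariate pattern
model_grp <- glm(cbind(successes, failures) ~ predictor1 + predictor2,
                 family = binomial, data = grouped)

# Standardized Pearson residuals, one per covariate pattern
# (analogous to Stata's r_j / sqrt(1 - h_j))
rstandard(model_grp, type = "pearson")
```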
http://math.stackexchange.com/questions/169133/find-the-closed-form-of-the-sequence/169165
# Find the closed form of the sequence

Let $\{x_{n}\}_{n=1}^\infty$, with $x_{1}=a$ where $a>1$, be a sequence that satisfies the relation: $$x_{1}+x_{2}+...+x_{n+1}= x_{1}x_{2}\cdots x_{n+1}$$ For this problem, the requirement is to prove that $x_{n}$ is convergent, and then find its limit when $n$ goes to $\infty$. I think I can handle these two requirements, but my curiosity is about the way $x_{n}$ looks, and I wonder if there is a nice closed form for it.

-

For $n \ge 2$, $x_n = \dfrac{a^{2^{n-2}}}{P_n(a)}$ where $\displaystyle P_{n}(a) = a^{2^{n-2}} - \prod_{j=2}^{n-1} P_j(a)$ is a polynomial in $a$ of degree $2^{n-2}$.

$$\begin{aligned}
P_2(a) &= a-1 \\
P_3(a) &= a^2-a+1 \\
P_4(a) &= a^4-a^3+2 a^2-2 a+1 \\
P_5(a) &= a^8-a^7+3a^6-6a^5+9a^4-10a^3+8a^2-4a+1
\end{aligned}$$

It looks like:

- The coefficient of $a^0$ in $P_n(a)$ is $1$ for $n \ge 3$.
- The coefficient of $a^1$ in $P_n(a)$ is $-2^{n-3}$ for $n \ge 3$.
- The coefficient of $a^2$ in $P_n(a)$ is $2^{2n-7}$ for $n \ge 4$.
- The coefficient of $a^3$ in $P_n(a)$ is $\dfrac{2^{n-3}-8^{n-3}}{6}$ for $n \ge 3$.

EDIT: The coefficient of $a^4$ in $P_n(a)$ is $\dfrac{2^n}{32} - \dfrac{4^n}{384} + \dfrac{16^n}{98304}$ for $n \ge 5$. All this should be provable by induction. I got these by looking at the first few members of the sequence using Maple.

- how did you get at this result? – I'm an artist Jul 10 '12 at 19:10

The sum $s_n := x_1 + ... + x_n$ satisfies $s_n = g(s_{n-1})$ where $g(x) = \frac{x^2}{x-1}$ (prove this by induction). Since $g(x) > x + 1$ for all $x > 1$, the sequence $s_n$ tends to infinity. So you have $$x_n = g(s_{n-1}) - s_{n-1},$$ which tends to $\lim_{x \rightarrow \infty} (g(x) - x) = \lim_{x \rightarrow \infty} \frac{x}{x-1} = 1$.

- the limit may be easily found by applying AM-GM, as well. – I'm an artist Jul 11 '12 at 9:02
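(A quick numeric illustration of the second answer, not part of the original thread: iterate $s_n = g(s_{n-1})$ and watch $x_n = s_n - s_{n-1}$ approach 1, e.g. for $a = 2$.)

```r
# Iterate s_n = g(s_{n-1}) with g(x) = x^2/(x-1), s_1 = a,
# and track x_n = s_n - s_{n-1}; the terms approach 1.
g <- function(x) x^2 / (x - 1)

a <- 2      # any a > 1
s <- a
for (n in 2:10) {
  s_new <- g(s)
  cat("n =", n, " x_n =", s_new - s, "\n")   # tends to 1
  s <- s_new
}
```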
https://minimizeregret.com/post/2018/06/17/instacart-products-bought-together/
# Pointwise Mutual Information for Instacart Product Recommendations Using pointwise mutual information, we create highly efficient “customers who bought this item also bought” style product recommendations for more than 8000 Instacart products. The method can be implemented in a few lines of SQL yet produces high quality product suggestions. Check them out in this Shiny app. Back in school, I was a big fan of the Detective Conan anime. For whatever reason, one of the episodes stuck with me. In that episode, the protagonists “pick up receipts in a convenience store to guess what the people are buying for dinner.” While this leads them inadvertently to a crime they need to solve, we will rather stick with the idea of finding out which products appear together in customers’ baskets. Based on the Instacart Online Grocery Shopping dataset released a year ago, we analyze about 3 million orders of about 200,000 Instacart users. Similarly to how the detective boys used bought-together patterns to identify what customers were going to cook that evening, we’re going to find products that are bought together in order to create an effective, yet simple recommendation algorithm. So simple in fact, that the entire analysis could be productionized in plain SQL. ## Instacart Data Set From Wikipedia: Instacart is an American company that operates as a same-day grocery delivery service. Customers select groceries through a web application from various retailers and delivered by a personal shopper. The Instacart Online Grocery Shopping Dataset 2017 was made public by Instacart and can be downloaded here and offers a unique ability to try out recommendation algorithms on customer basket data. Then Instacart’s VP Data Science, Jeremy Kun introduced the data set in a Medium post. The dataset contains information on the products contained in about 3 million orders made by 200,000 Instacart customers. It thus lends itself as a testbed for machine learning methods that one would tend to apply at ecommerce companies–in particular those with a large variety of products, large basket sizes and returning customers. ## Expected Result In his blog post, Jeremy Kun highlights how Instacart uses the data for example to sort their Buy It Again listings, or to model the Frequently Bought With recommendations. Here I will restrict myself to recommendations in the style of the latter. The results of the algorithm should be able to run under a “Frequently Bought Together” or “Customers Who Bought This Item Also Bought” headline. Much like Amazon’s famous recommendations, or the ones that Instacart employs itself. Take for example this Whole Foods pita bread offered on Instacart. Its page features recommendations for hummus and baba ghannouj under an “Often Bought With” headline, offering them as common complements. This style of recommendation is the goal, where we find items that go well together based on past purchases made by all customers. Those recommendations serve as a simple way for customers to fill their baskets with items that increase the value of the items they have already added to their baskets. It also ensures that customers don’t forget to buy items they really ought to buy. This stands in contrast to “Similar Items” or “Related Items” recommendations that are often found on the same product detail pages. These recommendations usually aim at direct substitutes to the product on the current detail page. 
On Instacart’s page for Whole Foods Market Organic Whole Wheat Pita Bread, I got served recommendations for a couple of other pita varieties, for example.

## Methodology

So how exactly are we going to find the products that are often bought with pita bread? How do we know what customers who bought this item also bought? The naive approach would be to count the pure item co-occurrence in orders: for every item, count how often it has been in an order with the pita bread, then recommend the item with the highest count. While this might surface a good recommendation from time to time, it will mostly surface bananas and toilet paper. Bananas and toilet paper are examples of a few very common items which appear in a large share of orders without being related to any product in particular. They would dominate any raw co-occurrence count just by their own purchase probability. To account for this difficulty, we will make use of a simple trick from natural language processing: Pointwise Mutual Information.

### Pointwise Mutual Information

Pointwise Mutual Information is a measure of association from information theory and has found a popular application in natural language processing. There, it measures the association between a word and the word’s context, e.g. close words in a sentence (bi-grams, n-grams, etc.). It does so by comparing how often the word and the context appear together against how often they would appear together were they independent events. Following Wikipedia, we have for the outcomes $$x$$ and $$c$$ of two discrete random variables $$X$$ and $$C$$:

$pmi(x;c) = \log \frac{p(x,c)}{p(x)p(c)}$

Here, the numerator describes the joint probability, while the denominator describes the joint probability under independence. Thus, were the two events independent, we would have $$pmi(x;c) = \log(1) = 0$$. Consequently, positive PMI values imply positive association between the events (e.g., the word and its context, or between two products). Similarly, negative PMI values should indicate negative relationships—but it’s generally not as easy to think in terms of words that do not appear together. While it’s easy to come up with co-occurring words (Google & Facebook, Scrum & Agile, Obama & Merkel), I failed to quickly come up with examples for the opposite. It’s not how we’re tuned to think. Also, the necessary corpus to correctly measure the PMI of words that do not appear together is very, very large—because they don’t appear together (see this book chapter by Daniel Jurafsky and James H. Martin).

## Implementation

### Data Preparation

We’ll now prepare the Instacart data and apply the PMI measure on the observed orders to find products that have been bought together. To follow along, download the csv files from Instacart. First, we load some libraries and read the csv files. Note that these are the only packages we’ll need. This goes to show that we can do the same analysis in SQL, even though what follows is written in R.

```r
library(dplyr)
library(readr)

# this is on order level
orders <- read_csv("orders.csv")
# this is on product level
products <- read_csv("products.csv")
# this is on order-product level
order_products <- read_csv("order_products__prior.csv")
```

Note that we only read one of the available order_products tables, since we will not perform an evaluation based on a test set.
The orders table, however, contains information on more orders than those contained in order_products, so we slim it down:

```r
# number of all orders
length(unique(orders$order_id))
## [1] 3421083

# number of orders in our subset
length(unique(order_products$order_id))
## [1] 3214874

# we focus on the "prior" evaluation set for now
orders <- orders %>%
  filter(eval_set == "prior") %>%
  select(-eval_set)
```

In the next few steps, we trim down the set of considered customers and products to only include those for which we have enough observations.

```r
# first, get for every user his number of orders
users <- orders %>%
  group_by(user_id) %>%
  summarize(orders = n())

summary(users$orders)
##  Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##  3.00    5.00    9.00   15.59   19.00   99.00

# drop all users who had a single order or a very large number of orders
# (customers who only made one order might have bought "trial baskets")
good_users <- users %>%
  filter(orders <= 50, orders >= 1) %>%
  pull(user_id)

# filter for the corresponding orders
good_orders <- orders %>% filter(user_id %in% good_users)

# count for every user the number of different items he bought
product_by_customer_count <- order_products %>%
  inner_join(select(good_orders, order_id, user_id)) %>%
  distinct(user_id, product_id) %>%
  count(product_id)

# A considered product should have been bought by
# at least 0.1% of the customers
product_threshold <- length(unique(good_orders$user_id)) * 0.001

good_products <- product_by_customer_count %>%
  filter(n >= product_threshold) %>%
  pull(product_id)

op <- order_products %>%
  select(order_id, product_id) %>%
  inner_join(select(good_orders, order_id, user_id)) %>%
  filter(product_id %in% good_products)

# as a last step, exclude all orders with a basket size of 1
op_size_one <- op %>%
  group_by(order_id) %>%
  filter(n() == 1) %>%
  ungroup() %>%
  pull(order_id)

op <- op %>% filter(!(order_id %in% op_size_one))
```

After this initial data cleaning, let’s see how many orders, users, and products we are dealing with here:

```r
length(unique(op$order_id))
## [1] 2315386
length(unique(op$user_id))
## [1] 194760
length(unique(op$product_id))
## [1] 8979
```

So after dropping some customers and orders, we are left with about 200k users who bought about 9000 different products across 2.3 million orders. That should more than suffice to compute some PMI values.

### Pointwise Mutual Information for Instacart Products

To compute the PMI value for every product, we first of all need to count how often products appear together. For the dataset at hand, the following expansion of the order_products table works fine (you should have quite some RAM though…); for every order, we join every product against every product in the order. This makes our table much longer, so depending on the average basket size and number of orders in another dataset, it might be a prohibitively expensive computation. We then immediately count how often products appear together:

```r
op_pp <- inner_join(op, op, by = c(order_id = "order_id")) %>%
  count(product_id.x, product_id.y)

dim(op_pp)
## [1] 31708163 3
```

Next, we need to count how often every product appears in orders, generally. This is used to compute the probabilities $$p(x)$$ and $$p(c)$$. We add these counts to the co-occurrence counts, and add the number of total orders in the total_n column. At this point we have all ingredients to compute the empirical probabilities and the PMIs.
```r
product_count_train <- op %>% count(product_id)

pp_common_count <- op_pp %>%
  inner_join(product_count_train, by = c(product_id.x = "product_id")) %>%
  inner_join(product_count_train, by = c(product_id.y = "product_id")) %>%
  rename(common_n = n.x, x_n = n.y, y_n = n) %>%
  # total number of orders considered
  mutate(total_n = length(unique(op$order_id)))
```

Computing the PMI is now as simple as dividing columns and taking the logarithm. We add the corresponding product names to analyze the results afterwards.

```r
pp_pmi <- pp_common_count %>%
  mutate(common_freq = log(common_n / total_n),
         x_freq = log(x_n / total_n),
         y_freq = log((y_n / total_n)),
         pmi = common_freq - x_freq - y_freq)

pp_rec <- pp_pmi %>%
  select(product_id.x, product_id.y, total_n, common_n, x_n, y_n, pmi) %>%
  left_join(select(products, product_id, product_name),
            by = c(product_id.x = "product_id")) %>%
  left_join(select(products, product_id, product_name),
            by = c(product_id.y = "product_id"))
```

### Detailed Look at Recommendations

Given that we have not exactly fitted a model here, it’s not clear how to evaluate the results. We’re not explicitly optimizing for anything, so the following evaluation will be restricted to looking at some recommendations and judging whether the recommendations "make sense".1 Given the large assortment, I had to pick some products at random to evaluate the recommendations. Also, I had to pick products that I actually know–I’m not living in the U.S., so what is Glacier Freeze Frost?

For a start, let’s act as if we are about to add Spicy Avocado Hummus to the cart. What could I buy with hummus? Apparently a lot of other hummus, yogurt, as well as crackers or chips:

```r
pp_rec %>%
  filter(product_id.x == 5973, product_id.x != product_id.y) %>%
  arrange(-pmi) %>%
  top_n(10, pmi) %>%
  select(product_name.x, product_name.y, pmi, common_n, x_n, y_n) %>%
  knitr::kable(digits = 2)
```

| product_name.x | product_name.y | pmi | common_n | x_n | y_n |
|---|---|---|---|---|---|
| Spicy Avocado Hummus | Organic Jalapeno Cilantro Hummus | 4.70 | 118 | 2541 | 982 |
| Spicy Avocado Hummus | Organic Kale Pesto Hummus | 4.47 | 97 | 2541 | 1015 |
| Spicy Avocado Hummus | Organic Thai Coconut Curry Hummus | 4.32 | 78 | 2541 | 945 |
| Spicy Avocado Hummus | Organic Sriracha Hummus | 4.31 | 144 | 2541 | 1762 |
| Spicy Avocado Hummus | Hummus, Hope, Original Recipe | 3.71 | 206 | 2541 | 4572 |
| Spicy Avocado Hummus | Total 2% Lowfat Greek Yogurt with Honey | 3.12 | 12 | 2541 | 481 |
| Spicy Avocado Hummus | Organic Jalapeno Crackers | 3.11 | 12 | 2541 | 489 |
| Spicy Avocado Hummus | Chomperz Original Crunchy Seaweed Chips | 3.09 | 17 | 2541 | 707 |
| Spicy Avocado Hummus | Teriyaki Turkey Jerky | 3.06 | 11 | 2541 | 472 |
| Spicy Avocado Hummus | Soft Toothbrush | 3.03 | 9 | 2541 | 397 |

Observe how we don’t have the Hummus, Hope, Original Recipe as the top recommended product even though the avocado hummus was bought most often with it. That is because the PMI takes into account how often the two products appear in orders independently. We see that the Hummus, Hope, Original Recipe is quite popular, which is why the 206 common orders are not as impactful as the 118 orders together with Organic Jalapeno Cilantro Hummus for the PMI. And so we want to rank the jalapeno hummus higher.

Notice also how some recommendations are based on just 9, 11, 12, or 17 common orders. If we think about how many customers we have, 12 orders can be noise. The toothbrush, for example, does not look like a good recommendation. We will address this with a smoothing method in a minute.

If we pick a different hummus, Garlic Hummus, we get very different results.
There is no other hummus recommended, and instead the recommendations focus on pita bread. But notice again how the PMI favors products with a small number of common orders. product_name.x product_name.y pmi common_n x_n y_n Garlic Hummus Whole Wheat Pita 2.76 23 6893 489 Garlic Hummus White Pita 2.75 71 6893 1518 Garlic Hummus Peanut Butter Dark Chocolate Fruit & Nut Protein Bars 2.62 15 6893 368 Garlic Hummus Gluten Free Black Bean and Quinoa Burrito 2.53 18 6893 480 Garlic Hummus Organic Spinach & Potatoes 2, 6 Months+ 2.50 18 6893 495 Garlic Hummus Turkey Meatball Bites 2.50 13 6893 358 Garlic Hummus 100% Whole Wheat Hot Dog Buns 2.46 23 6893 659 Garlic Hummus Organic White Pita Bread 2.42 37 6893 1102 Garlic Hummus Lotus Forbidden Rice Ramen 2.38 10 6893 312 Garlic Hummus Gochujang Fermented Garlic Chile Paste 2.20 7 6893 260 Similarly, here are recommendations for products that go well with Granny Smith Apples. product_name.x product_name.y pmi common_n x_n y_n Granny Smith Apples Royal Gala Apples 2.24 238 27712 2127 Granny Smith Apples Bag of Red Delicious Apples 2.00 74 27712 835 Granny Smith Apples Seedless Grapes Green 1.98 34 27712 392 Granny Smith Apples Dark Chocolate Chili Almond Nuts & Spices 1.96 34 27712 399 Granny Smith Apples Outshine Lime Fruit Bars 1.95 55 27712 651 Granny Smith Apples Golden Delicious Apple 1.95 359 27712 4258 Granny Smith Apples Braeburn Apple 1.93 126 27712 1523 Granny Smith Apples Garlic Parmesan Deli Style Pretzel Crisps 1.91 49 27712 606 Granny Smith Apples Bag of Oranges 1.89 131 27712 1657 Granny Smith Apples Whole Frozen Strawberries 1.88 35 27712 445 When you work yourself through a couple of examples, it might stand out to you that the PMI tends to favor products with a small probability, that is, rare products tend to be recommended more. This is not necessarily desired, in particular not from the standpoint of a business. ### Context Distribution Smoothing As explained in Jurafsky and Martin (2017) citing Levy at al. (2015), a simple way to address this bias is context distribution smoothing, where the context probability is raised to the power of $$\alpha$$, where $$\alpha \in (0,1)$$. Since, for example, $$0.1^{0.75} \approx 0.1778$$, doing so increases the probability of the context, and consequently decreases the PMI. While there is also an impact on events with larger probability, the effect on events with small probability can be more extreme as for example here, leading to a larger absolute discount of their PMI values: $\log(0.25) - \log(0.5) - \log(0.3) \approx 0.511$ $\log(0.01) - \log(0.5) - \log(0.01) \approx 0.693$ $\log(0.25) - \log(0.5) - \log(0.3^{0.75}) \approx 0.210$ $\log(0.01) - \log(0.5) - \log(0.01^{0.75}) \approx -0.458$ It also implies that everything that would have been perfectly independent previously does now become negatively associated: $\log(0.25) - \log(0.5) - \log(0.5) = 0$ $\log(0.25) - \log(0.5) - \log(0.5^{0.75}) \approx -0.173$ Setting this aside, the context distribution smoothing can help in many cases to make the top ranks more sensible by returning more mainstream results. 
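These toy numbers are easy to reproduce directly:

# PMI before and after smoothing the context probability with alpha = 0.75
log(0.25) - log(0.5) - log(0.3)           ## 0.511
log(0.25) - log(0.5) - 0.75 * log(0.3)    ## 0.210
log(0.01) - log(0.5) - log(0.01)          ## 0.693
log(0.01) - log(0.5) - 0.75 * log(0.01)   ## -0.458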
We can add the exponent (here 0.75) and compare the results: context_exponent <- 0.75 pp_pmi_smooth <- pp_common_count %>% # smooth using the prior mutate(common_freq = log(common_n / total_n), x_freq = log(x_n / total_n), y_freq = log((y_n / total_n)^context_exponent), pmi = common_freq - x_freq - y_freq) pp_rec_smooth <- pp_pmi_smooth %>% select(product_id.x, product_id.y, total_n, common_n, x_n, y_n, pmi) %>% left_join(select(products, product_id, product_name), by = c(product_id.x = "product_id")) %>% left_join(select(products, product_id, product_name), by = c(product_id.y = "product_id")) For the apples we can observe that the seedless grapes, the Dark Chocolate Chili Almond Nuts & Spices, as well as Outshine Lime Fruit Bars have all been replaced by more apples, and the most frequently purchased item of them all: bananas. pp_rec_smooth %>% filter(product_id.x == 9387, product_id.x != product_id.y) %>% arrange(-pmi) %>% select(product_name.x, product_name.y, pmi, common_n, x_n, y_n) %>% knitr::kable(digits = 2) product_name.x product_name.y pmi common_n x_n y_n Granny Smith Apples Royal Gala Apples 0.49 238 27712 2127 Granny Smith Apples Golden Delicious Apple 0.38 359 27712 4258 Granny Smith Apples Gala Apples 0.37 1061 27712 18335 Granny Smith Apples Banana 0.25 8919 27712 365728 Granny Smith Apples Red Delicious Apple 0.21 236 27712 3062 Granny Smith Apples Mandarins Bag 0.20 295 27712 4175 Granny Smith Apples Bosc Pear 0.15 301 27712 4578 Granny Smith Apples Braeburn Apple 0.10 126 27712 1523 Granny Smith Apples Bag of Oranges 0.08 131 27712 1657 Granny Smith Apples Organic Fuji Apple 0.08 2156 27712 69495 We see a similar effect for the garlic hummus. Compared to the previous recommendations, we also observe that more of the recommended items now have larger common_n values, i.e., by introducing the smoothing, we have implicitly ensured that the ranking relies more on common purchases. product_name.x product_name.y pmi common_n x_n y_n Garlic Hummus White Pita 0.92 71 6893 1518 Garlic Hummus Sea Salt Pita Chips 0.90 351 6893 13135 Garlic Hummus Jalapeno Hummus 0.65 157 6893 6310 Garlic Hummus Whole Wheat Pita 0.64 23 6893 489 Garlic Hummus Lemon Hummus 0.59 250 6893 12625 Garlic Hummus Organic White Pita Bread 0.51 37 6893 1102 Garlic Hummus Pita Chips Simply Naked 0.48 175 6893 9110 Garlic Hummus Organic Peeled Whole Baby Carrots 0.44 532 6893 42519 Garlic Hummus Peanut Butter Dark Chocolate Fruit & Nut Protein Bars 0.43 15 6893 368 Garlic Hummus 100% Whole Wheat Hot Dog Buns 0.42 23 6893 659 ### Why PMI and not Common Order Count? To quickly show the impact of using pointwise mutual information to rank the recommendations instead of the raw count of common orders, consider the following example. If we use the pointwise mutual information to get products that are bought together with Birthday Candles, we will get the following items as the top recommendations. 
The lighter is a natural complement, and everything else is there to prepare the cake on which the candles are placed:

product_name.x product_name.y pmi common_n x_n y_n
Birthday Candles Classic Lighters 2.69 4 208 331
Birthday Candles Super Moist Chocolate Fudge Cake Mix 2.63 4 208 360
Birthday Candles Creamy Classic Vanilla Frosting 2.55 3 208 273
Birthday Candles Rich and Creamy Milk Chocolate Frosting 2.51 3 208 285
Birthday Candles Funfetti Premium Cake Mix With Candy Bits 2.48 5 208 587

If we instead rank by the absolute count of common purchases, the recommended products would be the generally frequently purchased bananas, strawberries, etc., just as I alluded to in the beginning. This just goes to show that the raw count is not a viable way to come up with product recommendations.

product_name.x product_name.y pmi common_n x_n y_n
Birthday Candles Banana -0.78 24 208 365728
Birthday Candles Organic Strawberries -0.69 16 208 189410
Birthday Candles Large Lemon -0.74 11 208 122928
Birthday Candles Bag of Organic Bananas -1.66 8 208 274515
Birthday Candles Strawberries -1.00 8 208 113606

## Closing Thoughts

Recommendations based on pointwise mutual information alone are of course not perfect. It's easy to find cases in which seemingly random products are recommended based on a few common orders. It's difficult to filter these cases out by setting some threshold on the common orders; three common orders can produce good recommendations depending on the product (just consider the Birthday Candles example above). Moreover, since we're not training a model and optimizing a metric, there is no scalable way of evaluating the result. Without picking a few example products and comparing recommendations, it's difficult to, for example, pick the optimal smoothing exponent.

But the PMI ranking serves as an excellent baseline solution. Given that only four columns have to be counted, the above recommendations can be written in a couple of lines of SQL. It doesn't take more than a morning to go from no recommendations to a good solution. The PMI gives a lot of bang for the buck.

Not only that, but the PMI is also a natural starting point for word embedding models. As indicated in the references below, one could for example extend the ranking here to a full product-product PMI matrix. This high-dimensional matrix could then be reduced to a lower-dimensional embedding using something as simple as singular value decomposition (see Chris Moody's "Stop Using word2vec" post on the Stitchfix blog); a rough sketch of this idea follows at the end of this section. A word embedding makes it easy to train other models, for example a clustering to find groups of related products.

In any case, take a look at the recommendations from the PMI ranking. I have published an interactive Shiny app which lets you select different products to simulate what could be presented on product display pages. The context smoothing parameter is adjustable as well. Try it out here. And next time your company needs product recommendations, try this as a cheap and good baseline.
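As a rough, untested sketch of that embedding idea: keep only the positive PMI values, arrange them in a sparse product-by-product matrix, and factor it with a truncated SVD. The snippet below assumes the pp_pmi table from above and the Matrix and irlba packages.

library(Matrix)
library(irlba)

ids <- sort(unique(c(pp_pmi$product_id.x, pp_pmi$product_id.y)))
ppmi <- sparseMatrix(i = match(pp_pmi$product_id.x, ids),
                     j = match(pp_pmi$product_id.y, ids),
                     x = pmax(pp_pmi$pmi, 0),  # keep positive PMI values only
                     dims = c(length(ids), length(ids)))

# truncated SVD: 50-dimensional product vectors
dec <- irlba(ppmi, nv = 50)
product_vectors <- dec$u %*% diag(sqrt(dec$d))
rownames(product_vectors) <- ids

Nearest neighbours in this space would then give another way to find related products, and the vectors could feed a clustering as mentioned above.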
## References

The Instacart Online Grocery Shopping Dataset 2017. Accessed from https://www.instacart.com/datasets/grocery-shopping-2017 on May 2, 2018.

Daniel Jurafsky and James H. Martin. Vector Semantics. Book chapter in Speech and Language Processing. Draft of August 7, 2017.

Omer Levy and Yoav Goldberg. Neural Word Embedding as Implicit Matrix Factorization. In Advances in Neural Information Processing Systems 27 (NIPS 2014).

Omer Levy, Yoav Goldberg and Ido Dagan. Improving Distributional Similarity with Lessons Learned from Word Embeddings. Transactions of the Association for Computational Linguistics, vol. 3, pp. 211–225, 2015.

Chris Moody. Stop Using word2vec. Blog post on Stitchfix's MultiThreaded blog.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado and Jeffrey Dean. Distributed Representations of Words and Phrases and their Compositionality. In Advances in Neural Information Processing Systems 26 (NIPS 2013).

1. A first alternative would be to compare the recommendations we derive from this "training" against the product combinations that appear in a test set.
2022-05-24 21:29:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18500195443630219, "perplexity": 5906.963399880361}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662577259.70/warc/CC-MAIN-20220524203438-20220524233438-00110.warc.gz"}
http://physics.stackexchange.com/questions/83226/lagrangian-and-hamiltonian-of-interaction
# Lagrangian and hamiltonian of interaction How to prove that lagrangian of interaction is equal to hamiltonian of interaction with minus sign? For example, I can't prove it for special case - quantum electrodynamics. - start with the simplest possible case - free particle, with a harmonic oscillator potential. If you can do that, then extend to fields. –  levitopher Nov 3 '13 at 0:33 –  Qmechanic Nov 22 '13 at 12:51 Consider a general Lagrangian $$L(q,v,t)~=~L_{\rm free}+L_{\rm int}.$$ It is implicitly understood that the free part $L_{\rm free}$ is at most quadratic in position and velocity variables. (In field theory the $q$ variables are fields, and the $v$ variables are time derivatives of the fields. They may be Grassmann-odd.) Assume furthermore that the interaction term $L_{\rm int}=L_{\rm int}(q,t)$ does not depend on velocities $v$. Then one may prove that the Hamiltonian $$H(q,p,t)~=~H_{\rm free}+H_{\rm int},$$ satisfies $$H_{\rm int}=-L_{\rm int}.$$ This is most easily shown for regular Legendre transformations, but it also works quite generally for singular Legendre transformations, such as, e.g. QED. The main idea is that if $$L~=~\sum_{n=0}^{2}L_n,$$ where $L_n$ is homogeneous in velocities $v$ with weight $n$, then $$H~=~v^ip_i-L~=~\left(v^i\frac{\partial}{\partial v^i}-1\right) L =\sum_{n=0}^{2}(n-1)L_n = L_2 -L_0.$$ - Thanks. But you used definition of Hamiltonian which breaks its interpretation as full energy of the system (or, in rel. case, zero component of energy-momentum tensor). Why? –  John Taylor Nov 3 '13 at 10:22 The answer uses the standard definition of a Hamiltonian, which in many cases, such as, e.g. QED, is equal to the total energy. For discussions about Hamiltonian vs. total energy, see also e.g. this Phys.SE post and links therein. –  Qmechanic Nov 3 '13 at 10:47 What you have said is true only when the interaction part of the Lagrangian has no dependence on the derivatives of fields AND when the free part of of the Lagrangian is precisely quadratic. In such cases, $${\cal L}[\phi,\partial\phi] = \frac{1}{2} (\partial \phi)^2 - {\cal L}_{int}[\phi]$$ The canonical Hamiltonian in this case is $$\begin{split} {\cal H} &= \partial \phi \frac{\delta {\cal L}}{\delta (\partial \phi)} - {\cal L}[\phi,\partial\phi] \\ &= \partial \phi \left( \partial \phi \right) - \left[ \frac{1}{2} (\partial \phi)^2 - {\cal L}_{int}[\phi] \right] \\ &= \frac{1}{2} (\partial \phi)^2 + {\cal L}_{int}[\phi] \end{split}$$ In particular, QED does not have a Lagrangian of the type discussed above and therefore the canonical Hamiltonian cannot be obtained by replacing ${\cal L}_{int} \to - {\cal L}_{int}$. - But hamiltonian and lagrangian are really equal to each other (except the minus sign) for QED case (for Dirac spinor field, I forgot to write it). So do you know how to prove this? –  John Taylor Nov 3 '13 at 10:25 QED is an example of a constrained system. In this case, time evolution is determined by the primary Hamiltonian, not the canonical Hamiltonian which are related by $H_p = H_c + u^m \varphi_m$ where $\varphi_m$ are primary constraints and $u^m$ are some multipliers that are determined by some consistency requirements on the constraints. arxiv.org/abs/hep-th/9312078 provides a very good discussion of such systems. –  Prahar Nov 3 '13 at 14:47
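For a concrete check of the $H = L_2 - L_0$ formula in the first answer, take the simplest case suggested in the first comment: a single particle in a velocity-independent potential, $L = \frac{1}{2}m\dot{q}^2 - V(q)$. Here $$L_2 = \tfrac{1}{2}m\dot{q}^2, \qquad L_1 = 0, \qquad L_0 = -V(q),$$ so $$H = L_2 - L_0 = \tfrac{1}{2}m\dot{q}^2 + V(q).$$ With $L_{\rm int} = -V(q)$, this gives $H_{\rm int} = V(q) = -L_{\rm int}$, as claimed.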
2015-05-27 22:07:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9443145990371704, "perplexity": 364.68462857126946}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929171.55/warc/CC-MAIN-20150521113209-00239-ip-10-180-206-219.ec2.internal.warc.gz"}
https://chem.libretexts.org/Core/Physical_and_Theoretical_Chemistry/Kinetics/Review_of_Chemical_Kinetics
# Review of Chemical Kinetics

A reaction's equilibrium position defines the extent to which the reaction can occur. For example, we expect a reaction with a large equilibrium constant, such as the dissociation of HCl in water

$\ce{HCl}(aq) + \ce{H2O}(l) ⇋ \ce{H3O+}(aq) + \ce{Cl-}(aq)$

to proceed nearly to completion. A large equilibrium constant, however, does not guarantee that a reaction will reach its equilibrium position. Many reactions with large equilibrium constants, such as the reduction of $\ce{MnO4-}$ by $\ce{H2O}$

$\ce{4MnO4-}(aq) + \ce{2H2O}(l) ⇋ \ce{4MnO2}(s) + \ce{3O2}(g) + \ce{4OH-}(aq)$

do not occur to an appreciable extent. The study of the rate at which a chemical reaction approaches its equilibrium position is called kinetics.

#### A17.1 Chemical Reaction Rates

A study of a reaction's kinetics begins with the measurement of its reaction rate. Consider, for example, the general reaction shown below, involving the aqueous solutes A, B, C, and D, with stoichiometries of a, b, c, and d.

$a\ce{A} + b\ce{B} ⇋ c\ce{C} + d\ce{D}\tag{A17.1}$

The rate, or velocity, at which this reaction approaches its equilibrium position is determined by following the change in concentration of one reactant or one product as a function of time. For example, if we monitor the concentration of reactant A, we express the rate as

$R = -\dfrac{d[\ce A]}{dt}\tag{A17.2}$

where R is the measured rate expressed as a change in concentration of A as a function of time. Because a reactant's concentration decreases with time, we include a negative sign so that the rate has a positive value. We also can determine the rate by following the change in concentration of a product as a function of time, which we express as

$R´ = \dfrac{d[\ce C]}{dt}\tag{A17.3}$

Because a product's concentration increases with time, no negative sign is needed here. Rates determined by monitoring different species do not necessarily have the same value. The rate R in equation A17.2 and the rate R´ in equation A17.3 have the same value only if the stoichiometric coefficients of A and C in reaction A17.1 are identical. In general, the relationship between the rates R and R´ is

$R = \dfrac{a}{c} × R´$

#### A17.2 The Rate Law

A rate law describes how a reaction's rate is affected by the concentration of each species in the reaction mixture. The rate law for reaction A17.1 takes the general form of

$R = k\mathrm{[A]^α[B]^β[C]^γ[D]^δ[E]^ε...}\tag{A17.4}$

where k is the rate constant, and α, β, γ, δ, and ε are the reaction orders for each species present in the reaction. There are several important points about the rate law in equation A17.4. First, a reaction's rate may depend on the concentrations of both reactants and products, as well as the concentration of a species that does not appear in the reaction's overall stoichiometry. Species E in equation A17.4, for example, may be a catalyst that does not appear in the reaction's overall stoichiometry, but which increases the reaction's rate. Second, the reaction order for a given species is not necessarily the same as its stoichiometry in the chemical reaction. Reaction orders may be positive, negative, or zero, and may take integer or non-integer values. Finally, the reaction's overall reaction order is the sum of the individual reaction orders for each species. Thus, the overall reaction order for equation A17.4 is α + β + γ + δ + ε.

#### A17.3 Kinetic Analysis of Selected Reactions

In this section we review the application of kinetics to several simple chemical reactions, focusing on how we can use the integrated form of the rate law to determine reaction orders.
In addition, we consider how we can determine the rate law for a more complex system.

##### First-Order Reactions

The simplest case we can treat is a first-order reaction in which the reaction's rate depends on the concentration of only one species. The best example of a first-order reaction is an irreversible thermal decomposition of a single reactant, which we represent as

$\mathrm{A → Products}\tag{A17.5}$

with a rate law of

$R = -\dfrac{d[\ce A]}{dt} = k[\ce A]\tag{A17.6}$

The simplest way to demonstrate that a reaction is first-order in A is to double the concentration of A and note the effect on the reaction's rate. If the observed rate doubles, then the reaction must be first-order in A. Alternatively, we can derive a relationship between the concentration of A and time by rearranging equation A17.6 and integrating.

$\dfrac{d[\ce A]}{[\ce A]} = -k\, dt$

$\int_{[\ce A]_0}^{[\ce A]_t} \dfrac{d[\ce A]}{[\ce A]} = -k \int_{0}^{t}dt\tag{A17.7}$

Evaluating the integrals in equation A17.7 and rearranging

$\ln\dfrac{[\ce A]_t}{[\ce A]_0}= -kt\tag{A17.8}$

$\ln[\ce A]_t = -kt+ \ln[\ce A]_0\tag{A17.9}$

shows that for a first-order reaction, a plot of ln[A]t versus time is linear with a slope of –k and a y-intercept of ln[A]0. Equation A17.8 and equation A17.9 are known as integrated forms of the rate law. Reaction A17.5 is not the only possible form of a first-order reaction. For example, the reaction

$\mathrm{A + B → Products}\tag{A17.10}$

will follow first-order kinetics if the reaction is first-order in A and if the concentration of B does not affect the reaction's rate. This may happen if the reaction's mechanism involves at least two steps. Imagine that in the first step, A slowly converts to an intermediate species, C, which rapidly reacts with the remaining reactant, B, in one or more steps, to form the products.

$\mathrm{A → C \hspace{20px}(slow)}$

$\mathrm{C + B → Products \hspace{20px} (fast)}$

Because a reaction's rate depends only on those species in the slowest step—usually called the rate-determining step—and any preceding steps, species B will not appear in the rate law.

##### Second-Order Reactions

The simplest reaction demonstrating second-order behavior is

$\mathrm{2A → Products}$

for which the rate law is

$R = -\dfrac{d[\ce A]}{dt}= k[\ce A]^2$

Proceeding as we did earlier for a first-order reaction, we can easily derive the integrated form of the rate law.

$\dfrac{d[\ce A]}{[\ce A]^2}= -k\, dt$

$\int_{[\ce A]_0}^{[\ce A]_t} \dfrac{d[\ce A]}{[\ce A]^2} = -k \int_{0}^{t}dt$

$\dfrac{1}{[\ce A]_t} = kt + \dfrac{1}{[\ce A]_0}$

For a second-order reaction, therefore, a plot of 1/[A]t versus t is linear with a slope of k and a y-intercept of 1/[A]0. Alternatively, we can show that a reaction is second-order in A by observing the effect on the rate when we change the concentration of A. In this case, doubling the concentration of A produces a four-fold increase in the reaction's rate.

Example A17.1

The following data were obtained during a kinetic study of the hydration of p-methoxyphenylacetylene by measuring the relative amounts of reactants and products by NMR.1

time (min)   % p-methoxyphenylacetylene
67   85.9
161   70.0
241   57.6
381   40.7
479   32.4
545   27.7
604   24

Determine whether this reaction is first-order or second-order in p-methoxyphenylacetylene.

Solution

To determine the reaction's order we plot ln(% p-methoxyphenylacetylene) versus time for a first-order reaction, and 1/(% p-methoxyphenylacetylene) versus time for a second-order reaction (see Figure A17.1).
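One way to produce the two plots, for example with a few lines of R (data taken from the table above):

time <- c(67, 161, 241, 381, 479, 545, 604)
pct  <- c(85.9, 70.0, 57.6, 40.7, 32.4, 27.7, 24)

# first-order test: ln(amount) versus time should be linear
first_order  <- lm(log(pct) ~ time)
# second-order test: 1/amount versus time should be linear
second_order <- lm(I(1 / pct) ~ time)

par(mfrow = c(1, 2))
plot(time, log(pct)); abline(first_order)
plot(time, 1 / pct);  abline(second_order)

coef(first_order)["time"]  # slope; equals -k if the reaction is first-order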
Because a straight-line for the first-order plot fits the data nicely, we conclude that the reaction is first-order in p-methoxyphenylacetylene. Note that when we plot the data using the equation for a second-order reaction, the data show curvature that does not fit the straight-line model. Figure A17.1 Integrated rate law plots for the data in Example A17.1 assuming (a) first-order kinetics and (b) second-order kinetics. ##### Pseudo-Order Reactions and the Method of Initial Rates Unfortunately, most reactions of importance in analytical chemistry do not follow the simple first-order or second-order rate laws discussed above. We are more likely to encounter the second-order rate law given in equation A17.11 than that in equation A17.10. $R = k\mathrm{[A][B]}\tag{A17.11}$ Demonstrating that a reaction obeys the rate law in equation A17.11 is complicated by the lack of a simple integrated form of the rate law. Often we can simplify the kinetics by carrying out the analysis under conditions where the concentrations of all species but one are so large that their concentrations remain effectively constant during the reaction. For example, if the concentration of B is selected such that [B] >> [A], then equation A17.11 simplifies to $R = k´[\ce A]$ where the rate constant k´ is equal to k[B]. Under these conditions, the reaction appears to follow first-order kinetics in A and, for this reason we identify the reaction as pseudo-first-order in A. We can verify the reaction order for A using either the integrated rate law or by observing the effect on the reaction’s rate of changing the concentration of A. To find the reaction order for B, we repeat the process under conditions where [A] >> [B]. A variation on the use of pseudo-ordered reactions is the initial rate method. In this approach we run a series of experiments in which we change one at a time the concentration of those species expected to affect the reaction’s rate and measure the resulting initial rate. Comparing the reaction’s initial rate for two experiments in which only the concentration of one species is different allows us to determine the reaction order for that species. The application of this method is outlined in the following example. Example A17.2 The following data was collected during a kinetic study of the iodation of acetone by measuring the concentration of unreacted I2 in solution.2 experiment number [C3H6O] (M) [H3O+] (M) [I2] (M) Rate (M s–1) 1 1.33 0.0404 6.65×10–3 1.78×10–6 2 1.33 0.0809 6.65×10–3 3.89×10–6 3 1.33 0.162 6.65×10–3 8.11×10–6 4 1.33 0.323 6.65×10–3 1.66×10–5 5 0.167 0.323 6.65×10–3 1.64×10–6 6 0.333 0.323 6.65×10–3 3.76×10–6 7 0.667 0.323 6.65×10–3 7.55×10–6 8 0.333 0.323 3.32×10–3 3.57×10–6 Solution The order of the rate law with respect to the three reactants is determined by comparing the rates of two experiments in which there is a change in concentration for only one of the reactants. For example, in experiment 2 the [H3O+] and the rate are approximately twice as large as in experiment 1, indicating that the reaction is first-order in [H3O+]. Working in the same manner, experiments 6 and 7 show that the reaction is also first order with respect to [C3H6O], and experiments 6 and 8 show that the rate of the reaction is independent of the [I2]. Thus, the rate law is $R = k\ce{[C3H6O][H3O+]}$ To determine the value of the rate constant, we substitute the rate, the [C3H6O], and the [H3O+] for each experiment into the rate law and solve for k. 
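For example, carrying out this calculation for all eight experiments in R:

rate    <- c(1.78e-6, 3.89e-6, 8.11e-6, 1.66e-5, 1.64e-6, 3.76e-6, 7.55e-6, 3.57e-6)
acetone <- c(1.33, 1.33, 1.33, 1.33, 0.167, 0.333, 0.667, 0.333)
h3o     <- c(0.0404, 0.0809, 0.162, 0.323, 0.323, 0.323, 0.323, 0.323)

k <- rate / (acetone * h3o)  # one rate constant per experiment
mean(k)                      # about 3.49e-5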
Using the data from experiment 1, for example, gives a rate constant of 3.31×10–5 M–1 s–1. The average rate constant for the eight experiments is 3.49×10–5 M–1 s–1.

### References

1. Kaufman, D.; Sterner, C.; Masek, B.; Svenningsen, R.; Samuelson, G. J. Chem. Educ. 1982, 59, 885–886.
2. Birk, J. P.; Walters, D. L. J. Chem. Educ. 1992, 69, 585–587.
2017-03-29 01:27:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8356167078018188, "perplexity": 904.6807284401647}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190134.67/warc/CC-MAIN-20170322212950-00180-ip-10-233-31-227.ec2.internal.warc.gz"}
https://academic.oup.com/mbe/article-lookup/doi/10.1093/molbev/msg215
## Abstract The estimation of evolutionary rates from serially sampled sequences has recently been the focus of several studies. In this paper, we extend these analyzes to allow the estimation of a joint rate of substitution, ω, from several evolving populations from which serial samples are drawn. In the case of viruses evolving in different hosts, therapy may halt replication and therefore the accumulation of substitutions in the population. In such cases, it may be that only a proportion, p, of subjects are nonresponders who have viral populations that continue to evolve. We develop two likelihood-based procedures to jointly estimate p and ω, and empirical Bayes' tests of whether an individual should be classified as a responder or nonresponder. An example data set comprising HIV-1 partial envelope sequences from six patients on highly active antiretroviral therapy is analyzed. ## Introduction Recently, there has been an increased interest in the analysis of serial nucleotide sequence samples that are gathered from the same population, each sample obtained at a different time. This includes samples from rapidly evolving viral populations such as HIV and Porcine Reproductive and Respiratory Syndrome Virus (PRRSV) (Leitner and Albert 1999; Forsberg et al. 2001) and ancient DNA samples obtained from preserved or fossilized tissue (Leonard, Wayne, and Cooper 2000; Barnes et al. 2002; Lambert et al. 2002). Several methods have been developed to estimate the value of some time-dependent evolutionary parameter when serially sampled sequences are available. Rodrigo et al. (1999) and, later, Fu (2001) developed methods to estimate the generation time of a population. Maximum-likelihood and least-squares estimators of single or multiple substitution rates have also been developed (Drummond and Rodrigo 2000; Rambaut 2000; Drummond, Forsberg, and Rodrigo 2001). Drummond and Rodrigo (2000) also described a method to reconstruct serial genealogies using serial sample UPGMA (sUPGMA). Most recently, Seo et al. (2002a) have explored optimal experimental designs for serial sampling, when the aim is to estimate substitution rate and/or divergence times. In addition, Seo et al. (2002b) and Drummond et al. (2002) have described more sophisticated methods for the estimation of substitution rates and effective population size. The estimation methods developed to date use only sequences sampled serially from a single population. However, certainly with viruses, it is quite common to sample viral sequences from several different hosts, and within each host, at different timepoints (e.g., Gunthard et al. 1999; Holmes et al. 1992; Rodrigo et al. 1999; Shankarappa et al. 1999). If we assume, as is frequently done for viruses such as HIV-1, that there is little likelihood of multiple transmission events, then viruses in each host are part of an isolated and unique population, evolving independently from a single founding variant. In this paper, we describe two likelihood-based methods for jointly estimating a substitution rate using serially sampled sequences, when these are obtained from different populations. These methods are analogous to those developed by Gu (2001) for the analysis of functional divergence in protein families, and we use Gu's terminology in this paper. In the first of these procedures, the subtree of sequences from each population is treated as an unrelated phylogeny. 
This “subtree likelihood” (STL) approach uses the likelihoods of the subtrees as independent contributors to the total likelihood of all samples. In an alternative approach, a phylogeny of all sequences is constructed and the “whole-tree likelihood” (WTL) is then used as a basis for estimation. The joint estimation of substitution rate is a reasonably simple extension to work previously done (Rambaut 2000) under both approaches. There is, however, an interesting problem that provides a more challenging application of the STL or WTL procedure. Gunthard et al. (1999) described a study in which HIV-1 partial envelope (env) gene sequences were obtained from individuals just before and 2 years after the commencement of combination antiretroviral therapy. The aim of the study was to determine if antiretroviral therapy effectively controlled viral replication, as had previously been suggested by several workers (Finzi et al. 1997; Wong et al. 1997). In the event that the patient responds to therapy and viral replication is halted, there would be no measurable (or statistically significant) accumulation of substitutions in env sequences sampled before and after therapy. In a study such as this, the aim is to quantify and test whether the virus population continues to evolve within a host over the period of the study. Gunthard et al. (1999) analyzed each patient separately, but such an analysis runs the risk of inflating the probability of a type I error. Here, we apply the STL and WTL procedures to provide joint estimators of both the proportion of individuals who do not respond to therapy (i.e., whose viral population continues to replicate and accumulate substitutions) and the rate of ongoing viral substitution in these patients. Finally, we show how each patient may be assigned to the class of nonresponders or responders using empirical Bayes' classifiers. ## Methodology Consider the case of sequences sampled serially from a single population for which there is exact information on sampling times and a known phylogeny. In a model where it is assumed that there is a uniform rate of substitution (the single-rate with dated tips [SRDT] model, Rambaut 2000), total branch lengths from the root of the tree to the tips are no longer required to be equal. Instead, branch lengths are determined by the number of sampling intervals the branches traverse and the substitution rate (fig. 1). The parameters of the tree are the substitution rate, ω, the vector of times, τ, corresponding to the dated tips and the (n − 1 for a bifurcating tree) internal node heights (h) measured in units of substitutions (following Rambaut 2000; note that the tip times may be measured either in generations or in some calendar unit, and a simple rescaling allows one to move between the two). As described previously by Drummond, Forsberg, and Rodrigo (2001), ω is only estimated within the interval bounded by the first and last samples. Specifically, no assumptions are made with regard to the rate between the earliest sampling time and the root of the tree. This is because the branch lengths, l, between the root and the earliest sampling time can only be optimized jointly as li = ωti. Setting ω = 0 is equivalent to terminating all tips an equal distance from the root and assuming that all sequences in the sample are contemporary, as is done under a standard molecular clock model. 
For a given phylogeny, T, for which only the topology is known, we may estimate the joint likelihood of ω and H, the vector of internal node heights on T, as the conditional probability of obtaining the sequence data, S, given ω, T, H, and τ, the vector of sampling times, as well as the instantaneous substitution rate matrix, M (also assumed to be known): Since T, τ, and M are fixed, we will write L(ω, H) = Prob(S | ω, H) without loss of generality. This likelihood is calculated in the standard manner (Felsenstein 1981; Goldman 1990; Rodriguez et al. 1990) for phylogenetic trees; the addition of ω and τ enters the calculations as constraints on the branch tip positions (fig. 1). The MLEs of the rate, $${\hat{{\omega}}}$$ , and elements of the vector of node heights, Ĥ, are constrained to be greater than or equal to zero and are chosen such that L( $${\hat{{\omega}}}$$ , Ĥ) is maximized. It is worth noting, at this point, that since we are only interested in ω, H is a nuisance parameter, and we estimate it only because it is necessary to do so. (Note: Although we are assuming M to be fixed, it is also possible to estimate M). ### Using Subtree Likelihoods (STLs) We wish to extend this model to the case where there are n serially sampled data sets, S1,…,Sn, each from a different population. Associated with each is a fixed tree, Ti, possibly a different model of evolution, Mi, and a different set of sampling times, τi. When the aim is to estimate a common substitution rate, the likelihood function can be written as As above, we will write L(ω, H1, ... , Hn) = Prob(S1, ... ,Sn | ω, H1, ... , Hn) since T1, ... ,Tn, τ1, ... ,τn and M1, ... , Mn are fixed. Here, we assume that S1, ... , Sn are drawn from independent populations, so that for a sample of aligned sequences, Si, from the ith population, That is, sequences obtained from the ith population do not depend on sequences obtained from any other population. For this assumption to be true, the sequences from other populations must not have any influence on the prior probabilities of obtaining the sequences in the ith population. If this condition is met, then any estimate of ω that is derived using a set of sequences, Si, is totally uninfluenced by estimates of ω obtained with other samples. Such a situation would apply if the subtrees are connected by very long branches on the joint phylogeny or by very short branches with equal prior probabilities at the roots of all subtrees. As Gu (2001) points out, this approach is computationally tractable and, as we will show, appears to give very similar results to the WTL approach. Given equation 3, it follows that For the ith population, Si depends on the evolutionary history of that population only and consequently on the node heights, Hi, associated with topology, Ti. Therefore, equation 4 can be rewritten as where Li(ω, Hi) is the likelihood defined in equation 1 of ω and H for the ith population. The joint MLE of ω and Hi, ... , Hn is chosen to maximize equation 4. Asymptotic (1 − α)% profile confidence limits of ω can be derived by locating upper and lower values, ω*, such that where $${\hat{{\omega}}}$$ and Ĥ1, ... , Ĥn are the MLEs of ω and H1, ... , Hn, respectively, and $$\mathbf{H}{^\prime}_{1}$$ , ... , $$\mathbf{H}{^\prime}_{\mathit{n}}$$ are the MLEs of H1, ... , Hn when ω = ω*. For 95% confidence limits, $${\chi}^{2}_{1,0.05}$$ /2 = 1.92 (Rambaut 2000; Drummond, Forsberg, and Rodrigo 2001; Ota et al. 2001). 
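As a rough sketch of how such profile limits can be located numerically — assuming a helper profile_loglik(omega) that returns the log-likelihood maximized over the node heights for a fixed value of omega, and the MLE omega_hat (both placeholders, not functions defined here) — one could write, for example in R:

cutoff <- qchisq(0.95, df = 1) / 2   # = 1.92, the chi-square threshold quoted above
ll_max <- profile_loglik(omega_hat)  # log-likelihood at the MLE
drop_below_cutoff <- function(omega) ll_max - profile_loglik(omega) - cutoff
# the bracketing intervals assume the log-likelihood falls below the cutoff
# at omega = 0 and at 10 * omega_hat
lower <- uniroot(drop_below_cutoff, c(0, omega_hat))$root
upper <- uniroot(drop_below_cutoff, c(omega_hat, 10 * omega_hat))$root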
How do we modify equation 5 to allow groups of subpopulations to have different values of ω? As discussed above, we want to extend equation 4 to allow for the possibility that there is no measurable accumulation of substitutions in some populations so that for these, ω = 0. The following description applies specifically to this case, but it is general enough to be applied to other values of ω as well. More importantly, whereas we focus exclusively on two rate categories (i.e., ω > 0 and ω = 0), these methods can also be generalized to data with more than two rate categories. We define a Bernoulli random variable, R, where R = 0 classifies a population for which ω = 0, and R = 1, a population where ω > 0. Let R = (R1, ... ,Rn) represent the vector of population states. We can define the joint likelihood of R and a common positive-valued ω (for those populations for which R = 1) as The condition of independence given in equation 3 needs to be extended as follows: This means that the evolution of sequences sampled from any given population depends on the status of that population only and not on that of any other population. Therefore, Li(R, ω, Hi) is either the likelihood of the tree (topology, Ti, and node heights, Hi) with all terminal tips equidistant from the root (when Ri = 0; ω is included for completeness but does not feature in the likelihood calculations) or the likelihood of Ti with tips terminating according to the sampling times and substitution rate, ω, (when Ri = 1). This is equivalent to finding the particular configuration of population states, R1Rn, and the value of ω associated with those populations for which R = 1, such that L(R, ω) is maximized. The value of this approach is that it identifies the populations that show an accumulation of substitutions and those that do not. However, frequently what is required is an estimate of the proportion of populations that are classified as either R = 0 or R = 1. Of course, this can be estimated simply from R after maximizing equation 7. Ideally, however, if the intention is to obtain an MLE of this proportion, the likelihood function needs to be recast. Let the probabilities associated with R = 0 and R = 1 be (1 − p) and p respectively. MLEs of p and a positive-valued ω can be obtained jointly by maximizing the following likelihood function: where L(R = 0, ω, Hi0) is the ith likelihood of Ti with node heights, Hi0, optimized under a standard molecular clock, and L(R = 1, ω, Hi1) is a dated-tips tree with optimized node heights, Hi1, and ω common to all populations. Asymptotic bivariate (1 − α)%-profile confidence envelopes may be obtained by locating pairs of (ω*, p*) such that ln L( $${\hat{{\omega}}}$$ , , Ĥi, ... , Ĥn) − ln L(ω*, p*, $$\mathbf{H}{^\prime}_{1}$$ , ... , $$\mathbf{H}{^\prime}_{\mathit{n}}$$ ) = $${\chi}^{2}_{2,{\alpha}}$$ /2; here, $$\mathbf{H}{^\prime}_{1}$$ , ... , $$\mathbf{H}{^\prime}_{\mathit{n}}$$ are the node heights that give the highest likelihood for ω* and p*. Alternatively, a profile confidence likelihood interval may be obtained for each parameter (either ω or p). For p, for instance, locate upper and lower values, p*, such that The same procedure can be used to find upper and lower confidence values for ω. Joint estimation of p and ω does not specifically identify which populations are classified as R = 1 or R = 0. 
It is however possible to use an empirical Bayes' procedure to classify the ith population according to their relative posterior probabilities after fixing ω = $${\hat{{\omega}}}$$ , p = , and Hi to Ĥi0 or Ĥi1 depending on whether Ri = 0 or R = 1, respectively. To implement this, the ratio is calculated. If Λi > 1, then it is more probable that the ith population has a nonzero rate of substitution over the period of sampling. It is worthwhile noting that identifying the precise configuration of R = (R1,…Rn), as we do in equation 8, and deriving by calculating a posteriori the proportion of subpopulations classified as Ri = 1 (or 0) may lead to an inconsistent estimate of p, (i.e., there exists a small ϵ such that Prob(|p| < ϵ) → 0, as n → ∞). This is because as n increases, the probability of incorrectly classifying subpopulations increases; this affects our estimate of p when it is calculated a posteriori. For this reason, it may be more defensible theoretically to estimate p directly. For k (k > 2) categories of rates, there will be a vector p = {p1, … , pk} where pi corresponds to the proportion of subpopulations in rate category i. Equation 11 will be inapplicable in such a case. Nonetheless, for each subpopulation, it is easy enough to calculate the posterior probability of each rate category (for two categories, these correspond to the numerator and denominator of equation 11). These posterior probabilities can be thought of as “classification probabilities;” a subpopulation is assigned to the category with the highest classification probability. ### Using Whole-Tree Likelihoods (WTL) An alternative to the STL methods described above is to build and use a tree that represents the joint phylogeny of all sequences sampled from all populations. If the real sampling times from different populations are known, it is possible to build a serial phylogenetic tree of the entire set of sequences. In this circumstance, the complete phylogeny can be used to estimate a single mutation rate under the SRDT model. This would be analogous to what Gu (2001) refers to as the “whole-tree likelihood” approach. However, even if these times are available, it is still not obvious that a single tree and the SRDT analysis would be the appropriate approach, because it assumes that the rates of substitution of the virus between individuals are the same as those within individuals. Of course, this may not be true, since the accumulation of substitutions between individuals is subject to the evolutionary dynamics operating as a result of transmission from host to host. An alternative is to construct a partially-constrained serial phylogenetic tree that allows the sequences within each population to evolve according to the SRDT model but also allows the lengths of branches connecting the subtrees of sequences from the different populations to vary freely (fig. 1B). In this case, the likelihood of the partially-constrained serial tree, T, with a single model of evolution, M, and node heights, HT, some of which are free to vary, is In this case, there is a single rate, ω, estimated for all populations regardless of the sampling intervals. It is perhaps more interesting to modify equation 12 to allow the populations to have different rates (e.g., some populations to have rate ω > 0 and others have rate ω = 0). In principle, this is straightforward: we need only constrain the node heights of the respective samples on a tree appropriate to their assigned rates (fig. 1C). 
If we are interested in estimating the numbers of individuals with rates ω > 0 or ω = 0, it is possible to choose a partially constrained tree with the particular assignment of samples to each rate group that has the highest likelihood. So, by cycling through all 2n possible combinations of rate assignments, we are able to identify the ML combination. The approach above is equivalent to that applied using the subtree likelihood method, in which a particular combination of population states R1Rn (and the value of ω associated with those populations for which R = 1) that maximizes L(R, ω) is found. As with that approach, the disadvantage is that we do not estimate a proportion, p, of the number of populations that have state R = 1. It is possible, albeit tedious, to estimate both p and ω using the WTL approach. Let Ri = (Ri1, … Rin) represent the ith combination of states assigned to the n samples, i = 1, ... , 2n. Let ki be the number of samples assigned state R = 1 and (nki) the number assigned state R = 0 in Ri. The joint likelihood for ω and p is and is analogous to an expansion of equation 9. Obtaining equation 13 actually involves cycling through 2n possible instances of the fixed topology, each of which has a different configuration of subclades assigned to the two rate categories. Finally, we are interested in assigning subpopulations to different rate categories. As with the STL approach, we do this using an empirical Bayes' classifier. For the jth subpopulation, j = 1, ... , n, with rate assignment RijRi in the ith combination of rate assignments, where Ĥ $$_{\mathbf{T}(\mathit{R_{ij}}{=}1)}$$ and Ĥ $$_{\mathbf{T}(\mathit{R_{ij}}{=}0)}$$ indicate node heights of ML topologies for which the sequences associated with the jth subpopulation when it has rate assignments 1 and 0, respectively. The terms Prob(Φ | Rij = 1, ω̂, , Ĥ $$_{\mathbf{T}(\mathit{R_{ij}}{=}1)})$$ and Prob(Rij = 0 | Φ, ω̂, , Ĥ $$_{\mathbf{T}(\mathit{R_{ij}}{=}0)})$$ are, in fact, the likelihoods of the jth subpopulation having rate assignments 1 and 0, respectively, after fixing ω and p to their ML estimates. There are 2n−1 combinations in which the jth subpopulation has rate assignment 1 and the same number of combinations in which it has rate assignment 0. Therefore, The multiplier Rij/ needs a little explanation. The numerator, Rij, ensures that the only terms that are used are those that correspond to combinations of Ri for which Rij = 1. The denominator, , corrects the product $$^{\mathit{k_{i}}}$$ (1 − ) $$^{(\mathit{n}{-}\mathit{k_{i}})}$$ because we are fixing the jth subpopulation to have rate Rij = 1. Similarly Therefore, substituting equation 15 and equation 16 into equation 14, As with the STL approach, if Λj > 1 then the jth population is classified in rate category 1, that is, with rate ω = $${\hat{{\omega}}}$$ > 0. ### Example: Estimating the Proportion of Individuals Responding to Antiretroviral Therapy Gunthard et al. (1999) studied the evolution of partial (regions C2–C3) HIV-1 env sequences over 2 years of combination antiretroviral therapy in six individuals. Viral RNA sequences were obtained just before therapy began (“early” sequences) and cell-associated viral DNA sequences were obtained 2 years later (“late” sequences). As mentioned previously, if therapy is successful at halting viral replication, there is no opportunity for the virus to accumulate mutations, since this only happens when viral RNA is reverse transcribed to cDNA after infection of host cells. 
Therefore, one expects that successful therapy will leave behind a population of viral “fossils” embedded in the genomes of host cells infected before therapy began. This means that when therapy begins, the mutation rate, ω, becomes zero. Note that setting ω = 0 only makes sense if serial sequence samples are available. In the absence of serially sampled data, ω = 0 implies that there can be no differences between sequences, but with serial samples, ω = 0 only implies that over some period between sampling events, there was no accumulation of substitutions. Gunthard et al. (1999) reconstructed the phylogenies of each set of sequences from each subject. They then measured the evolutionary distance of each sequence to the root of each tree and compared the distances of early and late sequences using a nonparametric test. There are obvious problems with this approach, principally the genealogical dependence of evolutionary distances. In effect, this analysis assumes that each sequence terminates a lineage that evolved independently from the most recent common ancestor. Here, we reanalyze the data obtained by Gunthard et al. (1999). We used PAUP* (Swofford 1999) to construct individual maximum-likelihood phylogenies for sequences from each subject with a common GTR model of evolution that we had previously estimated for all subjects simultaneously. We applied the analyses described above for both the STL-based and WTL-based approaches. For the WTL analyses, an unrooted tree of the entire data set was used. As expected, sequences from each subject clustered together. For the STL analyses, the phylogeny of each set of sequences was rooted using sequences from other subjects as outgroups. Once the trees were rooted, the outgroup sequences were pruned from the trees so that the rooted topology contained only the sequences for that subject. ### STL Analyses First, for each subject, we derived a nonnegative MLE of ω by setting the times between early and late sequences at 2 years, using the estimated phylogeny and common GTR model of evolution. Interestingly, the MLEs of ω for four of the six subjects were, in fact, 0. The MLEs of ω for sequences obtained from Patient M was 1.8% per year, and that of Patient C was 0.3% per year. Using the asymptotic likelihood ratio test (LRT) described by Drummond, Forsberg, and Rodrigo (2001), we found that ω was statistically different from 0 only for Patient M (P < 0.01 [table 1]). This result is interesting because we were only able to find evidence for the continued accumulation of substitutions in one subject. In contrast, using the nonparametric approach and treating the distance-to-root of each sequence as an independent measurement, Gunthard et al. (1999) found that the viral populations in three subjects continued to evolve. This is not surprising, because the assumption that each sequence in a given sample sits at the tip of an independently evolving lineage falsely inflates both the degrees of freedom of a test (broadly defined, the apparent number of replicates) and our confidence that any estimated difference is statistically significant. Next, we searched for the combination of six population states, R = (R1, ... ,R6), representing sequence sets with statistically detectable increases in substitutions between sampling times (R = 1) and those without (R = 0). As described in equation 8, for any configuration in which R = 1 was assigned to more than one set of sequences, a common ω was estimated. 
The configuration that had the highest log-likelihood (−4,391.35) of all 64 possible configurations of R was one in which only Patient M had a non-0 ω (1.8% per year). This, of course, agrees with the result obtained above: only sequences from Patient M contained sufficient signal to detect a statistically significant non-0 substitution rate between sampling times. At this point, it is worth noting that the log-likelihood of one other configuration was only very slightly different from that of the ML configuration. The configuration in which Patients K and M have states R = 1, and a common value of ω = 0.017 (1.7% per year) has a log-likelihood of −4,391.37. At first glance, this is a curious result—an examination of table 1 indicates that for Patient K, the MLE of ω = 0. Why then should Patient K be assigned a state that signals a detectable accumulation of substitutions? The reason becomes obvious when we examine the topology of the sequences for Patient K (fig. 2). The tree is reciprocally monophyletic, with early sequences (labeled with the prefix “KV”) and late sequences (“KP”) clustering on different clades. This means that simply by moving the position of the root on the branch connecting the two clades, it is possible to get estimates of ω that range from 0 to some positive value without changing the log-likelihood. Finally, we estimated the proportion of responders and a common ω by maximizing the likelihood given in equation 9. This was done using a grid search with values of p between 0 and 1 at an interval of 0.01 and values of ω between 0 and 0.1 and with an interval of 0.001. The resulting surface plot of log-likelihoods is given in figure 3. The joint ML estimates of ω and p are 0.017 and 0.20, respectively. The value of ω agrees with estimates obtained above, and the estimate of p is also consistent with those results. Only one of six subjects (Patient M) had a rate that was statistically non-0, and if we discount Patient K because the sequence data is uninformative about rates, then we are left with only one in five patients (or 20%) showing statistical evidence of continued evolution and, by implication, nonresponsiveness to therapy. The bivariate 95% profile confidence envelope corresponds to the contour that represents ln L = −4,396.9, and the hatched bars on the horizontal and vertical axes represent the 95% profile confidence intervals for p and ω, respectively. Finally, using the ML values of ω and p to classify the status of each subject with an empirical Bayes' procedure (table 1), we found that, as expected, only Patient M had a value of Λ that was greater than 1, signifying a non-0 rate of evolution. It is worth noting that for Patient K, the likelihoods associated with ω = 0 and ω = 0.017 are effectively identical; consequently, the value of Λ is determined solely by the ratio of p to (1 − p), since these are the probabilities of belonging to the two rate categories. ### WTL Analyses When a partially constrained tree was fitted to the data and a single rate allowed for all subpopulations on the tree (see equation 12), the ML estimate of ω was, in fact, 0 (ln L = −2,717.5). However, if some populations were allowed to have a common ω > 0 and others, ω = 0, the combination of rate categories that had the highest likelihood (ln L = −2,709.7) was one in which only Patient M had a non-0 substitution rate, ω = 0.017 or 1.7% per year (fig. 4). This result is identical to that obtained above using STLs. 
Interestingly, the combination in which Patients K and M both have non-0 rates (ω = 0.012) has a log-likelihood that is very close (ln L = −2,711.2). This also agrees with the results obtained with the STL-based analysis. We next used the WTL analysis to jointly obtain the ML estimates of ω and p. We did this using a grid search over ω and p with the same dimensions as done for the equivalent STL analysis. The contour plot of the log-likelihoods is shown in figure 5. The ML estimates of ω and p are 0.0175 and 0.17, respectively. Once again, these results are almost identical to those obtained using the equivalent STL analysis. Finally, we applied the empirical Bayes' classifier given in equation 17 to each of the subpopulations and obtained the results shown in table 1. These results confirm our previous analyzes in showing that only Patient M can be classified as a nonresponder. The values of Λ are not markedly different from that obtained using the STL classifier. ## Discussion In this paper, we describe methods to estimate substitution rates of homologous genes from several independently sampled populations. We make one principal assumption in constructing these methods: a single substitution rate applies to all sequences and all populations or, as in the case of subjects undergoing antiretroviral therapy, to those populations that continue to accumulate substitutions. Departures from this assumption can, nonetheless, be accommodated in the same framework. For instance, it is possible to restate equation 4a to differentiate the populations into two or more sets, each with its own substitution rate. In fact, this is exactly what equation 9 does, except that it constrains one of the rates to be 0. At the extreme, it is possible to allow each population to have its own substitution rate. However, in this case, the solution is trivial, because the likelihood is maximized when the ML rate for each population is determined. The approaches we apply here based on subtree likelihoods and whole-tree likelihoods are equivalent to those used by Gu (2001). In our example, there appears to be very little difference in the results. It is worth reiterating the points that Gu (2001) makes in his comparison of the subtree likelihood versus whole-tree likelihood approaches. In essence, if there are very long or very short internal branch lengths between clades, the whole-tree method offers little improvement over the subtree method. This is because the value of the whole-tree approach depends on the extent to which the joint phylogeny influences the prior probabilities of nucleotide states at the roots of each of the individual subtrees and hence the likelihood of the tree and its attendant parameter estimates. If these prior probabilities and the resultant likelihood are unaffected or only marginally affected by combining the subtrees into a single joint phylogeny, then there is little added value in the whole-tree approach, particularly when we consider the computational overheads of the method. These can be quite substantial; for instance, the grid search to generate the likelihood contour plot using the STL method, programmed with JAVA version 1.4, took an average of 71 s on a PC with an AMD Athlon 1400+ processor and 512 Mb RAM. In contrast, the WTL grid search, on the same machine, took 8,195 s, or just over 2 h. The STL methods we have developed in this paper can also be applied to serial sequence samples drawn from different, unlinked loci. 
In this case, we may be interested in testing whether the different loci are evolving at the same rate. Alternatively, the sampled loci may be partitioned into groups, each evolving with its own rate. The methods we have described here work as well with such data. Obviously, these methods should only be applied if each subpopulation is evolving in a clocklike manner. It should be routine to validate this assumption first using the tests described by Rambaut (2000) and Drummond, Forsberg, and Rodrigo (2001).

Our analyses, as we have described them, rely on the assumption of a given topology. To relax this assumption, we need to allow for uncertainty in the evolutionary relationships of the sampled sequences. Two of us (A.D. and A.G.R.) have been involved in recent work on the use of Bayesian analysis of serially sampled sequences using Markov chain Monte Carlo (MCMC) methods (Drummond et al. 2002). This approach allows different topologies to be sampled in proportion to their contribution to the joint posterior probability of all unknown quantities. The plan for the immediate future is to incorporate the analyses, and different options, described here into the MCMC framework already available.

1 Present address: Department of Statistics, and Department of Zoology, University of Oxford, Oxford, United Kingdom.

Diethard Tautz, Associate Editor

Fig. 1. Alternative models of phylogenies with serially sampled sequences. (A) A “single-rate with dated tips” (SRDT) tree, with the times of the serial sequence samples known precisely. Under a molecular clock, sequences from each timepoint terminate at the same distance from the root of the tree. The branch lengths are extended by the product of a single substitution rate, ω, and the sampling interval. (B) A partially constrained phylogeny of sequences from two subpopulations. Only the lengths of the sampling intervals, Δ1 and Δ2, are known for the samples from both populations. A common rate, ω, is assumed. (C) A partially constrained phylogeny with sampling intervals known and with different rates, ω1 and ω2, for each population

Fig. 2. The midpoint-rooted phylogeny of sequences sampled from Patient K. Sequences with the prefix KV were obtained before therapy and those with the prefix KP were obtained 2 years after therapy

Fig. 3. Contour plot of ln L, obtained using equation 9, as a function of the proportion of responders, p, and substitution rate, ω. The contour corresponding to ln L = −4,396.9 is the joint 95% profile confidence envelope of ω and p.
The hatched bars on the horizontal and vertical axes correspond to the 95% profile confidence intervals of p and ω, respectively

Fig. 4. The partially-constrained joint phylogeny of sequences from all subjects indicating the ML combination of rate assignments. The ML combination only has Patient M assigned as a nonresponder

Fig. 5. Contour plot of ln L, obtained using equation 13, as a function of the proportion of responders, p, and substitution rate, ω. The contour corresponding to ln L = −2,715.4 is the joint 95% profile confidence envelope of ω and p. The hatched bars on the horizontal and vertical axes correspond to the 95% profile confidence intervals of p and ω, respectively

Table 1. Maximum-Likelihood Estimate of Substitution Rates^a and Log-Likelihoods Associated with Different Values of ω

| Subject | MLE of ω (−log-likelihood) | −log-likelihood when ω = 0 | −log-likelihood when ω = 0.017 | ΛSTL | ΛWTL |
| --- | --- | --- | --- | --- | --- |
| Patient A | 0.0000 (905.27) | 905.27 | 914.58 | 2.26 × 10⁻⁵ | 1.36 × 10⁻⁵ |
| Patient B | 0.0000 (525.91) | 525.91 | 673.52 | 1.96 × 10⁻⁶⁵ | 2.17 × 10⁻⁶⁷ |
| Patient C | 0.0032 (798.34) | 799.30 | 804.25 | 0.002 | 0.001 |
| Patient K | 0.0000 (585.32) | 585.32 | 585.35 | 0.242 | 0.01 |
| Patient L | 0.0000 (707.41) | 707.41 | 817.18 | 5.31 × 10⁻⁴⁹ | 1.54 × 10⁻⁵⁰ |
| Patient M | 0.0176** (868.14) | 875.32 | 869.75 | 65.609 | 492.749 |

Note.—** Statistically different from ω = 0 (p < 0.01). The last two columns provide empirical Bayes' ratios, calculated under the STL (ΛSTL) and WTL (ΛWTL) approaches of the probabilities of ω > 0 versus ω = 0 for each population, as discussed in the main text. A value greater than 1 signifies that it is more probable that a population has a positive-valued substitution rate.

^a Expressed as number of substitutions per site per year.

This work was supported by NIH grant RO1-GM59174. A.D. was supported by a New Zealand FRST Bright Futures Scholarship. R.F. was supported by a NIH grant R01-GM60729, by grant number 9901522 from the Danish Agricultural and Veterinary Research Council, and by grant numbers 51-00-0392 and 21-02-0206 from the Danish Natural Science Research Council. We thank two anonymous reviewers for comments that helped us improve the manuscript.
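
The empirical Bayes' ratios in the last two columns of table 1 above can be reproduced directly from the tabulated log-likelihoods. The text is not explicit about whether p labels the responders or the nonresponders, but the tabulated ΛSTL values are recovered by weighting the non-0-rate category by p ≈ 0.20; a minimal sketch:

```python
import math

p = 0.20   # estimated weight of the non-zero-rate category (see text)
nll = {    # patient: (-ln L at omega = 0, -ln L at omega = 0.017), from table 1
    "A": (905.27, 914.58),
    "B": (525.91, 673.52),
    "C": (799.30, 804.25),
    "K": (585.32, 585.35),
    "L": (707.41, 817.18),
    "M": (875.32, 869.75),
}

for patient, (nll_zero, nll_omega) in nll.items():
    # Lambda_i = [p * L_i(omega)] / [(1 - p) * L_i(0)]
    lam = (p / (1 - p)) * math.exp(nll_zero - nll_omega)
    print(patient, f"{lam:.3g}")   # e.g. K -> ~0.243, M -> ~65.6
```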
## Literature Cited

Barnes, I., P. Matheus, B. Shapiro, D. Jensen, and A. Cooper. 2002. Dynamics of Pleistocene population extinctions in Beringian brown bears. Science 295:2267-2270.

Drummond, A., R. Forsberg, and A. G. Rodrigo. 2001. Estimating stepwise changes in substitution rates using serial samples. Mol. Biol. Evol. 18:1365-1371.

Drummond, A., G. K. Nicholls, A. G. Rodrigo, and W. Solomon. 2002. Estimating mutation parameters, population history and genealogy simultaneously from temporally spaced sequence data. Genetics 161:1307-1320.

Drummond, A., and A. G. Rodrigo. 2000. Reconstructing genealogies of serial samples under the assumption of a molecular clock using serial-sample UPGMA (sUPGMA). Mol. Biol. Evol. 17:1807-1815.

Felsenstein, J. 1981. Evolutionary trees from DNA sequences: a maximum likelihood approach. J. Mol. Evol. 17:368-376.

Finzi, D., M. Hermankova, and T. Pierson, et al. (15 co-authors). 1997. Identification of a reservoir for HIV-1 in patients on highly active antiretroviral therapy. Science 278:1295-1300.

Forsberg, R., M. B. Oleksiewicz, A. M. K. Petersen, J. Hein, A. Bøtner, and T. Storgaard. 2001. A molecular clock dates the common ancestor of European-type porcine reproductive and respiratory syndrome virus at more than 10 years before the emergence of disease. Virology 289:174-179.

Fu, Y. X. 2001. Estimating mutation rate and generation time from longitudinal samples of DNA sequences. Mol. Biol. Evol. 18:620-626.

Goldman, N. 1990. Maximum likelihood inferences of phylogenetic trees, with special reference to the Poisson process model of DNA substitutions and to parsimony analysis. Syst. Zool. 39:345-361.

Gu, X. 2001. Maximum-likelihood approach for gene family evolution under functional divergence. Mol. Biol. Evol. 18:453-464.

Gunthard, H. F., S. D. Frost, and A. J. Leigh Brown, et al. (12 co-authors). 1999. Evolution of envelope sequences of human immunodeficiency virus type 1 in cellular reservoirs in the setting of potent antiviral therapy. J. Virol. 73:9404-9412.

Holmes, E. C., L. Q. Zhang, P. Simmonds, C. A. Ludlam, and A. J. Leigh Brown. 1992. Convergent and divergent sequence evolution in the surface envelope glycoprotein of HIV-1 within a single infected patient. Proc. Natl. Acad. Sci. USA 89:4835-4839.

Lambert, D. M., P. A. Ritchie, C. D. Millar, B. Holland, A. J. Drummond, and C. Baroni. 2002. Rates of evolution in ancient DNA from Adelie penguins. Science 295:2270-2273.

Leitner, T., and J. Albert. 1999. The molecular clock of HIV-1 unveiled through analysis of a known transmission history. Proc. Natl. Acad. Sci. USA 96:10752-10757.

Leonard, J. A., R. K. Wayne, and A. Cooper. 2000. Population genetics of Ice Age brown bears. Proc. Natl. Acad. Sci. USA 97:1651-1654.

Ota, R., P. J. Waddell, M. Hasegawa, H. Shimodaira, and H. Kishino. 2000. Appropriate likelihood ratio tests and marginal distributions for evolutionary tree models with constraints on parameters. Mol. Biol. Evol. 17:798-803.

Rambaut, A. 2000. Estimating the rate of molecular evolution: incorporating non-contemporaneous sequences into maximum likelihood phylogenies. Bioinformatics 16:395-399.

Rodrigo, A. G., E. G. Shpaer, E. L. Delwart, A. K. N. Iversen, M. V. Gallo, J. Brojatsch, M. S. Hirsch, B. D. Walker, and J. I. Mullins. 1999. Coalescent estimates of HIV-1 generation time in vivo. Proc. Natl. Acad. Sci. USA 96:2187-2191.

Rodriguez, F., J. F. Oliver, A. Marin, and J. R. Medina. 1990. The general stochastic model of nucleotide substitution. J. Theor. Biol. 142:485-501.

Seo, T. K., J. L. Thorne, M. Hasegawa, and H. Kishino. 2002. A viral sampling design for testing the molecular clock and for estimating evolutionary rates and divergence times. Bioinformatics 18:115-123.

Seo, T. K., J. L. Thorne, M. Hasegawa, and H. Kishino. 2002. Estimation of effective population size of HIV-1 within a host. A pseudomaximum-likelihood approach. Genetics 160:1283-1293.

Shankarappa, R., J. B. Margolick, and S. J. Gange, et al. (12 co-authors). 1999. Consistent viral evolutionary dynamics associated with the progression of HIV-1 infection. J. Virol. 73:10489-10502.

Swofford, D. L. 1999. PAUP*: phylogenetic analysis using parsimony (*and other methods). Sinauer Associates, Sunderland, Mass.

Wong, J. K., M. Hezareh, H. F. Gunthard, D. V. Havlir, C. C. Ignacio, C. A. Spina, and D. D. Richman. 1997. Recovery of replication-competent HIV despite prolonged suppression of plasma viremia. Science 278:1291-1295.
2017-02-20 20:09:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7787660956382751, "perplexity": 1404.0545871025133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170609.0/warc/CC-MAIN-20170219104610-00405-ip-10-171-10-108.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/3820642/how-to-factor-1-3x-x2-x3
# How to factor $1 - 3x + x^2 + x^3$? By inspection, I can see that one of the roots is $$1$$. So we can write $$1 - 3x + x^2 + x^3 = (x - 1)f_2(x)$$ where $$f_n(x)$$ is an n-th order polynomial. I haven't used long division for polynomials in ages, but I feel like that might be overcomplicating things and there might be an easier way to determine $$f_2(x)$$. Is there an obvious approach to getting $$f_2(x)$$ here? I tried some guess and checking to obtain it. I know that the quadratic term in $$f_2(x)$$ must have a coefficient of $$1$$, since the coefficient of $$x^3$$ is $$1$$. So $$f_2(x) = x^2 + f_{1}(x)$$. Now $$f_{1}(x)$$ is some affine equation. I know that $$f_{1}(x)$$ must have the constant $$-1$$ since we have a constant $$1$$ in the cubic and $$(x-1)$$, so we know that $$f_2(x) = x^2 - 1 + f_a(n-1)$$, where $$f_a(x)$$ is some linear equation that goes through the origin. Now I checked $$(x-1)(x^2 - 1) = x^3 - x^2 - x + 1$$. When we compare this with the original cubic, we see that we're off by $$2x^2 - 2x$$. So this prompted me to use $$f_a(x) = 2x$$. So we have $$f_2(x) = x^2 + 2x - 1$$ and this checks out. This procedure that I used was just kind of just guess and checking. • You wrote the letter $n$ a lot where you should have been writing $x$. Also, you mean $f(x-1)$ is an affine function, not "equation." And no, $f(x-1)$ is just as quadratic as $f(x)$ is. – runway44 Sep 10 at 4:40 • @runway44 I'm really just using $n$ here to represent the order of the polynomial. I probably should've just used it as a subscript. – David Sep 10 at 4:41 • en.wikipedia.org/wiki/Synthetic_division – copper.hat Sep 10 at 4:42 Set $$x=0$$ in $$1 - 3x + x^2 + x^3 = (x - 1)(x^2+ax+b)$$ and you get $$b=-1$$. Taking the derivative on both sides, you have $$-3+2x+3x^2=(x-1)(2x+a)+x^2+ax-1.$$ Set $$x=0$$ in the above equation and you get $$a=2$$. One method to factor it is to check whether it can be separated into several parts that have a common factor. In this case, we can separate $$x^3+x^2-3x+1$$ into $$x^3-x$$ and $$x^2-2x+1$$. Since $$x^3-x=x(x^2-1)=x(x-1)(x+1)$$ and $$x^2-2x+1=(x-1)^2$$, the 2 parts have a common factor of $$(x-1)$$ and can be factored out. Thus, $$x^3+x^2-3x+1=x^3-x+x^2-2x+1=x(x-1)(x+1)+(x-1)^2=(x-1)(x(x+1)+(x-1))=(x-1)(x^2+2x-1)$$.
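
If you want to double-check a factorisation like this mechanically, a computer algebra system will do the division for you; a quick sketch with sympy (just one convenient choice of CAS):

```python
from sympy import symbols, div, expand, factor

x = symbols('x')
p = x**3 + x**2 - 3*x + 1

# Divide out the known root x = 1, i.e. the factor (x - 1)
quotient, remainder = div(p, x - 1)
print(quotient, remainder)                 # x**2 + 2*x - 1, 0

# Confirm the product expands back to the original cubic
print(expand((x - 1) * (x**2 + 2*x - 1)))  # x**3 + x**2 - 3*x + 1
print(factor(p))                           # (x - 1)*(x**2 + 2*x - 1)
```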
2020-11-24 10:15:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 34, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8691298961639404, "perplexity": 116.11903570229383}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141176049.8/warc/CC-MAIN-20201124082900-20201124112900-00066.warc.gz"}
http://lists.ellipsis.cx/archives/spoon-discuss/spoon-discuss-200104/msg00077.html
Joel Uckelman on 28 Apr 2001 18:08:17 -0000 Re: spoon-discuss: Joel, help us! Quoth Rob Speer: > The state of the Nomic is getting farther and farther behind. So far, the nam > es > of people who voted last nweek have not yet been posted, so I can't deal the > cards even though I should have dealt them already. Two proposals have not > been recognized, and while the voting period is halfway over, the Ballot has > not been released. The reason I haven't posted the names of the voters from last nweek is because I don't have a good way of noting that the Thug was used to affect the votes. I know what I need to do, but actually implementing it just isn't going to happen until I'm done with classes. However, I should still have posted the voting anyway, so here it is: 404notfound - - - - - - - - - Benjamin y y y y n y n y y Blest Lax Monk Pal y y y n n y y y y Dave y y y y n y y y n Ed - - - - - - - - - Feyd - - - - - - - - - Jeff Schroeder y y y y n y y y y Joel y y y y n n n y y Joerg y y y y n y y y y jonno - - - - - - - - - M'cachessilnath n n y n n n n y y Poulenc a a y y y a a y a PurpleBob y y y y n y n y y relet - - - - - - - - - Remo - - - - - - - - - The Kid y y y y n n n n y Zagarna y a y y n n y y y > So, Joel, I'll help with most of this. I'll tell you what to say, and you can > just copy and paste this into a spoon-business message when you have a minute > to spare. Then all we'll need is last week's voting results (or even just a > list of who voted). Thanks. This helps a lot. It's a problem that quorum works the way it does, though. I can't call a quorum vote during voting, so we may not make quorum this time. > \begin{putting words in Joel's mouth} > > I recognize the following Proposals: > P465/0, Fix the Showdown, by PurpleBob > P466/0, Remove the debt system, by Joerg > > The ballot for nweek 21 is as follows: > > The following Proposals are up for voting: > P465/0, Fix the Showdown, by PurpleBob > P466/0, Remove the debt system, by Joerg > > Elections for nweek 21: > * Banker > Joerg > > [[ Is this right? Can there be an election with only one nominee? ]] > > \end{putting words in Joel's mouth} > -- > Rob Speer -- J.
2020-02-18 22:01:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48678475618362427, "perplexity": 3235.3995433095597}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143815.23/warc/CC-MAIN-20200218210853-20200219000853-00169.warc.gz"}
https://richbeveridge.wordpress.com/2015/03/16/the-many-solutions-of-the-population-equation/
## The Many Solutions of the Population Equation

I never studied the logistic equation as a student. I first encountered this relationship as an instructor in one of the College Algebra textbooks I was reviewing and/or teaching from and was intrigued by the application of this “growth with constraints” model to a natural resource. In researching the application of the logistic model to natural resource consumption, I immediately ran into M. King Hubbert's work on Peak Oil.

Then, when I was teaching integral calculus last winter, we began a unit on separable differential equations. I was poking around looking for good application problems that would utilize separable ODEs and ran into the fundamental population differential relationship $\frac{dP}{dt}=kP$, followed by the relationship I had used for the logistic $\frac{dP}{dt}=kP(1-\frac{P}{N})$ with $N$ defined as the “carrying capacity” or maximum growth for the population. We went through the procedures for solving each of these ODEs (as well as the continuous mixing problems which follow a similar pattern) and then we moved on.

This year when I was teaching this topic again I was reminded of a paper my thesis advisor had given to me back in 2003 about the application of differential equations to modeling fish populations. I was intrigued by the profusion of models that could be generated by changing the constraints for a given relationship. Since I didn't teach integral calculus for another 10 years after I read that paper, I had essentially forgotten most of the equations, formulas and relationships that generated the graphs that had stuck with me. This year, while covering the separable ODEs with their applications, I began to look into the application of these relationships to fish populations and population in general.

I found two great resources that go through the setup of these relationships in a very clear manner, and each of them includes wonderful graphs showing the multiple solutions that result when the same differential relationship is solved with different initial conditions. The opening section of Robert Borelli and Courtney Coleman's Differential Equations: A Modeling Approach can be read here. A student project from James Madison University written by Bailey Steinworth, Yuhui Wang and Xing Zhang can be read here.
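
For anyone who wants to reproduce those pictures, here is a minimal numerical sketch of the logistic relationship solved from several starting populations (the rate k, carrying capacity N, and initial values below are made up for illustration, not taken from either resource):

```python
import numpy as np
from scipy.integrate import solve_ivp

k, N = 0.5, 100.0          # illustrative growth rate and carrying capacity

def logistic(t, P):
    # dP/dt = k * P * (1 - P/N)
    return k * P * (1 - P / N)

t_span = (0.0, 20.0)
t_eval = np.linspace(*t_span, 200)

for P0 in (5.0, 25.0, 60.0, 150.0):              # different initial populations
    sol = solve_ivp(logistic, t_span, [P0], t_eval=t_eval)
    print(P0, round(float(sol.y[0][-1]), 2))     # every trajectory tends toward N
```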
2017-03-30 12:42:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6349372267723083, "perplexity": 797.5395056584069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218194600.27/warc/CC-MAIN-20170322212954-00280-ip-10-233-31-227.ec2.internal.warc.gz"}
https://thenoriegabook.wordpress.com/tag/bilinear-forms/
# The reverse Cauchy-Schwarz inequality

Let $I$ be a symmetric, hyperbolic bilinear form (that is, of signature $+,-,\dots,-$, with exactly one positive direction) over a finite dimensional real vector space (though this may hold in general as well). If $x$ is positive, meaning $I(x,x)>0$, then for all $y$ we have $I(x,y)^2 \geq I(x,x)I(y,y).$
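
A sketch of the standard argument, written for the convention above (so that the restriction of $I$ to the orthogonal complement of a positive vector is negative definite):

```latex
% Proof sketch (assumes signature (+,-,...,-), i.e. one positive direction).
Since $I(x,x) > 0$, decompose $y = \alpha x + z$ with
$\alpha = \frac{I(x,y)}{I(x,x)}$ and $I(x,z) = 0$.
On $x^{\perp}$ the form is negative definite, so $I(z,z) \le 0$, and
\[
  I(y,y) = \alpha^{2} I(x,x) + I(z,z) \le \alpha^{2} I(x,x).
\]
Multiplying by $I(x,x) > 0$ gives
\[
  I(x,x)\, I(y,y) \le \alpha^{2} I(x,x)^{2} = I(x,y)^{2},
\]
with equality exactly when $z = 0$, i.e. when $y$ is a multiple of $x$.
```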
2017-12-12 02:34:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9595472812652588, "perplexity": 191.5091178999833}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948514250.21/warc/CC-MAIN-20171212021458-20171212041458-00554.warc.gz"}
http://superuser.com/questions/31568/group-policy-on-windows-sets-hidden-files-as-invisible-upon-startup
Group policy on windows sets hidden files as invisible upon startup I am an student and the computers at school are configured via a group policy to set hidden files to hidden. I can undo that by manually changing this in the menus however this gets really frustrating after that 1000th time (so to speak). Is there a program or does someone know a batch file that can do this for me with a single click? - To clarify - you currently have a group policy that sets your hidden files to hidden, and you are tired of manually unchecking the box each time? –  Jared Harley Aug 28 '09 at 21:11 I'm referring to the computers at my school. I'm an ICT student and I need to work with hidden folders every now and then. Thing is, not all students are ICT students, so not all people need them. They expect as to know where the setting can be manipulated. –  KdgDev Aug 28 '09 at 23:09 There is no such policy, but here is registry setting and registry settings can be enforced through GPO with Administrative templates. This is the registry key: Value Name: ShowSuperHidden Data Type: REG_DWORD (DWORD Value) Value Data: (0 = Hide Files, 1 = Show Files) Here is an ADM template I have used in the past: CLASS USER CATEGORY "System" CATEGORY "Folders Files" POLICY "Hide\Show Hidden Files" EXPLAIN "This setting will allow for you to set the show and hide files and folders by default Keep in mind that this information will be stored in cleartext in the systems registry." PART "SetThis" NUMERIC REQUIRED TXTCONVERT VALUENAME "Hidden" MIN 1 MAX 2 DEFAULT "2" END PART END POLICY END CATEGORY END CATEGORY • Save the text with an ".adm" extention and place it somewhere. • Create a GPO called "HiddenFiles" then open it up and add this template to your GPO • Right click the "Administrative Templates" -> View -> Filtering -> "Only show policy settings that can be fully managed" • Then go into your "User Configuration" -> Administrative Templates -> System -> Folders Files -> "Hide\Show Hidden Files" The only catch with using unmanaged ADM's is they need to be reversed not disabled when you want to remove them from your system. If you set ( Hidden = 1 ) You will need to set ( Hidden = 0 ) [1]: "How to create custom administrative templates in Windows 2000" http://support.microsoft.com/kb/323639 - This is actually pretty easy, it just takes a little setup. First, you'll want to create a registry key to set the "Show Hidden Files" option to true. Create a new file called "show.reg" and put the following text into it: Windows Registry Editor Version 5.00 Place the file wherever you want to keep it (such as My Documents) and make note of it's location. Then, on your desktop, create a new shortcut. In the path to the shortcut, put in: regedit.exe /s c:\users\ path to file \show.reg The /s command for regedit.exe suppresses the notifications regedit normally shows. Once you've done this, you should be able to just double-click the shortcut and your hidden files will show up. You may have to refresh your Explorer window (press F5) to get the hidden files to show. If you want to hide the files again with the same method, follow the instructions above, but use a new registry file called "hide.reg" with the following: Windows Registry Editor Version 5.00
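
The bodies of those two files are short. A likely version, assuming the standard Explorer "Advanced" key (the same "Hidden" value that the ADM template in the first answer manages), would be something like:

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced]
"Hidden"=dword:00000001

; hide.reg is identical except for the value:
; "Hidden"=dword:00000002
```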
2015-01-27 15:18:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2529931366443634, "perplexity": 4936.281444864766}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422121981339.16/warc/CC-MAIN-20150124175301-00062-ip-10-180-212-252.ec2.internal.warc.gz"}
http://community.boredofstudies.org/13/mathematics-extension-1/335187/cambridge-prelim-mx1-textbook-marathon-q-5.html
# Thread: Cambridge Prelim MX1 Textbook Marathon/Q&A 1. ## Re: Year 11 Mathematics 3 Unit Cambridge Question & Answer Thread Originally Posted by DatAtarLyfe Can't you just let x->infinity, making the y values ->x^2, instead of doing all the algebra above You can I was taught this way though ~ (I can do all the algebra in my head but I posted it for OP's sake) 2. ## Re: Year 11 Mathematics 3 Unit Cambridge Question & Answer Thread Originally Posted by Crisium You can I was taught this way though ~ (I can do all the algebra in my head but I posted it for OP's sake) cool, just confirming 3. ## Re: Year 11 Mathematics 3 Unit Cambridge Question & Answer Thread Why is it the highest power of x in the denominator , i thought it was just the highest power in either the numerator or denominator? 4. ## Re: Year 11 Mathematics 3 Unit Cambridge Question & Answer Thread Originally Posted by appleibeats Why is it the highest power of x in the denominator , i thought it was just the highest power in either the numerator or denominator? It's because your are trying to get rid of the pronumeral in the denominator before you let x-> infinity 5. ## Re: Year 11 Mathematics 3 Unit Cambridge Question & Answer Thread For 5)i y = (x+1)^3/ x i know there is a vertical asymptote of x = 0 But is there an horizontal asymptote. I expanded the numerator and did lim x ---> infinity and get x^2 + 3x + 3..... 6. ## Re: Year 11 Mathematics 3 Unit Cambridge Question & Answer Thread Yes, as x tends to $\pm{\infty}$, y values will tend to $x^2+3x+3$. Note that what we are trying to find is a curve which 'bounds' the graph of f. This is only just for graphical purposes. It is not correct to say $\lim_{x \to\infty} \dfrac{(x+1)^3}{x} = x^2+3x+3$ since the expression on the LHS still means that y tends on infinity as you increase x. Graphically, however, it will be 'bounded' off by $x^2+3x+3$ which becomes our oblique (parabolic) asymptote. 7. ## Re: Year 11 Mathematics 3 Unit Cambridge Question & Answer Thread Originally Posted by appleibeats For 5)i y = (x+1)^3/ x i know there is a vertical asymptote of x = 0 But is there an horizontal asymptote. I expanded the numerator and did lim x ---> infinity and get x^2 + 3x + 3..... In other words, a 'parabolic asymptote'. 8. ## Re: Year 11 Mathematics 3 Unit Cambridge Question & Answer Thread Originally Posted by appleibeats For 5)i y = (x+1)^3/ x i know there is a vertical asymptote of x = 0 But is there an horizontal asymptote. I expanded the numerator and did lim x ---> infinity and get x^2 + 3x + 3..... Just keep in mind these general rules when your trying to identify what kind of asymptotes exist: 1/ If the highest power of x in the denominator = the highest power of x in the numerator, then you have a horizontal asymptote 2/ If the highest power of x in the denominator < the highest power of x in the numerator (BY ONE), then you have an oblique asymptote 3/ If the highest power of x in the denominator < the highest power of x in the numerator (BY TWO OR MORE), then you have another graph as an asymptote So in your case, the highest power in numerator was 3 and highest power in denominator was 1, thus you have a parabolic asymptote 9. ## Re: Year 11 Mathematics 3 Unit Cambridge Question & Answer Thread The graph of y=(x+1)^3/x in red And it's parabolic asymptote in black. 10. ## Re: Year 11 Mathematics 3 Unit Cambridge Question & Answer Thread I never learnt how to do Pascals triangle, will I ever need it in 2U and or 3U in Year 12? 11. 
## Re: Year 11 Mathematics 3 Unit Cambridge Question & Answer Thread Originally Posted by Speed6 I never learnt how to do Pascals triangle, will I ever need it in 2U and or 3U in Year 12? 3U 12. ## Re: Year 11 Mathematics 3 Unit Cambridge Question & Answer Thread Originally Posted by Drongoski 3U Ok thanks Drongoski, one last question, is this something which I can learn independently or will I need a teacher to guide me through it? Also, is it something which can be learnt in one day? 13. ## Re: Year 11 Mathematics 3 Unit Cambridge Question & Answer Thread The Pascal Triangle itself is dead easy; nothing to it. But if you are talking about the many interesting properties associated with it, that requires a bit more algebra. 14. ## Re: Year 11 Mathematics 3 Unit Cambridge Question & Answer Thread Pascal's triangle is more of a reference point to begin the binomial theorem topic. It's not a major part in the syllabus. 1 (this is called the '0th' row) 1 1 1 2 1 1 3 3 1 1 4 6 4 1 1 5 10 10 5 1 1 6 15 20 15 6 1 etc. For the binomial theorem, what's interesting is that the expansion of (1+x)^n, the coefficients match up the nth row of Pascal's triangle. Also, Pascal's triangle can be written in combinations. It might also be worth mentioning that the sum of the coefficients on each row is 2^n. Besides that you don't really need much more for MX1. 15. ## Re: Year 11 Mathematics 3 Unit Cambridge Question & Answer Thread How do you do question 3 a in exercise 7D? 16. ## Re: Year 11 Mathematics 3 Unit Cambridge Question & Answer Thread so the question is y = x^2 1/2 so make it an improper fraction y = x^5/2 then it simply: y' = 5/2 x^ 3/2 which = 5/2 x^ 1 1/2 you bring down the 5/2 and minus 1 from the power. 17. ## Re: Year 11 Mathematics 3 Unit Cambridge Question & Answer Thread for question 6a from 10F y = x^(1/2) + 1/x^(1/2) i don't understand why the curve flattens out as x ---> + infinity? why is it not a parabola shape? 18. ## Re: Year 11 Mathematics 3 Unit Cambridge Question & Answer Thread Originally Posted by appleibeats for question 6a from 10F y = x^(1/2) + 1/x^(1/2) i don't understand why the curve flattens out as x ---> + infinity? why is it not a parabola shape? $For large x, y\approx x^\frac{1}{2}, which would not give a parabolic shape, because this is a square root function (\emph{not} x^2 or a quadratic like that; the exponent is \frac{1}{2}, not 2). The square root function `flattens out' as x\to +\infty, which is why the curve also flattens out as x\to +\infty (we essentially have the graph of y=\sqrt{x} being an asymptote as x\to+\infty).$ 19. ## Re: Year 11 Mathematics 3 Unit Cambridge Question & Answer Thread How do you find the focus length of the parabola x^2 = 28/5y 20. ## Re: Year 11 Mathematics 3 Unit Cambridge Question & Answer Thread Originally Posted by appleibeats How do you find the focus length of the parabola x^2 = 28/5y I take it to mean: x^2 = (28/5)y In that case simply express in the form: x2 = 4ay For this question: x^2 = 4* (7/5)*y so, the focal length(not focus length) "a" is 7/5. 21. ## Re: Year 11 Mathematics 3 Unit Cambridge Question & Answer Thread Originally Posted by Drongoski I take it to mean: x^2 = (28/5)y In that case simply express in the form: x2 = 4ay For this question: x^2 = 4* (7/5)*y so, the focal length "a" is 7/5. Lol, i was right but deleted my post cause i was like "wait wtf, it's 28/5y" 22. ## Re: Year 11 Mathematics 3 Unit Cambridge Question & Answer Thread i though the focus length was the distance from the vertex and the focus. 
I got the focus was (0,7/5) and the vertex ( 14/5, 7/5) so I got the focus length = 14/5 Where did i go wrong? 23. ## Re: Year 11 Mathematics 3 Unit Cambridge Question & Answer Thread Originally Posted by DatAtarLyfe Lol, i was right but deleted my post cause i was like "wait wtf, it's 28/5y" sorry about not making it clear 24. ## Re: Year 11 Mathematics 3 Unit Cambridge Question & Answer Thread For $x^2=\dfrac{28}{5}y$, vertex is $(0,0)$, focus is $(0, \dfrac{7}{5})$, hence focal length is $\dfrac{7}{5}$. 25. ## Re: Year 11 Mathematics 3 Unit Cambridge Question & Answer Thread Originally Posted by appleibeats i though the focus length was the distance from the vertex and the focus. I got the focus was (0,7/5) and the vertex ( 14/5, 7/5) so I got the focus length = 14/5 Where did i go wrong? Your vertex is incorrect. If you look at your original equation, x^2=28/5y, your vertex is actually (0,0) To determine the vertex from your equation, you use the standard equation (x-h)^2=4a(y-k), where (h,k) is your vertex. So in your equation, h=0, k=0 and 4a=28/5
2019-02-15 18:55:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8862956762313843, "perplexity": 2372.027208372606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247479101.30/warc/CC-MAIN-20190215183319-20190215205319-00599.warc.gz"}
https://www.fideuramlodi.it/u8j7r73n/relation-between-electron-volt-and-joule-f68089
Consumer Credit Union Customer Service, Nissan Micra K13 Parts Catalogue, Eagle Falls Cliff Jump, Thyme Meaning In Spanish, Thorichthys Ellioti -- Seriously Fish, Water Usgs Pa, Sonic Youth Tunings, "/> Consumer Credit Union Customer Service, Nissan Micra K13 Parts Catalogue, Eagle Falls Cliff Jump, Thyme Meaning In Spanish, Thorichthys Ellioti -- Seriously Fish, Water Usgs Pa, Sonic Youth Tunings, "/> Consumer Credit Union Customer Service, Nissan Micra K13 Parts Catalogue, Eagle Falls Cliff Jump, Thyme Meaning In Spanish, Thorichthys Ellioti -- Seriously Fish, Water Usgs Pa, Sonic Youth Tunings, "/> relation between electron volt and joule Joules are volt-coulombs; that is, when you put one coulomb. Calculez les électron-volts en joules, convertir eV vers J . ~624 EeV (6.24×1020 eV): energy needed to power a single 100 watt light bulb for one second. We assume you are converting between atomic mass unit [1960] and electronvolt. However a joule is a unit of energy or work done, to move an electric charge through an electric potential. For example, if you have a 9 volt battery putting out a current of 0.1 amps then each electron has an available energy 9 eV. What was decided after the war about the re-building of the chathedral? La formule pour convertir Joule en Électron-volt est 1 Joule = 6.241506363094E+18 Électron-volt. We get the following relation between Joule and one electron volt: To convert let's say 14 TeV (proton-proton collision energy at LHC when at full operation) to . Holt Physics 2002. I love Yucheng Jade Shao on January 23, 2018: This was an amazing presentation and it helped me alot to understand volts, Amps, Ohms, and Watts! Vérifiez notre. Electron volt: 1 electron volt is the energy change that takes place when a charge equal to 1 electron (1.6×10-19 C) is moved through a potential difference of 1 volt. We assume you are converting between electronvolt and kilojoule. J (electron volt to joule conversion) Relationship between the wavelength of a photon and its energy: λ = hc/E 1 Problem 1 Show that u ( x, t ) = e - i ( kx - ωt ) satisfies the … It is usually used as a measure of particle energies although it is not an SI (System International) unit. 1 electron volts is equal to 1.6021773E-19 joule. The charge of an electron is - 1.6 x 10^-19 coulombs. Atomic Physics. one joule of work on the system. The energy of the electron in electron volts is numerically the same as the voltage between the plates. Discussion. What is the relation between joule and electron volt. In a capacitor, the formula is E=0.5 * C * V^2, where C is in Farads, V is in Volts, E is in Joules. How many electron volts in 1 kilojoule? They are differ in a manner that volt is SI unit of electric potential o voltage while electron volt is one of the unit of energy. No Related Subtopics. How many somas can be fatal to a 90lb person? Since energy is related to voltage by $$\Delta PE=q\Delta V$$ we can think of the joule as a coulomb-volt. The relation between 1 electron volt and 1 joule will really depend on the scattering that takes place. The 0.1 amps is how mant electons per second are flowing, for 0.1 amps that is 6 x 10^17 electrons per second. Thus one electron volt, one eV, is 1.6 x 10^-19 joules. La formule pour convertir Joule en Électron-volt est 1 Joule = 6.241506363094E+18 Électron-volt. 2. Well, you’d be right, it does … but did you know that the electron volt is actually a unit of energy, like the erg or joule? Convertir joule en électron-volt. 
1000 Electron volts = 0 Kilocalories 1000000 Electron volts = 0 Kilocalories Embed this unit converter in your page or blog, by copying the following HTML code: In physics, the electron volt (symbol eV; also written electronvolt12) is a unit of energy equal to approximately 1.602×1019 joule (Si unit J). 1 Joule est 1000000 fois Plus gros que 1 Microjoule. How many grams in a cup of butternut squash? One electron-volt is only 1.6 x 10-19 joules of energy, in other words, 0.16 billion-billionth of a joule. 2. A photon is a quantum of EM radiation. of electrons through a potential difference of 1 volt, you do. 1 electron volt = Charge on one electron x 1 volt . The electron energy in eV is the same as the battery voltage in volts. 1 Joule est égal à 1E-06 Mégajoule. how do they differ? 1 joule is equal to 6.2415064799632E+18 electron volts, or 0.001 kilojoule. An electronvolt is the work done to take a charge equal to one electron across a potential difference of one volt, so we will put the values of q and V accordingly in the equation. You can view more details on each measurement unit: electron volts or kilojoule The SI derived unit for energy is the joule. What is the relationship between a joule and an electron volt? In physics, an electronvolt (symbol eV, also written electron-volt and electron volt) is the amount of kinetic energy gained by a single electron accelerating from rest through an electric potential difference of one volt in vacuum. Vérifiez notre Joule to Électron-volt Convertisseur. Exchange reading in electron volts unit eV into watt seconds unit Wsec as in an equivalent measurement result (two different units but the same identical physical total value, which is also equal to their proportional parts when divided or multiplied). 5.25×1032 eV: Total energy released from a 20 kt Nuclear Fission Device. So the relationship between electronvolt(eV) and Joule can be expressed by the equation: 1 eV = 1.6 × 10 −19 J Joule to electronvolt (eV) conversion We know that 1 eV = 1.6 × 10 −19 J . For example, a 5000 V potential difference produces 5000 eV electrons. What does it mean when there is no flag flying at the White House? 1 eV = 1.6 x 10-19 joule . The energy of the electron in electron-volts is numerically the same as the voltage between the plates. Converting eV to J. The relation between volt and electron volt is :, Joule is the SI unit of energy. With three equations from physics we'll show the relationship between two units of energy: electron-volts (eV) and joules (J). What is the relation between joule and electron volt? You must be signed in to discuss. Recommended Videos. The electron volt is defined as the total amount of kinetic energy gained by an unbound electron as it is accelerated through a potential difference of one volt. Chapter 23. What is the relationship between a joule and an electron volt? Figure $$\PageIndex{3}$$: A typical electron gun accelerates electrons using a potential difference between two metal plates. Convertissez les unités de énergie. Therefore we use a much smaller unit of measurement, the electron-volt(eV). By definition, it is the amount of energy gained (or lost) by the charge of a single electron moved across an electric potential difference of one volt. The energy per electron is very small in macroscopic situations like that in the previous example—a tiny fraction of a joule. 1 eV = 1.6 × 10 −19 C × 1 V = 1.6 × 10 −19 J. Entrez la valeur de A et appuyez sur Convert pour obtenir la valeur dans Électron-volt. 
The answer is 6.2415064799632E+18. Volts to joules Conversion formula E=The energy […] This work done is converted into kinetic energy of charge. The SI unit for energy is the JOULE. The googled website link: How to convert volts to joules shows that for higher level matter charges (ie masses with a net charge imbalance), quoted below: The energy E in joules (J) is equal to the voltage V in volts (V), times the electrical charge Q in coulombs (C), or : joule = volt … Therefore, we can rewrite the above constant for hc in terms of eV: 1 eV = 1.602 x 10-19 J 1 au = 27.211324570273 eV. Electron volt is the maximum kinetic energy gained by the electron in falling through a potential difference of 1 volt. Electronvolt is equal to energy gained by a single electron when it is accelerated through 1 volt of electric potential difference. A Volt is a measure of electric potential which means electric potential energy per unit of charge (J/C). 1[eV] = 1.6 x 10 19 J. Conversion factors: 1. How many electron volts in 1 joules? 3. Learn more, How are units of volts and electron volts related? What is the relation between a joule and an electron volt? We assume you are converting between electronvolt and joule. One eV is the energy change that takes place when a charge equal to one electron(1.6*10-19C) is moved through a potential difference of 1 volt Because an electron has a charge of 1.6*10-19C the value of 1eV = 1.6*10-19J. Copyright © 2021 Multiply Media, LLC. The relation between volt and electron volt is :, Joule is the SI unit of energy. The definition of an electron volt is the kinetic energy a single electron acquires when moving through an electric potential of 1V. Electron Volt: Electron volt is a unit of energy used in atomic and nuclear physics. Comment convertir Joule en Électron-volt? What was the unsual age for women to get married? One joule (abbreviated J) is equivalent to the amount of energy used by … Why don't libraries smell like bookstores? Always check the results; rounding errors may occur. Give the gift of Numerade. I love Yucheng Jade Shao on January 23, 2018: This was an amazing presentation and it helped me alot to understand volts, Amps, Ohms, and Watts! En physique et en chimie, l'électronvolt ou électron-volt (au pluriel électronvolts ou électrons-volts) [1] (symbole eV) est une unité de mesure d'énergie. Before i cant do the difference between volts and watts. In SI units, 1 eV is 1.6 x 10^-19 Joule and one Volt is 1 Joule/Coulomb Joule est 6.2415E+18 fois Plus gros que Électron-volt. One electron volt converted into watt second equals = 0.00000000000000000016 Wsec The conceptual construct, namely two parallel plates with a hole in one, is shown in (a), while a real electron gun is shown in (b). 1 Joule est égal à 0.001 kilojoule. 5 Joules = 3.120753×10 19 Electron volts: 50 Joules = 3.120753×10 20 Electron volts: 50000 Joules = 3.120753×10 23 Electron volts: 6 Joules = 3.7449036×10 19 Electron volts: 100 Joules = 6.241506×10 20 Electron volts: 100000 Joules = 6.241506×10 23 Electron volts: 7 Joules = 4.3690542×10 19 Electron volts: 250 Joules = 1.5603765×10 21 Electron volts: 250000 Joules = 1.5603765×10 24 El The energy of the electron in electron volts is numerically the same as the voltage between the plates. Learn more, How are units of volts and electron volts related? Calculate the number of photons per second emitted by a monochromatic source of specific wavelength and power. If your impeached can you run for president again? 
When determining the electrical needs of your fence system it is important to recognize the difference between volts and joules. An electron volt is the energy obtained by an electron as it accelerates across a potential difference of one volt. 1 kilogram is equal to 6.0229552894949E+26 amu, or 5.6095883571872E+35 electronvolt. ... What is the relationship between a joule and an electron volt? The kinetic energy acquired by the an electron, when it is accelerated through a potential difference of 1 volt in vacuum is called one electron volt (1 eV). The SI unit of energy is the joule. Explain the relationship between the energy of a photon in joules or electron volts and its wavelength or frequency. The 0.1 amps is how mant electons per second are flowing, for 0.1 amps that is 6 x 10^17 electrons per second. The relation between 1 electron volt and 1 joule will really depend on the scattering that takes place. One joule is equal to 6.241509⋅10 18 electron-volts: 1J = 6.241509e18 eV = 6.241509⋅10 18 eV So the energy in electron-volts E (eV) is equal to the energy in joules E (J) times 6.241509⋅10 18 : $1.6 \times 10^{-19} \mathrm{J}$ Topics. 1 eV = 1J/C* 1.602x10^-19 C = 1.602 x 10^-19 J When there is a potential difference (voltage) of 10V between two points, it means that we are doing 10 joules of work per unit charge (electron). What are the difference between Japanese music and Philippine music? how do they differ? 1 x 27.211324570273 eV = 27.211324570273 Electron Volt. Answer. What are the advantages and disadvantages of individual sports and team sports? The conversion factor is 1 electron volt (eV) = 1.602 x 10 -19 J We get the following relation between Joule and one electron volt: We often use energies which are of the order of several million electron volts so it is convenient to introduce the following . What is the WPS button on a wireless router? This physics video tutorial provides a basic introduction into the electron volt. Before i cant do the difference between volts and watts. Outil gratuit en ligne pour faire vos calculs d'unités. 1 joule is equal to 6.2415064799632E+18 electron volts. Answer The relationship between a joule and electron volt is that $1 \mathrm{eV}=1.60 \times 10^{-19}$ $\mathrm{J}$ or … … 1 eV (per atom) is 96.4853365(21) kJ/mol.For comparison: 1. Click hereto get an answer to your question ️ State the relationship between KWh and Joule. One volt is equal to one joule/coulomb, so the number of joules in a charge is equal to a given charge's volts How to convert joules to mega electron volts mev. How long will the footprints on the moon last? Author has 140 answers and 89.8K answer views. This term is … The joule is named for James Prescott Joule (1818 - 1889), who studied the relation between mechanical and heat energy discovered earlier by count Rumford. The work done on the charge is given by the charge times the voltage difference, therefore the work W on electron is: W = qV = (1.6 x 10-19 C) x (1 J/C) = 1.6 x 10-19 J. Electronvolt … The material on this site can not be reproduced, distributed, transmitted, cached or otherwise used, except with prior written permission of Multiply. The electron volt (eV) is a unit of energy equal to approximately 1.6×10?19 joule (J). 1 eV = 1.602176487(40)×10−19 J (the conversion factor is numerically equal to the elementary charge expressed in coulombs). 
The relationship between electron volts and joules is 1 eV = 1.602 × 10⁻¹⁹ J. When light is incident on an atom, a photon can transfer its energy to an electron, which is why the electron volt is the natural energy unit in contexts such as an introduction to optical fiber components.

A volt is a unit of electrical potential, or voltage, across a conductor; one volt is one joule per coulomb. The electron volt, despite its name, is a unit of energy: it is defined as the amount of energy gained by a single electron accelerated through a potential difference of one volt. Since work done = charge × potential difference, the energy in joules acquired by a charge is the number of coulombs times the number of volts, and for one electron

1 eV = (1.6 × 10⁻¹⁹ C) × (1 V) = 1.6 × 10⁻¹⁹ J.

The symbol for the electron volt is eV (lower case e, upper case V). The exact defined value is 1 eV = 1.602 176 634 × 10⁻¹⁹ J, so the energy in joules E(J) is the energy in electron volts E(eV) times 1.602 176 634 × 10⁻¹⁹; conversely, 1 J = 6.2415 × 10¹⁸ eV. Because the charge involved is a single electron, the electron energy in eV is numerically the same as the accelerating voltage in volts: a 5000 V potential difference produces 5000 eV electrons. In the photoelectric effect, the maximum kinetic energy of an ejected photoelectron is K.E.(max) = eV, where V is the stopping potential.

The joule itself is the work done when a force of 1 newton moves an object 1 metre in the direction of its motion; equivalently, it is the energy dissipated when a current of one ampere flows through a one-ohm resistor for one second. A joule is small by everyday standards (it takes roughly 10⁵ J to heat a cup of water from room temperature to boiling) but enormous on the atomic scale: a 100 W lamp dissipates 100 J/s ≈ 6.24 × 10²⁰ eV/s. To convert, say, 14 TeV (the proton-proton collision energy at the LHC when at full operation) to joules, multiply by 1.602 × 10⁻¹⁹ J/eV, giving about 2.2 × 10⁻⁶ J.

For metric prefixes: 1 joule is one millionth of a megajoule and one thousandth of a kilojoule, equals 10⁶ microjoules, and equals 6.2415 × 10¹⁸ electron volts.

The Boltzmann constant links these energy units to temperature: dividing hc/k by a wavelength gives a temperature (one micrometre corresponds to 14 387.777 K), and dividing an energy expressed in electron volts by k in units of eV/K relates voltage to temperature (one volt corresponds to 11 604.518 K).
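Putting the conversion factor to work is a one-liner. The following Python sketch (an illustration added here, not part of any of the quoted pages) checks the two worked examples above:

```python
# Sketch: converting between electron volts and joules.
# Uses the exact defined value 1 eV = 1.602176634e-19 J.
EV_IN_JOULES = 1.602176634e-19

def ev_to_joules(energy_ev):
    """Convert an energy in electron volts to joules."""
    return energy_ev * EV_IN_JOULES

def joules_to_ev(energy_j):
    """Convert an energy in joules to electron volts."""
    return energy_j / EV_IN_JOULES

# 14 TeV, roughly the LHC proton-proton collision energy at full operation:
print(ev_to_joules(14e12))   # ~2.24e-06 J
# A 100 W lamp dissipates 100 J each second:
print(joules_to_ev(100.0))   # ~6.24e+20 eV
```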
2022-12-03 20:20:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5409474968910217, "perplexity": 1530.8866367890253}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710936.10/warc/CC-MAIN-20221203175958-20221203205958-00109.warc.gz"}
https://www.hpmuseum.org/forum/post-73658.html
HP 29E IR GPS What???? 05-24-2017, 06:34 AM (This post was last modified: 05-24-2017 07:21 AM by Geoff Quickfall.) Post: #1 Geoff Quickfall Senior Member Posts: 771 Joined: Dec 2013 HP 29E IR GPS What???? A little teaser of the functions from the labs of Panamatik! Received my HP 29C upgrade from Bernhard's lab. This can be confusing, and I recommend you visit Panamatik if you love LED HP calculators. I grew up with them and have been using an HP 29E IR that I put together using Bernhard's ACT replacement chip. The GPS upgrade came after my talk on the Woodstocks in the 21st Century at HHC2015 Woodstocks in the 21st Century I am now a proud owner and user of the HP 29E IR GPS. Translated, that is an enhanced HP 29C with infrared print capability and a built-in GPS which sends dynamic (updated over time) data to the memory registers in the 29C. Now my 29C ACT was dead but the keyboard was good. I had a 25 with a bad keyboard, good PCA with a bad ACT. Turns out that the 25 ROM has a larger alpha capability than the 29 ROM due to the increased functionality of the 29. However, and this is interesting, by using the 25 PCA with functional ROM, the Panamatik ACT programmed as a 29E in combination with the GPS, the functions of the GPS can be displayed in alpha plus data. So the following video is a 29C shell, 25C board, Panamatik ACT with IR and GPS. In the video the following program is demonstrated: the really neat thing about this is that the GPS will deposit, by request, dynamically updated data to the register of your choice (0 - 7). This program requires ZULU in real time to be placed into REG 0 for use by the program. Each call for REG 0 in the program results in the GPS depositing the latest (equivalent to the time of the program request) ZULU time. Any of the functions can be sent to the registers once the initial f STO x command in GPS mode is invoked. Here is the YouTube video I created today. It is best viewed in 720 mode. HP 29E IR GPS 'world real time clock' enjoy 05-24-2017, 08:51 AM (This post was last modified: 05-24-2017 09:31 AM by Dieter.) Post: #2 Dieter Senior Member Posts: 2,397 Joined: Dec 2013 RE: HP 29E IR GPS What???? (05-24-2017 06:34 AM)Geoff Quickfall Wrote:  Now my 29C ACT was dead but the keyboard was good. I had a 25 with a bad keyboard, good PCA with a bad ACT. Turns out that the 25 ROM has a larger alpha capability than the 29 ROM due to the increased functionality of the 29. However, and this is interesting, by using the 25 PCA with functional ROM, the Panamatik ACT programmed as a 29E in combination with the GPS, the functions of the GPS can be displayed in alpha plus data. Here is a shorter version of the program that frees one label so that you can enter one more timezone/city group. Also, the labels now match the register numbers, so that e.g. GSB 1 uses the timezone stored in R 1. I think this is easier to memorize. Code: 01  LBL 1 02  RCL 1 03  GTO 0 04  LBL 2 05  RCL 2 06  GTO 0 07  LBL 3 08  RCL 3 09  GTO 0 10  LBL 4 11  RCL 4 12  GTO 0 13  LBL 5 14  RCL 5 15  GTO 0 16  LBL 6 17  RCL 6 18  GTO 0 19  LBL 7 20  RCL 7 21  GTO 0 22  LBL 8 23  RCL 8 24  GTO 0 25  LBL 9 26  RCL 9 27  LBL 0 28  RCL 0 29  + 30  2 31  4 32  X<>Y 33  X<0? 34  + 35  2 36  4 37  X>Y?
38  CLX 39  – 40  RTN You could even use the 29C's indirect register addressing (RCL i), which would require the GPS time in a register other than R 0 ....or maybe just a few more steps: I am not sure how the GPS time function works in detail, but assuming R 0 is still available as a storage register this could be a solution: Code: 01  LBL 0 02  RCL 0 03  X<>Y 03  STO 0 04  R↓ 05  RCL i 06  + 07  2 08  4 09  X<>Y 10  X<0? 11  + 12  2 13  4 14  X>Y? 15  CLX 16  – 17  RTN Simply enter 1, 2, 3, .... GSB 0 and get the respective local time. This would allow as many timezones / city groups as memory can hold. Since registers up to R ,5 (i.e. 15) are preserved by constant memory, this would mean up to 15 locations. Of course this requires that the content of R 0 is not changed / overwritten by GPS time during the execution of steps 04 and 05. You said "Each call for REG 0 in the program results in the GPS depositing the latest (...) ZULU time" so that I assume that the current time is generated by the RCL 0 command. In this case everything should work fine ...if R 0 can still be used for indirection. Maybe you can give it a try and report here? Dieter 05-24-2017, 10:13 AM Post: #3 Paul Dale Senior Member Posts: 1,733 Joined: Dec 2013 RE: HP 29E IR GPS What???? (05-24-2017 08:51 AM)Dieter Wrote:  I am not sure how the GPS time function works in detail, but assuming R 0 is still available as a storage register this could be a solution: Time is fixed to register 0 unfortunately. Your first program might even be able to be adapted to the 25E GPS, although likely losing some of the time zones. Pauli 05-24-2017, 01:35 PM Post: #4 Geoff Quickfall Senior Member Posts: 771 Joined: Dec 2013 RE: HP 29E IR GPS What???? Dynamic functions can only be stored in registers 0 through 7. Sort of the same idea as the Sigma stat registers being fixed. The program can be modified using indirect registers for extra zones ,but those are the ones I use at work. Next will be linking a program originally broken into three 98 line sections. Each program calling the next similar to the merge on the 67. Geoff 05-24-2017, 09:40 PM Post: #5 PANAMATIK Senior Member Posts: 1,025 Joined: Oct 2014 RE: HP 29E IR GPS What???? (05-24-2017 10:13 AM)Paul Dale Wrote:  Time is fixed to register 0 unfortunately. Pauli Every register 0-7 can be chosen for the actual time. In general, every register can be randomly chosen for every of the GPS data. Invoking f STO 1 when GPS time is shown uses register 1 for time. Of course you can use register 0 as indirect register. Bernhard That's one small step for a man - one giant leap for mankind. 05-25-2017, 12:48 AM Post: #6 Paul Dale Senior Member Posts: 1,733 Joined: Dec 2013 RE: HP 29E IR GPS What???? Thanks for the corrections. Seems I misunderstood the manual. Pauli 05-25-2017, 03:04 PM Post: #7 Dieter Senior Member Posts: 2,397 Joined: Dec 2013 RE: HP 29E IR GPS What???? (05-24-2017 09:40 PM)PANAMATIK Wrote:  Every register 0-7 can be chosen for the actual time. In general, every register can be randomly chosen for every of the GPS data. Invoking f STO 1 when GPS time is shown uses register 1 for time. Of course you can use register 0 as indirect register. If I understand this correctly – and after reading the ACT manual p. 60/61 – Geoff's original program assumes that the "dynamic data storage" feature has been enabled before by an initial f STO 0 so that the current time is stored in R 0 and updated every second. 
Any manual STO 0 during a regular calculation will stop the permanent update of this register. So there is no way of using any of the eight possible registers both for continuously updated GPS data and temporary use in regular calculations. If R 0 is required as the 29C's index register it cannot be used for GPS data without stopping the continuous data update. Is there a chance that a later 29C GPS version may use more than just R 0 ... R 7? Dieter 05-25-2017, 04:47 PM (This post was last modified: 05-25-2017 04:58 PM by PANAMATIK.) Post: #8 PANAMATIK Senior Member Posts: 1,025 Joined: Oct 2014 RE: HP 29E IR GPS What???? Sorry, that the manual does not describe everything clear enough. (05-25-2017 03:04 PM)Dieter Wrote:  If I understand this correctly – and after reading the ACT manual p. 60/61 – Geoff's original program assumes that the "dynamic data storage" feature has been enabled before by an initial f STO 0 so that the current time is stored in R 0 and updated every second. Yes. But Geoff should use f STO 1 and RCL 1 in the program instead, then R0 is still free to use as index register. (05-25-2017 03:04 PM)Dieter Wrote:  Any manual STO 0 during a regular calculation will stop the permanent update of this register. No, this is true only when you are in GPS menu, not during regular calculation mode. (05-25-2017 03:04 PM)Dieter Wrote:  So there is no way of using any of the eight possible registers both for continuously updated GPS data and temporary use in regular calculations. Yes, because your temporary value will be overwritten every second. Perhaps you think that f STO n enables continuous update generally to all registers? No, f STO n enables continuous update only for the value, which is actually displayed. You can choose which value will be updated individually. (05-25-2017 03:04 PM)Dieter Wrote:  If R 0 is required as the 29C's index register it cannot be used for GPS data without stopping the continuous data update. There are 8 different GPS data available, but you can choose which of them are continuously updated and to which register it is written, the other registers are still available for your calculations. (05-25-2017 03:04 PM)Dieter Wrote:  Is there a chance that a later 29C GPS version may use more than just R 0 ... R 7? This would be required only if you want to use all eight GPS values (UTC time, latitude, longitude, Speed, Heading, Elevation, doHP and No of Satellites) to be dynamically updated. But I don't think you need all of them. At least the HP-29E could be extended to use also registers 8-9 in a later version. Bernhard That's one small step for a man - one giant leap for mankind. 05-25-2017, 06:11 PM Post: #9 Dieter Senior Member Posts: 2,397 Joined: Dec 2013 RE: HP 29E IR GPS What???? (05-25-2017 04:47 PM)PANAMATIK Wrote:  Yes. But Geoff should use f STO 1 and RCL 1 in the program instead, then R0 is still free to use as index register. Geoff does not want to use any indirect register calls, it's me who suggested this way of extending the number of possible locations. (05-25-2017 04:47 PM)PANAMATIK Wrote: (05-25-2017 03:04 PM)Dieter Wrote:  Any manual STO 0 during a regular calculation will stop the permanent update of this register. No, this is true only when you are in GPS menu, not during regular calculation mode. (...) ...your temporary value will be overwritten every second. OK. Now please take a look at the second program version I suggested earlier in this thread: Code: 01  LBL 0 02  RCL 0 03  X<>Y 03  STO 0 04  R↓ 05  RCL i 06  + ..  ... 
Here R0 is recalled, then an index number (1, 2, 3, ...) is stored there and the indexed register containing the timezone is recalled and added. This should take not more than about 0,1...0,2 seconds. Would this mean that the proposed program should work in, say, 4 out of 5 cases, but sometimes the GPS data update happens during these first steps before the RCL i, thus overwrites R0 so that RCL i recalls the wrong register? Anyway, according to the above information this version should work once the GPS time is assigned to R 1: Code: 01  LBL 0 02  STO 0  ; store index 03  ISZ    ; increment index to get register number for timezone 04  RCL 1  ; recall GPS time 05  RCL i  ; add local timezone 06  + 07  2 08  4 09  X<>Y 10  X<0? 11  + 12  2 13  4 14  X>Y? 15  CLX 16  – 17  RTN This version uses R0 as index register while R1 holds the current GPS time (f STO 1). The timezone table is stored in R2 and up, so timezone no. (i) is returned by RCL (i+1). That's why I'd prefer GPS data to be stored in higher registers so that R1...R14 could be used for the timezone table. (05-25-2017 04:47 PM)PANAMATIK Wrote: (05-25-2017 03:04 PM)Dieter Wrote:  Is there a chance that a later 29C GPS version may use more than just R 0 ... R 7? This would be required only if you want to use all eight GPS values (UTC time, latitude, longitude, Speed, Heading, Elevation, doHP and No of Satellites) to be dynamically updated. But I don't think you need all of them. At least the HP-29E could be extended to use also registers 8-9 in a later version. If would be a nice feature even if you only need one or two of the GPS values because it may free up the lower registers for general use. So an f STO ,5 would leave R 0...14 unchanged. Dieter 05-25-2017, 07:05 PM Post: #10 PANAMATIK Senior Member Posts: 1,025 Joined: Oct 2014 RE: HP 29E IR GPS What???? (05-25-2017 06:11 PM)Dieter Wrote:  ... Would this mean that the proposed program should work in, say, 4 out of 5 cases, but sometimes the GPS data update happens during these first steps before the RCL i, thus overwrites R0 so that RCL i recalls the wrong register? Any register, which is assigned to dynamic GPS data, should to be used as read only value, because it could be updated any time. Just use any other register for indexing or calculations. (05-25-2017 06:11 PM)Dieter Wrote:  If would be a nice feature even if you only need one or two of the GPS values because it may free up the lower registers for general use. So an f STO .5 would leave R 0...14 unchanged. For compatibility with HP-25E GPS I will not implement the upper registers for dynamic data. I see no disadvantage to use the upper registers in your program for calculations instead. Bernhard That's one small step for a man - one giant leap for mankind. 05-25-2017, 07:32 PM (This post was last modified: 05-25-2017 07:33 PM by Geoff Quickfall.) Post: #11 Geoff Quickfall Senior Member Posts: 771 Joined: Dec 2013 RE: HP 29E IR GPS What???? Yes, I was lazy and did not use REG 0 for indexing, in fact I forgot about indexing and REG 00! Time to get the 29c manual out again. My initial program used manual input for GMT and this was an extremely minor change. Also, OCD caused me to change the GMT dynamic data to REG 0 :-) Indexing with REG 1 would increase the zones for a more complete world time clock. 
The YouTube video (turn on hi-def and captions) explains and demonstrates the active update of REG 0 and the ability to disable this update: not by manual input to the REG, but by switching to GPS mode and pushing the CLX button. ------------------ You will see in the above coding that there is a second set of data, register 3, with the time zones reversed (page 2). The GPS update of REG 0 must be disabled for this routine to work. I have a pocket alarm set to GMT. It is a wonderful loud and vibrating alarm: Invisible clock. This clock is awkward to change between time zones, so instead I use the second set of data to convert the correct local alarm time to GMT and set that on the GMT alarm clock. Geoff
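The arithmetic inside the RPN listings above is just a wrap-around addition of a timezone offset to the GPS (Zulu) time. Here is a minimal Python sketch of that logic, added for illustration; it is not HP-29C code, it uses decimal hours for simplicity, and the city codes and offsets are placeholders rather than values from the thread:

```python
def local_time(zulu_hours, offset_hours):
    """Add a timezone offset to GPS (Zulu) time and wrap the result into [0, 24)."""
    # Equivalent to the RPN sequence: add, then +24 if negative, then -24 if >= 24.
    return (zulu_hours + offset_hours) % 24.0

# A small "register file" of offsets, in the spirit of R1..R7 in the programs above
# (placeholder values):
offsets = {"YVR": -8, "LHR": 0, "FRA": 1, "HKG": 8}

zulu = 17.5  # e.g. 17:30 Zulu, as the GPS would deposit it into its register
for city, offset in offsets.items():
    print(city, local_time(zulu, offset))
```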
2022-01-28 05:00:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22041864693164825, "perplexity": 2851.1981746003844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305420.54/warc/CC-MAIN-20220128043801-20220128073801-00659.warc.gz"}
https://www.ias.ac.in/listing/bibliography/jess/R_PURVAJA
• R PURVAJA Articles written in Journal of Earth System Science • Numerical modelling approach for the feasibility of shore protection measures along the coast of Kavaratti Island, Lakshadweep archipelago Erosion along Kavaratti Island has intensified in recent times due to infrastructure development and natural phenomena. Numerical models were used to identify suitable foreshore protection structures, considering the near-shore coastal processes. For this purpose, shoreline change around the island was obtained from field surveys and results of the DSAS model. Subsequently, model simulations were conducted for the most appropriate structural protection measures, to understand the change in hydrodynamics and sediment transport that would ultimately result in stabilization of the Kavaratti Island coast. Based on the prevailing conditions, suitable site-specific coastal protection structures (e.g., groynes, revetment, breakwater, submerged geo-tube structures and submerged breakwater) were assessed to determine the most feasible and suitable shore protection measure, and the following was observed: (a) revetment and a submerged geo-tube structure were the most effective protection measures on the eastern part of Kavaratti Island, (b) a significant decrease in current speed from 0.48 to 0.05 m/s, and (c) a significant decrease in wave height (from 2.5 to 0.3 m) and a wave energy reduction of about 50% from the prevailing conditions. With this intervention, the existing shoreline of the island would at least be maintained, possibly preventing any further loss of land. $\bf{Highlights}$ $\bullet$ The net erosion rate is − 1.2 m/yr, with − 1.36 m/yr on the lagoon side and − 2.35 m/yr on the eastern side. $\bullet$ Erosion hotspots are identified along the east and west coasts. The highest erosion rate of − 4.23 m/yr was estimated on the eastern side of the island and on the southwest side of the chicken neck area (− 2.94 m/yr). $\bullet$ Assessment for shoreline change predictions was carried out using the Gencade model during 2018–2028. $\bullet$ Revetment and submerged geo-tube breakwater are the most effective and feasible foreshore protection structures.
2022-09-25 01:19:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21074651181697845, "perplexity": 5582.660929026366}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334332.96/warc/CC-MAIN-20220925004536-20220925034536-00536.warc.gz"}
https://uprivateta.com/shuxuedaixie-mat301-assignment-3daixie/
MAT301 Assignment 3 Show work in all problems, and mark pages for each submitted question. You will lose points otherwise. (1) Does there exist an element of order 42 in $A_{13}$? Justify your answer. (2) Let $G$ be a group, $a, b \in G$ and $|a|=20,|b|=18$ and $\langle a\rangle \cap\langle b\rangle \neq \{e\}$. Prove that $a^{10}=b^{9}$. (3) Let $H \subset(\mathbb{Q},+)$ be a subgroup generated by finitely many elements. Prove that $H$ is cyclic. (4) Does there exist an element $\sigma \in S_{15}$ such that $\sigma^{4}=(3256)$? Justify your answer. (5) Let $$\sigma=\left[\begin{array}{cccccccccccccccccccc} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\ 12 & 5 & 6 & 15 & 1 & 10 & 20 & 2 & 13 & 14 & 17 & 19 & 18 & 3 & 9 & 4 & 7 & 11 & 8 & 16 \end{array}\right]$$ (a) Find $|\sigma|$; (b) Is $\sigma$ even or odd? (6) Let $\alpha=(124)(3521)(542)$. Find $\alpha^{99}$. (7) Let $\alpha=(2\,8\,4\,9\,7\,5)(1\,10\,6\,3)$ and $\beta=(2\,4\,7)(5\,8\,9)(1\,3\,6\,10)$ be elements in $S_{10}$. Find $|\langle\alpha\rangle \cap\langle\beta\rangle|$. Justify your answer. (8) Let $H=\left\{\sigma \in A_{7} \mid \sigma^{2}=\varepsilon\right\}.$ Is $H$ a subgroup of $A_{7}$? For MAT301 Assignment 3 ghostwriting, look for uprivateta™; uprivateta™ is a professional ghostwriting company serving Chinese students around the world. For real analysis ghostwriting (analysis 2, analysis 3), look for UprivateTA™. UprivateTA™ safeguards your study-abroad career. Fall 2018 This is the webpage for MAT301 during Fall 2018. All the course documents will be posted here. We will be using Quercus for the purposes of announcements and recording grades. This is a course on group theory for math major students (non-specialists). Please click here for the course syllabus (the document includes all the logistical information about the course, in particular the grading scheme and policies regarding missed term work). Instructor: Payman Eskandari Office location: 215 Huron St., Room 1012 (located on the 10th floor). Please note that the elevators in the building only go up to the 9th floor. From there you have to take the stairs. Office hours: Fridays 12:30-2:30 and Tuesdays 2:30-4:30 in HU1012 TAs: Thaddeus Janisse ([email protected]), Lennart Doppenschmitt ([email protected]), Jack Ding ([email protected]) TA office hours: Mondays 11-12 (Jack) and Wednesdays 1-2 (Lennart) in PG101 Recommended textbook: Contemporary Abstract Algebra by Gallian, 9th edition Lecture notes Please click here for the latest version of the notes. Every week this file will be updated to include the material covered during that week. If you plan to print the notes, keep in mind that the notes are going to "evolve": the first 10 pages of this week's version may not be identical to the first 10 pages of next week's. This is because, for instance, new subsections will be added to the Preliminaries section as needed. (Final version uploaded on Dec 7, 2018.) If you want to read ahead, here are the notes from a past (Winter 2017) offering of the course. Assignments Assignment 1 deadline extended to Friday Sep 21 at the beginning of the lecture (Note: New version uploaded on Sep 14. There was a typo in question 1b, which is now corrected.) Solutions Assignment 2 submission deadline Friday Oct 5 at 11:59 pm. The solutions are to be submitted on Crowdmark. Solutions Assignment 3 submission deadline Friday Oct 26 at 11:59 pm. The solutions are to be submitted on Crowdmark. Solutions Assignment 4 submission deadline Friday Nov 9 at 11:59 pm. The solutions are to be submitted on Crowdmark. Solutions Assignment 5 submission deadline Monday Nov 26 at 11:59 pm. The solutions are to be submitted on Crowdmark.
Solutions Assignment 6 submission deadline Wednesday Dec 5 at 11:59 pm. The solutions are to be submitted on Crowdmark. Solutions Other documents Week 1 tutorial activity sheet more practice problems for Test 1 Test 1 Solutions Test 2 Solutions with indications on the marking scheme
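For a problem like (5), where the order and parity of an explicit permutation are asked for, a computer algebra system is handy for checking hand computations (the assignment of course expects the work itself). Here is a small sketch with SymPy, using the permutation from problem (5) entered in one-line notation and shifted to 0-based indexing:

```python
# Check the order and parity of the permutation in problem (5) with SymPy.
from sympy.combinatorics import Permutation

# Images of 1..20 under sigma, copied from the assignment, then shifted to 0-based.
images = [12, 5, 6, 15, 1, 10, 20, 2, 13, 14, 17, 19, 18, 3, 9, 4, 7, 11, 8, 16]
sigma = Permutation([i - 1 for i in images])

print(sigma.cyclic_form)  # disjoint-cycle decomposition (0-based labels)
print(sigma.order())      # |sigma| = lcm of the cycle lengths
print(sigma.is_even)      # True if sigma is an even permutation
```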
2022-05-21 21:39:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40943029522895813, "perplexity": 1504.5035496234775}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662541747.38/warc/CC-MAIN-20220521205757-20220521235757-00551.warc.gz"}
http://www.lastfm.es/user/rm1902/library/music/Gavin+DeGraw/_/I+Don't+Want+To+Be?setlang=es
# Colección Música » Gavin DeGraw » ## I Don't Want to Be 114 scrobblings | Ir a la página del tema Temas (114) Tema Álbum Duración Fecha I Don't Want to Be 3:38 15 Jun 2012, 6:46 I Don't Want to Be 3:38 17 May 2011, 18:41 I Don't Want to Be 3:38 31 Dic 2010, 18:02 I Don't Want to Be 3:38 12 Sep 2010, 4:51 I Don't Want to Be 3:38 29 Jun 2010, 16:37 I Don't Want to Be 3:38 24 Jun 2010, 5:27 I Don't Want to Be 3:38 31 May 2010, 17:19 I Don't Want to Be 3:38 15 Ene 2010, 10:32 I Don't Want to Be 3:38 6 Ene 2010, 18:42 I Don't Want to Be 3:38 8 Dic 2009, 16:50 I Don't Want to Be 3:38 20 Nov 2009, 18:29 I Don't Want to Be 3:38 12 Nov 2009, 14:59 I Don't Want to Be 3:38 30 Oct 2009, 8:39 I Don't Want to Be 3:38 6 Sep 2009, 16:06 I Don't Want to Be 3:38 5 Ago 2009, 19:17 I Don't Want to Be 3:38 17 Jul 2009, 14:50 I Don't Want to Be 3:38 17 Jul 2009, 14:50 I Don't Want to Be 3:38 14 Jul 2009, 16:32 I Don't Want to Be 3:38 7 Jul 2009, 13:23 I Don't Want to Be 3:38 5 Jul 2009, 19:19 I Don't Want to Be 3:38 4 Jul 2009, 8:59 I Don't Want to Be 3:38 3 Jul 2009, 21:47 I Don't Want to Be 3:38 2 Jul 2009, 17:35 I Don't Want to Be 3:38 2 Jul 2009, 15:24 I Don't Want to Be 3:38 1 Jul 2009, 19:25 I Don't Want to Be 3:38 30 Jun 2009, 12:45 I Don't Want to Be 3:38 29 Jun 2009, 20:40 I Don't Want to Be 3:38 29 Jun 2009, 19:27 I Don't Want to Be 3:38 29 Jun 2009, 19:27 I Don't Want to Be 3:38 29 Jun 2009, 19:27 I Don't Want to Be 3:38 29 Jun 2009, 19:27 I Don't Want to Be 3:38 29 Jun 2009, 19:23 I Don't Want to Be 3:38 29 Jun 2009, 17:29 I Don't Want to Be 3:38 29 Jun 2009, 16:00 I Don't Want to Be 3:38 28 Jun 2009, 15:17 I Don't Want to Be 3:38 28 Jun 2009, 8:39 I Don't Want to Be 3:38 28 Jun 2009, 8:23 I Don't Want to Be 3:38 27 Jun 2009, 11:19 I Don't Want to Be 3:38 27 Jun 2009, 8:33 I Don't Want to Be 3:38 25 Jun 2009, 19:21 I Don't Want to Be 3:38 25 Jun 2009, 18:37 I Don't Want to Be 3:38 23 Jun 2009, 12:09 I Don't Want to Be 3:38 23 Jun 2009, 12:09 I Don't Want to Be 3:38 23 Jun 2009, 12:09 I Don't Want to Be 3:38 22 Jun 2009, 17:51 I Don't Want to Be 3:38 22 Jun 2009, 17:00 I Don't Want to Be 3:38 21 Jun 2009, 21:08 I Don't Want to Be 3:38 18 Jun 2009, 10:14 I Don't Want to Be 3:38 18 Jun 2009, 10:14 I Don't Want to Be 3:38 18 Jun 2009, 10:14 I Don't Want to Be 3:38 16 Jun 2009, 19:05 I Don't Want to Be 3:38 15 Jun 2009, 18:13 I Don't Want to Be 3:38 15 Jun 2009, 16:57 I Don't Want to Be 3:38 14 Jun 2009, 19:10 I Don't Want to Be 3:38 12 Jun 2009, 7:02 I Don't Want to Be 3:38 12 Jun 2009, 7:02 I Don't Want to Be 3:38 9 Jun 2009, 15:08 I Don't Want to Be 3:38 8 Jun 2009, 17:39 I Don't Want to Be 3:38 7 Jun 2009, 13:57 I Don't Want to Be 3:38 7 Jun 2009, 13:53 I Don't Want to Be 3:38 5 Jun 2009, 16:33 I Don't Want to Be 3:38 4 Jun 2009, 17:11 I Don't Want to Be 3:38 3 Jun 2009, 17:36 I Don't Want to Be 3:38 3 Jun 2009, 17:36 I Don't Want to Be 3:38 3 Jun 2009, 17:36 I Don't Want to Be 3:38 3 Jun 2009, 17:32 I Don't Want to Be 3:38 2 Jun 2009, 19:42 I Don't Want to Be 3:38 2 Jun 2009, 16:06 I Don't Want to Be 3:38 1 Jun 2009, 16:08 I Don't Want to Be 3:38 1 Jun 2009, 12:01 I Don't Want to Be 3:38 31 May 2009, 11:22 I Don't Want to Be 3:38 28 May 2009, 18:46 I Don't Want to Be 3:38 28 May 2009, 18:46 I Don't Want to Be 3:38 28 May 2009, 18:43 I Don't Want to Be 3:38 26 May 2009, 18:20 I Don't Want to Be 3:38 26 May 2009, 16:16 I Don't Want to Be 3:38 24 May 2009, 20:16 I Don't Want to Be 3:38 24 May 2009, 20:12 I Don't Want to Be 3:38 23 May 2009, 11:59 I Don't Want to Be 3:38 22 May 2009, 15:54 I Don't 
Want to Be 3:38 22 May 2009, 15:54 I Don't Want to Be 3:38 22 May 2009, 15:54 I Don't Want to Be 3:38 22 May 2009, 15:54 I Don't Want to Be 3:38 20 May 2009, 14:36 I Don't Want to Be 3:38 20 May 2009, 9:21 I Don't Want to Be 3:38 19 May 2009, 18:32 I Don't Want to Be 3:38 19 May 2009, 16:41 I Don't Want to Be 3:38 18 May 2009, 20:13 I Don't Want to Be 3:38 17 May 2009, 21:09 I Don't Want to Be 3:38 17 May 2009, 19:53 I Don't Want to Be 3:38 17 May 2009, 18:31 I Don't Want to Be 3:38 17 May 2009, 13:56 I Don't Want to Be 3:38 17 May 2009, 13:56 I Don't Want to Be 3:38 17 May 2009, 13:53 I Don't Want to Be 3:38 17 May 2009, 11:28 I Don't Want to Be 3:38 17 May 2009, 10:30 I Don't Want to Be 3:38 17 May 2009, 9:46 I Don't Want to Be 3:38 16 May 2009, 21:31 I Don't Want to Be 3:38 16 May 2009, 15:35 I Don't Want to Be 3:38 16 May 2009, 14:22 I Don't Want to Be 3:38 16 May 2009, 13:51 I Don't Want to Be 3:38 16 May 2009, 9:04 I Don't Want to Be 3:38 15 May 2009, 17:24 I Don't Want to Be 3:38 15 May 2009, 12:17 I Don't Want to Be 3:38 15 May 2009, 11:20 I Don't Want to Be 3:38 15 May 2009, 11:20 I Don't Want to Be 3:38 15 May 2009, 11:20 I Don't Want to Be 3:38 15 May 2009, 11:20 I Don't Want to Be 3:38 15 May 2009, 11:20 I Don't Want to Be 3:38 15 May 2009, 11:20 I Don't Want to Be 3:38 14 May 2009, 21:20 I Don't Want to Be 3:38 14 May 2009, 19:39 I Don't Want to Be 3:38 14 May 2009, 18:38 I Don't Want to Be 3:38 14 May 2009, 18:35
2015-04-02 06:32:02
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.937576949596405, "perplexity": 3070.6081154004046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131317570.85/warc/CC-MAIN-20150323172157-00042-ip-10-168-14-71.ec2.internal.warc.gz"}
https://reperiendi.wordpress.com/page/2/
# reperiendi ## Entropic gravity Posted in Astronomy, General physics, Math by Mike Stay on 2010 July 19 Erik Verlinde has been in the news recently for revisiting Ted Jacobson’s suggestion that gravity is an entropic force rather than a fundamental one. The core of the argument is as follows: Say we have two boxes, one inside the other: +---------------+ | | | +----------+ | | | | | | | | | | | | | | +----------+ | +---------------+ Say the inner box has room for ten bits on its surface and the outer one room for twenty. Each box can use as many “1”s as there are particles inside it: +---------------+ | X | | +----------+ | | | | | | | X | | | | | | | +----------+ | +---------------+ In this case, the inner box has only one particle inside, so there are 10 choose 1 = 10 ways to choose a labeling of the inner box; the outer box has two particles inside, so there are 20 choose 2 = 190 ways. Thus there are 1900 ways to label the system in all. If both particles are in the inner box, though, the number of ways increases: +---------------+ | | | +----------+ | | | | | | | X X | | | | | | | +----------+ | +---------------+ The inner box now has 10 choose 2 ways = 45, while the outer box still has 190. So using the standard assumption that all labelings are equally likely, it’s 4.5 times as likely to find both particles in the inner box, and we get an entropic force drawing them together. The best explanation of Verlinde’s paper I’ve seen is Sabine Hossenfelder’s Comments on and Comments on Comments on Verlinde’s paper “On the Origin of Gravity and the Laws of Newton”. ## A first attempt at re-winding Escher’s “Ascending and Descending” Posted in Uncategorized by Mike Stay on 2010 May 19 And he dreamed, and behold a ladder set up on the earth, and the top of it reached to heaven: and behold the angels of God ascending and descending on it. Edit (May 20): Even though it’s not a conformal transformation, this version looks better in a lot of ways. Rather than cramming the whole picture into a single window frame, it presumes there’s a concentric set of these castles, each half as small as the previous, and built within its open internal patio. Doing it really well would involve extending the walls out to the edge of the outer wall that obscures them. ## I’m not this guy Posted in Uncategorized by Mike Stay on 2010 May 18 ## Faith Posted in Uncategorized by Mike Stay on 2010 May 13 Blind faith is the best antisceptic. ## Tag clouds Posted in Uncategorized by Mike Stay on 2010 May 7 Tag clouds are cumulonymous. ## Soliton Posted in Poetry by Mike Stay on 2010 May 7 With a stroke, the pilot glides forward across the lake. He does not know the names of the vortices cast off by his oar; Neither is he known to the Sun. Posted in General physics, Quantum by Mike Stay on 2010 May 3 This works best with small groups of about 5-10 students and at least thirty dice. Divide the dice evenly among the students. 1. Count the number of dice held by the students and write it on the board. 2. Have everyone roll each die once. 3. Collect all the dice that show a ‘one’, count them, write the sum on the board, then set them aside. 4. Go back to step 1. A run with 30 dice will look something like this: dice number of ones 30 5 25 4 21 4 17 3 14 1 13 3 10 2 8 1 7 1 6 0 6 1 5 0 5 1 4 1 3 0 3 0 3 0 3 1 2 1 1 0 1 0 1 0 1 0 1 1 Point out how the number of dice rolling a one on each turn is about one sixth of the dice that hadn’t yet rolled a one on the previous turn. 
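(If you want to rerun the activity with different numbers of dice or sides, and without thirty physical dice, it is easy to simulate; here is a short Python sketch, an aside that is not part of the lesson plan itself.)

```python
import random

def decay_run(n_dice=30, sides=6):
    """Simulate the activity: roll all remaining dice, set aside those showing
    a 1, record the counts, and repeat until no dice are left."""
    remaining, turn = n_dice, 0
    while remaining > 0:
        ones = sum(1 for _ in range(remaining) if random.randint(1, sides) == 1)
        print(f"turn {turn}: {remaining} dice remaining, {ones} rolled a one")
        remaining -= ones
        turn += 1

decay_run()
```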
Also, that you lose about half of the remaining dice after about four turns. Send someone out of the room; do either four or eight turns, then bring them back and ask them to guess how many turns the group took. The student should be able to see that if half the dice are left, there were only four turns, but if a quarter of the dice are left, there were eight turns. If the students are advanced enough to use logarithms, try the above with some number other than four or eight and have the student use logarithms to calculate the number of turns: turns = log(number remaining/total) / log(5/6), or, equivalently, in terms of the half-life (which is really closer to 3.8 than 4): turns = 3.8 * log(number remaining/total) / log(1/2). When Zircon crystals form, they strongly reject lead atoms: new zircon crystals have no lead in them. They easily accept uranium atoms. Each die represents a uranium atom, and rolling a one represents decaying into a lead atom: because uranium atoms are radioactive, they can lose bits of their nucleus and turn into lead–but only randomly, like rolling a die. Instead of four turns, the half-life of U238 is 4.5 billion years. Zircon forms in almost all rocks and is hard to break down. So to judge the age of a rock, you get the zircon out, throw it in a mass spectrometer, look at the proportion of uranium to (lead plus uranium) and calculate years = 4.5 billion * log(mass of uranium/mass of (lead+uranium)) / log(1/2). Problem: given a zircon crystal where there’s one lead atom for every ninety-nine uranium atoms, how long ago was it formed? 4.5 billion * log(99/100) / log(1/2) = 65 million years ago. In reality, it’s slightly more complicated: there are two isotopes of uranium and several of lead. But this is a good thing, since we know the half-lives of both isotopes and can use them to cross-check each other; it’s as though each student had both six- and twenty-sided dice, and the student guessing the number of turns could use information from both groups to refine her guess. ## Escher and Mandelbrot Posted in Math by Mike Stay on 2010 May 3 If you take a complex number z with argument θ and square it, you double θ. The Mandelbrot/Julia iteration z ↦ z2 + c does pretty much the same thing, but adds wiggles to the curve. Since the iterations stop when |z| > 2, the boundary at the zeroth iteration is a circle; after the first it’s a pear shape, and so on. We can map any point in the region between bands to a point in a rectangular tile that’s periodic once along the outside edge and twice along the inside edge. Here’s a site with a few different examples. The transformation Escher used in “Print Gallery” takes concentric circles at r=1/rn to a logarithmic spiral. The concentric boundaries between iterations for the Julia set at c = 0 are circles. There ought to be a transformation for Mandelbrot / Julia sets similar to the Droste effect but spiraling inward so that the frequency is smoothly increasing just as the distance can be made to smoothly increase. ## The animated gif of Dorian Grey Posted in Uncategorized by Mike Stay on 2010 May 3 My friend Jas wrote a Perl module, Perl::Visualize, that makes Perl/Gif polyglots. He later adapted his technique to Javascript/Gif polyglots. Some guy generalized a quine to print out the source code of the program together with a comment containing the next frame in Conway’s Game of Life. So we have programs that can age themselves, as well as programs that double as pictures. The “aging” process can repeat. 
So someone make a JS/Gif polyglot quine that renders consecutive frames of Dorian Grey! ## Lazulinos Posted in Borges, Fun links, General physics, Perception, Quantum by Mike Stay on 2010 April 27 Lazulinos are quasiparticles in a naturally occurring Bose-Einstein condensate first described in 1977 by the Scottish physicist Alexander Craigie while at the University of Lahore [3]. The quasiparticles are weakly bound by an interaction for which neither the position nor number operator commutes with the Hamiltonian. A measurement of a lazulino’s position will cause the condensate to go into a superposition of number states, and a subsequent measurement of the population will return a random number; also, counting the lazulinos at two different times will likely give different results. Their name derives from the stone lapis lazuli and means, roughly, “little blue stone”. Lazulinos are so named because even though the crystals in which they arise absorb visible light, and would otherwise be jet black, they lose energy through surface plasmons in the form of near-ultraviolet photons, with visible peaks at 380, 402, and 417nm. Optical interference imparts a “laser speckle” quality to the emitted light; Craigie described the effect in a famously poetic way: “Their colour is the blue that we are permitted to see only in our dreams”. What makes lazulinos particularly interesting is that they are massive and macroscopic. Since the number operator does not commute with the Hamiltonian, lazulinos themselves do not have a well-defined mass; if the population is N, then the mass of any particular lazulino is m/N, where m is the total mass of the condensate. In a recent follow-up to the “quantum mirage” experiment [2], Don Eigler’s group at IBM used a scanning tunneling microscope to implement “quantum mancala”—picking up the lazulino ‘stones’ in a particular location usually changes the number of stones, so the strategy for winning becomes much more complicated. In order to pick up a fixed number of stones, you must choose a superposition of locations [1]. 1. C.P. Lutz and D.M. Eigler, “Quantum Mancala: Manipulating Lazulino Condensates,” Nature 465, 132 (2010). 2. H.C. Manoharan, C.P. Lutz and D.M. Eigler, “Quantum Mirages: The Coherent Projection of Electronic Structure,” Nature 403, 512 (2000). Images available at http://www.almaden.ibm.com/almaden/media/image_mirage.html 3. A. Craigie, “Surface plasmons in cobalt-doped Y3Al5O12,” Phys. Rev. D 15 (1977). Also available at http://tinyurl.com/35oyrnd. ## NY Times article on conditional probability Posted in Uncategorized by Mike Stay on 2010 April 27 There’s actually very good justification for this method of reasoning: it maximizes entropy. ## Coends Posted in Category theory, Math, Quantum by Mike Stay on 2010 April 11 Coends are a categorified version of “summing over repeated indices”. We do that when we’re computing the trace of a matrix and when we’re multiplying two matrices. It’s categorified because we’re summing over a bunch of sets instead of a bunch of numbers. Let $C$ be a small category. 
The functor $\mbox{hom}:C^{\mbox{op}} \times C \to Set$ assigns • to each pair of objects the set of morphisms between them, and • to each pair of morphisms $(f:c \to c', h:d\to d')$ a function that takes a morphism $g \in \mbox{hom}(c', d)$ and returns the composite morphism $h \circ g \circ f \in \mbox{hom}(c, d')$, where $c, c', d, d' \in \mbox{Ob}(C).$ It turns out that given any functor $S:C^{\mbox{op}} \times D \to \mbox{Set},$ we can make a new category where $C$ and $D$ are subcategories and $S$ is actually the hom functor; some keywords for more information on this are “collages” and “Artin glueing”. So we can also think of $S$ as assigning • to each pair of objects a set of morphisms between them, and • to each pair of morphisms $(f:c \to c', h:d\to d')$ a function that takes a morphism $g \in S(c', d)$ and returns the composite morphism $h \circ g \circ f \in S(c, d')$, where $c,c' \in \mbox{Ob}(C)$ and $d,d' \in \mbox{Ob}(D).$ We can think of these functors as adjacency matrices, where the two parameters are the row and column, except that instead of counting the number of paths, we’re taking the set of paths. So $S$ is kind of like a matrix whose elements are sets, and we want to do something like sum the diagonals. The coend of $S$ is the coequalizer of the diagram $\begin{array}{c}\displaystyle \coprod_{f:c \to c'} S(c', c) \\ \\ \displaystyle S(f, c) \downarrow \quad \quad \downarrow S(c', f) \\ \\ \displaystyle \coprod_c S(c, c) \end{array}$ The top set consists of all the pairs where • the first element is a morphism $f \in \mbox{hom}(c, c')$ and • the second element is a morphism $g \in S(c', c).$ The bottom set is the set of all the endomorphisms in $S.$ The coequalizer of the diagram, the coend of $S,$ is the bottom set modulo a relation. Starting at the top with a pair $(f, g),$ the two arrows give the relation $\displaystyle c \stackrel{f}{\to} c' \stackrel{g}{\multimap} c \stackrel{c}{\to} c \quad \sim \quad c' \stackrel{c'}{\to} c' \stackrel{g}{\multimap} c \stackrel{f}{\to} c',$ where I’m using the lollipop to mean a morphism from $S.$ So this says take all the endomorphisms that can be chopped up into a morphism $f$ from $\mbox{hom}$ going one way and a $g$ from $S$ going the other, and then set $fg \sim gf.$ For this to make any sense, it has to identify any two objects related by such a pair. So it’s summing over all the endomorphisms of these equivalence classes. To get the trace of the hom functor, use $S = \mbox{hom}$ in the analysis above and replace the lollipop with a real arrow. If that category is just a group, this is the set of conjugacy classes. If that category is a preorder, then we’re computing the set of isomorphism classes. The coend is also used when “multiplying matrices”. 
Let $S(c', c) = T(b, c) \times U(c', d).$ Then the top set consists of triples $(f: c\to c',\quad g:b \multimap c,\quad h:c' \multimap d),$ the bottom set of pairs $(g:b \multimap c, \quad h:c \multimap d),$ and the coend is the bottom set modulo $(\displaystyle b \stackrel{g}{\multimap} c \stackrel{c}{\to} c, \quad c \stackrel{f}{\to} c' \stackrel{h}{\multimap} d) \quad \sim \quad (\displaystyle b \stackrel{g}{\multimap} c \stackrel{f}{\to} c', \quad c' \stackrel{c'}{\to} c' \stackrel{h}{\multimap} d)$ That is, it doesn’t matter if you think of $f$ as connected to $g$ or to $h$; the connection is associative, so you can go all the way from $b$ to $d.$ Notice here how a morphism can turn “inside out”: when $f$ and the identities surround a morphism in $S$, it’s the same as being surrounded by morphisms in $T$ and $U$; this is the difference between a trace, where we’re repeating indices on the same matrix, and matrix multiplication, where we’re repeating the column of the first matrix in the row of the second matrix. ## 5-axis mill Posted in Uncategorized by Mike Stay on 2010 April 9 I really like the different tones on the metal, from polished to brushed, to whatever cool thing they did to get their name in lettering on the back. ## marginalia Posted in Uncategorized by Mike Stay on 2010 April 8 Idea for an annotation engine: • Annotation has the form (search query, regular expression, content) • search query should be in a form where given the content and URL of a page you can tell if it ought to match the query. • execute the search query; for each hit • run the regex on the result; if it matches ## Artificial photosynthesis Posted in Uncategorized by Mike Stay on 2010 March 19 96% efficiency at turning CO2 and H20 into sugar. ## PTSD in Haiti Posted in Uncategorized by Mike Stay on 2010 March 17 Lucas Williams writes: [This was written as a response to close friend of mine who is a social worker in the greater Hartford area. She asked me if I had any anecdotes from Haiti that she could share at an upcoming conference on mental health and trauma. I tried to jot down just a few thoughts, but found that I couldn’t stop typing. This is a letter to a friend, not an essay or paper, but I want to share it. I hope that it can be of help to someone else, either to a returning aid worker, or someone who hasn’t been to Haiti yet wants to understand, to feel a little bit of what it is like.] Dear ***** I’m in Miami right now, I landed last night. I’ve been in Haiti, working as an Emergency Department technician in a field hospital for almost 6 weeks, and it’s more than a little weird to be lying on a friends couch watching TV. I head back to Port-au-Prince on Saturday for another 5 weeks. I tried to write you a short reply about post-trauma and it turned into a novel. By the second paragraph I realized I was really writing it for me, but it just kept on going. I don’t know if this is going to be much help, but I’m sending it anyways, because each time I try and clean it up it just gets messier, and longer. I think about post-traumatic stress disorder a lot. Before I left for Haiti I thought of it as a Western illness. It wasn’t that I thought it wasn’t real — I just believed that you had to be emotionally “fragile” before the trauma in order to be severely affected by it afterwards. We had a psychologist on the plane with us, and my first thought when I heard she was coming was “what the hell is she going to do?” Haiti is the poorest country in this hemisphere. 
These people are tough as nails, they won’t need a shrink, and besides, there’s no way she could be effective dealing with such a monstrous language barrier. Less than 48 hours from the time we landed I would feel very differently. ## Burninatin’ the countryside Posted in Uncategorized by Mike Stay on 2010 March 15 With consummate Vs! ## Pretty jewelry Posted in Uncategorized by Mike Stay on 2010 March 9 The artist says the bands of color are due to a diffraction grating; the base is recycled titanium. Is it etched, or ground, or what? The page only says “a multistep process”. ## Life imitates Art Posted in Borges by Mike Stay on 2010 March 8 In “Tlön, Uqbar, Orbis Tertius“, Borges describes a group of dedicated people who describe a world, Tlön, in such detail and produce enough forged artifacts that the whole world adopts their vision and begins to convert itself into Tlön. I attributed a farcical version of Borges’ story “Death and the Compass” to a pseudonymous Umberto Eco in the last post; in that story, the detective’s supposition that there is a pattern induces his prey to begin using the pattern in order to entrap him. In fact, Eco borrowed that idea for his own detective story, The Name of the Rose. The Bible formed the basis for the production of thousands of fraudulent relics, and the Book of Mormon inspired Mark Hoffmann to create forgeries of letters from the early Mormon community (though his were designed to destroy the community instead of build it). Cellphones look like Star Trek communicators, and Bluetooth headsets look like Uhura’s earpiece. Of course, there’s all the merchandising from Star Wars and Lord of the Rings. And my brother David just gave my brother Doug a set of the six signs from Susan Cooper’s The Dark is Rising sequence. What story has motivated you to create your own artifacts? ## Creative ways to ask a girl on a date Posted in Borges by Mike Stay on 2010 March 8 In Utah, it is expected that high-school students and undergraduates will come up with extravagant ways, often including horrible puns, to ask a girl out on a date and to reply to such an invitation. My cousin’s family, having just moved there, asked on the family list for ideas. My response: My date read the “police beat” religiously, so I got Umberto Eco to write a story about a detective on the BYU campus police force who believes that two crimes (involving the vandalism of FARMS researchers’ webpages with screeds linking kabbalah to quantum field theory) are related even though, in fact, they are not. The detective was responsible for a student’s conviction of violating the Honor Code and eventual expulsion. The student learns of the detective’s interest and begins to commit crimes that fit the imagined pattern; the detective predicts where the final crime will be committed and stakes out the place, but because the student was expecting him, he is captured by the student and his friends and forced to wear U of U paraphernalia. I had it published pseudonymously in the Daily Herald, and then used my network of friends to commit 137 minor crimes across campus for the next six months prior to the dance that spelled out “[Name], will you go to the dance with me?” I waited at the place where the point of the question mark would be, but she never showed up. She did, however, wire my brakes to my horn and leave a sheet with a big “YES” painted on it under the hood. This would have been amusing had the new wiring not caused electrical arcing. 
The sheet caught fire, which caught other things on fire, which eventually burned out the interior of the car. It wasn’t too much of a loss, though, because it was a 1973 Honda Civic that I’d bought from a graduating senior for \$200. Anyway, neither of us wanted to dance all that much in the first place, so we went down to the underground laser lab where my roommate worked and then explored the steam tunnels. ## Birdfeeders in UK are splitting Blackcaps into two species Posted in Uncategorized by Mike Stay on 2010 March 5 There’s a mutation that sends blackcaps in the wrong direction when migrating for the winter, and they end up in Britain. Before humans started putting out bird feeders, they would merely die. But now, they’re surviving and living to reproduce. They spend part of their time in Europe, but they overlap less with the birds that migrate over the alps; the two groups are now more genetically distinct between groups than within them. This gap arose in around 50 years; if it holds up, the groups may cease to interbreed entirely and will become two different species. ## The Murder of Asher Ben-Judah Posted in Borges by Mike Stay on 2010 March 4 Here’s a story I wrote; it’s inspired by Borges’ collection of stories “A Universal History of Infamy.” The Murder of Asher Ben-Judah In the fourth year of the reign of Nebuchadnezzar II, Egypt successfully repelled the invasion by Babylon. Believing Babylon to be weakened, Jehoiakim of Jerusalem stopped paying tribute to Babylon, took a pro-Egyptian position, and promptly died. His son Jeconiah chose to continue the policy; one hundred days later, he was deposed by Nebuchadnezzar II for rebellion. The Babylonian king sacked the temple, took captive all the nobility and craftsmen who had not fled the city—some ten thousand people—and carried them off to Babylon; the prophet Ezekiel was among them. Before leaving, perhaps mockingly, Nebuchadnezzar annointed Jeconiah’s uncle Mattaniah, clothed him in the robes of kingship, and gave him the new name “Righteousness of the LORD.” Despite the destruction, the harvest that year was a good one for farmers, and the sale of the excess bought capital for rebuilding the city. Those wise and wealthy enough to have fled Jerusalem with their property in anticipation of the inevitable response to Jehoiakim’s stupidity returned; among them was the ward boss Asher Ben-Judah. Asher was a master at organizing labor; he was often and fruitfully compared to Father Jacob’s father-in-law for having the cunning to convince a man to work fourteen years in the hope of being paid someday. However, when cunning failed, Asher was not above resorting to other motivators: he was also a master at organizing crime. If one were to speak to a particular man in the bazaar, he would recite a list of Asher’s prices: • Punching – 2 shekels, • Both Eyes Blackened – 4 shekels, • Nose & Jaw Broken – 10 shekels, • Ear Chawed Off – 15 shekels, • Leg Or Arm Broken – 19 shekels, • Stab – 25 shekels, • Doing the Job – 100 shekels and up. As the armies of Babylon flooded the country, Asher came to rest in the mountains of Ararat. A generation before, the Arartian king Rusa II had built more cities than Solomon, Ramses, Semiramis and Sargon put together; the blind arches of Rusahinili and Teishebaini rivaled the fortifications of Ninevah. Asher knew there would be plenty of work for masons in rebuilding Jerusalem after Babylon was through with it. 
Another household returning to Jerusalem that year was that of Asher’s second cousin “Jawbone” Ben-Samson, a merchant dealing in precious metals and a smith in his own right, having received the secrets of metallurgy from his fathers. Ben-Samson had chosen to find refuge in Egypt, where Babylon could not follow, and returned with artifacts of gold, silver, brass, and steel. Though Asher cared nothing for working metal, he was the firstborn and had inherited the sword forged by their great-grandfather; the iron was cast down from heaven and laid waste to a forest near Damascus. Such iron was very rare and very valuable, since it was pure enough to be strengthened by forging in charcoal; iron extracted from ore already had too much of the black ash in it, and would become brittle. It’s unclear what happened to spark Ben-Samson’s madness. He began to accuse the king of plotting against Babylon; the king, who owed his throne to Nebuchadnezzar’s grace, ordered his death, but Ben-Samson escaped to the desert. He began to forget key metallurgical processes; he sent his sons to Asher to coerce him into giving them their great-grandfather’s records. Asher turned them away, but they returned and attempted to buy the records; insulted at the prospect of selling his birthright, Asher told his men to kill the intruders. “Jawbone” Ben-Samson was not so named because he was a weakling, and his sons lived up to their name: they fought off the thugs and escaped, but in the scuffle they dropped the keys to their family’s treasury. Since neither Ben-Samson nor his sons could reenter the city to claim their property, Asher became the second-richest man in Jerusalem. Asher, dressed in his finest, went out on the town to celebrate. He bought everyone drinks at the ward tavern and used his favorite prostitute; near the end of the third watch he stumbled out the door towards home. Asher Ben-Judah was found stripped and decapitated the next day; his sword and his great grandfather’s records were missing. Neither Ben-Samson nor his sons ever returned to Jerusalem. ## Theories and models Posted in Category theory, Math, Programming by Mike Stay on 2010 March 4 The simplest kind of theory is just a set $T,$ thought of as a set of concepts or Platonic ideals. We typically have some other set $S,$ thought of as the set of real things that are described by concepts. Then a model is a function $f:T \to S.$ For example, we could let $T$ = {0, 1}; this is the theory of the number “two”, since it has two elements. Whatever set we choose for $S$, the models of the theory are going to be pairs of elements of $S$. So if $S$ = the set of people, models of $T$ in $S$ are going to be pairs of people (where choosing the same person twice is allowed). Concepts, however, are usually related to each other, whereas in a set, you can only ask if elements are the same or not. So the way a theory is usually presented is as a category $T.$ For example let $T$ be the category with two objects $E, V$ and two parallel nontrivial morphisms $\sigma, \tau:E\to V,$ and let $S$ be the category Set of sets and functions. A model is a structure-preserving map from the category $T$ to Set, i.e. a functor. Each object of $T$ gets mapped to a set; here we think of the image of $V$ as a set of vertices and the image of $E$ as a set of edges. Each morphism of $T$ gets mapped to a function; $\sigma$ and $\tau$ take an edge and produce the source vertex or target vertex, respectively. 
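Concretely, then, a model of this theory is nothing more than two sets and two functions between them. Here is a tiny sketch of one such model written out in Python (my own illustration, not code from the post): the data of a finite graph.

```python
# A model of Th(Graph) in Set: a set E of edges, a set V of vertices,
# and two functions source, target : E -> V.
E = {"e1", "e2", "e3"}
V = {"a", "b", "c"}

source = {"e1": "a", "e2": "a", "e3": "b"}
target = {"e1": "b", "e2": "c", "e3": "c"}

# Being a functor here just amounts to source and target being total
# functions from E into V:
assert all(source[e] in V and target[e] in V for e in E)
```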
The category $T$ = Th(Graph) is the theory of a graph, and its models are all graphs. Usually, our theories have extra structure. Consider the first example of a model, a function between sets. We can add structure to the theory; for example, we can take the set $T$ to be the ring of integers $\mathbb{Z}.$ Then a model is a structure-preserving function, a homomorphism between $T$ and $S.$ Of course, this means that $S$ has to have at least as much structure as $T.$ We could, for instance, take $S$ to be the real numbers under multiplication. Since this ring homomorphism is entirely determined by where we map 1, and we can choose any real number for its image, there would be one model for each real number; each integer $x$ would map to $a^x$ for some $a.$ Another option is to take $S = \mathbb{Z}_4,$ the integers modulo 4. There are three nonisomorphic models of $\mathbb{Z}$ in $\mathbb{Z}_4$. If we map 1 to 0, we get the trivial ring; if we map 1 to 1 or 3, we get integers modulo 4; and if we map 1 to 2, we get integers modulo 2. Similarly, we can add structure to a category. If we take monoidal categories $T, S,$ then we can tensor objects together to get new ones. A model of such a theory is a tensor-product-preserving functor from $T$ to $S.$ See my paper “Physics, Topology, Computation, and Logic: a Rosetta Stone” with John Baez for a thorough exploration of theories that are braided monoidal closed categories and models of these. An element of the set $\mathbb{Z}_4$ is a number, while an object of the category Th(Graph) is a set. A theory is a mathematical gadget in which we can talk about theories of one dimension lower. In Java, we say “interface” instead of “theory” and “class” instead of “model”. With Java interfaces we can describe sets of values and functions between them; it is a cartesian closed category whose objects are datatypes and whose morphisms are (roughly) programs. Models of an interface are different classes that implement that interface. And there’s no reason to stop at categories; we can consider bicategories with structure and structure-preserving functors between these; these higher theories should let us talk about different models of computation. One model would be Turing machines, another lambda calculus, a third would be the Java Virtual Machine, a fourth Pi calculus. ## Digital compositing Posted in Uncategorized by Mike Stay on 2010 March 3 Stargate studios shows off: ## Phort Tiger Posted in Uncategorized by Mike Stay on 2010 February 25 When I was seven, my parents gave me the frame of a clubhouse for my birthday; it had two walls and a roof. I and the neighborhood kids added the other walls; when someone down the street replaced the shingles on their roof, we took the discarded ones and shingled ours. We did half of it wrong before figuring out how shingles are supposed to go (start at the bottom!) Someone else found the remains of an alphabet used to put a surname on a mailbox. It was missing the letter “F”, so the clubhouse became “Phort Tiger”, anticipating the F -> PH meme by a decade and a half. We seceded from the union and declared our backyard to be the sovereign nation of Tigeria. ## Mechanistic creativity Posted in Uncategorized by Mike Stay on 2010 February 25 Computers are better now at face recognition than humans. My brother Doug has written photoshop filters that can do a watercolor painting over a pencil sketch given a photo. 
And now, David Cope has produced really beautiful music from a computer; the genius of it is his grammatical analysis of music: Again, Cope hit the books, hoping to discover research into what that something was. For hundreds of years, musicologists had analyzed the rules of composition at a superficial level. Yet few had explored the details of musical style; their descriptions of terms like “dynamic,” for example, were so vague as to be unprogrammable. So Cope developed his own types of musical phenomena to capture each composer’s tendencies — for instance, how often a series of notes shows up, or how a series may signal a change in key. He also classified chords, phrases and entire sections of a piece based on his own grammar of musical storytelling and tension and release: statement, preparation, extension, antecedent, consequent. The system is analogous to examining the way a piece of writing functions. For example, a word may be a noun in preparation for a verb, within a sentence meant to be a declarative statement, within a paragraph that’s a consequent near the conclusion of a piece. This kind of endeavor is precisely what the science of teaching is about; if Cope can teach a computer to make beautiful music, he can teach me to make beautiful music. By abstracting away the particular notes and looking at what makes music Bach-like as opposed to Beethoven-like or Mozart-like, he has shown us where new innovation will occur: first, in exploring the space, and second, in adding new dimensions to that space. ## Algorithmic thermodynamics Posted in Uncategorized by Mike Stay on 2010 February 22 John Baez and I just wrote a paper entitled “Algorithmic Thermodynamics.” Li and Vitányi coined this phrase for their study of the Kolmogorov complexity of physical microstates; in their model, given an encoding $x$ of a macrostate (a measurement of a set of observables of the system to some accuracy), the entropy $S(x)$ of the system is a sum of two parts, the algorithmic entropy $K(x)$ and the uncertainty entropy $H(x)$. The algorithmic entropy is roughly the length of the shortest program producing $x$, while the uncertainy is a measure of how many microstates there are that satisfy the description $x$. So roughly the microstates in their model are outputs of Turing machines. In our model, microstates are inputs to Turing machines, specifically inputs that cause the machine to halt and give an output. Then we specify a macrostate using some observables of the program (computable functions from bit strings to real numbers, like the length, or runtime, or output of the program). Once we’ve specified the macrostate by giving the average values $\overline{C_i}$ of some observables $C_i,$ we can ask what distribution on microstates (halting programs) maximizes the entropy; this will be a Gibbs distribution $\displaystyle p(x) = \frac{1}{Z} \exp\left(-\sum_i \beta_i C_i(x)\right),$ where $\displaystyle Z = \sum_{x \in X} \exp\left(-\sum_i \beta_i C_i(x)\right)$ and $\displaystyle -\frac{\partial}{\partial \beta_i} \ln Z = \overline{C_i}.$ The entropy of this system is $\displaystyle S(p) = -\sum_{x \in X} p(x) \ln p(x);$ from this formula we can derive definitions of the conjugates of the observables, just like in statistical mechanics. 
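As a toy illustration of these formulas (not the construction in the paper itself: the real sum ranges over all halting programs, which is not computable, and both the set of strings and the value of beta below are hand-picked), here is a short Python sketch of the Gibbs distribution with program length as the single observable.

import math

X = ["0", "1", "00", "01", "10", "110", "1110"]   # pretend these are halting programs
C = {x: len(x) for x in X}                        # one observable: program length
beta = 0.7                                        # conjugate variable, chosen by hand

Z = sum(math.exp(-beta * C[x]) for x in X)        # partition function
p = {x: math.exp(-beta * C[x]) / Z for x in X}    # Gibbs distribution

mean_C = sum(p[x] * C[x] for x in X)              # the constrained average of C
S = -sum(p[x] * math.log(p[x]) for x in X)        # entropy of the distribution

print(f"Z = {Z:.4f}, <C> = {mean_C:.4f}, S = {S:.4f}")

Varying beta changes the constrained average, exactly as varying a conjugate variable does in statistical mechanics.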
If we pick some observable $C_j$—say, the runtime of the program—to play the role of the energy $E,$ then its conjugate $\beta_j$ plays the role of inverse temperature $1/T:$ $\displaystyle \frac{1}{T} = \left.\frac{\partial S}{\partial E}\right|_{C_{i \ne j}}.$ Given observables to play the roles of volume and number of particles—say, the length and output, respectively—we can similarly define analogs of pressure and chemical potential. Given these, we can think about thermodynamic cycles like those that power heat engines, or study the analogs to Maxwell’s relations, or study chemical reactions–all referring to programs instead of pistons. And since the observables are arbitrary computable functions of the program bit string, we can actually recover Li and Vitányi’s meaning for ‘algorithmic thermodynamics’ by interpreting the output as a description of a physical macrostate; so our use of the term includes theirs as a special case. ## Tutorial on how to overlay tiles on Google maps Posted in Uncategorized by Mike Stay on 2010 February 21 ## The Science of Benjamin Button Posted in Uncategorized by Mike Stay on 2010 January 28 ## England on fasting Posted in Uncategorized by Mike Stay on 2010 January 27 ## volcano Posted in Uncategorized by Mike Stay on 2010 January 27 ## storm Posted in Uncategorized by Mike Stay on 2010 January 27 ## sky Posted in Uncategorized by Mike Stay on 2010 January 27 ## wave Posted in Uncategorized by Mike Stay on 2010 January 27 ## For Aidan’s birthday? Posted in Uncategorized by Mike Stay on 2010 January 26 Fifty dangerous things you should let your kids do. ## Signatures of consciousness Posted in Perception by Mike Stay on 2010 January 26 Things that happen in the brain that correlate well with being conscious of something. ## Incredibly cool essay on morality Posted in Uncategorized by Mike Stay on 2010 January 26 ## Aleph and Omega Posted in Borges, Math, Perception, Theocosmology, Time by Mike Stay on 2010 January 14 I shut my eyes — I opened them. Then I saw the Aleph. I arrive now at the ineffable core of my story. And here begins my despair as a writer. All language is a set of symbols whose use among its speakers assumes a shared past. How, then, can I translate into words the limitless Aleph, which my floundering mind can scarcely encompass? Mystics, faced with the same problem, fall back on symbols: to signify the godhead, one Persian speaks of a bird that somehow is all birds; Alanus de Insulis, of a sphere whose center is everywhere and circumference is nowhere; Ezekiel, of a four-faced angel who at one and the same time moves east and west, north and south. (Not in vain do I recall these inconceivable analogies; they bear some relation to the Aleph.) Perhaps the gods might grant me a similar metaphor, but then this account would become contaminated by literature, by fiction. Really, what I want to do is impossible, for any listing of an endless series is doomed to be infinitesimal. In that single gigantic instant I saw millions of acts both delightful and awful; not one of them occupied the same point in space, without overlapping or transparency. What my eyes beheld was simultaneous, but what I shall now write down will be successive, because language is successive. Nonetheless, I’ll try to recollect what I can. On the back part of the step, toward the right, I saw a small iridescent sphere of almost unbearable brilliance. 
At first I thought it was revolving; then I realised that this movement was an illusion created by the dizzying world it bounded. The Aleph’s diameter was probably little more than an inch, but all space was there, actual and undiminished. Each thing (a mirror’s face, let us say) was infinite things, since I distinctly saw it from every angle of the universe. I saw the teeming sea; I saw daybreak and nightfall; I saw the multitudes of America; I saw a silvery cobweb in the center of a black pyramid; I saw a splintered labyrinth (it was London); I saw, close up, unending eyes watching themselves in me as in a mirror; I saw all the mirrors on earth and none of them reflected me; I saw in a backyard of Soler Street the same tiles that thirty years before I’d seen in the entrance of a house in Fray Bentos; I saw bunches of grapes, snow, tobacco, lodes of metal, steam; I saw convex equatorial deserts and each one of their grains of sand; I saw a woman in Inverness whom I shall never forget; I saw her tangled hair, her tall figure, I saw the cancer in her breast; I saw a ring of baked mud in a sidewalk, where before there had been a tree; I saw a summer house in Adrogué and a copy of the first English translation of Pliny — Philemon Holland’s — and all at the same time saw each letter on each page (as a boy, I used to marvel that the letters in a closed book did not get scrambled and lost overnight); I saw a sunset in Querétaro that seemed to reflect the colour of a rose in Bengal; I saw my empty bedroom; I saw in a closet in Alkmaar a terrestrial globe between two mirrors that multiplied it endlessly; I saw horses with flowing manes on a shore of the Caspian Sea at dawn; I saw the delicate bone structure of a hand; I saw the survivors of a battle sending out picture postcards; I saw in a showcase in Mirzapur a pack of Spanish playing cards; I saw the slanting shadows of ferns on a greenhouse floor; I saw tigers, pistons, bison, tides, and armies; I saw all the ants on the planet; I saw a Persian astrolabe; I saw in the drawer of a writing table (and the handwriting made me tremble) unbelievable, obscene, detailed letters, which Beatriz had written to Carlos Argentino; I saw a monument I worshipped in the Chacarita cemetery; I saw the rotted dust and bones that had once deliciously been Beatriz Viterbo; I saw the circulation of my own dark blood; I saw the coupling of love and the modification of death; I saw the Aleph from every point and angle, and in the Aleph I saw the earth and in the earth the Aleph and in the Aleph the earth; I saw my own face and my own bowels; I saw your face; and I felt dizzy and wept, for my eyes had seen that secret and conjectured object whose name is common to all men but which no man has looked upon — the unimaginable universe. I felt infinite wonder, infinite pity… (Jorge Luis Borges, The Aleph) A finitely-refutable question is one of the form, “Does property X holds for all natural numbers?” Any mathematical question admitting a proof or disproof is in this category. If you believe the ideas of digital physics, then any question about the behavior of some portion of the universe is in this category. We can encode any finitely refutable question as a program that iterates through the natural numbers and checks to see if it’s a counterexample. If so, it halts; if not, it goes to the next number. The halting probability of a universal Turing machine is a number between zero and one. 
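Backing up to the counterexample-search encoding described above, here is a small Python sketch; Goldbach's conjecture is used only as a familiar stand-in for "property X", and the point is just that the program halts exactly when the universal claim fails.

# Sketch of encoding a finitely-refutable question as a halting question.
# "Does property P hold for every natural number?" becomes
# "does this search loop run forever?".

def is_sum_of_two_primes(n: int) -> bool:
    def is_prime(k: int) -> bool:
        return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))
    return any(is_prime(a) and is_prime(n - a) for a in range(2, n - 1))

def search_for_counterexample() -> int:
    """Halts (returning a counterexample) iff the universal claim is false."""
    n = 4
    while True:
        if not is_sum_of_two_primes(n):
            return n        # the program halts: the claim is refuted
        n += 2              # otherwise keep searching forever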
Given the first $n$ bits of this number, there is a program that will compute which $n$-bit programs halt and which don’t. Assuming digital physics, all those things Borges wrote about in the Aleph are in the Omega. There’s a trivial way–the Omega is a normal number, so every sequence of digits appears infinitely often–but there’s a more refined way: ask any finitely-refutable question using an $n$-bit program and the first $n$ bits of Omega contain the proper information to compute the answer. The bits of Omega are pure information; they can’t be computed from a fixed-size program, like the bits of $\pi$ can. ## goldie Posted in Uncategorized by Mike Stay on 2009 December 29 ## Scott Aaronson’s work in computational complexity Posted in Uncategorized by Mike Stay on 2009 November 30 A really fun paper by Scott Aaronson, part of which considers the physical implications of P != NP. ## waves Posted in Uncategorized by Mike Stay on 2009 November 22 Gallery here. ## leopard Posted in Uncategorized by Mike Stay on 2009 November 22 ## aspen Posted in Uncategorized by Mike Stay on 2009 November 1 ## Burning bright Posted in Uncategorized by Mike Stay on 2009 October 29 Bengal tiger portrait shoot, with all four varieties. ## Devils on Mars Posted in Uncategorized by Mike Stay on 2009 October 29
2017-06-28 01:58:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 106, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.405460000038147, "perplexity": 1635.4930441138636}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128322275.28/warc/CC-MAIN-20170628014207-20170628034207-00697.warc.gz"}
https://tex.stackexchange.com/questions/524430/custom-hyphenation-rule-does-not-work-on-overleaf
# Custom \hyphenation rule does not work on Overleaf

I want to break a long word to the next line in a 2-column acmart template. The word is Sample_Super.Very.Uber.Long.Word. I have tried with \hyphenation{Sample_Super-.Very-.Uber-.Long-.Word} but it does not break the word. Below is the sample on Overleaf:

\documentclass[sigconf]{acmart}
\usepackage[utf8]{inputenc}
\usepackage{hyphenat}
\hyphenation{Sample_Super-.Very-.Uber-.Long-.Word}
\title{test}
\begin{document}
\maketitle
\section{Introduction}
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Sample_Super.Very.Uber.Long.Word Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
\end{document}

I also got an error notification on Overleaf that says "Improper \hyphenation will be flushed. Not a letter." even though the document still compiles. How do I set this rule? The same word can appear in many places, so I'd like to set a global rule for it.

• Have you tried removing the dots from the \hyphenation entry? Jan 15, 2020 at 21:05
• @PhelypeOleinik the dot is part of the word. Jan 15, 2020 at 21:08
• Having dots in a "word" will almost certainly make it inappropriate for an ordinary \hyphenation pattern. What you can do is create a macro equivalent with explicitly defined hyphenation points, then always enter it using the macro. (This is not a limitation of Overleaf; it's built into TeX.) Jan 15, 2020 at 21:09
• @barbarabeeton could you help with the macro? I am new to macros... Basically, I have many words that contain . It's actually some variable names. Jan 15, 2020 at 21:11
• I can probably help, but will need to do some research first. (I don't delve into the guts of TeX hyphenation primitives every day. If someone else gets there first, that's okay with me.) Jan 15, 2020 at 21:13

To make something a letter for consideration for hyphenation it needs to have a non-zero lower-case code (it can lowercase to itself):

\documentclass[sigconf]{acmart}
\usepackage[utf8]{inputenc}
\usepackage{hyphenat}
\lccode`\_=`\_
\lccode`\.=`\.
\catcode`\_=12 % use \sb for math subscripts
\hyphenation{Sample_Super-.Very-.Uber-.Long-.Word}
\showhyphens{Sample_Super-.Very-.Uber-.Long-.Word}
\title{test}
\begin{document}
\maketitle
\section{Introduction}
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Sample_Super.Very.Uber.Long.Word Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
\end{document}

• This works :) but is there a general way? Like a macro as @barbara beeton comments? This is because I have many similar long words with . and _ in them. Jan 15, 2020 at 21:22
• that's just one document setting, it applies to all words, but I suspect that you do not want hyphenation at all and just allow breaking on . and _, in which case look at the url package (but the question as asked is about \hyphenation). Jan 15, 2020 at 21:24
• url works too, but in the final pdf, if I hover the mouse over such a word, it will act as if it's a URL; also, in some viewers, it may have a blue box around it. Secondly, what I meant about a macro is something that would behave like url in terms of breaking, but not treat the words as a URL. I have many different long words like Another_Super.Long.Word, etc. So defining a rule for each would be tedious and may be used as a last resort. Jan 15, 2020 at 21:36
• @hydradon no, I said the url package, not the \url command; the url package just arranges line breaking, it is hyperref that adds linking. The url package would allow you to define a custom command, or use \path, which is like \url but hyperref does not activate it. Jan 15, 2020 at 21:56
• the \path changes the font and adds some extra spaces around the word. Could you show me how to override that? Or how to use a customized command from the url package? Jan 15, 2020 at 22:21
2022-10-07 05:32:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6997275352478027, "perplexity": 181.5601114020297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00156.warc.gz"}
https://tex.stackexchange.com/questions/486186/how-to-begin-with-a-paragraph-in-latex
# How to begin with a paragraph in latex [duplicate]

I am writing my B.Sc. report. Here I have written a part of the introduction, but the problem is that I want to begin the sentences with a gap, like...

## marked as duplicate by user156344, Marcel Krüger, Phelype Oleinik, dexteritas, Henri Menke Apr 24 at 3:47

Don't do that! Anyway, you may need the indentfirst package

\documentclass{book}
\usepackage{lipsum}
\usepackage{indentfirst}
\begin{document}
\chapter{Hello}
\lipsum[1-2]
\end{document}

As I commented, it is somewhat of a typographical standard that the first paragraph following a sectioning name is not indented. However, one can overcome that with \hspace*{\parindent}, which I have macro-fied as \indentthis.

\documentclass{book}
\newcommand\indentthis{\hspace*{\parindent}}
\begin{document}
\chapter{Introduction}
\indentthis Blah blah is indented.

This will be auto-indented.
\end{document}

• better to use indentfirst I think, rather than having to find all these again if you change your mind about the document style. – David Carlisle Apr 23 at 12:38
• @DavidCarlisle I agree. I therefore have macro-fied it so that the macro can be nullified if desired. – Steven B. Segletes Apr 23 at 12:43
• but you still miss the opportunity to use indentfirst: a package with impeccable heritage and the highest documentation-to-code ratio of any package on ctan :-) – David Carlisle Apr 23 at 14:01
• @DavidCarlisle LOL, no doubt. I was very tempted to plagiarize the answer of JouleV just so that I, too, could employ this exquisite package. But, at the last, my conscience got the better of me. – Steven B. Segletes Apr 23 at 14:06
• @DavidCarlisle Your package is simply the most understandable LaTeX package – user156344 Apr 23 at 16:14
2019-11-12 00:35:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8122028112411499, "perplexity": 4027.5134674330784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664469.42/warc/CC-MAIN-20191112001515-20191112025515-00027.warc.gz"}
http://plasmolifting.ru/o-metode/news/?ELEMENT_ID=63859
8-800-100-68-29

Dear friends,

In the outgoing year I managed to collect a corpus of the most interesting and valuable publications relating to the therapeutic use of autologous blood plasma (now available on Plasmolifting website through our Library http://wiki.plasmolifting.ru/index.php/, http://wiki.plasmolifting.ru/index.php/Library). These publications reflect the results achieved by practitioners and researchers all over the world and provide ever more convincing evidence of the extraordinary healing power of our own blood. I am very pleased to realize that we are all part of this international society of blood plasma therapists and would like to wish the brothers in arms all the success in their professional and personal life. Happy New Year!

Plasmolifting technology developer Akhmerov Renat
2017-04-24 11:13:48
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8349288105964661, "perplexity": 8651.719562625687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119356.19/warc/CC-MAIN-20170423031159-00010-ip-10-145-167-34.ec2.internal.warc.gz"}
https://www.ademcetinkaya.com/2023/01/xrf-xrf-scientific-limited.html
Outlook: XRF SCIENTIFIC LIMITED is assigned short-term Ba1 & long-term Ba1 estimated rating. Time series to forecast n: 23 Jan 2023 for (n+4 weeks) Methodology : Transductive Learning (ML) ## Abstract XRF SCIENTIFIC LIMITED prediction model is evaluated with Transductive Learning (ML) and Factor1,2,3,4 and it is concluded that the XRF stock is predictable in the short/long term. According to price forecasts for (n+4 weeks) period, the dominant strategy among neural network is: Buy ## Key Points 1. Can stock prices be predicted? 2. What is prediction model? 3. Fundemental Analysis with Algorithmic Trading ## XRF Target Price Prediction Modeling Methodology We consider XRF SCIENTIFIC LIMITED Decision Process with Transductive Learning (ML) where A is the set of discrete actions of XRF stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4 F(Factor)5,6,7= $\begin{array}{cccc}{p}_{a1}& {p}_{a2}& \dots & {p}_{1n}\\ & ⋮\\ {p}_{j1}& {p}_{j2}& \dots & {p}_{jn}\\ & ⋮\\ {p}_{k1}& {p}_{k2}& \dots & {p}_{kn}\\ & ⋮\\ {p}_{n1}& {p}_{n2}& \dots & {p}_{nn}\end{array}$ X R(Transductive Learning (ML)) X S(n):→ (n+4 weeks) $∑ i = 1 n a i$ n:Time series to forecast p:Price signals of XRF stock j:Nash equilibria (Neural Network) k:Dominated move a:Best response for target price For further technical information as per how our model work we invite you to visit the article below: How do AC Investment Research machine learning (predictive) algorithms actually work? ## XRF Stock Forecast (Buy or Sell) for (n+4 weeks) Sample Set: Neural Network Stock/Index: XRF XRF SCIENTIFIC LIMITED Time series to forecast n: 23 Jan 2023 for (n+4 weeks) According to price forecasts for (n+4 weeks) period, the dominant strategy among neural network is: Buy X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.) Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.) Z axis (Grey to Black): *Technical Analysis% ## IFRS Reconciliation Adjustments for XRF SCIENTIFIC LIMITED 1. The assessment of whether an economic relationship exists includes an analysis of the possible behaviour of the hedging relationship during its term to ascertain whether it can be expected to meet the risk management objective. The mere existence of a statistical correlation between two variables does not, by itself, support a valid conclusion that an economic relationship exists. 2. When an entity designates a financial liability as at fair value through profit or loss, it must determine whether presenting in other comprehensive income the effects of changes in the liability's credit risk would create or enlarge an accounting mismatch in profit or loss. An accounting mismatch would be created or enlarged if presenting the effects of changes in the liability's credit risk in other comprehensive income would result in a greater mismatch in profit or loss than if those amounts were presented in profit or loss 3. This Standard does not specify a method for assessing whether a hedging relationship meets the hedge effectiveness requirements. However, an entity shall use a method that captures the relevant characteristics of the hedging relationship including the sources of hedge ineffectiveness. Depending on those factors, the method can be a qualitative or a quantitative assessment. 4. 
However, the fact that a financial asset is non-recourse does not in itself necessarily preclude the financial asset from meeting the condition in paragraphs 4.1.2(b) and 4.1.2A(b). In such situations, the creditor is required to assess ('look through to') the particular underlying assets or cash flows to determine whether the contractual cash flows of the financial asset being classified are payments of principal and interest on the principal amount outstanding. If the terms of the financial asset give rise to any other cash flows or limit the cash flows in a manner inconsistent with payments representing principal and interest, the financial asset does not meet the condition in paragraphs 4.1.2(b) and 4.1.2A(b). Whether the underlying assets are financial assets or non-financial assets does not in itself affect this assessment. *International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS. ## Conclusions XRF SCIENTIFIC LIMITED is assigned short-term Ba1 & long-term Ba1 estimated rating. XRF SCIENTIFIC LIMITED prediction model is evaluated with Transductive Learning (ML) and Factor1,2,3,4 and it is concluded that the XRF stock is predictable in the short/long term. According to price forecasts for (n+4 weeks) period, the dominant strategy among neural network is: Buy ### XRF XRF SCIENTIFIC LIMITED Financial Analysis* Rating Short-Term Long-Term Senior Outlook*Ba1Ba1 Income StatementB2Caa2 Balance SheetBa3Caa2 Leverage RatiosCBaa2 Cash FlowBaa2B3 Rates of Return and ProfitabilityCaa2Baa2 *Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents. How does neural network examine financial reports and understand financial state of the company? ### Prediction Confidence Score Trust metric by Neural Network: 86 out of 100 with 733 signals. ## References 1. Lai TL, Robbins H. 1985. Asymptotically efficient adaptive allocation rules. Adv. Appl. Math. 6:4–22 2. Cheung, Y. M.D. Chinn (1997), "Further investigation of the uncertain unit root in GNP," Journal of Business and Economic Statistics, 15, 68–73. 3. Robins J, Rotnitzky A. 1995. Semiparametric efficiency in multivariate regression models with missing data. J. Am. Stat. Assoc. 90:122–29 4. Chamberlain G. 2000. Econometrics and decision theory. J. Econom. 95:255–83 5. Van der Vaart AW. 2000. Asymptotic Statistics. Cambridge, UK: Cambridge Univ. Press 6. R. Sutton and A. Barto. Introduction to reinforcement learning. MIT Press, 1998 7. Çetinkaya, A., Zhang, Y.Z., Hao, Y.M. and Ma, X.Y., Is TPL a Buy?. AC Investment Research Journal, 101(3). Frequently Asked QuestionsQ: What is the prediction methodology for XRF stock? A: XRF stock prediction methodology: We evaluate the prediction models Transductive Learning (ML) and Factor Q: Is XRF stock a buy or sell? A: The dominant strategy among neural network is to Buy XRF Stock. Q: Is XRF SCIENTIFIC LIMITED stock a good investment? A: The consensus rating for XRF SCIENTIFIC LIMITED is Buy and is assigned short-term Ba1 & long-term Ba1 estimated rating. 
Q: What is the consensus rating of XRF stock? A: The consensus rating for XRF is Buy. Q: What is the prediction period for XRF stock? A: The prediction period for XRF is (n+4 weeks)
2023-02-06 09:47:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5341286659240723, "perplexity": 6568.841501712}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500334.35/warc/CC-MAIN-20230206082428-20230206112428-00682.warc.gz"}
https://www.acooke.org/cute/Pneumonia0.html
## [Diary] Pneumonia From: andrew cooke <andrew@...> Date: Fri, 30 Aug 2019 11:25:00 -0400 I want to make some notes (similar to those on the bike accident) to help remember the sequence of recent events related to me being hospitalized for pneumonia. On Thu Aug 1 we flew to Edinburgh. The day before (or two days before?) Paulina's brother had stayed in our flat, apparently quite ill, coughing and vomiting. In Edinburgh we were in good condition, walking a fair amount (I was / am still recovering from the broken leg and ensuing problems). On Tue Aug 6 my sister drove me down to my parents (Paulina stayed in Edinburgh at a conference). In the car I was coughing a lot. On Fri Aug 9 I went to meet Paulina at the local train station and wasn't feeling so good. The plan was to take the family (including sister) to dinner on Sunday evening. I spent most of Sunday in bed, hoping I would be well enough for the meal to go ahead; in the later afternoon I had a temperature and we cancelled. The next few days I thought I had the flu - intermittent temperature, shivers, coughing, etc. At one point I noticed that I was coughing up phlegm that contained some blood. On Tue Aug 13 the rest of the family insisted I go see a local doctor. The doctor sent me directly to the local hospital, where I stayed for two nights. Initially there was concern I had TB (so I had a 'private' room), but test showed pneumonia (strep). I was on a drip for hydration (maybe 24 hours) antibiotics (48 hours). I had been taking Ibuprofen-based flu medication to help with MS symptoms, but apparently this raised the chance of Kidney problems so I was switched to Paracetamol. On Thu Aug 15 I was released with oral antibiotics (2 kinds, 6 days). On Fri Aug 16 Paulina flew to Chile. On the main flight (LHR - GRU) she had a fever and was placed on a drip in the airport clinic at Sao Paulo, but later flew on to Santiago. She saw a local doctor on Sunday, was diagnosed with pneumonia, and was prescribed antibiotics. One motivation for Paulina returning (apart from work which was the original reason for the early flight) was that her brother had disappeared. He was later found in a hospital in the South of Chile. I do not know what his diagnosis was. Meantime (sorry, don't have exact dates) my parents were also diagnosed with bronchitis and given antibiotics. My sister was OK. I was intending to fly back on Mon Aug 19, but the local doctor felt until that date. After some discussion with my doctors in Chile we decided to delay the flight a week and skip the Betaferon (the risk of an MS outbreak was low and the drug is not commonly available in the UK). I increased the spacing of my final two injections, so the final injection history was: August 2019 Mo Tu We Th Fr Sa Su - 2 - 4 - 6 - 8 - 10 - 12 - 14 - - 17 - - 20 - - - - - - 27 - 29 - 31 I flew back on the 26th, arriving 27th (injection on arrival). Currently we are all easily tired, with coughs, but otherwise OK. Andrew
2021-07-30 06:07:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4684361517429352, "perplexity": 8092.433581146573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153934.85/warc/CC-MAIN-20210730060435-20210730090435-00505.warc.gz"}
https://brilliant.org/discussions/thread/be-smart-than-intelligent/
# Be smart than intelligent!

Interviewer said "I shall either ask you ten easy questions or one really difficult question. Think well before you make up your mind!"

The boy thought for a while and said, "my choice is one really difficult question."

"Well, good luck to you, you have made your own choice! Now tell me this. "What comes first, Day or Night?"

The boy was jolted into reality as his admission depends on the correctness of his answer, but he thought for a while and said, "It's the DAY sir!"

"Sorry sir, you promised me that you will not ask me a SECOND difficult question!"

He was selected for IIM!

Note by Vishwathiga Jayasankar 4 years, 8 months ago

Sort by:

I could have just as well answered, "Anyone that knows the answer to the Chicken or the Egg question would know the answer to that one". And the interviewer would have asked, "And that is?", etc. Same result. For an organization looking for smart, intelligent candidates, it doesn't seem very smart or intelligent to ask such a thing. - 4 years, 8 months ago

With this note I got to know the difference between smart and intelligent. And these organisations look for smart people rather than those with high intelligence but no communication skills or smartness. - 4 years, 8 months ago

Every corporation that looks for "smart, intelligent people" has its methods. It wouldn't deeply surprise me if some of them would ask such a question. - 4 years, 8 months ago

Yeah, it didn't surprise me either. I am just saying that they are looking for people who are more smart than intelligent. - 4 years, 8 months ago
2020-06-01 06:22:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 8, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9517214298248291, "perplexity": 3527.229634816758}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347414057.54/warc/CC-MAIN-20200601040052-20200601070052-00349.warc.gz"}
http://farside.ph.utexas.edu/teaching/qmech/lectures/node87.html
# Spin Operators

Since spin is a type of angular momentum, it is reasonable to suppose that it possesses similar properties to orbital angular momentum. Thus, by analogy with Sect. 8.2, we would expect to be able to define three operators--$S_x$, $S_y$, and $S_z$--which represent the three Cartesian components of spin angular momentum. Moreover, it is plausible that these operators possess analogous commutation relations to the three corresponding orbital angular momentum operators, $L_x$, $L_y$, and $L_z$ [see Eqs. (531)-(533)]. In other words,

$[S_x, S_y] = i\hbar\, S_z,$ (702)
$[S_y, S_z] = i\hbar\, S_x,$ (703)
$[S_z, S_x] = i\hbar\, S_y.$ (704)

We can represent the magnitude squared of the spin angular momentum vector by the operator

$S^2 = S_x^{\,2} + S_y^{\,2} + S_z^{\,2}.$ (705)

By analogy with the analysis in Sect. 8.2, it is easily demonstrated that

$[S^2, S_x] = [S^2, S_y] = [S^2, S_z] = 0.$ (706)

We thus conclude (see Sect. 4.10) that we can simultaneously measure the magnitude squared of the spin angular momentum vector, together with, at most, one Cartesian component. By convention, we shall always choose to measure the $z$-component, $S_z$. By analogy with Eq. (538), we can define raising and lowering operators for spin angular momentum:

$S_\pm = S_x \pm i\, S_y.$ (707)

If $S_x$, $S_y$, and $S_z$ are Hermitian operators, as must be the case if they are to represent physical quantities, then $S_\pm$ are the Hermitian conjugates of one another: i.e.,

$(S_\pm)^\dagger = S_\mp.$ (708)

Finally, by analogy with Sect. 8.2, it is easily demonstrated that

$S_+\, S_- = S^2 - S_z^{\,2} + \hbar\, S_z,$ (709)
$S_-\, S_+ = S^2 - S_z^{\,2} - \hbar\, S_z,$ (710)
$[S_+, S_z] = -\hbar\, S_+,$ (711)
$[S_-, S_z] = +\hbar\, S_-.$ (712)

Richard Fitzpatrick 2010-07-20
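For the spin-1/2 case these relations can be checked numerically with the Pauli matrices, $S_i = \frac{1}{2}\hbar\,\sigma_i$; the short Python sketch below is not part of the original notes and sets $\hbar = 1$ for brevity.

import numpy as np

# Spin-1/2 representation: S_i = sigma_i / 2 (hbar = 1).
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

def comm(a, b):
    return a @ b - b @ a

# Commutation relations (702)-(704): [S_x, S_y] = i S_z, and cyclic.
assert np.allclose(comm(sx, sy), 1j * sz)
assert np.allclose(comm(sy, sz), 1j * sx)
assert np.allclose(comm(sz, sx), 1j * sy)

# (705)-(706): S^2 commutes with every component.
s2 = sx @ sx + sy @ sy + sz @ sz
for s in (sx, sy, sz):
    assert np.allclose(comm(s2, s), 0)

# (707)-(708): the ladder operators are Hermitian conjugates of each other.
sp, sm = sx + 1j * sy, sx - 1j * sy
assert np.allclose(sp.conj().T, sm)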
2018-02-24 15:43:16
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8398764729499817, "perplexity": 612.8244399679119}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815843.84/warc/CC-MAIN-20180224152306-20180224172306-00707.warc.gz"}
https://www.ingilizcedili.com/2019/12/listening-greetings-3.html
# Listening - Greetings 3 A: Hello, it's been a long time since I have seen you. B: It is true that I have not seen you in a while. A: Exactly how long do you think that it has been? B: I believe that it has been two years since we last saw each other. A: So where have you been since I last saw you? B: I am working on my doctorate at USC. A: What is your field of emphasis? B: I decided to pursue international communications. A: I think that you will be very employable after you finish your degree. B: I hope that when I finish I will find good work. A: It has certainly been a long time since I saw you last. B: It has been a long time since I last saw you. A: Can you remember when we last saw each other? B: It was about two years ago that we saw each other. A: What have you been up to for the past two years? B: I am finishing up my doctorate at USC. A: What subject did you decide to study? B: International communications is my field. A: That sounds like a very marketable degree. B: I am expecting to get my degree and find an interesting position.
2023-03-21 03:42:18
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.811387300491333, "perplexity": 793.7991881846118}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943625.81/warc/CC-MAIN-20230321033306-20230321063306-00249.warc.gz"}
https://ask.sagemath.org/questions/44466/revisions/
# Revision history

### I'm trying to substitute a value in a Laplace transform

Hello, I'm trying to solve a DE by using Laplace transforms, but when I take the Laplace transform of my DE I get this part s*D0(0), and I don't know how I can substitute D[0]. I tried D0(0) == -1 but I get an error that says that D is not defined; I can subs y(0) == 1 with no issues. So my question is: how can I substitute the value y'(0) = -1 in the Laplace transform? Thank you for your help!

t = var('t')
s = var('s')
y = function('y')(t)
ED = diff(y,t,2)-2*diff(y,t)-3*y==4
ED_Lap = ED.laplace(t,s)
ED_Lap = solve(ED.laplace(t,s),laplace(y(t), t, s))
print(ED_Lap)
show(ED_Lap[0].rhs().subs(y(0)==1,D[0](y)(0)==-1).inverse_laplace(s,t)) # I don't know how to subs D
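One way around the substitution problem (a sketch only, not verified against this exact setup) is to let Sage impose the initial conditions itself with desolve_laplace, which takes ics=[t0, y(t0), y'(t0)]:

# Sketch: let Sage apply y(0) = 1 and y'(0) = -1 itself, instead of
# substituting into the transformed equation by hand.
t = var('t')
y = function('y')(t)
ED = diff(y, t, 2) - 2*diff(y, t) - 3*y == 4
sol = desolve_laplace(ED, y, ics=[0, 1, -1])   # [t0, y(t0), y'(t0)]
show(sol)

# For the manual route, one can try substituting the derivative-at-zero
# expression directly, e.g. something along the lines of
#   ED_Lap[0].rhs().subs({y(0): 1, diff(y, t).subs(t=0): -1})
# though whether this matches the D[0](y)(0) term may depend on the Sage version.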
2021-11-30 22:53:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7117622494697571, "perplexity": 1125.1585190921392}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359073.63/warc/CC-MAIN-20211130201935-20211130231935-00372.warc.gz"}
https://physics.stackexchange.com/questions/649814/is-the-conservation-of-energy-simply-assumed-or-can-it-be-accurately-tested
# Is the conservation of energy simply assumed or can it be accurately tested?

Has anyone conducted experiments demonstrating the conservation of energy or is it simply assumed to be true? Testing the conservation of energy would entail accurately measuring the joules of the energy input and output, including any energy lost to friction or heat. That would mean not simply assuming "some" energy was lost, but actually measuring the energy that was lost.

## 2 Answers

The law of energy conservation has always been verified experimentally at the cost of often difficult measurements. In a way, Joule's experiments (mechanical equivalent of heat) are a good example. Another interesting example is the neutrino hypothesis by Pauli:

In 1930, this evidence was problematic for physicists working in the field. What happened to the law of conservation of energy for beta decay? The seemingly missing energy even led Niels Bohr to propose doing away with that most fundamental conservation law. A mortal sin for a physicist.

Every experiment has shown us that conservation of energy holds true. This is not only the case for macroscopic objects, but also for interactions at the quantum level$$^1$$. Many physicists at different times in history confirmed this conservation law, most notably James Joule and Nicolas Sadi Carnot. In cases where energy is lost to friction or heat, although it is a little harder to measure, we still find that the total energy of the system is constant. We can write this law mathematically as $$U=U_i+W+Q$$ where $$U$$ is the total energy of the system, $$U_i$$ is the initial energy, $$Q$$ is the heat added to or removed from the system, and $$W$$ is the work done on or by the system.

Conservation of energy is a fundamental law of nature, and there has not been an instance where this law is not observed (there is debate about this$$^2$$ in cases such as universal expansion, gravitationally redshifted photons, some unusual circumstances in cosmology and in certain applications of general relativity).

Noether's theorem tells us that certain conservation laws result from symmetries. The law of conservation of energy results from time translation symmetry. So it is applicable to systems that have this symmetry. For almost all physical systems this symmetry holds, and therefore so does the law of conservation of energy.

$$^1$$ It was once thought that energy was not conserved in beta decay since, at the time nuclear beta decay was studied, it was found that the decay products and the energies were not consistent with conservation of energy. Some other mechanism had to be at play such that energy was conserved; otherwise physicists would have to accept that energy was not always conserved in fundamental processes. This idea was extremely unpalatable, and so it was hypothesized that there must be an additional decay particle that was carrying away energy such that the total energy was conserved. This prediction was made by the physicist Pauli in 1933; he called this particle a "neutrino", and in 1956, 23 years later, the neutrino was detected experimentally. The reason it was hard to detect originally was that it had no charge and was virtually massless. This once again cemented the idea that energy is a conserved quantity.

$$^2$$ The total energy of an isolated system is always constant. If we consider the universe to be an isolated system, one can say that the total energy in the universe is conserved.
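As a toy illustration of the bookkeeping in the formula above (numbers invented, and a real experiment would measure the dissipated heat calorimetrically rather than infer it from kinematics), here is a short Python sketch for a block sliding to rest on a rough floor:

# Energy audit for a block sliding to rest under kinetic friction.
# The kinetic energy that disappears should reappear, joule for joule,
# as heat generated by friction.
m, v0, mu, g = 2.0, 3.0, 0.4, 9.81      # mass (kg), speed (m/s), friction coeff., gravity (m/s^2)

ke_initial = 0.5 * m * v0**2            # energy put in (J)
a = mu * g                              # deceleration caused by friction (m/s^2)
d = v0**2 / (2 * a)                     # stopping distance from kinematics (m)
heat = mu * m * g * d                   # work done against friction = heat generated (J)

print(f"initial kinetic energy: {ke_initial:.3f} J")
print(f"heat from friction:     {heat:.3f} J")   # the two agree: nothing is unaccounted for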
• This is true except in general relativity, where the concept of energy gets fuzzy. Jul 6 at 6:16 • Adding to that last point about Noether's theorem, the fact that we observe the lagrangian not to change over time is therefore indirect evidence for conservation of energy Jul 6 at 14:46 • @VincentThacker In what way does it get fuzzy? Is it not defined sufficiently by the energy–momentum relation: E^2 = (pc)^2 + (mc^2)^2 How is energy conservation no longer sufficiently defined by the above answer? – spex Jul 6 at 15:32 • @spex See physics.stackexchange.com/questions/2597/… for details more complicated than I can understand, but Noether's theorem and time invariance get weird when time can be bent. For an easy example of how it's not obviously conserved, where does the energy in a hubble-redshifted photon go? Jul 6 at 20:02 • Universal expansion only violates the conservation of energy when you assume W = 0. Now why would you think that? ; ) Jul 6 at 20:02
2021-09-24 03:06:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 9, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6831321716308594, "perplexity": 302.8658937252756}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057496.18/warc/CC-MAIN-20210924020020-20210924050020-00677.warc.gz"}
https://library.kiwix.org/politics.stackexchange.com_en_all_2021-04/A/question/27361.html
## What kind of approval does Donald Trump need to build his wall?

25 Can the president order the construction of a wall on the southern border just by himself, such as with an executive order? What kind of approval does he need to initiate construction of such a wall?

2 The approval of Congress (https://en.wikipedia.org/wiki/United_States_budget_process). – Trilarion – 2018-01-11T15:34:55.740

2 None really. All he has to do is buy up a strip of land and start building, like his buildings. Maybe he will put his name on them too? Oh, you mean with taxpayer money!? – Chloe – 2018-01-11T23:06:22.260

48 The main problem is getting the money from Congress. The US executive has a budget plan which is made by Congress. This budget plan says how much money the President is allowed to spend for what purpose. Building a border fortification requires labor and material, and unless Trump can somehow find a way to get someone else to pay for it (good luck with that), these must be paid from the federal budget allocated for this purpose. Fortunately for Trump, Congress decided last year to allocate $1.6 billion to improving border fortifications. Unfortunately this is still short of the $2.6 billion the Department of Homeland Security requested for this purpose. If the Trump administration can somehow finance the construction, there might still be some minor local problems to solve. For example, there are some environmental concerns. Improved border fortification doesn't just prevent people from migrating but also blocks the natural migration paths for animals. Environmental protection groups might go to court over this. Also, the building plans might interfere with local property rights. The US government can't just build fortifications on land owned by private people. When the owner doesn't want to sell, the government would have to try to acquire the land with eminent domain power. But when the owners are willing to put up a fight, this might require a lengthy lawsuit. At least one group is planning to exploit this to oppose Trump's border wall project.

4 Re the linked article labeled "Fortunately for Trump Congress decided last year to allocate $1.6 billion to improving border fortifications": that wasn't Congress, it was the House, and it wasn't an allocation (aka appropriation), it was an authorization. Funny things happen to the US federal budget between the House and Senate, and between authorizations and appropriations. – David Hammen – 2018-01-10T23:32:22.493

4 Hold on, the US doesn't own enough of a strip of land along its border to build a wall? – J Atkin – 2018-01-10T23:32:29.040

@JAtkin not the federal government. Some of the land near the border is owned by third parties. See the last link in the answer. – Mindwin – 2018-01-11T13:18:04.780

1 @Mindwin - This isn't a nation-wide winning strategy. The amount of land owned by a single owner would be tiny (even acres of land is tiny when considering the scale of something as wide as this national border). If some owner was particularly problematic, the government could just build the wall on the north side of that particular land, essentially isolating that land from the rest of America (suitably penalizing the land owners for such extensive non-cooperation). And even a wall hole of acres would be easier for Border Patrol to patrol heavily than the entire border. – TOOGAM – 2018-01-11T13:25:51.590

1 @TOOGAM I'd be surprised if the federal government could effectively cut somebody's private property off from the rest of the country with a wall. If I were the land owner I'd take the government to court over that, and my layman's judicial gut feeling indicates I have a case. – Peter - Reinstate Monica – 2018-01-11T15:21:36.613

Re "environmental concerns": I suppose the federal government will have to file an environmental impact statement to California and Texas (not sure about Arizona and New Mexico), cf. the one from 2007 for the Rio Grande area. These can be challenged in court if deemed deficient. – Peter - Reinstate Monica – 2018-01-11T15:34:00.957

@JAtkin to make matters worse, some American citizens own land that technically crosses the border, or exists entirely on Mexican soil. Building a wall would cut them off. Sucks for them. I have no idea why you would intentionally buy land that is on or across a border, be it state or national. – BlackThorn – 2018-01-11T23:29:13.417

8 In addition to money, property rights will be an issue. In most parts of the country, the land is either privately owned, national/state park, or Indian land, right up to the line. In fact, in Arizona, one Native American tribe, the Tohono O'odham, extends across the border into Mexico. One of my neighbors owns border farmland. He told me that under Roosevelt, the federal government used eminent domain to secure a 60-foot-wide easement coast-to-coast along the border, including across his land. Recently, several media outlets have reported that the Department of Justice is in the process of massively ramping up eminent domain processes along the border, so my assumption is that the government will try to either outright seize the properties, or vastly expand the width of the easement. One area that will likely not be a problem is environmental reviews. With the REAL ID Act, Congress gave DHS the power to waive most laws, including environmental laws, that could slow down building the border fence. This was very much a bipartisan law. The original REAL ID law was passed under George Bush in 2005, and that power was further expanded under Obama in 2013. Update: accepted suggested replacement of Indian with Native American. I had originally avoided "Native American" because this is a US-centric term that seemed inappropriate for a tribe that is at least partially in Mexico. For lack of a more comprehensive term, I had used Indian. On second thought, neither term seems particularly fitting. It's worth noting that the makers of Cards Against Humanity intentionally bought some land on the US-Mexico border to prevent at least that part of the wall being built. http://www.independent.co.uk/news/world/americas/us-politics/cards-against-humanity-trump-border-wall-buy-land-stop-building-a8055396.html

2 Are you saying that Roosevelt has already done the hard part of getting the land? – Stig Hemmer – 2018-01-11T09:43:20.933

– DavePhD – 2018-01-11T19:38:48.633

Texas was not included by Roosevelt, just California, Arizona and New Mexico. – DavePhD – 2018-01-11T19:54:56.830

Nothing brings Republican and Democrat politicians together like quietly expanding government power. – jpmc26 – 2018-01-12T02:12:13.177

@AJFaraday Given the eminent domain process, I wonder how effective the Cards Against Humanity approach will be. It may be enough to gum up the process with legal issues for a couple of years. – Kevin Keane – 2018-01-12T07:28:14.300

I think "Native American" is the correct term, since "American" refers to the Americas, not to the United States of America. "Indian" confuses it with people of India. – Bregalad – 2018-01-12T08:14:34.720

@Bregalad Technically, I agree, but I'm not sure if that applies here. In this context "American" seems to refer specifically to the United States of America. In Canada, for example, the equivalent term is "First Nation". I would be surprised to see Mexico's Tarahumara or Peru's Quechua described as Native Americans. I think the generic term would be "indigenous". But in the end - Native American is probably indeed the best choice. – Kevin Keane – 2018-01-12T08:25:56.833
2021-08-01 11:41:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19406232237815857, "perplexity": 3554.9230571130133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154175.76/warc/CC-MAIN-20210801092716-20210801122716-00678.warc.gz"}
https://brilliant.org/problems/bullet-problem-dynamics/
# Bullet problem (Dynamics)

A bullet of mass 120 grams is fired with a velocity of 390 m/s towards a wooden body of mass 3 kilograms, which is at rest. The bullet embeds itself in the body, and the system then moves with a certain velocity. Find that velocity, given that the momentum of the system doesn't change due to the impact.
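The problem is a standard perfectly inelastic collision; the short sketch below (not part of the original problem page) simply scripts that arithmetic.

```python
# Conservation of momentum for a perfectly inelastic collision:
#   m_bullet * v_bullet = (m_bullet + m_block) * v_final
m_bullet = 0.120   # kg (120 grams)
v_bullet = 390.0   # m/s
m_block = 3.0      # kg, initially at rest

v_final = (m_bullet * v_bullet) / (m_bullet + m_block)
print(v_final)     # 15.0 m/s
```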
2019-10-13 21:41:54
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8570441603660583, "perplexity": 509.83361075833625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986647517.11/warc/CC-MAIN-20191013195541-20191013222541-00011.warc.gz"}
http://motls.blogspot.co.uk/2010/03/al-gore-will-become-doctor.html
## Thursday, March 04, 2010 ... /////

### Al Gore will become a doctor

Well, an honorary one. The University of Tennessee in Knoxville decided to acknowledge one of the brightest and most accomplished Tennesseans in history:

Editorial: Al Gore a fine choice for honorary degree

Ninety-five percent of the Knoxnews readers think that it is a bad idea - but who cares. Al Gore's greatest achievements are the lost 2000 elections, his invention of the Internet, the ManBearPig, his new kind of climate science, and especially his recent contributions to the physics of plasma: plasma that used to be produced in complicated labs can suddenly be obtained by digging in your garden. This opens lots of new applications, including superclean and superefficient hybrids of geothermal and thermonuclear energy.

Congratulations to Al Gore and congratulations to UT Knoxville for officially becoming the University of Quacks.
2015-04-26 20:55:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19032317399978638, "perplexity": 7009.9034327761}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246656168.61/warc/CC-MAIN-20150417045736-00198-ip-10-235-10-82.ec2.internal.warc.gz"}
https://socratic.org/questions/how-do-you-rewrite-log-1-3-x-as-a-ratio-of-common-logs-and-natural-logs
How do you rewrite log_(1/3)x as a ratio of common logs and natural logs?

Nov 26, 2017

I tried this:

Explanation: We can change to a new base $c$ with the change-of-base rule: ${\log}_{a} b = \frac{{\log}_{c} b}{{\log}_{c} a}$ so in your case we get:

• common log: ${\log}_{\frac{1}{3}} x = \frac{{\log}_{10} x}{{\log}_{10} \left(\frac{1}{3}\right)}$
• natural log: ${\log}_{\frac{1}{3}} x = \frac{{\log}_{e} x}{{\log}_{e} \left(\frac{1}{3}\right)}$
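A quick numerical spot-check of the change-of-base identity (not part of the original answer); the test value x = 5 is arbitrary.

```python
import math

x = 5.0  # any positive test value
direct      = math.log(x, 1/3)                 # log base 1/3 of x
via_common  = math.log10(x) / math.log10(1/3)  # ratio of common logs
via_natural = math.log(x) / math.log(1/3)      # ratio of natural logs
print(direct, via_common, via_natural)         # all three agree (about -1.465)
```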
2019-06-20 03:11:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4874474108219147, "perplexity": 6246.490317130237}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999130.98/warc/CC-MAIN-20190620024754-20190620050754-00183.warc.gz"}
https://samjshah.com/2010/01/04/the-calculus-of-friendship/
# The Calculus of Friendship

In the past few weeks, I've read a few books about math. I don't have a lot to say about the first book. I learned a few interesting vignettes and a few interesting facts, but overall, I'm not sure I would recommend it to others. The second book was actually incredibly fascinating, and I will maybe write a little somethin' somethin' about that later. However, I just finished The Calculus of Friendship and wanted to give it a major shout out.

Let me first tell you how I came upon this book. Prof. Strogatz emailed me in October of 2008.

I happened across your blog today (isn't the Internet amazing?) and felt compelled to try contacting you for many reasons. You seem like a great (former) student who has now turned into a great teacher. That's wonderful. Sorry we didn't overlap at MIT. And it's very admirable that you're now bringing your enthusiasm and training to help inspire high school kids.

The email was longer than that, and incredibly sweet, but what was more amazing than getting this email was the timing of it. I explained in my reply:

You won't believe how coincidental your email is! In one of my calculus sections this past Friday, we finished the material we needed to go over ten minutes early and we got — somehow, don't ask me how — on the topic of chaos. So of course I go off on this mini-lesson on the chaotic waterwheel. We watched youtube videos and talked about what it means for something to be chaotic, and how they should understand why weather is so unpredictable from this. One student came to my office after class, and I showed him your textbook (which I hold up as one paragon of what college math textbooks should strive to be; it was far and away the best math textbook I've used, besides the calculus textbook that I used in high school, which will always have a special place in my heart and on my bookshelf).

I loved that. Then this year, Prof. Strogatz emailed me asking if he can send me a copy of his new book The Calculus of Friendship. The timing of that email was strange too. In my reply email, I said:

Wow! Thank you so much for this super unexpected and thrilling surprise. Talk about things that brighten the day. Something in the zeitgeist must be in sync (ha, groan) because just yesterday I was looking around in our math department for your Chaos DVDs (I asked my department head to purchase them last year). They were nowhere to be found. Turns out one of the math teachers took them home over the summer to watch, and her boyfriend then got hooked on them, and that's why they were missing. Too good! And what a compliment to your digital teaching presence. I looked at the first few pages of your new book on Amazon.com, and it can't help but be an emotional read. Because I suspect that ensconced in the pages is a portrait of the teacher that I strive to be.

Enough prelude. I finally did get to sit down and read this book over winter break. It's a short read, only about 150 pages. And it is broken up into sweet little vignettes. Although I could have polished this book off in a few hours, I wanted to savor it, let it linger. So I limited myself to only a dozen or so pages each day. I was introduced to two characters: a precocious high school student (Strogatz) and a veteran teacher (Joffray). And as I slowly devoured the book, I was taken on an emotional journey about two minds which played off each other, and two lives which slowly and inexorably intertwined with each other.
Strogatz has written an honest and critical autobiographical piece, while at the same time writing a sublime elegy for his former high school calculus teacher. The Calculus of Friendship is crafted by the author and narrator, Steve, by analyzing his epistolary relationship with his teacher, Joff. The letters started after high school and focused on interesting mathematical questions. These letter exchanges continued for decades. What's interesting is not only the contents of the letters (which I will talk about below), but the changing role that the letters played in the writers' lives. The meaning of the correspondence between Steve and Joff changed, although the content itself was often intensely and narrowly focused on interesting mathematical problems and solutions. This book is indeed, as the publisher's blurb says, "an exploration of change. It's about the transformation that takes place in a student's heart, as he and his teacher reverse roles, as they age, as they are buffeted by life itself."

As expected from the author of one of my favorite college textbooks, the actual math is explained clearly. The math problems the two worked on through the years are interesting (chase problems, some fun integrals and series, the gamma function, dimensional analysis, etc.). The problems were different and interesting enough, or the approaches out-of-the-box enough, that I wasn't bored and didn't skip any of the math explanations. Because of the epistolary nature of Steve and Joff's relationship, and because they were each egging each other on with questions and observations, the puzzle-y aspect of problem solving came to the forefront. Some problems were attacked in a number of different ways, with a few different approaches. (My favorite one was finding $\frac{\sin 1}{1}+\frac{\sin 2}{2}+\frac{\sin 3}{3}+\frac{\sin 4}{4}+...$.) So yeah, I give the book two thumbs up.

As an aside, this book gave me a thought: a textbook (or unit) written entirely via letters. A fresh back and forth exchange. A little back story to draw in the reader. This approach could show how math really unfolds, how questions get raised and answered, how some approaches work while other approaches fail, etc. Basically a textbook showing the messy way in which math evolves, because it is written as a dialogue between two people trying to figure something out. Where everything isn't presented in a sterile, whitewashed way. Where the driving question for a unit is something like "so I was wondering if you can find a curve that goes through the point (2,1) and (4,-2). I've figured out how to find a line that goes through these points (which I will explain in this letter below). But what about some other curves? I mean, I can draw an infinite number of curves between these two points. [figure included.] How do I find their equations?"

Okay I should get to bed now. The twilight of my winter break is nigh and my alarm goes off in 7 hours to wake up for my first day back.

1. Matt E says: In the 2nd year of the PROMYS program, the participants are divided into pairs to do some independent exploratory "research" (not so much looking stuff up as figuring stuff out), which culminates in a paper and a presentation on what had been discovered. They pass around papers from previous years so that everyone can get a sense of what's expected, and one of them was written in just such an epistolary style. It was the two participants trading "letters" back and forth about the material, and what they had "discovered".
It was very interesting, and very clever. I think such a textbook would be awesome. You're right on about the sterile nature of most textbooks vs the messy nature of actually doing math and solving problems. (Sorry, problem-solving. ;-) ) File the idea somewhere, and when you actually get around to it, I'd love to be part of it!

2. It isn't exactly what you're asking for, but Knuth's book Surreal Numbers has a similar flavor.

3. Sam, what a great post!! On the topic of a syllabus or unit taught entirely through letters, have you heard of the book Sophie's World? It's a novel about a high-school aged girl who gets a course in western philosophy via mysterious letters left in her mailbox. So it's about philosophy, but not math, but defo a very inspiring correspondence of ideas. Also pedagogically, it seems like teaching through letters/correspondence — little niblets of information — might be less overwhelming than facing a textbook. And it seems like there's a lot more potential to feel anticipation for receiving a letter than for cracking open a textbook (though if the textbook is really good, that is also possible.) :)

1. I *loved* Sophie's World. I definitely have it on my shelf, but haven't thought of it in forever! That's exactly the feeling I think would be great. Sam

1. aww…Sophie's World! That brings back memories…I read that slowly whenever we had free time in World History my freshman year of high school…might have to dust it off the shelf and give it another read. *smile*

4. John Scammell says: Thanks for the recommendation. I have just ordered The Calculus of Friendship. I like your idea of a textbook written in a "messy" way. Our texts are far too sterile and only present one way of doing things. Such a book would encourage students to explore, fail, and try again.

5. I just found an interesting new series about math on the NYTimes, written by Steven Strogatz. Here's the link to the first article.
2018-12-19 15:46:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4485236704349518, "perplexity": 1212.2312733483825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376832559.95/warc/CC-MAIN-20181219151124-20181219173124-00604.warc.gz"}
http://www.wardsattic.com/joomla/ExternalPages/PHPMathJax/MatrixReductionQuestions.php?lvl=3
Enter values for the matrix. Enter fractions with a slash, as "3/4" or "-10/3".

### Multiply the row to make the red cell a one.

$$\left[\begin{array}{rrr|r}\large\color{Red} {3} & -3 & -1 & -16 \\ -1 & 4 & 5 & -12 \\ -3 & -2 & -5 & 36\end{array}\right]$$
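For readers working through this by hand, the requested row operation can be checked numerically; the sketch below (not part of the original page) scales row 1 by 1/3 so the highlighted entry becomes a one, using exact fractions as the page's input hint suggests.

```python
from fractions import Fraction as F

# Augmented matrix [A | b] from the exercise, stored as exact fractions
M = [[F(3), F(-3), F(-1), F(-16)],
     [F(-1), F(4), F(5), F(-12)],
     [F(-3), F(-2), F(-5), F(36)]]

# R1 <- (1/3) * R1, making the red (pivot) entry a one
M[0] = [F(1, 3) * entry for entry in M[0]]
print(M[0])   # [Fraction(1, 1), Fraction(-1, 1), Fraction(-1, 3), Fraction(-16, 3)]
```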
2020-07-04 10:05:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7525259256362915, "perplexity": 5696.080157865412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886095.7/warc/CC-MAIN-20200704073244-20200704103244-00207.warc.gz"}
https://earthscience.stackexchange.com/tags/magnetosphere/hot
# Tag Info

## Hot answers tagged magnetosphere

15 Atmospheric escape is a topic with a long research history. It is complex and is being addressed with both measurements and simulations. For example, the question of atmospheric escape is still actively researched at Mars, and the MAVEN (Mars Atmosphere and Volatile Evolution) spacecraft mission is for example dedicated to this topic. Mars is a planet ...

13 Since atmospheric retention is largely dependent on escape velocity and temperature, removal of the Earth's magnetic field should not have a greatly noticeable effect, as current research shows that Earth's magnetic field changes the location of atmosphere loss due to the solar wind rather than eliminating it. Earth's temperature is not likely to change ...

10 You can definitely see a large geomagnetic storm with a compass, if you have the timing to catch one and the patience to sit and stare for a few minutes. If you look at these minutely measurements from Lerwick observatory for one of the more recent large storms (the 2003 Halloween storm), you can see a roughly 4.5 degree swing in declination (labelled "D", ...

9 Technically yes, but practically, usually no. The magnetic field varies in three dimensions and the variations are not parallel to the Earth's surface. However, horizontal distance usually varies over a larger magnitude than elevation, and for everyday use the declination is only based on horizontal position. The common model for the Earth's magnetic ...

9 It is not actual water that is lost to space, because in the high atmosphere water usually dissociates into other molecules or ions. The oxygen ion outflow is frequently assumed to be a proxy for the loss of water from the planetary atmosphere. In terms of global outflow rates for the Earth, the rate varies from $10^{25}$ to $10^{26} s^{-1}$, depending on ...

9 Solar wind particles directly entering the Earth's magnetosphere are not responsible for the majority of bright auroral displays. As you have found, it is magnetic reconnection that accelerates magnetospheric plasma that collides with the upper atmosphere to cause the visible aurora. Polar rain The solar wind does enter the magnetosphere directly, and ...

8 Magnetohydrodynamic experiments intended to create laboratory analogues for the Earth's magnetic field generally use molten sodium rather than nickel. You can read about the details of one such project, DRESDYN, in this arXiv preprint. The central part of the envisioned precession dynamo experiment… will be a cylindrical vessel of approximately 2 m diameter ...

3 On a quick approach: Magnetism. Copper itself has only a weak magnetism, so a copper core will not create a magnetosphere. Check here or here. Gravity. Iron's density is 7.874 g/cm³ and nickel's density is 8.908 g/cm³. Copper's density is 8.96 g/cm³. So with those density data, the core will be heavier. (The actual core is supposed to have a 9.9-12 g/cm³ ...

3 The region where the Earth's internal magnetic field is weakest is known as the South Atlantic Anomaly (SAA). In this region at Earth's surface, the field is approximately one third the strength of the region of maximum strength (near the North and South magnetic dip poles), so roughly 20,000 nanotesla versus 60,000 nanotesla. The weaker field seen at Earth's ...

3 When the solar wind is funneled into the Earth's magnetic poles, those particles excite the electrons of molecules in the atmosphere, which bumps those electrons up into another orbital. When the electrons fall back down into their native orbital, they produce a photon of a particular wavelength whose energy is equal to the difference between the energy ...

2 It seems your question was more like a thinking exercise rather than a question. I cannot answer your question with robust confidence in the current state of knowledge. The fact is, I have always shared your skepticism on the matter, particularly when being taught this subject matter in graduate classes by the experts who work in the field! What I can offer ...

2 Dr. Robert Strangeway kindly shared with me the poster he presented at the AGU fall meeting 2017, the one I cited in the question based on the abstract only. I've included below some of the key parts of the poster with some text highlighting added by me. He focuses on oxygen loss as a proxy for water loss. And the answer to my question that can be derived from this ...

2 The disturbance storm time (Dst) index is a measure of the weakened horizontal component of the Earth's magnetic field during great magnetic disturbances. The depression is often flanked by peaks that initiate and end the storm. Dst is published by Kyoto University, and more information about the methods and references is available on the web page. The Dst index ...

2 Could you be specific in your question as to what data you need on storms? Do you just need dates when storms occurred, or global geomagnetic index activity levels, or ground magnetic field measurements? Are you interested in the storm effects at Earth, or do you want space-borne measurements of solar activity? You can find lists of some basic info for the ...

1 Short answer: Tsyganenko was not the first to attempt an observation-based mathematical model, and older purely theoretical models also exist. The first mathematical magnetic field model was created from measurements made on Earth's surface by Carl Friedrich Gauss in the 1830s, when he derived the mathematical techniques we still use. For the solar wind and ...

1 Near the Earth's surface there are small variations in the Earth's magnetic field, but these don't play a role in providing the magnetosphere which protects the Earth from charged particles emanating chiefly from the solar wind.
2021-09-23 09:02:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49846720695495605, "perplexity": 1019.97432455373}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057417.92/warc/CC-MAIN-20210923074537-20210923104537-00367.warc.gz"}
https://security.stackexchange.com/questions/127354/rsa-1024-vs-dsa-1024-claim-dsa-ssh-key-is-much-faster-to-brute-force
# RSA-1024 vs. DSA-1024: Claim DSA SSH key is much faster to brute force

Does the brute force speed vary significantly between 1024-bit RSA and DSA SSH keys? DSA-like gpg keypairs are probably DSA/Elgamal? (Can't find docs)

EDIT: The reason I ask is that an instructor who "has been involved with computer security for 10+ years" reported that DSA was much faster to brute force than RSA, which I find suspect. I understand that when brute-forcing keys, on average you need to test half of the possible keys. Of course, brute-forcing keys is not generally practical and not the best attack vector if you have any other options.

• Are you talking about a pure brute force attack, iterating every possible key? Because that's one of those things that's actually a physical impossibility and/or devolves into discussions about how long we have until the heat death of the universe. – HopelessN00b Jun 17 '16 at 18:47
• Agreed, an instructor claimed DSA was much faster to brute force, which I think may be backwards and am trying to disprove. Either way, we agree that in practice it does not matter. – StackAbstraction Jun 17 '16 at 18:54
• Seems like he may be right, but I'm not sure it's really an apples-to-apples comparison, since DSA is used for signing, rather than encryption, and RSA's commonly used for asymmetrical encryption. <shrug> A note about speed: DSA is faster at signing, slow at verifying. RSA is faster at verifying, slow at signing. The significance of this is different from what you may think. Signing can be used to sign data, it can also be used for authentication. [...] rakhesh.com/infrastructure/… – HopelessN00b Jun 17 '16 at 20:06
• @HopelessN00b in SSH (precisely, current SSHv2 as actually practiced) RSA is used only for signature (for authentication), not encryption. That said, I agree 'brute force' doesn't really make sense here. – dave_thompson_085 Jun 18 '16 at 10:44
• Related discussion (maybe not a duplicate though, since it is not dealing with the "brute-force" aspect explicitly but it compares both algorithms' security): RSA vs. DSA for SSH authentication keys – WhiteWinterWolf Jun 19 '16 at 10:26
2021-04-21 20:30:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2568732798099518, "perplexity": 2093.3706055663783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039550330.88/warc/CC-MAIN-20210421191857-20210421221857-00524.warc.gz"}
https://oar.princeton.edu/handle/88435/pr1d28x
# Spatial distribution of electrons on a superfluid helium charge-coupled device

## Author(s): Takita, M; Bradbury, FR; Gurrieri, TM; Wilkel, KJ; Eng, K; et al

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1d28x

Abstract: Electrons floating on the surface of superfluid helium have been suggested as promising mobile spin qubits. Three micron wide channels fabricated with standard silicon processing are filled with superfluid helium by capillary action. Photoemitted electrons are held by voltages applied to underlying gates. The gates are connected as a 3-phase charge-coupled device (CCD). Starting with approximately one electron per channel, no detectable transfer errors occur while clocking 10^9 pixels. One channel with its associated gates is perpendicular to the other 120, providing a CCD which can transfer electrons between the others. This perpendicular channel has not only shown efficient electron transport but also serves as a way to measure the uniformity of the electron occupancy in the 120 parallel channels.

Publication Date: 2012
Electronic Publication Date: 2012
Citation: Takita, M, Bradbury, FR, Gurrieri, TM, Wilkel, KJ, Eng, K, Carroll, MS, Lyon, SA. (2012). Spatial distribution of electrons on a superfluid helium charge-coupled device. 400 (10.1088/1742-6596/400/4/042059
DOI: doi:10.1088/1742-6596/400/4/042059
Type of Material: Conference Article
Series/Report no.: 26th International Conference on Low Temperature Physics, LT 2011; Beijing; China; 10 August 2011 through 17 August 2011;
Journal/Proceeding Title: Journal of Physics: Conference Series
Version: Author's manuscript

Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.
2021-10-23 19:38:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2140928953886032, "perplexity": 9552.922376753271}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585768.3/warc/CC-MAIN-20211023193319-20211023223319-00680.warc.gz"}
https://physics.stackexchange.com/questions/281963/time-dilation-for-the-clock-in-the-orbit
Time dilation for the clock in the orbit

Suppose that we want to compute the total time dilation for a clock located in an orbiting satellite relative to the clock in our cell phone on the ground. Consider two different approaches below.

1. Use special relativity and compute the time contraction due to the relative velocity. Use the approximation of general relativity in the Newtonian limit and compute the time expansion due to the weaker gravity, then find the total time dilation.
2. Don't use special relativity. Stick to the approximation of general relativity based on the symmetry, find the Schwarzschild metric and the geodesic for the Earth limits, and find the time dilation assuming a relative velocity in the metric.

The question is: Which of them is more justified and provides a better approximation? Are they equivalent? What happens when the relative velocity of the satellite is zero? How good is the approximation in either of the two approaches above?

When we pick the second approach and use the Schwarzschild metric we get this equation: $$dt' = \sqrt{1-\frac{3GM}{c^2r}}dt = \sqrt{1-\frac{3r_s}{2r}}dt$$ where $r_s$ is the Schwarzschild radius: $r_s = 2GM/c^2$. Here we not only assume the asymptotically flat metric to measure $r$ but also switch to Newtonian gravity when we want to cancel $v$: $$v = \sqrt{\frac{GM}{r}}$$ So it appears that in the second approach there are many more approximation assumptions.

• You'll find that these calculations have already been done on this site. You could compare both calculations and see how much difference there is (not much!). – John Rennie Sep 23 '16 at 18:14
• For calculation (2) see: What is the correct formula for gravitational time dilation for a satellite in a circular orbit?. Actually that contains enough info for you to do calculation (1) as well. – John Rennie Sep 23 '16 at 18:17
• @JohnRennie can you show how the equation is derived from the metric in the circular orbit case? The $\frac{3}{2}$ factor in particular. – user56963 Sep 23 '16 at 18:37
• I derive the equation at the end of this answer – John Rennie Sep 23 '16 at 19:24
• @JohnRennie I mean how to cancel the velocity $v$ without using Newtonian gravity and only in GR. The factor $\frac{3}{2}$ is there because we combine Newton's law of gravity with Einstein's general relativity. If we stick to GR the velocity should remain. Then again, it is not a four-velocity. – user56963 Sep 23 '16 at 20:58
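To get a feel for the size of these corrections, here is a small numerical sketch (not from the original question) that evaluates the combined Schwarzschild factor for a circular orbit and compares it with a static ground clock; the orbital radius is an assumed example value roughly corresponding to a GPS orbit, and Earth's rotation is ignored for simplicity.

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M = 5.972e24         # kg, Earth mass
c = 2.998e8          # m/s
R_earth = 6.371e6    # m, static ground clock (rotation neglected)
r_orbit = 2.66e7     # m, assumed example radius (roughly a GPS orbit)

def orbit_factor(r):
    """dtau/dt for a circular orbit at radius r (Schwarzschild, with v from GM/r)."""
    return math.sqrt(1 - 3 * G * M / (c**2 * r))

def static_factor(r):
    """dtau/dt for a clock held static at radius r."""
    return math.sqrt(1 - 2 * G * M / (c**2 * r))

ratio = orbit_factor(r_orbit) / static_factor(R_earth)
# Microseconds gained per day by the orbiting clock relative to the ground clock:
print((ratio - 1) * 86400 * 1e6)   # about +38 microseconds per day for these values
```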
2019-10-18 18:31:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7506735920906067, "perplexity": 339.43527636117363}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986684425.36/warc/CC-MAIN-20191018181458-20191018204958-00014.warc.gz"}
https://www.physicsforums.com/threads/geometric-absolute.750237/
# Geometric absolute

1. Apr 23, 2014

### Jhenrique

"Geometric absolute"

If there exists a function, the absolute value, that ensures the result always has a positive sign, then does there exist some function that ensures that the "sign" of the result is always ×?

$f(x) = x$

$f(\frac{1}{x}) = x$

2. Apr 23, 2014

### homeomorphic

The question does not make sense as stated. x is not a sign. The first equation defines a function f(x) = x, and the function that satisfies the second equation is 1/x, but obviously, there is no function that satisfies both.
2018-03-20 20:35:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5337067246437073, "perplexity": 1452.0251341426165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647530.92/warc/CC-MAIN-20180320185657-20180320205657-00240.warc.gz"}
https://stats.stackexchange.com/questions/198607/statistical-methods-to-validate-the-performance-of-a-linear-kalman-filter-algori
# Statistical methods to validate the performance of a linear Kalman filter algorithm

I have a problem with a linear Kalman filter algorithm that gets as input some sensor measurements $z_i$ with known measurement error with standard deviation $\sigma_{i,{measured}}$ (assumed normally distributed) and gives as output the updated (a posteriori) state estimate of that measurement $x_i$ and the updated a posteriori error covariance of the estimate, from which we get $\sigma_{i, {estimated}}$. I am searching for statistical methods to assess the performance of the estimator algorithm.

As a first approach, I am thinking of computing the difference between the measured and the estimated value ($|z_i-x_i|$) and checking whether roughly two-thirds (66.66%) of these differences (assuming that the errors of both vectors are normally distributed) lie within the sum of their uncertainties $\sigma_{i,{measured}}+\sigma_{i,{estimated}}$. Do you think this is a good approach for telling whether the estimator is erroneous or not? Is there any other idea for validating the performance of the Kalman filter?

Searching the literature I have found a lot of papers that compare the estimate to the true value, but I do not know the true value of the model. I just want to infer the accuracy of the estimator from the measurements and the estimates, along with their documented/predicted uncertainties. And if an error can be identified, is there a way to separate the measurement model error (the error that is introduced by the multiplication $Hx$) from a process model error?
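One standard ground-truth-free consistency check for this setting is the normalized innovation squared (NIS) test, which uses only the measurements, the predicted measurements, and the filter's innovation covariance. The sketch below is illustrative rather than a drop-in solution: the scalar measurement model and the variable names are assumptions, not part of the question.

```python
import numpy as np
from scipy.stats import chi2

def nis_consistency_check(innovations, innovation_vars, alpha=0.05):
    """Chi-square test on the normalized innovation squared (NIS), scalar measurements.

    innovations     : z_i - H x_i_predicted, one value per time step
    innovation_vars : S_i = H P_i_predicted H^T + R_i, one value per time step
    Returns (is_consistent, total_nis, (lower_bound, upper_bound)).
    """
    innovations = np.asarray(innovations, dtype=float)
    innovation_vars = np.asarray(innovation_vars, dtype=float)
    total_nis = np.sum(innovations**2 / innovation_vars)  # ~ chi2 with N dof if consistent
    n = len(innovations)
    lo, hi = chi2.ppf(alpha / 2, n), chi2.ppf(1 - alpha / 2, n)
    return lo <= total_nis <= hi, total_nis, (lo, hi)

# Purely synthetic example: innovations drawn with exactly the variance the filter claims
rng = np.random.default_rng(0)
ok, total, bounds = nis_consistency_check(rng.normal(0.0, 1.0, 200), np.ones(200))
print(ok, total, bounds)   # expect True, with the total near 200
```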
2020-01-25 10:27:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7241169810295105, "perplexity": 197.38096508428592}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251672440.80/warc/CC-MAIN-20200125101544-20200125130544-00439.warc.gz"}
http://openstudy.com/updates/4f1c4ecde4b04992dd237df7
## anonymous 4 years ago $f(x)= d^2 x/dt +5x$ 1. anonymous is it linear operation? 2. anonymous here's what I did, need confirmation $kf(x_1)+jf(x_2)=d^2 /dt (x_1 k + x_2 j)+ 5(x_1 k + x_2 j)$ 3. Mr.Math What does $$\frac{d^2x}{dt}$$ mean? 4. anonymous $kf(x_1)+jf(x_2)=k(d^2 x_1/dt + 5 x_1 )+ j(d^2 x_2/dt + 5 x_2 ))$ 5. anonymous not sure what you are doing with the k and j. is this a non-homogeneous 2nd order DE? is f(x) like y or can you treat it like x(t) 6. anonymous I am not trying to solve it, just checking linearity 7. anonymous oh ok, yeah it seems to be linear. can you just look at f'(x) f'(x) = 5 implying a constant slope wrt x 8. anonymous oh okay, I will try that 9. anonymous i could be wrong though... i think it depends on what x(t) is since that will determine the behavior of d^2x/dt
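For what it's worth, the linearity question can be settled symbolically once the operator is pinned down. The sketch below assumes the intended operator is L[x] = d²x/dt² + 5x, which is one reading of the thread's d^2x/dt notation (the very point Mr.Math questions above); k and j are arbitrary constants.

```python
import sympy as sp

t, k, j = sp.symbols('t k j')
x1 = sp.Function('x1')(t)
x2 = sp.Function('x2')(t)

L = lambda x: sp.diff(x, t, 2) + 5 * x   # assumed operator: L[x] = x'' + 5x

# Linearity means L[k*x1 + j*x2] - (k*L[x1] + j*L[x2]) vanishes identically
difference = L(k * x1 + j * x2) - (k * L(x1) + j * L(x2))
print(sp.simplify(difference))   # prints 0, so this operator is linear
```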
2016-10-26 21:14:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6803990006446838, "perplexity": 2503.2029431284654}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720973.64/warc/CC-MAIN-20161020183840-00310-ip-10-171-6-4.ec2.internal.warc.gz"}