https://discourse.mc-stan.org/t/target-and-multivariate-likelihood/8019
# Target+= and multivariate likelihood

#1 This question arises from something I was wondering in another thread (Multiple outcomes hurdle model - exists?), where at one stage I'm trying to make a multivariate lognormal model. Due to the change of variables involved, I understand I need to make an adjustment to the model log density. However, my confusion is in how exactly to do this in the multivariate case, which in turn stems from my poor understanding of the structure of the model log density (i.e., is it simply a number, or because I have a multivariate outcome, is the model log density also multivariate?). Full model here: linear_mvlognorm.stan (1.8 KB), but to put it into context, the salient parts of my model are:

```
data {
  int<lower=0> NvarsY;    // num dependent variables
  ....
  vector[NvarsY] y [N];   // data for dependent vars
}
...
model {
  ....
  // likelihood
  log(y) ~ multi_normal_cholesky(mu, L_Sigma);
  for (dv in 1:NvarsY)
    target += -log(y[dv]);
}
```

What I'm unclear on is the last two lines - I don't know if I'm adding the -log(y) term correctly given that y is multivariate, and I don't know what the form of target += should be. Should I be looping over 1:N, or 1:NvarsY as in the example, or is simply -log(y) as a whole entity sufficient, or should I be summing things over y?

#2 The log (posterior) probability is always "only" one number for each iteration in Stan. Remember that (putting it really simply and omitting the normalizing constant) we are looking for p(y|\theta)p(\theta), where p(y|\theta) is the "likelihood" and p(\theta) is the prior. This whole thing p(y|\theta)p(\theta) is the posterior probability (again, omitting the normalizer). Taking the log of this gives us the log posterior, \log p(y|\theta) + \log p(\theta). This is what we are interested in; it's the target. And you see that the log turns all the products into sums. So as long as we are on the log scale, we can just happily add to this target.
For example, if the likelihood has two parameters then we just add another prior to the target, e.g. \log p(y|\theta,\phi) + \log p(\theta) + \log p(\phi). Take the univariate log-normal as a simple example. Let's say we transform y_i to \log(y_i) with i=1,\dots,N observations. The log absolute Jacobian is -\log(y_i), so

\sum_{i=1}^N \left( \log p(\log(y_i)|\mu,\sigma) -\log(y_i) \right) + \log p(\mu) + \log p(\sigma)

or

\sum_{i=1}^N \log p(\log(y_i)|\mu,\sigma) - \sum_{i=1}^N \log(y_i) + \log p(\mu) + \log p(\sigma)

to make it more explicit. \sum_{i=1}^N \log p(\log(y_i)|\mu,\sigma) is target += normal_lpdf(log(y) | mu, sigma) – note that this is vectorized in Stan (over all i) when we have vector[N] y in the data block! Note that log(y) ~ normal(mu, sigma) is essentially the same as the expression above; the sampling-statement form drops constants, but this is not really important here. So this evaluates to a single number (per Stan iteration). (The terms \log p(\mu) and \log p(\sigma) are the priors on \mu and \sigma; note that they also evaluate to one number for each iteration in Stan.) So, now coming to your question: you might think that -\sum_{i=1}^N \log(y_i) would translate to target += -sum(log(y)). And you would be correct! Actually, you could also make it even more explicit and do for (n in 1:N) target += -log(y[n]) – same thing! And finally, the "simple" target += -log(y) evaluates to the same increase of the target as the two other formulations. Long story short: if you have a multivariate normal (or log-normal) model, the log posterior probability still evaluates to one number per iteration. Since everything is on the log scale, you simply add stuff (+=), and so for each dependent variable in vector[NvarsY] y [N]; you just add the Jacobian correction by looping over NvarsY. EDIT: the last bit is wrong - see below.

#3 Great answer @Max_Mantei - again thank you very much.
#4 Sorry, I got something wrong in the last bit of the previous post. When you have vector[NvarsY] y[N], then y[i] gives you the i-th entry along the dimension of size N – so the result is a vector of length NvarsY. The correct way is to properly loop over N, thus:

```
for (n in 1:N)
  target += -log(y[n]);
```

Sorry, that bit did not carry over as seamlessly from the univariate case as I thought. So, here are two things to be careful about: • Correct indexing (maybe use the print() statement to debug) • The sum of logs is not the same as the log of sums… (First think about the target for one observation, and then think about summing over all observations.)
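To see numerically that the formulations above increase the target by the same amount, here is a small sanity check in plain Python rather than Stan (normal_lpdf is written out by hand, for the univariate log-normal case):

```python
import math

def normal_logpdf(x, mu, sigma):
    # log density of Normal(mu, sigma) at x
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def lognormal_logpdf(y, mu, sigma):
    # change of variables: density of Y = exp(X) with X ~ Normal(mu, sigma),
    # i.e. the normal log density of log(y) plus the log Jacobian -log(y)
    return normal_logpdf(math.log(y), mu, sigma) - math.log(y)

ys = [0.5, 1.3, 2.7]
mu, sigma = 0.1, 0.8

# "loop" formulation: add -log(y[n]) observation by observation
target_loop = sum(normal_logpdf(math.log(y), mu, sigma) - math.log(y) for y in ys)

# "sum" formulation: add -sum(log(y)) in one go
target_sum = (sum(normal_logpdf(math.log(y), mu, sigma) for y in ys)
              - sum(math.log(y) for y in ys))
```

Both accumulations match the log-normal log density summed over the observations, illustrating that the target is a single scalar however you accumulate it.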
2019-03-25 08:16:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8732039332389832, "perplexity": 1513.8898966905238}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203842.71/warc/CC-MAIN-20190325072024-20190325094024-00535.warc.gz"}
https://academy.vertabelo.com/course/python-basics-part-1/functions/more-advanced-concepts/function-invocation-with-parameter-names
## Instruction

Good! Coming back to the previous example:

```
def calculate_price(weight, price_per_pound=1.5, tax=0.15):
    return weight * price_per_pound * (1 + tax)
```

You may ask one more question: since optional arguments are assigned values from left to right, how can I invoke the function with explicit values for weight and tax, but not for price_per_pound? The answer is simple: use argument names!

```
calculate_price(170, tax=0.89)
```

With this invocation, the values of the arguments would be: weight=170, price_per_pound=1.5 (default value), tax=0.89.

## Exercise

Given a function named calculate_cone_area, invoke the function with r=5, h=3, and the default value of pi.

### Stuck? Here's a hint!

Provide argument names when you invoke the function.
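A sketch of how such an invocation could look. The course does not show the body of calculate_cone_area, so the formula below (total surface area of a cone, with pi defaulting to 3.14) is only an assumption for illustration; the point is the keyword-argument invocation:

```python
# Hypothetical implementation -- the exercise only specifies the signature.
def calculate_cone_area(r, h, pi=3.14):
    slant = (r ** 2 + h ** 2) ** 0.5          # slant height of the cone
    return pi * r * (r + slant)               # base area + lateral area

# Invoke with r=5, h=3 and the default value of pi, using argument names:
area = calculate_cone_area(r=5, h=3)

# The earlier example works the same way:
def calculate_price(weight, price_per_pound=1.5, tax=0.15):
    return weight * price_per_pound * (1 + tax)

price = calculate_price(170, tax=0.89)        # price_per_pound keeps its default 1.5
```

Because pi is not named in the call, it keeps its default value, exactly as price_per_pound did in the instruction's example.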
2018-12-10 13:03:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6963497996330261, "perplexity": 6200.556142376517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823339.35/warc/CC-MAIN-20181210123246-20181210144746-00509.warc.gz"}
https://puzzling.stackexchange.com/questions/99198/a-puzzle-about-codeforces-subsequences
# A Puzzle about “Codeforces Subsequences” I'm quite intrigued by an "easy" problem in a recent Codeforces contest: Codeforces Subsequences. In this problem, you have to create a string $$T$$, as short as possible, such that $$S =$$ "CODEFORCES" appears as a subsequence (not substring) of $$T$$ at least $$K$$ times. We actually don't care about this $$K$$, for this puzzle at least, as we can reformulate the problem as follows: Given $$S =$$ "CODEFORCES", find a string of length $$N$$ such that $$S$$ appears as a subsequence as many times as possible. It turns out this "simple" strategy works well (marked as spoiler, as this is also the solution for the original problem, but it is needed for the puzzle): You try to clone the first letter 'C', then clone the next 'O', then clone the next 'D', and so on. After cloning the last letter 'S', you clone the 'C' again (so there are $$3$$ 'C's), then the 'O' (so there are $$3$$ 'O's), and so on. You keep doing this until the length is $$N$$. So if $$N = 23$$, then "CCCOOODDDEEFFOORRCCEESS" is optimal. For your information, it contains $$3^3 \times 2^7 = 3456$$ subsequences equal to $$S =$$ "CODEFORCES". Now the puzzle is this. The above strategy is pretty "obviously" correct for... most possible $$S$$ and $$N$$. Surprisingly for some people, I think, it actually won't work for all possible $$S$$ and $$N$$. Your task is to find such $$S$$ and $$N$$ so that the above strategy is wrong! NB. Even though this problem is taken from a competitive programming contest, this puzzle should be done by hand. A subsequence is a sequence generated from a string after deleting some characters of the string without changing the order of the remaining characters. ## 2 Answers Here's one possibility: Take $$S$$ to be BASS, and $$N$$ to be $$6$$. Our strategy dictates the final string should be BBAASS, which has $$4$$ subsequences equal to BASS. But we can do better by taking BASSSS, which has $$6$$ of them. • Yup, this will do!
What if two adjacent letters must be different? :) – athin Jun 19 '20 at 4:40 Take the string as "abcbc" and k=5. Now, according to the algorithm we get "aabbccbc" (three extra characters). But, due to the repetition of "bc", we can take the string "abcbcbc" (which has just two extra characters). This will give us exactly 5 subsequences equal to "abcbc".
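The counts quoted above are easy to verify with a standard dynamic-programming subsequence counter (a quick sketch in Python; the puzzle itself is meant to be done by hand):

```python
def count_subsequences(t, s):
    """Count the occurrences of s as a subsequence of t."""
    # dp[j] = number of ways to match the first j characters of s so far
    dp = [1] + [0] * len(s)
    for ch in t:
        # iterate j in descending order so each character of t is used at most once
        for j in range(len(s), 0, -1):
            if s[j - 1] == ch:
                dp[j] += dp[j - 1]
    return dp[len(s)]
```

For example, count_subsequences("BBAASS", "BASS") gives 4 while count_subsequences("BASSSS", "BASS") gives 6, confirming the accepted answer, and the same function reproduces the 3456 count for "CCCOOODDDEEFFOORRCCEESS".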
2021-03-01 14:06:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 25, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7294425368309021, "perplexity": 950.3587086572178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178362513.50/warc/CC-MAIN-20210301121225-20210301151225-00348.warc.gz"}
https://www.urionlinejudge.com.br/repository/UOJ_3143_en.html
URI Online Judge | 3143 # Escaping the Cell Phone By Roger Eliodoro Condras, UFSC-ARA Brazil Timelimit: 1 Ritcheli (and yes, that's his name) is a very nice guy and a great friend, but he has a serious problem: he takes a long time to respond to messages on social media. Being able to talk to him, even more so in these pandemic times, is an almost impossible mission. While he is too lazy to pick up his cell phone to reply to messages as they arrive, he is also too lazy to spend hours reading accumulated messages after spending days running away from his cell phone. In an attempt to become a more active person on social networks and not let too many messages accumulate, he decided to adopt a new criterion for responding to messages: the number of lines of messages received. This is because the number of messages received is not a very accurate measure of the time he will spend reading them. He can receive 20 messages of 100 lines each and take much longer to read them than if he had received 50 messages with one line each. But there is a problem with all of this: applications report only the number of messages received, not the number of lines accumulated from unread messages. So, Ritcheli would like your help to write an algorithm that counts the lines of the messages received and reports the total to him. Can you help him with this task? Some notes: Consider that each line of a message is always displayed with the same number of characters, the maximum number of characters that the cell phone screen can display per line. When this number is exceeded, the remaining text is truncated and continues on the next line, regardless of whether a word is eventually cut in half. If the first character of a new line is a space, it is disregarded and the line starts at the next character other than a space. If a message ends in the middle of a line, with space left on that line, the next message starts on a new line, not in the middle of the previous one.
## Input The first line of the input has an integer N (10 $$\leq$$ N $$\leq$$ 1000), the number of characters that fit per line on Ritcheli's cell phone screen. The next lines contain several strings, and the reading of the file ends with EOF. Each line represents a message and is made up of printable characters from the ASCII table. The length of each line does not exceed 10,000 characters, and a message does not begin or end with space characters. ## Output A single line with an integer: the total number of lines that Ritcheli will have to read after computing all incoming messages. Input Sample: 10 Oi Ritch Eh o Roger Vc ta vivo ainda? Faz dias q vc n me respondeu mais :(:(:(:(:( Output Sample: 8
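A possible sketch of the counting logic in Python. This reflects one reading of the wrapping rules stated above (it is not an accepted solution, and the input parsing is omitted):

```python
def count_lines(width, messages):
    """Total screen lines needed, wrapping each message at `width` characters.

    A space that would start a new line is skipped (per the statement), and
    every message begins on a fresh line.
    """
    total = 0
    for msg in messages:
        text = msg
        while text:
            # take one screen line, then drop any spaces that would start the next
            text = text[width:].lstrip(' ')
            total += 1
    return total
```

For instance, with a 10-character screen, "Oi Ritch" takes one line, while "Vc ta vivo ainda?" wraps into two lines under this interpretation.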
2021-03-05 03:08:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2790471613407135, "perplexity": 771.3704562951202}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369721.76/warc/CC-MAIN-20210305030131-20210305060131-00223.warc.gz"}
http://signalsurgeon.com/gentle-introduction-to-vector-norms-in-machine-learning/
Data Science # Gentle Introduction to Vector Norms in Machine Learning Calculating the length or magnitude of vectors is often required either directly as a regularization method in machine learning, or as part of broader vector or matrix operations. In this tutorial, you will discover the different ways to calculate vector lengths or magnitudes, called the vector norm. After completing this tutorial, you will know: • The L1 norm, calculated as the sum of the absolute values of the vector. • The L2 norm, calculated as the square root of the sum of the squared vector values. • The max norm, calculated as the maximum absolute vector value. Let's get started. ## Tutorial Overview This tutorial is divided into 4 parts; they are: 1. Vector Norm 2. Vector L1 Norm 3. Vector L2 Norm 4. Vector Max Norm ## Vector Norm Calculating the size or length of a vector is often required either directly or as part of a broader vector or vector-matrix operation. The length of the vector is referred to as the vector norm or the vector's magnitude. The length of a vector is a nonnegative number that describes the extent of the vector in space, and is sometimes referred to as the vector's magnitude or the norm. — Page 112, No Bullshit Guide To Linear Algebra, 2017 The length of the vector is always a positive number, except for a vector of all zero values. It is calculated using some measure that summarizes the distance of the vector from the origin of the vector space. For example, the origin of a vector space for a vector with 3 elements is (0, 0, 0).
Notations are used to represent the vector norm in broader calculations, and each type of vector norm calculation almost always has its own unique notation. We will take a look at a few common vector norm calculations used in machine learning. ## Vector L1 Norm The length of a vector can be calculated using the L1 norm, where the 1 is a superscript of the L, e.g. L^1. The notation for the L1 norm of a vector is ||v||1, where 1 is a subscript. As such, this length is sometimes called the taxicab norm or the Manhattan norm. `l1(v) = ||v||1` The L1 norm is calculated as the sum of the absolute vector values, where the absolute value of a scalar uses the notation |a1|. In effect, the norm is a calculation of the Manhattan distance from the origin of the vector space. `||v||1 = |a1| + |a2| + |a3|` The L1 norm of a vector can be calculated in NumPy using the norm() function with a parameter to specify the norm order, in this case 1.

```
# l1 norm of a vector
from numpy import array
from numpy.linalg import norm
a = array([1, 2, 3])
print(a)
l1 = norm(a, 1)
print(l1)
```

First, a vector with three elements is defined, then the L1 norm of the vector is calculated. Running the example first prints the defined vector and then the vector's L1 norm.

```
[1 2 3]
6.0
```

The L1 norm is often used when fitting machine learning algorithms as a regularization method, e.g. a method to keep the coefficients of the model small, and in turn, the model less complex. ## Vector L2 Norm The length of a vector can be calculated using the L2 norm, where the 2 is a superscript of the L, e.g. L^2. The notation for the L2 norm of a vector is ||v||2, where 2 is a subscript. `l2(v) = ||v||2` The L2 norm calculates the distance of the vector coordinate from the origin of the vector space. As such, it is also known as the Euclidean norm, as it is calculated as the Euclidean distance from the origin. The result is a positive distance value.
The L2 norm is calculated as the square root of the sum of the squared vector values. `||v||2 = sqrt(a1^2 + a2^2 + a3^2)` The L2 norm of a vector can be calculated in NumPy using the norm() function with default parameters.

```
# l2 norm of a vector
from numpy import array
from numpy.linalg import norm
a = array([1, 2, 3])
print(a)
l2 = norm(a)
print(l2)
```

First, a vector with three elements is defined, then the L2 norm of the vector is calculated. Running the example first prints the defined vector and then the vector's L2 norm.

```
[1 2 3]
3.74165738677
```

Like the L1 norm, the L2 norm is often used when fitting machine learning algorithms as a regularization method, e.g. a method to keep the coefficients of the model small and, in turn, the model less complex. By far, the L2 norm is more commonly used than other vector norms in machine learning. ## Vector Max Norm The length of a vector can be calculated using the maximum norm, also called the max norm. The max norm of a vector is referred to as L^inf, where inf is a superscript and can be represented with the infinity symbol. The notation for the max norm is ||x||inf, where inf is a subscript. `maxnorm(v) = ||v||inf` The max norm is calculated as the maximum absolute value of the vector, hence the name. `||v||inf = max(|a1|, |a2|, |a3|)` The max norm of a vector can be calculated in NumPy using the norm() function with the order parameter set to inf.

```
# max norm of a vector
from numpy import inf
from numpy import array
from numpy.linalg import norm
a = array([1, 2, 3])
print(a)
maxnorm = norm(a, inf)
print(maxnorm)
```

First, a vector with three elements is defined, then the max norm of the vector is calculated. Running the example first prints the defined vector and then the vector's max norm.

```
[1 2 3]
3.0
```

The max norm is also used as a regularization in machine learning, such as on neural network weights, called max norm regularization. ## Extensions This section lists some ideas for extending the tutorial that you may wish to explore.
• Create 5 examples using each operation using your own data. • Implement each norm manually for vectors defined as lists. • Search machine learning papers and find 1 example of each operation being used. If you explore any of these extensions, I'd love to know. ## Summary In this tutorial, you discovered the different ways to calculate vector lengths or magnitudes, called the vector norm. Specifically, you learned: • The L1 norm, calculated as the sum of the absolute values of the vector. • The L2 norm, calculated as the square root of the sum of the squared vector values. • The max norm, calculated as the maximum absolute vector value. Do you have any questions? The post Gentle Introduction to Vector Norms in Machine Learning appeared first on Machine Learning Mastery.
2021-04-10 15:09:37
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9171048998832703, "perplexity": 538.5131923113357}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038057142.4/warc/CC-MAIN-20210410134715-20210410164715-00636.warc.gz"}
https://en.m.wikiversity.org/wiki/PlanetPhysics/Tensor
# PlanetPhysics/Tensor Tensors are another abstract mathematical tool at our disposal for solving problems from rotating rigid bodies to the structure of the Universe. Not only does the use of tensor notation clean up complex equations, but it also allows us to embody the invariance of physical quantities within tensors. This is worth repeating: we can use tensors to write physical equations independent of the choice of coordinate system. In the most general way, we define a tensor ${\displaystyle T}$ based on how it behaves under coordinate transformations: ${\displaystyle {\bar {T}}_{j_{1},j_{2},\cdots j_{q}}^{i_{1},i_{2},\cdots i_{p}}=T_{s_{1},s_{2},\cdots s_{q}}^{r_{1},r_{2},\cdots r_{p}}{\frac {\partial {\bar {x}}^{i_{1}}}{\partial x^{r_{1}}}}{\frac {\partial {\bar {x}}^{i_{2}}}{\partial x^{r_{2}}}}\cdots {\frac {\partial {\bar {x}}^{i_{p}}}{\partial x^{r_{p}}}}{\frac {\partial x^{s_{1}}}{\partial {\bar {x}}^{j_{1}}}}{\frac {\partial x^{s_{2}}}{\partial {\bar {x}}^{j_{2}}}}\cdots {\frac {\partial x^{s_{q}}}{\partial {\bar {x}}^{j_{q}}}}}$ where the tensor rank is ${\displaystyle n=p+q}$, the contravariant rank is ${\displaystyle p}$ and the covariant rank is ${\displaystyle q}$. Note that rank is also referred to as the tensor order. Although Eq. (1) can be intimidating at first, one can become familiar with it by working through simple examples and drawing on experience with scalars, vectors and matrices. A scalar quantity such as temperature or density is invariant under coordinate transformation and is labeled a tensor of rank zero. The next step is a tensor of rank 1.
If ${\displaystyle p=1}$ and ${\displaystyle q=0}$, then ${\displaystyle n=1}$ and we have a contravariant vector ${\displaystyle {\bar {T}}^{i_{1}}=T^{r_{1}}{\frac {\partial {\bar {x}}^{i_{1}}}{\partial x^{r_{1}}}}}$ If ${\displaystyle p=0}$ and ${\displaystyle q=1}$, then ${\displaystyle n=1}$ and we have a covariant vector ${\displaystyle {\bar {T}}_{j_{1}}=T_{s_{1}}{\frac {\partial x^{s_{1}}}{\partial {\bar {x}}^{j_{1}}}}}$ It is important to realize the differences between Eq. (2) and Eq. (3), i.e. Eq. (2) can represent the transformation of familiar vectors, while Eq. (3) can represent the transformation between basis vectors. This can best be illustrated with the simple examples of the transformation between cartesian coordinates and polar coordinates and the transformation between cartesian basis vectors and polar basis vectors.
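As a numerical illustration (a sketch, not part of the original article): the contravariant transformation law of Eq. (2) can be checked for the Cartesian-to-polar map, where the partials form the Jacobian matrix of (r, theta) with respect to (x, y):

```python
import math

def jacobian_polar(x, y):
    # Partial derivatives of (r, theta) with respect to (x, y),
    # for r = sqrt(x^2 + y^2), theta = atan2(y, x)
    r = math.hypot(x, y)
    return [[x / r, y / r],
            [-y / r ** 2, x / r ** 2]]

def transform_contravariant(jac, v):
    # Eq. (2): transformed component i is the sum over r of (d x_bar^i / d x^r) * T^r
    return [sum(jac[i][k] * v[k] for k in range(2)) for i in range(2)]

# A unit vector along x, attached at the point (x, y) = (1, 1), i.e. theta = pi/4
v_cart = [1.0, 0.0]
v_polar = transform_contravariant(jacobian_polar(1.0, 1.0), v_cart)
# r-component: cos(pi/4); theta-component: -sin(pi/4)/r = -1/2
```

The transformed components agree with the analytic values obtained by differentiating r and theta directly, which is the point of the transformation law: the same geometric object, expressed in two coordinate systems.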
2021-01-25 11:08:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 13, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9438327550888062, "perplexity": 164.847002553967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703565541.79/warc/CC-MAIN-20210125092143-20210125122143-00343.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/angle-bisector-in-following-figure-check-whether-ad-bisector-a-abc-following-case-ab-5-cm-ac-12-cm-bd-25-cm-bc-9-cm_22382
# In the Following Figure, Check Whether AD Is the Bisector of ∠A of ∆ABC in the Following Case: AB = 5 cm, AC = 12 cm, BD = 2.5 cm and BC = 9 cm - CBSE Class 10 - Mathematics #### Question In the following figure, check whether AD is the bisector of ∠A of ∆ABC in the following case: AB = 5 cm, AC = 12 cm, BD = 2.5 cm and BC = 9 cm #### Solution It is given that AB = 5 cm, AC = 12 cm, BD = 2.5 cm and BC = 9 cm. We have to check whether AD is the bisector of ∠A. By the angle bisector theorem, AD bisects ∠A if and only if "BD"/"CD" = "AB"/"AC". Since D lies on BC, CD = BC − BD = 9 − 2.5 = 6.5 cm. Now "AB"/"AC" = 5/12 and "BD"/"CD" = 2.5/6.5 = 5/13. Since "AB"/"AC" != "BD"/"CD", AD is not the bisector of ∠A. Concept: Angle Bisector.
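The check can be written as a short sketch (illustrative, not part of the original solution), using exact rational arithmetic and the angle bisector theorem with CD = BC − BD:

```python
from fractions import Fraction

def is_angle_bisector(ab, ac, bd, bc):
    # Angle bisector theorem: AD bisects angle A iff BD/CD = AB/AC,
    # where D lies on segment BC, so CD = BC - BD.
    cd = Fraction(bc) - Fraction(bd)
    return Fraction(bd) / cd == Fraction(ab) / Fraction(ac)

# AB = 5, AC = 12, BD = 2.5 = 5/2, BC = 9
result = is_angle_bisector(5, 12, Fraction(5, 2), 9)   # BD/CD = 5/13, AB/AC = 5/12
```

Using Fraction avoids floating-point comparison issues, so the two ratios are compared exactly.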
2019-07-16 23:30:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5883070826530457, "perplexity": 1873.8487404289826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524972.66/warc/CC-MAIN-20190716221441-20190717003441-00160.warc.gz"}
http://zte.magtechjournal.com/EN/volumn/volumn_68.shtml
#### Table of Contents 25 January 2022, Volume 20 Issue S1

Research Paper

An Improved Parasitic Parameter Extraction Method for InP HEMT
DUAN Lanyan, LU Hongliang, QI Junjun, ZHANG Yuming, ZHANG Yimen
2022, 20(S1): 1-6. doi:10.12142/ZTECOM.2022S1001
An improved parasitic parameter extraction method for InP high electron mobility transistor (HEMT) is presented. Parasitic parameter extraction is the first step of model parameter extraction, and its accuracy has a great impact on the subsequent internal parameter extraction. It is necessary to accurately determine and effectively eliminate the parasitic effects, so as to avoid error propagation to the internal circuit parameters. In this paper, in order to obtain higher accuracy, the parasitic parameters are extracted based on the traditional analytical method combined with an optimization algorithm to obtain the best parasitic parameters. The validity of the proposed parasitic parameter extraction method is verified by excellent agreement between the measured and modeled S-parameters up to 40 GHz for InP HEMT. Over 0.1–40 GHz, the average relative error of the optimization algorithm is about 9% lower than that of the analytical method, which verifies the validity of the parasitic parameter extraction method. The extraction of parasitic parameters not only provides a foundation for the high-precision extraction of the small-signal intrinsic parameters of HEMT devices, but also lays a foundation for the high-precision extraction of the large-signal and noise equivalent circuit model parameters of HEMT devices.

Auxiliary Fault Location on Commercial Equipment Based on Supervised Machine Learning
ZHAO Zipiao, ZHAO Yongli, YAN Boyuan, WANG Dajiang
2022, 20(S1): 7-15. doi:10.12142/ZTECOM.2022S1002
As the fundamental infrastructure of the Internet, the optical network carries a great amount of Internet traffic, and there would be great financial losses if faults happened. Therefore, fault location is very important for operation and maintenance in optical networks. Due to the complex relationships among network elements at the topology level, boards at the network element level, and components at the board level, concrete fault location is hard for traditional methods. In recent years, machine learning, especially deep learning, has been applied to many complex problems, because machine learning can find potential non-linear mappings from inputs to outputs. In this paper, we introduce supervised machine learning to propose a complete process for fault location. Firstly, we use data preprocessing, data annotation, and data augmentation to process the originally collected data and build a high-quality dataset. Then, two machine learning algorithms (convolutional neural networks and deep neural networks) are applied to the dataset. The evaluation on commercial optical networks shows that this process helps improve the quality of the dataset, and both algorithms perform well on fault location.

Design of Raptor-Like Rate Compatible SC-LDPC Codes
SHI Xiangyi, HAN Tongzhou, TIAN Hai, ZHAO Danfeng
2022, 20(S1): 16-21. doi:10.12142/ZTECOM.2022S1003
This paper proposes a family of raptor-like rate-compatible spatially coupled low-density parity-check (RL-RC-SC-LDPC) codes from RL-RC-LDPC block codes. There are two important keys. One is the performance of the base matrix. RL-LDPC codes have been adopted in the technical specification of 5G new radio (5G NR), and we use the 5G NR LDPC code as the base matrix. The other is the edge coupling design. In this regard, we have designed a rate-compatible coupling algorithm, which can improve performance under multiple code rates. The constructed RL-RC-SC-LDPC codes require a large coupling length $L$, and thus we improved the reciprocal channel approximation (RCA) algorithm and propose a sliding-window RCA algorithm, which provides lower complexity and latency than the RCA algorithm. The code family shows improved thresholds close to the Shannon limit and better finite-length performance compared with 5G NR LDPC codes on the additive white Gaussian noise (AWGN) channel.

Derivative-Based Envelope Design Technique for Wideband Envelope Tracking Power Amplifier with Digital Predistortion
YI Xueya, CHEN Jixin, CHEN Peng, NING Dongfang, YU Chao
2022, 20(S1): 22-26. doi:10.12142/ZTECOM.2022S1004
A novel envelope design for an envelope tracking (ET) power amplifier (PA) based on its derivatives is proposed, which can trade off well between bandwidth reduction and tracking accuracy. This paper theoretically analyzes how to choose an envelope design that can track the original envelope closely and reduce its bandwidth, and then demonstrates an example to validate this idea. The generalized memory polynomial (GMP) model is applied to compensate for the nonlinearity of the ET PA with the proposed envelope design. Experiments are carried out on an ET system operated at a center frequency of 3.5 GHz and excited by a 20 MHz LTE signal; they show that the proposed envelope design can make a good trade-off between envelope bandwidth and efficiency, and satisfactory linearization performance can be realized.

End-to-End Chinese Entity Recognition Based on BERT-BiLSTM-ATT-CRF
LI Daiyi, TU Yaofeng, ZHOU Xiangsheng, ZHANG Yangming, MA Zongmin
2022, 20(S1): 27-35. doi:10.12142/ZTECOM.2022S1005
Traditional named entity recognition methods need professional domain knowledge and a large amount of human participation to extract features; Chinese named entity recognition methods based on neural network models also suffer from vector representations that are too singular in the process of character vector representation. To solve this problem, we propose a Chinese named entity recognition method based on the BERT-BiLSTM-ATT-CRF model. Firstly, we use the bidirectional encoder representations from transformers (BERT) pre-trained language model to obtain the semantic vector of a word according to its context; secondly, the word vectors trained by BERT are input into a bidirectional long short-term memory network with an embedded attention mechanism (BiLSTM-ATT) to capture the most important semantic information in the sentence; finally, a conditional random field (CRF) is used to learn the dependence between adjacent tags and obtain the globally optimal sentence-level tag sequence. The experimental results show that the proposed model achieves state-of-the-art performance on both the Microsoft Research Asia (MSRA) corpus and the People's Daily corpus, with F1 values of 94.77% and 95.97% respectively.

Intelligent Antenna Attitude Parameters Measurement Based on Deep Learning SSD Model
FAN Guotian, WANG Zhibin
2022, 20(S1): 36-43. doi:10.12142/ZTECOM.2022S1006
Due to safety considerations, non-contact measurement methods are becoming more acceptable. However, massive measurement will bring high labor cost and low working efficiency.
To address these limitations, this paper introduces a deep learning model for the antenna attitude parameter measurement, which can be divided into an antenna location phase and a calculation phase of the attitude parameter. In the first phase, a single shot multibox detector (SSD) is applied to automatically recognize and discover the antenna from pictures taken by drones. In the second phase, the located antennas’ feature lines are extracted and their attitude parameters are then calculated mathematically. Experiments show that the proposed algorithms outperform existing related works in efficiency and accuracy, and therefore can be effectively used in engineering applications. Multi-Task Learning with Dynamic Splitting for Open-Set Wireless Signal Recognition XU Yujie, ZHAO Qingchen, XU Xiaodong, QIN Xiaowei, CHEN Jianqiang 2022, 20(S1):  44-56.  doi:10.12142/ZTECOM.2022S1007 Asbtract ( 37 )   HTML ( 4)   PDF (1899KB) ( 62 ) Figures and Tables | References | Related Articles | Metrics Open-set recognition (OSR) is a realistic problem in wireless signal recognition, which means that during the inference phase there may appear unknown classes not seen in the training phase. The method of intra-class splitting (ICS) that splits samples of known classes to imitate unknown classes has achieved great performance. However, this approach relies too much on the predefined splitting ratio and may face huge performance degradation in new environment. In this paper, we train a multi-task learning (MTL) network based on the characteristics of wireless signals to improve the performance in new scenes. Besides, we provide a dynamic method to decide the splitting ratio per class to get more precise outer samples. To be specific, we make perturbations to the sample from the center of one class toward its adversarial direction and the change point of confidence scores during this process is used as the splitting threshold. 
We conduct several experiments on one wireless signal dataset collected at 2.4 GHz ISM band by LimeSDR and one open modulation recognition dataset, and the analytical results demonstrate the effectiveness of the proposed method. Multi-Cell Uplink Interference Management: A Distributed Power Control Method HU Huimin, LIU Yuan, GE Yiyang, WEI Ning, XIONG Ke 2022, 20(S1):  56-63.  doi:10.12142/ZTECOM.2022S1008 Asbtract ( 41 )   HTML ( 7)   PDF (1395KB) ( 184 ) Figures and Tables | References | Related Articles | Metrics This paper investigates a multi-cell uplink network, where the orthogonal frequency division multiplexing (OFDM) protocol is considered to mitigate the intra-cell interference. An optimization problem is formulated to maximize the user supporting ratio for the uplink multi-cell system by optimizing the transmit power. This paper adopts the user supporting ratio as the main performance metric. Our goal is to improve the user supporting ratio of each cell. Since the formulated optimization problem is non-convex, it cannot be solved by using traditional convex-based optimization methods. Thus, a distributed method with low complexity and a small amount of multi-cell interaction is proposed. Numerical results show that a notable performance gain achieved by our proposed scheme compared with the traditional one is without inter-cell interaction. SVM for Constellation Shaped 8QAM PON System LI Zhongya, CHEN Rui, HUANG Xingang, ZHANG Junwen, NIU Wenqing, LU Qiuyi, CHI Nan 2022, 20(S1):  64-71.  doi:10.12142/ZTECOM.2022S1009 Asbtract ( 63 )   HTML ( 1)   PDF (3357KB) ( 35 ) Figures and Tables | References | Related Articles | Metrics Nonlinearity impairments and distortions have been bothering the bandwidth constrained passive optical network (PON) system for a long time and limiting the development of capacity in the PON system. 
Unlike other works concentrating on the exploration of the complex equalization algorithm, we investigate the potential of constellation shaping joint support vector machine (SVM) classification scheme. At the transmitter side, the 8 quadrature amplitude modulation (8QAM) constellation is shaped into three designs to mitigate the influence of noise and distortions in the PON channel. On the receiver side, simple multi-class linear SVM classifiers are utilized to replace complex equalization methods. Simulation results show that with the bandwidth of 25 GHz and overall bitrate of 50 Gbit/s, at 10 dBm input optical power of a 20 km standard single mode fiber (SSMF), and under a hard-decision forward error correction (FEC) threshold, transmission can be realized by employing Circular (4, 4) shaped 8QAM joint SVM classifier at the maximal power budget of 37.5 dB. Review General Introduction of Non-Terrestrial Networks for New Radio HAN Jiren, GAO Yin 2022, 20(S1):  72-78.  doi:10.12142/ZTECOM.2022S1010 Asbtract ( 111 )   HTML ( 10)   PDF (1266KB) ( 79 ) Figures and Tables | References | Related Articles | Metrics In the new radio (NR) access technology, non-terrestrial networks (NTN) are introduced to meet the requirement of anywhere and anytime connections from the world market. With the introduction of NTN, the NR system is able to offer the wide-area coverage and ensure the service availability for users. In this paper, the general aspects of NTN are introduced, including the NTN architecture overview, the impact of NTN on next-generation radio access network (NG-RAN) interface functions, mobility scenarios and other NTN related issues. The current progress in 3GPP Release 17 is also provided.
http://sp16.datastructur.es/materials/hw/hw4/hw4.html
# Homework 4: 8 Puzzle

## Getting the Skeleton Files

As usual, run `git pull skeleton master` to get the skeleton files.

## Video Introduction

A video that I produced a couple of years ago for this assignment can be found at this link. Some notable differences for our semester:

• You do not have to write Board.neighbors.
• Board.toString is provided.
• You do not have to write Board.isSolvable.

## Introduction

In this assignment, we'll be making our own puzzle solver! The 8-puzzle is a puzzle invented and popularized by Noyes Palmer Chapman in the 1870s. It is played on a 3-by-3 grid with 8 square tiles labeled 1 through 8 and a blank square. Your goal is to rearrange the tiles so that they are in order, using as few moves as possible. You are permitted to slide tiles horizontally or vertically into the blank square. The following shows a sequence of legal moves from an initial board (left) to the goal board (right).

```
   1  3        1     3       1  2  3       1  2  3       1  2  3
4  2  5   =>   4  2  5  =>   4     5  =>   4  5     =>   4  5  6
7  8  6        7  8  6       7  8  6       7  8  6       7  8

initial        1 left         2 up         5 left         goal
```

Now, we describe a solution to the problem that illustrates a general artificial intelligence methodology known as the A* search algorithm. We define a search node of the game to be a board, the number of moves made to reach the board, and the previous search node. First, insert the initial search node (the initial board, 0 moves, and a null previous search node) into a priority queue. Then, delete from the priority queue the search node with the minimum priority, and insert onto the priority queue all neighboring search nodes (those that can be reached in one move from the dequeued search node). Repeat this procedure until the search node dequeued corresponds to a goal board. The success of this approach hinges on the choice of priority function for a search node.
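The search-node / priority-queue procedure just described can be sketched in Java. This is a deliberately toy illustration, not a solution to the homework: instead of an 8-puzzle `Board`, the "board" here is a single integer position on a number line, where a move shifts it by ±1, the goal is 0, and `|position|` plays the role of the Manhattan heuristic. (The actual assignment requires `MinPQ` from `edu.princeton.cs.algs4`; `java.util.PriorityQueue` is used here only so the sketch is self-contained.)

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class AStarSketch {
    // A search node: a board, the moves made to reach it, and the previous node.
    static final class SearchNode {
        final int board;        // toy stand-in for a Board
        final int moves;        // moves made to reach this node
        final SearchNode prev;  // previous search node (null for the root)

        SearchNode(int board, int moves, SearchNode prev) {
            this.board = board; this.moves = moves; this.prev = prev;
        }

        // Priority = heuristic estimate of remaining distance + moves already made.
        int priority() { return Math.abs(board) + moves; }
    }

    /** Runs A* and returns the minimum number of moves from start to the goal 0. */
    public static int solve(int start) {
        PriorityQueue<SearchNode> pq =
            new PriorityQueue<>(Comparator.comparingInt(SearchNode::priority));
        pq.add(new SearchNode(start, 0, null));
        while (true) {
            SearchNode node = pq.remove();           // dequeue minimum priority
            if (node.board == 0) return node.moves;  // goal dequeued: optimal
            for (int nb : new int[]{node.board - 1, node.board + 1}) {
                // Critical optimization (see below): never re-enqueue the
                // board of the previous search node.
                if (node.prev != null && nb == node.prev.board) continue;
                pq.add(new SearchNode(nb, node.moves + 1, node));
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(solve(5));  // 5 moves: 5 -> 4 -> 3 -> 2 -> 1 -> 0
    }
}
```

Because `|position|` is an admissible heuristic (it never overestimates), the first time the goal is dequeued the move count is optimal, which is exactly the key observation made below for the Hamming and Manhattan priorities.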
We consider two priority functions:

• Hamming priority function: The number of tiles in the wrong position, plus the number of moves made so far to get to the search node. Intuitively, a search node with a small number of tiles in the wrong position is close to the goal, and we prefer a search node that has been reached using a small number of moves.
• Manhattan priority function: The sum of the Manhattan distances (sum of the vertical and horizontal distances) from the tiles to their goal positions, plus the number of moves made so far to get to the search node.

For example, the Hamming and Manhattan priorities of the initial search node below are 5 and 10, respectively.

```
8  1  3        1  2  3     1  2  3  4  5  6  7  8    1  2  3  4  5  6  7  8
4  2           4  5  6     ----------------------    ----------------------
7  6  5        7  8        1  1  0  0  1  1  0  1    1  2  0  0  2  2  0  3

initial          goal         Hamming = 5 + 0          Manhattan = 10 + 0
```

We make a key observation: To solve the puzzle from a given search node on the priority queue, the total number of moves we need to make (including those already made) is at least its priority, using either the Hamming or Manhattan priority function. (For Hamming priority, this is true because each tile that is out of place must move at least once to reach its goal position. For Manhattan priority, this is true because each tile must move its Manhattan distance from its goal position. Note that we do not count the blank square when computing the Hamming or Manhattan priorities.) Consequently, when the goal board is dequeued, we have discovered not only a sequence of moves from the initial board to the goal board, but one that makes the fewest number of moves. (Challenge for the mathematically inclined: prove this fact.)

#### Optimizations

A critical optimization: Best-first search has one annoying feature: search nodes corresponding to the same board are enqueued on the priority queue many times.
To reduce unnecessary exploration of useless search nodes, when considering the neighbors of a search node, don't enqueue a neighbor if its board is the same as the board of the previous search node. A second optimization: To avoid recomputing the Manhattan distance of a board (or, alternatively, the Manhattan priority of a solver node) from scratch each time during various priority queue operations, compute it at most once per object; save its value in an instance variable; and return the saved value as needed. This caching technique is broadly applicable: consider using it in any situation where you are recomputing the same quantity many times and for which computing that quantity is a bottleneck operation. #### Game Tree One way to view the computation is as a game tree, where each search node is a node in the game tree and the children of a node correspond to its neighboring search nodes. The root of the game tree is the initial search node; the internal nodes have already been processed; the leaf nodes are maintained in a priority queue; at each step, the A* algorithm removes the node with the smallest priority from the priority queue and processes it (by adding its children to both the game tree and the priority queue). 
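The caching optimization above can be sketched as follows. This is a toy, not the homework `Board`: the "board" is a 1-D array whose "Manhattan distance" is how far each tile sits from its sorted position, and the `expensiveCalls` counter is instrumentation added purely to show that the computation runs only once per object.

```java
public class CachedPriority {
    private final int[] tiles;
    private int manhattan = -1;            // -1 means "not computed yet"
    public static int expensiveCalls = 0;  // instrumentation for this sketch

    public CachedPriority(int[] tiles) {
        this.tiles = tiles.clone();        // defensive copy keeps us immutable
    }

    public int manhattan() {
        if (manhattan == -1) {             // first call: compute and cache
            expensiveCalls++;
            int sum = 0;
            for (int i = 0; i < tiles.length; i++) {
                // 1-D "Manhattan distance": tile k belongs at index k - 1.
                if (tiles[i] != 0) sum += Math.abs(tiles[i] - 1 - i);
            }
            manhattan = sum;
        }
        return manhattan;                  // later calls: return the saved value
    }
}
```

Caching is safe here precisely because the object is immutable: the distance can never change after construction, so the saved value never goes stale.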
#### Board

Organize your program by creating an immutable Board class with the following API:

```java
public class Board {
    public Board(int[][] tiles)
    public int tileAt(int i, int j)
    public int size()
    public int hamming()
    public int manhattan()
    public boolean isGoal()
    public boolean equals(Object y)
    public String toString()
}
```

Where the methods work as follows:

Board(tiles): Constructs a board from an N-by-N array of tiles where tiles[i][j] = tile at row i, column j
tileAt(i, j): Returns the value of the tile at row i, column j (or 0 if blank)
size(): Returns the board size N
hamming(): Hamming priority function defined above
manhattan(): Manhattan priority function defined above
isGoal(): Returns true if this board is the goal board
equals(y): Returns true if this board's tile values are in the same positions as y's
toString(): Returns the string representation of the board. This method is provided in the skeleton

Corner cases: You may assume that the constructor receives an N-by-N array containing the N² integers between 0 and N² − 1, where 0 represents the blank square. The tileAt() method should throw a java.lang.IndexOutOfBoundsException unless both i and j are between 0 and N − 1.

Performance requirements: Your implementation should support all Board methods in time proportional to N² (or faster) in the worst case.

#### Solver

Before moving on, note that you are provided a BoardUtils.class file, which supports a public static Iterable<Board> neighbors(Board b) method. You may find this method useful for this part. Create an immutable Solver class with the following API:

```java
public class Solver {
    public Solver(Board initial)
    public int moves()
    public Iterable<Board> solution()
}
```

Where the methods work as follows:

Solver(initial): Constructor which solves the puzzle, computing everything necessary so that moves() and solution() do not have to solve the problem again. Solves the puzzle using the A* algorithm. Assumes a solution exists.
moves(): Returns the minimum number of moves to solve the initial board
solution(): Returns the sequence of Boards from the initial board to the solution

To implement the A* algorithm, you must use the MinPQ class from edu.princeton.cs.algs4 for the priority queue. Additionally, use the Manhattan priority function for your Solver. Hint: Recall the search node concept mentioned above for using your PQ.

#### Solver Test Client

We've provided some basic code in Solver.java for you to test your solver against an input file. Do not modify this method. Puzzle input files are provided in the input folder. The input and output format for a board is the board size N followed by the N-by-N initial board, using 0 to represent the blank square. An example of an input file for N = 3 would look something like this:

```
3
0 1 3
4 2 5
7 8 6
```

Your program should work correctly for arbitrary N-by-N boards (for any 1 < N < 32768), even if it is too slow to solve some of them in a reasonable amount of time. Note N > 1.

To test against input, run the following command from the hw4 directory after compiling:

```
java hw4.puzzle.Solver [input file]
```

So, if I tested against an input file input/test01.in with the following contents:

```
2
1 2
0 3
```

I should get the following output:

```
$ java hw4.puzzle.Solver input/test01.in
Minimum number of moves = 1
2
1 2
0 3
2
1 2
3 0
```

## Submission

Submit a zip file containing just the folder for your hw4 package (similar to hw3). It should contain Board.java and Solver.java. Due to technical limitations of this autograder, it should contain no other .java files. If you have auxiliary java files (e.g. SearchNode.java), please move these classes into Board or Solver. It's OK if you also include BoardUtils.class.

## FAQ

#### The autograder is complaining that graderhw4.Board objects can't be converted to Board or something like that.

The first step of the AG swaps out any usage of Board with graderhw4.Board in your Solver.java.
However, it is not smart enough to find other classes (yet). For now, move your SearchNode class inside of Solver.java. Likewise, if you have style errors, it will also fail. For example, if your code says neighbors=BoardUtils. instead of neighbors = BoardUtils, my string matching code will fail.

#### Why am I getting cannot resolve symbol 'BoardUtils'?

You are probably compiling from the wrong folder. Compile from the "login/hw4" directory, not "login/hw4/hw4/puzzle":

```
javac hw4/puzzle/*.java
```

#### What if I'm using IntelliJ?

File -> Project Structure -> Libraries -> (+) sign to add new Java Library -> Select your login/hw4 directory (DO NOT USE login/hw4/hw4/puzzle) -> OK -> OK -> OK. These are the steps needed for Macs. I suspect there won't be big differences for other operating systems.

#### Is BoardUtils.neighbors working? It looks like it only returns the initial board.

It works, but it does depend on the board being immutable.

#### How do I know if my Solver is optimal?

The shortest solutions to puzzle4x4-hard1.txt and puzzle4x4-hard2.txt are 38 and 47 moves, respectively. The shortest solution to "puzzle*[T].txt" requires exactly T moves. Warning: puzzle36.txt, puzzle47.txt, puzzle49.txt, and puzzle50.txt are relatively difficult.

#### I run out of memory when running some of the large sample puzzles. What should I do?

You should expect to run out of memory when using the Hamming priority function. Be sure not to put the JVM option in the wrong spot, or it will be treated as a command-line argument, e.g.

```
java -Xmx1600m hw4.puzzle.Solver input/puzzle36.txt
```

#### My program is too slow to solve some of the large sample puzzles, even if given a huge amount of memory. Is this OK?

You should not expect to solve many of the larger puzzles with the Hamming priority function. However, you should be able to solve most (but not all) of the larger puzzles with the Manhattan priority function.
#### Even with the critical optimization, the priority queue may contain two or more search nodes corresponding to the same board. Should I try to eliminate these?

In principle, you could do so with a set data type such as java.util.TreeSet or java.util.HashSet (provided that the Board data type were either Comparable or had a hashCode() method). However, according to Kevin Wayne at Princeton, almost all of the benefit from avoiding duplicate boards is already extracted by the critical optimization, and the cost of identifying other duplicate boards will be more than the remaining benefit from doing so. In short, you're spending tremendous amounts of memory for a relatively small runtime optimization.

#### Is it OK if I try to eliminate them anyway by creating a big set of all the Boards ever seen?

Maybe. Make sure your code is able to complete the puzzles below when given only 128 megabytes of memory (see below for how to test).

#### What size puzzles are we expected to solve?

Here are the puzzles you are explicitly expected to solve:

```
input/puzzle2x2-[00-06].txt
input/puzzle3x3-[00-30].txt
input/puzzle4x4-[00-30].txt
input/puzzle[00-31].txt
```

#### The puzzles work fine on my computer, but not on the AG. I'm getting a GC overhead limit exceeded error, or just a message that "The autograder failed to execute correctly."

Your computer is probably more powerful than the autograder; notably, the AG has much less memory. You should be able to complete puzzles 30 and 31 in less than a second, and they should also work if you use only 128 megabytes of memory. To run your code with only 128 megabytes, try the following commands:

```
java -Xmx128M hw4.puzzle.Solver ./input/puzzle30.txt
java -Xmx128M hw4.puzzle.Solver ./input/puzzle31.txt
java -Xmx128M hw4.puzzle.Solver ./input/puzzle4x4-30.txt
```

If your code is taking longer, by far the most likely issue is that you are not implementing the first critical optimization properly.
Another possibility is that you are creating a hash table of every board ever seen, which may cause the AG computer to run out of memory. It is not enough to simply look at your code for the optimization and declare that it is correct. Many students have indicated confidence in their optimization implementation, only to discover a subtle bug. Use print statements or the debugger to ensure that a board never enqueues the board it came from. Situations that cover 98% of student performance bugs:

• Recall that there is a difference between == and equals.
• Recall also that the optimization is that you should not "enqueue a neighbor if its board is the same as the board of the previous search node". Checking against the current board does nothing. In other words, no node should ever enqueue its parent.
• Recall that the optimization is that a board should not enqueue its own parent! This is different from checking that it is different from the board that was dequeued two iterations of A* ago.

#### How do I ensure my Board class is immutable?

The most common situation where a Board is not immutable is as follows:

• Step 1: Create a 2D array called cowmoo.
• Step 2: Pass cowmoo as an argument to the Board constructor.
• Step 3: Change one or more values of cowmoo.

If you just copy the reference in the Board constructor, someone can change the state of your Board by changing the array. You should instead make a copy of the 2D array that is passed to your Board constructor.

#### Why can't Gradescope compile my files even though I can compile them locally?

Due to the nature of the autograder, you cannot use any public Board and Solver methods that were not mentioned in the spec. Consider moving the logic into one file.

#### The AG is reporting a bug involving access$ or some kind of null pointer exception. What's going on?

It's important that your moves and solution methods work no matter the order in which they are called, and no matter how many times they are called.
Failing the mutability test, or failing only the moves tests but not the solution tests, are sure signs of this issue.

## Credits

This assignment was originally developed by Kevin Wayne and Bob Sedgewick at Princeton University.
https://crypto.stackexchange.com/questions/92163/how-does-authentication-key-recovery-for-gcm-work
# How does Authentication-Key Recovery for GCM work?

In his paper "Authentication weaknesses in GCM", Ferguson describes how some bits of the error polynomial can be set to zero, thereby significantly increasing the chance of a forgery.

Q: What does this mean in detail? That the resulting equations do not solve the problem of obtaining a forgery completely, but the solution space is significantly reduced? So we can fix some bits of the error polynomial, and the remaining bits must be tested by trial and error? It is stated that (for the example) after $2^{16}$ trials we expect a successful forgery. What follows I do not quite understand: somehow, by repeating some strategy, more and more information about H can be gained.

Q: Repeating with different ciphertexts? Do I need only one ciphertext produced by an encryption, or many different ones?

Q: This is quite interesting stuff, but besides the original paper I cannot find any literature which explains the idea in a little more detail. Is there some other source where I can learn what's behind the idea in a more didactically prepared way?

I would be glad if somebody could shed some light on that and possibly give me a link to reading material.

## 1 Answer

They show that by taking an authenticated message and applying a carefully crafted difference to it, you can ensure half the bits of the authentication tag will be preserved. You can repeat the attack on different authenticated ciphertexts you captured (or perhaps caused), or (more relevantly) on different solutions to the linear problem, as it is under-specified. This all assumes you are capable of sending modified messages to the victim and getting feedback on whether authentication succeeds. When a forgery is successful, it reveals information about the authentication key, and we can then try forgery again with increased success probability.
We repeat until we collect enough linear constraints to recover the authentication key.

• The attacker knows the original tag, and his goal is to modify the ciphertext blocks $C_i$ in such a way that the forgery yields the same tag. There is a linear algorithm allowing him to work out a reduced number of candidates for the $C_i$. However, lacking H, he cannot calculate T by himself, so he sends them over one by one, depending on the victim's feedback about whether authentication succeeded or not. Is this the idea behind it? If yes, then I got the first part on a high-level view (the mathematics is still another topic...). Anyway, this attack is a very tedious challenge and can take a long time. Jul 20 at 6:51
• Yes. An attack which takes $2^{16}$ attempts is considered very fast. The trouble here is you aren't likely to get a meaningful message, which may limit the applicability of such an attack. With the key recovery, forging more meaningful messages becomes relevant. Jul 20 at 8:43
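The final key-recovery step in the answer can be abstracted as follows. This is a deliberately toy model, not Ferguson's actual attack: the "key" here is 16 bits rather than an element of GF(2^128), and the linear constraints are constructed directly instead of being extracted from forgeries. What the sketch shows is only the last step of the process: once each successful forgery contributes one known linear constraint on H over GF(2), collecting enough independent constraints pins H down by Gaussian elimination.

```java
public class Gf2Recovery {
    /** Solves A·x = b over GF(2); rows[i] is the bitmask of constraint i, rhs[i] its parity bit. */
    public static long solve(long[] rows, int[] rhs, int n) {
        long[] a = rows.clone();
        int[] b = rhs.clone();
        int[] pivotRow = new int[n];
        java.util.Arrays.fill(pivotRow, -1);
        int r = 0;
        for (int col = 0; col < n && r < a.length; col++) {
            int sel = -1;                         // find a row with a 1 in this column
            for (int i = r; i < a.length; i++) {
                if (((a[i] >>> col) & 1L) == 1L) { sel = i; break; }
            }
            if (sel == -1) continue;              // no pivot in this column
            long t = a[sel]; a[sel] = a[r]; a[r] = t;
            int tb = b[sel]; b[sel] = b[r]; b[r] = tb;
            for (int i = 0; i < a.length; i++) {  // eliminate the column elsewhere
                if (i != r && ((a[i] >>> col) & 1L) == 1L) { a[i] ^= a[r]; b[i] ^= b[r]; }
            }
            pivotRow[col] = r;
            r++;
        }
        long x = 0;                               // with full rank, each pivot row fixes one bit
        for (int col = 0; col < n; col++) {
            if (pivotRow[col] != -1 && b[pivotRow[col]] == 1) x |= 1L << col;
        }
        return x;
    }

    /** Builds 16 independent toy "forgery" constraints on a 16-bit key and recovers it. */
    public static long recoverDemo(long secret) {
        int n = 16;
        long[] rows = new long[n];
        int[] rhs = new int[n];
        for (int i = 0; i < n - 1; i++) rows[i] = (1L << i) | (1L << (i + 1));
        rows[n - 1] = 1L;  // together with the chain rows, this gives full rank
        for (int i = 0; i < n; i++) rhs[i] = Long.bitCount(rows[i] & secret) & 1;
        return solve(rows, rhs, n);
    }

    public static void main(String[] args) {
        long secret = 0xB2C5L;
        System.out.println(recoverDemo(secret) == secret); // true
    }
}
```

In the real attack, the constraint masks are determined by the structure of the crafted differences and the observed tag bits, and the arithmetic lives in GF(2^128); the algebraic shape of the recovery, though, is exactly this linear-system solve.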
https://zbmath.org/?q=an:1098.46032
# zbMATH — the first resource for mathematics Associative algebras of $$p$$-adic distributions. (English. Russian original) Zbl 1098.46032 Proc. Steklov Inst. Math. 245, 22-33 (2004); translation from Tr. Mat. Inst. Steklova 245, 29-40 (2004). Summary: A $$p$$-adic Colombeau-Egorov algebra of generalized functions is constructed. A set of associated homogeneous $$p$$-adic distributions is introduced, and an associative algebra of asymptotic distributions generated by the linear span of the set of $$p$$-adic associated homogeneous distributions is constructed. For the entire collection see [Zbl 1087.46002]. ##### MSC: 46F30 Generalized functions for nonlinear analysis (Rosinger, Colombeau, nonstandard, etc.) 11S99 Algebraic number theory: local and $$p$$-adic fields 46F05 Topological linear spaces of test functions, distributions and ultradistributions 46S10 Functional analysis over fields other than $$\mathbb{R}$$ or $$\mathbb{C}$$ or the quaternions; non-Archimedean functional analysis
https://www.studysmarter.us/textbooks/math/fundamentals-of-differential-equations-and-boundary-value-problems-9th/series-solutions-of-differential-equations/q18-e-in-problems-13-19-find-at-least-the-first-four-nonzero/
Q18 E | Found in: Page 450, Fundamentals of Differential Equations and Boundary Value Problems, 9th edition, by R. Kent Nagle, Edward B. Saff, and Arthur David Snider (616 pages, ISBN 9780321977069).

In Problems 13-19, find at least the first four nonzero terms in a power series expansion of the solution to the given initial value problem.

$$y'' - (\cos x)\,y' - y = 0, \qquad y(\pi/2) = 1, \quad y'(\pi/2) = 1$$

The first four nonzero terms in the power series expansion of the solution are

$$y(x) = 1 + \left(x - \frac{\pi}{2}\right) + \frac{1}{2}\left(x - \frac{\pi}{2}\right)^2 - \frac{1}{24}\left(x - \frac{\pi}{2}\right)^4 + \cdots$$

See the step by step solution.

Step 1: Define the power series expansion.

The power series approach finds a solution to certain differential equations by positing an unknown power series, substituting it into the equation, and deriving a recurrence relation for the coefficients. About a point $t = 0$ the solution is written as

$$y(t) = \sum_{n=0}^{\infty} a_n t^n.$$

Step 2: Shift to the initial point and find the relation.

Because the initial conditions are given at $x = \pi/2$, substitute $t = x - \pi/2$, i.e. $x = t + \pi/2$. Using the identity $\cos(t + \pi/2) = -\sin t$, the equation becomes

$$y'' + (\sin t)\,y' - y = 0.$$

With $y(t) = \sum_{n=0}^{\infty} a_n t^n$, $y'(t) = \sum_{n=1}^{\infty} n a_n t^{n-1}$, $y''(t) = \sum_{n=2}^{\infty} n(n-1) a_n t^{n-2}$, and $\sin t = t - \frac{t^3}{3!} + \frac{t^5}{5!} - \cdots$, substitution gives the relation

$$\sum_{n=2}^{\infty} n(n-1) a_n t^{n-2} + \left(t - \frac{t^3}{3!} + \frac{t^5}{5!} - \cdots\right) \sum_{n=1}^{\infty} n a_n t^{n-1} - \sum_{n=0}^{\infty} a_n t^n = 0.$$

Step 3: Expand and collect powers of $t$.

Writing out the first few terms of each series and grouping coefficients of like powers:

$$(2a_2 - a_0) + (6a_3 + a_1 - a_1)\, t + (12 a_4 + 2a_2 - a_2)\, t^2 + \left(20 a_5 + 3a_3 - \frac{a_1}{6} - a_3\right) t^3 + \cdots = 0.$$

Step 4: Equate coefficients and apply the initial conditions.

The initial conditions give $a_0 = y(\pi/2) = 1$ and $a_1 = y'(\pi/2) = 1$. Setting each coefficient to zero:

$$2a_2 - a_0 = 0 \;\Rightarrow\; a_2 = \frac{a_0}{2} = \frac{1}{2}, \qquad 6a_3 + a_1 - a_1 = 0 \;\Rightarrow\; a_3 = 0,$$

$$12a_4 + a_2 = 0 \;\Rightarrow\; a_4 = -\frac{a_2}{12} = -\frac{1}{24}.$$

Since $a_3 = 0$, the fourth nonzero term comes from $a_4$. Hence the first four nonzero terms are

$$y(x) = 1 + \left(x - \frac{\pi}{2}\right) + \frac{1}{2}\left(x - \frac{\pi}{2}\right)^2 - \frac{1}{24}\left(x - \frac{\pi}{2}\right)^4 + \cdots$$
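The leading Taylor coefficients ($a_0 = 1$, $a_1 = 1$, $a_2 = 1/2$) can be cross-checked numerically: read $y''$ straight off the ODE at $x_0 = \pi/2$, then compare the truncated Taylor polynomial against an RK4 integration of the initial value problem. This is a sketch; the step size, number of steps, and tolerance are my own choices, not from the source.

```python
import math

x0 = math.pi / 2

def f(x, s):
    """First-order system for y'' = cos(x) y' + y, with s = [y, y']."""
    return [s[1], math.cos(x) * s[1] + s[0]]

def rk4_step(x, s, h):
    """One classical Runge-Kutta 4 step for the system above."""
    k1 = f(x, s)
    k2 = f(x + h / 2, [s[i] + h / 2 * k1[i] for i in range(2)])
    k3 = f(x + h / 2, [s[i] + h / 2 * k2[i] for i in range(2)])
    k4 = f(x + h, [s[i] + h * k3[i] for i in range(2)])
    return [s[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]

# Taylor coefficients about x0 from the ODE and the initial data
a0, a1 = 1.0, 1.0
a2 = (math.cos(x0) * a1 + a0) / 2   # y''(x0)/2 = 1/2

# Integrate to x0 + 0.1 and compare with the degree-2 Taylor polynomial
x, s, h = x0, [1.0, 1.0], 1e-3
for _ in range(100):
    s = rk4_step(x, s, h)
    x += h
dx = x - x0
poly = a0 + a1 * dx + a2 * dx ** 2
print(abs(s[0] - poly) < 1e-4)  # True: the quadratic truncation matches closely
```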
2023-03-29 17:03:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 20, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.937789261341095, "perplexity": 706.3664205404547}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949009.11/warc/CC-MAIN-20230329151629-20230329181629-00030.warc.gz"}
https://bodheeprep.com/cat-2019-quant-question-with-solution-15
Bodhee Prep - CAT Online Preparation

CAT 2019 Quant Question with Solution 15

Question: Three men and eight machines can finish a job in half the time taken by three machines and eight men to finish the same job. If two machines can finish the job in 13 days, then how many men can finish the job in 13 days?

Let one machine complete 1 unit of work per day. Since two machines can finish the job in 13 days, the total work is 2 × 1 × 13 = 26 units.

Let one man complete $m$ units of work per day. Three men and eight machines work twice as fast as three machines and eight men, so

3m + 8×1 = 2(8m + 3×1)

Or m = $\frac{2}{13}$ units.

Let it require $x$ men to complete the work in 13 days. Then xm × 13 = 26 units, so x = 13 men.

Also Check: 841+ CAT Quant Questions with Solutions
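The solution above can be sanity-checked with exact rational arithmetic; this is a minimal sketch (variable names are my own, not from the source):

```python
from fractions import Fraction

machine_rate = Fraction(1)               # 1 unit of work per machine per day
total_work = 2 * machine_rate * 13       # two machines finish in 13 days -> 26 units

# 3 men + 8 machines are twice as fast as 8 men + 3 machines:
# 3m + 8 = 2(8m + 3)  =>  (8 - 6) = (16 - 3) m  =>  m = 2/13
man_rate = Fraction(8 - 2 * 3, 2 * 8 - 3)

# x men finish in 13 days:  x * m * 13 = 26
men_needed = total_work / (man_rate * 13)
print(men_needed)  # prints 13
```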
2022-12-09 02:31:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.686733603477478, "perplexity": 4903.899000890031}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711376.47/warc/CC-MAIN-20221209011720-20221209041720-00588.warc.gz"}
http://math.emory.edu/events/seminars/seminar.php?SEMID=1466
# MATH Seminar

Title: A Random Group with Local Data
Seminar: Algebra
Speaker: Brandon Alberts of Eastern Michigan University
Contact: David Zureick-Brown, david.zureick-brown@emory.edu
Date: 2022-10-14 at 4:00PM
Venue: MSC W301

Abstract: The Cohen--Lenstra heuristics describe the distribution of $\ell$-torsion in class groups of quadratic fields as corresponding to the distribution of certain random $p$-adic matrices. These ideas have been extended to using random groups to predict the distributions of more general unramified extensions in families of number fields (see work by Boston--Bush--Hajir, Liu--Wood, Liu--Wood--Zureick-Brown). Via the Galois correspondence, the distribution of unramified extensions is a specific example of counting number fields with prescribed ramification and bounded discriminant. As of yet, no constructions of random groups have been given in the literature to predict the answers to famous number field counting conjectures such as Malle's conjecture. We construct a "random group with local data" bridging this gap, and use it to describe new heuristic justifications for number field counting questions.
2022-09-29 11:49:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6510592699050903, "perplexity": 1477.6391750400485}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00467.warc.gz"}
https://paperswithcode.com/paper/massively-parallel-feature-selection-for-big
# Massively-Parallel Feature Selection for Big Data

We present the Parallel, Forward-Backward with Pruning (PFBP) algorithm for feature selection (FS) in Big Data settings (high dimensionality and/or sample size). To tackle the challenges of Big Data FS, PFBP partitions the data matrix both in terms of rows (samples, training examples) and columns (features). By employing the concepts of $p$-values of conditional independence tests and meta-analysis techniques, PFBP manages to rely only on computations local to a partition while minimizing communication costs. It then employs powerful and safe (asymptotically sound) heuristics to make early, approximate decisions, such as Early Dropping of features from consideration in subsequent iterations, Early Stopping of consideration of features within the same iteration, or Early Return of the winner in each iteration. PFBP provides asymptotic guarantees of optimality for data distributions faithfully representable by a causal network (Bayesian network or maximal ancestral graph). Our empirical analysis confirms a super-linear speedup of the algorithm with increasing sample size and linear scalability with respect to the number of features and processing cores, while dominating other competitive algorithms in its class.
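PFBP itself relies on conditional-independence $p$-values and meta-analysis across data partitions; as a rough illustration of just the Early Dropping idea (not the actual PFBP algorithm), here is a toy forward-selection loop that permanently discards weak candidates from all later iterations. The scoring by absolute correlation and all thresholds are my own simplifications.

```python
import math
import random

def pearson_r(xs, ys):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy) if sx > 0 and sy > 0 else 0.0

def forward_select(X, y, drop_below=0.1, max_iter=3):
    """Toy forward selection with Early Dropping: candidates whose
    association with the current residual falls below the threshold are
    removed from ALL later iterations, shrinking the search space."""
    remaining, selected, residual = set(range(len(X))), [], list(y)
    for _ in range(max_iter):
        scores = {j: abs(pearson_r(X[j], residual)) for j in remaining}
        remaining = {j for j in remaining if scores[j] >= drop_below}  # Early Dropping
        if not remaining:
            break
        best = max(remaining, key=scores.get)
        selected.append(best)
        remaining.discard(best)
        # regress the chosen feature out of the residual (one-variable OLS)
        mx = sum(X[best]) / len(X[best])
        mr = sum(residual) / len(residual)
        beta = (sum((a - mx) * (b - mr) for a, b in zip(X[best], residual))
                / sum((a - mx) ** 2 for a in X[best]))
        residual = [b - mr - beta * (a - mx) for a, b in zip(X[best], residual)]
    return selected

random.seed(0)
n = 300
X = [[random.gauss(0, 1) for _ in range(n)] for _ in range(3)]
y = [2.0 * X[0][i] + 0.1 * random.gauss(0, 1) for i in range(n)]
selected = forward_select(X, y)
print(selected)  # feature 0 drives y, so it is chosen first
```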
2022-07-07 03:57:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2312508523464203, "perplexity": 3465.833310379086}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683683.99/warc/CC-MAIN-20220707033101-20220707063101-00495.warc.gz"}
https://clay6.com/qa/olympiad-math/class-6/math/algebra
# Recent questions and answers in Algebra

### In a triangle, twice the sum of two angles is the third angle. Also, the difference of the first two angles is half the sum of those two angles. What are the three angles of the triangle?
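The question above can be answered with exact arithmetic. A quick sketch, under the standard reading of the conditions (angles named $A$, $B$, $C$ for illustration): $2(A+B) = C$ and $A - B = \tfrac{1}{2}(A+B)$, with $A + B + C = 180$.

```python
from fractions import Fraction

# 2(A+B) = C and A+B+C = 180  =>  3(A+B) = 180
s = Fraction(180, 3)        # A + B = 60
C = 2 * s                   # C = 120
d = s / 2                   # A - B = half the sum = 30
A, B = (s + d) / 2, (s - d) / 2
print(A, B, C)  # 45 15 120
```

So the three angles are 45°, 15°, and 120°.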
2020-08-05 13:40:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.557244598865509, "perplexity": 264.8048154569796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735958.84/warc/CC-MAIN-20200805124104-20200805154104-00542.warc.gz"}
https://informagie.fr/wp-content/plugins/cach/coment/ao9ei/u21.php?wmpie=unit-pythagorean-theorem-quiz-1-answer-key
#### Unit: Pythagorean Theorem — Quiz 1 Answer Key

Key ideas drilled in this unit:

- Pythagorean Theorem: in a right triangle with legs $a$, $b$ and hypotenuse $c$, the square of the hypotenuse equals the sum of the squares of the legs: $a^2 + b^2 = c^2$.
- Converse of the Pythagorean Theorem: if the side lengths of a triangle satisfy $a^2 + b^2 = c^2$, then it is a right triangle. The theorem can only be used on a right triangle (one with a 90° angle).
- The hypotenuse is the longest side, straight across from the right angle. If a leg is unknown, isolate that variable, then take a square root to find its value (rounding the answer when necessary).
- The distance formula between two points follows from the Pythagorean theorem.
- In a 30°-60°-90° triangle, the measure of the hypotenuse is two times that of the shorter leg.

Sample exercises:

- Determine whether each set of measurements can form a right triangle: 6, 8, 9 (no); 5, 12, 13 (yes); 6, 8, 10 (yes); 3, 4, 5 (yes).
- Find the distance between point A (-21, 0) and point C (53, -31) to the nearest hundredth.
- A ladder is leaning against a house and the base of the ladder is 3 m away from the house. How tall is the ladder? Show work to indicate how you got your answer; include units and round to the nearest tenth.
The sum of the squares of the legs of a right triangle is equal to the square of the hypotenuse. 4 yd 1) BC = 2) PQ = 3 Pythagorean theorem printable 1 day ago · Trig ratios quiz answers. third side. 9 feet Read the test unit Make a drawing to illustrate the problem. 2 is this a right triangle In order to answer how to do the pythagorean theorem you must understand the different sides of a right triangle. Contents. Q. 810° 9. Step 1 Construct a scalene right triangle in the middle of your paper. 6: Exploring the Pythagorean Theorem 1. m Correct Wrong  Pythagorean Theorem, Distance and Midpoints Chapter Questions A butterfly hatches and flies 2 miles due south, then it flies 1 mile due west. 20 cm . 485 2. Choose “Play again” or “Flashcard”. 5 in 9 in? 12 ft 5 ft? 27 ft 11 ft? 14 m? Answer Key Level 1: S3 Score : Printable Math Worksheets @ www. Unit 7 – Dilations and Similarity – Page 15/27 Unit: Unit 8: Pythagorean theorem and irrational numbers. This is where you will find editable versions of each quiz and the unit test. Pythaqorean Theorem Assignment A) Calculate the measure of x in each. 2 is this a right triangle Pythagorean Theorem Quiz Answers. ID: A 1 Unit 1 Test: Real Numbers and the Pythagorean Theorem Answer Section 1. lengths in right triangles. 2 is this a right triangle pythagorean-theorem-worksheet-with-answer-key 1/3 Downloaded from wadsworthatheneum. Pythagoras' theorem will work on which type of triangles? Right-angled. 4 m . Pythagorean Triples Problem #3: For the questions shown below, round the answers to the nearest tenth, . 0 yards B 135. docx — ZIP archive, 158 kB (162245 bytes) Document Actions. What is the Pythagorean Theorem? a2 ⋅ b2 = c2. B) 2500 2 ft. Let me at maps answers worksheets to. 5. −135° 10. Simplify. Linear Equations Unit. Unit 1 Review. 4. ? 17 19 S =8. PDF DOCUMENT - SPANISH. Lesson #6 - Inverse Trig Ratios. 
org on October 2, 2021 by guest Kindle File Format Pythagorean Theorem Worksheet With Answer Key If you ally habit such a referred pythagorean theorem worksheet with answer key books that will find the Determine the missing length in each right triangle using the Pythagorean theorem. 2 is this a right triangle Answer Key For Pythagorean Theorem Assignment Pythagorean Theorem Assignment Answer Key Pythagorean Theorem Quiz Answers. PDF DOCUMENT. use the Pythagorean Theorem to determine the lengths of diagonals in two-dimensional figures Answers Lesson 7 1. The hypotenuse is always the shortest side length. 1) x 12 in 13 in 5 in 2) 3 mi 4 mi x 5 mi 3) 11. Given a right triangle with a hypotenuse with length units and a  Pythagorean Theorem Quiz Answers. In order to answer how to do the pythagorean theorem you must understand the different sides of a right triangle. Learn vocabulary, terms, and more with flashcards, games, and other study tools. 6 4) 7. Use the Pythagorean  The Pythagorean Theorem states that for any right triangle, and to answer questions about situations that can be modeled  Create an answer key. 2 is this a right triangle Question 1. 2 is this a right triangle Pythagorean theorem printable 1 day ago · Trig ratios quiz answers. Unit 7 Quiz. Answer key unit 8 homework 1 pythagorean theorem and its converse answers The home lesson responds to the key unit 8 triâgans and correct trigonometry find the training features you need for all your activities. CHAPTER 9 The Pythagorean Theorem High School Math. 5) x Answers Lesson 7 1. Label the -1-Use the pythagorean theorem to find the distance between each pair of points. 7 yards. 6 Skills Practice (Diagonals of 3-D Prisms)—finish problems Assessment: Unit 7—Pythagorean Theorem Prove It Unit 7 Review: Pythagorean Theorem Unit 7 Test: Pythagorean Theorem Answers Lesson 7 1. 5 13? S =13. 1. In the aforementioned equation, c is the length of the Search: Unit Pythagorean Theorem Quiz 1 Answer Key. 
Unit 7 – Dilations and Similarity – Page 15/27 Start studying Pythagorean Theorem. A 202. Some of the worksheets for this concept are . Seventh Grade Curriculum amp Lesson Plan Activities. 97 u2 3. Unit C Homework Helper Answer Key MyTeacherSite Org. 2 is this a right triangle About this unit. 1 mi Find the missing side of each triangle. 1, b = 7. Find the length of the diagonal, d, in each rectangle. Give your answers to two decimal places where needed. VIDEO. 2 is this a right triangle Lesson 6 Homework Practice The Pythagorean Theorem Answer Key, msc construction 1 Unit 6 Homework Key Graph the following linear equations using . Gr8PythagoreanRealTest. 88 m . ' This quiz has been designed to test your mathematical skills in solving numerical problems. 20 ft. a 2 + b 2 = c 2 We can use it to find the length of a side of a right triangle when the lengths of the other two sides are known. Write an equation that can be used to answer the question. Midpoint, Distance and Slope Triangle Midsegment Theorem. Lesson 1. Determine the missing length in each right triangle using the Pythagorean theorem. 1 yards. Use Pythagorean Theorem 24 Example 5-3b Resolve test unit use Pythagorean Theorem to find the length of the ladder. Revised March 2017. Square the two known values 5. 0 yards D 102. 1. t„2 X) 4 Yr-loc 12 10 21 10 100 24 24 6 1 B) A ladder is leaning against the sideo a m house. It can also be used to find the legs of the same triangle as long as one of the legs and hypotenuse is known. The ladder, on the ground and half the house, form the right triangle. ANS: B PTS: 1 4. If you are on a MODIFIED Plan: ODDS. Thanks for worksheets are cut the. 2 is this a right triangle 1. Notes,  31 Jan 2017 Day 1: Pythagorean Theorem We went over this new flip-book that I created over the summer. [Figure 2]. Zorn whether the numbers 12, 16, and 20 make a right triangle or not. Round the answer to the nearest tenth. 
use the Pythagorean Theorem and its converse to determine unknown side . 21 Mei 2020 1. Worksheets are unit 4 grade 9 applied similar figures date period scale drawings and models algebra 1 graphing linear equations work answer key the pythagorean theorem date period. are unit 8 right triangles name per,  Find the measure of the third side using the Pythagoras theorem formula? Solution: Given : H = 16 units. 4 mi 14. 3 mi x 15. a = 6 “PRACTICE” UNIT 7: QUIZ 1… Geometric Mean and Pythagorean Theorem Part 1: Find the missing side of each triangle. On grid paper, draw a line segment with each length. PDF ANSWER KEY. 82 + x2 = 202. 289. Assumed Knowledge. Which model below uses the Pythagorean Theorem to show that the triangle is a right triangle? B . θ Distance formula problems worksheet with answers pdf Unit rate practice worksheet answer key Trig ratios quiz answers Distance formula problems worksheet with answers pdf. I can use the Pythagorean. 7 4 π − 6. 3 points each) Identify the choice that best completes the statement or answers the question. ) Example #1) Determine the measure of the missing side: Write formula: a2 + b 2 = c 2 Pythagoras’ theorem TOPIC TEST PART A Instructions • This part consists of 12 multiple choice questions • Each question is worth 1 mark • Fill in only ONE CIRCLE for each question • Calculators may be used Time allowed: 15 minutes Total marks = 12 Total marks achieved for PART A 12 Marks Answers Lesson 7 1. Answers to Exercises. Write your answer in the grid below. On this page you can read or download homework 1 pythagorean theorem and its converse unit 8 answer key gina wilson in PDF format. Use the pythagorean theorem to find the missing unit 1. 72. A simple equation, Pythagorean Theorem states that the square of the hypotenuse (the side opposite to the right angle triangle) is equal to the sum of the other two sides. 
) Example #1) Determine the measure of the missing side: Write formula: a2 + b 2 = c 2 Pythagoras’ theorem TOPIC TEST PART A Instructions • This part consists of 12 multiple choice questions • Each question is worth 1 mark • Fill in only ONE CIRCLE for each question • Calculators may be used Time allowed: 15 minutes Total marks = 12 Total marks achieved for PART A 12 Marks More Challenging Pythagorean Theorem Problems - Answers. 6 Conclusion. 928 3 Special Right Triangles Maze Special Right Triangle Right Triangle Triangle Worksheet Pythagorean Theorem Practice With Answer Key Pythagorean Theorem Homeschool Math Curriculum Learning Math Pythagorean Theorem Notes Bingo Game Teacherspayteachers Com Pythagorean Theorem Math School Pythagorean Theorem Notes Trigonometry Soh Cah Toa Coloring Activity Trigonometry Color Activities Teaching The Pythagorean Theorem and Its Converse Date_____ Period____ Find the missing side of each triangle. t„2 X) 4 Yr-loc 12 10 21 10 100 24 24 6 1 B) A ladder is leaning against The result of multiplying a number by itself. Unit 1: Pythagorean theorem Lecture 1. 15 cm b 25 cm. ) Example #1) Determine the measure of the missing side: Write formula: a2 + b 2 = c 2 Answers Lesson 7 1. Pythagorean theorem worksheet side b finding the missing side leg or hypotenuse directions. Print this. Question 1. 7 km 8. −390° 8. The Pythagorean Theorem. Or in other terms you can use the theorem to Pythagorean Theorem Assignments Answer Key Pythagorean Theorem Quiz Answers. Greek philosopher, 570-495 BC. This quiz and worksheet allow students to test the following skills: Reading comprehension - ensure that you draw the most important information from the related Pythagorean Theorem lesson Pythagorean Unit Test. Level up on the above skills and collect up to 600 Mastery points 4 Question 4 Is a triangle with the lengths 7, 24, and 25 a right triangle. Solve the word problems. None of the other answers. 
Thus Pythagorean triples are among the oldest known solutions of a nonlinear Diophantine equation. Triangle sides pythagorean theorem 1 worksheet for 7th grade children. com Name : Pythagorean Theorem 15 cm 15 cm? Answer key unit 8 homework 1 pythagorean theorem and its converse answers The home lesson responds to the key unit 8 triâgans and correct trigonometry find the training features you need for all your activities. 2. square. freenode. Round your answer to the nearest tenth. 8. Please draw a picture and use the Pythagorean Theorem to solve. Lesson #7 - Angles of Elevation and Depression. Then solve. EdSearch is a free standards-aligned educational search engine specifically designed to help teachers, parents, and students find engaging videos, apps, worksheets, interactive quizzes, sample questions and other resources. Viewed: 2,387 times. The Theorem is named after the ancient Greek mathematician 'Pythagoras. Sign up for our free newsletter. Pythagorean theorem showing 2 examples of using pythagorean theorem 1 finding hyp1 finding leg 3. 13 m. Students who took this test also took : Lesson - Right Triangles and the Pythagorean Theorem Pecent of Change, discounted price, and total price Pythagorean Triples Created with That Quiz — where test making and test taking are made easy for math and other subject areas. A rectangular parking lot has a length of 84 feet and a width of 56 feet. pdf — PDF document, 1206 kB (1235647 bytes) Document Actions. 1 Trig Ratios of Acute Angles 5. Pythagorean Theorem Proofs G. Express answers in simplest radical form. 4 Mei 2020 Wednesday 5/6. Let us consider the other given side of a triangle as  In a right triangle, the legs are the shorter sides and the hypotenuse is always the longest side. 8 3) 12. What is the area of △ABC? Of △ ACD? Explain your answers. Either choose the correct answer or solve  Practice. c2 + a2 = b2. Multiple Choice (85 points; 5. Yes, it is a right triangle. 300 seconds. 
2 is this a right triangle 5 Using Pythagorean Theorem worksheet. pythagorean_theorem_examples_with_answers 1/4 Pythagorean Theorem Examples With Answers [Books] Pythagorean Theorem Examples With Answers The Pythagorean Relationship-AIMS Education Foundation 2009 Prealgebra-Lynn Marecek 2015-09-25 "Prealgebra is designed to meet scope and sequence requirements for a one-semester prealgebra course. NOTE: Triangle 1 is a 45-45-90 right triangle; NOTE: Triangle 2  The Pythagorean theorem as it applies to missing side lengths of triangles: Example 1. Remember, you must write the formula every time, substitute, solve the equation and final answers with correct rounding and units!!! 3. 6 km 4) 6. This chapter 12 contains Pythagorean Theorem,Converse of the Pythagorean Theorem, etc. 3. 420° 7. Test Review #2 KEY. Pythagorean Theorem Assignments Answer Key Pythagorean Theorem Quiz Answers. Leave your answers in simplest. 25 = 5 2 9. A n 1 matrix is a column vector, a 1 nmatrix is a row vector. 1 Independent Practice - The Pythagorean Theorem - Page No. Lesson #4 Special Right Triangles. unit 12 trigonometry homework 1 answer key pythagorean theorem. 1, O. Answer Key Pythagorean Theorem Sheet 1 Printable Worksheets @ www. 7 1 vAGlwlG 5r OiWgNhat qsm 7rje KspePr gvPe3d9. a) b) c) d) Get Free Pythagorean Theorem Answer Key Pythagorean Theorem Answer Key Yeah, reviewing a ebook pythagorean theorem answer key could accumulate your near contacts listings. Substitute the known values into the Pythagorean Theorem 4. 14ft. PDF] - Read File Online Pythagorean theorem test pythagorean theorem unit test solve the problems below. Read the questions carefully. Persuade Mr. They make up what is called a Pythagorean Triple. Review. are explained clearly which makes the scholars learn quickly. Lesson 2 Integer Exponents Lesson Key. 64 + x2 = 400. Unit Practice Test -- Pythagorean Theorem. Explain how you did it. There is not enough info. 
In 3-SJ find the missing side length of each right triangle. Substitute. decimal places Where necessary, round you answer correct to Complete on a separate piece of paper. Contact [email protected] Pythagorean Theorem  Agenda: Dates to Remember Dates to Remember: C. 150° 11. In this topic, we’ll figure out how to use the Pythagorean theorem and prove why it works. g. Read the questions carefully and answer. Solution Compare the side lengths. Always understand that the Pythagorean Theorem relates the areas of squares on the sides of the right triangle. Unit 2. Lesson 2. b = 12 in. There is no evidence that Pythagoras himself worked on or proved the Pythagorean Theorem Pythagorean Theorem Assignments Answer Key Pythagorean Theorem Quiz Answers. Upon questioning,  In this introductory lesson, students learn the Pythagorean Theorem and lengths of legs as well as rational versus irrational answers. Lesson 16 Pythagorean Theorem Answer Key Document Read. Unit 1 Test. Algebra Find the value of x. a) ii) b) i) c) iii) 5. answer choices. l-3-Worksheet by Kuta Software LLC Answers to Pythagorean Theorem Practice 1. 5 Yes Find each missing length to the nearest tenth. More Challenging Answers. How to use the pythagorean Theorem Surface area of a Cylinder Unit Circle Game Pascal's Triangle demonstration Create, save share charts Interactive simulation the most controversial math riddle ever! Pythagorean Theorem Word Problems Worksheets The name Pythagorean theorem came from a Greek mathematician by the Rectangular Prisms and the Pythagorean Theorem video link—get from my website MV—6. Pythagorean theorem worksheet answer key edsearch lumos. 0 u2 4. a) b) c) d) Pythagorean Theorem Assignments Answer Key Pythagorean Theorem Quiz Answers. A. a) b) 3. Practice 8-1 The Pythagorean Theorem and Its Converse 1. The Pythagorean Theorem is used to find the length of the sides of a triangle. ~6. 6 Homework Practice Use The Pythagorean Theorem Answer Key. No, it is not a right triangle. 
The formula can be used for any side of any right triangle. . Lesson 16 Pythagorean Theorem Answer Key Kpappi De. 124. Use Current Location • Español
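The worked checks above are easy to reproduce in code. The short sketch below is illustrative only (the function names are not from the worksheet); it applies the converse of the theorem to the sample triples and recomputes the hypotenuse for the parking-lot diagonal.

```python
import math

def is_right_triangle(a, b, c):
    """Converse of the Pythagorean theorem: a triple of side lengths forms
    a right triangle exactly when the squares of the two shorter sides
    sum to the square of the longest side."""
    a, b, c = sorted((a, b, c))
    return a * a + b * b == c * c

def hypotenuse(a, b):
    """Length of the hypotenuse given the two legs."""
    return math.hypot(a, b)

# Checks from the worksheet:
print(is_right_triangle(6, 8, 9))    # False
print(is_right_triangle(5, 12, 13))  # True
print(is_right_triangle(3, 4, 5))    # True
print(is_right_triangle(7, 24, 25))  # True

# Diagonal of an 84 ft by 56 ft parking lot, to the nearest tenth:
print(round(hypotenuse(84, 56), 1))  # 101.0
```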
http://math.stackexchange.com/questions/479192/squaring-rectangles
# Squaring rectangles

It is a nice high-school exercise to prove that a square can be tiled with n squares if and only if n = 1, n = 4, or n is any integer greater than or equal to 6. A direct consequence is that any rectangle that can be tiled with n squares can also be tiled with n+3 squares, and with any number of squares greater than or equal to n+5. A (non-trivial) result of Dehn asserts that a rectangle can be tiled with finitely many squares if and only if the ratio of its side lengths is rational — after rescaling, we may assume integer side lengths. Let R be such a rectangle and let m denote the least number of squares needed for a tiling of R. As we have already seen, R can be tiled with m+3 squares and with any number greater than or equal to m+5. Then there can be at most 6 classes of rectangles:

• those that cannot be tiled with any other number of squares,

• those that can be tiled with m+1 and m+4 squares,

• those that can be tiled with m+2 squares,

• those that can be tiled with m+1, m+2 and m+4 squares,

• those that can be tiled with m+2 and m+4 squares,

• those that can be tiled with m+4 squares.

It seems that none of these classes is empty. However, it is not so easy to compute the class of a given rectangle (at least for me!). So my first question is about the existence of an algorithm for doing so. I suspect that the first class contains all rectangles whose side lengths are of the form (1, n); (2, 2n+1) and (3, 3n+2), and I don't know whether there are others. Is it possible to determine all the rectangles of a class, or at least some subclasses in it? So far, I have not found where this problem would have been studied before; all I know is some articles dealing with the asymptotic behavior of m with respect to the lengths of R. Please feel free to add any useful comment on this topic!

- I believe (not certain) that the minimum number of squares which an $n \times (n+1)$ rectangle needs is $n+1$. –  Calvin Lin Aug 29 '13 at 15:59
Thanks Calvin. m = 1 for a square, as a square can be tiled with 1 square! –  Fractality Aug 29 '13 at 16:02
Consider the (5,6) rectangle. If you use the Euclidean algorithm you will decompose it into a square of side 5 and five unit squares. However, this algorithm does not give the minimum number, since it is possible to use only 5 squares: 3 of side 2 and 2 of side 3. –  Fractality Aug 29 '13 at 16:04
Actually m = 5 for the (5,6) rectangle, since it is not possible to tile it with fewer than 5 squares. Now we check that we cannot tile it with 7 squares, which means that (5,6) is in class number 2 (of course I should find a better name for these classes...) –  Fractality Aug 29 '13 at 16:10
–  Erel Segal Halevi Nov 2 '13 at 21:05
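The 5-square tiling of the (5,6) rectangle mentioned in the comments can be verified mechanically. The sketch below is illustrative (the layout coordinates are one concrete arrangement assumed here, not given in the thread): two 3×3 squares fill a 3×6 strip, and three 2×2 squares fill the remaining 2×6 strip.

```python
def verify_tiling(width, height, squares):
    """Check that axis-aligned squares, given as (x, y, side) triples,
    tile the width x height rectangle: every unit cell is covered
    exactly once and no square sticks out of the rectangle."""
    covered = [[0] * width for _ in range(height)]
    for x, y, s in squares:
        for row in range(y, y + s):
            for col in range(x, x + s):
                if not (0 <= row < height and 0 <= col < width):
                    return False  # square extends beyond the rectangle
                covered[row][col] += 1
    return all(cell == 1 for line in covered for cell in line)

# One 5-square tiling of the 5 x 6 rectangle (width 5, height 6):
# two 3x3 squares on the left, three 2x2 squares stacked on the right.
tiling = [(0, 0, 3), (0, 3, 3), (3, 0, 2), (3, 2, 2), (3, 4, 2)]
print(verify_tiling(5, 6, tiling))  # True
```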
https://www.physicsforums.com/threads/parallel-or-series.471437/
# Parallel or Series?

## Homework Statement

I'm just a bit confused about identifying the capacitors in this circuit as connected in parallel or series.

## The Attempt at a Solution

The solution is parallel, though I am unsure why.
https://studydaddy.com/question/soc-402-week-4-dq-2
QUESTION

# SOC 402 Week 4 DQ 2

This archive file of SOC 402 Week 4 Discussion Question 2 comprises: Briefly summarize the three functions of education according to your textbook, Social Problems and the Quality of Life. Can you relate to any of these functions for obtaining education? Identify a specific educational issue that you think is affecting work environments the most. Can this problem be avoided or resolved in the workplace? If you were an employer, how would you overcome this issue?

Files: SOC-402 Week 4 DQ 2.zip
https://en.wikiversity.org/wiki/Hilbert_Book_Model_Project/Quaternions
Hilbert Book Model Project/Quaternions

Quaternions

Number systems

Number systems exist in many forms. They differ in their arithmetic capabilities.[1] Algorithms exist that construct higher-dimensional number systems from lower-dimensional number systems. In this way, the dimension increases by a factor of two. These procedures start from the real numbers and produce the complex numbers, quaternions, octonions, sedenions, and higher-dimensional numbers. The arithmetic capabilities increase with the dimension of the number system until that dimension reaches four; beyond that limit, the arithmetic capabilities start to decrease. Hilbert spaces can only cope with number systems that are division rings. In a division ring, every non-zero member owns a unique inverse. Bi-quaternions use complex-number-based coefficients rather than real-number-based coefficients. The bi-quaternions do not form a division ring.

Natural numbers form the simplest number system. The positive numbers and the integers are extensions of the natural number system. Rational numbers consist of groups of fractions that have the same value. Every rational number can be labeled by a natural number. This makes the rational number system countable. The real number system adds all limits of convergent series of rational numbers. This addition destroys the countability of the real number system. The real number system is a continuum. The natural number system, the positive number system, the integer number system, and the rational number system all have infinitely many elements. Cantor introduced the notion of cardinality.
The cardinality is indicated by natural numbers, but the cardinality of the full set of natural numbers is indicated by the cardinal number ${\displaystyle \aleph _{0}}$. The cardinality of the real numbers, the complex numbers, the quaternions, the octonions, and the sedenions equals ${\displaystyle \aleph _{1}}$. Higher cardinalities also exist. The Hilbert Book Model applies number systems that can serve its base model, which is based on the combination of an infinite-dimensional separable Hilbert space and its unique non-separable companion that embeds its separable partner. The model selects the most versatile division ring. The quaternionic number system contains real number systems and complex number systems as subsets.

Versions

Depending on their dimension, number systems exist in many versions that differ in their ordering symmetry. Applying a Cartesian coordinate system followed by a polar coordinate system can achieve this ordering symmetry. The Hilbert Book Model exploits the fact that a quaternionic Hilbert space can harbor multiple versions of quaternionic number systems that serve as parameter spaces, each of which owns a private ordering symmetry and can float on top of a background parameter space. The difference between the ordering symmetry of a floating platform and the ordering symmetry of the background parameter space determines the symmetry flavor of the platform.

Quaternion arithmetic

Quaternions consist of a one-dimensional real part and a three-dimensional imaginary part, and can be represented by a real-valued scalar together with a three-dimensional vector that has real-valued coefficients. In this way, the quaternionic number system represents a four-dimensional vector space that features a Euclidean structure. Here we represent a quaternion ${\displaystyle q}$ by a real part ${\displaystyle q_{r}}$ and a spatial vector part ${\displaystyle {\vec {q}}}$.
${\displaystyle q\,{\overset {\underset {\mathrm {def} }{}}{=}}\,q_{r}+{\vec {q}}}$ (1) The quaternionic conjugate ${\displaystyle q^{*}}$ is ${\displaystyle q^{*}\,{\overset {\underset {\mathrm {def} }{}}{=}}\,q_{r}-{\vec {q}}}$ (2) Summation is commutative and associative ${\displaystyle a+b=b+a}$ (3) ${\displaystyle (a+b)+c=a+(b+c)}$ (4) Multiplication follows from ${\displaystyle a\,b=(a_{r}+{\vec {a}})(b_{r}+{\vec {b}})=a_{r}\,b_{r}-\langle {\vec {a}},{\vec {b}}\rangle +a_{r}\,{\vec {b}}+b_{r}\,{\vec {a}}\ {\color {Red}\pm }\ {\vec {a}}\times {\vec {b}}}$ (5) ${\displaystyle \langle {\vec {a}},{\vec {b}}\rangle }$ is the inner vector product and ${\displaystyle {\vec {a}}\times {\vec {b}}}$ is the external vector product. ${\displaystyle {\color {Red}\pm }}$ indicates that depending on the ordering symmetry, the quaternionic number system exists in right-handed and in left-handed versions. A right-handed quaternion cannot multiply with a left-handed quaternion. ${\displaystyle (a\,b)^{*}=b^{*}\,a^{*}}$ (6) The norm ${\displaystyle |q|}$ equals ${\displaystyle |q|={\sqrt {q\,q^{*}}}={\sqrt {q_{r}q_{r}+\langle {\vec {q}},{\vec {q}}\rangle }}}$ (7) ${\displaystyle q^{-1}={\frac {q^{*}}{|q|^{2}}}}$ (8) Phase The phase ${\displaystyle q_{\varphi }}$ in radians of quaternion ${\displaystyle q}$ follows from ${\displaystyle q=|q|\exp {\biggl (}q_{\varphi }\,{\frac {\vec {q}}{|{\vec {q}}|}}{\biggr )}}$ (9) ${\displaystyle {\frac {\vec {q}}{|{\vec {q}}|}}}$ is the spatial direction of ${\displaystyle q}$. Quaternionic rotation In multiplication, quaternions do not commute. Thus, in general, ${\displaystyle a\,b/a\neq b}$. In this multiplication, the imaginary part of ${\displaystyle b}$ that is perpendicular to the imaginary part of ${\displaystyle a}$ is rotated over an angle that is twice the complex phase ${\displaystyle \varphi }$ of ${\displaystyle a}$. 
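The arithmetic rules (1)–(8) above translate directly into code. The following sketch is a minimal illustration, not code from the source page; it assumes the right-handed choice of the ± sign in the product (5), represents a quaternion as a 4-tuple (q_r, q_x, q_y, q_z), and checks that the norm (7) is multiplicative and that q q⁻¹ = 1.

```python
import math

def qmul(a, b):
    # Right-handed quaternion product, equation (5):
    # a b = a_r b_r - <a_vec, b_vec> + a_r b_vec + b_r a_vec + a_vec x b_vec
    ar, ax, ay, az = a
    br, bx, by, bz = b
    return (ar*br - ax*bx - ay*by - az*bz,
            ar*bx + br*ax + ay*bz - az*by,
            ar*by + br*ay + az*bx - ax*bz,
            ar*bz + br*az + ax*by - ay*bx)

def qconj(q):
    # Equation (2): the conjugate negates the spatial part.
    return (q[0], -q[1], -q[2], -q[3])

def qnorm(q):
    # Equation (7): |q| = sqrt(q q*).
    return math.sqrt(sum(c * c for c in q))

def qinv(q):
    # Equation (8): q^{-1} = q* / |q|^2.
    n2 = sum(c * c for c in q)
    return tuple(c / n2 for c in qconj(q))

a = (1.0, 2.0, -1.0, 0.5)
b = (0.5, -3.0, 1.0, 2.0)
print(math.isclose(qnorm(qmul(a, b)), qnorm(a) * qnorm(b)))  # True
print(qmul(a, qinv(a)))  # ≈ (1.0, 0.0, 0.0, 0.0), up to rounding
```

Negating the three cross-product terms in `qmul` gives the left-handed version; as noted above, a right-handed quaternion cannot multiply with a left-handed one.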
If ${\displaystyle \varphi =\pi /4}$, then the rotation ${\displaystyle a\,b/a}$ shifts ${\displaystyle b_{\perp }}$ to another dimension. This fact puts quaternions for which the size of the real part equals the size of the imaginary part in a special category. They can switch the states of tri-state systems. In addition, they can switch the color charge of quarks. This means that in such pairs, they behave as gluons.

Reflection

Each quaternion ${\displaystyle c}$ can be written as a product of two complex numbers ${\displaystyle a}$ and ${\displaystyle b}$ whose imaginary base vectors are perpendicular

${\displaystyle c=(a_{0}+a_{1}{\vec {i}})(b_{0}+b_{1}{\vec {j}})=c_{0}+c_{1}{\vec {i}}+c_{2}{\vec {j}}+c_{3}{\vec {k}}=a_{0}b_{0}+a_{1}b_{0}{\vec {i}}+a_{0}b_{1}{\vec {j}}\pm a_{1}b_{1}{\vec {k}};\ {\vec {i}}{\vec {j}}=\pm {\vec {k}}}$

Rotating with a pair of ${\displaystyle {\vec {k}}}$ vectors inverts the components of quaternion ${\displaystyle c}$ that are perpendicular to ${\displaystyle {\vec {k}}}$.

${\displaystyle {\vec {k}}\ c/{\vec {k}}=c_{0}-c_{1}{\vec {i}}-c_{2}{\vec {j}}+c_{3}{\vec {k}}}$
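These rotation and reflection properties can be checked numerically. A self-contained illustrative sketch (again not from the original text), using the right-handed product:

```python
import math

def qmul(a, b):
    # right-handed quaternion product, as in Eq. (5)
    ar, (ax, ay, az) = a
    br, (bx, by, bz) = b
    return (ar*br - (ax*bx + ay*by + az*bz),
            (ar*bx + br*ax + ay*bz - az*by,
             ar*by + br*ay + az*bx - ax*bz,
             ar*bz + br*az + ax*by - ay*bx))

def qconj(q):
    qr, (x, y, z) = q
    return (qr, (-x, -y, -z))

# a has equal real and imaginary sizes, so its phase is pi/4;
# a b / a then rotates the part of b perpendicular to a over pi/2.
s = math.sqrt(0.5)
a = (s, (s, 0.0, 0.0))          # unit quaternion, so 1/a = a*
j = (0.0, (0.0, 1.0, 0.0))      # perpendicular to the imaginary part of a
rotated = qmul(qmul(a, j), qconj(a))
# rotated is (0, (0, 0, 1)) up to rounding: j has been carried to the k axis

# reflection: k c / k negates the components of c perpendicular to k
k = (0.0, (0.0, 0.0, 1.0))
c = (1.0, (2.0, 3.0, 4.0))
reflected = qmul(qmul(k, c), qconj(k))   # 1/k = k* for the unit vector k
# reflected is (1, (-2, -3, 4))
```

The first computation is exactly the "switch to another dimension" behavior described above for ${\displaystyle \varphi =\pi /4}$.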
http://nodus.ligo.caltech.edu:8080/40m/?id=12971&sort=Category
40m Log, Page 92 of 339

8444   Thu Apr 11 11:58:21 2013   Jenne   Update   Computers   LSC whitening c-code ready

The big hold-up with getting the LSC whitening triggering ready has been a problem with running the c-code on the front end models.  That problem has now been solved (Thanks Alex!), so I can move forward.

The background: We want the RFPD whitening filters to be OFF while in acquisition mode, but after we lock, we want to turn the analog whitening (and the digital compensation) ON.  The difference between this and the other DoF and filter module triggers is that we must parse the input matrix to see which PD is being used for locking at that time.  It is the c-code that parses this matrix that has been causing trouble.  I have been testing this code on the c1tst.mdl, which runs on the Y-end computer.  Every time I tried to compile and run the c1tst model, the entire Y-end computer would crash.

The solution: Alex came over to look at things with Jamie and me.  In the 2.5 version of the RCG (which we are still using), there is an optimization flag "-O3" in the make file.  This optimization, while it can make models run a little faster, has been known in the past to cause problems.  Here at the 40m, our make files had an if-statement, so that the c1pem model would compile using the "-O" optimization flag instead, so clearly we had seen the problem here before, probably when Masha was here and running the neural network code on the pem model.  In the RCG 2.6 release, all models are compiled using the "-O" flag.  We tried compiling the c1tst model with this "-O" optimization, and the model started and the computer is just fine.  This solved the problem.
Since we are going to upgrade to RCG 2.6 in the near-ish future anyway, Alex changed our make files so that all models will now compile with the "-O" flag.  We should monitor other models when we recompile them, to make sure none of them start running long with the different optimization.

The future: Implement LSC whitening triggering!

8479   Tue Apr 23 22:10:54 2013   rana   Update   Computers   Nancy

controls@rosalba:/users/rana/docs 0$ svn resolve --accept working nancy
Resolved conflicted state of 'nancy'

8529   Sat May 4 00:21:00 2013   rana   Configuration   Computers   workstation updates

Koji and I went into "Update Manager" on several of the Ubuntu workstations and unselected the "check for updates" button. This is to prevent the machines from asking to be upgraded so frequently - I am concerned that someone might be tempted to upgrade the workstations to Ubuntu 12. We didn't catch them all, so please take a moment to check that this is the case on all the laptops you are using and make it so. We can then apply the updates in a controlled manner once every few months.

8540   Tue May 7 17:43:51 2013   Jamie   Update   Computers   40MARS wireless network problems

I'm not sure what's going on today but we're seeing ~80% packet loss on the 40MARS wireless network. This is obviously causing big problems for all of our wirelessly connected machines. The wired network seems to be fine. I've tried power cycling the wireless router but it didn't seem to help. Not sure what's going on, or how it got this way. Investigating...

8541   Tue May 7 18:16:37 2013   Jamie   Update   Computers   40MARS wireless network problems

Here's an example of the total horribleness of what's happening right now:

controls@rossa:~ 0$ ping 192.168.113.222
PING 192.168.113.222 (192.168.113.222) 56(84) bytes of data.
From 192.168.113.215 icmp_seq=2 Destination Host Unreachable
From 192.168.113.215 icmp_seq=3 Destination Host Unreachable
From 192.168.113.215 icmp_seq=4 Destination Host Unreachable
From 192.168.113.215 icmp_seq=5 Destination Host Unreachable
From 192.168.113.215 icmp_seq=6 Destination Host Unreachable
From 192.168.113.215 icmp_seq=7 Destination Host Unreachable
From 192.168.113.215 icmp_seq=9 Destination Host Unreachable
From 192.168.113.215 icmp_seq=10 Destination Host Unreachable
From 192.168.113.215 icmp_seq=11 Destination Host Unreachable
64 bytes from 192.168.113.222: icmp_seq=12 ttl=64 time=10341 ms
64 bytes from 192.168.113.222: icmp_seq=13 ttl=64 time=10335 ms
^C
--- 192.168.113.222 ping statistics ---
35 packets transmitted, 2 received, +9 errors, 94% packet loss, time 34021ms
rtt min/avg/max/mdev = 10335.309/10338.322/10341.336/4.406 ms, pipe 11
controls@rossa:~ 0

Note that 10 SECOND round trip time and 94% packet loss. That's just beyond stupid. I have no idea what's going on.

12721   Mon Jan 16 12:49:06 2017   rana   Configuration   Computers   Megatron update

The "apt-get update" was failing on some machines because it couldn't find the 'Debian squeeze' repos, so I made some changes so that Megatron could be upgraded. I think Jamie set this up for us a long time ago, but now the LSC has stopped supporting these versions of the software. We're running Ubuntu12 and 'squeeze' is meant to support Ubuntu10. Ubuntu12 (which is what LLO is running) corresponds to 'Debian-wheezy', Ubuntu14 to 'Debian-jessie', and Ubuntu16 to 'Debian-stretch'.

We should consider upgrading a few of our workstations to Ubuntu 14 LTS to see how painful it is to run our scripts and DTT and DV. Better to upgrade a bit before we are forced to by circumstance.

I followed the instructions from software.ligo.org (https://wiki.ligo.org/DASWG/DebianWheezy) and put the recommended lines into the /etc/apt/sources.list.d/lsc-debian.list file.
but I still got 1 error (previously there were ~7 errors):

W: Failed to fetch http://software.ligo.org/lscsoft/debian/dists/wheezy/Release  Unable to find expected entry 'contrib/binary-i386/Packages' in Release file (Wrong sources.list entry or malformed file)

Restarting now to see if things work. If it's OK, we ought to change our squeeze lines into wheezy for all workstations so that our LSC software can be upgraded.

12724   Mon Jan 16 22:03:30 2017   jamie   Configuration   Computers   Megatron update

Quote: We should consider upgrading a few of our workstations to Ubuntu 14 LTS to see how painful it is to run our scripts and DTT and DV. Better to upgrade a bit before we are forced to by circumstance.

I would recommend upgrading the workstations to one of the reference operating systems, either SL7 or Debian squeeze, since that's what the sites are moving towards. If you do that you can just install all the control room software from the supported repos, and not worry about having to compile things from source anymore.

12849   Thu Feb 23 15:48:43 2017   johannes   Update   Computers   c1psl un-bootable

Using the PDA520 detector on the AS port I tried to get some better estimates for the round-trip loss in both arms. While setting up the measurement I noticed some strange output on the scope I'm using to measure the amount of reflected light. The interferometer was aligned using the dither scripts for both arms. Then, ITMY was majorly misaligned in pitch AND yaw such that the PD reading did not change anymore. Thus, only light reflected from the XARM was incident on the AS PD. The scope was showing strange oscillations (Channel 2 is the AS PD signal):

For the measurement we compare the DC level of the reflection with the ETM aligned (and the arm locked) vs a misaligned ETM (only ITM reflection). This ringing could be observed in both states, and was qualitatively reproducible with the other arm. It did not show up in the MC or ARM transmission.
I found that changing the pitch of the 'active' ITM (=of the arm under investigation) either way by just a couple of ticks made it go away and settle roughly at the lower bound of the oscillation:

In this configuration the PD output follows the mode cleaner transmission (Channel 3 in the screen caps) quite well, but we can't take the differential measurement like this, because it is impossible to align and lock the arm but then misalign the ITM. Moving the respective other ITM for potential secondary beams did not seem to have an obvious effect, although I do suspect a ghost/secondary beam to be the culprit for this. I moved the PDA520 on the optical table but didn't see a change in the ringing amplitude. I do need to check the PD reflection though. Obviously it will be hard to determine the arm loss this way, but for now I used the averaging function of the scope to get rid of the ringing. What this gave me was:

(16 +/- 9) ppm losses in the x-arm and (-18 +/- 8) ppm losses in the y-arm

The negative loss obviously makes little sense, and even the x-arm number seems a little too low to be true. I strongly suspect the ringing is responsible and wanted to investigate this further today, but a problem with c1psl came up that shut down all work on this until it is fixed:

I found the PMC unlocked this morning and c1psl (amongst other slow machines) was unresponsive, so I power-cycled them. All except c1psl came back to normal operation. The PMC transmission, as recorded by c1psl, shows that it has been down for several days:

Repeated attempts to reset and/or power-cycle it by Gautam and myself could not bring it back. The fail indicator LED of a single daughter card (the DOUT XVME-212) turns off after reboot; all others stay lit. The sysfail LED on the crate is also on, but according to elog 10015 this is 'normal'. I'm following up that post's elog tree to monitor the startup of c1psl through its system console via a serial connection to find out what is wrong.
12850   Thu Feb 23 18:52:53 2017   rana   Update   Computers   c1psl un-bootable

The fringes seen on the oscope are most likely due to the interference from multiple light beams. If there are laser beams hitting mirrors which are moving, the resultant interference signal could be modulated at several Hertz, if, for example, one of the mirrors had its local damping disabled.

12851   Thu Feb 23 19:44:48 2017   johannes   Update   Computers   c1psl un-bootable

Yes, that was one of the things that I wanted to look into. One thing Gautam and I did that I didn't mention was to reconnect the SRM satellite box and move the optic around a bit, which didn't change anything. Once the c1psl problem is fixed we'll resume with that.

Quote: The fringes seen on the oscope are most likely due to the interference from multiple light beams. If there are laser beams hitting mirrors which are moving, the resultant interference signal could be modulated at several Hertz, if, for example, one of the mirrors had its local damping disabled.

Speaking of which: Using one of the grey RJ45 to D-Sub cables with an RS232 to USB adapter I was able to capture the startup log of c1psl (using the usb camera windows laptop). I also logged the startup of the "healthy" c1aux; both are attached. c1psl stalls at a point where c1aux starts testing for present vme modules and doesn't continue, however it is not strictly hung up, as it still registers to the logger when external login attempts via telnet occur. The telnet client simply reports that the "shell is locked" and exits. It is possible that one of the daughter cards causes this. This seems to happen after iocInit is called by the startup script at /cvs/cds/caltech/target/c1psl/startup.cmd, as it never gets to the next item "coreRelease()". Gautam and I were trying to find out what happens inside iocInit, but it's not clear to us at this point from where it is even called. iocInit.c and compiled binaries exist in several places on the shared drive.
However, all belong to R3.14.x epics releases, while the logfile states that the R3.12.2 epics core is used when iocInit is called. Next we'll interrupt the autoboot procedure and try to work with the machine directly.

Attachment 1: slow_startup_logs.tar.gz

12852   Fri Feb 24 20:38:01 2017   johannes   Update   Computers   c1psl boot-stall culprit identified

[Gautam, Johannes]

c1psl finally booted up again; PMC and IMC are locked. Trying to identify the hiccup from the source code was fruitless. However, since the PMCTRANSPD channel acquisition failure occurred long before the actual slow machine crashed, and since the hiccup in the boot seemed to indicate a problem with daughter module identification, we started removing the DIO and DAQ modules:

1. Started with the ones whose fail LED stayed lit during the boot process: the DIN (XVME-212) and the three DACs (VMIVME4113). No change.
2. Also removed the DOUT (XVME-220) and the two ADCs (VMIVME 3113A and VMIVME3123). It boots just fine and can be telnetted into!
3. Pushed the DIN and the DACs back in. Still boots.
4. Pushed only VMIVME3123 back in. Boot stalls again.
5. Removed VMIVME3123, pushed VMIVME 3113A back in. Boots successfully.
6. Left VMIVME3123 loose in the crate without electrical contact for now.
7. Proceeded to lock PMC and IMC

The particle counter channel should be working again.
• VMIVME3123 is a 16-Bit High-Throughput Analog Input Board, 16 Channels with Simultaneous Sample-and-Hold Inputs
• VMIVME3113A is a Scanning 12-Bit Analog-to-Digital Converter Module with 64 channels

/cvs/cds/caltech/target/c1psl/psl.db lists the following channels for VMIVME3123:

Channels currently in use (and therefore not available in the medm screens):
• C1:PSL-FSS_SLOW_MON
• C1:PSL-PMC_PMCERR
• C1:PSL-FSS_SLOWM
• C1:PSL-FSS_MIXERM
• C1:PSL-FSS_RMTEMP
• C1:PSL-PMC_PMCTRANSPD

Channels not currently in use (?):
• C1:PSL-FSS_MINCOMEAS
• C1:PSL-FSS_RCTRANSPD
• C1:PSL-126MOPA_126MON
• C1:PSL-126MOPA_AMPMON
• C1:PSL-FSS_TIDALINPUT
• C1:PSL-FSS_TIDALSET
• C1:PSL-FSS_RCTEMP
• C1:PSL-PPKTP_TEMP

There are plenty of channels available on the asynchronous ADC, so we could wire the relevant ones there if we don't care about the 16 bit synchronous sampling (required for proper functionality?). Alternatively, we could prioritize the Acromag upgrade on c1psl (DAQ would still be asynchronous, though). The PCBs are coming in next Monday and the front panels on Tuesday.

Some more info that might come in handy to someone someday: The (nameless?) Windows 7 laptop that lives near MC2 and is used for the USB microscope was used for interfacing with c1psl. No special drivers were necessary to use the USB to RS232 adapter, and the RJ45 end of the grey homemade DB9 to RJ45 cable was plugged into the top port which is labeled "console 1". I downloaded the program "CoolTerm" from http://freeware.the-meiers.org/#CoolTerm, which is a serial protocol emulator, and it worked out of the box with the adapter. The standard settings worked fine for communicating with c1psl; only a small modification was necessary: in Options>Terminal make sure that "Enter Key Emulation" is set from "CR+LF" to "CR", otherwise each time 'Enter' is pressed it is actually sent twice.
12854   Tue Feb 28 01:28:52 2017   johannes   Update   Computers   c1psl un-bootable

It turned out the 'ringing' was caused by the respective other ETM still being aligned. For these reflection measurements both test masses of the other arm need to be misaligned. For the ETM it's sufficient to use the Misalign button in the medm screens, while the ITM has to be manually misaligned to move the reflected beam off the PD. I did another round of armloss measurements today. I encountered some problems along the way:

• Some time today (around 6pm) most of the front end models had crashed and needed to be restarted. GV: actually it was only the models on c1lsc that had crashed. I noticed this on Friday too.
• ETMX keeps getting kicked up seemingly randomly. However, it settles fast into its original position.

General Stuff:
• Oscilloscope should sample both MC power (from MC2 transmitted beam) and AS signal
• Channel data can only be loaded from the scope one channel at a time, so 'stop' scope acquisition and then grab the relevant channels individually
• Averaging needs to be restarted every time the mirrors are moved; triggering stop and run remotely via the http interface scripts does this.

Procedure:
1. Run LSC Offsets
2. With the PSL shutter closed measure scope channel dark offsets, then open shutter
3. Align all four test masses with dithering to make sure the IFO alignment is in a known state
4. Pick an arm to measure
5. Turn the other arm's dither alignment off
6. 'Misalign' that arm's ETM using the medm screen button
7. Misalign that arm's ITM manually after disabling its OpLev servos, looking at the AS port camera, and make sure it doesn't hit the PD anymore.
8. Disable dithering for primary arm
9. Record MC and AS time series from (paused) scope
10. Misalign primary ETM
11.
Repeat scope data recording

Each pair of readings gives the reflected power at the AS port normalized to the IMC stored power: $\widehat{P}=\frac{P_{AS}-\overline{P}_{AS}^\mathrm{dark}}{P_{MC}-\overline{P}_{MC}^\mathrm{dark}}$ which is then averaged. The loss is calculated from the ratio of reflected power in the locked (L) vs misaligned (M) state from $\mathcal{L}=\frac{T_1}{4\gamma}\left[1-\frac{\overline{\widehat{P}_L}}{\overline{\widehat{P}_M}} +T_1\right ]-T_2$

Acquiring data this way yielded P_L/P_M = 1.00507 +/- 0.00087 for the X arm and P_L/P_M = 1.00753 +/- 0.00095 for the Y arm. With $\gamma_x=0.832$ and $\gamma_y=0.875$ (from m1=0.179, m2=0.226 and 91.2% and 86.7% mode matching in the X and Y arm, respectively) this yields round trip losses of $\mathcal{L}_X=21\pm4\,\mathrm{ppm}$ and $\mathcal{L}_Y=13\pm4\,\mathrm{ppm}$, assuming a generalized 1% error in test mass transmissivities and modulation indices. As we discussed, this seems a little too good to be true, but at least the numbers are not negative.

12943   Thu Apr 13 21:01:20 2017   rana   Configuration   Computers   LG UltraWide on Rossa

We installed a new curved 34" doublewide monitor on Rossa, but it seems like it has a defective dead pixel region in it. Unless it heals itself by morning, we should return it to Amazon. Please don't throw out the packing materials.

Steve, 8am next morning: it is still bad. The monitor is cracked. It got kicked while traveling. Its box is damaged in the same place. Shipped back 4-17-2017.

Attachment 1: LG34c.jpg
Attachment 2: crack.jpg

12965   Wed May 3 16:12:36 2017   johannes   Configuration   Computers   catastrophic multiple monitor failures

It seems we lost three monitors basically overnight. The main (landscape, left) displays of Pianosa, Rossa and Allegra are all broken with the same failure mode: their backlights failed. Gautam and I confirmed that there is still an image displayed on all three, just incredibly faint.
While Allegra hasn't been used much, we can narrow down that Pianosa's and Rossa's monitors must have failed within 5 or 6 hours of each other, last night. One could say ... they turned to the dark side.

Quick edit: There was a functioning Dell 24" monitor next to the iMac that we used as a replacement for Pianosa's primary display. Once the new curved display is paired with Rossa we can use its old display for Donatella or Allegra.

12966   Wed May 3 16:46:18 2017   Koji   Configuration   Computers   catastrophic multiple monitor failures

- Is there any machine that can handle 4K? I have one 4K LCD for no use.
- I also can donate one 24" Dell

12971   Thu May 4 09:52:43 2017   rana   Configuration   Computers   catastrophic multiple monitor failures

That's a new failure mode. Probably we can't trust the power to be safe anymore. Need Steve to order a couple of surge suppressing power strips for the monitors. The computers are already on the UPS, so they don't need it.

12978   Tue May 9 15:23:12 2017   Steve   Configuration   Computers   catastrophic multiple monitor failures

Gautam and Steve,

A surge protective power strip was installed on Friday, May 5 in the Control Room. Computers not connected to the UPS are plugged into the Isobar12ultra.

Quote: That's a new failure mode. Probably we can't trust the power to be safe anymore. Need Steve to order a couple of surge suppressing power strips for the monitors. The computers are already on the UPS, so they don't need it.

Attachment 1: Trip-Lite.jpg

12993   Mon May 15 20:43:25 2017   rana   Configuration   Computers   catastrophic multiple monitor failures

This is not the right one; this Ethernet controlled strip we want in the racks for remote control. Buy some of these for the MONITORS.

Quote: A surge protective power strip was installed on Friday, May 5 in the Control Room. Computers not connected to the UPS are plugged into the Isobar12ultra.

Quote: That's a new failure mode. Probably we can't trust the power to be safe anymore.
Need Steve to order a couple of surge suppressing power strips for the monitors. The computers are already on the UPS, so they don't need it.

13037   Sun Jun 4 14:19:33 2017   rana   Frogs   Computers   Network slowdown: Martians are behind a waterwall

A few weeks ago we did some internet speed tests and found a dramatic difference between our general network and our internal Martian network in terms of access speed to the outside world. As you can see, the speed from nodus is consistent with a Gigabit connection. But the speeds from any machine on the inside are ~100x slower. We need to take a look at our router / NAT setup to see if it's an old hardware problem or just something in the software firewall. By comparison, my home internet download speed test is ~48 Mbit/s; ~6x faster than our CDS computers.

controls@megatron|~> speedtest
/usr/local/bin/speedtest:5: UserWarning: Module dap was already imported from None, but /usr/lib/python2.7/dist-packages is being added to sys.path
  from pkg_resources import load_entry_point
Retrieving speedtest.net configuration...
Testing from Caltech (131.215.115.189)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by Race Communications (Los Angeles, CA) [29.63 km]: 6.52 ms
Testing download speed................................................................................
Download: 6.35 Mbit/s
Testing upload speed................................................................................................
Upload: 5.10 Mbit/s
controls@megatron|~> exit
logout
Connection to megatron closed.
controls@nodus|~ > speedtest
Retrieving speedtest.net configuration...
Testing from Caltech (131.215.115.52)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by Phyber Communications (Los Angeles, CA) [29.63 km]: 2.196 ms
Testing download speed................................................................................
Download: 721.92 Mbit/s
Testing upload speed................................................................................................
Upload: 251.38 Mbit/s

Attachment 1: Screen_Shot_2017-06-04_at_1.47.47_PM.png
Attachment 2: Screen_Shot_2017-06-04_at_1.44.42_PM.png

13044   Mon Jun 5 21:53:55 2017   rana   Update   Computers   rossa: ubuntu 16.04

With the network config, mounting, and symlinks set up, rossa is able to be used as a workstation for dataviewer and MEDM. For DTT, no luck, since there is so far no lscsoft support past the Ubuntu 14 stage.

13050   Wed Jun 7 15:41:51 2017   Steve   Update   Computers   Windows laptop scanned

Randy Trudeau scanned our Windows laptop, a Dell 13" Vostro, and Steve's memory stick for viruses. Nothing was found. The search continues... Rana thinks that I'm creating these virus beasts by taking pictures with Dino Capture and/or Data Ray on the Windows machine........

13065   Thu Jun 15 14:24:48 2017   Kaustubh, Jigyasa   Update   Computers   Ottavia Switched On

Today, Jigyasa and I connected the Ottavia to one of the unused monitor screens, Donatella. The Ottavia CPU had a label saying 'SMOKED'. One of the past elogs, 11091, dated back in March 2015, by Jenne, had an update regarding the Ottavia smelling 'burny'. It seems to be working fine for about 2 hours now. Once it is connected to the Martian Network we can test it further. The Donatella screen we used seems to have a graphic problem, a damage to the display screen. It's a minor issue and does not affect the display that much, but perhaps it'll be better to use another screen if we plan to use the Ottavia in the future. We will power it down if there is an issue with it.

13067   Thu Jun 15 19:49:03 2017   Kaustubh, Jigyasa   Update   Computers   Ottavia Switched On

It has been working fine the whole day (we didn't do much testing on it though). We are leaving it on for the night.

Quote: Today, Jigyasa and I connected the Ottavia to one of the unused monitor screens, Donatella. The Ottavia CPU had a label saying 'SMOKED'.
One of the past elogs, 11091, dated back in March 2015, by Jenne, had an update regarding the Ottavia smelling 'burny'. It seems to be working fine for about 2 hours now. Once it is connected to the Martian Network we can test it further. The Donatella screen we used seems to have a graphic problem, a damage to the display screen. It's a minor issue and does not affect the display that much, but perhaps it'll be better to use another screen if we plan to use the Ottavia in the future. We will power it down if there is an issue with it.

13068   Fri Jun 16 12:37:47 2017   Kaustubh, Jigyasa   Update   Computers   Ottavia Switched On

Ottavia had been left running overnight and it seems to work fine. There has been no smell or any noticeable problems in the working. This morning Gautam, Kaustubh and I connected Ottavia to the Martian Network through the Netgear switch in the 40m lab area. We were able to SSH into Ottavia through Pianosa and access directories. On the Ottavia itself we were able to run ipython and access the internet. Since it seems to work out fine, Kaustubh and I are going to enable the ethernet connection to Ottavia and secure the wiring now.

Quote: It has been working fine the whole day (we didn't do much testing on it though). We are leaving it on for the night.

Quote: Today, Jigyasa and I connected the Ottavia to one of the unused monitor screens, Donatella. The Ottavia CPU had a label saying 'SMOKED'. One of the past elogs, 11091, dated back in March 2015, by Jenne, had an update regarding the Ottavia smelling 'burny'. It seems to be working fine for about 2 hours now. Once it is connected to the Martian Network we can test it further. The Donatella screen we used seems to have a graphic problem, a damage to the display screen. It's a minor issue and does not affect the display that much, but perhaps it'll be better to use another screen if we plan to use the Ottavia in the future. We will power it down if there is an issue with it.
13071   Fri Jun 16 23:27:19 2017   Kaustubh, Jigyasa   Update   Computers   Ottavia Connected to the Netgear Box

I just connected the Ottavia to the Netgear box and it's working just fine. It'll remain switched on over the weekend.

Quote: Kaustubh and I are going to enable the ethernet connection to Ottavia and secure the wiring now.

13154   Mon Jul 31 20:35:42 2017   Koji   Summary   Computers   Chiara backup situation summary

Summary
- CDS shared file system: backed up
- Chiara system itself: not backed up

controls@chiara|~> df -m
Filesystem     1M-blocks    Used Available Use% Mounted on
/dev/sda1         450420   11039    416501   3% /
udev               15543       1     15543   1% /dev
tmpfs               3111       1      3110   1% /run
none                   5       0         5   0% /run/lock
none               15554       1     15554   1% /run/shm
/dev/sdb1        2064245 1718929    240459  88% /home/cds
/dev/sdd1        1877792 1426378    356028  81% /media/fb9bba0d-7024-41a6-9d29-b14e631a2628
/dev/sdc1        1877764 1686420     95960  95% /media/40mBackup

/dev/sda1 : System boot disk
/dev/sdb1 : main cds disk file system, 2TB partition of 3TB disk (1TB vacant)
/dev/sdc1 : Daily backup of /dev/sdb1 via a cron job (/opt/rtcds/caltech/c1/scripts/backup/localbackup)
/dev/sdd1 : 2014 snapshot of cds. Not actively used. USB

https://nodus.ligo.caltech.edu:8081/40m/11640

13159   Wed Aug 2 14:47:20 2017   Koji   Summary   Computers   Chiara backup situation summary

I further made the burt snapshot directories compressed along with ELOG 11640. This freed up an additional ~130GB.
This will eventually help to give more space to the local backup (/dev/sdc1).

controls@chiara|~> df -m
Filesystem     1M-blocks    Used Available Use% Mounted on
/dev/sda1         450420   11039    416501   3% /
udev               15543       1     15543   1% /dev
tmpfs               3111       1      3110   1% /run
none                   5       0         5   0% /run/lock
none               15554       1     15554   1% /run/shm
/dev/sdb1        2064245 1581871    377517  81% /home/cds
/dev/sdd1        1877792 1426378    356028  81% /media/fb9bba0d-7024-41a6-9d29-b14e631a2628
/dev/sdc1        1877764 1698489     83891  96% /media/40mBackup

13160   Wed Aug 2 15:04:15 2017   gautam   Configuration   Computers   control room workstation power distribution

The 4 control room workstation CPUs (Rossa, Pianosa, Donatella and Allegra) are now connected to the UPS. The 5 monitors are connected to the recently acquired surge-protecting power strips. The rack-mountable power strip + spare APC Surge Arrest power strip have been stored in the electronics cabinet.

Quote: this is not the right one; this Ethernet controlled strip we want in the racks for remote control. Buy some of these for the MONITORS.

13227   Thu Aug 17 22:54:49 2017   ericq   Update   Computers   Trying to access JetStor RAID files

The JetStor RAID unit that we had been using for frame writing before the fb meltdown has some archived frames from DRFPMI locks that I want to get at. I spent some time today trying to mount it on optimus, with no success.

The unit was connected to fb via a SCSI cable to a SCSI-to-PCI card inside of fb. I moved the card to optimus, and attached the cable. However, no mountable device corresponding to the RAID seems to show up anywhere. The RAID unit can tell that it's hooked up to a computer, because when optimus restarts, the RAID event log says "Host Channel 0 - SCSI Bus Reset." The computer is able to get some sort of signals from the RAID unit, because when I change the SCSI ID, the syslog will say 'detected non-optimal RAID status'.
The PCI card is ID'd fine in lspci as "06:01.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev c1)"

'lsscsi' does not list anything related to the unit.

Using 'mpt-status -p', which is somehow associated with this kind of thing, returns the disheartening output:

Checking for SCSI ID:0
Checking for SCSI ID:1
Checking for SCSI ID:2
Checking for SCSI ID:3
Checking for SCSI ID:4
Checking for SCSI ID:5
Checking for SCSI ID:6
Checking for SCSI ID:7
Checking for SCSI ID:8
Checking for SCSI ID:9
Checking for SCSI ID:10
Checking for SCSI ID:11
Checking for SCSI ID:12
Checking for SCSI ID:13
Checking for SCSI ID:14
Checking for SCSI ID:15
Nothing found, contact the author

I don't know what to try at this point.

13239   Tue Aug 22 15:17:19 2017   ericq   Update   Computers   Old frames accessible again

It turns out the problem was just a bent pin on the SCSI cable, likely from having to stretch things a bit to reach optimus from the RAID unit. I hooked it up to megatron, and it was automatically recognized and mounted. I had to turn off the new FB machine and remove it from the rack to be able to access megatron though, since it was just sitting on top. FB needs a rail to sit on!

At a cursory glance, the filesystem appears intact. I have copied over the archived DRFPMI frame files to my user directory for now, and Gautam is going to look into getting those permanently stored on the LDAS copy of 40m frames, so that we can have some redundancy.

Also, during this time, one of the HDDs in the RAID unit failed its SMART tests, so the RAID unit wanted it replaced. There were some spare drives in a little box directly under the unit, so I've installed one and am currently incorporating it back into the RAID. There are two more backup drives in the box. We're running a RAID 5 configuration, so we can only lose one drive at a time before data is lost.
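As an aside on the RAID 5 remark above: a single lost drive is recoverable because each stripe carries an XOR-parity block. A few lines of Python illustrate the reconstruction (illustrative only, not 40m code):

```python
from functools import reduce

# RAID 5 stores one XOR-parity block per stripe; any single lost
# block is the XOR of all the remaining blocks plus the parity.
data = [b"disk0 chunk", b"disk1 chunk", b"disk2 chunk"]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

parity = reduce(xor, data)
# lose data[1]: rebuild it from the survivors plus parity
rebuilt = reduce(xor, [data[0], data[2], parity])
print(rebuilt == data[1])   # True
```

Losing a second block in the same stripe leaves the XOR equation underdetermined, which is why only one drive at a time may fail.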
13240   Tue Aug 22 15:40:06 2017   gautam   Update   Computers   Old frames accessible again

[jamie, gautam]

I had some trouble getting the daqd processes up and running again using Jamie's instructions. With Jamie's help, however, they are back up and running now. The problem was that the mx infrastructure didn't come back up on its own. So prior to running sudo systemctl restart daqd_*, Jamie ran sudo systemctl start mx. This seems to have done the trick.

c1iscey was still showing red fields on the CDS overview screen, so Jamie did a soft reboot. The machine came back up cleanly, so I restarted all the models. But the indicator lights were still red. Apparently the mx processes weren't running on c1iscey. The way to fix this is to run sudo systemctl start mx_stream. Now everything is green.

Now we are going to work on trying the fix Rolf suggested on c1iscex.

Quote: It turns out the problem was just a bent pin on the SCSI cable, likely from having to stretch things a bit to reach optimus from the RAID unit. I hooked it up to megatron, and it was automatically recognized and mounted. I had to turn off the new FB machine and remove it from the rack to be able to access megatron, though, since it was just sitting on top. FB needs a rail to sit on!

At a cursory glance, the filesystem appears intact. I have copied over the archived DRFPMI frame files to my user directory for now, and Gautam is going to look into getting those permanently stored on the LDAS copy of 40m frames, so that we can have some redundancy.

Also, during this time, one of the HDDs in the RAID unit failed its SMART tests, so the RAID unit wanted it replaced. There were some spare drives in a little box directly under the unit, so I've installed one and am currently incorporating it back into the RAID. There are two more backup drives in the box. We're running a RAID 5 configuration, so we can only lose one drive at a time before data is lost.
13242   Tue Aug 22 17:11:15 2017   gautam   Update   Computers   c1iscex model restarts

[jamie, gautam]

We tried to implement the fix that Rolf suggested in order to solve (perhaps among other things) the inability of some utilities like dataviewer to open testpoints. The problem isn't wholly solved yet - we can access actual testpoint data (not just zeros, as was the case) using DTT, and dataviewer works if DTT is used to open a testpoint first, but DV by itself can't seem to open testpoints. Here is what was done (Jamie will correct me if I am mistaken):

1. Jamie checked out branch 3.4 of the RCG from the SVN.
2. Jamie recompiled all the models on c1iscex against this version of RCG.
3. I shut down the ETMX watchdog, then ran rtcds stop all on c1iscex to stop all the models, and then restarted them using rtcds start <model> in the order c1x01, c1scx and c1asx.
4. Models came back up cleanly. I then restarted the daqd_dc process on FB1. At this point all indicators on the CDS overview screen were green.
5. Tried getting testpoint data with DTT and DV for the ETMX Oplev Pitch and Yaw IN1 testpoints. Conclusion as above.

So while we are in a better state now, the problem isn't fully solved.

Comment: there seems to be an in-built timeout for testpoints opened with DTT - if the measurement is inactive for some time (unsure how much exactly, but something like 5 mins), the testpoint is automatically closed.

13243   Tue Aug 22 18:36:46 2017   gautam   Update   Computers   All FE models compiled against RCG3.4

After getting the go-ahead from Jamie, I recompiled all the FE models against the same version of RCG that we tested on the c1iscex models. To do so:

• I did rtcds make and rtcds install for all the models.
• Then I ssh-ed into the FEs and did rtcds stop all, followed by rtcds start <model> in the order they are listed on the CDS overview MEDM screen (top to bottom).
• During the compilation process (i.e. rtcds make), for some of the models, I got some compilation warnings.
I believe these are related to models that have custom C code blocks in them. Jamie tells me that it is okay to ignore these warnings and that they will be fixed at some point.
• c1lsc FE crashed when I ran rtcds stop all - had to go and do a manual reboot.
• Doing so took down the models on c1sus and c1ioo that were running - but these FEs themselves did not have to be rebooted.
• Once c1lsc came back up, I restarted all the models on the vertex FEs. They all came back online fine.
• Then I ssh-ed into FB1 and restarted the daqd processes - but the c1lsc and c1ioo CDS indicators were still red.
• Looks like the mx_stream processes weren't started automatically on these two machines. Reasons unknown. Earlier today, the same was observed for c1iscey.
• I manually restarted the mx_stream processes, at which point all CDS indicator lights became green (see Attachment #1).

IFO alignment needs to be redone, but at least we now have a (admittedly roundabout) way of getting testpoints. Did a quick check for "nan-s" on the ASC screen, saw none. So I am re-enabling watchdogs for all optics.

GV 23 August 9am: Last night, I re-aligned the TMs for single arm locks. Before the model restarts, I had saved the good alignment on the EPICS sliders, but the gain of x3 on the coil driver filter banks has to be manually turned on at the moment (i.e. the safe.snap file has them off). ALS noise looked good for both arms, so just for fun, I tried transitioning control of both arms to ALS (in the CARM/DARM basis as we do when we lock DRFPMI, using the Transition_IR_ALS.py script), and was successful.

Quote: [jamie, gautam]

We tried to implement the fix that Rolf suggested in order to solve (perhaps among other things) the inability of some utilities like dataviewer to open testpoints.
The problem isn't wholly solved yet - we can access actual testpoint data (not just zeros, as was the case) using DTT, and dataviewer works if DTT is used to open a testpoint first, but DV by itself can't seem to open testpoints. Here is what was done (Jamie will correct me if I am mistaken):

1. Jamie checked out branch 3.4 of the RCG from the SVN.
2. Jamie recompiled all the models on c1iscex against this version of RCG.
3. I shut down the ETMX watchdog, then ran rtcds stop all on c1iscex to stop all the models, and then restarted them using rtcds start <model> in the order c1x01, c1scx and c1asx.
4. Models came back up cleanly. I then restarted the daqd_dc process on FB1. At this point all indicators on the CDS overview screen were green.
5. Tried getting testpoint data with DTT and DV for the ETMX Oplev Pitch and Yaw IN1 testpoints. Conclusion as above.

So while we are in a better state now, the problem isn't fully solved.

Comment: there seems to be an in-built timeout for testpoints opened with DTT - if the measurement is inactive for some time (unsure how much exactly, but something like 5 mins), the testpoint is automatically closed.

Attachment 1: CDS_Aug22.png

13277   Wed Aug 30 22:15:47 2017   rana   Omnistructure   Computers   USB flash drives moved

I have moved the USB flash drives from the electronics bench back into the middle drawer of the cabinet next to the AC, which is west of the fridge. Drawer re-labeled.

13287   Fri Sep 1 16:55:27 2017   gautam   Update   Computers   Testpoints now accessible again

Thanks to Jonathan Hanks, it appears we can now access test-points again using dataviewer. I haven't done an exhaustive check just yet, but I have loaded a few testpoints in dataviewer, and ran a script that uses testpoint channels (specifically the ALS phase tracker UGF setting script); all seems good.

So if I remember correctly, the major CDS fix now required is to solve the model unloading issue. Thanks to Jamie/Jonathan Hanks/KT for getting us back to this point!
Here are the details: After reading logs and code, it was a simple daqdrc config change. The daqdrc should read something like this:

...
set master_config=".../master";
configure channels begin end;
tpconfig ".../testpoint.par";
...

What had happened was that tpconfig was put before the configure channels begin end. So when daqd_rcv went to configure its test points, it did not have the channel list configured and could not match test points to the right model & machine. Dave and I suspect that this is so that it can do a request directly to the correct front end instead of a general broadcast to all awgtpman instances. Simply reordering the config fixes it.

I tested by opening a test point in dataviewer and verifying that testpoints had opened/closed by using diag -l. Xmgr/grace didn't seem to be able to keep up with the test point data over a remote connection.

You can find this in the logs by looking for entries like the following while the daqd is starting up. When we looked, we saw that there was an entry for every model.

Unable to find GDS node 35 system c1daf in INI fiels

13323   Wed Sep 20 15:49:26 2017   rana   Omnistructure   Computers   new internet

Larry Wallace hooked up a new switch (Brocade FWS 648G) today, which is our 40m lab interface to the outside-world internet. It's faster. He then, just now, switched over the cables which were going to the old one into the new one, including NODUS and the NAT router. CDS machines can still connect to the outside world. In the next week or two, he'll install a new NAT for us so that we can have high-speed comm from CDS to the world.

13405   Sun Oct 29 16:40:17 2017   rana   Summary   Computers   disk cleanup

Backed up all the wikis. They're in wiki_backups/*.tar.xz (because xz -9e gives better compression than gzip or bzip2). Moved old user directories into /users/OLD/.

13434   Fri Nov 17 16:31:11 2017   aaron   Omnistructure   Computers   Acromag wired up

Acromag Wireup Update

I finished wiring up the Acromags to replace the VME boxes on the x arm.
I still need to cut down the bar and get them all tidy in the box, but I wanted to post the wiring maps I made. I wanted to note specifically that a few of the connections were assigned to VME boxes but are no longer assigned in this Acromag setup. We should be sure that we actually do not need to use the following channels:

Channels no longer in use

• From the VME analog output (VMIVME 4116) to the QPD whitening board (no DCC number on the front), 3 channels are no longer in use.
• From the anti-image filter (D000186) to the ADC (VMIVME 3113A), 5 channels are no longer in use (these are the only channels from the anti-image filter, so this filter is no longer in use at all?).
• From the universal dewhitening filter (D000183) to a binary I/O adapter (channels 1-16), 4 channels are no longer in use. These are the only channels from the dewhitening filter.
• From a second universal dewhitening filter (D000183) to another binary I/O adapter (channels 1-16), one channel is no longer in use (this was the only channel from this dewhitening filter).
• From the opti-lever (D010033) to the VME ADC (VMIVME 3113A), 7 channels are no longer in use (this was all of the channels from the opti-lever).
• From the SUS PD whitening/interface board (D000210) to a binary I/O adapter (channels 1-16), 5 channels are no longer in use.
• Note that none of the binary I/O adapter channels are in use.

Attachment 1: AcromagWiringMaps.pdf

13435   Fri Nov 17 17:10:53 2017   rana   Omnistructure   Computers   Acromag wired up

Exactly: you'll have to list explicitly what functions those channels had, so that we know what we're losing before we make the switch.

13440   Tue Nov 21 17:51:01 2017   Koji   Configuration   Computers   nodus post OS migration admin

The post-OS-migration admin for nodus, about apache, elogd, svn, iptables, etc., can be found in https://wiki-40m.ligo.caltech.edu/NodusUpgradeNov2017

Update: The svn dump from the old svn was done, and it was imported to the new svn repository structure.
Now the svn command line and (simple) web interface are running, and "websvn" was also implemented.

13442   Tue Nov 21 23:47:51 2017   gautam   Configuration   Computers   nodus post OS migration admin

I restored the nodus crontab (copied over from the Nov 17 backup of the same at /opt/rtcds/caltech/c1/scripts/crontab/crontab_nodus.20171117080001). There wasn't a crontab, so I made one using sudo crontab -e.

This crontab is supposed to execute some backup scripts, send pizza emails, check chiara disk usage, and back up the crontab itself. I've commented out the backup of nodus' /etc and /export for now, while we get back to a fully operational nodus (though we also have a backup of /cvs/cds/caltech/nodus_backup on the external LaCie drive); they can be re-enabled by un-commenting the appropriate lines in the crontab.

Quote: The post-OS-migration admin for nodus, about apache, elogd, svn, iptables, etc., can be found in https://wiki-40m.ligo.caltech.edu/NodusUpgradeNov2017

Update: The svn dump from the old svn was done, and it was imported to the new svn repository structure. Now the svn command line and (simple) web interface is running. "websvn" is not installed.

13443   Wed Nov 22 00:54:18 2017   johannes   Omnistructure   Computers   Slow DAQ replacement computer progress

I got the SuperMicro 1U server box from Larry W on Monday and set it up in the CryoLab for initial testing. The processor is an Intel D525 dual-core Atom at 1.8 GHz (i386 architecture, no 64-bit support). The unit has a 250 GB SSD and 4 GB RAM. I installed Debian Jessie on it without any problems and compiled the most recent stable versions of EPICS base (3.15.5), the asyn drivers (4-32), and the modbus module (2-10-1). EPICS and asyn each took about 10 minutes to build, and modbus about 1 minute.

I copied the database files and port driver definitions for the cryolab from cryoaux, whose modbus services I suspended, and initialized the EPICS modbus IOC on the SuperMicro machine instead.
It's working flawlessly so far, but admittedly the box is not under heavy load in the cryolab, as the framebuilder there is logging only the 16 analog channels. I have recently worked out some kinks in the port driver and channel definitions, most importantly:

• modbus IOC initialization is performed automatically by systemd on reboot.
• If the IOC crashes or a system reboot is required, the Acromag units freeze in their last current state. When the IOC is started, a single read operation of all A/D registers is performed and the result taken as the initial value of the corresponding channel, causing no discontinuity in generated voltage EVER (except of course for the rare case when the Acromags themselves have to be restarted).

Aaron and I set 12/4 as a tentative date when we will be ready to attempt a swap. Until then, the cabling needs to be finished and a channel database file needs to be prepared.

13445   Wed Nov 22 11:51:38 2017   gautam   Configuration   Computers   nodus post OS migration admin

Confirmed that this crontab is running - the daily backup of the crontab seems to have successfully executed, and there is now a file crontab_nodus.ligo.caltech.edu.20171122080001 in the directory quoted below. The HOSTNAME seems to be "nodus.ligo.caltech.edu" whereas it was just "nodus", so the file names are a bit longer now, but I guess that's fine...

Quote: I restored the nodus crontab (copied over from the Nov 17 backup of the same at /opt/rtcds/caltech/c1/scripts/crontab/crontab_nodus.20171117080001). There wasn't a crontab, so I made one using sudo crontab -e. This crontab is supposed to execute some backup scripts, send pizza emails, check chiara disk usage, and back up the crontab itself. I've commented out the backup of nodus' /etc and /export for now, while we get back to a fully operational nodus (though we also have a backup of /cvs/cds/caltech/nodus_backup on the external LaCie drive); they can be re-enabled by un-commenting the appropriate lines in the crontab.
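The no-glitch IOC startup described in the bullets above can be sketched generically. This is an illustration of the pattern only, not the actual EPICS/modbus code — the register-reading function and channel names here are stand-ins:

```python
# Illustrative sketch of init-by-readback: on server start, take the
# hardware's current output values as the initial channel values, so a
# server restart never steps the generated voltages. The modbus read
# is mocked with a dict; channel names are hypothetical.
def read_output_registers():
    # Stand-in for the one-time read of all D/A output registers
    # performed when the IOC starts.
    return {"ETMX_PIT_BIAS": 1.23, "ETMX_YAW_BIAS": -0.45}

def init_channels():
    # Seed the served channel values from the readback instead of from
    # zeros or database defaults; the first write-out then reproduces
    # exactly what the hardware is already outputting.
    return dict(read_output_registers())

channels = init_channels()
assert channels == read_output_registers()  # no discontinuity at startup
```

The design point is that the hardware, not the database file, is the source of truth at startup — which is what makes an IOC crash or reboot invisible to the suspended optic.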
13458   Wed Nov 29 21:40:30 2017   johannes   Omnistructure   Computers   Slow DAQ replacement computer progress

[Aaron, Johannes]

We configured the AtomServer for the Martian network today. Hostname is c1auxex2, IP is 192.168.113.49. Remote access over SSH is enabled. There will be 6 Acromag units served by c1auxex2:

Hostname           Type   IP Address
c1auxex-xt1221a    1221   192.168.113.130
c1auxex-xt1221b    1221   192.168.113.131
c1auxex-xt1221c    1221   192.168.113.132
c1auxex-xt1541a    1541   192.168.113.133
c1auxex-xt1541b    1541   192.168.113.134
c1auxex-xt1111a    1111   192.168.113.135

Some hardware to assemble the Acromag box and adapter PCBs is still missing, and the wiring and channel definitions have to be finalized. The port driver initialization instructions and channel definitions are currently stored locally in /home/controls/modbusIOC/ but will eventually be migrated to a shared location; we need to decide how exactly we want to set up this infrastructure.

• Should the new machines have the same hostnames as the ones they're replacing? For the transition we simply named it c1auxex2.
• Because the communication of the server machine with the DAQ modules happens over TCP/IP and not some VME backplane bus, we could consolidate machines, particularly in the vertex area.
• It would be good to use the fact that these SuperMicro servers have 2+ ethernet ports to separate CDS EPICS traffic from the modbus traffic. That would also keep the 30+ IPs for the Acromag thingies off the Martian host tables.

13461   Sun Dec 3 05:25:59 2017   gautam   Configuration   Computers   sendmail installed on nodus

Pizza mail didn't go out last weekend - looking at the logfile, it seems like the "sendmail" service was missing. I installed sendmail following the instructions here: https://tecadmin.net/install-sendmail-server-on-centos-rhel-server/

Except that to start the sendmail service, I used systemctl and not init.d, i.e. I ran systemctl start sendmail.service (as root). Test email to myself works.
Let's see if it works this weekend. Of course this isn't so critical; more important are the maintenance emails that may need to go out (e.g. the disk usage alert on chiara / N2 pressure check, which look like nodus' responsibilities).

13462   Sun Dec 3 17:01:08 2017   Koji   Configuration   Computers   sendmail installed on nodus

An email has come at 5PM on Dec 3rd.

13463   Mon Dec 4 22:06:07 2017   johannes   Omnistructure   Computers   Acromag XEND progress

I wired up the power distribution and ethernet cables in the Acromag chassis today. For the time being it's all kind of loose in there, but tomorrow the last parts should arrive from McMaster to put everything in its place. I had to unplug some of the wiring that Aaron had already done, but labeled everything before I did so. I finalized the IP configuration via USB for all the units, which are now powered through the chassis and active on the network.

I started transcribing the database file ETMXaux.db that is loaded by c1auxex into the format required by the Acromags, and made sure that the new c1auxex2 properly functions as a server, which it does.

To-do list:

• Need to calibrate the +/- 10V swing of the analog channels via the USB utility, but that requires wiring the channels to the connectors and should probably be done once the unit sits in the rack.
• Need to wire power from the Sorensens into the chassis. There are +/- 5V, +/- 15V and +/- 20V present. The Acromags need only +12V-32V, for which I plan to use the +20V, and an excitation voltage for the binary channels, for which I'm going to wire the +5V. Should do this through the fuse rails on the side.
• The current slow binary channels are sinking outputs, same as the XT1111 16-channel module we have. The additional 4 binary outputs of the XT1541 are sourcing, and I'm currently not sure if we can use them with the SOS driver and whitening VME boards that get their binary control signals from the slow system.
• Confirm switching of binary channels (haven't used model XT1111 before, but I assume the definitions are identical to XT1121).
• Set up the remaining essential EPICS channels and confirm that dimensions are the same (as in, both give the same voltage for the same requested value).
• Disconnect DIN cables, attach adapter boards + DSUB cables.
• Testing.

Quote: [Aaron, Johannes]

We configured the AtomServer for the Martian network today. Hostname is c1auxex2, IP is 192.168.113.49. Remote access over SSH is enabled. There will be 6 Acromag units served by c1auxex2:

Hostname           Type   IP Address
c1auxex-xt1221a    1221   192.168.113.130
c1auxex-xt1221b    1221   192.168.113.131
c1auxex-xt1221c    1221   192.168.113.132
c1auxex-xt1541a    1541   192.168.113.133
c1auxex-xt1541b    1541   192.168.113.134
c1auxex-xt1111a    1111   192.168.113.135

Some hardware to assemble the Acromag box and adapter PCBs is still missing, and the wiring and channel definitions have to be finalized. The port driver initialization instructions and channel definitions are currently stored locally in /home/controls/modbusIOC/ but will eventually be migrated to a shared location; we need to decide how exactly we want to set up this infrastructure.

• Should the new machines have the same hostnames as the ones they're replacing? For the transition we simply named it c1auxex2.
• Because the communication of the server machine with the DAQ modules happens over TCP/IP and not some VME backplane bus, we could consolidate machines, particularly in the vertex area.
• It would be good to use the fact that these SuperMicro servers have 2+ ethernet ports to separate CDS EPICS traffic from the modbus traffic. That would also keep the 30+ IPs for the Acromag thingies off the Martian host tables.
13468   Thu Dec 7 22:24:04 2017   johannes   Omnistructure   Computers   Acromag XEND progress

Quote:
• Need to calibrate the +/- 10V swing of the analog channels via the USB utility, but that requires wiring the channels to the connectors and should probably be done once the unit sits in the rack.
• Need to wire power from the Sorensens into the chassis. There are +/- 5V, +/- 15V and +/- 20V present. The Acromags need only +12V-32V, for which I plan to use the +20V, and an excitation voltage for the binary channels, for which I'm going to wire the +5V. Should do this through the fuse rails on the side.
• The current slow binary channels are sinking outputs, same as the XT1111 16-channel module we have. The additional 4 binary outputs of the XT1541 are sourcing, and I'm currently not sure if we can use them with the SOS driver and whitening VME boards that get their binary control signals from the slow system.
• Confirm switching of binary channels (haven't used model XT1111 before, but I assume the definitions are identical to XT1121).
• Set up the remaining essential EPICS channels and confirm that dimensions are the same (as in, both give the same voltage for the same requested value).
• Disconnect DIN cables, attach adapter boards + DSUB cables.
• Testing.

Getting the chassis ready took a little longer than anticipated, mostly because I had not looked into the channel list myself before and forgot about Lydia's post, which mentions that some of the switching controls have to be moved from the fast to the slow DAQ. We would need a total of 5+5+4+8=22 binary outputs. With the existing Acromag units we have 16 sinking outputs and 8 sourcing outputs. I looked through all the Eurocrate modules and confirmed that they all use the same switch topology, which has sourcing inputs.
While one can use a pull-down resistor to control a sourcing input with a sourcing output, pulling down the MAX333A input (the datasheet says logic low is <0.8V) requires something like 100 Ohms for the pull-down resistor, which would require ~150mA of current PER CHANNEL, which is unreasonable. Instead, I asked Steve to buy a second XT1111 and modified the chassis to accommodate more Acromag units.

I have now finished wiring the chassis (except for 8 remaining bypass controls to the whitening board, which need the second XT1111), calibrated all channels in use, confirmed all pin locations via the existing breakout boards and DCC drawings for the Eurocrate modules, and today Steve and I added more fuses to the DIN rail power distribution for +20V and +15V. There was not enough contiguous free space in the XEND rack to mount the chassis, so for now I placed it next to it.

c1auxex2 is currently hosting all original physical c1auxex channels (not yet calc records) under their original names, with an _XT added at the end to avoid duplicate channel names. c1auxex is still in control of ETMX. All EPICS channels hosted by c1auxex2 are in dimensions of Volts.

The plan for tomorrow is to take c1auxex off the grid, rename the c1auxex2-hosted channels, and transfer ETMX controls to it, provided we can find enough 37-pin DSub cables (8). I made 5 adapter boards for the 5 Eurocrate modules that need to talk to the slow DAQ through their backplane connector.

13469   Fri Dec 8 12:06:59 2017   johannes   Omnistructure   Computers   c1auxex2 ready - but need more cables

The new slow machine c1auxex2 is ready to deploy. Unfortunately we don't have enough 37-pin DSub cables to connect all channels. In fact, we need a total of 8, and I found only three male-male cables and one gender changer. I asked Steve to buy more.

Over the past week I have transferred all EPICS records - soft channels and physical ones - from c1auxex to c1auxex2, making changes where needed. Today I started the in-situ testing:

1.
Unplugged ETMX's satellite box.
2. Unplugged the eurocrate backplane DIN cables from the SOS Driver and QPD Whitening filter modules (the ones that receive ao channels).
3. Measured output voltages on the relevant pins for comparison after the swap.
4. Turned off c1auxex by key, removed the ethernet cable.
5. Started the modbus IOC on c1auxex2.
6. Slow-machine indicator channels came online; the ETMX watchdog was responsive (but didn't have anything to do due to missing inputs) and reporting. PIT/YAW sliders function as expected.
7. Restoring the previous settings gives output voltages close to the previous values - in fact, the exact values requested (due to the fresh calibration).
8. The last step is to go live with c1auxex2 and confirm that the remaining channels work as expected.

I copied the relevant files to start the modbus server to /cvs/cds/caltech/target/c1auxex2, although I kept local copies in /home/controls/modbusIOC/, from which they're still run. I wonder what's the best practice for this. Probably to store the database files centrally and load them over the network on server start?

13487   Mon Dec 18 17:48:09 2017   rana   Update   Computers   rossa: SL7.3 upgrade continues

Following instructions from LLO-CDS for the rossa upgrade. Last time there were some issues with not being able to access the LLO EPEL repos, but this time it seems to be working fine. After adding font aliases, need to run 'sudo xset fp rehash' to get the new aliases to take hold. Afterwards, am able to use MEDM and sitemap just fine. But diaggui won't run because of a lib-sasl error. Try 'sudo yum install gds-all'.

diaggui: error while loading shared libraries: libsasl2.so.2: cannot open shared object file: No such file or directory

(have contacted LLO CDS admins)

X-windows keeps crashing with SL7 and this big monitor. Followed instructions on the internet to remove the generic 'Nouveau' driver and install the proprietary NVIDIA drivers by dropping to run level 3 and running some command-line hoodoo to modify the X-files.
Now I can even put the mouse on the left side of the screen and it doesn't crash.
A particle starts its motion from rest under the action of a constant force. If the distance covered in the first 10 s is s1 and that covered in the first 20 s is s2, then

• s2 = 2s1
• s2 = 3s1
• s2 = 4s1
• s2 = s1

C. s2 = 4s1

If the particle is moving in a straight line under the action of a constant force, the distance covered is s = ut + at²/2. Since the body starts from rest, u = 0, therefore s = at²/2. Hence s1 = a(10)²/2 = 50a and s2 = a(20)²/2 = 200a, giving s2 = 4s1.

(From Physics, Motion in a Plane, Class 11, Manipur Board.)

What are the basic characteristics that a quantity must possess so that it may be a vector quantity?

A quantity must possess a direction and must follow the vector axioms. Any quantity that follows the vector axioms is classified as a vector.

Give three examples of scalar quantities.

Mass, temperature and energy.

Give three examples of vector quantities.

Force, impulse and momentum.

What is a scalar quantity?

A physical quantity that requires only magnitude for its complete specification is called a scalar quantity.

What is a vector quantity?

A physical quantity that requires direction along with magnitude for its complete specification is called a vector quantity.
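The s2 = 4s1 result can be checked numerically with a short sketch. The acceleration value below is an arbitrary choice, since it cancels in the ratio:

```python
# s = u*t + a*t^2/2 with u = 0 (starts from rest) reduces to s = a*t^2/2.
def distance(t, a):
    # distance covered after time t under constant acceleration a, from rest
    return 0.5 * a * t**2

a = 2.0  # arbitrary constant acceleration; it cancels in the ratio s2/s1
s1 = distance(10, a)  # distance in the first 10 s
s2 = distance(20, a)  # distance in the first 20 s
print(s2 / s1)  # -> 4.0, i.e. s2 = 4*s1
```

Since s grows as t², doubling the elapsed time always quadruples the distance, whatever the force.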
Now showing items 1-20 of 47

- **The 2.1-D Sketch** (IEEE Computer Society Press, 1990). A model is described for image segmentation that tries to capture the low-level depth reconstruction exhibited in early human vision, giving an important role to edge terminations. The problem is to find a decomposition ...
- **2D-Shape Analysis Using Conformal Mapping** (Springer Verlag, 2006). The study of 2D shapes and their similarities is a central problem in the field of vision. It arises in particular from the task of classifying and recognizing objects from their observed silhouette. Defining natural ...
- **An Algebraic Surface with $K$ ample, $(K^2)= 9, p_g = q = 0$** (Johns Hopkins University Press, 1979)
- **A Bayesian Treatment of the Stereo Correspondence Problem Using Half-Occluded Regions** (IEEE Computer Society Press, 1992). A half-occluded region in a stereo pair is a set of pixels in one image representing points in space visible to that camera or eye only, and not to the other. These occur typically as parts of the background immediately ...
- (1962)
- **Deformations and Liftings of Finite, Commutative Group Scheme** (Springer Verlag, 1968)
- **An Elementary Theorem in Geometric Invariant Theory** (American Mathematical Society, 1961)
- **Empirical Statistics and Stochastic Models for Visual Signals** (Massachusetts Institute of Technology Press, 2006)
- **Enriques' Classification of Surfaces in Char. p, III** (Springer Verlag, 1976)
- **Filters, Random Fields and Maximum Entropy (FRAME): Towards a Unified Theory for Texture Modeling** (Springer Verlag, 1998). This article presents a statistical theory for texture modeling. This theory combines filtering theory and Markov random field modeling through the maximum entropy principle, and interprets and clarifies many previous ...
- **Further Pathologies in Algebraic Geometry** (Johns Hopkins University Press, 1962)
- **GRADE: Gibbs Reaction and Diffusion Equations** (Narosa, 1998). Recently there have been increasing interests in using nonlinear PDEs for applications in computer vision and image processing. In this paper, we propose a general statistical framework for designing a new class of PDEs. ...
- **Hierarchical Bayesian Inference in the Visual Cortex** (Optical Society of America, 2003). Traditional views of visual processing suggest that early visual neurons in areas V1 and V2 are static spatiotemporal filters that extract local features from a visual scene. The extracted information is then channeled ...
- **Hirzebruch's Proportionality Theorem in the Non-Compact Case** (Springer Verlag, 1977)
- **The Irreducibility of the Space of Curves of Given Genus** (Springer Verlag, 1969)
- **Learning Generic Prior Models for Visual Computation** (Institute of Electrical and Electronics Engineers, 1997). This paper presents a novel theory for learning generic prior models from a set of observed natural images based on a minimax entropy theory that the authors studied in modeling textures. We start by studying the statistics ...
- **Modeling and Decoding Motor Cortical Activity Using a Switching Kalman Filter** (Institute of Electrical and Electronics Engineers, 2004). We present a switching Kalman filter model for the real-time inference of hand kinematics from a population of motor cortical neurons. Firing rates are modeled as a Gaussian mixture where the mean of each Gaussian component ...
- **The Nonlinear Statistics of High-Contrast Patches in Natural Images** (Springer Verlag, 2003). Recently, there has been a great deal of interest in modeling the non-Gaussian structures of natural images. However, despite the many advances in the direction of sparse coding and multi-resolution analysis, the full ...
- **A Note of Shimura's Paper "Discontinuous Groups and Abelian Varieties"** (Springer Verlag, 1969)
- **Occlusion Models for Natural Images: A Statistical Study of a Scale-Invariant Dead Leaves Model** (Springer Verlag, 2001). We develop a scale-invariant version of Matheron's "dead leaves model" for the statistics of natural images. The model takes occlusions into account and resembles the image formation process by randomly adding independent ...
https://math.stackexchange.com/questions/1449244/function-defined-by-a-sum-using-rational-numbers
# Function defined by a sum using rational numbers

Let $\{r_n\}_{n=1}^\infty$ be an enumeration of the rational numbers, define $$f(x)=\sum_{\{ n \ / \ r_n<x \}} \frac{1}{2^n}$$ $(i)$ $f$ is continuous at irrational points $(ii)$ $\lim_{x\rightarrow a^-} f(x) = f(a)$ $\qquad$ (left continuity) $(iii)$ $\lim_{x\rightarrow a^+} f(x) = \sum_{\{ n \ / \ r_n \leq a \}} \frac{1}{2^n}$ $(iv)$ what is $\int_0^1 f$

I just managed to show that $f$ is discontinuous at rational points. I guess that some of the above points are similar to each other, I just don't know how to deal with this kind of problem. Thank you.

Hints:

• (ii) and (iii) together imply (i) and that $f$ is discontinuous at rational points.
• (ii) and (iii) are similar: Think of the set $\{ r_n : r_n <a\}$ which defines $f(a)$. What is the "limit" of $\{r_n : r_n <a^-\}$ when $a^-\to a$ (from the left) and $\{r_n : r_n <a^+\}$ when $a^+\to a$ (from the right)?
• For (iv). Try to write $f$ as an (increasing) limit of $f_n$, where $$f_n(x) = \sum_{\{k\le n| r_k < x\}} \frac{1}{2^k}.$$ Calculate $\int_0^1 f_n(x) dx$ and then take $n\to \infty$. (Remark: we have that $f_n \to f$ uniformly, so $\lim_{n\to \infty} \int_0^1 f_n = \int_0^1 f$.)

Further hints: I would say a bit more on how to tackle (ii) and (iii) as they are not that straightforward. Take (iii) as an example (similar for (ii)). We want to show $$\lim_{x\to a^+} f(x) = \sum _{\{n| r_n \le a\}} \frac{1}{2^n}.$$ That $\ge$ holds is obvious from the definition of $f$. We want to show that $>$ is not possible. For the sake of contradiction, assume $>$ holds. Then $$f(x) >\delta + \sum _{\{n| r_n \le a\}} \frac{1}{2^n}$$ for all $x>a$ and for some $\delta >0$. But note that $$b_n :=\sum_{ k\ge n} \frac{1}{2^k} \to 0$$ as $n\to \infty$, so there is $K$ so that $$\sum_{n=K}^\infty \frac{1}{2^n} <\delta.$$ So what can we say about $f(x)$ if $x$ is close to $a$, so that $r_1, \cdots, r_K \notin (a, x)$?

• Thank you.
For the second one it is clear heuristically, if we approach from the right, i.e. $a^+\rightarrow a$ then at the limit we are going to include the point $a$ and get $\{r_n : r_n \leq a\}$ , when approaching from left we would get the same set $\{r_n : r_n < a\}$... but don't have any idea how to write the proof rigorously. – user16015 Sep 24 '15 at 7:01 • Try to do the following: For (iii), try to show the equality by showing (a) $f(x) \ge RHS$ for all $x>a$ and (b) that $>$ is not true by arguing by contradiction. @user16015 – user99914 Sep 24 '15 at 7:23 • I have to say that this is not that straight forward. I will try to write down more hints. – user99914 Sep 24 '15 at 7:27 Hint for (ii): For $x$ sufficiently close to $a$ (but $<a$) we can ensure that none of the first $N$ rationals appears in $[x,a)$. Hence $f(a)-f(x)$ can be bounded from above by $\sum_{n>N}2^{-n}$. This can be adapted for (iii).
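As a numerical sanity check on these hints (my own illustration, not part of the thread), one can build the partial sums $f_N$ from one concrete enumeration of the rationals in $[0,1)$ and watch the jump of size $2^{-k}$ appear at a rational $r_k$; since $f_N$ is a step function, $\int_0^1 f_N$ can be computed exactly:

```python
from fractions import Fraction

def rationals():
    # One concrete enumeration of the rationals in [0, 1): list p/q for
    # q = 1, 2, 3, ... and 0 <= p < q, skipping values already seen.
    seen = set()
    q = 1
    while True:
        for p in range(q):
            r = Fraction(p, q)
            if r not in seen:
                seen.add(r)
                yield r
        q += 1

def f_N(x, rs):
    # Partial sum f_N(x) = sum of 2^-k over the first N rationals with r_k < x.
    return sum(Fraction(1, 2**k) for k, r in enumerate(rs, start=1) if r < x)

def integral_f_N(rs):
    # f_N is a step function; integrating 2^-k * 1_{r_k < x} over [0, 1]
    # contributes 2^-k * (1 - r_k) for each r_k in [0, 1).
    return sum(Fraction(1, 2**k) * (1 - r) for k, r in enumerate(rs, start=1))

gen = rationals()
rs = [next(gen) for _ in range(200)]

# Jump at the rational 1/2: among the first 200 rationals, none other lies
# within 10^-9 of 1/2, so f_N(1/2 + eps) - f_N(1/2) isolates the term 2^-k.
eps = Fraction(1, 10**9)
jump = f_N(Fraction(1, 2) + eps, rs) - f_N(Fraction(1, 2), rs)
```

With this enumeration $r_2 = 1/2$, so the right-hand limit at $1/2$ exceeds $f(1/2)$ by exactly $2^{-2}$, matching hint (iii).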
https://mathhelpboards.com/threads/laplace-mixed-bc.1987/
# [SOLVED]Laplace mixed BC

#### dwsmith ##### Well-known member

Solve Laplace’s equation $\nabla^2u = 0$ on the rectangle with the following boundary conditions: $$u_y(x,0) = 0\quad u_x(0,y) = 0\quad u(x,H) = f(x)\quad u_x(L,y) + u(L,y) = 0.$$ How are mixed BC handled?

#### dwsmith ##### Well-known member

Consider the boundary conditions $u_x(0,y) = 0$ and $u_x(L,y) + u(L,y) = 0$. Therefore, if $u(x,y)$ is of the form $u(x,y) = \varphi(x)\psi(y)$, $\varphi_n(x) = A\cos\lambda_nx$ and the eigenvalues are determined by $$\tan(\lambda_n L) = \frac{1}{\lambda_n}.$$ So we have that \begin{alignat*}{3} u(x,y) & = & \sum_{n = 1}^{\infty}A\cos\lambda_nx(B\cosh\lambda_ny + C\sinh\lambda_ny)\\ & = & \sum_{n = 1}^{\infty}\cos\lambda_nx(A_n\cosh\lambda_ny + B_n\sinh\lambda_ny) \end{alignat*} Now because of the first boundary condition, $u_y(x,0) = 0$, we have that $B_n = 0$. Therefore, the solution is of the form $$u(x,y) = \sum_{n = 1}^{\infty}A_n\cos\lambda_nx\cosh\lambda_ny.$$
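As a quick numerical check (my addition, assuming the rectangle has been normalized so that $L = 1$ and the eigenvalue condition reads $\tan\lambda_n = 1/\lambda_n$), the roots can be bracketed and found by bisection, since on each interval $((n-1)\pi,\ (n-1)\pi + \pi/2)$ the function $g(\lambda) = \lambda\tan\lambda - 1$ changes sign exactly once:

```python
import math

def eigenvalue(n, tol=1e-12):
    """n-th root (n = 1, 2, ...) of tan(lam) = 1/lam, found by bisection.

    On ((n-1)*pi, (n-1)*pi + pi/2), tan(lam) increases from 0 to +inf while
    1/lam decreases, so g(lam) = lam*tan(lam) - 1 has exactly one sign change.
    """
    lo = (n - 1) * math.pi + 1e-9          # g(lo) < 0 here
    hi = (n - 1) * math.pi + math.pi / 2 - 1e-9  # g(hi) > 0 here
    g = lambda lam: lam * math.tan(lam) - 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# First few eigenvalues; lam_1 is the classic root of x*tan(x) = 1.
lams = [eigenvalue(n) for n in range(1, 4)]
```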
https://piparadox.live/questions/find-the-divisors/
# Find the divisors

Let $n$ be the least positive integer for which $149^n-2^n$ is divisible by $3^3\cdot5^5\cdot7^7.$ Find the number of positive integer divisors of $n$.

Lifting the Exponent shows that $v_3(149^n-2^n) = v_3(n) + 1$ (since $3 \mid 149 - 2$), so $3^2$ divides $n$. It also shows that $v_7(149^n-2^n) = v_7(n) + 2$ (since $149 - 2 = 147 = 3\cdot 7^2$), so $7^5$ divides $n$. For the factor of $5$: since $5 \nmid 149 - 2$, we first need $149^n \equiv 2^n \pmod 5$, which holds exactly when $4 \mid n$, because $149\cdot 2^{-1} \equiv 2 \pmod 5$ has order $4$. Writing $n = 4m$ and applying LTE to $(149^4)^m - (2^4)^m$ gives $v_5(149^n-2^n) = v_5(149^4-2^4) + v_5(m) = 1 + v_5(n)$, so $4\cdot 5^4$ divides $n$. Since $3^2$, $7^5$ and $4\cdot 5^4$ all divide $n$, the smallest value of $n$ that works is their LCM, $n = 2^2\cdot 3^2\cdot 5^4\cdot 7^5$. Thus the number of positive divisors is $(2+1)(2+1)(4+1)(5+1) = 270$.
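The claimed value of $n$ and the divisor count can be verified directly with fast modular exponentiation; this check is an addition, not part of the original answer:

```python
import math

# Verify that n = 2^2 * 3^2 * 5^4 * 7^5 makes 149^n - 2^n divisible by
# 3^3 * 5^5 * 7^7, and that dropping one factor of any prime from n breaks it.
M = 3**3 * 5**5 * 7**7
n = 2**2 * 3**2 * 5**4 * 7**5

def divides(m):
    # True iff M | 149^m - 2^m, using three-argument pow for speed.
    return (pow(149, m, M) - pow(2, m, M)) % M == 0

def count_divisors(m):
    # Count divisors by trial division up to sqrt(m).
    cnt = 0
    for d in range(1, math.isqrt(m) + 1):
        if m % d == 0:
            cnt += 2 if d * d != m else 1
    return cnt

ok = divides(n)
minimal = all(not divides(n // p) for p in (2, 3, 5, 7))
num_divisors = count_divisors(n)
```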
https://mathoverflow.net/questions/63300/is-every-poset-the-poset-of-prime-ideals-of-a-ring
# Is every poset the poset of prime ideals of a ring? The answer to this question, as it is, is trivially false, for one necessary condition is the existence of maximal element(s), i.e., maximal ideals exist and are prime. My question was inspired from Exercise 1.8 of Atiyah-Macdonald's book in Commutative Algebra: The set of prime ideals of a nonzero ring has minimal elements with respect to inclusion. Thus the above is also a necessary condition. My question is if there are more necessary conditions which are also sufficient, i.e., Are there necessary and sufficient conditions that a poset must possess so that it is the poset of a ring's spectrum. Answers (possibly partial answers) are welcome for both necessary and sufficient conditions as well as imposing restrictions on the ring. For example, the poset corresponding to Noetherian rings must obey the ACC. If the poset is a lattice, then the ring must be local. Do nice conditions occur assuming our ring is Artinian or a Dedekind domain? • József Pelikán told me that every finite poset can be realized as the spectrum of a ring (although I don't have a reference). A crucial point is that you can obtain closed and open subspaces by ring quotients and localizations, respectively. So the real challenge is building a ring with a big, complicated spectrum with the posets you want as subspaces. I enjoyed constructing examples by hand for some small posets; it's a nice exercise. Apr 29, 2011 at 7:51 • @Kopp, The papers below due to Hochster and Lewis do have results for finite posets. Although it will take time for me to read the papers, I had a glance. Apr 29, 2011 at 10:22
https://newproxylists.com/tag/configurations/
## 8 – Configuration import constantly showing the configurations to import

Recently, after performing a configuration import via drush (`drush cim sync`), the configuration files do not seem to import correctly, because the files are still listed after a supposedly successful import (and running `drush cex sync` lists the configurations to export, despite the fact that nothing has changed in the backend). The only thing I have done recently is to import a copy of a database from one of our live test servers, to get the content. Is there a UUID I need to change or something?

## Bitcoin core – Does restoring wallet.dat require the same paths and configurations on the servers?

I have a main Bitcoin server (server 1) that works well. Now, I am testing the backup and restoration of its wallet on a new server (server 2). Imagine that server 1 has these configurations:

``````blocksdir=/btc/blocks
datadir=/btc/data
# wallet.dat file is here in the wallets directory
``````

Now I want to move the backup file (wallet.dat) to the new server, whose default paths are like this:

``````~/.bitcoin/wallet.dat
~/.bitcoin/blocks
``````

1. Do I have to have the same paths on server 2 for the data and blocks as on server 1? Or can I move the backup file to the default path of wallet.dat on server 2?
2. Should I copy the downloaded blockchain from server 1 and move it to server 2 as well?

## bitcoind – Bitcoin Core JSONRPC only accepts requests with 0.0.0.0 in configurations

I launched a Bitcoin Core server and am trying to connect via JSON-RPC. Here are my configuration settings:

``````server=1
rpcport=1234
rpcallowip=94.183.32.151
``````

But all cURL connections to this server via IP 94.183.32.151 fail with the same error:

`cURL error 7: Failed to connect to 94.183.32.151 port 1234: Connection refused`

I also tried to add this option, but that did not solve the problem: `rpcbind:94.183.32.151`

Only when I put `0.0.0.0` as the RPC bind address does Bitcoin Core return a real answer.
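For context on the last question above: in Bitcoin Core, `rpcbind` selects the local address the daemon listens on (it must be an address the machine actually owns, or `0.0.0.0` for all interfaces), while `rpcallowip` whitelists the source addresses of clients; both directives use `=`, not `:`. A sketch of a configuration along those lines (the addresses are placeholders; if `94.183.32.151` is actually the server's own address, it belongs in `rpcbind` and the client's address belongs in `rpcallowip`):

```
server=1
rpcport=1234
# Listen on every local interface (or name the server's own address here):
rpcbind=0.0.0.0
# Accept RPC connections only from this client address:
rpcallowip=94.183.32.151
```

With only `rpcallowip` set, recent versions of bitcoind still bind to localhost, which matches the "connection refused" symptom described.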
I have checked many pages, but I have found no other suitable way to allow specific IP addresses to connect to bitcoind. Can you help me please? Note: IP, port number, user name and password are changed from actual values.

## Why did PostgreSQL, MongoDB, and possibly other database software allow such dangerous configurations?

A few years ago now, still well into the 2000s, I was very naive, especially in terms of IT security. To make a long and painful story short, one that I don't even remember too well, the bottom line is that I installed a FreeBSD server at home with PostgreSQL. Being the naive fool that I was, I had no idea that there was even such a thing as an "SSH tunnel" or anything like that. So, I assumed that the only way to connect to my database was to allow remote connections directly to it. I did not have a LAN with internal IP addresses; instead, I had several "real" (external) IP addresses, one for my normal PC and one for the server. As such, this made the problem even more serious. When setting up the file called "pg_hba.conf", which controls how you can connect to the PostgreSQL database (separate from user accounts or "roles"), I did not read or understand the manual and the comments in the file correctly. For this reason, I interpreted "trust" mode to mean "trust, assuming they give the correct username and password". In reality, it meant "trust this username with ANY PASSWORD OR NO PASSWORD AT ALL". Since I also selected "all IP addresses" (because, even if they were "real" IP addresses, they were not static and sometimes changed), this means that for six months, my "secure" server (as I imagined it in my stupid head) with very private and sensitive data sat there so that the whole world could connect freely, from any IP address, with or without a password, as long as they could guess my very easily guessable PG user name. It was only after months and months (again, six months seems about right) that I reviewed this file after getting cold feet.
It was basically just a "feeling", and it could easily have gone on like this for years and years. To date, I don't know if anyone ever logged in and stole all of the data and is now sitting on it for future blackmail opportunities. Yes, I was a complete idiot for not reading / understanding. I understand. I even agree. But still, why would such a configuration even be possible? Who would ever want to "trust" someone who merely provides the user name / role, ignoring the password even when a password has been set? It doesn't make sense to me. In my defense, it never occurred to me that anyone would design a system in such a stupid way. Yes, I blame the database software designers to some extent, even if it was not the default configuration. I actively changed it, but why make it possible to do this? The manual didn't exactly have a big warning about it, and no message was issued when restarting the database to warn me of this or anything like that. To this day, it still astonishes me that such a configuration was (and probably still is) possible. You don't set a password for it to be bypassed like this. I'm still almost incredulous about it. Also, even though I have never used it myself, in recent years I have heard horror stories about MongoDB databases allowing the whole world to freely connect to them by default! That goes even further than PostgreSQL and makes my skin crawl just thinking about it. I really feel for those poor fools who trust this database and configure it thinking, as I did with PG, that it is secure and sane. Why are they doing this? If it is to give some "job security" to database administrators, well, that's a really cruel way to do it. Even though it was largely / mostly my fault, I continue to hold this against the PostgreSQL developers and will never "drop" it mentally. In the case of MongoDB, it looks like they really did it on purpose, because it was the default.
I don't understand how they can endanger their users like that, especially when the user hasn't even changed the configuration.

## Sharing the nginx cache between two server configurations

I am trying to use one Nginx cache zone in two server blocks on the same server. Is it safe and supported by Nginx? The configuration works, but I'm not sure about consistency, and nothing is written in the documentation.

``````proxy_cache_path /home/mycache levels=1:2 keys_zone=mycache:90m max_size=200G inactive=15d;

server {
    server_name server1;
    ...
    location / {
        proxy_temp_path /home/temp;
        proxy_cache mycache;
        proxy_cache_key $uri; # only URI
        expires 50d;
        proxy_pass http://blabla;
    }
}

server {
    server_name server2;
    ...
    location / {
        proxy_temp_path /home/temp;
        proxy_cache mycache;
        proxy_cache_key $uri; # only URI
        expires 50d;
        proxy_pass http://blabla;
    }
}
``````

## architecture – How to manage different system configurations for multiple clients?

Background: I work in a retail market. We have more than 15 different applications in our system (including desktop, web and background microservices) and all these applications have their own properties. That is, each application is written so that its features are driven by configuration; you can enable or disable a feature depending on what the customer wants, for example a feature called "Create Gift Lists at the point of sale" (its configuration can be enabled or disabled). Similarly, some properties are in the database, and these will be different for each client. Most properties are in a .properties file similar to Spring properties.

Statement of the problem: Now suppose we have more than 100 clients, each client has their own requirements, and we need to configure each property for them. It is very difficult to onboard a new client and define each property for them.
There should be a way to manage all these properties, such as a configurator in which we ask to enable / disable a feature and which then does all the work for us. This is a real problem and I am asking how people solve this problem. Can you give me some suggestions on this?

## Drupal Console: How to Get Unique Items in Multi-Object Configurations Using Debug: Config

For example, in drush I can run `drush cget search_api.server.iges_solr backend_config.connector_config.port` (`'search_api.server.iges_solr:backend_config.connector_config.port': 8988`)

In Drupal Console, I tried the same formatting as in the drush command, but with a colon between the main config name and the subitem (`drupal dc search_api.server.iges_solr:backend_config.connector_config.port`). I have tried quite a few other things and have read the documentation, in vain. Is it possible to do this without grepping?

## Browser configurations to stay safe from malicious software and unwanted items

I have to set up a browser to surf the Internet while staying as far away from malware as possible (I already know that there is no way to be 100% safe). My idea is to use Firefox with these extensions: Adblock Plus, uBlock Origin, HTTPS Everywhere and especially NoScript Security Suite. I've also thought about clearing the cache when Firefox is closed (https://superuser.com/questions/461574/does-clearing-the-browser-cache-provide-real-security-benefits). But as I am not an expert, I searched for info on the internet and read this: https://security.stackexchange.com/a/27957 and it said:

Disabling JS should not be considered a miracle solution for browser security

and

Take into consideration that NoScript will also increase the attack surface

Before reading this, I was pretty sure that NoScript would have been enough to make the browser very secure. But now I wonder if there are safer ways to secure the browser, and I now have these questions: Is my idea good? If yes, what can I improve?
Are the extensions I mention above good? (I know that Adblock Plus and uBlock Origin block more or less the same ads, but I prefer to keep both.) Browser performance is not a problem. Is there another extension that I should install? Is there another browser setting that I should turn on / off (like the option to clear the cache when Firefox is closed)? I already know the basic rules, such as keeping the browser and the operating system updated, not opening untrusted links, and so on. I would like to know the advanced tips. I know it also depends on the operating system and other elements, but in this topic I would like to talk about the browser.

PS: I know that instead of NoScript I could just disable scripts globally in the browser settings, but I like being able to allow scripts per site, because some sites cannot run without specific scripts.

PPS: Sorry for my bad English.

## formal languages – Maximum number of Turing machine configurations after $n$ moves

I came across the following question: What is the maximum number of Turing machine configurations after $n$ moves? $k^n$, where $k$ is a branching factor. And this "branching factor" left me confused. I've thought about it: taking $Q$ to be the total number of states, $\Gamma$ to be the tape alphabet, and two movements, left and right $\{L, R\}$, for the transition function we have $2^{Q \times \Gamma \times 2}$ possible transitions at each of these $n$ moves. So $k$ must be $2^{Q \times \Gamma \times 2}$. So the total number of configurations of the Turing machine after $n$ moves must be $(2^{Q \times \Gamma \times 2})^n$. Am I correct with this?

## design – Angular: Is it a good way to manage configurations when using similar but different configurations?

I want to hear some thoughts on a pattern I use to keep track of different instances of table configurations. These tables can be the same table with slightly different configurations for different pages.
Thus, it is sometimes necessary to add a column to all tables, and sometimes a column specific to the table of a specific page. We are currently encapsulating the common behavior in a component, but we are having problems modifying the inputs and updating the table. It also seems difficult to keep these details inside the component. Here is a StackBlitz with the same pattern, but not the same code as below. Below you will find a more condensed version of the pattern.

table-state.service.ts

``````export class TableStateService {
  state = new BehaviorSubject(new Configuration())

  set(options: Configuration): void {
    this.state.next(new Configuration({ ...this.state.value, ...options }))
  }
}
``````

table-one-state.service.ts

``````export class TableOneStateService extends TableStateService {
  constructor() {
    super()
    this.set(new Configuration(...))
  }
}
``````

wrapper-state.service.ts

``````export class WrapperStateService {
  service: TableStateService
  $state: Observable<Configuration>

  setService(service: TableStateService): void {
    this.service = service
    this.$state = service.state.asObservable().pipe(throttleTime(50))
  }
}
``````

table.component.ts

``````@Component({
  ...,
  providers: [WrapperStateService],
})
export class TableComponent {
  private sink = new SubSink()

  @Input() service: TableStateService

  constructor(
    private wrapper: WrapperStateService,
  ) { }

  ngOnInit() {
    this.wrapper.setService(this.service)
    this.sink.add(this.wrapper.$state.subscribe(state => ...))
  }

  ngOnDestroy() {
    this.sink.unsubscribe();
  }
}
``````

various-table-child.component.ts

``````export class VariousTableChildComponent {
  private sink = new SubSink()

  constructor(
    private wrapper: WrapperStateService,
  ) { }

  ngOnInit() {
    this.sink.add(this.wrapper.$state.subscribe(state => ...))
  }

  ngOnDestroy() {
    this.sink.unsubscribe();
  }
}
``````

page.component.html

``````
``````
https://openreview.net/forum?id=H1gJ2RVFPH
## Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks

Sep 25, 2019 Withdrawn Submission readers: everyone

• TL;DR: We argue theoretically that simply assuming the weights of a ReLU network to be Gaussian distributed (without even a Bayesian formalism) can fix this issue; for a more calibrated uncertainty, a simple Bayesian method can already be sufficient.
• Abstract: The point estimates of ReLU classification networks, arguably the most widely used neural network architecture, have recently been shown to have arbitrarily high confidence far away from the training data. This architecture is thus not robust, e.g., against out-of-distribution data. Approximate Bayesian posteriors on the weight space have been empirically demonstrated to improve predictive uncertainty in deep learning. The theoretical analysis of such Bayesian approximations is limited, including for ReLU classification networks. We present an analysis of approximate Gaussian posterior distributions on the weights of ReLU networks. We show that even a simplistic (thus cheap), non-Bayesian Gaussian distribution fixes the asymptotic overconfidence issue. Furthermore, when a Bayesian method, even if a simple one, is employed to obtain the Gaussian, the confidence becomes better calibrated. This theoretical result motivates a range of Laplace approximations along a fidelity-cost trade-off. We validate these findings empirically via experiments using common deep ReLU networks.
• Keywords: uncertainty quantification, overconfidence, Bayesian inference
https://dml.cz/handle/10338.dmlcz/119427
# Article

Keywords: $\alpha$-space; $\alpha T_i$-space; minimal-$\alpha T_i$ space; $T_2$-closed space; minimal-$T_2$ space; $\psi$-space

Summary: An $\alpha$-space is a topological space in which the topology is generated by the family of all $\alpha$-sets (see [N]). In this paper, minimal-$\alpha\mathcal{P}$-spaces (where $\mathcal{P}$ denotes several separation axioms) are investigated. Some new characterizations of $\alpha$-spaces are also obtained.

References:
[D] Dontchev J.: Survey on pre-open sets. preprint, 1999.
[E] Engelking R.: General Topology. Heldermann, Berlin, 1989. MR 1039321 | Zbl 0684.54001
[L] Larson R.: Minimal $T_0$-spaces and minimal $T_D$-spaces. Pacific J. Math. 31 (1969), 451-458. MR 0251688
[Le] Levine N.: Semi-open sets and semi-continuity in topological spaces. Amer. Math. Monthly 70 (1963), 36-41. MR 0166752 | Zbl 0113.16304
[Lo] Lo Faro G.: Su alcune proprietà degli insieme $\alpha$-aperti. Atti Sem. Mat. Fis. Univ. Modena XXIX (1980), 242-252 (in Italian).
[N] Njåstad O.: On some classes of nearly open sets. Pacific J. Math. 15 (1965), 961-970. MR 0195040
[PW] Porter J.R., Woods R.G.: Extensions and Absolutes of Hausdorff Spaces. Springer, New York, 1988. MR 0918341 | Zbl 0652.54016
[T] Tall F.D.: The density topology. Pacific J. Math. 62 (1976), 275-284. MR 0419709 | Zbl 0305.54039
http://mathoverflow.net/questions/25862/naive-questions-about-matrices-representing-endomorphisms-of-hilbert-spaces?sort=oldest
# Naive questions about “matrices” representing endomorphisms of Hilbert spaces.

This is a very basic question and might be way too easy for MO. I am learning analysis in a very backwards way. This is a question about complex Hilbert spaces, but here's how I came to it: I have in the past written a paper about (amongst other things) compact endomorphisms of $p$-adic Banach spaces (and indeed of Banach modules over a $p$-adic Banach algebra), and in this paper I continually used the notion of the "matrix" of an endomorphism as an essential crutch when doing calculations and proofs. I wondered at the time whether more "conceptual" proofs existed, and probably they do, but I was too lazy to find them.

Now I find myself learning the basic theory of certain endomorphisms of complex separable Hilbert spaces (continuous, compact, Hilbert-Schmidt and trace class operators), and my instinct, probably wrong, is to learn the theory in precisely the same way. So this is the sort of question I find myself asking.

Say $H$ is a separable Hilbert space with orthonormal basis $(e_i)_{i\in\mathbf{Z}_{\geq1}}$. Say $T$ is a continuous linear map $H\to H$. Then $T$ is completely determined by its "matrix" $(a_{ij})$ with $Te_i=\sum_j a_{ji}e_j$. But are there completely "elementary" conditions which classify which collections of complex numbers $(a_{ij})$ arise as "matrices" of continuous operators?

I will ask a more precise question at the end, but let me, for the sake of exposition, tell you what the answer is in the $p$-adic world. In the $p$-adic world, $\sum_n a_n$ converges iff $a_n\to 0$, and life is easy: the answer to the question in the $p$-adic world is that $(a_{ij})$ represents a continuous operator iff (1) for all $i$, $\sum_j|a_{ji}|^2<\infty$ (equivalently, $a_{ji}\to 0$ as $j\to\infty$), and (2) there's a universal bound $B$ such that $|a_{ij}|\leq B$ for all $i,j$.
[There is no inner product in the $p$-adic case, so no adjoint, and the conditions come out asymmetric in $i$ and $j$.] See for example pages 8-9 of this paper of mine, although of course this isn't due to me---it's in Serre's paper on compact operators on $p$-adic Banach spaces from the 60s---see Proposition 3 of Serre's paper. In particular, in the $p$-adic world, one can identify the continuous maps $H\to H$ (here $H$ is a $p$-adic Banach space with countable ON basis $(e_i)$) with the collection of bounded sequences in $H$, the identification sending $T$ to $(Te_i)$.

In the real/complex world, though, the analogue of this result fails: the sequence $(e_1,e_1,e_1,\ldots)$ is a perfectly good bounded sequence, but there is no continuous linear map $H\to H$ sending $e_i$ to $e_1$ for all $i$ (where would $\sum_n(1/n)e_n$ go?).

Let's consider the finite rank case, so $T$ is a continuous linear map $H\to H$ with image landing in $\mathbf{C}e_1$. Then by Riesz's theorem, $T$ is just "inner product with an element of $H$ and then multiply by $e_1$". Hence we have an additional condition on the $a_{ij}$, namely that the first row is square-summable: $\sum_i|a_{1i}|^2<\infty$. Furthermore, a continuous linear map is bounded, as is its adjoint. This makes me wonder whether the following is true, or whether this is still too naive:

Q) Say $(a_{ij})$ $(i,j\in\mathbf{Z}_{\geq1})$ is a collection of complex numbers satisfying the following: there is a real number $B$ such that

1) for all $i$, $\sum_j|a_{ij}|^2\leq B$ (rows), and

2) for all $j$, $\sum_i|a_{ij}|^2\leq B$ (columns).

Then is there a unique continuous linear map $T:H\to H$ with $Te_i=\sum_j a_{ji}e_j$?

My guess is that this is still too naive. Can someone give me an explicit counterexample? Or, even better, a correct "elementary" list of conditions characterising the continuous endomorphisms of a Hilbert space? On the other hand, it clearly isn't a complete waste of time to think about matrix coefficients.
For example, there's a bijection between Hilbert-Schmidt operators $T:H\to H$ and collections $(a_{ij})$ of complex numbers with $\sum_{i,j}|a_{ij}|^2<\infty$, something which perhaps the experts don't use but which I find incredibly psychologically useful.

- My "meta"-question ("what is a good characterisation of the $(a_{ij})$?") has been answered by Laurent ("there is evidence to suggest there is none"). But my explicit question remains open: can someone give me a collection $(a_{ij})$ with each row and column $\ell^2$-bounded by some universal $B$ but such that the $(a_{ij})$ come from no continuous operator? – Kevin Buzzard May 25 '10 at 12:21
- Kevin, have you tried looking in the classic book "Theorems and problems in functional analysis"? When I learned about Hilbert-Schmidt operators and the like, I remember finding its lists of problems to be a useful source of counterexamples, going deeper than "Counterexamples in analysis" (a largely "1-dimensional" book). – BCnrd May 25 '10 at 13:18

**Answer** (Laurent Berger): Chapter V of Halmos' "A Hilbert space problem book" is called "Infinite matrices". It contains lots of nice results and problems, and also the statement that "there are no elegant and usable necessary and sufficient conditions [for a matrix to be the matrix of an operator]".

- And to think my question is answered by a $p$-adic guy! Thanks Laurent. – Kevin Buzzard May 25 '10 at 11:05
- [PS the reason I'm learning this stuff is that I'm giving some lectures on the trace formula.] – Kevin Buzzard May 25 '10 at 11:06
- I looked in Halmos (Chapter IV of the 1967 edition, by the way, not Chapter V) and he says this and gives a couple of counterexamples to things, but doesn't give a counterexample to the explicit question I asked. – Kevin Buzzard May 25 '10 at 12:20
- It's chapter five in the 2nd edition, but there's still no counterexample... – Laurent Berger May 25 '10 at 12:35

**Answer** (gowers): Consider the $n\times n$ matrix that has $n^{-1/2}$ as every entry. That satisfies your condition with $B=1$. The image of the unit vector that has $n^{-1/2}$ as every coordinate is a vector that has $1$ as every coordinate, and therefore norm $n^{1/2}$. If you now put a whole lot of these as blocks down the diagonal, you can create an unbounded operator that satisfies your condition with $B=1$.

I'd say that the main general problem with the condition you suggested is that it is too tied to one particular basis. I'm not sure it's all that easy to come up with nice conditions of the kind you are looking for.

Additional remark: if you take spaces like $\ell_1$ and $\ell_\infty$, where the definition of the norm is much more closely tied to a particular basis, then it tends to be easier to find nice matrix conditions for boundedness.

- To make explicit what Tim was hinting at, note that a matrix defines a bounded linear operator on $\ell_1$ iff the columns form a bounded sequence in $\ell_1$, in which case the norm of the operator is the supremum of the $\ell_1$ norms of the columns. – Bill Johnson May 25 '10 at 16:28
- Thanks gowers. You real guys don't know what you're missing---it's all infinitely easier in the $p$-adic world! Halmos says there's no easy criterion and I'm happy to believe this. On the other hand I think I have easy criteria for compactness, Hilbert-Schmidt and trace class (of the form "matrix must be continuous + ..."). – Kevin Buzzard May 25 '10 at 19:57
- You are not the first to say that, Kevin. Many years ago Kurt Mahler told me much the same, using as one reason that every separable $p$-adic Banach space has a Schauder basis. (To be honest, I never checked out whether that was indeed the case, but who was I to question KM?) – Bill Johnson May 25 '10 at 21:58
- One nice fact about a compact operator $T$ on a Hilbert space is that there is an ON basis $(e_n)$ s.t. $Te_n$ is orthogonal. Also, if for a bounded linear operator $T$ there is an ON basis $(e_n)$ s.t. $Te_n$ is orthogonal, then $T$ is compact iff $Te_n \to 0$; $T$ is HS iff $\sum \|Te_n\|^2<\infty$; $T$ is trace class iff $\sum \|Te_n\|<\infty$. So you can say quite a bit if you allow matrix representations with respect to two different ON bases. – Bill Johnson May 25 '10 at 22:09

**Answer** (Anna M.): So maybe the following is a counterexample to Kevin's original post. (It was created by computer scientist and friend Erik Vee as a "counterexample" to exercise 3.14 in Zimmer, which says that a bounded operator is compact iff $a_{ij}$ goes to $0$ as $i$ and $j$ go to $\infty$; and it was Robert Pollack who figured out that the reason it's not a counterexample is that it doesn't represent a continuous operator. So my role here is transcriber only.)

Define the matrix as follows. The first column is $(1, 0, 0, \ldots)$, the second is $(1/2, 1/2, 1/2, 1/2, 0, 0, \ldots)$, and the $n$th has $\frac 1n$ appearing $n^2$ times, followed by all zeros. Then the $\ell_2$-norm of each column is exactly $1$, and the $\ell_2$-norm of each row is bounded by $\sqrt{\frac{\pi^2}{6}}$. So this matrix has bounded $\ell_2$ rows and columns, as required.

But this matrix cannot represent a continuous operator. If it did then, since it satisfies Kevin's/Zimmer's criterion, this operator -- call it $A$ -- would be compact, and hence a uniform limit of the operators $A_n$ given by the first $n$ rows of $A$. But the operator $A - A_n$ has, in its $n$th column, $\frac 1n$ appearing $n^2 - n$ times, which means that that column's $\ell_2$-norm is $\sqrt{1 - \frac 1n}$, which is bounded away from zero.

It's still unclear to me if this matrix represents an unbounded linear map, or if it doesn't represent a well-defined map at all.

- My (possibly naïve) understanding is that unbounded operators are not defined everywhere on a Hilbert space, and that they can be specified on at most a dense subspace. – S. Carnahan Jun 15 '10 at 5:18
- @Anna M.: just to flag that gowers already gave an explicit counterexample. – Kevin Buzzard Jun 15 '10 at 10:30
- @Scott: not quite. An unbounded operator by definition is a linear transformation $A$ defined on some subspace $D$ of $H$. $D=H$ is allowed, in which case $A$ is "everywhere defined". But we mostly care about closed operators, and the closed graph theorem says a closed, everywhere defined operator is bounded. So as far as useful examples go, you are right. Moreover, unbounded everywhere defined operators (which are not closed) typically (necessarily?) require the axiom of choice to define. For instance, it's easy to do if you have a Hamel basis for $H$. – Nate Eldredge Jun 15 '10 at 15:27

**Not an answer, only a side question to Kevin Buzzard (properly a comment):** Kevin, you mention in a comment to gowers' answer above that you have easy criteria for an infinite matrix representing a continuous operator on a Hilbert space to also represent a compact or trace-class operator. What are these criteria?

- If I didn't get it wrong, they were something like this: $(a_{ij})$ [assumed to represent a continuous endomorphism] is compact iff for all $\varepsilon>0$ there's $N$ such that $|a_{ij}|<\varepsilon$ for all $i>N$ [do you believe this? It might be wrong.], and trace class iff $\sum_i(\sum_j |a_{ij}|^2)^{1/2}<\infty$ (this is almost the definition, IIRC). – Kevin Buzzard Jun 13 '10 at 7:17
- Kevin, I'm not quite convinced by your criterion for trace class. Take the matrix which only has one nonzero column, and let that column be in $\ell^2$ but not in $\ell^1$. Then this is a rank one operator, hence trace class; but won't the quantity you give be infinite? – Yemon Choi Jun 13 '10 at 11:50
- I really want to believe the compact criterion! (For one thing, I now think it matches Zimmer's exercise 3.14 in Essential Results of Functional Analysis.) I see that a compact operator satisfies the criterion (because it has to be the limit of its first $n$ rows) but can't figure out the other direction... – Anna M. Jun 14 '10 at 20:52
- Anna, I'm a bit worried that the criterion for compactness doesn't force the rows or columns to be $\ell^2$-vectors... – Yemon Choi Jun 15 '10 at 2:19
- It doesn't matter, as the criterion for compactness only applies to matrices representing operators we know to be continuous. – Anna M. Jun 15 '10 at 2:45
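gowers' construction is concrete enough to check numerically. The sketch below (assuming numpy is available; it is not from the thread) builds the finite truncation with blocks $n^{-1/2}\cdot\mathbf{1}_{n\times n}$ for $n=1,\dots,N$ down the diagonal: every row and column has $\ell^2$-norm exactly $1$, yet the operator norm is $\sqrt{N}$, so the infinite block-diagonal matrix satisfies the row/column condition with $B=1$ while representing no bounded operator.

```python
import numpy as np

def gowers_matrix(N):
    """Truncation of gowers' example: blocks n^{-1/2} * ones(n, n), n = 1..N,
    placed down the diagonal. Every row and column has l2-norm exactly 1."""
    size = N * (N + 1) // 2
    A = np.zeros((size, size))
    off = 0
    for n in range(1, N + 1):
        A[off:off + n, off:off + n] = n ** -0.5
        off += n
    return A

for N in (1, 4, 16):
    A = gowers_matrix(N)
    max_row = np.sqrt((A ** 2).sum(axis=1)).max()   # sup of row l2-norms (stays 1)
    op_norm = np.linalg.norm(A, 2)                  # operator (spectral) norm: sqrt(N)
    print(f"N={N}: max row norm {max_row:.3f}, operator norm {op_norm:.3f}")
```
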
https://www.zbmath.org/?q=an%3A1293.93066
# zbMATH — the first resource for mathematics

**Event-triggered tracking control for heterogeneous multi-agent systems with Markov communication delays.** (English) Zbl 1293.93066

Summary: In this paper, we investigate the consensus problem for a set of discrete-time heterogeneous multi-agent systems with random communication delays represented by a Markov chain, where the multi-agent system is composed of two kinds of agents that differ in their dynamics. First, a distributed consensus control is designed by employing the event-triggered communication technique, which leads to a significant reduction of the information communication burden in the multi-agent network. Then, the mean-square stability of the closed-loop multi-agent system is analyzed based on the Lyapunov functional method and the Kronecker product technique. Sufficient conditions guaranteeing consensus are obtained in terms of linear matrix inequalities (LMIs). Finally, a simulation example is given to illustrate the effectiveness of the developed theory.
##### MSC:

93A14 Decentralized systems
68T42 Agent technology and artificial intelligence
93C55 Discrete-time control/observation systems
93E03 Stochastic systems in control theory (general)
60J10 Markov chains (discrete-time Markov processes on discrete state spaces)
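For intuition only, here is a minimal sketch of the event-triggered communication idea the summary describes, on a far simpler model than the paper's (homogeneous single integrators, a complete graph, no delays — all assumptions of this sketch, not the paper): each agent rebroadcasts its state only when it has drifted beyond a threshold from its last broadcast value, and the consensus update uses broadcast values only.

```python
import numpy as np

def simulate(x0, steps=400, eps=0.1, threshold=0.05):
    """Toy event-triggered consensus: agents update from last-broadcast states
    x_hat, and agent i rebroadcasts only when |x[i] - x_hat[i]| > threshold."""
    x = np.array(x0, dtype=float)
    x_hat = x.copy()              # last broadcast states
    broadcasts = 0
    n = len(x)
    for _ in range(steps):
        # consensus update on a complete graph, driven by broadcast states only
        u = np.array([sum(x_hat[j] - x_hat[i] for j in range(n)) for i in range(n)])
        x = x + eps * u
        # event trigger: rebroadcast when the local error exceeds the threshold
        for i in range(n):
            if abs(x[i] - x_hat[i]) > threshold:
                x_hat[i] = x[i]
                broadcasts += 1
    return x, broadcasts

x, b = simulate([0.0, 1.0, 2.0, 5.0])
print(x, b)   # states end up within a few thresholds of a common value,
              # with far fewer broadcasts than the 4 * 400 a periodic scheme would use
```

The symmetric update preserves the mean of the states exactly, so the agents agree (up to the trigger threshold) on the average of their initial values.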
http://www-old.newton.ac.uk/programmes/LAA/seminars/2006022809301.html
LAA Seminar

**Cardinality-based semantics for consistent query answering: incremental and parameterized complexity**

Bertossi, L (Carleton)

Tuesday 28 February 2006, 09:30-10:15, Seminar Room 1, Newton Institute

Abstract: Consistent Query Answering (CQA) is the problem of computing from a database the answers to a query that are consistent with respect to certain integrity constraints that the database, as a whole, may fail to satisfy. Consistent answers have been characterized as those that are invariant under certain minimal forms of restoration of the consistency of the database. In this paper we investigate algorithmic and complexity-theoretic issues of CQA under database repairs that minimally depart (with respect to the cardinality of the symmetric difference) from the original database. Research on this kind of repair had been suggested in the literature, but no systematic study had been done. Here we obtain the first tight complexity bounds. We also address, considering for the first time a dynamic scenario for CQA, the problem of the incremental complexity of CQA, which naturally arises when an originally consistent database becomes inconsistent after the execution of a sequence of update operations. Tight bounds on incremental complexity are provided for various semantics under denial constraints, e.g. (a) minimum tuple-based repairs with respect to cardinality, (b) minimal tuple-based repairs with respect to set inclusion, and (c) minimum numerical aggregation of attribute-based repairs. Fixed-parameter tractability is also investigated in this dynamic context, where the size of the update sequence becomes the relevant parameter.
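The definitions above are easy to make concrete. The following sketch (a made-up toy instance, not from the talk) computes the cardinality-minimal tuple-deletion repairs of a relation violating a key constraint — for denial constraints, repairs only delete tuples, so minimal symmetric difference means consistent subsets of maximum size — and then the consistent answers as what holds in *every* such repair:

```python
from itertools import combinations

# employee(name, salary) with key constraint: at most one salary per name.
# The two "alice" tuples violate the key, so the database is inconsistent.
db = {("alice", 50), ("alice", 60), ("bob", 40)}

def consistent(instance):
    names = [name for name, _ in instance]
    return len(names) == len(set(names))

# Enumerate consistent subsets; cardinality-based repairs are those of maximum
# size, i.e. minimal symmetric difference from db under tuple deletions.
candidates = [set(c) for k in range(len(db) + 1) for c in combinations(db, k)]
consistent_subsets = [s for s in candidates if consistent(s)]
best = max(len(s) for s in consistent_subsets)
repairs = [s for s in consistent_subsets if len(s) == best]

# A consistent (certain) answer is one true in every cardinality-minimal repair.
certain = set.intersection(*repairs)
print(repairs)   # two repairs, one per choice of alice's salary
print(certain)   # only ("bob", 40) survives in all repairs
```

Brute-force enumeration is exponential, of course; the point of the talk is precisely the complexity of doing better.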
https://www.nature.com/articles/s41598-017-01326-x?error=cookies_not_supported&code=ff17d3fe-7c86-480d-9a9e-249a97f4f3a7
# Linear dynamics of classical spin as Möbius transformation

## Abstract

Though the overwhelming majority of natural processes occur far from equilibrium, general theoretical approaches to non-equilibrium phase transitions remain scarce. Recent breakthroughs introduced a description of open dissipative systems in terms of non-Hermitian quantum mechanics, enabling the identification of a class of non-equilibrium phase transitions associated with the loss of combined parity (reflection) and time-reversal symmetries. Here we report that the time evolution of a single classical spin (e.g. a monodomain ferromagnet) governed by the Landau-Lifshitz-Gilbert-Slonczewski equation in the absence of magnetic anisotropy terms is described by a Möbius transformation in complex stereographic coordinates. We identify the parity-time symmetry-breaking phase transition occurring in spin-transfer torque-driven linear spin systems as a transition between hyperbolic and loxodromic classes of Möbius transformations, with the critical point of the transition corresponding to the parabolic transformation. This establishes the understanding of non-equilibrium phase transitions as topological transitions in configuration space.

## Introduction

The interest in dissipative spin-transfer torque (STT)-driven dynamics of a spin, described by the Landau-Lifshitz-Gilbert-Slonczewski (LLGS) equation [1-3], is two-fold. On the application side, a spin controlled by an applied spin-polarized current is an elemental unit for a wealth of spintronic applications [4-7].
On the fundamental side, complete quantitative understanding of single-spin dynamics provides the essential tool for predictive description of many complex spin systems. Analytical studies of nonlinear spin dynamics in nanomagnetic devices and structures have been the focus of active research for many decades (see, e.g., refs 8, 9 and references therein). It has recently been shown that nonequilibrium classical spin dynamics described by the LLGS equation naturally follows from the non-Hermitian extension of the Hamiltonian formalism [10]. Within this framework, the nonconservative effects of Gilbert damping and applied Slonczewski STT [3] originate from the imaginary part of the system's Hamiltonian. This new technique has enabled important advances in the field of nonlinear spin dynamics, including the discovery of parity-time ($\mathscr{PT}$) symmetry-breaking in systems with mutually orthogonal applied magnetic field and STT. This new type of phase transition in spin systems is possible due to the invariance of the STT action under simultaneous operations of time-reversal and reflection with respect to the direction of spin polarization.

Here we find that the $\mathscr{PT}$ symmetry-breaking phase transition occurring in STT-driven linear spin systems (i.e. systems designed to have zero or negligibly small magnetic anisotropy) is a transition between hyperbolic and loxodromic classes of Möbius transformations governing the spin dynamics. The critical point of the phase transition corresponds to the merging of two fixed points of these Möbius transformations (equilibrium points of spin dynamics) into the single fixed point of a parabolic transformation. This establishes that non-equilibrium phase transitions associated with $\mathscr{PT}$ symmetry-breaking are topological transitions in configuration space.
We undertake the analytical study of dissipative STT-driven dynamics of a single classical spin described by a linear (in spin operators) non-Hermitian Hamiltonian. We show that the combined effect of an external magnetic field, Gilbert damping, and applied Slonczewski STT can be incorporated in the effective action of a complex magnetic field. We derive an equation of motion in complex stereographic coordinates that assumes the form of a Riccati equation. This allows a recasting of the equation of motion into linear form without any approximations beyond the initial choice of the non-Hermitian spin Hamiltonian. The equation of motion in stereographic projection coordinates admits an exact solution in the form of a Möbius transformation of $\mathbb{C}^2$. The correspondence between different regimes of spin dynamics and classes of Möbius transformations is established and illustrated on the example of the $\mathscr{PT}$ symmetry-breaking phenomenon, which is identified as a transition between elliptic and loxodromic Möbius transformations via a parabolic transformation. The equation of motion can also be recast into linear form by employing complex homogeneous coordinates. The linear form of the spin dynamics equation provides a solid foundation for the study of nonlinear effects in single and coupled spin systems, including chaotic dynamics [11, 12], spin-wave instabilities [13], and solitons [14].

## Results and Discussion

We study the most general linear version of the spin Hamiltonian proposed by Galda and Vinokur [10],

$$\hat{\mathscr{H}}=\frac{\gamma \mathbf{H}+i\mathbf{j}}{1-i\alpha }\cdot \hat{\mathbf{S}},\qquad(1)$$

where $\mathbf{H}$ is the applied magnetic field, the imaginary field $i\mathbf{j}$ is responsible for the action of STT, and the phenomenological constant $\alpha$ describes Gilbert damping.
The corresponding LLGS equation of spin dynamics reads

$$\dot{{\bf{S}}}=\gamma {\bf{H}}\times {\bf{S}}+\frac{\alpha }{S}\dot{{\bf{S}}}\times {\bf{S}}+\frac{1}{S}[{\bf{j}}\times {\bf{S}}]\times {\bf{S}},$$ (2)

where $$\gamma =g{\mu }_{B}/\hslash$$ is the absolute value of the gyromagnetic ratio, $$g\simeq 2$$, and S ≡ |S| is the total spin (constant in time). The first two terms in Eq. (2) describe the standard Landau-Lifshitz (LL) torque and dissipation in Gilbert form, while the last one is responsible for Slonczewski STT. To show that Hamiltonian (1) yields the above LLGS dynamics equation in the classical limit (S → ∞), it is most convenient to consider SU(2) spin-coherent states15, 16 $$|\zeta \rangle ={{\rm{e}}}^{\zeta {\hat{S}}_{+}}|S,-S\rangle$$, where $${\hat{S}}_{\pm }={\hat{S}}_{x}\pm i{\hat{S}}_{y}$$, and $$\zeta \in {\mathbb{C}}$$ is the standard stereographic projection of the spin direction on a unit sphere, $$\zeta =({s}_{x}+i{s}_{y})/(1-{s}_{z})$$, with the south pole (spin-down state) corresponding to ζ = 0. The Hamiltonian function in spin-coherent states reads

$$ {\mathcal H} (\zeta ,\bar{\zeta })=\frac{\langle \zeta |\hat{ {\mathcal H} }|\zeta \rangle }{\langle \zeta |\zeta \rangle },$$ (3)

which gives10 the following compact form of Hamilton's equation of motion for classical spin:

$$\dot{\zeta }=i\frac{{(1+|\zeta {|}^{2})}^{2}}{2S}\frac{\partial {\mathcal H} }{\partial \bar{\zeta }},$$ (4)

where the factor $${(1+|\zeta {|}^{2})}^{2}/2S$$ ensures invariance of the measure on the two-sphere.
Let us now normalize and rewrite the linear non-Hermitian Hamiltonian (1) in terms of dimensionless variables:

$${\hat{{\mathscr{H}}}}_{0}\equiv \hat{{\mathscr{H}}}/S=\tilde{{\bf{h}}}\cdot \hat{{\bf{s}}},$$ (5)

where s ≡ S/S, and the effects of the applied magnetic field, Gilbert damping and Slonczewski STT contributions are all incorporated into the complex magnetic field $$\tilde{{\bf{h}}}=({\tilde{h}}_{x},{\tilde{h}}_{y},{\tilde{h}}_{z})$$, $${\tilde{h}}_{k}\in {\mathbb{C}}$$. The equation of motion (4) for the linear classical spin Hamiltonian (5) can be rewritten as a linear matrix ordinary differential equation:

$$\frac{d}{dt}[\begin{matrix}\xi (t)\\ \eta (t)\end{matrix}]=A[\begin{matrix}\xi (t)\\ \eta (t)\end{matrix}],$$ (6)

$$A=\frac{i}{2}\sum _{k=x,y,z}{\tilde{h}}_{k}{\sigma }_{k},$$ (7)

where σ k are Pauli matrices, and $$\zeta (t)\equiv \xi (t)/\eta (t)$$. The pair of complex functions $$\{\xi ,\eta \}$$ are called homogeneous coordinates of ζ 17, such that each ordered pair {ξ, η} (except {0, 0}) corresponds to a unique stereographic projection coordinate ζ. The initial conditions for Eq. (6) can be chosen as $$\xi (0)=\zeta (0),\,\eta (0)=1$$. The solution in terms of stereographic projection coordinates ζ takes the simple form of a Möbius transformation:

$$\zeta (t)=\frac{{M}_{11}\zeta (0)+{M}_{12}}{{M}_{21}\zeta (0)+{M}_{22}}\equiv M[\zeta (0)],$$ (8)

where the normalized (det M = 1) transformation matrix is given by the matrix exponential:

$$M={{\rm{e}}}^{At}.$$ (9)

It is important that non-conservative spin dynamics only takes the form of a Möbius transformation for systems described by linear spin Hamiltonians. Experimentally this corresponds to systems designed to have negligibly small magnetic anisotropies.
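Equations (8) and (9) can be evaluated directly. The sketch below (ours, not from the paper) uses the fact that the matrix A of Eq. (7) is traceless with $$A^{2}=-\frac{1}{4}({\tilde{h}}_{x}^{2}+{\tilde{h}}_{y}^{2}+{\tilde{h}}_{z}^{2})I$$, so the matrix exponential has the closed form $$\cos (\omega t/2)I+\frac{2}{\omega }\sin (\omega t/2)A$$ with $$\omega =\sqrt{{\sum }_{k}{\tilde{h}}_{k}^{2}}$$:

```python
import cmath

def mobius_matrix(hx, hy, hz, t):
    """M = exp(A t), Eq. (9), with A = (i/2) sum_k h_k sigma_k from Eq. (7).

    Since A^2 = -(w/2)^2 I with w = sqrt(hx^2 + hy^2 + hz^2), the matrix
    exponential reduces to exp(A t) = cos(w t/2) I + (2 sin(w t/2)/w) A.
    """
    w = cmath.sqrt(hx * hx + hy * hy + hz * hz)
    c = cmath.cos(w * t / 2)
    k = t if w == 0 else cmath.sin(w * t / 2) / (w / 2)
    # A in components: A = (i/2) [[hz, hx - i*hy], [hx + i*hy, -hz]]
    a11, a12 = 0.5j * hz, 0.5j * (hx - 1j * hy)
    a21, a22 = 0.5j * (hx + 1j * hy), -0.5j * hz
    return (c + k * a11, k * a12, k * a21, c + k * a22)

def evolve(zeta0, hx, hy, hz, t):
    """zeta(t) = M[zeta(0)], the Moebius transformation of Eq. (8)."""
    m11, m12, m21, m22 = mobius_matrix(hx, hy, hz, t)
    return (m11 * zeta0 + m12) / (m21 * zeta0 + m22)
```

For a purely real field the dynamics is a rigid rotation: after a full period t = 2π/ω the transformation matrix is −I and ζ returns to its initial value, while the fixed points of the transformation stay put for all t.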
The inclusion of nonlinear anisotropy terms in the spin Hamiltonian10 inevitably leads to other types of spin dynamics equations due to the action of spin-orientation-dependent effective magnetic fields on the spin. The equation of motion (6) illustrates that the classical spin dynamics discussed here can be written in linear form despite the nonlinear nature of the LLGS Eq. (2) it reproduces. Understanding this linear system and its solutions represents a crucial step in describing nonlinear STT-driven magnetic systems.

### Möbius transformation

We now study the solution of Eq. (4) for linear spin Hamiltonians. Without loss of generality, we can take $${\tilde{h}}_{z}=0$$ and $${\rm{Im}}\,({\tilde{h}}_{x})=0$$ in Eq. (5) by choosing the z axis along $$[{\rm{Re}}(\tilde{{\bf{h}}})\times {\rm{Im}}(\tilde{{\bf{h}}})]$$ and the y axis along $${\rm{Im}}(\tilde{{\bf{h}}})$$, while $${h}_{x}\equiv {\rm{Re}}\,({\tilde{h}}_{x})$$ and $${\tilde{h}}_{y}\in {\mathbb{C}}$$ can be arbitrary:

$${\hat{ {\mathcal H} }}_{0}={h}_{x}{\hat{s}}_{x}+{\tilde{h}}_{y}{\hat{s}}_{y}.$$ (10)

The equation of motion for this Hamiltonian takes the form of a Riccati equation:

$$\dot{\zeta }(t)=-i\frac{{h}_{x}-i{\tilde{h}}_{y}}{2}[{\zeta }^{2}(t)-\frac{{h}_{x}+i{\tilde{h}}_{y}}{{h}_{x}-i{\tilde{h}}_{y}}]$$ (11)

with two fixed points,

$${\zeta }_{1,2}=\pm \sqrt{\frac{{h}_{x}+i{\tilde{h}}_{y}}{{h}_{x}-i{\tilde{h}}_{y}}},$$ (12)

and the solution

$$\zeta (t)=\frac{\cos (\frac{\sqrt{{h}_{x}^{2}+{\tilde{h}}_{y}^{2}}}{2}t)\,\zeta (0)+\frac{i{h}_{x}+{\tilde{h}}_{y}}{\sqrt{{h}_{x}^{2}+{\tilde{h}}_{y}^{2}}}\,\sin (\frac{\sqrt{{h}_{x}^{2}+{\tilde{h}}_{y}^{2}}}{2}t)}{\frac{i{h}_{x}-{\tilde{h}}_{y}}{\sqrt{{h}_{x}^{2}+{\tilde{h}}_{y}^{2}}}\,\sin (\frac{\sqrt{{h}_{x}^{2}+{\tilde{h}}_{y}^{2}}}{2}t)\,\zeta (0)+\cos (\frac{\sqrt{{h}_{x}^{2}+{\tilde{h}}_{y}^{2}}}{2}t)}.$$ (13)

Equation (13) shows that the time evolution of a classical spin generated by an arbitrary linear non-Hermitian Hamiltonian, presented in stereographic projection coordinates, is nothing but a Möbius transformation of $${{\mathbb{C}}}^{2}$$:

$$M={{\rm{e}}}^{\frac{i}{2}({h}_{x}{\sigma }_{x}+{\tilde{h}}_{y}{\sigma }_{y})t}=[\begin{matrix}\cos (\frac{\sqrt{{h}_{x}^{2}+{\tilde{h}}_{y}^{2}}}{2}t) & \frac{i{h}_{x}+{\tilde{h}}_{y}}{\sqrt{{h}_{x}^{2}+{\tilde{h}}_{y}^{2}}}\sin (\frac{\sqrt{{h}_{x}^{2}+{\tilde{h}}_{y}^{2}}}{2}t)\\ \frac{i{h}_{x}-{\tilde{h}}_{y}}{\sqrt{{h}_{x}^{2}+{\tilde{h}}_{y}^{2}}}\sin (\frac{\sqrt{{h}_{x}^{2}+{\tilde{h}}_{y}^{2}}}{2}t) & \cos (\frac{\sqrt{{h}_{x}^{2}+{\tilde{h}}_{y}^{2}}}{2}t)\end{matrix}],$$ (14)

in accordance with Eqs. (7), (9) and (10).

### Classification of Möbius transformations based on spin dynamics

The traditional classification of Möbius transformations based on the number and type of fixed points distinguishes three different classes: elliptic, loxodromic (including hyperbolic as a special case) and parabolic transformations, which can be identified by calculating $${{\rm{tr}}}^{2}M$$ 17. Here we show that all Möbius transformations can be obtained from a superposition of only two basic transformations, elliptic and hyperbolic, because in spin dynamics these two directly correspond to applied real and imaginary magnetic fields. An elliptic Möbius transformation induces a uniform rotation of the entire Riemann sphere around a central axis, while a hyperbolic transformation produces antipodal expansion and contraction centers, see Fig.
1, where the lines depict invariant geodesics of the corresponding Möbius transformation on the sphere. According to this consideration, every elliptic and hyperbolic transformation is fully determined by two parameters: ‘direction’ and ‘amplitude’. Together, these parameters define the direction of the geodesics, including the location of the fixed points, and the displacement of points on the Riemann sphere along the geodesics upon the transformation. In these terms, the action of a real magnetic field $${\bf{h}}=({h}_{x},{h}_{y},{h}_{z})$$ leads to spin dynamics governed by an elliptic Möbius transformation with the normalized transformation matrix $$M=\exp (\frac{i}{2}{\sum }_{k=x,y,z}{h}_{k}{\sigma }_{k})$$. Similarly, an imaginary applied magnetic field produces spin dynamics associated with a hyperbolic transformation, with the matrix of the transformation containing purely imaginary coefficients $${h}_{k}$$ in the exponent. Given that any complex matrix M with det M = 1 can be uniquely represented as a matrix exponential of the form $$M=\exp (\frac{i}{2}{\sum }_{k=x,y,z}{\tilde{h}}_{k}{\sigma }_{k})$$, where $${\tilde{h}}_{k}\in {\mathbb{C}}$$, it follows that any Möbius transformation is a superposition of an elliptic and a hyperbolic transformation. A general loxodromic transformation has two fixed points, an attractive and a repulsive node, which in spin dynamics correspond to the stable and unstable equilibrium states. The transformation (14) is loxodromic when both of the following two conditions are met: (a) $$\beta \equiv \,{\rm{Im}}\,{\tilde{h}}_{y}\ne 0$$ and (b) $${h}_{y}\equiv {\rm{Re}}\,{\tilde{h}}_{y}\ne 0$$ if $${h}_{x}\ne 0$$, which follows from the condition $${{\rm{tr}}}^{2}M\in {\mathbb{C}}\backslash [0,\,4]$$ 17. Let us now consider a superposition of mutually orthogonal elliptic (conservative spin dynamics in a real magnetic field, see Fig. 1a) and hyperbolic (spin saturation in the direction of an imaginary magnetic field, see Fig.
1b) transformations. For the Möbius transformation (14) this corresponds to $${h}_{x}\ne 0$$, $${h}_{y}=0$$, and $$\beta \ne 0$$. Depending on the ratio $$\varepsilon \equiv |\beta /{h}_{x}|$$, the transformation (14) can be elliptic (ε < 1), loxodromic (ε > 1) or parabolic (ε = 1). As ε approaches 1 from below, the two fixed points of the elliptic transformation, which describes steady-state spin dynamics, move toward one another (see Fig. 2a) until they eventually coalesce into the single fixed point of a parabolic transformation at ε = 1, as shown in Fig. 2b. As ε is increased further, the fixed point splits into the attractive and repulsive centers of the hyperbolic transformation (see Fig. 2c), which corresponds to exponentially fast saturation of the spin. The described transition plays an important role in spin dynamics: it is associated with the transition between the regimes of unbroken and broken $${\mathscr{P}}{\mathscr{T}}$$ symmetry10. Expectation values of the spin Hamiltonian (1) evaluated at the fixed points, Eq. (12), $${E}_{1,2}=\pm \sqrt{{h}_{x}^{2}+{\tilde{h}}_{y}^{2}}$$, are directly related to the eigenvalues of the corresponding Möbius transformation matrix, Eq. (14), $${\lambda }_{1,2}={{\rm{e}}}^{i\frac{{E}_{1,2}t}{2}}$$. They fully determine the types of the fixed points and the type of the transformation. The standard classification17 uses multipliers of the transformation

$${\kappa }_{1,2}\equiv {\lambda }_{1,2}^{-2}={{\rm{e}}}^{-i{E}_{1,2}t},$$ (15)

such that |κ 1,2| = 1 $$({\kappa }_{1,2}={{\rm{e}}}^{\pm i\theta }\ne 1)$$ for elliptic transformations, κ 1,2 = 1 for parabolic transformations, and κ 1,2 ≠ 1 for loxodromic transformations (with real κ 1,2 ≠ 1 in the special case of hyperbolic transformations). In the language of classical spin dynamics, this outcome fully accords with the above considerations.
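The ε-dependence of the transformation class can be checked numerically with the tr²M criterion quoted above. A sketch (function name ours; t = 1 is an arbitrary choice that avoids the accidental tr²M = 4 occurring at full rotation periods of an elliptic transformation):

```python
import cmath

def classify(hx, beta, t=1.0):
    """Class of the Moebius transformation (14) for h_y = 0 and
    Im(h_y~) = beta, i.e. h_y~ = i*beta, via the tr^2 M criterion:
    tr^2 M in [0, 4) -> elliptic, tr^2 M = 4 -> parabolic,
    anything else (including complex values) -> loxodromic."""
    w = cmath.sqrt(hx ** 2 - beta ** 2)      # sqrt(hx^2 + (i*beta)^2)
    tr2 = (2 * cmath.cos(w * t / 2)) ** 2
    if abs(tr2.imag) < 1e-12:
        if abs(tr2.real - 4) < 1e-12:
            return "parabolic"
        if 0 <= tr2.real < 4:
            return "elliptic"
    return "loxodromic"
```

With $$\varepsilon =|\beta /{h}_{x}|$$, the three branches reproduce the elliptic (ε < 1), parabolic (ε = 1) and loxodromic (ε > 1) regimes of the $${\mathscr{P}}{\mathscr{T}}$$ symmetry-breaking transition described above.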
## Conclusions We have shown that the time evolution of linear classical single-spin systems has a simple interpretation in terms of Möbius transformations of $${{\mathbb{C}}}^{2}$$, provided magnetic anisotropies are negligibly small. The $${\mathscr{P}}{\mathscr{T}}$$ symmetry-breaking phase transition in such systems can be identified as a transition between elliptic and hyperbolic (via parabolic) classes of Möbius transformations appearing as solutions of the corresponding spin dynamics equations in complex stereographic coordinates. The established correspondence between linear spin dynamics and Möbius transformations reveals that any Möbius transformation can be produced by a unique superposition of an elliptic and a hyperbolic transformation, corresponding to real and imaginary applied magnetic fields, respectively. We have demonstrated that the nonlinear LLGS equation describing dissipative STT-driven dynamics of a linear single-spin system can be written in a linear form, illustrating that such dynamics cannot produce any nonlinear effects, e.g. chaotic dynamics, for which additional time-dependent perturbations are necessary11, 12. The nonconservative effect of Slonczewski STT on spin systems, equivalent to the action of an imaginary magnetic field, promises a unique tool for studying Lee-Yang zeros18 in ferromagnetic Ising and Heisenberg models. ## References 1. Landau, L. D. & Lifshitz, E. M. On the theory of the dispersion of magnetic permeability in ferromagnetic bodies. Phys. Z. Sowjetunion 8, 101–114 (1935). 2. Gilbert, T. L. A phenomenological theory of damping in ferromagnetic materials. IEEE Trans. Magn. 40, 3443 (2004). 3. Slonczewski, J. C. Current-driven excitation of magnetic multilayers. J. Magn. Magn. Matter 159, L1–L7 (1996). 4. Chen, E. et al. Advances and future prospects of spin-transfer torque random access memory. IEEE Trans. Magn. 46, 1873–1878 (2010). 5. Kawahara, T. et al. Spin-transfer torque RAM technology: review and prospect. 
Microelectron. Reliab. 52, 613–627 (2012). 6. Locatelli, N., Cros, V. & Grollier, J. Spin-torque building blocks. Nature Mater 13, 11–20 (2014). 7. Hoffmann, A. & Bader, S. D. Opportunities at the Frontiers of Spintronics. Phys. Rev. Appl. 4, 047001 (2015). 8. Chudnovsky, E. M. & Tejada, J. Lectures on Magnetism. Rinton Press (2006). 9. Mayergoyz, I. D., Bertotti, G. & Serpico, C. Nonlinear magnetization dynamics in nanosystems. Elsevier (2009). 10. Galda, A. & Vinokur, V. M. Parity-time symmetry breaking in magnetic systems. Phys. Rev. B 94, 020408(R) (2016). 11. Yang, Z., Zhang, S. & Charles Li, Y. Chaotic dynamics of spin-valve oscillators. Phys. Rev. Lett. 99, 134101 (2007). 12. Bragard, J. et al. Chaotic dynamics of a magnetic nanoparticle. Phys. Rev. E 84, 037202 (2011). 13. Bertotti, G., Mayergoyz, I. D. & Serpico, C. Spin-Wave Instabilities in Large-Scale Nonlinear Magnetization Dynamics. Phys. Rev. Lett. 87, 217203 (2001). 14. Lakshmanan, M. & Daniel, M. On the evolution of higher dimensional Heisenberg ferromagnetic spin systems. Physica A 107, 533 (1981). 15. Lieb, E. H. The classical limit of quantum spin systems. Commun. Math. Phys. 34, 327–340 (1973). 16. Stone, M., Park, K.-S. & Garg, A. The semiclassical propagator for spin coherent states. Journ. Math. Phys. 41, 8025–8049 (2000). 17. Needham, T. Visual complex analysis. Clarendon Press (1997). 18. Yang, C. N. & Lee, T. D. Statistical theory of equations of state and phase transitions. II. Lattice gas and Ising model. Phys. Rev. 87, 410 (1952). ## Acknowledgements This work was supported by the US Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division. ## Author information Authors ### Contributions A.G. conceived and performed calculations; V.V. supervised the project, both authors discussed results and wrote the manuscript. ### Corresponding author Correspondence to Alexey Galda. 
Galda, A. & Vinokur, V. M. Linear dynamics of classical spin as Möbius transformation. Sci. Rep. 7, 1168 (2017). https://doi.org/10.1038/s41598-017-01326-x
# Astronomy and Space

## Aug 08

### Galactic Center Plasma

The Milky Way Galaxy’s central regions are a dynamic and fascinating laboratory. Besides the presence of the supermassive black hole, the galactic center also plays host to a disproportionate number of massive stars, gas at high densities and high temperatures, and a strong galactic wind. All this and more combines to create a dynamic environment that we do not entirely understand yet. Remarkably, the evidence we have to understand the galactic center comes from observations taken at wavelengths outside of the visible spectrum. Our Milky Way is a flat spiral disk galaxy, and our sun and solar system are very nearly embedded right in the middle of the plane of the disk of the Milky Way. We’re about 8 kpc (or about 26000 light years) away from the center of the Milky Way, yet we are positioned just 30 pc (or about 100 ly) above the plane of the Milky Way’s disk. This means that when we look towards the Milky Way’s center, we have to observe through the entire intervening plane of the Milky Way located between us and the center, and the gas and dust it contains. This gas and dust along the way tends to cause extinction for shorter wavelengths, and allows longer wavelengths to pass through1. This means that, for galactic center observations, wavelengths beyond 1 µm (which is in the infrared portion of the spectrum) can be used. The dust and gas along the way also become transparent at more energetic portions of the electromagnetic spectrum, for radiation with energy greater than a few keV. Because of this extinction and scattering of light along the way from the galactic center to the Earth, our observations of the region are limited to infrared and longer wavelengths, or to X-ray energies and beyond. Still, even with this limited portion of the spectrum to work with, we can get a very good picture of the exciting dynamics and properties of the galactic center. 
An interesting and mysterious phenomenon is revealed by observations in X-ray. The spectra collected from X-ray photons display He-like and H-like lines that are estimated to originate in a plasma at two different energies (i.e. two different temperatures)2: 0.8 keV (“soft” plasma) and 8 keV (“hard” plasma).

#### Plasma! Hot plasma!

So, before diving into what makes these two types of plasma found in the galactic center interesting, it’s useful to have an overview of plasmas. Plasma is often described as the fourth state of matter. Plasmas are like gases, but with one major exception: the atoms are separated into charged particles, with electrons ionized from their atoms, creating a collection of negatively charged electrons and positively charged ions. This essential difference results in the main difference between a plasma and a gas. Plasmas act differently than gases because the charged particles in a plasma cause additional phenomena beyond those exhibited by the uncharged particles in a gas and described by statistical mechanics. The galactic center contains a large amount of hot plasma, and as revealed by X-ray observations, this plasma is found at two different temperatures. Now, an obvious question is how and where the hot plasma in the galactic center originated. Supernovae and their remnants are excellent sources of energy that help generate heat and plasma at their shock fronts. These supernovae can help explain the presence of the soft plasma. The galactic center hosts a huge number of supernova events, with estimates of 0.04 supernovae per century. This rate is 2000 times greater than the average rate of the rest of the galaxy. For the warmer, 8 keV, component of the plasma, there isn’t yet a satisfactory explanation for a heating mechanism. Supernova remnants are only known to produce plasma that reaches energies approaching 3 keV. Supernovae themselves are hotter, but do not last long enough to heat the plasma found in the galactic center. 
Without a heating mechanism, we start running into a problem3. The temperature of a substance corresponds to the kinetic energy of the particles in that substance. So a substance with a high temperature means that the atoms and molecules in the substance have a high kinetic energy. This can be quantified by the mean, or average, thermal velocity, $$v_\text{th}$$: $$v_\text{th} = \sqrt{\frac{k_b T}{\mu m_p}}$$ Here, $$\mu$$ is the mean molecular weight. For a pure hydrogen plasma $$\mu = 0.5$$, and at the hard plasma temperature of the galactic center, we find a thermal velocity of about 1250 km/s. This is a problem when we compare this thermal velocity with the escape velocity required for a particle to escape from the galactic center. Estimates of the escape velocity for the galactic center are typically in the range of 1000–1200 km/s, so this means that we expect plasma at this temperature to escape rather than remain confined in the galactic center. Overall, we need a source that can heat the plasma found at the galactic center to the hard plasma temperature in less time than it takes for the plasma to escape from the galactic center. Unfortunately, we don’t know of any astrophysical mechanisms that can satisfy both these requirements.

#### But it may not be completely depressing…

Currently, there are theories proposing workarounds to help explain the presence of hard plasma in the galactic center. Some have proposed a magnetic field that may be helping confine the hard plasma in the galactic center and preventing it from escaping. However, detections of magnetic filaments in radio observations of the galactic center suggest that a magnetic field strong enough to confine the plasma is not present. It has also been suggested that the assumption of an almost purely hydrogen plasma may be incorrect. Above, the thermal velocity we calculated relied on an assumption of $$\mu = 0.5$$. 
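As a check on the numbers quoted above, the thermal velocity formula is easy to evaluate (a sketch; function name ours, constants in SI units):

```python
import math

M_P = 1.67262192e-27    # proton mass, kg
KEV = 1.602176634e-16   # 1 keV expressed in joules

def v_thermal_kms(kT_keV, mu):
    """Mean thermal velocity v_th = sqrt(k_B T / (mu * m_p)) in km/s,
    with the plasma temperature given as an energy k_B T in keV."""
    return math.sqrt(kT_keV * KEV / (mu * M_P)) / 1e3

# Hard (8 keV) plasma, pure hydrogen (mu = 0.5): ~1250 km/s, which exceeds
# the ~1000-1200 km/s escape velocity quoted for the galactic center.
```

For a fully ionized helium plasma the mean molecular weight rises to μ = 4/3, and the same call gives roughly 750 km/s, which is the comparison made next.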
But this value is higher if the plasma is not predominantly hydrogen and instead consists of heavier elements like helium, making the thermal velocity much lower. The thermal velocity for a predominantly helium plasma would be about 750 km/s, much less than the estimated escape velocity. Further, the hard helium plasma could be sufficiently heated by friction in molecular clouds found in the galactic center. This proposition of a helium plasma still requires further observational evidence to support it, but it might offer a path to understanding where the hot hard plasma in the galactic center came from and how it is regulated.

#### Sources and Further Exploration

• Belmont, R., M. Tagger, M. Muno, et al. “A Hot Helium Plasma in the Galactic Center Region”, 20 September 2005, The Astrophysical Journal, 631:L53–L56. • A fantastic outline of the problem of explaining the hard plasma in the galactic center, and the proposal of the helium plasma solution to the problem. • Carroll, Bradley W. and Dale A. Ostlie. “An Introduction to Modern Astrophysics”, 2007 (2nd Ed.), Pearson Education, Inc: 398–405, 922–932. • Good introduction to extinction and the galactic center. • Goto, Miwa, Nick Indriolo, T. R. Geballe, and T. Usuda. “H3+ Spectroscopy and the Ionization Rate of Molecular Hydrogen in the Central Few Parsecs of the Galaxy”, 2013, The Journal of Physical Chemistry A, 117: 9919–9930. • A short overview of the importance of ionization (e.g. plasmas) in understanding interstellar molecular clouds, and then a discussion of ionization in the galactic center regions. • Morris, Mark, and Eugene Serabyn. “The Galactic Center Environment”, 1996, Annu. Rev. Astron. Astrophys., 34: 645–701. • A review article with a huge amount of detail about the constituents of the galactic center. • Muno, M. P., F. K. Baganoff, M. W. Bautz, et al. 
“Diffuse X-ray Emission in a Deep Chandra Image of the Galactic Center”, 20 September 2004, The Astrophysical Journal, 613: 326–342. • Observations of the diffuse X-ray emission in the galactic center, and an overview of the problem of explaining the hard plasma. • Muno, M. P., J. S. Arabadjis, F. K. Baganoff, et al. “The Spectra and Variability of X-ray Sources in a Deep Chandra Observation of the Galactic Center”, 1 October 2004, The Astrophysical Journal, 613: 1179–1201. • Details about the spectra of X-ray observations of the galactic center. • Skinner, G. K, A. P. Willmore, C. J. Eyles, et al. “Hard X-ray images of the galactic centre”, 10 December 1987, Nature, 330: 544–547. • Some early observations of the galactic center in X-ray, and detections of diffuse X-ray emission. 1. This can be explained, in a simple way, by Mie scattering. Essentially, Mie scattering explains that a spherical (hehe…) dust grain will scatter electromagnetic wavelengths that are smaller than the order of size of the dust grain. Here’s a helpful analogy (adapted from one provided by Carroll and Ostlie): if the waves on an ocean are much smaller than an obstructing island, they get blocked. However, if they are much larger in size than the island, they pass by mostly unscathed. Mie scattering’s predictions break down for high energy radiation, though, in and beyond the ultraviolet region. 2. The energies ($$kT \approx 0.8 \text{ keV}$$ and $$kT \approx 8 \text{ keV}$$) easily convert to temperatures. It is convenient to share temperature information in terms of energy since it can be measured directly from the observed X-ray photons. 3. Or rather, as demonstrated, the plasma starts running into a problem. (Yes, I know everyone here comes for the subtle jokes. They’re the best kind.) ## Aug 06 ESA’s Rosetta spacecraft became the first spacecraft to rendezvous with a comet earlier today! 
And we now have some fabulous and stunning images of Comet 67P/Churyumov–Gerasimenko thanks to the OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) camera onboard. Image: ESA/Rosetta/MPS for OSIRIS Team MPS/UPD/LAM/IAA/SSO/INTA/UPM/DASP/IDA

## Mar 17

### Gravitational Waves in the Cosmic Microwave Background

Earlier today, astronomers and physicists working on the BICEP2 collaboration announced results from their three-year-long experiment. Polarization measurements of the Cosmic Microwave Background radiation have revealed signs of gravitational waves. The experiment also provides some of the strongest evidence yet to support the inflation hypothesis first proposed by Alan Guth. Unfortunately, I’ve been a little short on time this semester (as evidenced by the lack of activity on the blog over the past few months), so I don’t have too much time to dive into this exciting discovery. But I do at least want to point to some of the best explanations of the newly announced results: • Physicist Sean Carroll provides an excellent (and technical) overview of the significance of the results here and here. • Astrobites also provides a technical but good overview of the results here. • The Guardian has a much more accessible article that highlights the significance of the results here. • The actual data and results (and pretty plots!) from the BICEP2 collaboration are available here. Update: It seems that the results are not nearly as significant as initially proposed due to a possibly incorrect accounting of foreground dust. Here’s a revised publication, and an overview of what happened.

## Nov 27

Earlier this year, a second reaction wheel (out of the total four) failed in the Kepler spacecraft. Reaction wheels are like gyroscopes, and provided the stability for the precise pointing of the Kepler spacecraft. Three reaction wheels were required to provide the necessary high amount of precision in pointing to enable much of the spacecraft’s exoplanet detection. 
Without the third reaction wheel, the spacecraft’s primary mission was thought to be finished. Since then, the (very creative) engineers at NASA have proposed a plan to use photons from the Sun in place of the third reaction wheel. Photons colliding with the surface of the Kepler spacecraft result in a force being applied to the spacecraft, and the resulting radiation pressure could be used to provide the necessary stability for the spacecraft to continue with further exoplanet detection work. The graphic above provides a simple overview of the procedure (click for full size). (Source: nasa.gov) ## Oct 31 ### FRBs: Mysterious Pulses in Radio Fast Radio Bursts (FRBs1) are exactly what their name implies: very quick and bright signals visible in the sky at radio wavelengths. Just by that description alone, though, these signals sound a little boring. But when we start to dig a little bit deeper into what makes up a FRB, we find something much more exciting, possibly even originating from very bright cataclysmic events far away. #### Dispersion Measure An especially useful way to look at FRBs is through the lens of the quantity called “dispersion measure”. The light from any astrophysical source takes a finite amount of time to reach to the Earth. Simply, that time is due to the finite speed of light. However, there can be an additional delay, accounted by the fact that the space that the light travels through is not quite a perfect vacuum. By modeling the space between stars, the interstellar medium, as a cold and ionized plasma, we discover that a signal’s group velocity travels slower than the speed of light. Therefore, a signal traveling through the interstellar medium cannot travel as fast as the speed of light. 
This effect can be described in the following equation detailing the time delay that a signal experiences due to dispersion: $$\Delta t \simeq 4.15 \times 10^6 \text{ ms} \times (f_1^{-2} - f_2^{-2}) \times \text{DM}$$ Here the frequencies are in MHz, the DM is in pc cm-3, and the delay comes out in milliseconds. The DM stands for dispersion measure, and is given by: $$\text{DM} = \int_0^d n_e dl$$ The main takeaway from these equations is that the time delay is dependent on frequency. Signals at lower frequencies, and therefore at longer wavelengths, arrive later than signals at higher frequencies. This is why this effect becomes important for radio waves, which have long wavelengths. At shorter wavelengths, like optical or even infrared wavelengths, the delay due to dispersion becomes negligible. The time delay is also dependent on the dispersion measure (DM), which in turn is essentially the sum of all the free electrons along our line of sight to the object emitting the signal. If there are more electrons along the way, the DM will be higher, and we expect a longer time delay in the signal. If there is a short and bright broadband (visible across a wide range of frequencies) signal, the dispersion gives rise to a very characteristic pattern. We see the higher frequencies, or shorter wavelengths, arrive first at the Earth. The lower frequencies, which have a longer time delay, arrive later. An example is shown in the plots below for DM = 125 and 250 pc cm-3: If there were no dispersion, our broadband signal would just look like the dotted line: all the frequencies would arrive at the Earth at the same time. But since there are electrons in the way, we get dispersion, and the signals we measure start looking like that curved solid line. The dispersion measure becomes more powerful when you start thinking of it as an analog to distance. Since objects further away emit signals that travel through more of the interstellar medium, they encounter more electrons along their journey to a telescope on the Earth and therefore have higher values for their DM. 
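The delay formula can be transcribed directly (function name ours; frequencies in MHz, DM in pc cm-3):

```python
def dispersion_delay_ms(f1_mhz, f2_mhz, dm):
    """Arrival-time delay (in ms) of the lower frequency f1 relative to the
    higher frequency f2 for a given dispersion measure dm (pc cm^-3):
    dt ~= 4.15e6 ms * (f1^-2 - f2^-2) * DM, with frequencies in MHz."""
    return 4.15e6 * (f1_mhz ** -2 - f2_mhz ** -2) * dm
```

For example, at DM = 250 pc cm-3, the sweep from 1500 MHz down to 1200 MHz stretches a burst by about 0.26 s, which is what produces the curved arrival pattern described above.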
For example, since pulsars emit quick broadband pulses in radio, we can use their DMs to get a rough estimate of their distances from the Earth. The dispersion measure is also different when we look out at different areas of the sky. Since the Milky Way is a flat spiral galaxy, most of the dust in the galaxy is confined to a thin plane. When we look at objects in the plane of the galaxy, we expect their DM to be higher because signals emitted by these objects have to travel through more dust, and therefore more electrons that cause dispersion, to reach us. Conversely, looking at objects outside the plane of the galaxy, we expect the DM to be lower since there is not as much dust in the way. We can see this by mapping the DMs of all the pulsars we know of in terms of their positions relative to the galactic plane (or in other words, their galactic latitude):

As expected, we can see a peak at a latitude of 0, where the galactic plane is located. Looking away from the plane, the dispersion measures of the pulsars fall away quickly, since there is less dust in the way. The only exceptions are the two small peaks near -30 and -45 degrees latitude: those correspond to the two Magellanic Clouds.

#### The artist formerly known as Lorimer Burst

So now that we understand DMs, we can properly dive into FRBs. The first FRB was discovered in 2007, and was called a Lorimer Burst. While searching through old archival data of the Small Magellanic Cloud from the Parkes Radio Telescope in Australia, a group of astronomers led by Duncan Lorimer discovered a single very bright burst in radio lasting less than 5 milliseconds. What made the burst notable was its large power, short duration, and especially its high dispersion measure. The burst was located well below the galactic plane, a few degrees away from the Small Magellanic Cloud, yet it still had a DM of 375 pc cm⁻³. What this suggested was that the signal was not originating from our Milky Way galaxy.
If the signal had originated from within the galaxy, it would likely have had a smaller DM, matching that of the pulsars near the signal. In fact, models of the dust distribution in the Milky Way indicate that only 25 pc cm⁻³ would be expected for an object located at that particular position in the sky. Instead, the higher DM means that the signal passed through a larger number of electrons before being detected, suggesting a much greater distance. In the original discovery paper, the authors estimated a maximum distance of about 1 Gpc, which would place the source in another, distant galaxy. The large distance also suggests that the power of the event that generated the signal would have to be very large in order for it to be detected from so far away.

Unfortunately, no other events in that area of the sky were recorded at the time in other observations. A simultaneous observation of an event in x-ray or gamma ray, for example, could have provided more clues about what caused the burst and where it's located. While this burst was fascinating and mysterious on its own, it was still only one event. Without more similar bursts, we could only say a limited amount about what the objects emitting the burst might be.

Five more bursts have since been detected and reported: one in 2011 and four more in 2013. Along the way, these bursts lost their original "Lorimer Burst" name and instead gained the name "Fast Radio Burst". All of these bursts share similar characteristics with the original Lorimer Burst. The high values for the DM of these bursts are especially notable. Plotting the DM of these FRBs (in red) on our pulsar DM plot from before highlights how much these bursts don't fit in with the pulsars in the galaxy:

The high dispersion measures of these bursts are very convincing in suggesting that FRBs are not from within our own galaxy. This raises many interesting theoretical possibilities.
To generate pulses powerful enough to be detected over the large distances predicted would require very energetic mechanisms that we do not yet know of or understand. Currently, the only known sources visible at radio wavelengths outside our own galaxy include things like gamma-ray bursts and active galactic nuclei. None of these, however, can generate the fast and bright signals that we see for FRBs. FRBs could be leading us toward discoveries of very exciting astronomical phenomena, something that we cannot yet explain theoretically.

#### A little closer to home?

But throughout this post so far, I may have been a little misleading... When looking at the FRBs, we have been assuming that the large DM is caused by the interstellar medium. Although this is usually the primary cause of the dispersion observed in astronomical objects, it does not have to be the only cause or even the largest contributor to the dispersion. Any ionized plasma with electrons could in principle cause the same effect. We cannot necessarily say that the high dispersion measure of FRBs is due to the interstellar medium.

Just a few weeks ago, a group of astronomers proposed that flaring stars within the Milky Way might in fact be the source of FRBs. Examples of small dwarf stars that produce flares in radio with the necessary brightness and time scale have already been found. In order to explain the high DM we observe for FRBs, the astronomers modeled these flaring stars to include a plasma "blanket" surrounding them. Coherent emission in radio could be generated at the bottom of the coronae of the stars, and once it passes through the plasma blanket, a time delay is added on due to dispersion. So if this model turns out to be correct, what astronomers thought might be an effect of the interstellar medium over billions of parsecs may instead just be due to a thick blanket of plasma surrounding flaring stars2.
#### Small Sample Size

The end of the story about FRBs is that there really is no end right now. We're just getting started. FRBs are a very new discovery and we haven't found enough examples of them to make convincing statements about what causes them or even where they're located. We're working with a very small sample size of 6, and we cannot say very much until we find more examples and perhaps discover interesting events coincident with an FRB at other wavelengths.

The recent discovery of four FRBs allowed astronomers to estimate a rate at which these events may be occurring: about 10,000 per day over the whole sky. That's a big number and you may wonder why we haven't been able to find more if they're that frequent. The answer is that the telescopes big enough to detect these signals can only look at small areas of the sky. We have only started to detect FRBs because of large scale surveys searching for pulsars and other radio transients. As we increase these transient surveys over the coming years, we should find more examples of FRBs, and these examples can help us better understand what is actually behind these bright mysterious pulses.

#### Sources and Further Exploration

• Keane, E. F., M. Kramer, A. G. Lyne, et al. "Rotating Radio Transients: new discoveries, timing solutions and musings", 13 April 2011, Monthly Notices of the Royal Astronomical Society, 415, 3065–3080.
• Keane, E. F., B. W. Stappers, M. Kramer, and A. G. Lyne. "On the origin of a highly-dispersed coherent radio burst", 19 June 2011, Monthly Notices of the Royal Astronomical Society (pre-print), arXiv:1206.4135v1.
• Loeb, Abraham, Yossi Shvartzvald, and Dan Maoz. "Fast radio bursts may originate from nearby flaring stars", 9 October 2013, Monthly Notices of the Royal Astronomical Society (pre-print), arXiv:1310.2419v1.
• Lorimer, D. R., M. Bailes, M. A. McLaughlin, et al. "A Bright Millisecond Radio Burst of Extragalactic Origin", 2 November 2007, Science, 318, 777–780.
• Lorimer, D. R., A.
Karastergiou, M. A. McLaughlin, and S. Johnston. "On the detectability of extragalactic fast radio transients", 25 July 2013, Monthly Notices of the Royal Astronomical Society (pre-print), arXiv:1307.1200v3.
• Lorimer, Duncan, and Michael Kramer. "Handbook of Pulsar Astronomy", 2007, Cambridge University Press.
• Thornton, D., B. Stappers, M. Bailes, et al. "A Population of Fast Radio Bursts at Cosmological Distances", 5 July 2013, Science, 341, 53–56.

1. Trust me, it's a lot more fun if you pronounce FRBs as "Furbies". Extra points if you imagine small furry/creepy toys actually being responsible for these signals.
2. Another way to think of this: These are sick stars that sometimes sneeze. They're covered in thick plasma blankets in order to stay warm and get better. (My imagination might be a little too active right now...)

Some fascinating details and background about the Soviet shuttle program.

Launching up to 60 times per year with the capacity to lift nearly 25,000 kg into low-Earth orbit meant that the United States could put a lot of hardware into space each year. It seemed plausible that the Americans were planning to launch experimental laser weapons into orbit—and with the shuttle's capacity to bring 15 tons back from space, these weapons could be tested in orbit and then be brought back for modification. In the long term, this capability would let the Americans build a functioning orbital battle station.

A clear explanation for why there isn't an easy and simple way to say that Voyager 1 has reached interstellar space.

## Sep 12

We know trillions of stars, millions of galaxies, and only this one place with this strange accident of self-replicating chemistry. When that sense of singularity ends, as it very well might, how might humans see the cosmos differently? The real space age will have begun.

Alexis Madrigal writes a beautiful tribute to the Kepler spacecraft.

Interesting look at attempts to find more fundamental origins of quantum mechanics.
The lesson, says Fuchs, isn't that Spekkens's model is realistic — it was never meant to be — but that entanglement and all the other strange phenomena of quantum theory are not a completely new form of physics. They could just as easily arise from a theory of knowledge and its limits.

The ISS Expedition 36 crew arrived back on Earth on September 10 aboard a Soyuz capsule. This picture in particular beautifully captures the retrorockets being fired to slow down the capsule before its landing. Soyuz landings are very photogenic.
# Error Function Matlab Example

## Error Function

The error and complementary error functions occur, for example, in solutions of the heat equation when boundary conditions are given by the Heaviside step function. The defining integral cannot be evaluated in closed form in terms of elementary functions, but by expanding the integrand $e^{-z^2}$ into its Maclaurin series and integrating term by term, one obtains a series representation. At the real axis, erf(z) approaches unity at z→+∞ and −1 at z→−∞. The error function is related to the cumulative distribution $\Phi$, the integral of the standard normal distribution, by

$$\Phi(x) = \frac{1}{2} + \frac{1}{2}\operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right)$$

Some authors discuss the more general functions $E_n(x) = \frac{n!}{\sqrt{\pi}} \int_0^x e^{-t^n}\,dt$; all generalised error functions for n > 0 look similar on the positive x side of the graph. Java: Apache commons-math provides implementations of erf and erfc for real arguments.

If X is a vector or a matrix, erf(X) computes the error function of each element of X. Because these numbers are not symbolic objects, you get floating-point results:

    A = [erf(1/2), erf(1.41), erf(sqrt(2))]
    A = 0.5205 0.9539 0.9545

Compute the error function for the same numbers converted to symbolic objects; use sym to convert 0 and infinities to symbolic objects, for example for x = 0, x = ∞, and x = -∞. Use sym to convert complex infinities to symbolic objects:

    [erf(sym(i*Inf)), erf(sym(-i*Inf))]
    ans = [ Inf*1i, -Inf*1i]

Many functions, such as diff and int, can handle expressions containing erf.

Plot the CDF of the normal distribution with $\mu = 0$ and $\sigma = 1$:

    x = -3:0.1:3;
    y = (1/2)*(1+erf(x/sqrt(2)));
    plot(x,y)
    grid on
    title('CDF of normal distribution with \mu = 0 and \sigma = 1')

## Complementary Error Function

When erfc(x) is close to 1, then 1 - erfc(x) is a small number and might be rounded down to 0. Compute the complementary error function for elements of matrix M and vector V:

    M = sym([0 inf; 1/3 -inf]);
    V = sym([1; -i*inf]);
    erfc(M)
    erfc(V)
    ans =
    [ 1, 0]
    [ erfc(1/3), 2]

A continued fraction expansion of the complementary error function is:

$$\operatorname{erfc}(z) = \frac{z}{\sqrt{\pi}}\, e^{-z^2} \cfrac{1}{z^2 + \cfrac{a_1}{1 + \cfrac{a_2}{z^2 + \cdots}}}$$

If L is sufficiently far from the mean, i.e. $\mu - L \geq \sigma\sqrt{\ln k}$, then a Chernoff-type bound applies to $\Pr[X \leq L]$.

## Imaginary Error Function

If x is a vector or a matrix, erfi(x) returns the imaginary error function of each element of x. Compute the imaginary error function for elements of matrix M and vector V:

    M = sym([0 inf; 1/3 -inf]);
    V = sym([1; -i*inf]);
    erfi(M)
    erfi(V)
    ans =
    [ 0, Inf]
    [ erfi(1/3), -Inf]

Compute the first and second derivatives of the imaginary error function:

    syms x
    diff(erfi(x), x)
    diff(erfi(x), x, 2)
    ans = (2*exp(x^2))/pi^(1/2)
    ans = (4*x*exp(x^2))/pi^(1/2)

Integrals of expressions containing erfi, such as int(erfi(x), x) and int(erfi(log(x)), x), can also be computed symbolically.

## Inverse Error Function

For −1 < x < 1, there is a unique real number denoted $\operatorname{erf}^{-1}(x)$ satisfying $\operatorname{erf}(\operatorname{erf}^{-1}(x)) = x$. The inverse complementary error function is defined as $\operatorname{erfc}^{-1}(1-z) = \operatorname{erf}^{-1}(z)$. For real values x, the system applies the following simplification rules:

    inverf(erf(x)) = inverf(1 - erfc(x)) = inverfc(1 - erf(x)) = inverfc(erfc(x)) = x
    inverf(-erf(x)) = inverf(erfc(x) - 1) = inverfc(1 + erf(x)) = …

## Throwing Errors with error

error(msg) throws an error and displays an error message. You must specify more than one input argument with error if you want MATLAB to convert special characters (such as \n) in the error message. If all inputs to error are empty, MATLAB does not throw an error. An error message identifier (msgID) can also be supplied; no whitespace characters can appear anywhere in msgID. errorStruct is a scalar structure carrying error reporting information. Within a try/catch statement, MATLAB passes control to the catch block.

    msg = 'Error occurred.';
    error(msg)
    Error occurred.

Throw a formatted error message with a line break, including information about the class of variable n in the error message:

    n = 7;
    if ~ischar(n)
        error('Error. \nInput must be a char, not a %s.',class(n))
    end
    Error.
    Input must be a char, not a double.

## References

[1] Cody, William J. "Rational Chebyshev Approximations for the Error Function", 1969.
[2] Abramowitz, M., and I. A. Stegun, eds. New York: Dover, 1972. ISBN 978-0-486-61272-0, LCCN 65-12253.
[3] Chang, Seok-Ho, Pamela C. Cosman, and Laurence B. Milstein. "Chernoff-Type Bounds for the Gaussian Error Function", November 2011. doi:10.1109/TCOMM.2011.072011.100049.
[4] "On the calculation of the Voigt line profile: a single proper integral with a damped sine integrand", 1 March 2007, Monthly Notices of the Royal Astronomical Society, 375 (3): 1043–1048.
[5] Handbook of Continued Fractions for Special Functions. Springer-Verlag.
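The relation between erf and the normal CDF is not MATLAB-specific; Python's standard math module also ships erf and erfc, so the same computation can be sketched there (the function name is mine):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    # Phi(x) = 1/2 * (1 + erf((x - mu) / (sigma * sqrt(2))))
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# erf and erfc are complements, so erf(x) + erfc(x) == 1 for any x,
# and the standard normal CDF at 0 is exactly 1/2 by symmetry.
print(math.erf(0.5))     # ~0.5205, matching the floating-point MATLAB value
print(normal_cdf(0.0))   # 0.5
```

The complement identity is also why the "1 - erfc(x) may round down to 0" warning matters: for large x, erfc(x) carries the tiny tail probability that erf(x) alone cannot represent.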
# How is the $\lambda$ tuning parameter in lasso logistic regression generated?

I know glmnet(x,y) generates $\lambda$, but I am very curious to know the actual formula behind generating it.

• Are you talking about the function glmnet in the R package glmnet? – Glen_b Nov 17 '13 at 22:38
• @Glen_b yes, library(glmnet) in R – bison2178 Nov 17 '13 at 23:35
• I discovered one thing I had missed before: here (scroll down to lambda.min.ratio) it says that the largest value of $\lambda$, lambda.max, is the (data derived) entry value (i.e. the smallest value for which all coefficients are zero). Once lambda.max is given, obtaining the rest of the $\lambda$ sequence should be pretty simple. The question remains, how is lambda.max calculated? – Richard Hardy Feb 4 '15 at 20:46

I had this same question and also ran into confusion in the F90 code in the glmnet package. In the end I took some code from the quadrupen package (at the end of quadrupen.R) and modified it for my purposes. I can confirm that the maximum lambda value produces all zero coefficients in glmnet with alpha=1. I'd love to hear better answers to this question or an implementation of the glmnet fortran version in R --- at least to help with teaching and learning.

    ### from quadrupen
    ## GENERATE A GRID OF PENALTIES IF NONE HAVE BEEN PROVIDED
    get.lambda.l1 <- function(xs,y,nlambda,min.ratio) {
        ##xs <- as(x, "dgCMatrix")
        ## currently not robust to missing values in xs or y
        ybar <- mean(y,na.rm=TRUE)
        xbar <- colMeans(xs,na.rm=TRUE)
        x <- list(Xi = xs@i, Xj = xs@p, Xnp = diff(xs@p), Xx = xs@x)
        xty <- drop(crossprod(y-ybar,scale(xs,xbar,FALSE)))
        lmax <- max(abs(xty))
        return(10^seq(log10(lmax), log10(min.ratio*lmax), len=nlambda))
    }

From the documentation, it seems that cross-validation is used on a self-generated sequence for lambda.
This results in lambda.min being the lambda value in the sequence which produces the smallest cvm (mean cross-validated error) and lambda.1se being the largest lambda in the sequence such that the error is within 1 standard error of the minimum. There is some discussion and illustration in section 6 of the JStatSoft article.

• Your answer addresses a different question than the one being asked. The question of interest is how the sequence of lambda is generated by the function glmnet in the glmnet package in R. I have been wondering about this question and checked the code behind the glmnet function, but ultimately faced some Fortran code (since the glmnet package is not entirely coded in R) which I had trouble with... I was unable to find the answer in the documentation nor in the JStatSoft article. – Richard Hardy Dec 5 '14 at 10:13
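As a sketch of what the quadrupen grid construction does, here is a pure-Python translation (all names are mine; it assumes dense data with no missing values, and note that glmnet itself additionally works on standardized variables and rescales by the number of observations and alpha):

```python
import math

def lambda_grid(X, y, nlambda=100, min_ratio=1e-4):
    """Log-spaced l1 penalty grid mirroring quadrupen's get.lambda.l1.

    X is a list of rows, y a list of responses. lambda_max is the largest
    absolute inner product between a centered column of X and the centered
    response: the "entry value" at which every lasso coefficient is zero.
    """
    n, p = len(X), len(X[0])
    ybar = sum(y) / n
    lmax = 0.0
    for j in range(p):
        col = [row[j] for row in X]
        cbar = sum(col) / n
        # <x_j - xbar_j, y - ybar>, as crossprod(y-ybar, scale(xs, xbar, FALSE))
        xty = sum((c - cbar) * (yi - ybar) for c, yi in zip(col, y))
        lmax = max(lmax, abs(xty))
    # 10^seq(log10(lmax), log10(min.ratio*lmax), len=nlambda)
    hi, lo = math.log10(lmax), math.log10(min_ratio * lmax)
    step = (lo - hi) / (nlambda - 1)
    return [10 ** (hi + i * step) for i in range(nlambda)]
```

The grid is decreasing, starting at lambda_max (all coefficients zero) and ending at min_ratio times lambda_max, so a path algorithm can warm-start each fit from the previous, slightly larger penalty.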
# How does the time dilation equation work?

Tags:

1. Jan 3, 2016

### Isaac0427

Hi all! I've been doing some studying of length contraction and time dilation and I came across this link http://hyperphysics.phy-astr.gsu.edu/hbase/relativ/tdil.html . I completely understand what it did with length contraction but when I got to time dilation I couldn't figure out one thing. They appeared to be using the equation t=(t'+vx'/c^2)γ, but I thought that the equation was t'=(t+vx/c^2)γ. Using that equation I got T=T0/γ and not T=T0γ, but I know that is wrong. Can you please help me understand this? Thanks!

2. Jan 3, 2016

### bcrowell

Staff Emeritus

To invert the Lorentz transformation, you just flip the sign of v. So the two equations you wrote are equivalent except that to make them consistent with each other you would need to replace v with -v in one of them.

3. Jan 3, 2016

### Isaac0427

Ok, but then wouldn't you have to flip the sign of the velocity in the γ term?

4. Jan 3, 2016

### Mister T

You, and the article you linked, are not using consistent notation. The proper time is the time elapsed in a frame of reference where the change in position is zero. Let's start with your equation $t'=(t+\frac{vx}{c^2}) \gamma$. If we set $x=0$ that makes $t$ a proper time (assuming a reference event at (0,0), as usual). Thus we have $t'=t \gamma$. Note that the transformation equation you wrote assumes the primed frame is moving to the left relative to the unprimed frame, which is not the usual convention. The usual convention is the opposite, so that $t'=(t-\frac{vx}{c^2}) \gamma$, but in this case it doesn't matter. And changing the sign of $v$ does not change the sign of $\gamma$.

5. Jan 3, 2016

### Staff: Mentor

They're the same equation except that they're saying t' and x' where most people would say t and x, and vice versa.
You could do something similar with the quadratic formula from first-year algebra that says that the solution to the equation $ax^2+bx+c=0$ is $\frac{-b\pm\sqrt{b^2-4ac}}{2a}$; just say that the solution to the equation $cx^2+bx+a=0$ is $\frac{-b\pm\sqrt{b^2-4ac}}{2c}$ - it looks different but that's just because we're playing games with the names of two of the variables.

If you flip which coordinates are the primed ones, you also have to flip which time interval is labeled $T$ and which is $T_0$. You ended up with the right result using the weird convention in which $T$ and $T_0$ have been flipped.

6. Jan 4, 2016

### Isaac0427

Ok, so first of all, there was a mistake in my original post. I said t'=t+vx/c^2 and x'=x+vt instead of t'=t-vx/c^2 and x'=x-vt. Using this, here's my thinking: Let T=t2-t1, T0=t'2-t'1, L=x2-x1 and L0=x'2-x'1. Using this and the Lorentz transformations, I got T0=Tγ assuming x1=x2 and L0=Lγ assuming t1=t2, which would also give me T=T0/γ and L=L0/γ. What did I do wrong?

7. Jan 4, 2016

### Isaac0427

Ok, so after reviewing this it appears to me that you get a different answer if you use the regular Lorentz transformation than if you use the inverse Lorentz transformation. Is this correct? If yes, how do you know which one to use?

8. Jan 4, 2016

### Mister T

Proper time is the time elapsed in the frame where the change in position is zero.

9. Jan 4, 2016

### PeroK

It's great you're learning SR, but I'd say you should try to sharpen up your mathematical thinking a little. You know which transformation to use from the way you set up your problem. You must choose a direction to be positive. (Keeping this to motion in one dimension.) Then, if you have an object moving, it has either a positive or negative velocity. Likewise, a reference frame can be moving in the positive or negative direction. Any formula, including the Lorentz Transformation (LT), depends on these things.
You can't just apply any formula without checking that the formula applies to the way you have set up your problem. It's no different from, say, knowing when to take gravity to be positive (when down is the positive direction) and negative (when up is the positive direction). Take a look at the way the LT is derived/defined and note the assumptions that relate one frame (normally unprimed) with another frame (normally primed, S' or whatever). If you do that, you'll see when to use the LT and when to use the "inverse" LT.

10. Jan 4, 2016

### Isaac0427

So if I am understanding you correctly, it is completely based on the situation. I was just wondering if there is some mathematical reason to use one or the other.

11. Jan 5, 2016

### PeroK

The usual Lorentz Transformation is:

$t' = \gamma (t - vx/c^2)$ and $x' = \gamma (x - vt)$

This relates the coordinates of time $t'$ and distance $x'$ for a frame moving at velocity $v$ with respect to another frame. There are three important points: the frames have a common origin, so $t = 0, x = 0$ corresponds to $t' = 0, x' = 0$; the positive $x/x'$ direction is the same for both frames; and $v$ is positive if the motion of the "moving" frame is in the positive x-direction.

For example, if $v = c/2$ then this means the moving frame is moving to the right (positive x-direction). In this case you have:

$t' = \gamma (t - x/2c)$ and $x' = \gamma (x - ct/2)$

The "inverse" LT is:

$t = \gamma (t' + vx'/c^2)$ and $x = \gamma (x' + vt')$

This gives the $t, x$ coordinates in terms of $t', x'$, in the same set-up as above. Now, you can see this two ways. The quick way is to note that you can consider the unprimed frame as moving to the left with respect to the primed (') frame. So, you just apply the normal LT equation using $-(-v) = +v$ instead of $-v$ and swap the roles of the coordinates (my first suggestion is that you think that through). For example.
If $v = c/2$ you have:

$t = \gamma (t' + x'/2c)$ and $x = \gamma (x' + ct'/2)$

The second way is to do the algebra and rearrange the LT equations to express $t', x'$ in terms of $t, x$. This will confirm the quick way. My second suggestion is that you do that too. It's a good algebraic exercise if nothing else. You can see now in the hyperphysics page that they were simply using the inverse LT to relate coordinates in the "moving" frame to coordinates in the "rest" frame. And that the set-up on that page is as I described it above.

12. Jan 5, 2016

### Isaac0427

Ok, I completely understand this. My only question is why do you use the regular transformation for length contraction and the inverse transformation for time dilation.

13. Jan 5, 2016

### PeroK

It depends what they are trying to do. That page doesn't have much explanation, so it's maybe not a good place to learn. Especially the length contraction needs a bit more explanation. Over to Mister T ...

14. Jan 5, 2016

### Mister T

You can use either transformation to derive either effect. You originally asked about a confusion over time dilation, so let's look at that. (Assume a reference event at $(0,0)$ as usual.)

Consider the transformation equation $t'=(t-\frac{vx}{c^2}) \gamma$. If I let $x=0$ then $t$ is by definition a proper time and $t'=t \gamma$.

Consider the inverse transformation equation $t=(t'+\frac{vx'}{c^2}) \gamma$. If I let $x'=0$ then $t'$ is by definition a proper time and $t=t' \gamma$.

In all cases you multiply the proper time by $\gamma$ to get the dilated time. Since $\gamma \geq 1$ the dilated time is greater than or equal to the proper time.

15. Jan 5, 2016

### Isaac0427

Oh, so depending on which transformation you use, either L or L0 could be proper time. This makes more sense. Thank you.

16. Jan 5, 2016

### Mister T

You can use whatever you want to refer to whatever you choose, but keep in mind that when you use nonstandard notation you have trouble communicating with others.
Usually $L$ is used for a distance or length, not a time. And usually the subscript "o" refers to the proper value. So $L_o$ would be a proper length and $t_o$ would be a proper time. It's even more common to call $\tau$ the proper time.

The link in your first post is sloppy and inconsistent with its notation, first using $\tau_o$ for the proper time and then later switching to $T_o$. This is at least a factor in your confusion if not the outright cause. In your first post you first used $t$ and $t'$ and then later switched to $T_o$ and $T$ without any indication, let alone a clear indication, of what that change in notation meant. All of your confusion stems from that not being sorted out.

Last edited: Jan 5, 2016

17. Jan 6, 2016

### PeroK

I don't like the explanation for length contraction on that page. Here's how I would look at it. Imagine we have an object moving at velocity $v$. At $t=0$ the ends are at $x_1$ and $x_2$. At time $t$, therefore, the ends are at:

$(t, x_1 + vt)$ and $(t, x_2 + vt)$

And the length of the object in this frame is $x_2 - x_1$.

What are the coordinates of the two ends in a frame moving with the object at velocity $v$?

$x_1' = \gamma (x_1 + vt - vt) = \gamma x_1$

$x_2' = \gamma (x_2 + vt - vt) = \gamma x_2$

The object is at rest in this frame (its position does not depend on $t'$) and so the length of the object is measured as:

$x_2' - x_1' = \gamma (x_2 - x_1)$

18. Jan 6, 2016

### Isaac0427

In this example $x_2 - x_1$ is the proper length. If the equations were

$x_1 = \gamma (x_1' + vt' - vt') = \gamma x_1'$

$x_2 = \gamma (x_2' + vt' - vt') = \gamma x_2'$

$x_2 - x_1 = \gamma (x_2' - x_1')$

then $x_2' - x_1'$ would be the proper length, correct?

19. Jan 6, 2016

### Isaac0427

Sorry, I meant T and T0, not L and L0.

20. Jan 6, 2016

### PeroK

Not at all. Proper length is the length in a frame where the object is at rest.
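The algebra in the posts above is easy to check numerically. A minimal sketch in Python (not from the thread itself; units chosen so that $c = 1$), confirming that the LT and the "inverse" LT really are inverses of each other:

```python
import math

C = 1.0  # work in units where c = 1

def lorentz(t, x, v):
    """Standard LT: coordinates in a frame moving at velocity +v."""
    g = 1.0 / math.sqrt(1.0 - v**2 / C**2)
    return g * (t - v * x / C**2), g * (x - v * t)

def inverse_lorentz(tp, xp, v):
    """Inverse LT: the same formulas with v replaced by -v."""
    return lorentz(tp, xp, -v)

v = 0.5 * C
t, x = 3.0, 2.0
tp, xp = lorentz(t, x, v)        # into the moving frame
t2, x2 = inverse_lorentz(tp, xp, v)  # and back again
print(abs(t2 - t) < 1e-12 and abs(x2 - x) < 1e-12)  # True
```

Round-tripping any event through both transformations recovers the original coordinates, which is the "quick way" PeroK describes done by machine instead of by algebra.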
https://asmedigitalcollection.asme.org/nuclearengineering/article-abstract/2/4/041006/369815/Design-and-Performance-Evaluation-of-a-Heat?redirectedFrom=fulltext
Hitachi-GE developed a 300 MWe-class modular simplified and medium small reactor (DMS) between 2000 and 2004. It was designed to have merits over traditional nuclear power plants in areas of lower initial capital investment, flexibility, enhanced safety, and security. The balance of plant (BOP) system of the DMS was originally designed for supplying just electricity. In this study, a cogeneration DMS that supplies both electricity and heat is under investigation. The heat exchanger (HX) network, mainly consisting of the BOP heat exchanger, water pump, and the heat exchangers that deliver heat to the thermal utilization (TU) applications, must operate in an efficient way to keep the overall system costs low. In this paper, the configuration of a heat exchanger network that serves various TU applications is investigated first. A numerical model of the heat exchanger network is built, and sensitivity studies are performed to estimate the energy efficiency and exergy efficiency of the whole network under different design and operating conditions (e.g., different water temperatures and flow rates). Important design and operating parameters, which significantly impact the performance of the network, are evaluated and presented.
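The exergy efficiency the abstract refers to can be illustrated with a minimal sketch for a single water-to-water heat exchanger. All temperatures, flow rates, and the dead-state temperature below are illustrative assumptions, not values from the paper:

```python
import math

T0 = 298.15  # dead-state (ambient) temperature, K (assumed)
CP = 4.18    # specific heat of water, kJ/(kg*K)

def flow_exergy_change(m_dot, t_in, t_out):
    """Exergy change rate of a water stream heated/cooled from t_in to t_out, kW."""
    return m_dot * CP * ((t_out - t_in) - T0 * math.log(t_out / t_in))

# Illustrative counter-flow HX: hot side 363 K -> 333 K, cold side 303 K -> 323 K
m_hot, m_cold = 1.0, 1.5  # kg/s (assumed, chosen so the energy balance closes)
ex_lost_hot = -flow_exergy_change(m_hot, 363.0, 333.0)    # exergy given up by hot stream
ex_gained_cold = flow_exergy_change(m_cold, 303.0, 323.0)  # exergy picked up by cold stream
eta_ex = ex_gained_cold / ex_lost_hot  # second-law (exergy) efficiency
print(f"exergy efficiency ~ {eta_ex:.2f}")
```

The gap between exergy lost and exergy gained is the exergy destroyed by heat transfer across a finite temperature difference; the sensitivity studies in the paper vary exactly the kind of temperatures and flow rates parameterised here.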
https://bathmash.github.io/HELM/30_4_mtrx_norms-web/30_4_mtrx_norms-web.html
### Introduction

A matrix norm is a number defined in terms of the entries of the matrix. The norm is a useful quantity which can give important information about a matrix.

#### Prerequisites

• be familiar with matrices and their use in writing systems of equations

• revise material on matrix inverses, be able to find the inverse of a $2×2$ matrix, and know when no inverse exists

• revise Gaussian elimination and partial pivoting

• be aware of the discussion of ill-conditioned and well-conditioned problems earlier in Section 30.1

#### Learning Outcomes

• calculate norms and condition numbers of small matrices

• adjust certain systems of equations with a view to better conditioning

1.1 The 1-norm

1.4 Other norms
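As a concrete illustration of the learning outcomes (a sketch, not taken from the HELM workbook itself): the matrix 1-norm is the maximum absolute column sum, and the condition number is $\|A\| \cdot \|A^{-1}\|$. For a $2×2$ matrix both are easy to compute directly:

```python
def one_norm(a):
    """Matrix 1-norm of a square matrix (list of rows): max absolute column sum."""
    n = len(a)
    return max(sum(abs(a[i][j]) for i in range(n)) for j in range(n))

def inverse_2x2(a):
    """Inverse of a 2x2 matrix; raises if the matrix is singular (no inverse exists)."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    if det == 0:
        raise ValueError("matrix is singular; no inverse exists")
    return [[ a[1][1] / det, -a[0][1] / det],
            [-a[1][0] / det,  a[0][0] / det]]

a = [[1.0, 2.0],
     [3.0, 4.0]]
cond = one_norm(a) * one_norm(inverse_2x2(a))
print(cond)  # condition number in the 1-norm
```

A large condition number signals an ill-conditioned system, tying this section back to the earlier discussion of ill-conditioned and well-conditioned problems.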
http://askbot.fedoraproject.org/en/answers/115618/revisions/
# Revision history

You can use RPM to do so. Delete the package and reinstall it. First find the exact package name:

rpm -qa | grep dnf

then remove it (note that rpm -e takes the package name, not an .rpm file):

rpm -e thatpackage

then finally, to install it again from the downloaded RPM file:

rpm -i thatpackage.rpm

That should completely reinstall the package, but I am not sure it will solve the problem of the missing directory. Please tell me how it goes.
http://ucalgary.ca/rzach/papers/etheorem.html
University of Calgary

The epsilon calculus and Herbrand complexity

Source Studia Logica 82 (2006) 133-155 (with Georg Moser)

Abstract Hilbert’s ε-calculus is based on an extension of the language of predicate logic by a term-forming operator εx. Two fundamental results about the ε-calculus, the first and second epsilon theorem, play a role similar to that which the cut-elimination theorem plays in sequent calculus. In particular, Herbrand’s Theorem is a consequence of the epsilon theorems. The paper investigates the epsilon theorems and the complexity of the elimination procedure underlying their proof, as well as the length of Herbrand disjunctions of existential theorems obtained by this elimination procedure.

Review Mathematical Reviews 2205042 (2006k:03127) (Mitsuru Yasuhara)
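For readers unfamiliar with the ε-operator: its defining (critical) axiom schema — standard in presentations of the ε-calculus, though not quoted in the abstract above — says that if anything satisfies a formula, then the ε-term does:

```latex
% Critical axiom schema of the epsilon calculus:
% if some witness t satisfies A, then the epsilon term is such a witness.
A(t) \rightarrow A(\varepsilon_x\, A(x))
```

Eliminating these critical formulas from a proof is what the first and second epsilon theorems accomplish, and the complexity of that elimination is the subject of the paper.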
https://natverse.org/neuromorphr/reference/neuromorpho_read_neurons.html
Read standardised neurons from neuromorpho.org, given a single, or vector of, neuron ID or neuron name. Neurons can be returned as SWC-style data frames or as a nat package neuron/neuronlist object, which can be plotted in 3D using rgl and analysed with tools from the nat ecosystem. Each neuron in the neuromorpho repository is represented by a name, general information (metadata), the original and standardised SWC files of the digital morphological reconstruction (see details on the standardisation process), and a set of morphometric features (see details on available measures).

neuromorpho_read_neurons(neuron_name = NULL, neuron_id = NULL, nat = TRUE, batch.size = 2, meta = TRUE, light = TRUE, find = FALSE, progress = TRUE, neuromorpho_url = "http://neuromorpho.org", ...)

neuromorpho_read_neuron(neuron_name = NULL, neuron_id = NULL, nat = TRUE, neuromorpho_url = "http://neuromorpho.org", ...)

## Arguments

neuron_name — a neuron name, or vector of neuron names, as recorded in the neuromorpho database. Names and neuron IDs can be found by searching the repository, for example via neuromorpho_search.

neuron_id — a neuron ID, or vector of neuron IDs, as recorded in the neuromorpho database. If neuron_name is given it supersedes neuron_id, which is then treated as if its value were NULL.

nat — if TRUE, neurons are returned formatted as a neuron object, in a neuronlist. See details for more information. Otherwise, a data frame is returned in the SWC file format. If TRUE, the resulting neuronlist object's associated meta data will be pulled using neuromorpho_neuron_meta.

batch.size — the number of requests sent at once to neuromorpho.org, using multi_run. Requests are sent to neuromorpho.org in parallel to speed up the process of reading neurons. Batches of queries are processed serially. Increasing the value of batch.size may reduce read time.

meta — if TRUE, meta data is retrieved for the returned neuronlist or list object, using neuromorpho_neuron_meta.
light — if TRUE, only a subset of the full meta data for each neuron is returned with the resulting neuronlist.

find — if TRUE, then we scrape each neuron's webpage to find the correct link to download its SWC file. This is more stable, but more time consuming, than setting find = FALSE and using the standard neuromorpho.org format for the download link. If the database changes, or you cannot find your neuron even though you know it exists, try setting find = TRUE.

progress — if TRUE or a numeric value, a progress bar is shown. The bar progresses when each batch is completed and completes when all batches are done.

neuromorpho_url — the base URL for querying the neuromorpho database; defaults to http://neuromorpho.org

... — methods passed to neuromorpho_async_req, or in some cases, neuromorphr:::neuromorpho_fetch

## Value

if nat = TRUE, a neuronlist object is returned. If FALSE, a list of data frames for neuron morphologies in SWC format is returned.

## Details

A single neuron can be read using neuromorpho_read_neuron, or multiple using neuromorpho_read_neurons. If nat = TRUE, then neurons are returned as a neuron object. If multiple neurons are returned, they will be given together in a neuronlist. This format and its manipulation is described in detail here. When using neuromorpho_read_neurons, meta data for the neuron is also returned using neuromorpho_neuron_meta. If light = TRUE, then only a subset of this meta data is returned, i.e. the fields: neuron_id, neuron_name, species, brain_region, cell_type, archive.

Note that since neurons are reconstructed from many different neural systems and species, there is no 'standard' orientation. Instead, neuromorpho.org's standardisation process orients the morphologies by placing the soma at the origin of coordinates and aligning the first three principal components of all XYZ coordinates with height, width, and depth.
neuromorpho_neurons_info, neuromorpho_neurons_meta

## Examples

# NOT RUN {
# Let's get all the elephant neurons in the repository
## First, we need to find their names or IDs
elephant.df = neuromorpho_search(search_terms = "species:elephant")

## Let's see what cell types we have here
t = table(elephant.df$cell_type)
t

## We have many pyramidal cells. Let's get those.
neuron_names = subset(elephant.df, cell_type == names(t)[which.max(t)])$neuron_name
# }
https://www.bartleby.com/solution-answer/chapter-3-problem-10re-intermediate-accounting-reporting-and-analysis-3rd-edition/9781337788281/use-the-information-in-re3-6-a-assuming-ringo-company-makes-reversing-entries-prepare-the/65009554-8c54-11e9-8385-02ee952b546e
Chapter 3, Problem 10RE

### Intermediate Accounting: Reporting...

3rd Edition
James M. Wahlen + 2 others
ISBN: 9781337788281

Textbook Problem

# Use the information in RE3-6, (a) assuming Ringo Company makes reversing entries, prepare the reversing entry on January 1, and the journal entry to record the payment of the note on April 1; and (b) assuming Ringo does not make reversing entries, prepare the journal entry to record the payment of the note on April 1.

To determine

Prepare the journal entry to record the payment of the note on April 1, assuming that (a) Company R makes reversing entries, and (b) Company R does not make reversing entries.

Explanation

Journal entry: A journal entry records economic events that can be measured in monetary terms. Entries are recorded chronologically and systematically.

Reversing entries: Reversing entries are made at the beginning of an accounting period to cancel an adjusting entry made in the previous accounting period. A reversing entry simply reverses the adjusting entry and enables the company to simplify the recording of subsequent transactions related to it.

Rules of Debit and Credit: The following rules are followed for debiting and crediting accounts in business transactions:

• Debit all increases in assets, expenses, and dividends, and all decreases in liabilities, revenues, and stockholders' equity.

• Credit all increases in liabilities, revenues, and stockholders' equity, and all decreases in assets and expenses.
Prepare the journal entry to record the payment of the note on April 1 as follows:

(a) Company R makes reversing entries:

Date        Account Title and Explanation    Debit ($)    Credit ($)
January 1   Interest payable                 1,350
            Interest expense (1)                          1,350
(To record the reversing entry for the interest expense)

Table (1)

• Interest payable is a liability account, and this entry decreases liabilities. Hence, debit interest payable for $1,350.

• Interest expense is a component of shareholders' equity, and crediting it here increases shareholders' equity. Hence, credit interest expense for $1,350.

Working note (1): Calculate the amount of interest expense accrued in the prior year.

Interest expense = Note payable × Interest rate × (Number of months interest accrued / Months in a year)
                 = $20,000 × 9% × 9/12 (April 1 to December 31)
                 = $1,350

Date        Account Title and Explanation    Debit ($)    Credit ($)
April 1     Notes payable                    20,000
            Interest expense (2)             1,800
            Cash                                          21,800
(To record the principal and interest on the note paid)

Table (2)

• Notes payable is a liability account, and this entry decreases liabilities. Hence, debit notes payable for $20,000.

• Interest expense is a component of shareholders' equity, and it decreases the value of shareholders' equity. Hence, debit interest expense for $1,800.

Working note (2): Calculate the full year of interest on the note.

Interest expense = $20,000 × 9% × 12/12 = $1,800
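The arithmetic behind both entries can be sketched quickly (figures from the problem: a $20,000, 9% note, with interest accrued April 1 through December 31):

```python
principal = 20_000.0
rate = 0.09

accrued_prior_year = principal * rate * 9 / 12   # Apr 1 - Dec 31 of the prior year
full_year_interest = principal * rate            # interest for the full note term
current_year_share = full_year_interest - accrued_prior_year  # Jan 1 - Apr 1
cash_paid = principal + full_year_interest       # principal plus all interest

# With a reversing entry, the April 1 payment debits the full interest to
# interest expense; without one, it debits only the current-year share to
# expense and the rest to interest payable. Cash paid is the same either way.
print(accrued_prior_year, full_year_interest, cash_paid)
```

This makes the point of the reversing entry concrete: it shifts the bookkeeping split of the interest, not the amounts themselves.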
https://pure.mpg.de/pubman/faces/ViewItemOverviewPage.jsp?itemId=item_3017487_3
Released Paper

#### Instantons on hyperkähler manifolds

##### MPS-Authors

Devchand, Chandrashekar
Quantum Gravity & Unified Theories, AEI-Golm, MPI for Gravitational Physics, Max Planck Society

##### External Resource

No external resources are shared

##### Fulltext (public)

1812.06498.pdf (Preprint), 435KB

##### Supplementary Material (public)

There is no public supplementary material available

##### Citation

Devchand, C., Pontecorvo, M., & Spiro, A. (in preparation). Instantons on hyperkähler manifolds.

Cite as: http://hdl.handle.net/21.11116/0000-0002-BB70-D

##### Abstract

An instanton $(E, D)$ on a (pseudo-)hyperkähler manifold $M$ is a vector bundle $E$ associated to a principal $G$-bundle with a connection $D$ whose curvature is pointwise invariant under the quaternionic structures of $T_x M, \ x\in M$, and thus satisfies the Yang-Mills equations. Revisiting a construction of solutions, we prove a local bijection between gauge equivalence classes of instantons on $M$ and equivalence classes of certain holomorphic functions taking values in the Lie algebra of $G^\mathbb{C}$ defined on an appropriate $SL_2(\mathbb{C})$-bundle over $M$. Our reformulation affords a streamlined proof of Uhlenbeck's Compactness Theorem for instantons on (pseudo-)hyperkähler manifolds.
https://webthesis.biblio.polito.it/16781/
# An energy autonomous electronic transmitting system for green plant sensing applications

Stefano Calvo, An energy autonomous electronic transmitting system for green plant sensing applications. Rel. Danilo Demarchi, Alessandro Sanginario, Umberto Garlando. Politecnico di Torino, Corso di laurea magistrale in Nanotechnologies For Icts (Nanotecnologie Per Le Ict), 2020

Abstract:

This dissertation deals with the design of a completely autonomous electronic transmitting system meant for plant sensing applications. It builds on a previous study carried out on tobacco plants, which examined how the electrical impedance of the plant stem varies during the drying process and analysed which frequency range is best for signal propagation. It was found that electrical impedance grows over time if no watering events occur, and that it decreases right after the plant gets water. This mechanism offers a considerable possibility: understanding plant health conditions by inspecting the plant's impedance, directly exploiting electrical signal propagation inside it. This kind of analysis has never been done: watering status information has always been acquired through readings from external sensors (for example, soil humidity, sun irradiation, and temperature) or human knowledge. This work aims to provide a system able to overcome this limit and offer the possibility of getting information directly from plants. The target device must show essential characteristics: low power consumption, reliability, long durability, biocompatibility, and (hopefully) compactness. In the thesis, every component used to create the transmitting system is presented together with the reasons leading to its choice, its advantages, and its drawbacks. Since the transmitter has to be implemented directly on plants, it must rely on natural energy sources. The chosen one has to be renewable, reliable, and easy to convert.
To this end, an overview of all the exploitable sources, the related harvesting devices, and the reasons that led to discarding or choosing them is presented first. Renewable resources are not entirely reliable (for example, there is no sunlight to convert at night, and the wind does not always blow), so a storage device must be implemented to guarantee a higher level of durability. Such a device must have specific features and satisfy certain performance standards; therefore a brief overview of storage devices is given to choose the most suited one, and the reasons leading to the final choice are described in detail. Then the stratagems used to prevent useless power consumption and improve power management are reported, together with the devices enabling them and the advantages and disadvantages their implementation implies. After the description of every transmitter component, the whole device and its working principle are shown, highlighting the reasons that led to discarding or choosing it. The complete transmitter is then presented by assembling every component, and its working principle is detailed in full. Finally, tests performed on a real tobacco plant are presented. They highlighted that, even in the total absence of an energy source, the power-saving measures are quite useful, leading to a power consumption of about $18 \ \mu W$ per measurement. The estimated transmitter autonomy is thus quite satisfactory: approximately one day and a half. Moreover, the system showed sufficient autonomy even when sunlight was weak (cloudy days) or highly scattered (foggy days), lasting for more than two days and corroborating the goodness of the choices made. At the end of this work, possible improvements to performance are presented.

Rel. Danilo Demarchi, Alessandro Sanginario, Umberto Garlando
2020/21 Electronic 123 Thesis under embargo (Tesi secretata).
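The roughly one-and-a-half-day autonomy can be sanity-checked with a back-of-the-envelope estimate. The storage capacity, voltages, and average load below are illustrative assumptions only (they are not values from the thesis); they simply show the kind of calculation involved:

```python
# All storage and load values below are assumed for illustration.
C = 10.0        # supercapacitor capacitance, F (assumed)
V_FULL = 5.0    # fully charged voltage, V (assumed)
V_MIN = 2.5     # minimum usable voltage, V (assumed)
P_AVG = 0.7e-3  # average load power, W (assumed)

# Usable energy in a capacitor discharged from V_FULL down to V_MIN
usable_energy = 0.5 * C * (V_FULL**2 - V_MIN**2)  # joules
autonomy_days = usable_energy / P_AVG / 86_400
print(f"estimated autonomy: {autonomy_days:.1f} days")
```

With these assumed numbers the estimate lands in the same ballpark as the autonomy reported in the abstract, illustrating why a sub-milliwatt average draw is essential for a harvester-powered node to survive sunless periods.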
Full text not available

Corso di laurea magistrale in Nanotechnologies For Icts (Nanotecnologie Per Le Ict)
New organization > Master science > LM-29 - ELECTRONIC ENGINEERING
INSTITUT NATIONAL POLYTECHNIQUE DE GRENOBLE (INPG) - PHELMA (FRANCIA)
http://webthesis.biblio.polito.it/id/eprint/16781
https://en.wikipedia.org/wiki/Molar_absorptivity
# Molar absorption coefficient

In chemistry, the molar absorption coefficient or molar attenuation coefficient (ε)[1] is a measurement of how strongly a chemical species absorbs, and thereby attenuates, light at a given wavelength. It is an intrinsic property of the species. The SI unit of molar absorption coefficient is the square metre per mole (m²/mol), but in practice, quantities are usually expressed in terms of M−1⋅cm−1 or L⋅mol−1⋅cm−1 (the latter two units are both equal to 0.1 m²/mol). In older literature, the cm²/mol is sometimes used; 1 M−1⋅cm−1 equals 1000 cm²/mol. The molar absorption coefficient is also known as the molar extinction coefficient and molar absorptivity, but the use of these alternative terms has been discouraged by the IUPAC.[2][3]

## Beer–Lambert law

The absorbance of a material that has only one absorbing species also depends on the pathlength and the concentration of the species, according to the Beer–Lambert law

$A = \varepsilon c \ell,$

where

• ε is the molar absorption coefficient of that material;

• c is the molar concentration of those species;

• ℓ is the path length.

Different disciplines have different conventions as to whether absorbance is decadic (10-based) or Napierian (e-based), i.e., defined with respect to the transmission via common logarithm (log10) or natural logarithm (ln). The molar absorption coefficient is usually decadic.[1][4] When ambiguity exists, it is best to indicate which one applies.

When there are N absorbing species in a solution, the overall absorbance is the sum of the absorbances for each individual species i:

$A = \sum_{i=1}^{N} A_i = \ell \sum_{i=1}^{N} \varepsilon_i c_i.$

The composition of a mixture of N absorbing species can be found by measuring the absorbance at N wavelengths (the values of the molar absorption coefficient for each species at these wavelengths must also be known).
The wavelengths chosen are usually the wavelengths of maximum absorption (absorbance maxima) for the individual species. None of the wavelengths may be an isosbestic point for a pair of species. The following set of simultaneous equations can be solved to find the concentrations of each absorbing species:

$\begin{cases} A(\lambda_1) = \ell \sum_{i=1}^{N} \varepsilon_i(\lambda_1) c_i, \\ \ldots \\ A(\lambda_N) = \ell \sum_{i=1}^{N} \varepsilon_i(\lambda_N) c_i. \end{cases}$

The molar absorption coefficient (in units of cm2) is directly related to the attenuation cross section σ via the Avogadro constant NA:[5]

$\sigma = \ln(10) \frac{10^3}{N_\text{A}} \varepsilon \approx 3.82353216 \times 10^{-21}\, \varepsilon.$

## Mass absorption coefficient

The mass absorption coefficient is equal to the molar absorption coefficient divided by the molar mass of the absorbing species:

εm = ε / M

where

• εm = mass absorption coefficient
• ε = molar absorption coefficient
• M = molar mass of the absorbing species

## Proteins

In biochemistry, the molar absorption coefficient of a protein at 280 nm depends almost exclusively on the number of aromatic residues, particularly tryptophan, and can be predicted from the sequence of amino acids.[6] Similarly, the molar absorption coefficient of nucleic acids at 260 nm can be predicted given the nucleotide sequence. If the molar absorption coefficient is known, it can be used to determine the concentration of a protein in solution.

## References

1. ^ a b "Chapter 11 Section 2 - Terms and symbols used in photochemistry and in light scattering" (PDF). Compendium on Analytical Nomenclature (Orange Book). IUPAC. 2002. p. 28.
2. ^ IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (2006–) "Extinction". doi:10.1351/goldbook.E02293
3. ^ IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997).
Online corrected version: (2006–) "Absorptivity". doi:10.1351/goldbook.A00044 4. ^ "Molecular Spectroscopy" (PDF). Compendium on Analytical Nomenclature. IUPAC. 2002."Measuring techniques" (PDF). Compendium on Analytical Nomenclature. IUPAC. 2002. 5. ^ Lakowicz, J. R. (2006). Principles of Fluorescence Spectroscopy (3rd ed.). New York: Springer. p. 59. ISBN 9780387312781. 6. ^ Gill, S. C.; von Hippel, P. H. (1989). "Calculation of protein extinction coefficients from amino acid sequence data". Analytical Biochemistry. 182 (2): 319–326. doi:10.1016/0003-2697(89)90602-7. PMID 2610349.
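As a sketch of how the simultaneous Beer–Lambert equations above can be solved in practice for a two-species mixture, the following uses hypothetical molar absorption coefficients and concentrations (the numbers are illustrative only, not measured data):

```python
# Two absorbing species measured at two wavelengths.  The epsilon values
# below are hypothetical, chosen only to illustrate the algebra.
eps = [[5000.0, 1000.0],   # wavelength 1: eps for species 1, species 2
       [800.0,  4000.0]]   # wavelength 2: eps for species 1, species 2
path = 1.0                 # path length in cm

# Fabricate absorbances from known concentrations (mol/L) via Beer-Lambert:
# A_i = l * (eps_i1 * c_1 + eps_i2 * c_2)
c_true = [1.0e-5, 2.0e-5]
A = [path * (eps[i][0] * c_true[0] + eps[i][1] * c_true[1]) for i in range(2)]

# Recover the concentrations by solving the 2x2 linear system
# with Cramer's rule.
a, b = path * eps[0][0], path * eps[0][1]
d, e = path * eps[1][0], path * eps[1][1]
det = a * e - b * d
c1 = (A[0] * e - b * A[1]) / det
c2 = (a * A[1] - A[0] * d) / det
print(c1, c2)
```

Solving the system recovers the concentrations used to fabricate the absorbances; with real data the ε values at each wavelength must be known in advance, as the article notes.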
https://shinerightstudio.com/posts/animate-nodes-using-skaction-in-ios-sprite-kit/
# Animate Nodes Using SKAction in iOS Sprite Kit

Apple's Sprite Kit is a really well-designed and convenient framework for iOS game development. In this article, I will briefly introduce Sprite Kit Action (SKAction for short), which is the bread and butter for animating nodes in Sprite Kit.

## The Basics

In Sprite Kit, every node can run an SKAction by calling the run(_:) function. run(_:) takes an SKAction as its parameter, and will perform the action immediately after the call. Below is a simple example of running an SKAction:

```swift
let moveAction = SKAction.move(to: CGPoint(x: 10.0, y: 20.0), duration: 5.0)
spriteNode.run(moveAction)
```

The code above moves (aka. translates) spriteNode to (x: 10, y: 20) in 5 seconds. Sprite Kit also provides several variants of each action type. For instance, aside from SKAction.move(to:duration:), there is also a move(by:duration:) function, which lets you specify the amount of movement instead of the exact destination of the movement. On top of the move action, rotate and scale are also frequently used actions which animate the transform of nodes.

## Completion Closure

When performing animations, it is quite often that we want to do some other stuff when the animation ends. For example, after scaling up a sprite, we may want it to change color. SKAction fulfills this need by providing another run(_:completion:) function. It receives an additional () -> Void typed closure named completion as its parameter. The completion closure will be run right after the action is completed. Below is an example.

```swift
let scaleAction = SKAction.scale(to: 2.0, duration: 3.0)
spriteNode.run(scaleAction, completion: { spriteNode.color = SKColor.red })
```

This code scales spriteNode to twice its original size over 3 seconds, then changes its color to red right after that.
You can also pass the closure in the more structured trailing-closure form:

```swift
let scaleAction = SKAction.scale(to: 2.0, duration: 3.0)
spriteNode.run(scaleAction) {
    spriteNode.color = SKColor.red
}
```

## Stop an Action

In almost all games, some animations will have to stop before completion. For example, when a monster is killed, we should definitely stop its movement and play its dying animation at the exact position at which it was killed. Sprite Kit uses keys to attain this functionality. We can assign a key when running any SKAction:

```swift
spriteNode.run(someAction, withKey: "my_key")
```

SKNode's run(_:withKey:) function lets you assign a String typed key to any action you start. Back to the topic: if you want to stop an action while it is still running, you can simply call removeAction(forKey:) on the node running it. Below is a brief example.

```swift
func spawnMonsterAndMove() {
    // ...
    // Move the monster by the move action, keyed so it can be stopped later.
    let moveMonster = SKAction.move(to: CGPoint(x: 100.0, y: 100.0), duration: 10.0)
    monsterNode.run(moveMonster, withKey: "move_monster")
}

// Called when the monster is shot.
func monsterDied() {
    // Stops the move action of the monster node.
    monsterNode.removeAction(forKey: "move_monster")
    // ...
}
```

## Sequence, Group, Wait

There are also times when you want to perform multiple animations sequentially, or perform a group of animations at the same time.

#### Sequence

There is a sequence action in Sprite Kit which takes an array of SKActions as its parameter. When running a sequence action, it runs the separate actions one by one. Below is an example.

```swift
let moveAction = SKAction.move(to: CGPoint(x: 10.0, y: 20.0), duration: 10.0)
let scaleAction = SKAction.scale(by: 2.0, duration: 1.0)

// Move the sprite, then scale it by 2.
let moveThenScaleAction = SKAction.sequence([moveAction, scaleAction])
spriteNode.run(moveThenScaleAction)
```

#### Wait

Now you can run a sequence of actions quite easily; often, however, we will want a small pause between the consecutive actions. SKAction.wait(forDuration:) is what you need here. By adding a wait action in the middle of the action array, a small pause appears in the animation:

```swift
let moveAction = SKAction.move(to: CGPoint(x: 10.0, y: 20.0), duration: 10.0)
let waitAction = SKAction.wait(forDuration: 1.0)
let scaleAction = SKAction.scale(by: 2.0, duration: 1.0)

// Move the sprite, pause for 1 sec, then scale it by 2.
let moveThenScaleAction = SKAction.sequence([moveAction, waitAction, scaleAction])
spriteNode.run(moveThenScaleAction)
```

#### Closure as an action

Other times, instead of pausing, we want to run some other code in the middle of the sequence. There is an SKAction.run(_:) action which takes a () -> Void typed closure as its parameter. The closure will be called when the action is run. For example, if we want to change the color of spriteNode between the move and scale actions, we can do this:

```swift
let moveAction = SKAction.move(to: CGPoint(x: 10.0, y: 20.0), duration: 10.0)
let changeColor = SKAction.run({ spriteNode.color = SKColor.red })
let scaleAction = SKAction.scale(by: 2.0, duration: 1.0)

// Move the sprite, change its color to red, then scale it by 2.
let moveThenScaleAction = SKAction.sequence([moveAction, changeColor, scaleAction])
spriteNode.run(moveThenScaleAction)
```

#### Group

The group action is similar to the sequence action: it takes an array of SKActions as its parameter. However, instead of running the actions one by one, it runs all the actions at the same time. By changing the previous example from SKAction.sequence(_:) to SKAction.group(_:), the sprite will start scaling and moving at the same time.

## Repeating

Repeating an action multiple times is easy in Sprite Kit. There are 2 types of SKActions which we use from time to time to repeat actions. Firstly, if you want to repeat an action infinitely, you should use SKAction.repeatForever(_:). The code below moves spriteNode back and forth forever.

```swift
let moveForth = SKAction.move(to: CGPoint(x: 100.0, y: 100.0), duration: 10.0)
let moveBack = SKAction.move(to: CGPoint(x: 0.0, y: 0.0), duration: 10.0)
let moveBackAndForth = SKAction.sequence([moveForth, moveBack])

let repeatMovement = SKAction.repeatForever(moveBackAndForth)
spriteNode.run(repeatMovement)
```

Otherwise, if you want to repeat the action for only a finite number of times, use SKAction.repeat(_:count:) instead.

```swift
let moveForth = SKAction.move(to: CGPoint(x: 100.0, y: 100.0), duration: 10.0)
let moveBack = SKAction.move(to: CGPoint(x: 0.0, y: 0.0), duration: 10.0)
let moveBackAndForth = SKAction.sequence([moveForth, moveBack])

let repeatMovement = SKAction.repeat(moveBackAndForth, count: 2)
spriteNode.run(repeatMovement)
```

The code above will move spriteNode back and forth only 2 times.

## Conclusion

From June until now, I have programmed 3 mini games for my Google Summer of Code 2017 project PowerUp-iOS. In all three of the games, I used SKActions intensively for the animation code. Compared to Coroutine in Unity, I think Sprite Kit Action provides a more elegant and simpler way to code animations. Hope you can also appreciate the beauty of Sprite Kit Action. :D
http://mathoverflow.net/questions/87083/explicit-element-in-free-group-which-is-killed-by-every-solvable-quotient/96157
# Explicit element in free group which is killed by every solvable quotient The free group on two generators $F_2=\langle x,y|\rangle$ is the fundamental group of $\mathbb P^1(\mathbb C)\setminus\{0,1,\infty\}$. Now, there are plenty of galois covers of this space whose galois group is not solvable. Thus the "maximal solvable cover" (i.e. the limit over all galois covers with solvable galois group) is not the universal cover, but rather a quotient thereof. In other words, the natural map: $$F_2\to\lim_{\begin{smallmatrix}\longleftarrow\cr H\unlhd F_2\cr F_2/H\text{ finite solvable}\end{smallmatrix}}F_2/H$$ is not injective. Can someone exhibit an explicit element of the kernel? What about the shortest element (by word length in $F_2$) in the kernel? In other words, the question is: what universal word in $x,y$ always vanishes when $x,y$ are specialized to elements of some solvable group $G$? (note that since $G$ is solvable, so is the subgroup generated by $x$ and $y$). Such an element now has the following seemingly impossible property. Consider it as a closed path $\gamma$ in $\mathbb P^1(\mathbb C)\setminus\{0,1,\infty\}$. Now try to lift $\gamma$ to the cover of $\mathbb P^1(\mathbb C)\setminus\{0,1,\infty\}$ corresponding to the linking number with $0$ (i.e. we take the universal cover of $\mathbb P^1(\mathbb C)\setminus\{0,\infty\}$ and take the inverse image of $\mathbb P^1(\mathbb C)\setminus\{0,1,\infty\}$). Of course $\gamma$ lifts to a closed curve, since otherwise we just exhibited an abelian quotient of $F_2$ in which $\gamma$ is not sent to zero. The cover we just considered is $\mathbb P^1(\mathbb C)\setminus\mathbb Z$, and we can try to lift $\gamma$ to some abelian cover of this, etc. Of course, $\gamma$ always lifts to a closed curve, since all these covers are solvable! 
I'm having a hard time visualizing what such a curve $\gamma$ would look like geometrically in $\mathbb P^1(\mathbb C)\setminus\{0,1,\infty\}$ (I once thought all elements of $F_2$ could be "broken" by a sequence of such covers!)

- The obstruction isn't that there are elements not contained in any subgroup with a finite solvable quotient. The problem is that each subgroup with a finite solvable quotient contains elements that are not in the kernel of $A_5$. You can keep reducing the number, but there will always be infinitely many. You might get a kernel if you take the profinite completion, though. –  Will Sawin Jan 30 '12 at 22:48

- Andy's answer is a good one and proves that the map you right down actually IS injective. The natural map which is NOT injective is the one from the profinite completion of F_2 to the inverse limit you write down. –  JSE Jan 31 '12 at 2:32

- "right" ==> "write." Why oh why can't we edit comments? –  JSE Jan 31 '12 at 19:19

There are no such elements -- the intersection of the derived series of a free group is trivial. In fact, even more is true -- the intersection of the lower central series of a free group is trivial. This is a theorem of Magnus, and by now there are many proofs. The classical one is in the final chapter of Magnus–Karrass–Solitar's book on combinatorial group theory. By the way, a topological proof of this fact (lifting curves to covers to resolve self-intersections, etc.) is contained in my paper "On the self-intersections of curves deep in the lower central series of a surface group" with Justin Malestein.

EDIT : I see that you really want finite solvable quotients, not general solvable quotients. It is still true. Fixing a prime $p$, there is a "mod $p$ lower central series" of a group whose quotients are $p$-groups (so finite nilpotent if the group is finitely generated). For a free group, Zassenhaus proved in his paper H.
Zassenhaus, Ein Verfahren, jeder endlichen p-Gruppe eine Lie-Ring mit der Charakteristik p zuzuordnen, Abh. Math. Sem. Hamburg Univ. 13 (1939), 200-207. that the intersection of the mod $p$ lower central series of a free group is trivial. This can also be deduced from the paper I mentioned with Justin Malestein, at least for the prime $2$ (one of the proofs we give actually yields regular covers whose order is a power of $2$). - The fact that free groups are residually finite p-groups for any prime p follows quite nicely from the fact that $\begin{pmatrix} 1 & p\\ 0 &1\end{pmatrix}$ and $\begin{pmatrix} 1 & 0\\ p &1\end{pmatrix}$ generate a free group of rank 2. –  Steve D Jan 30 '12 at 23:06 Andy - or, for the finite case, you can combine Magnus' Theorem with the easy fact that nilpotent groups are residually finite. –  HJRW Feb 2 '12 at 19:35 @HW : Good point! I don't know why that slipped my mind when I was writing this. I had just read Zassenhaus's paper (which proves a lot more than what I said) when this question arrived, so it was very fresh in my mind. –  Andy Putman Feb 2 '12 at 22:34 In other words, the question is: what universal word in x,y always vanishes when x,y are specialized to elements of some solvable group G? (note that since G is solvable, so is the subgroup generated by x and y). This paper of Miklos Abert answers a related question: On the probability of satisfying a word in a group. Given a finite group $G$, is there a word $\omega(x,y) \in F_2$ such that $\omega(g_1,g_2)=e$ exactly when $\langle g_1, g_2 \rangle \leq G$ is solvable? Abert proves the answer is yes! The paper is quite nice. - If C is a family of finite groups that is closed to taking quotients, normal subgroups, and extensions, then the free group on two generators embeds into the pro-C completion. Such C's can be the finite p-groups or the finite solvable groups, and others. See, e.g., Fried-Jarden's Field Arithmetic, Prop. 17.5.11, for a proof. 
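Abert's result concerns words that detect solvability. As a small computational illustration (an editorial sketch, not taken from the thread), the second derived word $\delta_2(x_1,x_2,x_3,x_4)=[[x_1,x_2],[x_3,x_4]]$ vanishes identically on the solvable group $S_3$ — whose second derived subgroup is trivial — while on the perfect group $A_5$ it takes nontrivial values:

```python
from itertools import permutations, product

def compose(p, q):
    # (p o q)(i) = p(q(i)): apply q first, then p
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def comm(a, b):
    # commutator [a, b] = a b a^{-1} b^{-1}
    return compose(compose(a, b), compose(inverse(a), inverse(b)))

def delta2(x1, x2, x3, x4):
    # the second derived word [[x1, x2], [x3, x4]]
    return comm(comm(x1, x2), comm(x3, x4))

# S_3 has derived length 2, so delta2 is trivial on every quadruple.
S3 = list(permutations(range(3)))
e3 = tuple(range(3))
assert all(delta2(a, b, c, d) == e3 for a, b, c, d in product(S3, repeat=4))

# A_5 is perfect, so delta2 is not identically trivial; an explicit witness:
a = (1, 2, 0, 3, 4)  # the 3-cycle (0 1 2)
b = (0, 1, 3, 4, 2)  # the 3-cycle (2 3 4)
c = (1, 0, 3, 2, 4)  # the double transposition (0 1)(2 3)
d = (1, 2, 3, 4, 0)  # the 5-cycle (0 1 2 3 4)
e5 = tuple(range(5))
print(delta2(a, b, c, d) != e5)  # -> True
```

Of course, a single word like $\delta_2$ only certifies derived length at most 2; Abert's theorem is much stronger, producing for each finite group a word whose vanishing characterizes solvability of the generated subgroup.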
- Dror Speiser suggested to me that maybe a different question lies behind this one. Namely, instead of $F_2$, perhaps it was meant to consider the profinite completion $\widehat{F_2}^{profinite}$ of $F_2$ (which is the etale fundamental group of $\mathbb{P}^1\smallsetminus\{0,1,\infty\}$). In this case the natural map $\widehat{F_2}^{profinite}\to \widehat{F_2}^{prosolvable}$ is not injective, for, essentially, the reason that was mentioned in the question, i.e., since there are covers of $\mathbb{P}^1\smallsetminus\{0,1,\infty\}$ with non-solvable Galois groups. Here $\widehat{F_2}^{prosolvable}$ is the prosolvable completion of $F_2$. If this was indeed what was meant, then the kernel is well understood, namely it is the free pro-$\mathcal{C}$ group of countable rank, where $\mathcal{C}$ is the family of all finite groups whose composition factors are non-cyclic. I can't seem to find the reference to the paper that proves this at this moment, but if it interests someone I can look harder. Regarding the minimal length of a word in the kernel, it will be infinite, exactly for the reason Andy Putman explained. (An example for a nontrivial profinite word in the kernel is something of the form $[\cdots [[[[[x,y],x^2],y^2],x^3],y^3],\cdots]$, where the powers are growing so that it will converge. I didn't check that this element really works.)

- This question is interesting. However, in your example I don't see how (with which choice of exponents) you can arrange this word to be converging to a nontrivial element. [by the way, any limit point of a sequence as you construct is in the kernel of the map to the pronilpotent, not prosolvable, completion]. –  YCor May 6 '12 at 22:18

- You are right, it is in the kernel of the map to the pronilpotent completion. To check that the kernel $K$ onto the maximal prosolvable quotient is non trivial, just take $U$ open normal such that $\widehat{F_2}^{profinite}/U \cong A_5$.
This will assure you that $KU=\widehat{F_2}^{profinite}$, hence by iso-2, $K/K\cap U=A_5$. To find an explicit element in $K$ shouldn't be too difficult, I think. –  Lior Bary-Soroker May 7 '12 at 9:48 I agree this kernel is nontrivial. But writing down an explicit nontrivial element in this kernel is certainly doable, but doesn't seem too immediate. [I really mean explicit, not only finding a sequence for which some limit points are nontrivial elements in the kernel.] –  YCor May 7 '12 at 10:46 One can reduce the question to non-finitely generated free profinite groups, since the commutator subgroup of the free profinite group on 2 generators is free of countable rank. –  Lior Bary-Soroker May 7 '12 at 10:57
http://www.algebra.com/tutors/your-answers.mpl?userid=solver91311&from=9180
Algebra -> Tutoring on algebra.com -> See tutors' answers!

# Recent problems solved by 'solver91311'
triangle is 70 degrees,then how many degrees are there in the smallest angle?1 solutions Answer 239471 by solver91311(16868)   on 2010-08-23 17:09:05 (Show Source): You can put this solution on YOUR website! If it is a right triangle, one of the angles has to measure . One other angle is given as . The sum of the measures of the three angles of any triangle is . John My calculator said it, I believe it, that settles it Linear-equations/334213: A student has earned scores of 87, 81, and 88 on the first 3 of 4 tests. If the student wants an average (arithmetic mean) of exactly 87, what score must she earn on the fourth test?1 solutions Answer 239470 by solver91311(16868)   on 2010-08-23 17:05:55 (Show Source): You can put this solution on YOUR website! Think about how we take an average. We add up all of the numbers and divide by the number of numbers. The final average for our student will be the sum of all 4 test scores divided by 4. Let's say we want to achieve an overall average of for data elements given data elements. We know that So we can say that But , the answer to the question, can be found by: Which is to say: So add up the scores for the tests already taken and subtract that sum from the product of the total number of tests multiplied by the desired overall average. Done. John My calculator said it, I believe it, that settles it Subset/334211: Suppose B is a proper subset of C If n(C) =8, what is the minimum number of elements of B?1 solutions Answer 239469 by solver91311(16868)   on 2010-08-23 17:03:38 (Show Source): You can put this solution on YOUR website! The minimum number of elements in B is zero, because the null set is a proper subset of any set other than the null set. John My calculator said it, I believe it, that settles it Miscellaneous_Word_Problems/334210: A carpenter is building a rectangular room with a fixed perimeterof 112 ft. What dementions would yeild the maximum area? What is the maximum area? 
The length that would yield the maximum area is __?__ ft.
1 solutions Answer 239467 by solver91311(16868) on 2010-08-23 16:51:44 (Show Source):
You can put this solution on YOUR website!
Let's solve this one in general, that is, for any given perimeter P. Let w represent the width of the room. Let l represent the length of the room. The perimeter of a rectangle is: P = 2l + 2w. So l = P/2 − w. The area of a rectangle is the length times the width, so a function for the area in terms of the width is: A(w) = w(P/2 − w) = (P/2)w − w².

Algebra Solution: The area function is a parabola, opening downward, with vertex at w = P/4. Since the parabola opens downward, the vertex represents a maximum value of the area function. The value of the width that gives this maximum value is one-fourth of the total perimeter. Therefore, the shape must be a square, and the area is the width squared.

Calculus Solution: The area function is continuous and twice differentiable across its domain, therefore there will be a local extremum wherever the first derivative is equal to zero, and that extreme point will be a maximum if the second derivative is negative. A′(w) = P/2 − 2w = 0 gives w = P/4, and A″(w) = −2 < 0. Therefore the maximum area is obtained when w = P/4, and the shape is therefore a square. Here w = 112/4 = 28 ft, and that maximum area is: 28² = 784 sq ft.

John
My calculator said it, I believe it, that settles it

Problems-with-consecutive-odd-even-integers/334191: How many integers are there between 6 x 10^98 and 5 x 10^100 (not counting 6 x 10^98 and 5 x 10^100)?
a 4.94 x 10^100 - 1
b 4.94 x 10^99 - 1
c 4.94 x 10^98 - 1
d 494
e 493
1 solutions Answer 239466 by solver91311(16868) on 2010-08-23 16:46:06 (Show Source):
You can put this solution on YOUR website!
First change 5 x 10^100 so that it is expressed in the same power of 10 that the other number is in. So, reduce the exponent by 2, and move the decimal point two places right: 5 x 10^100 = 500 x 10^98. Now that the decimal points line up, you can just subtract the numbers: 500 x 10^98 − 6 x 10^98 = 494 x 10^98. But just subtracting two integers gives you the number of integers between them including one of the endpoints.
Since you want to eliminate both endpoints, you need to subtract 1 more unit. Furthermore, you need to put the decimal point back where it belongs. Hence: 494 x 10^98 - 1 = 4.94 x 10^100 - 1. John My calculator said it, I believe it, that settles it Miscellaneous_Word_Problems/334189: Eighty people are trapped in a ski lodge. They have enough food to last eight days. It takes five days to reach help (and five days for help to get back to the lodge). What is the fewest number of people to send for help (with sufficient food) so that those staying behind will be rescued before food runs out?1 solutions Answer 239463 by solver91311(16868)   on 2010-08-23 16:13:49 (Show Source): You can put this solution on YOUR website! The trick to this one is to realize that the group of people who trek out to look for help will only need 5 days food (presumably there will be unlimited food for them once they reach help in 5 days) while the group of people who remain at the lodge will need 10 days food, the 5 days it takes the other group to get out and the 5 days it takes the rescuers to return to the lodge. If you consider one food unit to be sufficient food to feed one person for 1 day, then there must be 80 times 8 = 640 food units available. Let x represent the number of people who leave to get help. Then 80 - x must be the number of people who remain at the lodge. So 5x + 10(80 - x) = 640. Just solve for x. John My calculator said it, I believe it, that settles it Quadratic_Equations/334180: I need help solving an equation please. I need to solve the equation The square root of x over the square root of x -3 plus 5 over the square root of x.1 solutions Answer 239461 by solver91311(16868)   on 2010-08-23 16:00:05 (Show Source): You can put this solution on YOUR website! What you described was: But you said you needed help solving an equation. What you described is not an equation. An equation is something that looks like {something} = {something else}. Notice the equals sign? That's where the word equation comes from.
Since you don't have an equation or an inequality, you can't "solve" anything. Your expression can be simplified, but that's not what you asked. John My calculator said it, I believe it, that settles it test/334181: Is nvr too late to be the best in math & algebra Is it ??1 solutions Answer 239460 by solver91311(16868)   on 2010-08-23 15:53:23 (Show Source): You can put this solution on YOUR website! While you are at it, strive for improvement in your ability to communicate in the English language. John My calculator said it, I believe it, that settles it Travel_Word_Problems/334054: A poodle is shot out of a cannon at a circus. The height of the dog after t seconds is given by h=-16t^2+32t+8 A)Find the height of the dog after 1 second? B)How long until the dog hits the ground? C)Find the max height that the poodle will attain?1 solutions Answer 239459 by solver91311(16868)   on 2010-08-23 15:48:06 (Show Source): You can put this solution on YOUR website! h(t) = -16t^2 + 32t + 8, where a = -16 and b = 32. a) calculate h(1) b) set h(t) = 0 and solve the quadratic for t c) calculate the t-coordinate of the vertex of your parabola using: t = -b/(2a) = 1. Then calculate the value of the function at that time value, i.e. h(1). (Note: same answer as part a) Public Service Notice: No animals, in particular poodles, were harmed during the calculation of the solution to this problem. John My calculator said it, I believe it, that settles it Money_Word_Problems/334171: Solve using the five-step problem-solving process. Show all steps necessary to arrive at your solution. A semicircular window of radius 14 inches is to be laminated with a sunblock coating that costs $0.70 per square inch to apply. What is the total cost of coating the window, to the nearest cent? (Use π = 3.14. Hint: Area of Circle: A = πr^2)1 solutions Answer 239458 by solver91311(16868) on 2010-08-23 15:27:33 (Show Source): You can put this solution on YOUR website!
Calculate the area of the whole circle using the formula you have and then divide by 2 because you are dealing with a semicircle. Then multiply by $0.70 and round off to the nearest one-hundredth. John My calculator said it, I believe it, that settles it Trigonometry-basics/334167: If a right triangle has a hypotenuse with length 15 cm. If one of the acute angles of the triangle is 25 degrees, find the length of the two shorter sides of the triangle correct to the nearest tenth of a centimeter.1 solutions Answer 239457 by solver91311(16868)   on 2010-08-23 15:23:39 (Show Source): You can put this solution on YOUR website! Let a represent one leg of the triangle. Let b represent the other leg. Then: a = 15 sin(25°) and b = 15 cos(25°). I'll leave you alone to spend some quality time with your calculator. John My calculator said it, I believe it, that settles it Mixture_Word_Problems/334164: 12) A person earns an annual salary of $75,000. The person will receive a 5% raise the next year. Approximate the person's salary after the 5% raise. Answer: $78,750 Please confirm this is right. Thanks1 solutions Answer 239451 by solver91311(16868)   on 2010-08-23 15:08:33 (Show Source): You can put this solution on YOUR website! Exactly. John My calculator said it, I believe it, that settles it Conjunction/334136: Construct a truth table for ~(~q^p)1 solutions Answer 239449 by solver91311(16868)   on 2010-08-23 15:05:46 (Show Source): You can put this solution on YOUR website! John My calculator said it, I believe it, that settles it Trigonometry-basics/334138: if t is in quadrant II and csc t=2, find the value of cot t.1 solutions Answer 239447 by solver91311(16868)   on 2010-08-23 14:52:46 (Show Source): You can put this solution on YOUR website!
csc t = 2 with t in Quadrant II, which is to say: sin t = 1/csc t = 1/2. In Quadrant II, sin t > 0 and cos t < 0. Hence sin^2 t + cos^2 t = 1 and cos t = ±sqrt(1 - sin^2 t). Furthermore: 1 - (1/2)^2 = 3/4. So cos t = ±sqrt(3)/2. But cos t < 0 for Quadrant II so we need cos t = -sqrt(3)/2. Next: cot t = cos t / sin t. Hence: cot t = (-sqrt(3)/2)/(1/2) = -sqrt(3). John My calculator said it, I believe it, that settles it Inequalities/334149: 0.6x + 1 < 1.0x - 2 1 solutions Answer 239440 by solver91311(16868)   on 2010-08-23 14:31:57 (Show Source): You can put this solution on YOUR website! What a handsome little inequality. Exactly what did you want to do with this? What is it that you don't understand about it? How can we help you with it? Did you mistake this site for the Psychic Hot Line and presume that we can just guess what it is you want or need? John My calculator said it, I believe it, that settles it Inequalities/334150: 2/3 (2x-1)>10 1 solutions Answer 239439 by solver91311(16868)   on 2010-08-23 14:31:44 (Show Source): You can put this solution on YOUR website! What a handsome little inequality. Exactly what did you want to do with this? What is it that you don't understand about it? How can we help you with it? Did you mistake this site for the Psychic Hot Line and presume that we can just guess what it is you want or need? John My calculator said it, I believe it, that settles it Equations/334141: 42 Solve the equation: 2y/3 = 4 43 Solve the equation: -3c/7 = -6 44 Solve the equation: 4x/9 = 6 45 Solve the equation: -8v/17 = 16 46 Solve the equation: 3w/4 = -1/2 47 Solve the equation: -13p/20 = -1/5 49 Solve the equation: t/2 + 5/12 = 5/6 50 Solve the equation: -7/10 = 3a/4 - 13/20 51 Solve the equation: 3.4n + 6.2 = -7.4 52 Solve the equation: 11.6b - 5 = 9.5 54 Solve the inequality: 2k/5 - 4/15 ≥ -2/3 55 Solve the inequality: 7d/11 + 2 < 2/3 Thanks in advance! :)1 solutions Answer 239438 by solver91311(16868)   on 2010-08-23 14:29:20 (Show Source): You can put this solution on YOUR website! You are a bit premature with your thanks. Read the Instructions Particularly the one that says "One question per post."
John My calculator said it, I believe it, that settles it real-numbers/334147: Which of the following statements is false? a.There is a greatest negative integer. b.Between any two rational numbers, there is another rational number. c.Between any two irrational numbers, there is another irrational number. d.There is a least non-negative rational number. e.There is a greatest negative rational number1 solutions Answer 239437 by solver91311(16868)   on 2010-08-23 14:27:53 (Show Source): You can put this solution on YOUR website! Statements d and e are false. John My calculator said it, I believe it, that settles it Linear_Equations_And_Systems_Word_Problems/334073: Hi there, I had tried to solve this word problems...can't figure it out!! Everytime, I come up with different answers. Any help would be greatly appreciated. Thanks. " At the end of the year, a business has made a profit of $154,000. They must calculate the amount of tax due to the federal government and to the state. The federal tax rate is 30%, and the state tax rate is 10%. State taxes (ST) are deductible before federal taxes (FT) are calculated, and federal tax are deductible before states taxes are calculated. In other words, state taxes are subtracted from the profits before the 30% federal tax is calculated, and federal taxes are subtracted from the profits before the 10% state tax is calculated. 10% or 30 % of 154,000 does not require a system of equations to solve. Create and solve a system of equations to figure out how much state and federal taxes are owed by this company. 1. Identify the variables first. Do not confuse the tax rate with the amount of tax. 2. Set up a system of two equations. This is the only way that this problem can be solved. 3. Solve the system. Remember to report the answer in dollars." 4. Using the addition (elimination) method to solve systems of equations. Replace the ? with your student number. 
1 solutions Answer 239430 by solver91311(16868) on 2010-08-23 13:32:04 (Show Source): You can put this solution on YOUR website! Let represent the amount of tax paid to the state. Let represent the amount of tax paid to the federal government. According to the problem, we start with$154K, subtract the amount paid to the feds and then take 10%, like this: Likewise, the other equation is set up like this: This system of equations is set up perfectly to use the substitution method, so take the expression that is equal to in the first equation, and put it in place of in the second equation: A little arithmetic and a very little algebra gets us to: rounded to the nearest dollar. Make the opposite substitution: You can finish up yourself, I think. John My calculator said it, I believe it, that settles it Trigonometry-basics/334129: Find the exact solutions of x^2-(y-6)=36 and y=-x^21 solutions Answer 239423 by solver91311(16868)   on 2010-08-23 13:02:34 (Show Source): You can put this solution on YOUR website! Rearrange your first equation so that it, like the other one, becomes expressed as a function of Now set the two things that are equal to equal to each other. Then by substitution into the second equation: Hence, the solution set is: John My calculator said it, I believe it, that settles it Quadratic_Equations/334116: Given the function f(x)=5x^2-10x-4, what is the domain of f?1 solutions Answer 239417 by solver91311(16868)   on 2010-08-23 12:49:57 (Show Source): You can put this solution on YOUR website! Your function is a polynomial, that is it is an expression of finite length constructed from variables and constants, using only the operations of addition, subtraction, multiplication, and non-negative, whole-number exponents. Therefore the domain is the set of real numbers. Said another way, there are no real number values that can be substituted for for which is undefined. 
John My calculator said it, I believe it, that settles it test/334107: HI, pls help me solve myproblem. i am planning to install a fire pump. w/ 100psi pressure. i would want to completely empty a given tank for 30mins. with max height it can be. pls help me... thank you...1 solutions Answer 239411 by solver91311(16868)   on 2010-08-23 12:41:32 (Show Source): You can put this solution on YOUR website! You have not provided sufficient information. The pressure rating of your pump bears no relationship whatsoever to the pump's volume rate capacity. Is it a 60 gpm pump? 100 gpm? 120? What? You say nothing about the shape or size of your tank -- nothing to give anyone a clue as to the actual volume of the tank as a function of the tank's height. So, all I can tell you is that you need to determine the volume capacity of your pump in volume units per minute, and multiply that times 30 minutes. That will be the amount you can pump in 30 minutes. Then you need to create a function that describes the volume of your tank as a function of its height. Set that function equal to the volume you calculated in the step above and then solve for the height that gives you that function value. John My calculator said it, I believe it, that settles it Triangles/334126: The hypotenuse of a right triangle is 8 inches longer than the shorter leg. The longer leg is 4 inches longer than the shorter leg. Find the length of the shorter leg.1 solutions Answer 239406 by solver91311(16868)   on 2010-08-23 12:32:00 (Show Source): You can put this solution on YOUR website! Let represent the measure of the short leg. Then must represent the measure of the long leg, and must represent the measure of the hypotenuse. Use Pythagoras: Expand the two squared binomials, collect like terms, and solve the resulting quadratic for , the measure of the short leg. 
John My calculator said it, I believe it, that settles it Quadratic_Equations/334101: use the five step problem process translate reword carry-out check state a square support unit in a t.v. set is made with a side measuring 5 centimeters. A new model being designed for the upcoming year will have a large square with a side measuring 5.3 centimeters. By how much will the area of the square be increase?1 solutions Answer 239402 by solver91311(16868)   on 2010-08-23 12:00:10 (Show Source): You can put this solution on YOUR website! Multiply 5.3 times 5.3. Then multiply 5 times 5. Subtract the second result from the first. John My calculator said it, I believe it, that settles it test/334081: how to put square and cube1 solutions Answer 239400 by solver91311(16868)   on 2010-08-23 11:58:34 (Show Source): You can put this solution on YOUR website! What do you mean? John My calculator said it, I believe it, that settles it test/334082: how to use square and cubes1 solutions Answer 239399 by solver91311(16868)   on 2010-08-23 11:58:19 (Show Source): You can put this solution on YOUR website! What do you mean? John My calculator said it, I believe it, that settles it Polynomials-and-rational-expressions/334089: 1. In this problem, we analyze the profit found for sales of decorative tiles. A demand equation (sometimes called a demand curve) shows how much money people would pay for a product depending on how much of that product is available on the open market. Often, the demand equation is found empirically (through experiment, or market research). a. Suppose a market research company finds that at a price of p = $20, they would sell x = 42 tiles each month. If they lower the price to p =$10, then more people would purchase the tile, and they can expect to sell x = 52 tiles in a month’s time. Find the equation of the line for the demand equation. Write your answer in the form p = mx + b. Hint: Write an equation using two points in the form (x,p). 
20=42x + b and 10= 52x +b A company’s revenue is the amount of money that comes in from sales, before business costs are subtracted. For a single product, you can find the revenue by multiplying the quantity of the product sold, x, by the demand equation, p. b. Substitute the result you found from part a. into the equation R = xp to find the revenue equation. Provide your answer in simplified form. The costs of doing business for a company can be found by adding fixed costs, such as rent, insurance, and wages, and variable costs, which are the costs to purchase the product you are selling. The portion of the company’s fixed costs allotted to this product is $300, and the supplier’s cost for a set of tile is$6 each. Let x represent the number of tile sets. c. If b represents a fixed cost, what value would represent b? $300 d. Find the cost equation for the tile. Write your answer in the form C = mx + b. C=6x + 300 The profit made from the sale of tiles is found by subtracting the costs from the revenue. e. Find the Profit Equation by substituting your equations for R and C in the equation . Simplify the equation. f. What is the profit made from selling 20 tile sets per month? g. What is the profit made from selling 25 tile sets each month? h. What is the profit made from selling no tile sets each month? Interpret your answer. i. Use trial and error to find the quantity of tile sets per month that yields the highest profit. j. How much profit would you earn from the number you found in part i? k. What price would you sell the tile sets at to realize this profit? Hint: Use the demand equation from part a. 2. The break even values for a profit model are the values for which you earn$0 in profit. Use the equation you created in question one to solve P = 0, and find your break even values. 3. In 2002, Home Depot’s sales amounted to $58,200,000,000. In 2006, its sales were$90,800,000,000. a. Write Home Depot’s 2002 sales and 2006 sales in scientific notation. 
You can find the percent of growth in Home Depot’s sales from 2002 to 2006 by following these steps: • Find the increase in sales from 2002 to 2006. • Find what percent that increase is of the 2002 sales. b. What was the percent growth in Home Depot’s sales from 2002 to 2006? Do all your work by using scientific notation. The Home Depot, Inc. (2007, March 29). 2006 Annual Report. Retrieved from http://www6.homedepot.com/annualreport/index.html 4. A customer wants to make a teepee in his backyard for his children. He plans to use lengths of PVC plumbing pipe for the supports on the teepee, and he wants the teepee to be 12 feet across and 8 feet tall (see figure). How long should the pieces of PVC plumbing pipe be? 1 solutions Answer 239398 by solver91311(16868)   on 2010-08-23 11:57:00 (Show Source): You can put this solution on YOUR website! Re-read the rules for posting questions on this site, particularly the part that says: One problem per submission. Do not dump your whole homework in one question Read the Instructions John My calculator said it, I believe it, that settles it Matrices-and-determiminant/334119: use the echelon method to solve the system of two equations in two unknowns. x-3y=-4 -7x-2y=28 please show work1 solutions Answer 239396 by solver91311(16868)   on 2010-08-23 11:52:43 (Show Source): You can put this solution on YOUR website! Hence, the solution set is John My calculator said it, I believe it, that settles it Volume/334112: How much water would a container hold in gallons if the tank is 6 foot by 2 foot by 2 foot? Thank you.1 solutions Answer 239393 by solver91311(16868)   on 2010-08-23 11:14:07 (Show Source): You can put this solution on YOUR website! The number of cubic feet in a rectangular solid container is given by the length times the width times the height. Each cubic foot is approximately 7.5 gallons. 
John My calculator said it, I believe it, that settles it Functions/333938: The question that I am working on is this: Determine the domain of the average cost, in dollars, that you anticipate the average American family might be willing to pay for one gallon of white milk. Provide a reasonable range, keeping in mind that some families do not drink milk, no matter the cost (that's a hint on the low end of this domain). Required format for writing domain and range is: ____less than or equal to x less than or equal to _____. I need to know if I did this correctly. I put down 0 less than or equal to x less than or equal to 1.68. Did I comprehend this correctly. This first question has to be correct in order to answer all of the following questions. Thank you.1 solutions Answer 239283 by solver91311(16868)   on 2010-08-22 19:49:08 (Show Source): You can put this solution on YOUR website! I have no idea where you are buying your milk, but the average price of a gallon of whole milk in the US in the second quarter calendar 2010 was \$3.06. Click here You have the bottom end of the interval specified correctly -- unless you can find an instance where someone is willing to pay someone else to consume their milk, in which case you could assign a negative value to the low end of the interval. John My calculator said it, I believe it, that settles it Finance/333922: Solve directly when Fvt =PV0* (1+r)^t The t and the 0 is supposed to be small like X 21 solutions Answer 239266 by solver91311(16868)   on 2010-08-22 18:51:35 (Show Source): You can put this solution on YOUR website! Ok, but what do you mean by "Solve directly"? You have provided no parameters for the problem at all. John My calculator said it, I believe it, that settles it
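The compound-growth relation in that last finance question, FV_t = PV_0 · (1 + r)^t, can be sketched as a small function. The numeric values in the example call are arbitrary assumptions, since the question itself supplied no parameters:

```python
def future_value(pv: float, r: float, t: float) -> float:
    """Future value under compound growth: FV = PV * (1 + r)**t.

    pv -- present value (the starting amount, PV_0)
    r  -- periodic interest rate as a decimal (5% -> 0.05)
    t  -- number of compounding periods
    """
    return pv * (1 + r) ** t

# Example with assumed values: $100 at 5% per period for 2 periods.
fv = future_value(100, 0.05, 2)   # 100 * 1.05**2 = 110.25
```

Solving the relation "directly" for any other unknown is just a rearrangement of the same formula, e.g. PV_0 = FV_t / (1 + r)^t.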
https://www.physicsforums.com/threads/finding-charge-based-on-net-force-and-other-charges.258441/
# Finding charge based on net force and other charges 1. Sep 22, 2008 ### jelliDollFace 1. The problem statement, all variables and given/known data Blue charge is at origin with positive q charge Red charge at point (d1,0) with unknown positive charge q_red Yellow charge at point (d2cos(theta),-d2sin(theta)) with negative 2q charge The net electric force on the blue sphere has a magnitude F and is directed in the - y direction. Suppose that the magnitude of the charge on the yellow sphere is determined to be 2q. Calculate the charge q_red on the red sphere. Express your answer in terms of q, d1, d2, and theta. 2. Relevant equations electric force F = kq_1q_2/r^2 where k = 9*10^9, q_1 and q_2 represent point charges, and r is distance between point charges 3. The attempt at a solution F = [(k*q_yellow*q_blue)/d2^2 ] + [(k*q_red*q_blue)/d1^2] F = [(k*(-2q)*q)/d2^2] + [(k*q_red*q)/d1^2] d1^2[F - ((k*(-2q)*q)/d2^2)] / kq = q_red i think i'm on the right track but i did not use theta which i need to, where did i go wrong #### Attached Files: • ###### untitled2.JPG File size: 5.8 KB Views: 135 2. Sep 23, 2008 ### jelliDollFace does this involve the coordinate location of the yellow charge because it contains theta which i think i need in my final answer, but the distance is stated as d2 so why should i need it? 3. Sep 23, 2008 ### LowlyPion I can't see your picture as yet, but I would suggest that you separate the forces into their x,y components and then add them. They tell you the result vector is acting in the -Y direction only, so x components must add to 0. Force is a vector and adding the magnitudes if they are not acting along the same line is not the way to do it. 4. Sep 24, 2008 ### jelliDollFace fnet_x = [k(2q)(q)/(d2cos(theta))^2] + [k(q_red)(q)/(d1^2)] fnet_y = [k(2q)(q)/(d2sin(theta))^2] + 0 F = sqrt((fnet_x)^2 + (fnet_y)^2) [sqrt[F^2 - (fnet_x)^2](d2sin(theta))]/(2q)(k) = q_red is that correct now? how do i factor in the -y net force direction? 
Last edited: Sep 24, 2008 5. Sep 24, 2008 ### LowlyPion Not quite. First simply identify the force between the Blue/Red and the Blue/Yellow. These are the forces that you must treat as vectors. Hence F(b/r) = kqb*qr/(d1)2*x-hat + 0*y-hat F(b/y) = kqb*qy/(d2)2*Cosθ*x-hat + kqb*qy/(d2)2*Sinθ*y-hat But you also know that the x-components (x-hat terms) must add to 0 And you also know that the charge on Yellow is -2*q and the charge on Blue is +1*q. The qr is the one that is unknown. Figure it must be a positive charge since Red is positive Yellow negative and otherwise they could never add to 0. 6. Sep 24, 2008 ### jelliDollFace so this is what i got, since we know the x components must sum to zero soo... [(q_red)(+q)k]/d1^2 + [(+q)(-2q)k]/d2cos(theta)^2 = 0 so q_red = [-k(+q)(-2q)(d1^2)]/[(d2cos(theta)^2)(+q)(k)] is this right? 7. Sep 24, 2008 ### LowlyPion No. I don't think you read the equations I gave you carefully. $$\vec{F_{BR}} = \frac{k*Q_B*Q_R}{d_1^2} *\hat{x} + 0*\hat{y}$$ $$\vec{F_{BY}} = \frac{k*Q_B*Q_Y}{d_2^2}*Cos\theta* \hat{x} + \frac{k*Q_B*Q_Y}{d_2^2}*Sin\theta *\hat{y}$$ Last edited: Sep 24, 2008 8. Sep 24, 2008 ### jelliDollFace okay i see now, aside from the issue with the sine and cosine, was my approach for solving for q_red correct? here it is with corrections [(q_red)(+q)k]/d1^2 + [(+q)(-2q)(k)(cos(theta))]/d2^2 = 0 q_red = -[k(+q)(-2q)(d1^2)(cos(theta))]/[(d2^2)(+q)(k)] q_red = -[(-2q)(d1^2)(cos(theta))]/[(d2^2)] = (2q)(d1^2)(cos(theta))]/[(d2^2) 9. Sep 24, 2008 ### LowlyPion That looks more like it. 10. Sep 24, 2008 ### jelliDollFace thanks so much, it was right!!!
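The thread's final result, q_red = 2q·d1²·cos(θ)/d2², can be sanity-checked numerically by rebuilding both force vectors on the blue charge and confirming that the x-components cancel while the net force points in the −y direction, as the problem requires. The charge and distance values below are arbitrary assumptions, not given in the problem:

```python
import math

# Assumed sample values (the problem leaves these symbolic)
k = 8.99e9            # Coulomb constant, N*m^2/C^2
q = 1e-6              # blue charge (+q), in coulombs
d1, d2 = 0.50, 0.80   # distances to red and yellow, in meters
theta = math.radians(35.0)

# Result derived in the thread from the x-component balance
q_red = 2 * q * d1**2 * math.cos(theta) / d2**2

# Force on blue from red: both positive, so repulsion pushes blue in -x
# (red sits at (d1, 0))
Fx = -k * q * q_red / d1**2
Fy = 0.0

# Force on blue from yellow (-2q): attraction pulls blue toward
# yellow at (d2*cos(theta), -d2*sin(theta))
Fx += k * q * (2 * q) / d2**2 * math.cos(theta)
Fy += -k * q * (2 * q) / d2**2 * math.sin(theta)
```

With q_red chosen this way, Fx is zero up to floating-point rounding and Fy is negative, matching the stated −y net force.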
https://www.ncatlab.org/nlab/show/differential+forms+in+synthetic+differential+geometry
nLab differential forms in synthetic differential geometry

Idea

In the context of synthetic differential geometry a differential form $\omega$ of degree $k$ on a manifold $X$ is literally a function on the space of infinitesimal cubes or infinitesimal simplices in $X$. We give the definition as available in the literature and then interpret this in a more unified way in terms of the Chevalley-Eilenberg algebra of the infinitesimal singular simplicial complex.

Definition

missing here are details on what axioms the space we are working on has to satisfy for the following to make sense. See the case distinction at infinitesimal singular simplicial complex.

differential forms

An infinitesimal $k$-simplex in a synthetic differential space $X$ is a collection of $k+1$ points in $X$ that are pairwise infinitesimal neighbours.
The spaces $X^{\Delta^k_{diff}}$ of infinitesimal $k$-simplices arrange to form the infinitesimal singular simplicial complex $X^{\Delta^\bullet_{diff}}$. The functions on the space of infinitesimal $k$-simplices form a generalized smooth algebra $C^\infty(X^{\Delta^k_{inf}})$. A differential $k$-form (often called simplicial $k$-form or, less accurately, combinatorial $k$-form to distinguish it from similar but cubical definitions) on $X$ is an element in this function algebra that has the property that it vanishes on degenerate infinitesimal simplices. See definition 3.1.1 in • Anders Kock, Synthetic geometry of manifolds (pdf) for this simplicial definition. A detailed account of this is in the entry infinitesimal object in the section Spaces of infinitesimal simplices. This is a very simple-looking statement. The reason is the topos-theoretic language at work in the background, which takes care that we may talk about infinitesimal objects as if they were just plain ordinary sets. For a very detailed account of how the above statement is implemented concretely in terms of concrete models for synthetic differential spaces see section 1 of • Breen, Messing, Combinatorial differential forms (arXiv) There are also cubical variants of the above definition • Anders Kock, Cubical version of combinatorial differential forms (pdf for fee) for a realization of the cubical version in models based on sheaves on generalized smooth algebras. We may characterize the object $\Omega^k(X) \subset C^\infty(X^{\Delta^k_{inf}})$ as follows: for $k \geq 1$ there are the obvious images $s_i^* : C^\infty(X^{\Delta^{k}_{inf}}) \to C^\infty(X^{\Delta^{k-1}_{inf}})$ of the degeneracy maps. As one can see, these act by restricting a function on infinitesimal $k$-simplices to the degenerate ones and regarding these then as a $(k-1)$-simplex. 
Therefore we may characterize the subobject $\Omega^k(X) \hookrightarrow C^\infty(X^{\Delta^k_{inf}})$ as the joint kernel of the degeneracy maps $\Omega^k(X) = \cap_{i = 0}^{k-1} ker(s_i^*) \,.$

coboundary operator

According to section 3.2 of Anders Kock’s book, the coboundary operator $d : \Omega^k(X) \to \Omega^{k+1}(X)$ sends a differential $k$-form $\omega$ to the $(k+1)$-form $d \omega$ that on an infinitesimal $(k+1)$-simplex $(x_0, x_1, \cdots, x_{k+1})$ in $X$ evaluates to $d\omega(x_0, x_1, \cdots, x_{k+1}) := \sum_{i=0}^{k+1} (-1)^i \omega(x_0, \cdots , \hat{x_i}, \cdots, x_{k+1}) \,,$ where the hat indicates that the corresponding variable is omitted, as usual. We recognize this as the alternating sum of the face maps $\partial_i^*$ of the cosimplicial object $C^\infty(X^{\Delta_{inf}^\bullet})$: $d := \sum_{i=0}^{k+1} (-1)^i \partial_i^* : \Omega^k(X) \to \Omega^{k+1}(X) \,.$ These constructions are reminiscent of, and should be compared with, the Dold-Kan correspondence. In particular with its dual (cosimplicial) version as recalled in section 4 of CastiglioniCortinas. In total this should show the following

Proposition

Let $X$ be a synthetic differential space and $C^\infty(X^{\Delta_{inf}^\bullet})$ the cosimplicial object of generalized smooth algebras of functions on the spaces of infinitesimal $k$-simplices in $X$. Then the de Rham complex $(\Omega^\bullet(X), d)$ of differential forms on $X$ is the normalized Moore complex of the cosimplicial object $C^\infty(X^{\Delta_{inf}^\bullet})$. In other words, in as far as the Dold-Kan correspondence is an equivalence, we find that: the object of differential forms on $X$ is the cosimplicial generalized smooth algebra $C^\infty(X^{\Delta_{inf}^k})$.

Last revised on May 2, 2019 at 03:42:10.
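The combinatorial heart of the coboundary operator — an alternating sum over faces with one vertex omitted — can be modeled concretely by treating a $k$-form as a plain function of $k+1$ points. The sketch below is only an illustrative combinatorial analogue in ordinary Python (no infinitesimals, and it ignores the condition that forms vanish on degenerate simplices); it demonstrates the defining identity $d \circ d = 0$ of the resulting cochain complex:

```python
def d(omega):
    """Simplicial coboundary: (d omega)(x_0, ..., x_{k+1}) is the
    alternating sum of omega evaluated on the faces obtained by
    omitting one vertex at a time."""
    def d_omega(*xs):
        return sum((-1) ** i * omega(*(xs[:i] + xs[i + 1:]))
                   for i in range(len(xs)))
    return d_omega

# Any function of two points serves as a sample "1-form" here.
omega = lambda x, y: x * y - 3 * y
dd_omega = d(d(omega))
result = dd_omega(1, 2, 3, 4)   # 0: the alternating signs make d(d(omega)) vanish
```

The pairwise cancellation forced by the $(-1)^i$ signs is exactly why the coboundary squares to zero, which is what makes $(\Omega^\bullet(X), d)$ a complex at all.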
http://accessanesthesiology.mhmedical.com/content.aspx?bookid=415&sectionid=42070957
Chapter 13 Despite all the effort and expense, acute myocardial infarction (MI) remains a major cause of death and disability. Ultimate progress in dealing with this plague will depend not on emergency departments, cardiac care units, and cardiac catheterization laboratories, but on the prevention of atherosclerosis. Thus, diet, exercise, and smoking avoidance coupled with early identification of atherosclerosis with effective targeted medical therapy will stabilize vulnerable plaque, reduce atherosclerotic burden, and prevent acute thrombotic events. To the individual in the grip of acute MI, however, different and pressing priorities exist. Symptoms must be ameliorated, lethal arrhythmias identified and treated, arteries opened, and complications identified and managed. In many cases, little is needed beyond a targeted history and physical, 12-lead electrocardiogram (ECG), and simple, rapid blood work with prompt thrombolysis or emergency coronary arteriography and balloon angioplasty with or without stenting. In such straightforward cases, point-of-care echocardiography will prove interesting and perhaps helpful if potential complications are identified early. Such study, however, should never delay needed efforts at reperfusion. In other cases, the history and physical examination may be confusing, or ECG and enzymatic data may be conflicting, misleading, or delayed. These situations include: (1) typical symptoms but normal or equivocal lab studies, (2) atypical symptoms with equivocal or abnormal lab studies, (3) pacemaker therapy, (4) left bundle branch block on ECG, (5) presence of new systolic murmur, (6) shock, including right ventricular myocardial infarction, (7) late clinical presentation, including post-MI pericarditis, (8) large, non-Q-wave MI, (9) true posterior MI, and (10) suspected LV thrombus.
In these instances, point-of-care ultrasonography is not only beneficial, it may be critical for improving the understanding of the patient's condition and selecting appropriate treatment. For point-of-care echocardiography to prove helpful in the acute MI setting, a simple, rapidly activated and portable machine must be present in the proximate clinical area. This machine must provide good quality two-dimensional and colored Doppler images on a wide variety of challenging patients (chronic obstructive pulmonary disease [COPD], obesity). In most situations, a full, formal follow-up echocardiogram should be obtained later with results correlated to the point-of-care echocardiography findings. Point-of-care operators require training in theory and hands-on techniques plus proctored imaging and interpretation experience. These providers will need to work closely with institutional credentialing bodies to ensure that standards of initial training, ongoing training, and quality assurance are identified and met. The three standard windows should be interrogated in each patient with and without color-flow Doppler (Figure 13.1[A,B]). Apical views should be examined first because the two-chamber, four-chamber, and five-chamber views are often readily obtained and identify all left ventricular myocardial segments in addition to the right ventricle (Figure 13.1[A]). Aortic, mitral, and tricuspid valves are easily identified. Color-flow interrogation in the apical views readily identifies ventricular septal defects and mitral insufficiency. Left parasternal short-axis views should be obtained next with expected good visualization of the apex, mid left ventricle, and left ventricle ...
https://testbook.com/question-answer/if-the-product-of-two-eigen-values-of-the-matrixn--5fc8ce98a958e66ff4272706
If the product of two eigenvalues of the matrix $$\begin{bmatrix} 6 & -2 & 2 \\\ -2 & 3 & -1 \\\ 2 & -1 & 3 \end{bmatrix}$$ is 16, then the third eigenvalue is
This question was previously asked in Junior Executive (ATC) Official Paper 7: Held on Dec 2015 - Shift 2
1. 2
2. -2
3. 36
4. 6
Option 1 : 2
Detailed Solution
Explanation:
$$\left[ A \right] = \left[ {\begin{array}{*{20}{c}} 6&{ - 2}&2\\ { - 2}&3&{ - 1}\\ 2&{ - 1}&3 \end{array}} \right]$$
As we know: the product of the eigenvalues equals the determinant of the matrix.
Determinant of the matrix = 6 × (9 – 1) – (–2) × (–6 + 2) + 2 × (2 – 6) = 48 – 8 – 8 = 32
Now, let the third eigenvalue be x. Product of the eigenvalues = Determinant of [A]:
16 × x = 32, so x = 2.
The third eigenvalue is 2.
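The identity used in the solution (product of eigenvalues = determinant) is easy to verify in a few lines of stdlib Python, including a sanity check that 2 really is an eigenvalue:

```python
def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[6, -2, 2], [-2, 3, -1], [2, -1, 3]]
det = det3(A)        # product of all three eigenvalues
third = det / 16     # given: the other two multiply to 16

# sanity check: 2 is an eigenvalue iff det(A - 2I) = 0
A_shift = [[A[r][c] - 2 * (r == c) for c in range(3)] for r in range(3)]
print(det, third, det3(A_shift))   # 32 2.0 0
```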
https://wiki.alquds.edu/?query=Talk:3-manifold
# Talk:3-manifold Page contents not supported in other languages. WikiProject Mathematics (Rated C-class, High-priority) ## Reference requested Is there anybody who would want to complete a reference due to Lackenby, please? —The preceding unsigned comment was added by Juan Marquez (talkcontribs). Which one? There's no reference to Lackenby in this article, although he's mentioned in alternating knot, where I've added the reference, and also he's mentioned in the talk page for Heegaard splitting. Where exactly did you see mention of Lackenby? If I wrote it (which is likely), I can give you the reference. --C S (Talk) 15:18, 11 April 2006 (UTC) ## Deleting five dimensions Prior to my edit, there was a claim that spacetime might be five dimensional. I'm pretty up on these topics, and I've never heard this claim before. I'm deleting it. If anyone wants to revert, we can discuss it. —Preceding unsigned comment added by 129.215.255.13 (talk) 15:08, 20 August 2009 (UTC) ## wikimedia commons sketches I opened a section [1] in Wikimedia Commons to illustrate some of the very basic 3d spaces to give a better idea of the beasts lurking in this land, enjoy!--kmath (talk) 02:47, 4 November 2009 (UTC) more contributions are welcome--kmath (talk) 02:57, 4 November 2009 (UTC) ## Categories? What does it mean that the topological, piecewise linear, and smooth "categories" are all the same? Not in a category-theory sense, right? Crasshopper (talk) 05:52, 12 December 2010 (UTC) ## Possible typo The caption reads: All of the cubes in the image are the same cube, since light in the manifold wraps around into closed loops, the effect is that the cube is tiling all of space. Was tiling supposed to read filling? Just asking --guyvan52 (talk) 23:10, 27 May 2014 (UTC) Hello fellow Wikipedians, I have just modified one external link on 3-manifold. Please take a moment to review my edit.
If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes: When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs. This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022). • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool. • If you found an error with any archives or the URLs themselves, you can fix them with this tool. Cheers.—InternetArchiveBot 15:38, 22 June 2017 (UTC) ## Lower star for induced map under \pi_1? Under the Simple Loop Conjecture, I believe the induced map is typically written as f_* as opposed to f^*. Baldersmash (talk) 23:21, 2 March 2019 (UTC) • Yes, definitely. I fixed that. Turgidson (talk) 20:10, 3 March 2019 (UTC)
https://uniontestprep.com/english-basics/practice-test/word-usage/pages/2
# Question 2 Word Usage Practice Test for the English Basics Which word pair is the correct usage for the context of the following sentence? “Finley said she could ____ found your house if you’d given her the ____ address.”
https://math.stackexchange.com/questions/1922975/given-that-lim-limits-theta-to-0-frac-sin-theta-theta-1-calcula
# Given that $\lim\limits_{\theta \to 0} \frac{\sin(\theta)}{\theta} = 1$, calculate $\lim\limits_{t \to 0} \frac{\sin(kt)}{kt}$ I've solved this problem, but I'm unsure if my reasoning is correct. Please review my understanding of the problem and whether or not my reasoning is correct. Thank you. We know that $\lim\limits_{\theta \to 0} \frac{\sin(\theta)}{\theta} = 1$. We want to know $\lim\limits_{t \to 0} \frac{\sin(kt)}{t}$. $\lim\limits_{t \to 0} \frac{\sin(kt)}{t}$ = (k) $\lim\limits_{t \to 0} \frac{\sin(kt)}{kt}$. This is because we are multiplying both the numerator and denominator by $k$. Therefore, by the limit laws, we are not changing the limit? As $t$ goes to $0$, $kt$ goes to $0$: (k) $\lim\limits_{t \to 0} \frac{\sin(kt)}{kt}$ = (k) $\lim\limits_{kt \to 0} \frac{\sin(kt)}{kt}$. Let $kt = \theta$. (k) $\lim\limits_{kt \to 0} \frac{\sin(kt)}{kt}$ = (k) $\lim\limits_{\theta \to 0} \frac{\sin(\theta)}{\theta}$ = (1)k = k • Please use MathJax to write the math: meta.math.stackexchange.com/questions/5020/… – Bobson Dugnutt Sep 11 '16 at 19:02 • This is correct. – Faraad Armwood Sep 11 '16 at 19:03 • @Lovsovs I managed to fix it. Not sure how to include the thetas, though. :S – The Pointer Sep 11 '16 at 19:12 • For greek letters, you write \greekletter. – David Bowman Sep 11 '16 at 19:22 • @DavidBowman Thanks! – The Pointer Sep 11 '16 at 19:22 Let $t = k\theta$ then if $\theta \to 0, t \to 0$ and so; $$\lim_{\theta \to 0} \frac{\sin(k\theta)}{\theta} = \lim_{\theta \to 0} \frac{k}{k}\cdot \frac{\sin(k\theta)}{\theta} = k \cdot \lim_{\theta \to 0} \frac{\sin(k\theta)}{k\theta} = k \cdot \lim_{t \to 0} \frac{\sin (t)}{t} =k \cdot 1 = k$$ • There is an error in your calculations? – The Pointer Sep 11 '16 at 19:09 • Ok, it's fixed. Everything you did was basically correct, I just submitted this as an answer to close the question. You might want to look at your substitution though, that's where you error is. – Faraad Armwood Sep 11 '16 at 19:10 • Ok, thanks. 
Which substitution are you referring to? – The Pointer Sep 11 '16 at 19:19 Set $kt = x$ $$\lim_{x \to 0} \frac{\sin x}{x} = 1$$ Hence your limit is then $k$
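The conclusion that the limit equals $k$ can also be sanity-checked numerically; a small sketch with an assumed value $k = 3$:

```python
import math

k = 3.0
for t in (1e-1, 1e-3, 1e-6):
    print(math.sin(k * t) / t)   # tends to k as t -> 0
# the error shrinks like k**3 * t**2 / 6, consistent with sin(u) ~ u - u**3/6
```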
https://www.statistics-lab.com/%E7%BB%8F%E6%B5%8E%E4%BB%A3%E5%86%99%E5%BE%AE%E8%A7%82%E7%BB%8F%E6%B5%8E%E5%AD%A6%E4%BB%A3%E5%86%99microeconomics%E4%BB%A3%E8%80%83/
### THE STANDARD TEXT

## Economics is the science of choice

It seems obvious that economics is about the economy; so a common-sense definition of economics might be that it concerns itself with money, markets, business and how people make a living. But this definition is too narrow. Economics is not just the study of money and markets. It studies families, criminal behaviour and governments’ policy choices. It includes the study of population growth, standards of living and voting patterns. It can also have a shot at explaining human behaviours in relation to dating and marriage. The fact that economics can examine subjects traditionally studied by other social sciences suggests that content does not define the discipline. As long as a topic has a social dimension, we can look at it from the perspective of any social science. Most textbooks define economics as the science of choice. It’s about how individuals and society make choices, and how those choices are affected by incentives. This definition includes all aspects of life: a couple’s choice to have a child, or a political party’s choice of its platform. Its drawback is that it doesn’t help to differentiate economics from the other social sciences, since they too look at how we make choices. What distinguishes economics from other social sciences is its commitment to rational choice theory.
This assumes that individuals are rational, self-interested, have stable and consistent preferences, and wish to maximize their own happiness (or ‘utility’), given their constraints – such as the amount of time or money that they have. Social situations and collective behaviours are analysed as resulting from freely chosen individual actions. Just as science attempts to understand the properties of metals by understanding the atoms that comprise them, so economics attempts to understand society by analysing the behaviour of the individuals who comprise it.

## Scarcity

Why is choice necessary? Economics assumes that people have unlimited wants. Therefore, no matter how abundant resources may be, they will always be scarce in the face of these unlimited wants. A fundamental question in economics has always been how do we maximize happiness? Economists maintain that while we must allow people to decide for themselves what makes them happy, we know that people always want more. Therefore, society needs to use its resources as efficiently as possible to produce as much as possible; and society needs to expand what it can produce as quickly as possible. This explains why economists emphasize the goals of efficiency and growth. But does the concept of unlimited wants mean that someone will want an unlimited number of new coats, or an unlimited number of pairs of shoes? No, it doesn’t. Along with unlimited wants, economists normally assume that the more you have of something, the less you value one more unit of it. So, unlimited wants does not mean we want an unlimited amount of a specific thing. Rather, it means that there will always be something that we will desire. There will always be new desires. Our desires and wants are fundamentally unlimited.

## Marginal thinking: costs and benefits

You are familiar with the margin on a page – it lies at the edge.
And when someone describes a soccer player as being marginal they mean he is a fringe player, on the edge of inclusion. Economists use the word marginal in a similar way. Marginal cost is the cost at the margin – or to be more precise, the cost of an additional unit of output or consumption. Thus, the marginal cost of wheat is the additional cost of producing one more unit of wheat. Similarly, marginal benefit is just the benefit someone gets from having one more unit of something. We might measure benefit in hypothetical utils of satisfaction; or in dollar terms – the maximum willingness to pay for one more unit. As the science of choice, the core economic framework is remarkably simple: all activities are undertaken to the point where marginal cost equals marginal benefit. Why? Because at this point total net benefit is maximized. An example will help. Imagine we are old-style Soviet planners, trying to determine the quantity of Russian-style fur hats to produce. Let’s assume that the marginal cost of producing a fur hat increases the more we produce – so we draw it as the upward-sloping line in the upper diagram of Figure 1.1. Further assume that the more hats are produced, the less one more hat is valued – so the marginal benefit line slopes down. How many hats should we produce? If we produce only $\mathrm{Q}_{1}$ units, the marginal benefit of one more hat is \$6, but the marginal cost is only \$3. This means that the extra benefit of one more unit is greater than the extra cost of producing it. Therefore, we can improve society’s well-being by producing one more. This remains true as we increase production to $Q^{*}$. But we should not produce more than $Q^{*}$. Beyond that point marginal cost exceeds marginal benefit, reducing total net benefit from hat production. Total net benefit is shown in the lower diagram of Figure 1.1. Clearly, this is maximized at an output of $Q^{*}$.
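The equate-at-the-margin rule can be illustrated with made-up linear schedules (the slopes and intercepts below are assumptions for illustration, not the values behind Figure 1.1): accumulating net benefit unit by unit, the total peaks exactly where marginal benefit stops exceeding marginal cost.

```python
def mb(q):
    """Marginal benefit of the q-th hat (hypothetical, downward-sloping)."""
    return 9.0 - 0.5 * q

def mc(q):
    """Marginal cost of the q-th hat (hypothetical, upward-sloping)."""
    return 1.0 + 0.5 * q

net, best_q, best_net = 0.0, 0, 0.0
for q in range(1, 20):
    net += mb(q) - mc(q)        # net benefit contributed by the q-th unit
    if net >= best_net:         # ties go to producing the marginal unit
        best_q, best_net = q, net

print(best_q, best_net)   # 8 28.0 -- mb(8) == mc(8), the "Q*" of the text
```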
https://www.gradesaver.com/textbooks/math/geometry/geometry-common-core-15th-edition/chapter-1-tools-of-geometry-1-8-perimeter-circumference-and-area-practice-and-problem-solving-exercises-page-67/62
## Geometry: Common Core (15th Edition) Use the distance formula to find AB. $d=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$ $d=\sqrt{(-2-4)^2+(3-(-1))^2}$ $d=\sqrt{(-6)^2+(4)^2}$ $d=\sqrt{36+16}$ $d=\sqrt{52}$ $d\approx7.2$ The distance from the midpoint is half the distance from A to B. $MB=\frac{AB}{2}=\frac{7.2}{2}=3.6$
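The same computation in code (the coordinates A(4, −1) and B(−2, 3) are inferred from the numbers substituted into the distance formula above):

```python
import math

A, B = (4, -1), (-2, 3)
AB = math.hypot(B[0] - A[0], B[1] - A[1])   # sqrt(52)
MB = AB / 2                                  # distance from the midpoint to B
print(round(AB, 1), round(MB, 1))            # 7.2 3.6
```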
http://hitchhikersgui.de/Digital_geometry
# Digital geometry Digital geometry deals with discrete sets (usually discrete point sets) considered to be digitized models or images of objects of the 2D or 3D Euclidean space. Simply put, digitizing is replacing an object by a discrete set of its points. The images we see on the TV screen, the raster display of a computer, or in newspapers are in fact digital images. Its main application areas are computer graphics and image analysis. Main aspects of study are: • Constructing digitized representations of objects, with the emphasis on precision and efficiency (either by means of synthesis, see, for example, Bresenham's line algorithm or digital disks, or by means of digitization and subsequent processing of digital images). • Study of properties of digital sets; see, for example, Pick's theorem, digital convexity, digital straightness, or digital planarity. • Transforming digitized representations of objects, for example (A) into simplified shapes such as (i) skeletons, by repeated removal of simple points such that the digital topology of an image does not change, or (ii) medial axis, by calculating local maxima in a distance transform of the given digitized object representation, or (B) into modified shapes using mathematical morphology. • Reconstructing "real" objects or their properties (area, length, curvature, volume, surface area, and so forth) from digital images. • Study of digital curves, digital surfaces, and digital manifolds. • Designing tracking algorithms for digital objects. • Functions on digital space. Digital geometry heavily overlaps with discrete geometry and may be considered as a part thereof. ## Digital space A 2D digital space usually means a 2D grid space that only contains integer points in 2D Euclidean space. A 2D image is a function on a 2D digital space (see image processing). In Rosenfeld and Kak's book, digital connectivity is defined as a relationship among elements of digital space.
For example, 4-connectivity and 8-connectivity in 2D. Also see pixel connectivity. A digital space and its (digital) connectivity determine a digital topology. In digital space, the digitally continuous function (A. Rosenfeld, 1986) and the gradually varied function (L. Chen, 1989) were proposed, independently. A digitally continuous function is a function in which the value (an integer) at a digital point is the same as, or differs by at most 1 from, the values at its neighbors. In other words, if $x$ and $y$ are two adjacent points in a digital space, $|f(x) - f(y)| \leq 1$. A gradually varied function is a function from a digital space $\Sigma$ to $\{A_{1},\dots ,A_{m}\}$ where $A_{1}<\cdots<A_{m}$ and the $A_{i}$ are real numbers. This function possesses the following property: if $x$ and $y$ are two adjacent points in $\Sigma$ and $f(x)=A_{i}$, then $f(y)=A_{i}$, $f(y)=A_{i+1}$, or $f(y)=A_{i-1}$. So we can see that the gradually varied function is defined to be more general than the digitally continuous function. An extension theorem related to the above functions was mentioned by A. Rosenfeld (1986) and completed by L. Chen (1989). The theorem states: let $D\subset \Sigma$ and $f:D\rightarrow \{A_{1},\dots ,A_{m}\}$. The necessary and sufficient condition for the existence of a gradually varied extension $F$ of $f$ is: for each pair of points $x$ and $y$ in $D$ with $f(x)=A_{i}$ and $f(y)=A_{j}$, we have $|i-j|\leq d(x,y)$, where $d(x,y)$ is the (digital) distance between $x$ and $y$.
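Rosenfeld's condition for digital continuity is easy to test on a concrete grid; a minimal sketch with 4-adjacency (the helper name is mine, not standard terminology):

```python
def digitally_continuous(img):
    """True iff every pair of 4-adjacent pixels differs by at most 1
    (Rosenfeld's condition for a digitally continuous function)."""
    rows, cols = len(img), len(img[0])
    for r in range(rows):
        for c in range(cols):
            for rr, cc in ((r, c + 1), (r + 1, c)):   # right and down neighbours
                if rr < rows and cc < cols and abs(img[r][c] - img[rr][cc]) > 1:
                    return False
    return True

ramp = [[0, 1, 2], [1, 2, 3], [2, 3, 4]]   # values vary gradually
jump = [[0, 5, 2], [1, 2, 3], [2, 3, 4]]   # the step 0 -> 5 violates the condition
print(digitally_continuous(ramp), digitally_continuous(jump))   # True False
```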
https://socratic.org/questions/a-triangle-has-sides-a-b-and-c-sides-a-and-b-have-lengths-of-2-and-1-respectivel-2
# A triangle has sides A, B, and C. Sides A and B have lengths of 2 and 1, respectively. The angle between A and C is (5pi)/24 and the angle between B and C is (5pi)/24. What is the area of the triangle?

Nov 17, 2017

$\approx 0.966$

#### Explanation:

$\text{Area of a triangle} = \frac{1}{2} a b \sin C$

$a$ and $b$ are already known as $2$ and $1$, so $\frac{1}{2} \cdot 2 \cdot 1 = 1$. $\sin C$ is less obvious.

$\angle ab = C$, $\angle ac = \frac{5 \pi}{24}$, and $\angle bc = \frac{5 \pi}{24}$

$\Sigma \angle = \angle ab + \angle bc + \angle ac = \pi$

$\angle ab = \pi - \angle ac - \angle bc = \pi - 2 \left(\frac{5 \pi}{24}\right) = \frac{7 \pi}{12}$

$\text{Area of the triangle} = \sin C = \sin \left(\frac{7 \pi}{12}\right) \approx 0.966$

Feb 17, 2018

Triangle cannot exist with the given information.

#### Explanation:

Given: $a = 2, \hat{A} = \frac{5 \pi}{24}, b = 1, \hat{B} = \frac{5 \pi}{24}$

Though the two angles are equal, the sides are not. In any triangle, the largest side and largest angle are opposite one another, and the smallest side and smallest angle are opposite one another. Alternately, if two angles are congruent (equal in measure), then the corresponding two sides will be congruent (equal in measure).

Since the above condition is not satisfied, the triangle given in the question cannot exist.
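Both answers can be checked numerically; this quick sketch is not part of the original answers. It evaluates the area formula with the included angle $C = \pi - 2 \cdot \frac{5\pi}{24} = \frac{7\pi}{12}$, and then tests the law-of-sines consistency that the second answer relies on.

```python
import math

a, b = 2.0, 1.0
A_hat = B_hat = 5 * math.pi / 24      # the two given angles

# Included angle between sides a and b:
C = math.pi - A_hat - B_hat           # = 7*pi/12
area = 0.5 * a * b * math.sin(C)
print(round(area, 3))                 # 0.966 (sin(7*pi/12) is close to 1, not 0.032)

# Law of sines: equal opposite angles would force equal sides.
# Here a != b while A_hat == B_hat, so the data are inconsistent.
print(math.isclose(a / math.sin(A_hat), b / math.sin(B_hat)))  # False
```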
http://mathoverflow.net/questions/82333/building-a-polyhedron-from-areas-of-its-faces/82338
# Building a polyhedron from areas of its faces

Is there a known algorithm which, given a finite multiset (unordered list) of integers $A$, returns a yes/no answer for "Is there a polyhedron such that the multiset of areas of all its faces is exactly $A$?"? Is there a known general algorithm for $n$-dimensional polytopes?

- There is one for two dimensions. If you don't mind prisms, you can likely adapt it to arbitrary dimensions. Gerhard "Ask Me About System Design" Paseman, 2011.11.30 –  Gerhard Paseman Dec 1 '11 at 1:34

I can answer your question with the specialization to convex polyhedra and polytopes. Specializing further to $\mathbb{R}^3$, the result is that $n \ge 4$ positive real numbers are the face areas of a convex polyhedron if and only if the largest number is not more than the sum of the others. I wrote up a short note establishing this: "Convex Polyhedra Realizing Given Face Areas," arXiv:1101.0823. The result relies on Minkowski's 1911 theorem, which perhaps you know:

Theorem (Minkowski). Let $A_i$ be positive face areas and $n_i$ distinct, noncoplanar unit face normals, $i=1,\ldots,n$. Then if $\sum_i A_i n_i = 0$, there is a closed convex polyhedron whose faces uniquely realize those areas and normals.

This theorem reduces the problem to finding orientations $n_i$ so that vectors of length $A_i$ at those orientations sum to zero. And this is not difficult. Here is Figure 3 from my note from which you can almost infer the construction:

Minkowski's theorem generalizes to $\mathbb{R}^d$ and so does an analog of the above claim (but I did not work that out in detail in the arXiv note). In terms of an algorithm, the decision question is linear in the number $n$ of facet areas, and even constructing the polyhedron is linear in $\mathbb{R}^3$, and likely $O(dn)$ in $\mathbb{R}^d$ (but again, I didn't work that out). But you don't mention the word "convex" in your post, so perhaps you are interested in nonconvex polyhedra and polytopal complexes?
- Although I did not mention polygons, I suggested such in my comment above. There, for $n \ge 3$, the condition is the same: no side has length of at least half the sum of all lengths. For arbitrary dimensions, I suspect the same holds true of any nontrivial polytope, convex or not: no $k$-face has $k$-measure of at least half the sum of the $k$-measures of all $k$-faces. Triangular prisms should show that one cannot replace half by anything smaller. Gerhard "Ask Me About System Design" Paseman, 2011.11.30 –  Gerhard Paseman Dec 1 '11 at 1:53

Let me mention that in a certain sense, the dual problem was resolved by Zil'berberg in 1962. This paper is not available in English I think, but it is stated as Exercise 35.9 in my book: math.ucla.edu/~pak/book.htm There, the areas are replaced by curvatures, which satisfy the Gauss-Bonnet formula, and the proof is via an easy reduction to Alexandrov's "ray theorem", which is dual to Minkowski's theorem (but neither easily implies the other). –  Igor Pak Dec 1 '11 at 2:16

(cont'd) One curious extension of Zil'berberg's proof is that the polytope can be made simple (for an even number of vertices). I bet your theorem extends to make all polytopes simplicial. –  Igor Pak Dec 1 '11 at 2:17

I should be more careful in my statements. Let a polytope live in $R^d$. For positive integral $k$ less than $d$ there is a constant $c_{(k,d)}$ such that the sum of the measures of all $k$-faces of the polytope times that constant is greater than the $k$-measure of any single $k$-face. When $k = d-1$, I assert that the constant is $1/2$. For smaller $k$, smaller constants are possible, and are likely to be $c_{(k,d)} = 1/(1+d-k)$. Gerhard "Ask Me About System Design" Paseman, 2011.11.30 –  Gerhard Paseman Dec 1 '11 at 2:17

As a remark on the case of an $n$-dimensional non-convex polytope $P \subset \mathbb{R}^n$, Gerhard's condition is still necessary. Indeed, for any face $f$, the orthogonal projection onto the hyperplane containing that face is 1-Lipschitz and maps the $(n-1)$-skeleton of $P$ minus $f$ surjectively onto $f$. (The easy reason is: $f$ has an inner and an outer side, essentially by hypothesis on $P$, so that the line orthogonal to $f$ at $x \in f$ also meets $\partial P$ in another point $y \notin f$.) The analogous argument seems to work also for $k$-faces (giving the constant $1/2$, possibly non-optimal). –  Pietro Majer Dec 2 '11 at 11:55
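The $\mathbb{R}^3$ criterion stated in the answer ($n \ge 4$ positive areas are realizable if and only if the largest is at most the sum of the others) gives the linear-time decision procedure mentioned there; a minimal sketch, with an illustrative function name:

```python
def realizable_face_areas(areas):
    """Decide whether a multiset of positive reals can be the face areas
    of a closed convex polyhedron in R^3: at least 4 faces are needed,
    and the largest area must not exceed the sum of the remaining ones."""
    if len(areas) < 4 or min(areas) <= 0:
        return False
    largest = max(areas)
    return largest <= sum(areas) - largest

print(realizable_face_areas([1, 1, 1, 1]))   # True (e.g. a regular tetrahedron)
print(realizable_face_areas([10, 1, 1, 1]))  # False: 10 > 1 + 1 + 1
```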
https://en.universaldenker.org/lessons/286
# Linearly and Circularly Polarized Electromagnetic Waves

Level 3 (up to Physics B.Sc.). Level 3 requires the basics of vector calculus, differential and integral calculus. Suitable for undergraduates and high school students.

Updated by Alexander Fufaev

Consider a plane, periodic electromagnetic wave in vacuum. It has an electric field $$\boldsymbol{E}$$ and a magnetic field $$\boldsymbol{B}$$. For polarization only the E-field $$\boldsymbol{E} = (E_{\text x}, E_{\text y}, E_{\text z})$$ is relevant. The individual E-field components of a plane wave are:

$$E_{\text x} = E_{0 \text x} \, \cos(\omega t - k z + \alpha), \quad E_{\text y} = E_{0 \text y} \, \cos(\omega t - k z + \beta), \quad E_{\text z} = E_{0 \text z} \, \cos(\omega t - k z + \gamma) \tag{1}$$

Here $$\boldsymbol{E}_0 = (E_{0 \text x}, E_{0 \text y}, E_{0 \text z})$$ is the amplitude of the E-field, $$\omega$$ the angular frequency, $$k$$ the wave number, and $$\alpha, \beta, \gamma$$ are phases to allow for a possible phase shift between the vector components. Since the electromagnetic wave is plane, Eq. 1 depends only on one position coordinate (here it is the $$z$$ coordinate). And, since the wave is periodic, it is described by a sine or cosine function (here it is cosine). Furthermore, the wave propagates in the $$z$$ direction.

Light, i.e. an electromagnetic wave, can be polarized with a polarization filter, for example. Mathematically, this means that the E-field components 1 are linked to certain conditions, depending on the type of polarization. For this purpose, let's look at two important types of polarization and their conditions, namely linear and circular polarization.

One condition that both types of polarization must meet is:

Condition #1: $$E_{0 \text z} = 0$$

Thus the $$E_{\text z}$$ component of the E-field is zero:

$$\boldsymbol{E} = \begin{pmatrix} E_{0 \text x} \, \cos(\omega t - k z + \alpha) \\ E_{0 \text y} \, \cos(\omega t - k z + \beta) \\ 0 \end{pmatrix} \tag{2}$$

## Linearly polarized plane wave

A linearly polarized electric wave must also satisfy the following condition besides condition #1:

Condition #2: $$\alpha = \beta$$ (both components oscillate in phase)

You wonder why it has to be that way? Because this is a definition!

If the conditions #1 and #2 are fulfilled, then we speak of linearly polarized plane waves. According to condition #2, the phases $$\omega \, t - k\,z + \alpha$$ and $$\omega \, t - k\,z + \beta$$ must be equal. For this, $$\alpha = \beta$$ must be satisfied. For simplicity, let's set $$\alpha$$ and $$\beta$$ equal to zero (the important part is that they are BOTH equal to zero):

$$E_{\text x} = E_{0 \text x} \, \cos(\omega t - k z), \quad E_{\text y} = E_{0 \text y} \, \cos(\omega t - k z), \quad E_{\text z} = 0 \tag{3}$$

Of course, we can write down this E-field vector compactly and get:

$$\boldsymbol{E} = \boldsymbol{E}_0 \, \cos(\omega t - k z) \tag{4}$$

## Circularly polarized wave

For a circularly polarized wave, the phase shift $$\beta - \alpha$$ between the two E-field components is not zero, as it is for a linearly polarized wave, but $$\pm \pi/2$$ (i.e., 90 degrees). Let us apply this definition to the E-field 2:

$$E_{\text x} = E_{0 \text x} \, \cos(\omega t - k z), \quad E_{\text y} = E_{0 \text y} \, \cos\!\left(\omega t - k z - \frac{\pi}{2}\right) \tag{5}$$

Since cosine and sine are also 90 degrees out of phase, the second E-field component in 5 can be replaced with sine:

$$E_{\text x} = E_{0 \text x} \, \cos(\omega t - k z), \quad E_{\text y} = E_{0 \text y} \, \sin(\omega t - k z) \tag{6}$$

Another condition that a circularly polarized wave must meet is:

Condition #3: $$E_{0 \text x} = E_{0 \text y} =: E_0$$ (both components have equal amplitude)

With the third condition, E-field 6 becomes:

$$\boldsymbol{E} = E_0 \begin{pmatrix} \cos(\omega t - k z) \\ \sin(\omega t - k z) \\ 0 \end{pmatrix} \tag{7}$$

The E-field 7 corresponds exactly to the polar representation. Thus, when the time $$t$$ changes, the E-field vector $$\boldsymbol{E}$$ rotates in the $$x$$-$$y$$ plane (see illustration 2). This is where the term "circular" comes from. Along the $$z$$-axis the E-field vector thus spirals. If the circularly polarized plane wave is viewed orthogonal to the $$x$$-$$y$$ plane in the propagation direction, the E-field vector rotates rightwards for the observer. Therefore the E-field vector 6 is called a right-circularly polarized wave (or short: $$\sigma^{+}$$ wave). If cosine and sine are interchanged in 6, the field vector turns left for the observer. This wave is called a left-circularly polarized wave (or short: $$\sigma^{-}$$ wave):

$$\boldsymbol{E} = E_0 \begin{pmatrix} \sin(\omega t - k z) \\ \cos(\omega t - k z) \\ 0 \end{pmatrix} \tag{8}$$

Now you should have a theoretical understanding of the definitions of linearly and circularly polarized plane waves.
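The distinction can be sketched numerically (assuming unit amplitudes and the phase $\omega t - kz$ sampled over one period): for the linear case the field direction is fixed while its magnitude oscillates, whereas for the circular case the magnitude stays constant while the direction rotates.

```python
import math

phases = [2 * math.pi * i / 100 for i in range(100)]  # samples of (wt - kz)

# Linear polarization: both components in phase -> fixed direction, varying length.
linear = [(math.cos(p), math.cos(p)) for p in phases]
# Right-circular polarization: 90-degree shift -> constant length, rotating direction.
circular = [(math.cos(p), math.sin(p)) for p in phases]

lin_mags = {round(math.hypot(x, y), 6) for x, y in linear}
cir_mags = {round(math.hypot(x, y), 6) for x, y in circular}

print(len(cir_mags))      # 1: |E| is constant on the circle
print(len(lin_mags) > 1)  # True: |E| oscillates along a fixed line
```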
https://www.physicsforums.com/threads/uniform-convergence.720807/
# Homework Help: Uniform convergence

1. Nov 4, 2013

### Lee33

1. The problem statement, all variables and given/known data

Is the sequence of functions $f_1, f_2, f_3, \ldots$ on $[0,1]$ uniformly convergent if $f_n(x) = \frac{x}{1+nx^2}$?

2. The attempt at a solution

I got the following, but I think I did it wrong. For $f_n(x) = \frac{x}{1+nx^2}$, I got: if $f_n \to 0$, then for every $\epsilon > 0$ we must find an $N$ such that $n > N$ implies $|f_n - 0| < \epsilon.$ So $f_n(x) = \frac{x}{1+nx^2}$ and $\lim_{n\to\infty} f_n(x) = 0$. Then for $\epsilon > 0$ we have $|f_n(x) - f(x)| = |\frac{x}{1+nx^2}| \le |\frac{1}{1+n}| < |\frac{1}{n}| < \epsilon$, thus $N = \frac{1}{\epsilon}$. But I think this is wrong, since is $|1+nx^2| < |1+n|$? How can I show it is uniformly convergent?

2. Nov 4, 2013

### jbunniii

$f_n \rightarrow 0$ uniformly if and only if $\sup f_n \rightarrow 0$. So I would suggest that you start by finding the maximum value of $f_n$.

3. Nov 4, 2013

### Lee33

jbunniii - We haven't been taught that way yet. We still haven't defined "derivative" or proved that $f_n \to 0$ uniformly iff $\sup f_n \to 0$, so I can't use it to prove my problem. What I was thinking is: how can I divide it into two regions?

4. Nov 4, 2013

### jbunniii

Consider the denominator: $1 + nx^2$. This is not quite a perfect square, but it would be if we added the missing term (actually, subtracting is more useful here): $1 - 2\sqrt{n}\, x + nx^2 = (1 - \sqrt{n}x)^2$. Now this is a square, so it is nonnegative. What can you conclude?

5. Nov 4, 2013

### Lee33

Thanks, jbunniii, for the hint!
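Following jbunniii's hint, $(1 - \sqrt{n}x)^2 \ge 0$ gives $1 + nx^2 \ge 2\sqrt{n}\,x$, hence $0 \le f_n(x) \le \frac{1}{2\sqrt{n}}$ on $[0,1]$, with equality at $x = 1/\sqrt{n}$; since the bound tends to $0$, the convergence is uniform. A quick numeric check of this bound:

```python
import math

def f(n, x):
    return x / (1 + n * x * x)

for n in (1, 4, 100, 10000):
    # Maximum over a fine grid on [0, 1] ...
    grid_max = max(f(n, i / 10000) for i in range(10001))
    bound = 1 / (2 * math.sqrt(n))
    # ... never exceeds 1/(2*sqrt(n)), and the bound is attained at x = 1/sqrt(n).
    assert grid_max <= bound + 1e-12
    assert math.isclose(f(n, 1 / math.sqrt(n)), bound)
    print(n, round(bound, 5))
```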
http://math.stackexchange.com/questions/112016/properties-on-normed-vector-space
properties on normed vector space

Let $X \neq \{0\}$ be a normed vector space. Prove the following:

(a) $X$ does not have isolated points.

(b) If $x, y \in X$ are such that $\|x - y\| = \epsilon > 0$, then

1. there exists a sequence $(y_n)_n$ in $X$ such that $\|y_n - x\| < \epsilon$ for all $n$ and $y_n \to y$;
2. there exists a sequence $(y'_n)_n$ in $X$ such that $\|y'_n - x\| > \epsilon$ for all $n$ and $y'_n \to y$.

- If it is homework, put an appropriate tag. What did you try? –  Ilya Feb 22 '12 at 12:59

1 Answer

Hints.

a. For $x \in X$ you can define $x_n = (1 - \frac{1}{n})x$ and find $\|x - x_n\|$.

b1. Consider $y_n = \alpha_n x + (1 - \alpha_n)y$ with $\alpha_n \in (0,1)$ and $\alpha_n \to 0$ as $n \to \infty$. I guess for b2 you can imagine a similar example.

- Thanks for the reply! –  passenger Feb 22 '12 at 16:28
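The hint for (a) can be illustrated numerically: with $x_n = (1 - \frac{1}{n})x$ one gets $\|x - x_n\| = \|x\|/n \to 0$ while $x_n \neq x$ for nonzero $x$, so $x$ is not isolated. A sketch in $\mathbb{R}^2$ with the Euclidean norm (the particular vector is an arbitrary choice):

```python
import math

def norm(v):
    """Euclidean norm on R^2 (any norm works for the argument)."""
    return math.sqrt(sum(t * t for t in v))

x = (3.0, 4.0)  # any nonzero vector; norm(x) = 5
for n in (1, 10, 1000):
    x_n = tuple((1 - 1 / n) * t for t in x)
    diff = tuple(a - b for a, b in zip(x, x_n))
    # ||x - x_n|| = ||x|| / n, which shrinks to 0 as n grows
    assert math.isclose(norm(diff), norm(x) / n)
```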
https://dynamische-vwl.de/english/micro/dynvwlengch1.html
Chapter 1: The demand curve

The demand curve represents the willingness to pay for a good or service of all consumers in a market. The demand curve $D(p)$ indicates, at a price $p$, the total quantity of the good demanded. The (market) demand curve is the sum of all individual demands. A point on the demand curve has the following meaning (see graph): for this price (y-axis), this quantity of the good (x-axis) is demanded. Please note: in contrast to the usual mathematical representation, the free variable (the price) is shown on the ordinate (y-axis) and the dependent variable (the quantity) on the abscissa (x-axis).

The demand curve is generally negatively sloped, meaning that the higher the price, the less demand there is. There are two main reasons for this. Firstly, if the price is higher, fewer customers are willing to buy the good, so the number of consumers decreases. Secondly, individual demand also decreases at a higher price (we will deal with exceptions such as the snob effect later). This can be explained by the budget effect or by alternatives.

The budget effect can be illustrated by an example: on a hot summer day, a boy wants to buy ice cream at an ice cream parlor with 5 euros in his pocket. If a scoop costs 50 cents, he can afford 10 scoops. If a scoop costs 70 cents, he can only buy 7 scoops; at 1 euro per scoop he can buy 5 scoops, and at 2 euros per scoop the boy can only afford 2 scoops. At a price above 5 euros per scoop, he can no longer buy a scoop; the demand is zero. The price at which there is no more demand for a good is called the prohibitive price. The quantity that is demanded at a price of zero, i.e. when the good is given away, is called the saturation quantity. It is finite, since every commodity will eventually reach saturation, even ice cream in summer. As the example shows, demand curves are actually step-shaped, as only whole units or certain fractions can be demanded.
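The budget effect in the ice-cream example reduces to integer division of the 5-euro budget by the price; a minimal sketch (the function name is illustrative):

```python
def scoops_demanded(price, budget=5.0):
    """Whole scoops affordable at a given price (the budget effect)."""
    if price <= 0:
        raise ValueError("price must be positive")
    return int(budget // price)

for price in (0.5, 0.7, 1.0, 2.0, 6.0):
    print(price, scoops_demanded(price))  # 10, 7, 5, 2, 0 scoops
```

Plotting quantity against price reproduces the step shape described above.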
As a rule, however, demand curves are not modeled as step functions but as smooth curves, for example as straight lines as in the graphic above. This rests on the assumption that in sufficiently large markets (for example, 80 million consumers in Germany) the steps become infinitesimally small.

(c) by Christian Bauer
Prof. Dr. Christian Bauer
Chair of monetary economics
Trier University
D-54296 Trier
Tel.: +49 (0)651/201-2743
E-mail: Bauer@uni-trier.de
URL: https://www.cbauer.de
http://math.stackexchange.com/questions/99031/exponential-equation-factoring
# Exponential equation factoring

I've empirically produced this exponential equation to express a graphical representation:

$$y = \left(a^x + (bx)^2\right) \left(\left(1-10^{-x}\right) x\right)$$

I know the constants $a$ and $b$. Now I would like to rearrange the formula to be able to calculate the $x$ values separately. How can I solve this equation for $x$?

$$x = ?$$

I haven't practiced maths for many years, and I've almost completely forgotten all of the factorization rules, so I'm likely stuck... Some help would be greatly appreciated.

EDIT: I forgot to say that the $a$ and $b$ values can be restricted to a range allowing an acceptable solution to be found. For example, $a = 1$ and $b = 10$.

- For $y$ in a limited range, there may be a formula $F(y)$ such that $x$ is well-approximated by $F(y)$. An exact formula for $x$ in terms of $y$ is hopeless. –  André Nicolas Jan 14 '12 at 16:35

In fact I have a fixed range of $x$ for which I would like to find $y$. With these $y$ values I would like to be able to find the corresponding $x$. You can check the spreadsheet link I posted to fully understand my problem. –  Puls Jan 14 '12 at 17:01

Unfortunately, I do not have the specialized skills to produce a good formula. –  André Nicolas Jan 14 '12 at 17:05

That is too complicated to solve explicitly, but it may be possible to solve numerically, if there is a solution (assuming $a$ is positive, there may be some negative values of $y$ for which there is no real solution). For example, if $y=2$, $a=3$ and $b=4$, then $x \approx -0.403132$ and $x \approx 0.503299$ are solutions. It looked to me as if $(0,0)$ was the minimum point on the curve, so I just found positive and negative $x$ which gave $y$ too high and then used bisection methods to find solutions: there are many other ways of doing it. An alternative (at least to start) would be to draw the curve. –  Henry Jan 14 '12 at 20:50
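Henry's bisection approach can be sketched directly, assuming his example values $a = 3$, $b = 4$, $y = 2$, and that the positive root is bracketed in $[0, 1]$ where the function is increasing:

```python
def f(x, a=3.0, b=4.0):
    return (a ** x + (b * x) ** 2) * ((1 - 10 ** -x) * x)

def bisect(target, lo, hi, tol=1e-9):
    """Find x in [lo, hi] with f(x) = target, assuming f(lo) < target < f(hi)
    and that f - target changes sign exactly once on the bracket."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = bisect(2.0, 0.0, 1.0)
print(round(x, 6))  # close to Henry's positive solution, x ≈ 0.503299
```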
https://beam.apache.org/releases/pydoc/2.30.0/apache_beam.utils.retry.html
# apache_beam.utils.retry module

Retry decorators for calls raising exceptions.

For internal use only; no backwards-compatibility guarantees.

This module is used mostly to decorate all integration points where the code makes calls to remote services. Searching through the code base for @retry should find all such places. For this reason, even places where retry is not needed right now use a @retry.no_retries decorator.

exception apache_beam.utils.retry.PermanentException[source]
Bases: Exception
Base class for exceptions that should not be retried.

class apache_beam.utils.retry.FuzzedExponentialIntervals(initial_delay_secs, num_retries, factor=2, fuzz=0.5, max_delay_secs=3600, stop_after_secs=None)[source]
Bases: object
Iterable for intervals that are exponentially spaced, with fuzzing.

On iteration, yields retry interval lengths, in seconds. Every iteration over this iterable will yield differently fuzzed interval lengths, as long as fuzz is nonzero.

Parameters:
- initial_delay_secs – The delay before the first retry, in seconds.
- num_retries – The total number of times to retry.
- factor – The exponential factor to use on subsequent retries. Default is 2 (doubling).
- fuzz – A value between 0 and 1, indicating the fraction of fuzz. For a given delay d, the fuzzed delay is randomly chosen between [(1 - fuzz) * d, d].
- max_delay_secs – Maximum delay (in seconds). After this limit is reached, further tries use max_delay_secs instead of exponentially increasing the time. Defaults to 1 hour.
- stop_after_secs – Places a limit on the sum of intervals returned (in seconds), such that the sum is <= stop_after_secs. Defaults to disabled (None). You may need to increase num_retries to effectively use this feature.

apache_beam.utils.retry.retry_on_server_errors_filter(exception)[source]
Filter allowing retries on server errors and non-HttpErrors.
apache_beam.utils.retry.retry_on_server_errors_and_notfound_filter(exception)[source]

apache_beam.utils.retry.retry_on_server_errors_and_timeout_filter(exception)[source]

apache_beam.utils.retry.retry_on_server_errors_timeout_or_quota_issues_filter(exception)[source]
Retry on server, timeout and 403 errors. 403 errors can be accessDenied, billingNotEnabled, and also quotaExceeded, rateLimitExceeded.

apache_beam.utils.retry.retry_on_beam_io_error_filter(exception)[source]
Filter allowing retries on Beam IO errors.

apache_beam.utils.retry.retry_if_valid_input_but_server_error_and_timeout_filter(exception)[source]

class apache_beam.utils.retry.Clock[source]
Bases: object
A simple clock implementing sleep().

sleep(value)[source]

apache_beam.utils.retry.no_retries(fun)[source]
A retry decorator for places where we do not want retries.

apache_beam.utils.retry.with_exponential_backoff(num_retries=7, initial_delay_secs=5.0, logger=<bound method Logger.warning of <Logger apache_beam.utils.retry (WARNING)>>, retry_filter=<function retry_on_server_errors_filter>, clock=<apache_beam.utils.retry.Clock object>, fuzz=True, factor=2, max_delay_secs=3600, stop_after_secs=None)[source]
Decorator with arguments that control the retry logic.

Parameters:
- num_retries – The total number of times to retry.
- initial_delay_secs – The delay before the first retry, in seconds.
- logger – A callable used to report an exception. Must have the same signature as functions in the standard logging module. The default is _LOGGER.warning.
- retry_filter – A callable getting the exception raised and returning True if the retry should happen. For instance, we do not want to retry on 404 Http errors most of the time. The default value will return true for server errors (HTTP status code >= 500) and non-Http errors.
- clock – A clock object implementing a sleep method. The default clock will use time.sleep().
- fuzz – True if the delay should be fuzzed (default). During testing False can be used so that the delays are not randomized.
- factor – The exponential factor to use on subsequent retries. Default is 2 (doubling).
- max_delay_secs – Maximum delay (in seconds). After this limit is reached, further tries use max_delay_secs instead of exponentially increasing the time. Defaults to 1 hour.
- stop_after_secs – Places a limit on the sum of delays between retries, such that the sum is <= stop_after_secs. Retries will stop after the limit is reached. Defaults to disabled (None). You may need to increase num_retries to effectively use this feature.

As per the Python decorators-with-arguments pattern, returns a decorator for the function, which in turn will return the wrapped (decorated) function.

The decorator is intended to be used on callables that make HTTP or RPC requests that can temporarily time out or have transient errors. For instance, the make_http_request() call below will be retried up to 7 times (the default num_retries) with exponential backoff and fuzzing of the delay interval:

from apache_beam.utils import retry
# ...
@retry.with_exponential_backoff()
def make_http_request(args):
  ...
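The interval schedule described for FuzzedExponentialIntervals is easy to model. The following is a re-implementation of the documented contract for illustration (not the Beam source): exponential growth by `factor`, capped at `max_delay_secs`, each delay fuzzed uniformly into [(1 - fuzz) * d, d].

```python
import random

def fuzzed_exponential_intervals(initial_delay_secs, num_retries, factor=2,
                                 fuzz=0.5, max_delay_secs=3600):
    """Yield retry delays: exponential growth by `factor`, capped at
    `max_delay_secs`, each delay fuzzed into [(1 - fuzz) * d, d]."""
    delay = initial_delay_secs
    for _ in range(num_retries):
        d = min(delay, max_delay_secs)
        yield random.uniform((1 - fuzz) * d, d)
        delay *= factor

# With fuzz=0 the schedule is deterministic: 5, 10, 20, 40, 80 seconds.
print(list(fuzzed_exponential_intervals(5, 5, fuzz=0)))
```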
https://acroname.com/reference/python/Mux.html
# Mux¶ class brainstem.stem.Mux(module, index)[source] Access MUX specialized entities on certain BrainStem modules. A MUX is a multiplexer that takes one or more similar inputs (bus, connection, or signal) and allows switching to one or more outputs. An analogy would be the switchboard of a telephone operator. Calls (inputs) come in and by re-connecting the input to an output, the operator (multiplexor) can direct that input to on or more outputs. One possible output is to not connect the input to anything which essentially disables that input’s connection to anything. Not every MUX has multiple inputs. Some may simply be a single input that can be enabled (connected to a single output) or disabled (not connected to anything). Useful Constants: • UPSTREAM_STATE_ONBOARD (0) • UPSTREAM_STATE_EDGE (1) • UPSTREAM_MODE_AUTO (0) • UPSTREAM_MODE_ONBOARD (1) • UPSTREAM_MODE_EDGE (2) • DEFAULT_MODE (UPSTREAM_MODE_AUTO) getChannel()[source] Gets the current selected channel. Param: channel (int): The channel of the mux to enable. Returns: Return NO_ERROR on success, or one of the common sets of return error codes on failure. Result.error getConfiguration()[source] Gets the configuration of the Mux. Returns: Return result object with NO_ERROR set and the current mux voltage setting in the Result.value or an Error. Result getEnable()[source] Gets the enable/disable status of the mux. Returns: Return NO_ERROR on success, or one of the common sets of return error codes on failure. Result.error getSplitMode()[source] Gets the bit packed mux split configuration. Returns: Return result object with NO_ERROR set and the current mux voltage setting in the Result.value or an Error. Result getVoltage(channel)[source] Gets the voltage of the specified channel. On some modules this is a measured value so may not exactly match what was previously set via the setVoltage interface. Refer to the module datasheet to to determine if this is a measured or stored value. 
Returns: Return result object with NO_ERROR set and the current mux voltage setting in the Result.value or an Error. Result setChannel(channel)[source] Enables the specified channel of the mux. Param: channel (int): The channel of the mux to enable. Returns: Return NO_ERROR on success, or one of the common sets of return error codes on failure. Result.error setConfiguration(config)[source] Sets the configuration of the mux. Returns: Return NO_ERROR on success, or one of the common sets of return error codes on failure. Result.error setEnable(bEnable)[source] Enables or disables the mux based on the param. Param: bEnable (bool): True = Enable, False = Disable Returns: Return NO_ERROR on success, or one of the common sets of return error codes on failure. Result.error setSplitMode(splitMode)[source] Sets the mux split configuration Returns: Return NO_ERROR on success, or one of the common sets of return error codes on failure. Result.error
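The channel-selection and enable/disable semantics documented above can be captured in a small conceptual model. This is a hedged toy sketch, not the real brainstem driver: the class `ToyMux` and its behavior are invented here purely to illustrate the switchboard idea (the actual API talks to hardware and returns `Result` objects).

```python
class ToyMux:
    """Toy model of the documented Mux semantics: a single input that can
    be routed to one selected output channel, or disconnected entirely."""

    def __init__(self, num_channels=4):
        self.num_channels = num_channels
        self.channel = None      # None models "not connected to anything"
        self.enabled = False

    def set_channel(self, channel):
        # Mirrors setChannel: route the input to the given output channel.
        if not 0 <= channel < self.num_channels:
            raise ValueError("channel out of range")
        self.channel = channel
        self.enabled = True

    def set_enable(self, enable):
        # Mirrors setEnable: disabling disconnects the input from everything.
        self.enabled = bool(enable)

    def get_channel(self):
        # Mirrors getChannel: report the selected channel, or None if disabled.
        return self.channel if self.enabled else None
```

Note how disabling the mux models the "one possible output is to not connect the input to anything" case from the description above.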
2019-02-19 23:07:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22530345618724823, "perplexity": 7599.602975024209}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247493803.26/warc/CC-MAIN-20190219223509-20190220005509-00246.warc.gz"}
http://tex.stackexchange.com/questions/135976/defining-font-color-inside-highlight-fails
# Defining font color inside highlight fails Since I wanted better readability of highlights in a would-be black & white-printed document, I needed to change both the highlighting color (gray) AND the font color (white). Depending on which command you nest inside the other, it will compile or not: \hl{\textcolor{white}{Fail}} does not work; compiling returns Package xcolor Error: Undefined color '{white}'. Here's the MWE: \documentclass{article} \usepackage{xcolor} \usepackage{color} \usepackage{soul} % Enables highlighting \sethlcolor{gray} \begin{document} \textcolor{white}{\hl{Success}} %% Error: %% Package xcolor Error: Undefined color '{white}'. \hl{\textcolor{white}{Fail}} \end{document} So, in the end, the problem is sorta solved but I thought you might have an answer as to what happens. - The error message is the one you get from \textcolor{{white}}{Fail} with spurious doubled braces, and that is in fact the command that is executed. \hl makes a pre-scan of its argument in order to do whatever it does and then re-constructs the argument to be evaluated. In this instance its reconstruction is not perfect and it inserts extra braces. You can hide the construct from the scan inside an extra pair of braces \hl{{\textcolor{white}{Fail}}} works
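To avoid sprinkling the extra brace pair throughout a document, the working construct can be wrapped in a macro. A small sketch, assuming the same xcolor/soul setup as the MWE above (the macro name \whitehl is invented here for illustration):

```latex
% Requires \usepackage{xcolor} and \usepackage{soul}, as in the MWE above.
\sethlcolor{gray}
% The extra inner braces hide \textcolor from soul's argument pre-scan,
% so its reconstruction of the argument never sees the color command.
\newcommand{\whitehl}[1]{\hl{{\textcolor{white}{#1}}}}
% Usage: \whitehl{important text}
```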
2015-03-05 03:01:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7741982340812683, "perplexity": 3143.3815300434926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463679.20/warc/CC-MAIN-20150226074103-00239-ip-10-28-5-156.ec2.internal.warc.gz"}
http://decmalpndexfucowi.gq/?binance=9953
# Definition Of Option In Hindi - Binary Options Example
2022-01-26 05:34:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8490613102912903, "perplexity": 9917.7734732104}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304915.53/warc/CC-MAIN-20220126041016-20220126071016-00206.warc.gz"}
https://stats.stackexchange.com/questions/518179/determine-family-in-gams-of-negative-values
# Determine family in GAMs of negative values I have these values (negative and positive) and I want to determine the nonlinear relationship between variable and predictor using generalized additive models (GAMs). df= data.frame(variable=c(-1.03391679,-1.324947736,-0.511549589,-0.890394361,-0.114783801,-0.607616663,-0.06972697, 0.10417567,0.346472235,0.566684541,0.552601689,0.146828173,0.352881477,0.105327722, -0.199475689,0.381211487,1.793614103,0.714099845,-0.881529659,-0.946126372,-1.731426637, -1.023895647,-1.155725351,-1.592679052,-0.788258216,-1.643160676,-1.073264013,-1.256619939,-0.840765857, -0.502305706,-0.598869913,-0.70151056,-1.162473227,-0.817998155,-0.947264438,-0.909921175, -1.015944272,-1.49645676,-0.894219391,-0.936551829,-0.977840436,-1.102238876,-1.236164349, -1.30339163,-1.110259713,-1.592403782,-0.818693844,-1.517424033,-0.461633536,-0.887032296,-0.899936415, -1.181668619,-0.760226407,-0.510525117,-0.276555603,-0.270391739,-0.763202415,-0.514927158,-0.207406064, -0.514130386,-0.787170279,-0.998968675,-0.728808123,-0.590584485,-1.133567269,-1.020126191, -1.035483352,-1.252052964,-1.701579112,-1.237738968,-0.133874299,-0.235070008, -1.495950815,-0.974074072,-1.988234189,-1.168609357,-0.495524754,-0.401234574,0.007524237,0.332921197, -0.007038695,-0.198511569,-0.576370464,-0.527011486,-0.493142973), date=c('2017-01-10','2017-01-24','2017-02-10','2017-02-21','2017-03-06','2017-03-20', '2017-04-03',"2017-04-18","2017-05-05","2017-05-16","2017-06-17","2017-06-19",'2017-07-05', "2017-07-21","2017-08-14","2017-08-29","2017-09-15","2017-10-18","2017-10-30", "2017-11-14","2017-11-30","2017-12-13","2017-12-29","2018-01-23","2018-01-31", "2018-02-16","2018-02-28","2018-03-14","2018-03-28","2018-04-13","2018-04-26", "2018-05-16","2018-05-30","2018-06-15","2018-06-29","2018-07-16","2018-07-30", "2018-08-14","2018-08-28","2018-09-17","2018-09-28","2018-10-12","2018-10-30", "2018-11-15","2018-11-30","2018-12-13","2018-12-31","2019-01-18","2019-01-31", 
"2019-02-15","2019-02-25","2019-03-14","2019-03-29","2019-04-15","2019-04-29", "2019-05-17","2019-05-29","2019-06-18","2019-06-30","2019-07-19","2019-07-31", "2019-08-15","2019-08-27","2019-09-16","2019-09-27","2019-10-15","2019-10-29", "2019-11-14","2019-11-27","2019-12-13","2019-12-27","2020-01-16","2020-01-31","2020-02-13", "2020-02-28","2020-03-12","2020-03-31","2020-04-16","2020-04-30","2020-05-14", "2020-05-29","2020-06-15","2020-06-29","2020-07-15","2020-07-28")) My variable has an almost "normal" distribution shapiro.test(df$variable) hist(df$variable) But also, descdist tells me that the distribution is lognormal (but I have negative values) descdist(cc_A$value, discrete=FALSE, boot=500) My question is: What family can I use in my GAM if I have positive and negative values (I can't use Poison, gamma or other), and it's almost normal (and I don't want to transform). I'm dealing with quasi-likelihood, but I don't know if it's okay, and I don't know how I can specify the link "" and the variance "". The model are: df$$date<-as.POSIXct(df$$date,"%Y-%m-%d",tz = "UTC") df$$date <- as.integer(as.Date(df$$date, format = "%Y-%m-%d")) mod1 <- mgcv::gam(variable ~s(date, bs="cr", k=10), data = df, method = "REML") mod2 <- mgcv::gam(variable ~s(date, bs="cr", k=10), data = df, family=quasi(link = "identity", variance = "constant"), method = "REML") summary(mod1) summary(mod2) $$$$ • Welcome to CV, Pablo. Why do you care about the distribution of your regression variables ($y$or$x\$)? GAMs will estimate relationships between both transformed and untransformed variables. Which (transformed or untransformed) do you substantively care about? Apr 4, 2021 at 17:32 • Hi @Alexis, thank you. I really thought that GAMs in mgcv need to specify the distribution family. I thought it was necessary for the penalty. Please, if I'm wrong, could you share a paper or book (S. Wood 2017 I already read it) about this. 
Apr 4, 2021 at 18:32 • We are interested in the conditional distribution of Y, not the marginal distribution; what that means is we are not interested in the distribution of the raw response data. To choose a family, if you aren't familiar with which family might be used historically with such data, consider the properties of the response; yours is continuous and both positive and negative, so the options are somewhat limited in mgcv; start with gaussian() and then go from there, checking diagnostics etc., but the scat() family for a scaled t distribution is also possible Apr 4, 2021 at 19:23 • There's also the shash() family for the sinh-arcsinh distribution. Other distributions would be available in other packages such as the GAMLSS package. Apr 4, 2021 at 19:27 • Hi @GavinSimpson, thank you for your comment. When you say "then go from there, checking diagnostics" do you mean, run the models with different families (gaussian, scat, ...) and then compare them using, e.g., AIC, deviance explained or R^2? And yes, my values are negative and positive because I am standardizing (value - average) to compare the magnitude of the change before a disturbance (the first 15 dates are control, the rest are post-disturbance) Apr 4, 2021 at 20:12
2022-12-03 01:52:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.521941065788269, "perplexity": 1639.4249106298557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710918.58/warc/CC-MAIN-20221203011523-20221203041523-00754.warc.gz"}
https://mathematica.stackexchange.com/questions/175407/does-mathematica-understand-the-concept-of-infinitesimal-increment
Does Mathematica understand the concept of infinitesimal increment? Commonly, in textbooks, the differential of a differential increment vanishes; a good example is how the Euler-Lagrange equations are derived using the differential (from step 2 to step 3 here, you can see d[q(t) + e*eta(t)] becomes d[q(t)] because eta(t) is an infinitesimal change). And it makes sense because such small changes are considered negligible, and that is essential to define derivative rules (to see why, I recommend this video at 3:00). I would like to replicate similar behaviour in Mathematica to check the correctness of certain derivations I made. Without it vanishing, my expressions will grow and not exclude negligible terms. I used the DifferentialD operator to represent the differential increment; however, Mathematica doesn't seem to replicate such behaviour. I started with the following DifferentialD[x + DifferentialD[y]] I also tried to check whether the following expression would be equal to zero, but it wasn't the case. DifferentialD[DifferentialD[y]] Could someone more experienced with Mathematica (or symbolic computation in general) comment on that? EDIT: It appears that the problem of how to handle operations on infinitesimal quantities is deeper than just simple algebra. For reference I'd like to share this thread; it has a lot of good references and explanations. • The documentation of DifferentialD says "DifferentialD[x] has no built-in meaning." – Henrik Schumacher Jun 15 '18 at 20:07 • There would be much less confusion in this world if people finally reached 20th century mathematics, left this "infinitesimal change" business, and learned what a (Fréchet or Gâteaux) derivative is. Actually, in the source you cite, $\eta$ is not infinitesimal at all. It is just a tangent vector and what is performed there is simply the directional derivative in that direction. – Henrik Schumacher Jun 15 '18 at 20:16 • I get what you're saying, @Henrik. Thank you for the quick feedback.
The source says "little variation η(t), although infinitesimal", which really is confusing. – Marek Jun 15 '18 at 22:15 • The infinitesimal part is $\epsilon$, not $\eta$. – Michael E2 Jun 16 '18 at 1:43 I don't really approve of your intention but anyway. You can use DifferentialD[x_ + y_] := DifferentialD[x] + DifferentialD[y] DifferentialD[DifferentialD[x_]] := 0 With this, DifferentialD[x + DifferentialD[y]] evaluates to DifferentialD[x], as required. For completeness, DifferentialD[x_^n_] := n x^(n - 1) DifferentialD[x] DifferentialD[x_ y_] := x DifferentialD[y] + y DifferentialD[x] • Note: there is a built-in that does almost what OP wants, Dt, but one has to unprotect it and make it nilpotent, Dt[Dt[x_]] := 0. – AccidentalFourierTransform Jun 15 '18 at 21:13 • Are you aware that this would make all functions linear? – Henrik Schumacher Jun 15 '18 at 21:59 • @AccidentalFourierTransform since you don't approve this initiative, could you provide some examples of derivations that don't require this? For instance in the example I provided with Euler-Lagrange equations, how could that step be skipped? Because I agree with you and Henrik that this formalism is really ugly and I would love to learn a better way. – Marek Jun 15 '18 at 22:19 • @Marek For E-L, you need Variational Methods. – AccidentalFourierTransform Jun 15 '18 at 22:28
2020-02-26 02:31:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.324674516916275, "perplexity": 1409.485403839206}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146176.73/warc/CC-MAIN-20200225233214-20200226023214-00275.warc.gz"}
https://math.stackexchange.com/questions/1495275/conditional-probability-and-joint-probability
# Conditional probability and joint probability This is trivial. Regarding probability and random variables: are the following probabilities equivalent or similar? $P(A=a \mid B=b)$ Conditional probability $P(A=a,B=b)$ Joint probability If not, what do they both mean? • – georg Oct 24 '15 at 13:31 • @georg Exactly the clarity I was looking for! – Ben Winding Oct 24 '15 at 13:33 They are very different. The conditional probability is the probability that $A$ occurs given that you know that $B$ occurs. If, for example, $A=B$, then this is $1$. The joint probability is the probability that both occur. If, for example, $A=B$, then this is $P(A)$. To give a concrete example, consider one toss of a fair die. Let $A$ denote the event "you throw a $2$"; let $B$ be the event "you throw less than a $4$". Then the conditional probability $P(A\mid B)$ is the probability that you have thrown a $2$ given that you know you have thrown less than a $4$. That value is $\frac 13$. The joint probability is the probability that both occur, which is $\frac 16$. $P(A=a\mid B=b)$ is the conditional probability that random variable $A$ has value $a$ when it is given that variable $B$ has value $b$. $P(A=a, B=b)$ is the joint probability that random variable $A$ has value $a$ and that random variable $B$ has value $b$. These are related as follows: $$P(A=a,B=b)= P(B=b)\;P(A=a\mid B=b)$$
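The die example above can be checked by exhaustively enumerating the six equally likely outcomes — a short Python sketch of the definitions:

```python
from fractions import Fraction

outcomes = range(1, 7)                  # one toss of a fair die
A = {o for o in outcomes if o == 2}     # "you throw a 2"
B = {o for o in outcomes if o < 4}      # "you throw less than a 4"

def P(event):
    # Probability of an event under the uniform distribution on six outcomes.
    return Fraction(len(event), 6)

joint = P(A & B)                        # P(A, B): both occur
conditional = P(A & B) / P(B)           # P(A | B): A given that B occurred

# joint == 1/6, conditional == 1/3, and P(A, B) == P(B) * P(A | B)
```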
2020-05-31 17:02:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9263355731964111, "perplexity": 204.2624886558759}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413551.52/warc/CC-MAIN-20200531151414-20200531181414-00110.warc.gz"}
https://projecteuclid.org/euclid.jsl/1183742433
## Journal of Symbolic Logic ### A Model in which the Base-Matrix Tree Cannot have Cofinal Branches Peter Lars Dordal #### Abstract A model of ZFC is constructed in which the distributivity cardinal $\mathbf{h}$ is $2^{\aleph_0} = \aleph_2$, and in which there are no $\omega_2$-towers in $\lbrack\omega\rbrack^\omega$. As an immediate corollary, it follows that any base-matrix tree in this model has no cofinal branches. The model is constructed via a form of iterated Mathias forcing, in which a mixture of finite and countable supports is used. #### Article information Source J. Symbolic Logic, Volume 52, Issue 3 (1987), 651-664. Dates First available in Project Euclid: 6 July 2007 Mathematical Reviews number (MathSciNet) MR902981 Zentralblatt MATH identifier 0637.03049 #### Citation Dordal, Peter Lars. A Model in which the Base-Matrix Tree Cannot have Cofinal Branches. J. Symbolic Logic 52 (1987), no. 3, 651--664. https://projecteuclid.org/euclid.jsl/1183742433
2019-12-13 11:09:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4845483899116516, "perplexity": 1607.6243302681155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540553486.23/warc/CC-MAIN-20191213094833-20191213122833-00404.warc.gz"}
https://www2.sci.hokudai.ac.jp/dept/math/en/event/9700
# Hokkaido University Number Theory Seminar: Extending the conjecture of Prasad to quaternionic dual pairs (Hirotaka Kakuhama) Event Date: Dec 20, 2022 Time: 16:00-17:00 Place: Faculty of Science Building No. 3, Room 3-307 Speaker: Hirotaka Kakuhama (Mathematical Institute, Osaka Metropolitan University) Title: Extending the conjecture of Prasad to quaternionic dual pairs Abstract: In this talk, we formulate the conjecture describing the local theta correspondence of quaternionic dual pairs of almost equal rank in terms of the local Langlands correspondence. To do this, we observe some properties of the parameterizations of irreducible representations for quaternionic unitary groups. This extends the conjecture of Prasad for orthogonal-symplectic and unitary-unitary dual pairs, which had been proved by Gan-Ichino, Atobe-Gan, and Atobe. Organizers: Hiraku Atobe, Seidai Yasuda
2023-03-28 20:52:18
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8186261653900146, "perplexity": 2458.6543586139114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948871.42/warc/CC-MAIN-20230328201715-20230328231715-00404.warc.gz"}
https://www.physicsforums.com/threads/age-of-the-universe-as-measured-by-a-non-co-moving-observer.877168/
# Age of the Universe as Measured by a Non-Co-Moving Observer • I Staff Emeritus ## Main Question or Discussion Point In a recent thread I incorrectly stated that an observer moving at a high velocity relative to a co-moving observer would measure the age of the universe as more than the co-moving observer would. Apparently that is incorrect, and the age of the universe measured by the first observer would be less than that measured by the co-moving observer. I'm just curious as to how this works. Is it possible to explain this using just SR and moving reference frames, or do we need to invoke GR? PeterDonis Mentor 2019 Award Is it possible to explain this using just SR and moving reference frames, or do we need to invoke GR? One's initial intuitive guess would be that SR couldn't answer this question since we are dealing with a curved spacetime. However, I think there is a sense in which we can sort of use SR to see why it's true. First, a key fact about comoving observers (which does require GR to derive, since you need to show that the FRW metric is a solution of the EFE): all comoving observers experience the same amount of proper time between any two surfaces of constant FRW coordinate time. These surfaces are picked out by the fact that they are homogeneous and isotropic; no other family of spacelike hypersurfaces in FRW spacetime has this property. Now consider any non-comoving observer's worldline, and ask how much proper time will elapse along it between two surfaces of constant FRW coordinate time. In standard FRW coordinates, the worldline of this non-comoving observer will have some nonzero spatial displacement, which we can consider to be purely radial. For simplicity we consider the case of a spatially flat FRW universe. 
The proper time along the non-comoving worldline will be the integral of the line element, which we can write as: $$\tau = \int \sqrt{dt^2 - a^2(t) dr^2}$$ Of course we can't fully evaluate this integral without knowing the specific dynamics of the scale factor $a$ and the function $dr/dt$ that gives the non-comoving observer's spatial motion. However, we don't need to do any of that to see that the above integral must give a value that is smaller than the corresponding value for a comoving observer between the same two surfaces of constant FRW time: $$\tau_o = \int dt$$ Technically, we do need one other premise to complete the argument: we need to recognize that the "age of the universe" is evaluated starting at some particular surface of constant FRW coordinate time. The usual convention is to use the "Big Bang" surface, i.e., the surface of constant FRW coordinate time that marks the end of inflation and the beginning of the hot, dense, rapidly expanding state that became the universe we observe. So far I've phrased everything purely in GR terms. But of course the two integrals above look very much like the corresponding integrals for a stationary vs. a moving observer in a particular inertial frame in SR. So we could make a similar sort of argument, heuristically, in the local inertial frame of a comoving observer. But I think we would still need extra information from GR to really nail it down. Staff Emeritus Thanks Peter. One question right now: What's a "surface of constant FRW coordinate time"? PeterDonis Mentor 2019 Award What's a "surface of constant FRW coordinate time"? A spacelike hypersurface of constant coordinate time $t$ in FRW coordinates. Physically, it's a spacelike hypersurface that is homogeneous and isotropic. FRW spacetime can be foliated by a family of such hypersurfaces, and standard FRW coordinates are constructed such that each hypersurface in the family is labeled by a unique value of the coordinate time $t$. 
This time also turns out to be the proper time of comoving observers, i.e., observers that always see the universe as homogeneous and isotropic; the worldlines of such observers are everywhere orthogonal to the family of hypersurfaces just described. Last edited: Chalnoth Technically, we do need one other premise to complete the argument: we need to recognize that the "age of the universe" is evaluated starting at some particular surface of constant FRW coordinate time. The usual convention is to use the "Big Bang" surface, i.e., the surface of constant FRW coordinate time that marks the end of inflation and the beginning of the hot, dense, rapidly expanding state that became the universe we observe. To be really technical, the time of the Big Bang is the singularity that exists in the Big Bang model, while not considering pre-Big Bang models such as inflation or a bounce cosmology. This is close enough to reheating that they're essentially indistinguishable, however (the difference is something of the order of $10^{-30}$ seconds, if I recall). So, basically the Big Bang time is an easy-to-calculate moment that's close enough to the events that kicked off our observable universe that we generally aren't concerned about any discrepancies. Staff Emeritus A spacelike hypersurface of constant coordinate time $t$ in FRW coordinates. Physically, it's a spacelike hypersurface that is homogeneous and isotropic. FRW spacetime can be foliated by a family of such hypersurfaces, and standard FRW coordinates are constructed such that each hypersurface in the family is labeled by a unique value of the coordinate time $t$. This time also turns out to be the proper time of comoving observers, i.e., observers that always see the universe as homogeneous and isotropic; the worldlines of such observers are everywhere orthogonal to the family of hypersurfaces just described. I don't think my Calculus 2 class prepared me for this...
What's a "surface of constant FRW coordinate time"? That's easy: every observer that is comoving with the hubble flow (in other words at rest relative to the cmb) observes the universe to be 13.8 billion years old. The time dilation between two observers is then just calculated via the v relative to the cmb (the local proper velocity), not the relative recessional velocity between them. Fraser Flav did a short video on this which is also good for laymen: What Time Is It In The Universe? The term "hypersurface" sounds wild, but it is only a volume in comoving coordinates. Just imagine the famous surface of an expanding raisin bread. Every raisin is sitting on the 2D surface of the bread; now just add a 3rd dimension and watch the raisins expand. In this metaphor every raisin has the same proper time. I don't think my Calculus 2 class prepared me for this... I think nobody who doesn't already know the answear could understand this hardcore formulation (: Last edited: PeterDonis Mentor 2019 Award I don't think my Calculus 2 class prepared me for this... Chapter 8 of Carroll's online lecture notes gives a good presentation, explaining in more detail what the terms I used mean. He also gives a good introduction to differential geometry in chapters 2 and 3, which helps when you encounter "hardcore" formulations like the one I posted. Staff Emeritus
2020-04-10 09:10:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8427529335021973, "perplexity": 415.10024985956437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371893683.94/warc/CC-MAIN-20200410075105-20200410105605-00204.warc.gz"}
https://www.nature.com/articles/s41467-020-16137-4?error=cookies_not_supported&code=e09f3c6d-cae9-4b53-b4bc-6a5fc119c246
# Constraints on nonlocality in networks from no-signaling and independence ## Abstract The possibility of Bell inequality violations in quantum theory had a profound impact on our understanding of the correlations that can be shared by distant parties. Generalizing the concept of Bell nonlocality to networks leads to novel forms of correlations, the characterization of which is, however, challenging. Here, we investigate constraints on correlations in networks under the natural assumptions of no-signaling and independence of the sources. We consider the triangle network with binary outputs, and derive strong constraints on correlations even though the parties receive no input, i.e., each party performs a fixed measurement. We show that some of these constraints are tight, by constructing explicit local models (i.e. where sources distribute classical variables) that can saturate them. However, we also observe that other constraints can apparently not be saturated by local models, which opens the possibility of having nonlocal (but non-signaling) correlations in the triangle network with binary outputs. ## Introduction The no-signaling principle states that instantaneous communication at a distance is impossible. This imposes constraints on the possible correlations between distant observers. Consider the so-called Bell scenario1, where each party performs different local measurements on a shared physical resource distributed by a single common source. In this case, the no-signaling principle implies that the choice of measurement (the input) of one party cannot influence the measurement statistics observed by the other parties (their outputs). In other words, the marginal probability distribution of each party (or subset of parties) must be independent of the input of any other party. 
These are the well-known no-signaling conditions, which represent the weakest conditions that correlations must satisfy in any reasonable physical theory2, in the sense of being compatible with relativity. More generally, the no-signaling principle ensures that information cannot be transmitted without any physical carrier. This provides a useful framework to investigate quantum correlations (which obviously satisfy the no-signaling conditions, but do not saturate them in general2) within a larger set of physical theories satisfying no-signaling; see e.g., refs. 2,3,4,5,6,7,8,9.

Recently, the concept of Bell nonlocality has been generalized to networks, where separated sources distribute physical resources to subsets of distant parties (Fig. 1). Assuming the sources to be independent from each other10,11, arguably a natural assumption in this context, leads to many novel effects. Notably, it now becomes possible to demonstrate quantum nonlocality without the use of measurement inputs11,12,13,14,15, but only by considering the output statistics of fixed measurements. Just recently, a first example of such nonlocality genuine to networks was proposed15,16. This radically departs from the standard setting of Bell nonlocality, and opens many novel questions. Characterizing correlations in networks (local or quantum) is however still very challenging at the moment, despite recent progress17,18,19,20,21,22,23,24,25,26,27,28.

Moving beyond quantum correlations, this naturally raises the question of finding the limits of possible correlations in networks, assuming only no-signaling and independence (NSI) of the sources22,29,30,31,32,33. Here, we investigate this question and derive limits on correlations, which we refer to as NSI constraints. While our approach can in principle be applied to any network, we focus here on the well-known triangle network with binary outputs and no inputs, for which we obtain strong, and even tight, NSI constraints.
Specifically, we show that, despite the absence of an input, some statistics imply the possibility for one party to signal to others by locally changing (or not changing) the structure of the network. Formally, this amounts to considering a specific class of so-called network inflations, as introduced in ref. 22, which we show can lead to general and strong NSI constraints. Moreover, we prove that some of our NSI constraints are in fact tight, by showing that they can be saturated by correlations from explicit trilocal models, in which the sources distribute classical variables. Interestingly, however, it appears that not all of our NSI constraints can be saturated by trilocal models, which opens the possibility of having nonlocal (but nevertheless non-signaling) correlations in the triangle network with binary outputs. Finally, we conclude with a list of open questions.

## Results

### NSI constraints

The triangle network (sketched in Fig. 1a) features three observers: Alice, Bob, and Charlie. Every pair of observers is connected by a (bipartite) source, providing a shared physical system. Importantly, the three sources are assumed to be independent from each other. Hence, the three observers share no common (i.e., tripartite) piece of information. Based on the received physical resources, each observer provides an output (a, b, and c, respectively). Note that the observers receive no input in this setting, contrary to standard Bell nonlocality tests. The statistics of the experiment are thus given by the joint probability distribution p(abc). We focus on the case of binary outputs: a, b, c ∈ {+1, −1}.
It is then convenient to express the joint distribution as follows:

$$p(a,b,c)=\frac{1}{8}\big(1+a{E}_{{\rm{A}}}+b{E}_{{\rm{B}}}+c{E}_{{\rm{C}}}+ab{E}_{{\rm{AB}}}+ac{E}_{{\rm{AC}}}+bc{E}_{{\rm{BC}}}+abc{E}_{{\rm{ABC}}}\big),$$ (1)

where EA, EB, and EC are the single-party marginals, EAB, EBC, and EAC are the two-party marginals, and EABC is the three-body correlator. Note that the positivity of p(abc) implies constraints on marginals; in particular, p(+ + +) + p(− − −) ≥ 0 implies

$${E}_{{\rm{AB}}}+{E}_{{\rm{AC}}}+{E}_{{\rm{BC}}}\ge -1\ .$$ (2)

In the following, we will derive nontrivial constraints bounding and relating the single-party and two-party marginals of p(abc) under the assumption of NSI. While it seems a priori astonishing that the no-signaling principle can impose constraints in a Bell scenario featuring no inputs for the parties, we will see that this is nevertheless the case in the triangle network.

The main idea is the following. Although one party (say Alice) receives no input, she could still potentially signal to Bob and Charlie by locally modifying the structure of the network. To see this, consider the hexagon network depicted in Fig. 1b, and focus on parties Bob and Charlie. From their point of view, the two networks (triangle and hexagon) should be indistinguishable. This is because all the modification required to bring the triangle network to the hexagon (e.g., by having Alice adding extra parties and sources) occurs on Alice’s side, and can therefore be space-like separated from Bob and Charlie. If Alice, by deciding which network to use, could remotely influence the statistics of Bob and Charlie, this would clearly lead to signaling. Hence, we conclude that the local statistics of Bob and Charlie (i.e., the single-party marginals EB and EC, as well as the two-party marginal EBC) must be the same in the triangle and in the hexagon.
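Stepping back to the parametrization of Eq. (1): the correlator decomposition is easy to verify numerically. The following Python sketch (my own illustration, not part of the paper; the helper names are invented) computes the correlators of a distribution p(abc) and rebuilds the distribution from them:

```python
import itertools

def correlators(p):
    """Single-, two- and three-body correlators of p(a,b,c), as in Eq. (1)."""
    E = {k: 0.0 for k in ("A", "B", "C", "AB", "AC", "BC", "ABC")}
    for (a, b, c), prob in p.items():
        E["A"] += a * prob
        E["B"] += b * prob
        E["C"] += c * prob
        E["AB"] += a * b * prob
        E["AC"] += a * c * prob
        E["BC"] += b * c * prob
        E["ABC"] += a * b * c * prob
    return E

def reconstruct(E):
    """Rebuild p(a,b,c) from its correlators via Eq. (1)."""
    return {(a, b, c): (1 + a * E["A"] + b * E["B"] + c * E["C"]
                        + a * b * E["AB"] + a * c * E["AC"] + b * c * E["BC"]
                        + a * b * c * E["ABC"]) / 8
            for a, b, c in itertools.product((1, -1), repeat=3)}

# Example: perfect correlations, p(+++) = p(---) = 1/2.
p = {k: 0.0 for k in itertools.product((1, -1), repeat=3)}
p[(1, 1, 1)] = 0.5
p[(-1, -1, -1)] = 0.5

E = correlators(p)   # two-body correlators all equal 1; single-party marginals vanish
q = reconstruct(E)   # round-trips back to p
```

For this example EAB + EAC + EBC = 3, so the positivity constraint of Eq. (2) is comfortably satisfied.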
To see that this condition really captures the possibility to signal, we could imagine a thought experiment in which we would give an input to Alice, which determines whether she modifies her network structure or not. If she does so and this has an incidence on the EBC marginal, then Bob and Charlie can learn about Alice’s input, hence breaking the usual no-signaling condition. Note that the input considered here is purely fictional: Alice’s input is not present in the actual experiment.

From the above reasoning, we conclude that the joint output probability distribution for the hexagon, i.e., $$p(a,b,c,a^{\prime} ,b^{\prime} ,c^{\prime} )$$, must satisfy several constraints. In particular, one should have that

$$\sum b\ p(a,b,c,a^{\prime} ,b^{\prime} ,c^{\prime} )=\sum b^{\prime} \ p(a,b,c,a^{\prime} ,b^{\prime} ,c^{\prime} )={E}_{{\rm{B}}}$$ (3)

$$\sum c\ p(a,b,c,a^{\prime} ,b^{\prime} ,c^{\prime} )=\sum c^{\prime} \ p(a,b,c,a^{\prime} ,b^{\prime} ,c^{\prime} )={E}_{{\rm{C}}}$$ (4)

$$\sum bc\ p(a,b,c,a^{\prime} ,b^{\prime} ,c^{\prime} )=\sum b^{\prime} c^{\prime} \ p(a,b,c,a^{\prime} ,b^{\prime} ,c^{\prime} )={E}_{{\rm{BC}}},$$ (5)

where all sums go over all outputs $$a,b,c,a^{\prime} ,b^{\prime} ,c^{\prime}$$. From the independence of the sources, we obtain additional constraints, namely

$$\sum bb^{\prime} \ p(a,b,c,a^{\prime} ,b^{\prime} ,c^{\prime} )={E}_{{\rm{B}}}^{2}$$ (6)

$$\sum cc^{\prime} \ p(a,b,c,a^{\prime} ,b^{\prime} ,c^{\prime} )={E}_{{\rm{C}}}^{2}$$ (7)

$$\sum bb^{\prime} c\ p(a,b,c,a^{\prime} ,b^{\prime} ,c^{\prime} )={E}_{{\rm{BC}}}{E}_{{\rm{B}}}$$ (8)

$$\sum bcc^{\prime} \ p(a,b,c,a^{\prime} ,b^{\prime} ,c^{\prime} )={E}_{{\rm{BC}}}{E}_{{\rm{C}}}$$ (9)

$$\sum bcb^{\prime} c^{\prime} \ p(a,b,c,a^{\prime} ,b^{\prime} ,c^{\prime} )={E}_{{\rm{BC}}}^{2}\ .$$ (10)

Clearly, we also get similar constraints when considering signaling from any other party (Bob or Charlie) to the remaining two.
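As a quick sanity check of these identities (my own illustration, not from the paper): if the hexagon statistics happened to factorize into two independent copies of a triangle distribution q, conditions (3)–(10) hold automatically, since expectations of primed and unprimed outputs factorize (e.g. the expectation of bb′ becomes EB²). A short numeric verification:

```python
import itertools
import random

random.seed(0)

# A random triangle distribution q(a,b,c) over binary outputs.
outcomes = list(itertools.product((1, -1), repeat=3))
weights = [random.random() for _ in outcomes]
q = {o: w / sum(weights) for o, w in zip(outcomes, weights)}

E_B = sum(b * prob for (a, b, c), prob in q.items())
E_C = sum(c * prob for (a, b, c), prob in q.items())
E_BC = sum(b * c * prob for (a, b, c), prob in q.items())

# Hexagon statistics built as an independent product of two copies of q.
hexagon = {(o1, o2): q[o1] * q[o2] for o1 in outcomes for o2 in outcomes}

def expect(f):
    """Expectation of f((a,b,c), (a',b',c')) under the product distribution."""
    return sum(f(o1, o2) * prob for (o1, o2), prob in hexagon.items())

lhs_3 = expect(lambda o1, o2: o1[1])                  # sum b p    -> E_B
lhs_4 = expect(lambda o1, o2: o2[2])                  # sum c' p   -> E_C
lhs_6 = expect(lambda o1, o2: o1[1] * o2[1])          # sum bb' p  -> E_B^2
lhs_8 = expect(lambda o1, o2: o1[1] * o2[1] * o1[2])  # sum bb'c p -> E_BC * E_B
```

(Such a product is of course not a distribution arising from the hexagon network's causal structure; the check only confirms that Eqs. (3)–(10) encode exactly the expected factorization of marginals.)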
Altogether, we see that NSI imposes many constraints on $$p(a,b,c,a^{\prime} ,b^{\prime} ,c^{\prime} )$$. Obviously, we also require that $$p(a,b,c,a^{\prime} ,b^{\prime} ,c^{\prime} )\ge 0\quad {\rm{and}}\quad \sum p(a,b,c,a^{\prime} ,b^{\prime} ,c^{\prime} )=1\ .$$ (11) Now reversing the argument, we see that the non-negativity of $$p(a,b,c,a^{\prime} ,b^{\prime} ,c^{\prime} )$$ imposes nontrivial constraints relating the single- and two-party marginals of the triangle distribution p(abc). To illustrate this, let us proceed with an example in a slightly simplified scenario, assuming all single-party marginals to be uniformly random, i.e., EA = EB = EC = 0. In this case, we obtain $$64\ p(a,b,c,a^{\prime} ,b^{\prime} ,c^{\prime} )= \, 1+(ab+a^{\prime} b^{\prime} ){E}_{{\rm{AB}}}+(bc+b^{\prime} c^{\prime} ){E}_{{\rm{BC}}} +(ca^{\prime} +c^{\prime} a){E}_{{\rm{AC}}}\\ +(abc+a^{\prime} b^{\prime} c^{\prime} ){F}_{3}+(bca^{\prime} +b^{\prime} c^{\prime} a){F}_{3}^{\prime} +(ca^{\prime} b^{\prime} +c^{\prime} ab){F}_{3}^{\prime\prime} \\ +aa^{\prime} bb^{\prime} {E}_{{\rm{AB}}}^{2}+bb^{\prime} cc^{\prime} {E}_{{\rm{BC}}}^{2} +aa^{\prime} cc^{\prime} {E}_{{\rm{AC}}}^{2}+aa^{\prime} (bc+b^{\prime} c^{\prime} ){F}_{4}\\ +bb^{\prime} (ca^{\prime} +c^{\prime} a){F}_{4}^{\prime} +cc^{\prime} (ab+a^{\prime} b^{\prime} ){F}_{4}^{\prime\prime} +aa^{\prime} bb^{\prime} (c+c^{\prime} ){F}_{5}\\ +bb^{\prime} cc^{\prime} (a+a^{\prime} ){F}_{5}^{\prime} +aa^{\prime} cc^{\prime} (b+b^{\prime} ){F}_{5}^{\prime\prime} +aa^{\prime} bb^{\prime} cc^{\prime} {F}_{6}\ge 0$$ (12) Importantly, notice that the above expression contains a number of variables (of the form FX) that are uncharacterized; these represent X-party correlators in the hexagon network, see Supplementary Note 1 for more details. Hence, we obtain a set of inequalities imposing constraints on our variables of interest (i.e., EAB, EBC, and EAC), but containing also additional variables that we would like to discard. 
This can be done systematically via the algorithm of Fourier–Motzkin elimination34. Note that here we need to treat the squared terms, such as $${E}_{{\rm{AB}}}^{2}$$, as new variables, independent from EAB, so that we get a system of linear inequalities. Solving the latter, and taking into account positivity constraints as in Eq. (2), we obtain a complete characterization of the set of two-body marginals (i.e., EAB, EBC, and EAC) that are compatible with NSI in the triangle network (for a hexagon inflation and uniform single-party marginals), in terms of a single inequality $${(1-{E}_{{\rm{AB}}})}^{2}-{E}_{{\rm{BC}}}^{2}-{E}_{{\rm{AC}}}^{2}\ge 0\ ,$$ (13) and its symmetries (under relabeling of the parties and of the outputs). This implies a more symmetric, but slightly weaker inequality: $${(1+{E}_{{\rm{AB}}})}^{2}+{(1+{E}_{{\rm{BC}}})}^{2}+{(1+{E}_{{\rm{AC}}})}^{2}\le 6\ .$$ (14) Note that when EAB = EBC = EAC ≡ E2, we get simply $${E}_{2}\le \sqrt{2}-1\approx 0.41$$. Next, we consider the symmetric case (i.e., EA = EB = EC ≡ E1 and EAB = EBC = EAC ≡ E2) and obtain nontrivial NSI constraints on the possible values of E1 and E2 (Fig. 2). In particular, correlations compatible with NSI must satisfy the following inequality $${(1+2| {E}_{1}| +{E}_{2})}^{2}\le 2{(1+| {E}_{1}| )}^{3}\ .$$ (15) Let us move now to the most general case, with arbitrary values for single- and two-party marginals. For a given set of values EA, EB, EC, EAB, EBC, and EAC, it is possible here to determine via a linear program whether this set is compatible with NSI or not (Supplementary Note 1). More generally, obtaining a characterization of the NSI constraints in terms of explicit inequalities (as above) is challenging, due mainly to the number of parameters and nonlinear constraints. 
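Returning to the symmetric point: a quick numerical check (my own, illustrative) that E2 = √2 − 1 saturates both inequality (13) and inequality (14), while any larger symmetric value violates (14):

```python
import math

# The symmetric two-body marginal bound E2 = sqrt(2) - 1 ~ 0.41.
E2 = math.sqrt(2) - 1

# Inequality (13) with E_AB = E_BC = E_AC = E2: (1 - E2)^2 - 2*E2^2 >= 0.
margin_13 = (1 - E2) ** 2 - 2 * E2 ** 2  # exactly zero at the bound

# Inequality (14): the sum of the three (1 + Eij)^2 terms must not exceed 6.
lhs_14 = 3 * (1 + E2) ** 2               # equals 6 at the bound

# Just above the bound, (14) is violated.
violated = 3 * (1 + E2 + 0.01) ** 2 > 6
```

The fact that both margins vanish at the same point reflects that, in the fully symmetric case, (13) and its weaker symmetric consequence (14) coincide.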
We nevertheless obtain that the following inequality represents an NSI constraint

$$\begin{array}{l}{(1+| {E}_{{\rm{A}}}| +| {E}_{{\rm{B}}}| +{E}_{{\rm{AB}}})}^{2}\\ +\,{(1+| {E}_{{\rm{A}}}| +| {E}_{{\rm{C}}}| +{E}_{{\rm{AC}}})}^{2}\\ +\,{(1+| {E}_{{\rm{B}}}| +| {E}_{{\rm{C}}}| +{E}_{{\rm{BC}}})}^{2}\\ \le 6(1+| {E}_{{\rm{A}}}| )(1+| {E}_{{\rm{B}}}| )(1+| {E}_{{\rm{C}}}| )\ .\end{array}$$ (16)

A proof of this general inequality is given in Supplementary Note 1. Note that this inequality reduces to Eq. (14) when EA = EB = EC = 0, as well as to Eq. (15) for the symmetric case.

It is worthwhile discussing the connection between our approach and the inflation technique presented in refs. 22,25. There, the main focus is on using inflated networks for deriving constraints on correlations achievable with classical resources. In that case, information can be readily copied, so that sources can send the same information to several parties. Ultimately, this allows for a full characterization of correlations achievable with classical resources22. Copying information is however not possible in our case, as no-signaling resources cannot be perfectly cloned in general6. Hence only inflated networks with bipartite sources can be considered in our case, such as the hexagon. A discussion of these ideas can be found in Section V.D of ref. 22, where the idea of using inflation to limit no-signaling correlations in networks is mentioned. Here, we derive explicit bounds that all correlations satisfying the NSI constraints, whether quantum or post-quantum, have to satisfy, and identify the physical principle behind them.

Finally, the choice of the hexagon inflation deserves a few words. As seen from Fig. 1b, it is judicious to consider inflated networks forming a ring, with a number of parties that is a multiple of three.
Intuitively, this should enforce the strongest constraints on the correlations of the inflated network; in particular, all single- and two-body marginals are fixed by the correlations of the triangle. This would not be the case when considering inflations to ring networks with a number of parties that is not divisible by three.

### Tightness

A natural question is whether the constraints we derived above, which are necessary to satisfy NSI, are also sufficient. There is a priori no reason why this should be the case. Of course, starting from the triangle network, there are many (in fact infinitely many) possible extended networks that can be considered, and no-signaling must be enforced in all cases. For instance, instead of extending the network to a hexagon (as in Fig. 1), Alice could consider an extension to a ring network featuring 9, 12, or more parties. Clearly, such extensions could lead to stronger constraints than those derived here for the hexagon network.

Nevertheless, we show that some of the constraints we obtain above are in fact tight, i.e., necessary and sufficient for NSI. We prove this by presenting explicit correlations (constructed within a generalized probabilistic theory satisfying NSI) that saturate these constraints. In fact, we consider simply the case where all sources distribute classical variables to each party, which we refer to as trilocal models. The latter give rise to correlations of the form

$$\begin{array}{l}p(a,b,c)=\int\mu (\alpha )d\alpha \int\nu (\beta )d\beta \int\omega (\gamma )d\gamma \\ \hskip 60pt{p}_{A}(a| \beta ,\gamma )\ {p}_{B}(b| \alpha ,\gamma )\ {p}_{C}(c| \alpha ,\beta )\end{array},$$ (17)

where α, β, and γ represent the three local variables distributed by each source, with arbitrary probability densities μ(α), ν(β), and ω(γ). Also, pA(a|β,γ) represents an arbitrary response function for Alice, and similarly for pB(b|α,γ) and pC(c|α,β).
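To make Eq. (17) concrete, here is a minimal trilocal model in Python (my own toy example, not one of the paper's optimized models): each source emits an independent uniform bit, and each party deterministically outputs the product of the two bits it receives.

```python
import itertools

def trilocal_distribution(resp_a, resp_b, resp_c):
    """p(a,b,c) of Eq. (17) for uniform binary sources alpha, beta, gamma
    and deterministic response functions."""
    p = {k: 0.0 for k in itertools.product((1, -1), repeat=3)}
    for alpha, beta, gamma in itertools.product((1, -1), repeat=3):
        a = resp_a(beta, gamma)   # Alice receives from the beta and gamma sources
        b = resp_b(alpha, gamma)  # Bob receives alpha and gamma
        c = resp_c(alpha, beta)   # Charlie receives alpha and beta
        p[(a, b, c)] += 1 / 8     # each hidden-variable assignment equally likely
    return p

parity = lambda x, y: x * y
p = trilocal_distribution(parity, parity, parity)

E1 = sum(a * prob for (a, b, c), prob in p.items())          # E_A
E2 = sum(a * b * prob for (a, b, c), prob in p.items())      # E_AB
E3 = sum(a * b * c * prob for (a, b, c), prob in p.items())  # E_ABC
```

This model gives E1 = E2 = 0 with E3 = 1 (the product abc = (βγ)(αγ)(αβ) always equals 1), trivially satisfying inequality (15); the models that actually saturate (15) use richer local variables, up to ternary.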
Note that such trilocal models represent a natural extension of the concept of Bell locality to networks (see e.g., refs. 10,19). We first consider the case of symmetric distributions, i.e., characterized by the two parameters E1 and E2, and seek to determine the set of correlations that can be achieved with trilocal models. As shown in Fig. 2, it turns out that almost all NSI constraints can be saturated in this case, in particular the inequality (15). After performing a numerical search, we could explicitly construct some of these trilocal models, which involve up to ternary local variables (see Supplementary Note 2 for details). Moreover, we compare our NSI constraint (15) to the one derived in ref. 22 (see Eq. (34)), and find that the present one is stronger, and in fact tight (Fig. 2). Note also that a previous work derived an NSI constraint based on entropic quantities29; such constraints are however known to be generally weak, as entropies are a coarse-graining of the statistics, which no longer distinguishes between correlations and anticorrelations.

As seen from Fig. 2, there is however a small region (in yellow) that is compatible with NSI (considering the hexagon inflation), but for which we could not construct a trilocal model. Whether this gap can be closed by considering more sophisticated local models (using variables of larger alphabet) or whether stronger no-signaling bounds can be obtained is an interesting open question. For the triangle network with binary outcomes, any trilocal distribution can be obtained by considering shared variables of dimension (at most) six, and deterministic response functions24. In fact, another (and arguably much more interesting) possibility would be that this gap cannot be closed, as it would feature correlations with binary outcomes satisfying NSI that are nevertheless non-trilocal. To further explore this question, let us now focus on the case where single-party marginals vanish, i.e., E1 = 0.
We investigate the relation between the two-party marginal E2 and the three-party correlator E3 = EABC, comparing NSI constraints and trilocal models. Notice that the NSI constraints we obtain here do not involve E3 (as the latter cannot be recovered within the analysis of the hexagon). Hence NSI imposes only $${E}_{2}\le \sqrt{2}-1$$, while positivity of p(abc) imposes other constraints. This is shown in Fig. 3, where we also seek to characterize the set of correlations achievable via trilocal models (proceeding as above). Interestingly, we find again a potential gap between trilocal correlations and NSI constraints. This should however be considered with care. First, the NSI constraints obtained from the hexagon may not be optimal (see Discussion section). Second, there could exist more sophisticated trilocal models (e.g., involving higher-dimensional variables) that could lead to stronger correlations (i.e., cover a larger region in Fig. 3). Note also that we investigated whether quantum distributions satisfying the independence assumption exist outside of the trilocal region, but we could not find any example (we performed a numerical search, considering entangled states of dimension up to 4 × 4).

Finally, note that we also performed a similar analysis for the case where single-party marginals vanish, but two-body marginals are not assumed to be identical to each other. Here, we find that inequality (13) can be saturated in a few specific cases. However, there also exist correlations satisfying the NSI bounds that do not seem to admit a trilocal model; see Supplementary Note 1 for details.

## Discussion

We discussed the constraints arising on correlations in networks, under the assumption of NSI of the sources. We focused our attention on the triangle network with binary outputs, for which we derived strong constraints, including tight ones. Our work raises a number of open questions that we now discuss further.
A first question is whether the constraints we derive (necessary under NSI) could also be sufficient. We believe this not to be the case, as stronger NSI constraints could arise from inflations of the triangle to more complex networks (e.g., loop networks with an arbitrary number of parties). Note that there could also exist different forms of no-signaling constraints that cannot be enforced via inflation. In this respect, we compare in Supplementary Note 1 our NSI constraints with the recent work of ref. 32, which proposes a very different approach to this problem using the Finner inequality. A notable difference is that the latter imposes constraints on tripartite correlations, which is not the case here.

Another important question is whether there could exist nonlocality in the simplest triangle network with binary outcomes. That is, can we find a p(abc) that satisfies NSI, but that is nevertheless non-trilocal? While we identified certain potential candidate distributions for this, we could not prove any conclusive result at this point. We cannot exclude the possibilities that (i) these correlations are in fact not compatible with NSI (as there exist stronger NSI constraints) or (ii) these correlations can in fact be reproduced by a trilocal model. In order to address point (i), one could try to reproduce these correlations via an explicit NSI model, for instance considering that all sources emit no-signaling resources (such as nonlocal boxes2) which could then be wired together by the parties. To address point (ii), one could show that these correlations violate a multilocality inequality for the triangle network. Of course, finding such inequalities is notably challenging, see e.g., ref. 13.

Furthermore, it would be interesting to derive NSI constraints for other types of networks. Indeed, the approach developed here can be applied straightforwardly.
Cases of high interest are general loop networks, as well as the triangle network with larger output alphabet (where examples of quantum nonlocality are proven to exist11,15). Finally, a more fundamental question is whether any correlation satisfying the complete NSI constraints can be realized within an explicit physical theory satisfying no-signaling (the latter are usually referred to as generalized probabilistic theories6). While this is the case in the standard Bell scenario (where all parties share a common resource), it is not clear if that would also be the case in the network scenario.

## References

1. Bell, J. S. On the Einstein Podolsky Rosen paradox. Physics 1, 195 (1964).
2. Popescu, S. & Rohrlich, D. Quantum nonlocality as an axiom. Found. Phys. 24, 379–385 (1994).
3. Barrett, J. et al. Nonlocal correlations as an information-theoretic resource. Phys. Rev. A 71, 022101 (2005).
4. Van Dam, W. Implausible consequences of superstrong nonlocality. Nat. Comput. 12, 9–12 (2013).
5. Brassard, G. et al. Limit on nonlocality in any world in which communication complexity is not trivial. Phys. Rev. Lett. 96, 250401 (2006).
6. Barrett, J. Information processing in generalized probabilistic theories. Phys. Rev. A 75, 032304 (2007).
7. Pawłowski, M. et al. Information causality as a physical principle. Nature 461, 1101–1104 (2009).
8. Brunner, N., Cavalcanti, D., Pironio, S., Scarani, V. & Wehner, S. Bell nonlocality. Rev. Mod. Phys. 86, 419 (2014).
9. Popescu, S. Nonlocality beyond quantum mechanics. Nat. Phys. 10, 264–270 (2014).
10. Branciard, C., Gisin, N. & Pironio, S. Characterizing the nonlocal correlations created via entanglement swapping. Phys. Rev. Lett. 104, 170401 (2010).
11. Fritz, T. Beyond Bell’s theorem: correlation scenarios. New J. Phys. 14, 103001 (2012).
12. Branciard, C., Rosset, D., Gisin, N. & Pironio, S. Bilocal versus nonbilocal correlations in entanglement-swapping experiments. Phys. Rev. A 85, 032119 (2012).
13. Gisin, N. Entanglement 25 years after quantum teleportation: testing joint measurements in quantum networks. Entropy 21, 325 (2019).
14. Fraser, T. C. & Wolfe, E. Causal compatibility inequalities admitting quantum violations in the triangle structure. Phys. Rev. A 98, 022113 (2018).
15. Renou, M.-O. et al. Genuine quantum nonlocality in the triangle network. Phys. Rev. Lett. 123, 140401 (2019).
16. Pusey, M. F. Quantum correlations take a new shape. Physics 12, 106 (2019).
17. Chaves, R. & Fritz, T. Entropic approach to local realism and noncontextuality. Phys. Rev. A 85, 032113 (2012).
18. Tavakoli, A., Skrzypczyk, P., Cavalcanti, D. & Acín, A. Nonlocal correlations in the star-network configuration. Phys. Rev. A 90, 062109 (2014).
19. Rosset, D. et al. Nonlinear Bell inequalities tailored for quantum networks. Phys. Rev. Lett. 116, 010403 (2016).
20. Chaves, R. Polynomial Bell inequalities. Phys. Rev. Lett. 116, 010402 (2016).
21. Tavakoli, A. Quantum correlations in connected multipartite Bell experiments. J. Phys. A: Math. Theor. 49, 145304 (2016).
22. Wolfe, E., Spekkens, R. W. & Fritz, T. The inflation technique for causal inference with latent variables. J. Causal Inference 7, https://doi.org/10.1515/jci-2017-0020 (2019).
23. Lee, C. M. & Spekkens, R. W. Causal inference via algebraic geometry: feasibility tests for functional causal structures with two binary observed variables. J. Causal Inference 5, https://doi.org/10.1515/jci-2016-0013 (2017).
24. Rosset, D., Gisin, N. & Wolfe, E. Universal bound on the cardinality of local hidden variables in networks. Quantum Inf. Comput. 18, 910–926 (2018).
25. Navascues, M. & Wolfe, E. The inflation technique completely solves the causal compatibility problem. Preprint at https://arxiv.org/abs/1707.06476 (2017).
26. Luo, M.-X. Computationally efficient nonlinear Bell inequalities for quantum networks. Phys. Rev. Lett. 120, 140402 (2018).
27. Canabarro, A., Brito, S. & Chaves, R. Machine learning nonlocal correlations. Phys. Rev. Lett. 122, 200401 (2019).
28. Pozas-Kerstjens, A. et al. Bounding the sets of classical and quantum correlations in networks. Phys. Rev. Lett. 123, 140503 (2019).
29. Henson, J., Lal, R. & Pusey, M. F. Theory-independent limits on correlations from generalized Bayesian networks. New J. Phys. 16, 113043 (2014).
30. Fritz, T. Beyond Bell’s theorem II: scenarios with arbitrary causal structure. Commun. Math. Phys. 341, 391–434 (2016).
31. Chaves, R. & Budroni, C. Entropic nonsignaling correlations. Phys. Rev. Lett. 116, 240501 (2016).
32. Renou, M.-O. et al. Limits on correlations in networks for quantum and no-signaling resources. Phys. Rev. Lett. 123, 070403 (2019).
33. Weilenmann, M. & Colbeck, R. Analysing causal structures in generalised probabilistic theories. Quantum 4 (2020).
34. Ziegler, G. Lectures on Polytopes (Springer, New York, 1998).

## Acknowledgements

We thank Stefano Pironio, Marc-Olivier Renou, Denis Rosset, and Elie Wolfe for discussions. We acknowledge financial support from the Swiss National Science Foundation (Starting grant DIAQ, NCCR-QSIT, and NCCR-SwissMAP). E.Z.C. acknowledges support by the Swiss National Science Foundation via the Mobility Fellowship P2GEP2_188276.

## Author information

### Contributions

N.G., S.P., and N.B. came up with the idea of the method. N.G., J.-D.B., Y.C., P.R., A.T., E.Z.C., S.P., and N.B. participated in deriving the results, and writing and editing the manuscript.

### Corresponding authors

Correspondence to Nicolas Gisin or Jean-Daniel Bancal or Yu Cai.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

Peer review information: Nature Communications thanks Gilles Brassard and the other, anonymous, reviewer for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Gisin, N., Bancal, J.-D., Cai, Y. et al. Constraints on nonlocality in networks from no-signaling and independence. Nat. Commun. 11, 2378 (2020). https://doi.org/10.1038/s41467-020-16137-4
https://blog.acolyer.org/2019/01/25/programming-paradigms-for-dummies-what-every-programmer-should-know/
# Programming paradigms for dummies: what every programmer should know

Programming paradigms for dummies: what every programmer should know, Peter Van Roy, 2009

We’ll get back to CIDR’19 next week, but chasing the thread starting with the Data Continuum paper led me to this book chapter by Peter Van Roy mapping out the space of programming language designs. (Thanks to TuringTest for posting a reference to it in a HN thread). It was too good not to take a short detour to cover it! If you like the chapter, you’ll probably enjoy the book, ‘Concepts, Techniques, and Models of Computer Programming’ by Van Roy & Haridi, on which much of this chapter was based.

This chapter gives an introduction to all the main programming paradigms, their underlying concepts, and the relationships between them… We give a taxonomy of about 30 useful programming paradigms and how they are related.

Programming paradigms are approaches based on a mathematical theory or particular set of principles, each paradigm supporting a set of concepts. Van Roy is a believer in multi-paradigm languages: solving a programming problem requires choosing the right concepts, and many problems require different sets of concepts for different parts. Moreover, many programs have to solve more than one problem! “A language should ideally support many concepts in a well-factored way, so that the programmer can choose the right concepts whenever they are needed without being encumbered by the others.” That makes intuitive sense, but in my view does also come with a potential downside: the reader of a program written in such a language needs to be fluent in multiple paradigms and how they interact. (Mitigating this is probably what Van Roy had in mind with the ‘well-factored’ qualification: a true multi-paradigm language should avoid cross-paradigm interference, not just support a rag-bag of concepts).
As Van Roy himself says later on when discussing state: “The point is to pick a paradigm with just the right concepts. Too few and programs become complicated. Too many and reasoning becomes complicated.”

There are a huge number of programming languages, but many fewer paradigms. But there are still a lot of paradigms: this chapter mentions 27 different paradigms that are actually used. The heart of the matter is captured in the following diagram, “which rewards careful study.” Each box is a paradigm, and the arrows between boxes show the concept(s) that need to be added to move between them.

Figure 2 is organised according to the creative extension principle:

Concepts are not combined arbitrarily to form paradigms. They can be organized according to the creative extension principle… In a given paradigm, it can happen that programs become complicated for technical reasons that have no direct relationship to the specific problem that is being solved. This is a sign that there is a new concept waiting to be discovered.

The most common ‘tell’ is a need to make pervasive (nonlocal) modifications to a program in order to achieve a single objective. (I’m in danger of climbing back on my old AOP soapbox here!). For example, if we want any function to be able to detect an error at any time and transfer control to an error correction routine, that’s going to be invasive unless we have a concept of exceptions.

Two key properties of a programming paradigm are whether or not it has observable non-determinism, and how strongly it supports state.

… non-determinism is observable if a user can see different results from executions that start at the same internal configuration. This is highly undesirable… we conclude that observable nondeterminism should be supported only if its expressive power is needed.

Regarding state, we’re interested in how a paradigm supports storing a sequence of values in time.
State can be unnamed or named; deterministic or non-deterministic; and sequential or concurrent. Not all combinations are useful! Figure 3 below shows some that are. The horizontal axis in the main paradigms figure (figure 2) is organised according to the bold line in the figure above.

### The four most important programming concepts

The four most important programming concepts are records, lexically scoped closures, independence (concurrency) and named state.

Records are groups of data items with indexed access to each item (e.g. structs). Lexically scoped closures combine a procedure with its external references (things it references outside of itself at its definition). They allow you to create a 'packet of work' that can be passed around and executed at a future date. Independence here refers to the idea that activities can evolve independently, i.e., they can be executed concurrently. The two most popular paradigms for concurrency are shared-state and message-passing. Named state is, at the simplest level, the idea that we can give a name to a piece of state. But Van Roy has a deeper and very interesting argument that revolves around named mutable state:

State introduces an abstract notion of time in programs. In functional programs, there is no notion of time… Functions do not change. In the real world, things are different. There are few real-world entities that have the timeless behaviour of functions. Organisms grow and learn. When the same stimulus is given to an organism at different times, the reaction will usually be different. How can we model this inside a program? We need to model an entity with a unique identity (its name) whose behaviour changes during the execution of the program. To do this, we add an abstract notion of time to the program. This abstract time is simply a sequence of values in time that has a single name. We call this sequence a named state.
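Two of these concepts, lexically scoped closures and named state, can be shown together in a few lines. This is my illustration, not Van Roy's (he works in Oz); Python is close enough to make the point:

```python
# A lexically scoped closure: 'increment' packages a procedure together with
# a reference ('count') captured from the scope where it was defined.
def make_counter():
    count = 0              # named state: one name for a sequence of values in time
    def increment():
        nonlocal count     # the closure reaches its external reference
        count += 1
        return count
    return increment       # a 'packet of work' to pass around and run later

tick = make_counter()
print(tick(), tick(), tick())   # 1 2 3
```

The single name `count` identifies a sequence of values over time, which is exactly the "abstract notion of time" the quote above describes.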
Then Van Roy goes on to give what seems to me to be conflicting pieces of advice: "A good rule is that named state should never be invisible: there should always be some way to access it from the outside" (when talking about correctness), and "Named state is important for a system's modularity" (think information hiding).

### Abstracting data

A data abstraction is a way to organize the use of data structures according to precise rules which guarantee that the data structures are used correctly. A data abstraction has an inside, an outside, and an interface between the two.

Data abstractions can be organised along two main dimensions: whether or not the abstraction uses named state, and whether or not the operations are bundled into a single entity with the data. Van Roy then goes on to discuss polymorphism and inheritance (note that Van Roy prefers composition to inheritance in general, but if you must use inheritance then make sure to follow the substitution principle).

### Concurrency

The central issue in concurrency is non-determinism.

Nondeterminism is very hard to handle if it can be observed by the user of the program. Observable nondeterminism is sometimes called a race condition.

Not allowing non-determinism would limit our ability to write programs with independent parts. But we can limit the observability of non-determinate behaviour. There are two options here: defining a language in such a way that non-determinism cannot be observed; or limiting the scope of observable non-determinism to those parts of the program that really need it. There are at least four useful programming paradigms that are concurrent but have no observable non-determinism (no race conditions). Table 2 (below) lists these four together with message-passing concurrency.

Declarative concurrency is also known as monotonic dataflow. Deterministic inputs are received and used to calculate deterministic outputs.
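A tiny sketch of this dataflow style, again my own illustration rather than the chapter's: two concurrent stages communicate only through queues, so even though real threads run, the output is the same on every execution.

```python
# Monotonic-dataflow sketch: each stage reads deterministic inputs from a
# queue and writes deterministic outputs to the next; no observable race.
import queue
import threading

def stage(fn, inbox, outbox):
    while True:
        x = inbox.get()
        if x is None:          # sentinel value ends the stream
            outbox.put(None)
            return
        outbox.put(fn(x))

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=stage, args=(lambda x: x * 2, q1, q2)).start()
threading.Thread(target=stage, args=(lambda x: x + 1, q2, q3)).start()

for x in [1, 2, 3]:
    q1.put(x)
q1.put(None)

results = []
while (y := q3.get()) is not None:
    results.append(y)
print(results)   # [3, 5, 7] on every run, despite the concurrency
```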
In functional reactive programming, FRP (aka 'continuous synchronous programming'), we write functional programs, but the function arguments can be changed and the change is propagated to the output. Discrete synchronous programming (aka reactive) systems wait for input events, perform internal calculations, and emit output events. The main difference between reactive programming and FRP is that in reactive programming time is discrete instead of continuous.

### Constraints

In constraint programming we express the problem to be solved as a constraint satisfaction problem (CSP)… Constraint programming is the most declarative of all practical programming paradigms.

Instead of writing a set of instructions to be executed, in constraint programming you model the problem: representing the problem as a set of variables with constraints over those variables and propagators that implement the constraints. You then pass this model to a solver.

### Language design guidelines

Now that we've completed a whirlwind tour through some of the concepts and paradigms, I want to finish up with some of Van Roy's thoughts on designing a programming language. One interesting class of language is the 'dual-paradigm' language. A dual-paradigm language typically supports one paradigm for programming in the small, and another for programming in the large. The second paradigm is typically chosen to support abstraction and modularity. For example, solvers supporting constraint programming embedded in an OO language.

More generally, Van Roy sees a layered language design with four core layers, a structure which has been independently discovered across multiple projects:

The common language has a layered structure with four layers: a strict functional core, followed by declarative concurrency, then asynchronous message passing, and finally global named state. This layered structure naturally supports four paradigms.

Van Roy draws four conclusions from his analysis here:

1. Declarative programming is at the very core of programming languages.
2. Declarative programming will stay at the core for the foreseeable future, because distributed, secure, and fault-tolerant programming are essential topics that need support from the programming language.
3. Deterministic concurrency is an important form of concurrency that should not be ignored. It is an excellent way to exploit the parallelism of multi-core processors.
4. Message-passing concurrency is the correct default for general-purpose concurrency instead of shared-state concurrency.

For large-scale software systems, Van Roy believes we need to embrace a self-sufficient style of system design in which systems become self-configuring, healing, adapting, etc. The system has components as first-class entities (specified by closures) that can be manipulated through higher-order programming. Components communicate through message-passing. Named state and transactions support system configuration and maintenance. On top of this, the system itself should be designed as a set of interlocking feedback loops. Here I'm reminded of systems thinking and causal loop diagrams.

### The last word

Each paradigm has its own "soul" that can only be understood by actually using the paradigm. We recommend that you explore the paradigms by actually programming in them…
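In that spirit, here is one last sketch of my own: the constraint-programming style described in the Constraints section, reduced to its essence. You model the problem as variables plus constraints and hand it to a solver; here the "solver" is a naive brute-force search, standing in for a real propagator-based one.

```python
# Constraint-programming sketch: declare variables and constraints,
# then let a (here: brute-force) solver find all satisfying assignments.
from itertools import product

def solve(domains, constraints):
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        assignment = dict(zip(names, values))
        if all(c(assignment) for c in constraints):
            yield assignment

# Model: x + y == 10 and x < y, over small integer domains.
domains = {"x": range(10), "y": range(10)}
constraints = [lambda a: a["x"] + a["y"] == 10,
               lambda a: a["x"] < a["y"]]
print(list(solve(domains, constraints)))
```

Note that nothing in the model says *how* to search; that is what makes the paradigm declarative.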
http://blog.sigfpe.com/2006/05/writing-pure-functional-lazy-call-by.html?showComment=1147321800000
# A Neighborhood of Infinity

## Wednesday, May 10, 2006

### Writing a pure functional lazy call-by-name compiler

There are lots of minimal functional language/lambda expression evaluators out there, so instead I thought I'd write one that actually compiled to standalone C. But by accident it seems to have grown and I now have a 'typeless' Haskell, or at least a language with ML-type grammar, lazy evaluation and call-by-name, primitive monad support and only two types - integers and lists. I'm now at the stage where I can compile pieces of code like this:

    return a s = a : s.
    bind x f s = let vs = x s in seq vs ((f (head vs) (tail vs))).
    bind' x f s = bind x (\a b -> f b) s.
    contains x = if x (if (head x==81) 1 (contains (tail x))) 0.
    getLine = do { c <- getChar; if (c==10) (return 0) do { l <- getLine; return (c : l) } }.
    putLine l = if l do { putChar (head l); putLine (tail l) } (return 0).
    prog = do { a <- getLine; if (contains a) ( do { putLine a; putChar 10 }) (return 0); prog }.
    start = prog 666.

This piece of code does the same as "grep Q" and amazingly doesn't leak memory and could probably outrun a snail (just). When I tidy it up and fix the bugs I'll release the compiler on my web page.

It's been one hell of a learning experience. The main thing I've learnt is that those compiler writers are damn clever people. But I also learnt other things: even the most innocent looking expressions can gobble up vast amounts of memory in a lazy language; it's really hard to debug lazy programs even when you have complete freedom to make your compiler emit whatever 'instrumentation' you want; garbage collection is more subtle than I thought; and Haskell has a weird grammar.

On the other hand, some things were easier than I expected. Dealing with lambda expressions looks hard at first - but lambda lifting is very easy to implement, so that once you have a way of compiling equations, lambdas come for free. And ultimately it's incredible how much stuff you can get for nothing.
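Lambda lifting, mentioned above, can be sketched in a few lines. This is an illustration of the transformation (in Python, not the compiler's actual code): a free variable captured by a closure becomes an explicit parameter of a top-level function, and the closure is recovered by partial application.

```python
# Before lifting: the inner lambda closes over the free variable n.
def add_n(n):
    return lambda x: x + n

# After lifting: n becomes an explicit parameter, so the function can
# live at the top level as an ordinary equation.
def add_n_lifted(n, x):
    return x + n

from functools import partial
add3 = partial(add_n_lifted, 3)   # recover the closure by partial application

print(add_n(3)(4), add3(4))       # 7 7
```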
The core compiler is minuscule - it knows how to compile one operation - function application. The rest is syntactic sugar or part of the library that the C code links to.

My ultimate goal is to get my compiler to emit self-contained C programs that are as small as possible. I'd like to take these programs and make them run on a microcontroller like the one in this robot I built last year. It has 1K of RAM, 8K of program space and runs at 8MHz, which may be enough for a simple light-following algorithm. (In assembler it'd be about 100 bytes long and use 2 or 3 bytes of RAM.) I don't want to actually achieve anything useful - I'm too much of a mathematician to want to do that. I just want to construct an example of what I see as an extreme form of computational perversity: a pure language with no side effects having actual side effects in the physical world.

Kim-Ee Yeoh said...

I can relate to some of your experiences developing a compiler. One of the better kept secrets for learning Haskell is Peyton-Jones and Lester's Implementing functional languages: a tutorial, which guides you through several ways of executing Haskell, including the G-machine at the heart of GHC. Best of luck on the embedded project. I'd really be keen to know how it turns out.

sigfpe said...

Kim-ee, my code is an amalgamation of various ideas I picked up by skimming a bunch of documents like that tutorial. I find that it's better to try to implement something first and then read the papers - that way you have a better appreciation for what the problems are. In fact, many statements I'd read about lazy functional compiler writing used to make no sense to me at all, but now make perfect sense. So I'll probably read that tutorial properly some time soon.
http://codereview.stackexchange.com/questions/43721/finding-person-with-most-medals-in-olympics-database
Finding person with most medals in Olympics database

I have an Olympics database from each Olympic year and I want to find the person that has won the most medals. The main problem is that I'm basically querying the same sub-query twice in SUBSET1 and SUBSET2. How would I go about making this more efficient?

    Select athlete
    FROM (
        Select athlete, Sum(total_medals) as total_medals
        from Olympics
        Group by athlete) as SUBSET1
    Where total_medals = (
        Select Max( total_medals )
        FROM (
            Select Sum(total_medals) as total_medals
            from Olympics
            Group by athlete ) as SUBSET2);

- Can you add which database you are actually using (vendor/version)... SQLServer, DB2, MySQL, Oracle, etc. – rolfl Mar 7 at 16:41
- Updated answer to include PostgreSQL – rolfl Mar 7 at 17:48
- Rolled back Rev 8 to Rev 7. (Please don't edit questions in a way that invalidates answers.) – 200_success Mar 7 at 18:16
- @200_success: Why does Revision 7 invalidate answers? – miracle173 Jun 26 at 7:06

This alternative to @rolfl's answer is more readable, in my opinion. It also has a more efficient execution plan.

    WITH medal_count AS (
        SELECT athlete
             , SUM(total_medals) AS grand_total_medals
             , RANK() OVER (ORDER BY SUM(total_medals) DESC) AS rank
        FROM Olympics
        GROUP BY athlete
    )
    SELECT athlete
         , grand_total_medals
    FROM medal_count
    WHERE rank = 1
    ORDER BY athlete;

SQLFiddle

- In my experience, CTEs are slower than using a subquery since it is about the same as creating a temporary table (ie. you lose your indexes). – cimmanon Mar 7 at 21:18

In PostgreSQL, you can use the rank() mechanism to help. It still requires a subselect, but consider the following query:

    Select o.athlete, sum(o.total_medals) as sumtotal_medals
    from Olympics o,
         ( select r.athlete as toprank,
                  rank() over ( order by sum(r.total_medals) desc ) as rank
           from Olympics r
           group by r.athlete ) rankings
    where o.athlete = rankings.toprank
      and rankings.rank = 1
    group by o.athlete
    order by o.athlete

I have put this in to the SQLFiddle here....
Previous MySQL example:

This can be done as a top-count with a grouped select.

    Select TOP 1 athlete
    from Olympics
    group by athlete
    order by Sum(total_medals) DESC

If you want the actual medal haul, add the sum to the select.

    Select TOP 1 athlete, Sum(total_medals) as total_medals
    from Olympics
    group by athlete
    order by Sum(total_medals) DESC

I have put together a fiddle using MySQL (which has the LIMIT key-word)

- Had a similar answer, but what if more than 1 person have the top-count? – konijn Mar 7 at 16:50
- Good question... @user35265 - what konijn says.... ? – rolfl Mar 7 at 16:51
- If there is more than one athlete with the same medal count in my query I believe it would select all athletes with the medal count – The Bear Mar 7 at 17:03
- The runtime of your new sql query is 66.962 ms while the runtime of my query is 38.919 ms. – The Bear Mar 7 at 17:56
- @user35265 if you really need better performance on this query then you should consider a separate table with pre-computed aggregations ... perhaps a materialized view... or a table that pre-aggregates the data.... – rolfl Mar 7 at 18:06

I'm a little late to the party, but I think you were all over-complicating this... Wouldn't this be what you need:

    SELECT athlete
    FROM Olympics
    GROUP BY athlete
    ORDER BY SUM(total_medals) DESC
    LIMIT 1

Here is the obligatory SQL Fiddle.

EDIT: Previous version didn't account for multiple people with the same number of medals.

    SELECT athlete
    FROM Olympics
    GROUP BY athlete
    HAVING SUM(total_medals) = (
        SELECT SUM(total_medals)
        FROM Olympics
        GROUP BY athlete
        ORDER BY SUM(total_medals) DESC
        LIMIT 1
    )

At a quick glance, the execution plan for this seems a little nicer than the other suggestions, feel free to correct me if I am wrong though. Here is the SQL Fiddle.
- This isn't correct because you could have more than one athlete that has the "highest" number of medals – The Bear Aug 20 at 2:06
- @TheBear Thanks, I think I misread the question, I've updated the answer now. – PenutReaper Aug 20 at 9:33

I think that you are overthinking this; in SQL Server I would do something like this:

    SELECT TOP (10) athlete
    FROM (
        SELECT athlete, SUM(total_medals) AS total_medals
        FROM Olympics
        GROUP BY athlete
    ) AS totals
    ORDER BY total_medals DESC

And then I would use my reporting software to decide if there are 2 or more people at the top. This is probably more of what you want anyway, a top 10 list of all time.

Side note: I found it rather difficult to read your query because it wasn't indented and the reserved words weren't capitalized. I would recommend that you do those things when writing a query.

- +1 for mentioning the hard to read formatting – RubberDuck Aug 20 at 12:34
http://logicwonders.com/androbode-manual/
# AndroBode user’s manual

AndroBode is an Android application written to assist students in drawing Bode plots: one of the most commonly used representations of linear transfer functions. Supported features are:

• Drawing of transfer functions in the Bode form.
• Graphical representation of the transfer function as the formula is being edited.
• Zooming and panning of the generated Bode plots.

In this article we will show how AndroBode can be used to input the data related to a transfer function expressed in the Bode form.

$$K\frac{(1+T_1s)\cdots\left(1+\frac{2\zeta_1 s}{\rho_{n1}}+\frac{s^2}{\rho_{n1}^2}\right)\cdots}{s^n(1+\tau_1s)\cdots\left(1+\frac{2\xi_1 s}{\omega_{n1}}+\frac{s^2}{\omega_{n1}^2}\right)\cdots}$$

In our example, we will take a system whose transfer-function parameters assume the following values.

| Parameter | Value |
| --- | --- |
| K | 10 |
| n | 1 |
| $\tau$ | 1 ms |
| $\xi$ | 0.2 |
| $\omega_n$ | 100 $\frac{rad}{sec}$ |

The leftmost tab shows an example of the Bode form, which can be used as a reference while editing the transfer function. Using the second tab, we can edit the binomial factors. In this example we have just one of these factors, in the denominator: insert 0.001 as the value of $\tau$ in the textbox and choose the denominator in the radio buttons. Now you can press the “Add” button to modify the transfer function. If everything went right, you should have achieved the following result.

To select a factor which has to be modified or deleted, just click on the desired factor in the formula below.

Repeat the same process in the “2nd order” tab: insert the values $\xi$=0.2 and $\omega_n$=100 as shown in the following picture.

In the last tab, we will edit the values of K=10 and n=1. Insert the values in the proper textboxes and press the buttons “Set K” and “Set n”. To insert poles in the origin you can set the exponent n. Setting n to negative values inserts the desired number of zeroes in the origin.

It’s finally time to see our Bode plot: just press the “Plot” button.
The Bode plot shows an upper part, which is the representation of the gain, and a lower part, which represents the phase. Both asymptotic (red line) and real (blue line) representations are shown. Each one of the plots can be zoomed or panned just by touching the screen. The vertical and horizontal axes can be zoomed independently.

AndroBode can also show Nyquist plots: click on the multiple choice box just under the “Plot” button and select “Nyquist”. Click the “Plot” button again.

In the control theory section, several examples have been made available about the calculation of the parameters in the transfer function.
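As a cross-check of the plotted gain, here is a hedged sketch (not part of AndroBode) that simply evaluates the example transfer function, with the parameter values from the table above, on the imaginary axis, which is exactly what a Bode magnitude plot does:

```python
# G(s) = K / (s^n (1 + tau*s)(1 + 2*xi*s/wn + (s/wn)^2)),
# using the example values K=10, n=1, tau=1ms, xi=0.2, wn=100 rad/s.
import math

K, n, tau, xi, wn = 10.0, 1, 1e-3, 0.2, 100.0

def gain_db(omega):
    s = 1j * omega
    G = K / (s**n * (1 + tau * s) * (1 + 2 * xi * s / wn + (s / wn) ** 2))
    return 20 * math.log10(abs(G))

for w in (1, 10, 100, 1000):
    print(f"omega = {w:5} rad/s   |G| = {gain_db(w):8.2f} dB")
```

At low frequency the gain approaches 20 dB (from K = 10 and the single pole at the origin), and it rolls off steeply past the corner frequencies, matching the asymptotic plot.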
https://cstheory.stackexchange.com/questions/41975/pulling-a-graph-across-a-partition/41976
# Pulling a graph across a partition

I am looking for the name for a particular graph property, if it has been studied, and efficient algorithms for computing it, if they exist. I realise that this may be a well known property that I am just ignorant of, in which case I apologise, and would be grateful to be set right. The property is as follows:

Take a graph $$G=(V,E)$$ on $$n$$ vertices, where $$V$$ is the vertex set and $$E$$ is the edge set. Let me define two families of subsets of vertices, $$A_i$$ and $$B_i$$, with the relationship that $$B_i = V \setminus A_i$$. I require that $$A_0 = V$$ and $$A_n$$ is the empty set. Each intermediate $$A_i$$ is obtained from $$A_{i-1}$$ by removing one vertex (according to some strategy). The property I care about is $$\max_i |\{(a,b): a\in A_i, b\in B_i,(a,b) \in E\}|$$, minimised over all strategies for choosing $$A_1 \ldots A_{n-1}$$ according to the rules above.

$$A_i$$ and $$B_i$$ can be thought of as the $$i$$th step in a process of moving vertices of the graph from one side of a partition (the $$A$$ side) to the other (the $$B$$ side). Thus the property I am concerned with has an operational interpretation as the maximum number of edges that need to cross the partition at any one point in time as we move the graph from one side to the other, minimised over all strategies for moving vertices across the partition. Aside from computing the value of this property, an algorithm for determining the optimal strategy would also be helpful.

What you are looking for is known as Cutwidth. The problem is NP-complete and quite well studied. For example, it has an $$O((\log n)^{3/2})$$-approximation algorithm and is fixed-parameter tractable when parameterized by the objective function value.
• A paper to cite does not come to mind but it is just applying the $\sqrt{\log n}$ approximation for balanced cut (Arora, Rao, Vazirani) in a divide and conquer manner (find balanced cut, solve the two sides recursively, put solution to left side before the solution to the right side). Better algorithms might be known, I am not sure. – daniello Dec 2 '18 at 6:30
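For intuition, here is a hedged brute-force sketch (my illustration, not one of the efficient algorithms mentioned in the answer): cutwidth is the best possible worst-case cut over all orders in which the vertices move from the $$A$$ side to the $$B$$ side, so we can simply try every ordering on tiny graphs.

```python
# Exact cutwidth by brute force over vertex orderings. Exponential time,
# so only usable on tiny graphs; the problem is NP-complete in general.
from itertools import permutations

def cutwidth(vertices, edges):
    best = float("inf")
    for order in permutations(vertices):
        moved = set()       # the B side so far
        worst = 0
        for v in order:
            moved.add(v)    # v crosses the partition from A to B
            cut = sum(1 for (a, b) in edges
                      if (a in moved) != (b in moved))
            worst = max(worst, cut)
        best = min(best, worst)
    return best

# A 4-cycle has cutwidth 2: as soon as one vertex has moved,
# two of its edges cross the partition, and 2 is achievable.
print(cutwidth([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 2
```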
http://www.maths.ox.ac.uk/events/past/624?field_seminar_date_value_op=%3C&field_seminar_date_value%5Bvalue%5D=&field_seminar_date_value%5Bmin%5D=&field_seminar_date_value%5Bmax%5D=&page=29
Past Algebraic and Symplectic Geometry Seminar

16 June 2009 14:00
Joe Chuang

2 June 2009 15:45
Bernhard Keller

2 June 2009 14:00
Bernhard Keller

26 May 2009 15:45
Nicos Kapouleas

Abstract: I will survey the recent work of Haskins and myself constructing new special Lagrangian cones in ${\mathbb C}^n$ for all $n\ge3$ by gluing methods. The link (intersection with the unit sphere ${\cal S}^{2n-1}$) of a special Lagrangian cone is a special Legendrian $(n-1)$-submanifold. I will start by reviewing the geometry of the building blocks used. They are rotationally invariant under the action of $SO(p)\times SO(q)$ ($p+q=n$) special Legendrian $(n-1)$-submanifolds of ${\cal S}^{2n-1}$. These we fuse (when $p=1$, $p=q$) to obtain more complicated topologies. The submanifolds obtained are perturbed to satisfy the special Legendrian condition (and their cones therefore the special Lagrangian condition) by solving the relevant PDE. This involves understanding the linearized operator and its small eigenvalues, and also ensuring appropriate decay for the solutions.

19 May 2009 15:45
Kazushi Ueda

Abstract: A polynomial $f$ is said to be a Brieskorn-Pham polynomial if $f = x_1^{p_1} + ... + x_n^{p_n}$ for positive integers $p_1,\ldots, p_n$. In the talk, I will discuss my joint work with Masahiro Futaki on the equivalence between the triangulated category of matrix factorizations of $f$ graded with a certain abelian group $L$ and the Fukaya-Seidel category of an exact symplectic Lefschetz fibration obtained by Morsifying $f$.

19 May 2009 14:00

Abstract: I'll define the category of B-branes in a LG model, and show that for affine models the Hochschild homology of this category is equal to the physically-predicted closed state space.
I'll also explain why this is a step towards proving that LG B-models define TCFTs.

12 May 2009 15:45
Sven Meinhardt

7 May 2009 15:45
Eduard Looijenga

Abstract: This is an overview, mostly of work of others (Denef, Loeser, Merle, Heinloth-Bittner, ...). In the first part of the talk we give a brief introduction to motivic integration emphasizing its application to vanishing cycles. In the second part we discuss a join construction and formulate the relevant Sebastiani-Thom theorem.

7 May 2009 14:00
Eduard Looijenga

Abstract: This is an overview, mostly of work of others (Denef, Loeser, Merle, Heinloth-Bittner, ...). In the first part of the talk we give a brief introduction to motivic integration emphasizing its application to vanishing cycles. In the second part we discuss a join construction and formulate the relevant Sebastiani-Thom theorem.

28 April 2009 15:45
Geordie Williamson

Abstract: Triply graded link homology (introduced by Khovanov and Rozansky) is a categorification of the HOMFLYPT polynomial. In this talk I will discuss recent joint work with Ben Webster which gives a geometric construction of this invariant in terms of equivariant constructible sheaves. In this framework the Reidemeister moves have quite natural geometric proofs. A generalisation of this construction yields a categorification of the coloured HOMFLYPT polynomial, constructed (conjecturally) by Mackay, Stosic and Vaz. I will also describe how this approach leads to a natural formula for the Jones-Ocneanu trace in terms of the intersection cohomology of Schubert varieties in the special linear group.
https://kuscholarworks.ku.edu/handle/1808/278?show=full
dc.contributor.author Duncan, Tyrone E.
dc.contributor.author Hu, Yaozhong
dc.contributor.author Pasik-Duncan, Bozenna
dc.date.accessioned 2005-04-11T18:25:26Z
dc.date.available 2005-04-11T18:25:26Z
dc.date.issued 2000-02-02
dc.identifier.citation Duncan, TE; Hu, YZ; Pasik-Duncan, B. Stochastic calculus for fractional Brownian motion - I. Theory. SIAM JOURNAL ON CONTROL AND OPTIMIZATION. Feb 2 2000. 38(2):582-612.
dc.identifier.other ISI:000085672100011
dc.identifier.other http://www.siam.org/journals/sicon/sicon.htm
dc.identifier.uri http://hdl.handle.net/1808/278
dc.description.abstract In this paper a stochastic calculus is given for the fractional Brownian motions that have the Hurst parameter in (1/2, 1). A stochastic integral of Ito type is defined for a family of integrands so that the integral has zero mean and an explicit expression for the second moment. This integral uses the Wick product and a derivative in the path space. Some Ito formulae (or change of variables formulae) are given for smooth functions of a fractional Brownian motion or some processes related to a fractional Brownian motion. A stochastic integral of Stratonovich type is defined and the two types of stochastic integrals are explicitly related. A square integrable functional of a fractional Brownian motion is expressed as an infinite series of orthogonal multiple integrals.
dc.description.sponsorship Research partially funded by NSF Grant DMS 9623439
dc.format.extent 323285 bytes
dc.format.mimetype application/pdf
dc.language.iso en_US
dc.publisher SIAM PUBLICATIONS
dc.subject Fractional Brownian motion
dc.subject Multiple Stratonovich integrals
dc.subject Multiple Ito integrals
dc.subject Ito calculus
dc.subject Stochastic calculus
dc.subject Ito integral
dc.subject Stratonovich integral
dc.subject Ito formula
dc.subject Wick product
dc.title Stochastic calculus for fractional Brownian motion - I. Theory
dc.type Article
dc.rights.accessrights openAccess

### This item appears in the following Collection(s)

KU Libraries
1425 Jayhawk Blvd
Lawrence, KS 66045
785-864-8983
https://statkat.com/online-calculators/p-value-binomial-test-single-proportion.php
p value binomial test for a single proportion - online calculator

Enter your observed number of 'successes' X, the sample size / number of trials n, and the population proportion of successes according to the null hypothesis (the true probability of a success under the null hypothesis), $\pi_0$. Then choose whether the test should be two sided, left sided, or right sided; the calculator returns the p value of the exact binomial test.
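The exact test behind such a calculator can be sketched in plain Python. The function name and the two-sided convention (summing the probabilities of all outcomes no more likely than the observed one) are illustrative assumptions, not taken from the page:

```python
from math import comb

def binom_pvalue(x, n, pi0, alternative="greater"):
    """p value of an exact binomial test of H0: p = pi0,
    given x observed successes in n trials (illustrative sketch)."""
    # Binomial pmf under the null hypothesis, for k = 0..n
    pmf = [comb(n, k) * pi0**k * (1 - pi0) ** (n - k) for k in range(n + 1)]
    if alternative == "greater":   # right sided: P(X >= x)
        return sum(pmf[x:])
    if alternative == "less":      # left sided: P(X <= x)
        return sum(pmf[: x + 1])
    # two sided: sum probabilities of all outcomes no more likely than x
    return min(1.0, sum(p for p in pmf if p <= pmf[x] * (1 + 1e-9)))

# e.g. 9 successes in 10 trials under pi0 = 0.5
p = binom_pvalue(9, 10, 0.5)
```

For large n one would use a normal approximation or an incomplete beta function instead of summing the pmf, but for calculator-sized inputs the direct sum is exact and fast.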
https://tex.stackexchange.com/questions/160483/printbibliography-only-with-citetitle-occurrences
# \printbibliography only with \citetitle occurrences

I want to print two bibliographies:

• first, a classic bibliography with the documents I cited in my text; for these I always use \footcite{}
• and a second one that references only the \citetitle{} occurrences

In my text I mark references to pictures, artworks, musical works, performances, art exhibitions, museum exhibitions and architectural works with \citetitle{}; I don't need \footcite{} for those. I reserve \footcite{} for verbal works: books, articles, podcasts, etc. But I want the \citetitle{} entries to be referenced in a separate final references section, named here "Œuvres & monuments", possibly classified by type of document. Classifying by type of document means a section named "films" with all my films referenced, another named "paintings" with all my paintings, "photographs", etc. Types appear in my '.bib' as '@misc', for example. (Of course, another problem is the translation from my reference management software: @film becomes @electronic, and here @painting -> @misc…)

So it sits between a bibliography and an index: I have a lot of entries referenced (author, title, date…) like in a bibliography, but I want to facilitate indexing like in an index. My solution is using [backref=true]… Well, a subsidiary question: is it possible to use backref in one bibliography and not in another?

To sum up my questions:

• \printbibliography with only \citetitle references?
• classify my first bibliography by sorting=nyvt and my second by type (@image, @movie…)?
• use backref in one bibliography and not in another?

Any idea? Thank you.

\documentclass{scrbook}
\usepackage[style=authortitle-ticomp,sorting=nyvt,isbn=false,doi=false,backref=true]{biblatex}
\usepackage[english,frenchb]{babel}
\begin{document}
… some \footnote{MerleauPonty:1985wh}, some \citetitle{Burton:1982td,Leradeaudelamedu:1819wn,Persepolis:2007wt} & some words indexed.
\printindex[nam]
\printindex[loc]
\end{document}

My biblio.bib:

@electronic{Burton:1982td,
  author = {Burton, Tim},
  title = {{Vincent}},
  year = {1982},
  language = {anglais}}
@book{MerleauPonty:1985wh,
  author = {Merleau-Ponty, Maurice and Lefort, Claude},
  title = {{L'{\OE}il et l'Esprit}},
  publisher = {Gallimard},
  year = {1985},
  series = {Folio. Essais},
  language = {fran{\c c}ais}}
@electronic{Persepolis:2007wt,
  author = {Satrapi, Marjane and Paronnaud, Vincent},
  title = {{Persepolis}},
  year = {2007},
  language = {fran{\c c}ais}}
@misc{Leradeaudelamedu:1819wn,
  title = {{Le radeau de la m{\'e}duse}},
  author = {G{\'e}ricault, Th{\'e}odore},
  year = {1819}}

• Please post code which will compile. That code won't. I don't have Biblio.bib, for example. Moreover, it wouldn't demonstrate the problem anyway because \footnote{...} is not a citation on two grounds (it is just a footnote and there is no key) and \citetitle{...} is not a citation on one ground (it is a citation command but there's still no key). Etc. etc. Also, you don't say if your version of the code works or, if not, what is wrong. (And I can't tell because I don't have compilable code.) I am guessing the answers to your questions are: No*3, Yes*2. (Maybe redefine commands?) – cfr Feb 15 '14 at 0:45
• Could you please expand on what you mean by "second [bibliography is to be ordered] by type". – moewe Feb 15 '14 at 8:00
• One can specify sorting in \printbibliography. So there is no problem in using \printbibliography[title={Bibliographie},sorting=nyvt] and \printbibliography[title={Œuvres & monuments},sorting=nty] for example. – moewe Feb 15 '14 at 8:55
• Hi @cfr Thank you for your reactions, I'll republish my post to be more precise. Sorry for my English & the delay in answering. – Zouib Feb 24 '14 at 15:36

We can redefine \citetitle to add all entries cited via \citetitle to a bibliography category oeuvres.
\DeclareBibliographyCategory{oeuvres}
\DeclareCiteCommand{\citetitle}
  {\boolfalse{citetracker}%
   \boolfalse{pagetracker}%
   \usebibmacro{prenote}}
  {\ifciteindex
     {\indexfield{indextitle}}
     {}%
   \printfield[citetitle]{labeltitle}%
   \addtocategory{oeuvres}{\thefield{entrykey}}}% put the cited entry into the 'oeuvres' category
  {\multicitedelim}
  {\usebibmacro{postnote}}

The first bibliography can then ignore oeuvres, while the second only includes oeuvres.

\printbibliography[heading=subbibliography,title={Bibliographie},notcategory=oeuvres]

To get rid of the page references in the first bibliography, we use

\AtNextBibliography{\renewbibmacro*{pageref}{}}

right before the first \printbibliography. Globally, backref is enabled, but the macro printing the backreferences is temporarily disabled in the first bibliography.

I'm not sure about your sorting requests, but it is no problem to specify the sorting in the \printbibliography command:

\printbibliography[sorting=nyt, heading=subbibliography, title={Bibliographie}, notcategory=oeuvres]
\printbibliography[sorting=ynt, heading=subbibliography, title={Œuvres \& monuments}, category=oeuvres]

MWE

\documentclass{scrartcl}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[english,frenchb]{babel}
\usepackage{csquotes}
\usepackage[style=authortitle-ticomp,isbn=false,doi=false,backref=true,backend=biber]{biblatex}
\DeclareBibliographyCategory{oeuvres}
\DeclareCiteCommand{\citetitle}
  {\boolfalse{citetracker}%
   \boolfalse{pagetracker}%
   \usebibmacro{prenote}}
  {\ifciteindex
     {\indexfield{indextitle}}
     {}%
   \printfield[citetitle]{labeltitle}%
   \addtocategory{oeuvres}{\thefield{entrykey}}}
  {\multicitedelim}
  {\usebibmacro{postnote}}
\begin{document}
some \footcite{cicero,gillies}, some \citetitle{wilde,coleridge} \& some words indexed.
\AtNextBibliography{\renewbibmacro*{pageref}{}}
\end{document}

To sort the bibliography by type we need to define a new sorting scheme: tnyvt. Unfortunately, \sort{ \field{entrytype} } did not work for me, so we use a workaround: the entrytype is parked in the temporary field usera, which is used for sorting.
\DeclareSourcemap{
  \maps[datatype=bibtex]{
    \map{
      \step[fieldset=usera, origentrytype]
    }
  }
}
\DeclareSortingScheme{tnyvt}{
  \sort{
    \field{presort}
  }
  \sort[final]{
    \field{sortkey}
  }
  \sort{
    \field{usera}
  }
  \sort{
    \field{sortname}
    \field{author}
    \field{editor}
    \field{translator}
    \field{sorttitle}
    \field{title}
  }
  \sort{
    \field{sortyear}
    \field{year}
  }
  \sort{
    \field{volume}
    \literal{0}
  }
  \sort{
    \field{sorttitle}
    \field{title}
  }
}

• Hello @moewe! I have clarified the point about types in the question. I want my second references section to be classified by type, for example: first all movies (@film or @electronic), then all paintings (@painting or here @misc). If you have an idea. Maybe I'll open a new question. Thanks – Zouib Feb 24 '14 at 15:40
http://chasethedevil.blogspot.com/2013_03_01_archive.html
## Friday, March 22, 2013

### Cracking the Double Precision Gaussian Puzzle

In my previous post, I stated that some library (SPECFUN by W.D. Cody) computes $$e^{-\frac{x^2}{2}}$$ the following way:

xsq = fint(x * 1.6) / 1.6;
del = (x - xsq) * (x + xsq);
result = exp(-xsq * xsq * 0.5) * exp(-del * 0.5);

where fint(z) computes the floor of z.

1. Why 1.6?

An integer divided by 1.6 is exactly representable in double precision: 1.6 because of 16 (dividing by 1.6 is equivalent to multiplying by 10 and dividing by 16, which is an exact operation). It also gives something very close to a rounding function: x=2.6 will make xsq=2.5, x=2.4 will make xsq=1.875, x=2.5 will make xsq=2.5. The maximum difference between x and xsq will be 0.625.

2. The (a-b)*(a+b) decomposition

del is of the order of 2*x*(x-xsq). When (x-xsq) is very small, del will in most cases be small as well: when x is too high (beyond 39), the result will always be 0, because there is no number small enough to represent exp(-0.5*39*39) in double precision, while (x-xsq) can be as small as machine epsilon (around 2E-16). By splitting x*x into xsq*xsq and del, one allows exp to work on a more refined value of the remainder del, which in turn leads to an increase in accuracy.

3. Real world effect

Let's make x move by machine epsilon and see how the result varies using the naive implementation exp(-0.5*x*x) and using the refined Cody way. We take x=20, and add machine epsilon a number of times (frac). The staircase happens because adding machine epsilon to 20 results in the same 20, until we have added it enough times to reach the next number representable in double precision. What's interesting is that the Cody staircase is regular, with stairs of similar height, while the naive implementation has stairs of uneven height. The relative error between the naive implementation and Cody is higher than one could expect: a factor of 20.
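The two evaluations are easy to compare side by side. Here is a short Python sketch, a direct transcription of the snippet above rather than Cody's original Fortran:

```python
import math

def fint(z):
    """Floor of z, as in the snippet above."""
    return math.floor(z)

def gaussian_naive(x):
    return math.exp(-0.5 * x * x)

def gaussian_cody(x):
    # xsq is an exact multiple of 1/1.6 = 0.625, so xsq*xsq loses no bits,
    # and d carries the small remainder of x*x for the second exp.
    xsq = fint(x * 1.6) / 1.6
    d = (x - xsq) * (x + xsq)   # x*x == xsq*xsq + d, up to rounding
    return math.exp(-xsq * xsq * 0.5) * math.exp(-d * 0.5)
```

For small x the two functions agree to within a few ulps; the split only pays off deep in the tail, where d preserves low-order bits of x*x that the naive product rounds away.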
But it has one big drawback: it requires 2 exponential evaluations, which are relatively costly.

Update March 22, 2013. I looked for a higher precision exp implementation, one that can go beyond double precision. I found an online calculator (not so great to do tests on), and after more searching I found one very simple way: the mpmath python library. I did some initial tests with the calculator and thought Cody was in reality not much better than the naive implementation. The problem was that my tests were wrong, because the online calculator expects an input in terms of human digits, and I did not always use the correct number of digits. For example the double -37.7 is actually -37.7000000000000028421709430404007434844970703125. Here is a plot of the relative error of our methods compared to the high accuracy python implementation, using as input strict double numbers around x=20. The horizontal axis is x-20, the vertical is the relative error. We can see that Cody is really much more accurate (more than 20x). The difference will be lower when x is smaller, but there is still a factor 10 around x=-5.7.

Any calculation using a Cody-like Gaussian density implementation will likely not be as careful as this, so one can doubt the usefulness in practice of such accuracy tricks. The Cody implementation uses 2 exponentials, which can be costly to evaluate; however, Gary commented that we can cache the exp of xsq (possible because fint quantizes xsq) and therefore have both accuracy and speed.
## Wednesday, March 20, 2013

### A Double Precision Puzzle with the Gaussian

Some library computes $$e^{-\frac{x^2}{2}}$$ the following way:

xsq = fint(x * 1.6) / 1.6;
del = (x - xsq) * (x + xsq);
result = exp(-xsq * xsq * 0.5) * exp(-del * 0.5);

where fint(z) computes the floor of z. Basically, x*x is rewritten as xsq*xsq+del. I have seen that trick once before, but I just can't figure out where and why (except that it is probably related to high accuracy issues). The answer is in the next post.
## Thursday, March 14, 2013

### A Seasoned Volatility Swap

This is very much what's in the Carr-Lee paper "Robust Replication of Volatility Derivatives", but it wasn't so easy to obtain in practice:

• The formulas as written in the paper are not usable as is: they can be simplified (not too difficult, but intimidating at first)
• The numerical integration is not trivial: a simple Gauss-Laguerre is not precise enough (maybe if I had an implementation with more points), a Gauss-Kronrod is not either (maybe if we split it into different regions). Funnily, a simple adaptive Simpson works ok (but my boundaries are very basic: 1e-5 to 1e5).

## Tuesday, March 12, 2013

### A Volatility Swap and a Straddle

A Volatility swap is a forward contract on future realized volatility. The pricing of such a contract used to be particularly challenging, often using either an imprecise popular expansion in the variance, or a model specific approach (like Heston or local volatility with jumps). Carr and Lee have recently proposed a way to price those contracts in a model independent way in their paper "Robust Replication of Volatility Derivatives".
Here is the difference between the value of a synthetic volatility swap payoff at maturity (a newly issued one, with no accumulated variance) and a straddle. Those are very close payoffs! I wonder how good the discrete Derman approach is compared to a standard integration for such a payoff, as well as how important the extrapolation of the implied volatility surface is. The real payoff is very easy to obtain through the Carr-Lee Bessel formula.
https://euro-math-soc.eu/review/eulers-pioneering-equation
# Euler's pioneering equation

The Mathematical Intelligencer conducted a poll in 1988 about which was the most beautiful among twenty-four theorems. Euler's equation $e^{i\pi}+1=0$, or $e^{i\pi}=-1$, turned out to be the winner, and that is still largely accepted among mathematicians today. Even among physicists this is true: in a similar poll from 2004, it came out second after Maxwell's equations. The subtitle of this book is therefore The most beautiful theorem in mathematics. This may immediately raise some controversy, not about the choice of the formula, but perhaps about what it should be called: a theorem, an identity, an equality, a formula, an equation,... A theorem or a formula applies, but these are quite general terms. The others refer to formulas with an equal sign. The term identity assumes that there is a variable involved and that the formula holds whatever the value of that variable. That applies to Euler's identity, which is the related formula $e^{ix}=\cos(x)+i\sin(x)$. The previous formula appears as a special case of this identity. Wilson calls the former formula an "equation", but the reader with some affinity to the French language would probably prefer to call it an equality, because the French équation means it has to be solved for an unknown variable. But all the previous names have been used interchangeably to indicate the formula. Calling it Euler's identity may not be the most correct, but it is probably the most common terminology. Whatever it is called, if not the description most beautiful, then certainly the qualification most important or most remarkable would be well deserved. It involves five fundamental mathematical constants: 1, 0, π, e, and i in one simple relation. The 1 generates the counting numbers. The zero took a while to be accepted as a number, and negative numbers too were initially considered exotic. Rational numbers were showing up naturally in computations, but so did numbers like √2 and π.
These required an extension of the rationals with algebraic irrationals like √2 and transcendentals like π, which results in the reals that include all of them. The constant e (notation by Euler) relates to logarithms, and its inverse, the exponential function, grows faster than any polynomial. Finally, the imaginary constant i = √-1 (another notation introduced by Euler) was needed to solve any quadratic equation. This i made it possible to introduce the complex numbers, so that the fundamental theorem of algebra could be proved. The exponential and complex exponential are essential in applied mathematics. Euler's identity is most remarkable because it relates the exponential growth or decay of the real exponential and the oscillating behaviour of sines and cosines in the complex case. All these links allow Wilson to tell many stories about mathematics that are usually discussed in books popularizing mathematics for the lay reader. There are indeed five chapters whose titles are the five previous constants, and a sixth one is about Euler's equation. He does this in a concise way. The amount of information compressed in only 150 pages is amazing. This doesn't mean that it is so dense that it becomes unreadable. Quite the opposite. Because there are no long drawn-out detours, the story becomes straightforward and understandable. For example, the first chapter (only 17 pages including illustrations) introduces children's counting rhymes, compares the names for numbers in seven different languages, and compares number systems: Roman, Egyptian, Mesopotamian, Greek, Chinese, Mayan, and the Hindu-Arabic. The latter was popularized in the West by Fibonacci and Pacioli. There are many illustrations, not only of the notation of these different numerals in this chapter; there are in fact many other illustrations throughout the book. This does not increase the number of pages needlessly, because a picture sometimes says more than a thousand words.
There are no colour illustrations, but colour is not relevant for what they represent. This is not the first book on Euler's equation. For example, Paul Nahin's Dr. Euler's Fabulous Formula (Princeton University Press, 2006) is a bit more mathematically advanced, and a more recent one, David Stipp's A Most Elegant Equation (Basic Books, 2017), has more info about Euler the person. In the current book Euler's name appears frequently, but as a person he is largely absent. For most of the five constants, separate popularizing books have been written, or they are discussed in a chapter of more general popular books about mathematics, too many to list here. Wilson refers to some of them in an appendix with a short list of additional literature, conveniently arranged by subject. There is of course mathematics in this book; it would be weird if there wasn't. But there is nothing that should make a reader with a slight affinity for mathematics shy away. Some of it can be skipped, but the exponential and trigonometric functions, series, and an occasional integral do appear. The more advanced definitions or computations are put in one of the eleven grey-shaded boxes distributed throughout the book, so that skipping is easy. Most of the topics are placed in their historical context. For example, the history of the computation of π is well represented, and the history of the logarithms as they were derived by Napier and Briggs, and how they relate, is nicely explained. There are some notes to explain how complex numbers can be generalised to quaternions and even octonions, and several examples from applied mathematics illustrate the meaning and relevance of the exponential function.
A minor glitch: Albert Girard (1595-1632), who was the first to formulate the fundamental theorem of algebra, is called a Flemish mathematician on page 116, which is strange because the man was born in France and, as a religious refugee, moved to Leiden in what was then the Dutch Republic of the Netherlands. So I do not think the characterization Flemish applies here.

The book does not go deep into the subjects discussed, but I liked it because it is quite broad, touching upon so many mathematical subjects, mainly in their historical context, while readability remains most enjoyable notwithstanding its conciseness.

Reviewer: Adhemar Bultheel

Book details

This is a book in which Wilson gives a popularizing account of the historical development of mathematics. His guidance is Euler's equality that connects five fundamental constants of mathematics: 1, 0, π, e, and i = √-1. Each of these is an incentive to discuss respectively different number systems, how counting extends to negative numbers and eventually the real numbers, the approximation and calculation of π, different logarithms, and complex numbers.

Published: 2018
ISBN: 9780198794929 (hbk); 9780198794936 (pbk)
Price: £ 14.99 (hbk); £ 9.99 (pbk)
Pages: 176
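As a side note to the review, the identity the book is built around is easy to verify numerically. A one-line Python check (illustrative only, not from the review):

```python
import cmath

# e^{i*pi} + 1 should be 0, up to floating point rounding
residual = cmath.exp(1j * cmath.pi) + 1
print(abs(residual))  # on the order of 1e-16
```

The tiny residual is pure rounding noise: pi itself is not exactly representable in double precision, so the result lands a few ulps away from -1.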
https://plover.readthedocs.io/en/latest/plugin-dev/systems.html
# Systems

To define a new system called Example System, add it as an entry point:

```ini
[options.entry_points]
plover.system =
    Example System = plover_my_plugin.system
```

If you have any dictionaries, also add the following line to your MANIFEST.in, to ensure that the dictionaries are copied when you distribute the plugin:

```
include plover_my_plugin/dictionaries/*
```

System plugins are implemented as modules with all of the necessary fields to create a custom key layout:

```python
# plover_my_plugin/system.py

# The keys in your system, defined in steno order
KEYS: Tuple[str, ...]

# Keys that serve as an implicit hyphen between the two sides of a stroke
IMPLICIT_HYPHEN_KEYS: Tuple[str, ...]

# Singular keys that are defined with suffix strokes in the dictionary
# to allow for folding them into a stroke without an explicit definition
SUFFIX_KEYS: Tuple[str, ...]

# The key that serves as the "number key", like # in English
NUMBER_KEY: Optional[str]

# A mapping of keys to number aliases, e.g. {"S-": "1-"} means "#S-" can be
# written as "1-"
NUMBERS: Dict[str, str]

# The stroke to undo the last stroke
UNDO_STROKE_STENO: str

# A list of rules mapping regex inputs to outputs for orthography
ORTHOGRAPHY_RULES: List[Tuple[str, str]]

# Aliases for similar or interchangeable suffixes, e.g. "able" and "ible"
ORTHOGRAPHY_RULES_ALIASES: Dict[str, str]

# Name of a file containing words that can be used to resolve ambiguity
# when applying suffixes
ORTHOGRAPHY_WORDLIST: Optional[str]

# Default key mappings for machine plugins to system keys
KEYMAPS: Dict[str, Dict[str, Union[str, Tuple[str, ...]]]]

# Root location for default dictionaries
DICTIONARIES_ROOT: str

# File names of default dictionaries
DEFAULT_DICTIONARIES: Tuple[str, ...]
```

Note that there are a lot of possible fields in a system plugin. You must set them all to something, but you don't necessarily have to set them all to something meaningful (i.e. some can be empty), so they can be pretty straightforward.
Since it is a Python file rather than a purely declarative format, you can run code for logic as needed, but Plover will try to access all of these fields directly, which does not leave much room for that. It does mean, however, that if you wanted to make a slight modification to the standard English system, such as adding a key, you could import it and set your system's fields from its fields, changing only KEYS; or you could make a base system module that you import and extend with slightly different values in the various fields across multiple system plugins, as Michela does for Italian. See the documentation for plover.system for information on all the fields.
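For concreteness, here is a minimal, self-contained sketch of such a module. The key names and values below are invented for illustration (they are not a real steno layout), and empty values are used for the fields this toy system does not need:

```python
# plover_my_plugin/system.py -- illustrative sketch; the key layout is made up.
from typing import Dict, List, Optional, Tuple, Union

KEYS: Tuple[str, ...] = ('#', 'S-', 'T-', '*', '-T', '-S')  # steno order
IMPLICIT_HYPHEN_KEYS: Tuple[str, ...] = ('*',)
SUFFIX_KEYS: Tuple[str, ...] = ()
NUMBER_KEY: Optional[str] = '#'
NUMBERS: Dict[str, str] = {'S-': '1-'}  # "#S-" can be written "1-"
UNDO_STROKE_STENO: str = '*'
ORTHOGRAPHY_RULES: List[Tuple[str, str]] = []
ORTHOGRAPHY_RULES_ALIASES: Dict[str, str] = {}
ORTHOGRAPHY_WORDLIST: Optional[str] = None
KEYMAPS: Dict[str, Dict[str, Union[str, Tuple[str, ...]]]] = {}
DICTIONARIES_ROOT: str = 'plover_my_plugin/dictionaries'
DEFAULT_DICTIONARIES: Tuple[str, ...] = ()
```

Every field is present, since Plover accesses them all directly, but most are simply empty.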
https://math.eretrandre.org/tetrationforum/showthread.php?mode=threaded&tid=1669&pid=11455
Calculating the fractional iterates of $$f(z) = e^z-1$$ to reasonable accuracy

JmsNxn
Ultimate Fellow
Posts: 1,061  Threads: 121  Joined: Dec 2010

11/19/2022, 04:26 AM (This post was last modified: 11/19/2022, 12:04 PM by JmsNxn.)

Oh yes, absolutely Gottfried. This program has quite a few faults, runtime being the most obvious, lol. It is descriptive of much of my programming. I was primarily a C programmer, and I specialized in recursion. I have written some unbelievably complicated recursion programs for C. The thing is, Pari-GP does not have the runtime benefits that C has when you write things recursively.

I think a large part of that is that I try to preserve Taylor data. For example, the fact that I have built in F(1+s,z) as a valid command, which will produce the Taylor series in s and the asymptotic Taylor series in z, is super helpful if we think of this as an object. It is not helpful pointwise, when you run F(1.5,-0.5). This will run slow, because much of the code for F(1+s,z) takes priority. I am aware of this, but I still like simple code above all, especially as I am using it on my computer. I am definitely not winning any efficiency awards though, lol.

I absolutely know your code, or Sheldon's code, is far superior in runtime. But again, they are not recursion based. Everything I do is recursion based (my brain kinda just works like that, lol).

Sincere Regards, James

Here are some iterates $$f^{\circ s}(z)$$: [plots attached]

These take an exceptionally long time to compute. And I know this. I am again just trying to stress that my mathematical construction, which is slow to program, produces the same results as all of Helms' and Trappmann's work. We can do all of this with integrals. Obviously, code-wise, this is not as good, but mathematically my work works, lol.

This goes really far into the metaphor I always use. We can do Schrodinger mechanics, or we can do Heisenberg mechanics.
We can talk about integral transforms and actions on Hilbert spaces, or we can talk about infinite matrices and eigenvalues. We are doing the same thing in either language, but we must choose a language. I'm just trying to add some integrals, and Schrodinger mechanics, to the problem. And I'm doing so using the Mellin transform (which is just a modified Fourier transform when you break it down). I'm trying to be Schrodinger to your Heisenberg... I hope this metaphor makes sense.

Everyone here is so fixated on infinite matrix equations, and reducing matrix equations, and approximations through $$N\times N$$ scenarios--and this is exactly Heisenberg's approach to quantum physics. Schrodinger's approach was to look at the solution functions globally and create differential/integral equations between the functions globally. Then Von Neumann, using Hilbert's work, proved that Heisenberg and Schrodinger were saying the same thing. Heisenberg just used matrices, and Schrodinger used integrals (to be simplistic). Again, I'm just trying to add integrals to the discussion, and show how they give the exact same thing you guys already have. Just like Schrodinger...

Just understand that, Gottfried. My code is not something groundbreaking. It's just proof that my math is working. The integrals are converging pretty good.

Here's a (very slow to produce -- it took about 24 hrs of CPU time) graph of $$f^{\circ 0.5}(z)$$ for $$1.25>\Im(z) > 0$$ and $$-2 < \Re(z) < 0.5$$: [plot attached]

The Schrodinger/integral/Mellin/Fourier view works. It's not as fast or as good as the Heisenberg, matrix-solution method we see everywhere. But mathematically, it's pretty fucking clean.
https://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/System_F
# System F

System F, also known as the (Girard–Reynolds) polymorphic lambda calculus or the second-order lambda calculus, is a typed lambda calculus that differs from the simply typed lambda calculus by the introduction of a mechanism of universal quantification over types. System F thus formalizes the notion of parametric polymorphism in programming languages, and forms a theoretical basis for languages such as Haskell and ML. System F was discovered independently by logician Jean-Yves Girard (1972) and computer scientist John C. Reynolds (1974).

Whereas simply typed lambda calculus has variables ranging over functions, and binders for them, System F additionally has variables ranging over types, and binders for them. As an example, the fact that the identity function can have any type of the form A → A would be formalized in System F as the judgment

${\displaystyle \vdash \Lambda \alpha .\lambda x^{\alpha }.x:\forall \alpha .\alpha \to \alpha }$

where ${\displaystyle \alpha }$ is a type variable. The upper-case ${\displaystyle \Lambda }$ is traditionally used to denote type-level functions, as opposed to the lower-case ${\displaystyle \lambda }$ which is used for value-level functions. (The superscripted ${\displaystyle \alpha }$ means that the bound x is of type ${\displaystyle \alpha }$; the expression after the colon is the type of the lambda expression preceding it.)

As a term rewriting system, System F is strongly normalizing. However, type inference in System F (without explicit type annotations) is undecidable. Under the Curry–Howard isomorphism, System F corresponds to the fragment of second-order intuitionistic logic that uses only universal quantification. System F can be seen as part of the lambda cube, together with even more expressive typed lambda calculi, including those with dependent types.
According to Girard, the "F" in System F was picked by chance.[1]

## Logic and predicates

The ${\displaystyle \scriptstyle {\mathsf {Boolean}}}$ type is defined as ${\displaystyle \scriptstyle \forall \alpha .\alpha \to \alpha \to \alpha }$, where ${\displaystyle \scriptstyle \alpha }$ is a type variable. This means: ${\displaystyle \scriptstyle {\mathsf {Boolean}}}$ is the type of all functions which take as input a type α and two expressions of type α, and produce as output an expression of type α (note that we consider ${\displaystyle \to }$ to be right-associative).

The following two definitions for the boolean values ${\displaystyle \scriptstyle \mathbf {T} }$ and ${\displaystyle \scriptstyle \mathbf {F} }$ are used, extending the definition of Church booleans:

${\displaystyle \mathbf {T} =\Lambda \alpha {.}\lambda x^{\alpha }\lambda y^{\alpha }{.}x}$

${\displaystyle \mathbf {F} =\Lambda \alpha {.}\lambda x^{\alpha }\lambda y^{\alpha }{.}y}$

(Note that the above two functions require three, not two, parameters. The latter two should be lambda expressions, but the first one should be a type. This fact is reflected in the fact that the type of these expressions is ${\displaystyle \scriptstyle \forall \alpha .\alpha \to \alpha \to \alpha }$; the universal quantifier binding the α corresponds to the Λ binding the α in the lambda expression itself. Also, note that ${\displaystyle \scriptstyle {\mathsf {Boolean}}}$ is a convenient shorthand for ${\displaystyle \scriptstyle \forall \alpha .\alpha \to \alpha \to \alpha }$; it is not a symbol of System F itself, but rather a "meta-symbol". Likewise, ${\displaystyle \scriptstyle \mathbf {T} }$ and ${\displaystyle \scriptstyle \mathbf {F} }$ are also "meta-symbols", convenient shorthands for System F "assemblies" (in the Bourbaki sense); otherwise, if such functions could be named (within System F), then there would be no need for the lambda-expressive apparatus capable of defining functions anonymously.)
Then, with these two ${\displaystyle \scriptstyle \lambda }$-terms, we can define some logic operators (which are of type ${\displaystyle \scriptstyle {\mathsf {Boolean}}\rightarrow {\mathsf {Boolean}}\rightarrow {\mathsf {Boolean}}}$):

${\displaystyle {\begin{aligned}\mathrm {AND} &=\lambda x^{\mathsf {Boolean}}\lambda y^{\mathsf {Boolean}}{.}xy\,\mathbf {F} \\\mathrm {OR} &=\lambda x^{\mathsf {Boolean}}\lambda y^{\mathsf {Boolean}}{.}x\mathbf {T} \,y\\\mathrm {NOT} &=\lambda x^{\mathsf {Boolean}}{.}x\mathbf {F} \,\mathbf {T} \end{aligned}}}$

As in Church encodings, there is no need for an IFTHENELSE function, as one can just use raw ${\displaystyle \scriptstyle {\mathsf {Boolean}}}$-typed terms as decision functions. However, if one is requested:

${\displaystyle \mathrm {IFTHENELSE} =\Lambda \alpha .\lambda x^{\mathsf {Boolean}}\lambda y^{\alpha }\lambda z^{\alpha }.x\alpha yz}$

will do.

A predicate is a function which returns a ${\displaystyle \scriptstyle {\mathsf {Boolean}}}$-typed value. The most fundamental predicate is ISZERO, which returns ${\displaystyle \scriptstyle \mathbf {T} }$ if and only if its argument is the Church numeral 0:

${\displaystyle \mathrm {ISZERO} =\lambda n^{\forall \alpha .(\alpha \rightarrow \alpha )\rightarrow \alpha \rightarrow \alpha }{.}n^{\mathsf {Boolean}}(\lambda x^{\mathsf {Boolean}}{.}\mathbf {F} )\,\mathbf {T} }$

## System F structures

System F allows recursive constructions to be embedded in a natural manner, related to that in Martin-Löf's type theory. Abstract structures (S) are created using constructors. These are functions typed as ${\displaystyle K_{1}\rightarrow K_{2}\rightarrow \dots \rightarrow S}$. Recursivity is manifested when ${\displaystyle S}$ itself appears within one of the types ${\displaystyle K_{i}}$.
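Because Python is untyped, the type abstractions Λα and the type annotations simply disappear under erasure, and what remains of the boolean terms and operators above can be run directly. This is only an illustrative erasure of the System F terms, not System F itself:

```python
# Church booleans with types erased: a boolean selects one of two arguments.
T = lambda x: lambda y: x            # T = Λα. λx^α λy^α. x
F = lambda x: lambda y: y            # F = Λα. λx^α λy^α. y

AND = lambda p: lambda q: p(q)(F)    # AND = λx λy. x y F
OR = lambda p: lambda q: p(T)(q)     # OR  = λx λy. x T y
NOT = lambda p: p(F)(T)              # NOT = λx. x F T

def decode(b):
    """Map a Church boolean back to a Python bool for inspection."""
    return b(True)(False)

print(decode(AND(T)(F)), decode(OR(F)(T)), decode(NOT(T)))
# prints: False True False
```

Each operator works by letting the first boolean "choose" between its arguments, exactly as in the typed terms above.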
If you have ${\displaystyle m}$ of these constructors, you can define the type of ${\displaystyle S}$ as:

${\displaystyle \forall \alpha .(K_{1}^{1}[\alpha /S]\rightarrow \dots \rightarrow \alpha )\dots \rightarrow (K_{1}^{m}[\alpha /S]\rightarrow \dots \rightarrow \alpha )\rightarrow \alpha }$

For instance, the natural numbers can be defined as an inductive datatype ${\displaystyle N}$ with constructors

${\displaystyle {\mathit {zero}}:\mathrm {N} }$

${\displaystyle {\mathit {succ}}:\mathrm {N} \rightarrow \mathrm {N} }$

The System F type corresponding to this structure is ${\displaystyle \forall \alpha .\alpha \to (\alpha \to \alpha )\to \alpha }$. The terms of this type comprise a typed version of the Church numerals, the first few of which are:

0 := ${\displaystyle \Lambda \alpha .\lambda x^{\alpha }.\lambda f^{\alpha \to \alpha }.x}$

1 := ${\displaystyle \Lambda \alpha .\lambda x^{\alpha }.\lambda f^{\alpha \to \alpha }.fx}$

2 := ${\displaystyle \Lambda \alpha .\lambda x^{\alpha }.\lambda f^{\alpha \to \alpha }.f(fx)}$

3 := ${\displaystyle \Lambda \alpha .\lambda x^{\alpha }.\lambda f^{\alpha \to \alpha }.f(f(fx))}$

If we reverse the order of the curried arguments (i.e., ${\displaystyle \forall \alpha .(\alpha \rightarrow \alpha )\rightarrow \alpha \rightarrow \alpha }$), then the Church numeral for ${\displaystyle n}$ is a function that takes a function f as argument and returns the ${\displaystyle n}$th power of f. That is to say, a Church numeral is a higher-order function – it takes a single-argument function f, and returns another single-argument function.

## Use in programming languages

The version of System F used in this article is an explicitly typed, or Church-style, calculus. The typing information contained in λ-terms makes type-checking straightforward.
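The typed Church numerals can likewise be erased to plain Python, keeping the x-before-f argument order used for the numerals above; since the article states ISZERO for the reversed (f-before-x) order, it is adapted here to match. Again, this is an untyped illustration, not System F:

```python
# Church numerals with types erased, base-then-function argument order:
# a numeral n takes a base x and a function f, and applies f to x n times.
zero = lambda x: lambda f: x
succ = lambda n: lambda x: lambda f: f(n(x)(f))

# Church booleans, needed for the ISZERO predicate:
T = lambda a: lambda b: a
F = lambda a: lambda b: b

# Zero applications of the function leave T in place; any application yields F.
ISZERO = lambda n: n(T)(lambda _: F)

def to_int(n):
    """Map a Church numeral back to a Python int for inspection."""
    return n(0)(lambda k: k + 1)

three = succ(succ(succ(zero)))
print(to_int(three))                     # prints: 3
print(ISZERO(zero)(True)(False))         # prints: True
print(ISZERO(three)(True)(False))        # prints: False
```

Note how succ threads the extra application of f through the existing numeral, mirroring the typed definition 3 := Λα. λx. λf. f(f(f x)).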
Joe Wells (1994) settled an "embarrassing open problem" by proving that type checking is undecidable for a Curry-style variant of System F, that is, one that lacks explicit typing annotations.[2][3] Wells's result implies that type inference for System F is impossible.

A restriction of System F known as "Hindley–Milner", or simply "HM", does have an easy type inference algorithm and is used for many statically typed functional programming languages such as Haskell 98 and the ML family. Over time, as the restrictions of HM-style type systems have become apparent, languages have steadily moved to more expressive logics for their type systems. GHC, a Haskell compiler, goes beyond HM (as of 2008) and uses System F extended with non-syntactic type equality;[4] non-HM features in OCaml's type system include GADTs.[5][6]

## The Girard-Reynolds Isomorphism

In second-order intuitionistic logic, the second-order polymorphic lambda calculus (F2) was discovered by Girard (1972) and independently by Reynolds (1974).[7] Girard proved the Representation Theorem: that in second-order intuitionistic predicate logic (P2), functions from the natural numbers to the natural numbers that can be proved total form a projection from P2 into F2.[7] Reynolds proved the Abstraction Theorem: that every term in F2 satisfies a logical relation, which can be embedded into the logical relations of P2.[7] Reynolds proved that a Girard projection followed by a Reynolds embedding forms the identity, i.e., the Girard-Reynolds Isomorphism.[7]

## System Fω

While System F corresponds to the first axis of Barendregt's lambda cube, System Fω or the higher-order polymorphic lambda calculus combines the first axis (polymorphism) with the second axis (type operators); it is a different, more complex system.
System Fω can be defined inductively on a family of systems, where induction is based on the kinds permitted in each system:

• ${\displaystyle F_{n}}$ permits kinds:
  • ${\displaystyle \star }$ (the kind of types) and
  • ${\displaystyle J\Rightarrow K}$ where ${\displaystyle J\in F_{n-1}}$ and ${\displaystyle K\in F_{n}}$ (the kind of functions from types to types, where the argument type is of a lower order)

In the limit, we can define system ${\displaystyle F_{\omega }}$ to be

${\displaystyle F_{\omega }={\underset {1\leq i}{\bigcup }}F_{i}}$

That is, Fω is the system which allows functions from types to types where the argument (and result) may be of any order.

Note that although Fω places no restrictions on the order of the arguments in these mappings, it does restrict the universe of the arguments for these mappings: they must be types rather than values. System Fω does not permit mappings from values to types (dependent types), though it does permit mappings from values to values (${\displaystyle \lambda }$ abstraction), mappings from types to values (${\displaystyle \Lambda }$ abstraction, sometimes written ${\displaystyle \forall }$), and mappings from types to types (${\displaystyle \lambda }$ abstraction at the level of types).

## Notes

1. Girard, Jean-Yves (1986). "The system F of variable types, fifteen years later". Theoretical Computer Science. 45: 160. doi:10.1016/0304-3975(86)90044-7. "However, in [3] it was shown that the obvious rules of conversion for this system, called F by chance, were converging."
2. http://www.macs.hw.ac.uk/~jbw/research-summary.html
3. "The Church Project: Typability and type checking in System F are equivalent and undecidable". 29 September 2007. Archived from the original on 29 September 2007.
4. "System FC: equality constraints and coercions". gitlab.haskell.org. Retrieved 2019-07-08.
5. "OCaml 4.00.1 release notes". ocaml.org. 2012-10-05. Retrieved 2019-09-23.
6. "OCaml 4.09 reference manual". 2012-09-11. Retrieved 2019-09-23.

## References

• Girard, Jean-Yves (1971). "Une Extension de l'Interpretation de Gödel à l'Analyse, et son Application à l'Élimination des Coupures dans l'Analyse et la Théorie des Types". Proceedings of the Second Scandinavian Logic Symposium. Amsterdam. pp. 63–92. doi:10.1016/S0049-237X(08)70843-7.
• Girard, Jean-Yves (1972). Interprétation fonctionnelle et élimination des coupures de l'arithmétique d'ordre supérieur (Ph.D. thesis) (in French). Université Paris 7.
• Reynolds, John (1974). Towards a Theory of Type Structure.
• Girard, Lafont and Taylor (1989). Proofs and Types. Cambridge University Press. ISBN 0-521-37181-3.
• Wells, J. B. (1994). "Typability and type checking in the second-order lambda-calculus are equivalent and undecidable". In Proceedings of the 9th Annual IEEE Symposium on Logic in Computer Science (LICS), pp. 176–185.
• Pierce, Benjamin (2002). Types and Programming Languages. MIT Press. ISBN 0-262-16209-1. Chapter 23: Universal Types, and Chapter 25: An ML Implementation of System F.
http://languagelog.ldc.upenn.edu/nll/?p=93
## Is English more efficient than Chinese after all?

[Executive summary: Who knows?]

This follows up on a series of earlier posts about the comparative efficiency — in terms of text size — of different languages ("One world, how many bytes?", 8/5/2005; "Comparing communication efficiency across languages", 4/4/2008; "Mailbag: comparative communication efficiency", 4/5/2008).

Hinrich Schütze wrote:

I'm not sure we have interacted since you taught your class at the 1991 linguistics institute in Santa Cruz — I fondly remember that class, which got me started in StatNLP. I'm writing because I was intrigued by your posts on compression ratios of different languages. As somebody else remarked, gzip can't really be used to judge the informativeness of a piece of text.

I did the following simple experiment. I read the first 10^9 or so characters from the xml Wikipedia dump and wrote them to a file (which I called wiki). I wrote the same characters to a second file (wikispace), but inserted a space after each character. Then I compressed the two files. Here is what I got:

    1012930723  wiki
    2025861446  wikispace
     314377664  wiki.gz
     385264415  wikispace.gz

    385264415 / 314377664 ≈ 1.225

The two files contain the same information, but gzip's model does not handle this type of encoding well. In this example we know what the generating process of the data was. In the case of Chinese and English we don't. So I think that until there is a more persuasive argument we should stick with the null hypothesis: the two texts of a Chinese-English bitext are equally informative, but the processes transforming the information into text are different in that the output of one can be more efficiently compressed by gzip than the other. I don't see how we can conclude anything about deep cultural differences. Note that a word-based language model also would produce very different numbers for the two files. Does this make sense or is there a flaw in this argument?
The flaw, clearly, was in *my* argument. I asserted that modern compression techniques should be able to remove most of the obvious and simple reasons for differences in document size among translations in different languages, like different spacing or spelling conventions. If there are residual differences among languages, this either relates to redundancies that are not being modeled [e.g. marking of number and tense, or omission of pronouns] or it reflects a different sort of difference between languages and cultures [such as differing habits of explicitness].

But Hinrich's simple experiment shows that the first part of this assertion is simply false. At least, gzip compression can't entirely remove even such a simple manipulation as the insertion of a space after every letter of the original. In principle, I believe, coders like gzip, based on accumulating a "dictionary" of previously-seen strings, should be asymptotically oblivious to such manipulations; but in the case at hand, we're clearly a long way from the asymptote. (Or perhaps the principle has been lost due to practical compromises.)

Hinrich's note also prodded me to do something that I promised in one of the earlier posts, namely to try a better compression program on some Chinese/English comparisons. A few simple experiments of this type showed that I was even more wrong than Hinrich thought.

First, I replicated Hinrich's experiment on English. I took the New York Times newswire for October of 2000 (from English Gigaword Third Edition, LDC2007T07). I created two derived versions: one by adding a space after each character of the original, as Hinrich did, and another by removing all spaces, tabs and newlines from the original. I then compressed the three texts with gzip and with sbc, a compression program based on the Burrows-Wheeler Transform, which seems to be among the better recent text-file compressors.
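Hinrich's manipulation is easy to reproduce in miniature. Here is a hedged Python sketch, using generated filler text as a stand-in for the Wikipedia dump (the vocabulary, text length, and exact sizes are arbitrary assumptions; only the manipulation matters):

```python
import gzip
import random

# Stand-in text: a random stream of common words (NOT the original corpus).
random.seed(0)
vocab = ["the", "of", "and", "to", "compression", "language", "entropy",
         "text", "model", "byte", "space", "chinese", "english"]
text = " ".join(random.choices(vocab, k=20000)).encode("ascii")

# The same characters, with a space inserted after each one.
spaced = bytes(b for ch in text for b in (ch, 0x20))

plain_gz = gzip.compress(text, 9)
spaced_gz = gzip.compress(spaced, 9)

# An ideal coder would give (almost) the same size for both; gzip does not.
print(len(plain_gz), len(spaced_gz), round(len(spaced_gz) / len(plain_gz), 3))
```

The spaced file carries exactly the same information, yet its gzip output comes out larger, which is the asymmetry the post is about.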
The results:

|                          | Original   | Spaces added | Space, tab, nl removed |
|--------------------------|------------|--------------|------------------------|
| No compression           | 61,287,671 | 122,575,342  | 51,121,392             |
| gzip -9                  | 21,467,564 | 26,678,868   | 19,329,166             |
| gzip bpB (bits per byte) | 2.802      | 1.741        | 3.025                  |
| sbc -m3                  | 11,881,320 | 12,702,780   | 11,632,941             |
| sbc bpB                  | 1.551      | 0.829        | 1.820                  |

This replicates Hinrich's result: the spaces-added text is about 24% larger after gzip compression, and about 7% larger after sbc compression. Better compression is reducing the effect, but not eliminating it. In the other direction, removing white space makes the original file about 17% smaller, and this difference is reduced but not eliminated by compression (10% smaller after gzip, 2.1% smaller after sbc).

Next, I thought I'd try a recently-released Chinese/English parallel text corpus, created by translating Chinese blogs into English (Chinese Blog Parallel Text, LDC2008T06). I processed the corpus to extract just the text sentences.

|                | Chinese | English   | English/Chinese ratio |
|----------------|---------|-----------|-----------------------|
| No compression | 814,286 | 1,034,746 | 1.271                 |
| gzip -9        | 362,565 | 366,322   | 1.010                 |
| gzip bpB       | 3.562   | 2.832     |                       |
| sbc -m3        | 263,073 | 254,543   | 0.968                 |
| sbc bpB        | 2.585   | 1.968     |                       |

In the originals, the English translations are about 27% larger than the (UTF-8) Chinese originals, which is similar to the ratios seen before. However, even with gzip, the difference is essentially eliminated by compression. With sbc, the compressed English is actually slightly smaller than the compressed Chinese.

So I went back and tried one of the corpora whose compressed size was discussed in my earlier post (Chinese English News Magazine Parallel Text, LDC2005T10). Again, I processed the corpus to extract only the (Big-5 encoded) Chinese or English text, eliminating formatting, alignment markers, etc.
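The bits-per-byte rows in the tables above follow directly from the raw sizes (8 bits per compressed byte, divided by the original size). A quick Python check, using the sizes reported for the NYT experiment:

```python
# Recomputing the bits-per-byte (bpB) figures from raw file sizes:
# bpB = 8 * compressed_size / original_size.
def bits_per_byte(compressed_size, original_size):
    return 8 * compressed_size / original_size

# gzip -9 and sbc -m3 on the unmodified NYT text:
gzip_bpb = bits_per_byte(21_467_564, 61_287_671)
sbc_bpb = bits_per_byte(11_881_320, 61_287_671)
print(round(gzip_bpb, 3), round(sbc_bpb, 3))  # 2.802 1.551
```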
To my surprise, in this case, the English versions come out smaller under both gzip and sbc compression:

|                | Chinese    | English    | English/Chinese ratio |
|----------------|------------|------------|-----------------------|
| No compression | 37,399,738 | 54,336,642 | 1.453                 |
| gzip -9        | 22,310,891 | 19,803,723 | 0.888                 |
| gzip bpB       | 4.77       | 2.916      |                       |
| sbc -m3        | 16,708,712 | 12,458,354 | 0.746                 |
| sbc bpB        | 3.57       | 1.834      |                       |

This is the same corpus as the one called "Sinorama" in the table in my first post on this subject ("One world, how many bytes?", 8/5/2005), where the English/Chinese ratio before compression was given as 1.95, and after gzip compression as 1.19. (Why the difference? Well, the numbers in my 2005 post reflected the results of compressing the whole file hierarchy for each language, without any processing to distinguish the text from other things; and the Chinese files were encoded as Big5 characters, meaning that even the Latin-alphabet characters in the sgml formatting codes were 16 bits each.)

My conclusions:

1. Hinrich is right — current compression techniques, from a practical point of view, reduce but don't eliminate the effects of superficial differences in orthographic practices.
2. It's a good idea to be explicit and specific about the sources of experimental numbers, so that others can replicate (or fail to replicate) the process.

So what I did to get the Chinese/English numbers is specified below, for those who care.

For the Sinorama corpus (LDC2005T10), in the data/Chinese and data/English directories, I extracted the text via this /bin/sh command:

    for f in *.sgm
    do
        egrep '^<seg' $f | sed 's/^<seg id=[0-9]*> //; s/<.seg> *$//'
    done > alltext

and then compressed (using gzip 1.3.3 with the -9 flag, and sbc 0.970r3 with the -m3 flag).

For the Chinese blog corpus (LDC2008T06), in the data/source and data/translation directories, I extracted the text via

    for f in *.tdf
    do
        gawk -F '\t' '{print $8}' $f
    done > alltext

and then compressed as above.

1. ### Pekka Karjalainen said, April 28, 2008 @ 10:45 am

Minor spelling issue: Isn't it the Burrows-Wheeler Transform?
[myl: Oops. Fixed now.]

2. ### Jeff Berry said, April 28, 2008 @ 1:34 pm

You can compare efficiency of a writing system by calculating the redundancy in that system.

Redundancy = (Max Entropy − Actual Entropy) / Max Entropy

Find max entropy by using log n, where n is the number of graphemes in the system (for English n = 27 (including space), max ent = 4.75). Find actual entropy by finding the unigram frequency p of each grapheme, then sum −p log p for each grapheme. For the text of the Wall Street Journal from the Penn Treebank, this comes out as follows:

| grapheme | p        | −log p    | −p log p |
|----------|----------|-----------|----------|
| (space)  | 0.170038 | 2.556071  | 0.434629 |
| a        | 0.069675 | 3.843214  | 0.267776 |
| b        | 0.012724 | 6.296331  | 0.080113 |
| c        | 0.029773 | 5.069847  | 0.150945 |
| d        | 0.031845 | 4.972778  | 0.158359 |
| e        | 0.098468 | 3.344202  | 0.329297 |
| f        | 0.017829 | 5.809607  | 0.103581 |
| g        | 0.016908 | 5.886142  | 0.099523 |
| h        | 0.034021 | 4.877444  | 0.165934 |
| i        | 0.062508 | 3.999818  | 0.250020 |
| j        | 0.001773 | 9.139746  | 0.016203 |
| k        | 0.006419 | 7.283481  | 0.046751 |
| l        | 0.034082 | 4.874854  | 0.166144 |
| m        | 0.022562 | 5.469980  | 0.123412 |
| n        | 0.060598 | 4.044584  | 0.245094 |
| o        | 0.060929 | 4.036726  | 0.245954 |
| p        | 0.019223 | 5.701048  | 0.109589 |
| q        | 0.000898 | 10.120998 | 0.009089 |
| r        | 0.056578 | 4.143611  | 0.234438 |
| s        | 0.059687 | 4.066429  | 0.242715 |
| t        | 0.073351 | 3.769033  | 0.276464 |
| u        | 0.022772 | 5.456564  | 0.124260 |
| v        | 0.008375 | 6.899695  | 0.057785 |
| w        | 0.012001 | 6.380735  | 0.076573 |
| x        | 0.002336 | 8.741966  | 0.020418 |
| y        | 0.013965 | 6.162003  | 0.086055 |
| z        | 0.000662 | 10.561087 | 0.006990 |

Actual Entropy = 4.12811140687
Max Entropy = 4.75488750216
Redundancy = 0.131817229115
types = 27, tokens = 474388

The PH corpus for Mandarin Chinese:

Actual Entropy = 9.571563
Max Entropy = 12.187042
Redundancy = 0.214611
types = 4663, tokens = 3252625

So in this sense, English is more efficient.

[myl: But this is all beside the point.
We were never interested in the redundancy of the orthographies — the (perhaps forlorn) hope was that good compression would wash that out. (And in any case, all of the compression methods under consideration do better than unigram probabilities at removing redundancy: they are all compressing English text way better than 4.128 bits per character!) The question at issue is whether a given message content (in some language-independent sense) might normally be expressed more or less redundantly in one language rather than another (in some orthography-independent sense).]

3. ### Andrew Rodland said, April 28, 2008 @ 3:27 pm

I would suggest using PPMd or LZMA; both of them are heavyweight "brute force"-ish compressors that compress general input as well as anything you can get for free. :)

[myl: Both appear to have done worse on English text than sbc in the ACT compression test. The point here is not to do a general compression bake-off, or trade compressor preferences. Do you have any empirical evidence, or any theoretical argument, that we'd learn something new about the matter in question by taking the time to try two additional compression methods?]

4. ### Bob Hall said, April 28, 2008 @ 4:24 pm

"…the Chinese files were encoded as Big5 characters, meaning that even the Latin-alphabet characters in the sgml formatting codes were 16 bits each."

Actually, Big 5 uses the same 8-bit codes for the lower 128 characters of ASCII as UTF-8 or ISO Latin-1. You can see this by setting the display of a page of English in your browser to Big 5 (with my browser set to Big 5 by default, this happens all the time). All the lower ASCII characters (letters, numbers, common punctuation) show up just fine. The main issue is with so-called "smart quotes", which are upper ASCII characters. Moreover, Big 5 is far more efficient for Chinese characters since they take up two bytes in Big 5, but 3 bytes in UTF-8, which has to deal with a far larger number of possible characters.
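These per-character byte counts are easy to verify directly. A small Python check (assuming the interpreter's bundled big5 codec is available):

```python
# An ASCII letter is one byte in both Big5 and UTF-8, while a CJK
# character takes two bytes in Big5 but three in UTF-8.
han = "\u4e2d"  # U+4E2D, a common CJK character
print(len(han.encode("big5")), len(han.encode("utf-8")))   # 2 3
print(len("a".encode("big5")), len("a".encode("utf-8")))   # 1 1
```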
So, I'd expect Big 5-encoded Chinese to be much smaller than UTF-8-encoded Chinese.

[myl: Oops. Actually, I knew that, now that I think about it, which makes my mistake just that much dumber. Thanks for the correction. But this means that the difference between the cited ratios is again a mystery — and likely to remain one for a while, since I don't have time to investigate it.]

5. ### Dave Costa said, April 28, 2008 @ 4:40 pm

"Better compression is reducing the effect [of adding spaces], but not eliminating it." If the compression tools acted as you seem to think they should, it would be disastrous for computing! gzip is a lossless compressor, meaning that the original input file can be completely reconstructed from the compressed file. (I would assume that sbc is as well.) If two different files produced the same compressed file, how would we know which version to reconstruct when decompressing? Your assumption is that adding spaces adds no information. This may be true to you as a human reader of a text, but gzip operates on a binary data stream and has no way of knowing the intended interpretation of that data. I believe the expectation is that the data is primarily ASCII-encoded text, and the algorithm is optimized for that case; but it must also account for the possibility that the data is something else entirely. From a computing perspective, all of those spaces are information, and gzip cannot choose to discard them.

6. ### Dave Costa said, April 28, 2008 @ 5:07 pm

If gzip behaved as Mark and Hinrich seem to be expecting, it would be disastrous! The purpose of data compression is to be able to recover the input data. Note I say "data" not "text". When gzip is invoked on your file, it has no knowledge of the intended interpretation of the bits it contains. Its guiding principle is that it must compress the data in such a way that decompression will reproduce the input data EXACTLY.
You seem to be expecting that compressing the files "wiki" and "wikispace" in the example should produce identical compressed files. From a computing perspective, this would be throwing away information. While under your interpretation of the data, this information is irrelevant, gzip cannot know that, and must preserve it.

[Neither Hinrich nor I have the obviously stupid expectation attributed to us. What we expect is that an ideal lossless compression algorithm would not waste bits encoding redundant (i.e. predictable) aspects of its input. If every other byte of a file is a space, this fact can be noted (and preserved in the uncompressed form) without doubling the size of the compressed file. Similarly, an ideal compression algorithm would be able to deal with any arbitrary character-encoding scheme, without changing the size of the compressed file (other than perhaps by a fixed amount), since by hypothesis the information encoded does not change. Exactly the same point holds for data other than text — different ways of encoding the color of pixels in an image, for example. Hinrich's point was that gzip is very far from ideal in this respect, as his little experiment shows. My experiments show that sbc is considerably better at abstracting away from such local redundancies, but still far from the ideal; as a result, the effects of character-encoding and other trivial orthographical modulations can't be ignored in a discussion of this sort.]

7. ### Gwillim Law said, April 28, 2008 @ 7:20 pm

Have you taken into account the variability of the translation process? Surely two different translators could produce English texts that were accurate translations of the Chinese blogs, but differed in length by ten or twenty percent. You can translate "Il n'y a pas de quoi" as "You're welcome" or "Don't mention it" and the compression ratios will be a little different.
As an experiment, I took three paragraphs from a French Wikipedia article and translated them into English, twice. The first time, I stuck to a word-for-word translation as much as possible. The second time, I rephrased the sentences somewhat, so that they read more like my natural English writing. Both translations have the same degree of formality and present the same facts. Here are the statistics:

| Version   | Words | Characters |
|-----------|-------|------------|
| French    | 307   | 1793       |
| English-1 | 291   | 1756       |
| English-2 | 260   | 1587       |

The second English translation is about 10.7% shorter in words and 9.6% shorter in characters than the first. It seems to me that this kind of stylistic disparity would overshadow any difference due to the characteristics of the languages.

8. ### john riemann soong said, April 28, 2008 @ 8:26 pm

What if someone took the painstaking task of converting the texts to IPA? It doesn't make sense to be analysing artificial orthography when what we want is to the measuring the entropy in the sounds of natural language.

9. ### john riemann soong said, April 28, 2008 @ 8:38 pm

*is the measuring

Furthermore, there are times it seems, when one might be removing informationally-salient whitespace, so any salient information contained in prosody like stress would all be kept. Translating the Chinese blogs into IPA wouldn't be impossible, just a bit tedious. A wiki project could even work.

10. ### john riemann soong said, April 28, 2008 @ 9:32 pm

Lastly, are we comparing languages or writing systems here? If only the latter, then maybe I misunderstood the aim of the idea. (Potentially far more interesting as an idea perhaps is the "efficiency" of natural spoken language.)

[myl: Mr Soong, have you considered reading the sequence of posts that you're commenting on? A radical suggestion, I know, but believe it or not, some people do it.]

11.
### Anders Ringström said, April 29, 2008 @ 9:09 am

Forgetting compression algorithms, wouldn't Chinese be more efficient when writing a message for, say, a mobile phone, where physical space for what's displayed counts more than the internal representation?

12. ### Nick Lamb said, April 29, 2008 @ 1:07 pm

This post reminded me, and the new comment system gives me the opportunity to just write "off the cuff".

Compression algorithms need to operate on a string of symbols. Choosing the minimum symbol size (1 bit) makes things very difficult, so this is rarely attempted. In a compressor with the goal of compressing 16-bit PCM audio (such as FLAC), these symbols are usually 16-bit PCM samples, or (stereo) pairs of such samples. In a general purpose algorithm like GNU zip aka deflate, the most obvious choice of symbol size is the octet (8 bits, often called "a byte"). Now this biases your analysis, because you're comparing English text in ASCII (one octet corresponds exactly to the language's own symbols, the glyphs of the Latin alphabet) to Chinese in either Big5 or UTF-8, where some variable number of octets correspond to the language's own symbols. Inevitably a general purpose, "octet-oriented" compressor will do less well in the latter case.

To make this fairer you might try converting both to UTF-16 (where most symbols from either system will correspond to a single 16-bit code unit) and then, to remove a further bias, add say 0x2000 to every 16-bit value in the English text, thus making the actual numerical values more similar to those in the Chinese, while admittedly making their meaning a bit opaque to a human.

In computer systems the deflate algorithm is used because it's cheap. In places where deflate is used on data that isn't just a stream of bytes, you can usually improve things a lot by pre-processing the data.
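The UTF-16-plus-shift normalization suggested above can be sketched in a few lines; the sample string is a hypothetical stand-in, and the 0x2000 offset is the one proposed in the comment:

```python
# Re-encode a Latin-alphabet text as 16-bit code units, shifted by 0x2000
# so its numeric range resembles that of CJK code units (sketch only).
english = "the quick brown fox"
shifted = b"".join((ord(c) + 0x2000).to_bytes(2, "little") for c in english)
print(len(shifted))  # one 16-bit unit per character: 2 * 19 = 38
```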
PNG, for example, specifies a number of alternative pre-processing steps for any scanline, such as "replace all but the first pixel with the difference from the previous pixel". These pre-processing steps correspond to the authors' knowledge about how pixels are related to one another in meaningful images. Most good implementations use a heuristic which "guesses" an appropriate type of pre-processing for each scanline; the type of pre-processing, plus the processed scanline, are then encoded together and sent to deflate for compression. This improvement over just using gzip/deflate on raw pixel data accounts for a significant decrease in file size compared to earlier lossless file formats.

If linguists come up with suitable pre-processing steps for specific languages, which used their higher level knowledge about the meaning of the symbols involved, I have no doubt that it would be useful in conjunction with deflate for compressing human language text, and it would probably help in your investigation.

13. ### Ran Ari-Gur said, April 29, 2008 @ 8:23 pm

The results may not be meaningful for technical reasons, but the idea and the process are still very thought-provoking. It's too bad it didn't work out: we could have empirically tested the various claims about certain languages being "better for poetry," others for philosophy, etc. :-)

I wonder: even in an ideal world where the size of a compressed file really accurately represented the bitwise information content of the file, mightn't the difference in writing systems still have a large effect? English writing conveniently breaks speech down to the individual phoneme (albeit very inconsistently), so an AI compressor could theoretically learn and make use of rules like the "word initial phonological /fn/ is almost unheard-of" rule mentioned here a while back. Chinese writing, by contrast, is a lot less informative in this regard; the AI compressor could only learn rules at the word/syllable level and up.
(Or is it that Chinese writing already incorporates the lower-level rules, since it only has logograms for phonologically possible words/syllables? Maybe Chinese writing already does part of an AI compressor's work? I can't tell.)

14. ### Phil Hand said, April 30, 2008 @ 7:28 am

I don't have anything valuable to contribute to the discussion, but I reckon I must have been involved in the creation of the parallel corpora you used here (or their successors – I worked on this over the last two years). The agency I did the translation for never actually told me the name of the final client, but the coincidence (corpora of blog posts and news, translation procedures) would be too much for this not to be the project I worked on. I did it for a pittance, but I don't really mind, it's always good to see my work getting some use.

To Anders – Chinese texting seems both faster and more efficient than English texting to me these days, but then, I never did much texting in Britain, whereas I do a lot here. It could just be a practice thing. But I rarely send a "two page" text in Chinese, while my English texts often run to two or three pages.

15. ### john riemann soong said, April 30, 2008 @ 7:20 pm

"myl: Mr Soong, have you considered reading the sequence of posts that you're commenting on? A radical suggestion, I know, but believe it or not, some people do it."

You kept on talking about languages (as opposed to writing systems). I was rather misled. Noting that for example Chinese can be written perfectly well in xiao'erjing (a sort of Arabic script), and that other languages can be written in the Chinese writing system (as kanji), I really thought at first (and hence my comment that was posted before checking out the rest of the links) you were attempting to compare natural languages, not orthographic systems.

16.
### john riemann soong said, April 30, 2008 @ 7:29 pm

Furthermore, how would you define a superficial difference between writing systems, and what is a non-superficial difference? If an ideal test for information entropy was applied on a Mandarin text that is, say, converted to xiao'erjing, shouldn't we expect similar results?

This post and the posts it cites talk about languages, and superficial orthographic differences, which makes me think you're comparing the efficiency of natural languages; then you talk about the efficiency of writing systems, which makes me think the other way round. Do pardon me for my confusion.
http://hal.in2p3.fr/in2p3-00506044
# Study of collisions of the radioactive $^{24}$Ne beam at 7.9 MeV/u on $^{208}$Pb

Abstract: Cross-sections of the main reaction channels for the collision $^{24}$Ne + $^{208}$Pb at 7.9 MeV/u were studied using the radioactive ion beam delivered by the SPIRAL facility and the VAMOS-EXOGAM experimental set-up. Angular distributions for the elastic and inelastic channels were extracted, together with distributions for the +1n and −1p channels. A comprehensive description of the present data is made within the GRAZING model approach.

Document type: Journal article. Contributor: Michel Lion. Submitted on: Tuesday, July 27, 2010.

### Citation

G. Benzoni, F. Azaiez, I. Stefan, S. Franchoo, S. Battacharyya, et al. Study of collisions of the radioactive $^{24}$Ne beam at 7.9 MeV/u on $^{208}$Pb. European Physical Journal A, EDP Sciences, 2010, 45, pp. 287-292. ⟨10.1140/epja/i2010-11011-4⟩. ⟨in2p3-00506044⟩
https://www.studypug.com/math-4/multiplication-strategies
# Multiplication strategies

In this lesson, we will learn:

• Understanding multiplication using arrays
• Representing a product as either: (1) a smaller product and a sum, or (2) a bigger product and a difference
• Tips and tricks for memorizing the 9 × multiplication table facts

Notes:

• Multiplication is just repeated addition
• Multiplication facts can be shown in an array model with circles/dots
• Using the array model, it shows that multiplication facts can be broken into groups of smaller multiplication facts
• Using the same array model, we can find the next multiplication fact by adding another row
• Therefore, a product can be found as a smaller product and a sum
• Or, it could be found as a bigger product and a difference
• The 9 × multiplication tables can be memorized using your fingers!
• Notice that the first ten multiples of 9 are mirrored after the 5-digit in 45
• Ex. 9, 18, 27, 36, 45 $\parallel$ 54, 63, 72, 81, 90

#### Lessons

• Introduction to Multiplication Strategies:
a) Using addition and arrays to understand multiplication
b) Breaking down multiplication facts into smaller groups
c) A product can be found using a smaller product and a sum
d) A product can be found using a bigger product and a difference
e) Patterns to know for memorizing multiples of 9
f) Finger method of 9 times tables

• 1. Understanding products using smaller products
Turn the product into the sum of two smaller group products.
a) 8 × 4 = (5 × 4) + ( __ × 4) = _____ + _____ = _______
b) 12 × 12 = (10 × 12) + ( __ × 12) = _______ + _______ = _______
c) 20 × 35 = (10 × 35) + ( __ × 35) = _______ + _______ = _______

• 2. Describing multiplication array models - 1
Fill in the blanks to describe: 1. the product shown in the array and 2. the sum written with the smaller product
a)
b)

• 3.
Describing multiplication array models - 2
Fill in the blanks to turn the product into a smaller product and sum. Use an array model to help fill in the blanks.
a) 5 × 3 = ( __ × 3) + 3 = _______ + 3 = _______
b) 11 × 12 = ( __ × 12) + __ = _______ + ____ = _______

• 4. Multiplication and array models with subtraction
Fill in the blanks to turn the product into a bigger product and a difference.
a) 9 × 6 = ( __ × 6) - 6 = _______ - 6 = _______
b) 9 × 27 = ( __ × 27) - 27 = _______ - ____ = _______

• 5. Find the answer using the given product.
a) If 6 × 86 = 516, what is 7 × 86 = ?
b) If 10 × 53 = 530, what is 9 × 53 = ?

• 6. Multiplication with 9-times tables strategy: word problems
Use the finger method for 9-times tables to solve.
a) If you put down the fourth finger, what is the 9-times tables multiplication sentence that is represented?
b) If you put down the fourth finger, it represents a 9-times table fact. What finger do you need to put down for the opposite answer (when the answer's digits are flipped/mirrored)? Write the multiplication sentence for the 9-times table fact with the opposite answer
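The strategies above can be checked with a few lines of Python (illustrative only; the finger rule is the one described in the notes):

```python
# A product as a smaller product and a sum: 8 x 4 = (5 x 4) + (3 x 4)
assert 8 * 4 == (5 * 4) + (3 * 4) == 32

# A product as a bigger product and a difference: 9 x 6 = (10 x 6) - 6
assert 9 * 6 == (10 * 6) - 6 == 54

# Finger method for the 9 times table: putting down the nth finger leaves
# (n - 1) fingers on the left (tens digit) and (10 - n) on the right (ones).
def nine_times(n):
    return (n - 1) * 10 + (10 - n)

print([nine_times(n) for n in range(1, 11)])
# [9, 18, 27, 36, 45, 54, 63, 72, 81, 90]
```

Note how the list of results shows the mirrored pattern from the notes: 45 and 54, 36 and 63, and so on.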
https://edx.readthedocs.io/projects/open-edx-building-and-running-a-course/en/open-release-eucalyptus.master/developing_course/controlling_content_visibility.html
# 6.7. Controlling Content Visibility

As a member of the course team, you must carefully control which content is visible to learners and when. You control content visibility through these features in Studio. These features work together to control content visibility for learners.

## 6.7.1. Release Dates

You specify release dates and times for the sections and subsections in an instructor-paced course. By defining release dates, you ensure that content is available to learners on a planned schedule, without requiring manual intervention while the course is running.

Note: Self-paced courses do not have release dates for sections and subsections. For more information about instructor-paced and self-paced courses, see Setting Course Pacing.

By default, a subsection inherits the release date and time of the section it is in. You can change the release date of the subsection to another date. Published units are not visible to learners until the scheduled release date and time. When the section and subsection have different release schedules, published units are not visible until both dates have passed. Prior to release, content is visible to course team members by previewing the course or viewing the live course as staff.

Note: The release times that you set, and the times that learners see, are in Coordinated Universal Time (UTC). You might want to verify that you have specified the times that you intend by using a time zone converter such as Time and Date Time Zone Converter.

## 6.7.2. Unit Publishing Status

You publish units to make them visible to learners. In both instructor-paced and self-paced courses, units must be published to be visible to learners. Learners see the last published version of a unit if the section and subsection it is in are released. Learners do not see units that have never been published, and they do not see unpublished changes to units or components within units.
Therefore, you can make changes to units in released subsections without disrupting the learner experience. You can publish all changes in a section or subsection at once, or publish changes to individual units. For more information about publishing units, see the following topics. ## 6.7.3. Content Hidden from Learners¶ You can hide content from learners in both instructor-paced and self-paced courses. Such content is never visible to learners, regardless of the release and publishing status. You might hide a unit from learners, for example, when that unit contains an answer to a problem in another unit of that subsection. After the problem’s due date, you could make the unit with the answer visible. You could also hide a unit from learners if you wanted to use that unit to provide instructions or guidance meant only for the course team. Only course team members would see that unit in the course. Note As a best practice, do not hide sections, subsections, or units that contain graded content. When the platform performs grading for any learner, the grading process does not include problems that a learner does not have access to, in other words, any content that is hidden from that learner. For more details, see Hiding Graded Content. You can hide content at different levels, as described in the following topics. Note When you make a previously hidden section or subsection visible to learners, some content in the section or subsection might remain hidden. If you have explicitly set a subsection or unit to be hidden from learners, this subsection or unit remains hidden even when you change the visibility of the parent section or subsection. Unpublished units remain unpublished, and changes to published units remain unpublished. Grading is affected if you hide a section, subsection, or unit that contains graded problems. 
When the platform performs grading for any learner, the grading process does not include problems that the learner does not have access to, in other words, any content that is hidden from that learner. Note Grading is not affected for timed exams when you select the setting to keep timed exam content hidden from learners even after the exam due date has passed. For more information, see Timed Exams. ## 6.7.4. Content Groups¶ If you have cohorts enabled in your course, you can use content groups to designate particular components in your course as visible only to specific groups of learners. For details, see Content Groups and Creating Cohort-Specific Course Content. ## 6.7.5. Configuring Prerequisite Course Subsections¶ You can hide subsections of your course until learners complete other, prerequisite subsections. If a subsection has a prerequisite, it is not visible in the course navigation unless a learner has earned a minimum score in the prerequisite subsection. ### 6.7.5.1. Enable Subsection Prerequisites¶ To enable prerequisite subsections in a course, follow these steps. 2. In the Enable Subsection Prerequisites field, enter true. 3. Select Save Changes. ### 6.7.5.2. Create a Prerequisite Subsection¶ To prevent learners from seeing a subsection of your course until they have earned a minimum score in a prerequisite subsection, follow these steps. Note Make sure that you configure subsection prerequisites in the order that you intend for learners to encounter them in the course content. The prerequisite configuration controls do not prevent you from creating a circular chain of prerequisites that will permanently hide them from learners. 2. Select the Configure icon for the subsection that must be completed first. This is the prerequisite subsection. 3. Select the Access tab. 4. Select Use as a Prerequisite > Make this subsection available as a prerequisite to other content. 5. Select Save. 6. 
Select the Configure icon for the subsection that will be hidden until the prerequisite is met. 7. Select the Access tab. 8. In the Limit Access > Prerequisite menu, select the name of the subsection you want to specify as the prerequisite. 9. Enter the percent of the total score that learners must earn in the Minimum Score field. A learner’s score for all problems in the prerequisite subsection must be equal to or greater than this percentage in order to satisfy the prerequisite and display the current subsection. For example, if the prerequisite subsection includes four problems and each problem is worth the same number of points, set the Minimum Score to 75 to require at least three correct answers. 10. Select Save. 11. In the course outline, if a subsection has a prerequisite, the prerequisite name appears under the subsection name.
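The Minimum Score arithmetic above (four equally weighted problems, Minimum Score of 75, at least three correct answers) can be sketched as a quick check. `required_correct` is a hypothetical helper written for illustration, not part of Studio:

```python
from math import ceil

def required_correct(num_problems, min_score_percent):
    """Smallest number of equally weighted problems a learner must answer
    correctly so that their score is >= the minimum score percentage."""
    return ceil(num_problems * min_score_percent / 100)

# Four equally weighted problems, Minimum Score set to 75:
# 3/4 = 75%, which meets the threshold exactly.
print(required_correct(4, 75))  # 3
```

With five problems and the same 75% threshold, the helper returns 4, since 3/5 = 60% would fall short.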
https://plainmath.net/29650/suppose-that-you-are-taking-multiple-choice-exam-with-five-questions-each
# Suppose that you are taking a multiple-choice exam with five questions

Suppose that you are taking a multiple-choice exam with five questions, each of which has five choices, one of them correct. Because you have no time left, you cannot read the questions, and you decide to select your choices at random for each question. Assuming this is a binomial experiment, calculate the binomial probability of obtaining exactly one correct answer.

Answer (Caren):

Step 1: Obtain the binomial probability of exactly one correct answer as follows. Let X denote the number of correct answers. X follows a binomial distribution with success probability 1/5 on each of the 5 randomly answered questions. That is,

$$n=5,\quad p=\frac{1}{5},\quad q=\frac{4}{5}$$

The probability distribution is given by

$$P(X=x)=\binom{n}{x} p^{x}(1-p)^{n-x},\quad x=0,1,2,\ldots,n,\ 0 \leq p \leq 1,$$

where n is the number of trials and p is the probability of success for each trial.

Step 2: Use Excel to obtain the probability for x equal to 1. Follow these instructions:

1. Open Excel.
2. Go to the formula bar.
3. In the formula bar, enter the function "=BINOMDIST".
4. Enter the number of successes as 1.
5. Enter the trials as 5.
6. Enter the probability as 0.20.
7. Enter the cumulative argument as FALSE.
8. Press Enter.

From the Excel output, the probability is 0.4096. The binomial probability of obtaining exactly one correct answer is 0.4096.
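The same value can be checked without Excel by evaluating the binomial formula directly; this is a minimal sketch using only the standard library (`binom_pmf` is a name chosen here for illustration):

```python
from math import comb

def binom_pmf(n, k, p):
    """P(X = k) for X ~ Binomial(n, p), via the formula C(n, k) p^k (1-p)^(n-k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Five questions, probability 1/5 of guessing each one correctly
prob = binom_pmf(n=5, k=1, p=0.2)
print(prob)  # ≈ 0.4096
```

This matches the BINOMDIST result above: C(5,1) · 0.2 · 0.8⁴ = 5 · 0.2 · 0.4096 = 0.4096.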
https://brilliant.org/problems/mysterious-sum/
# Mysterious Sum

Consider the expression $1+ 2x + 3x^2 + 4x^3 + 5x^4 + \cdots$ The value of $1+0.2+0.03+0.004+\cdots$ can be represented as $\dfrac ab$, where $a$ and $b$ are coprime positive integers. Find $a+b$.
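The problem page itself gives no solution, but the link between the two sums is that the second series is the first evaluated at $x = 0.1$. A numerical sanity check of the standard closed form $\sum_{n\ge 1} n x^{n-1} = 1/(1-x)^2$ for $|x|<1$ (offered here as a hint, not as part of the original page):

```python
from fractions import Fraction

# Closed form: sum_{n>=1} n * x^(n-1) = 1/(1-x)^2 for |x| < 1.
# At x = 1/10 the series becomes 1 + 0.2 + 0.03 + 0.004 + ...
x = Fraction(1, 10)
partial = sum(n * x**(n - 1) for n in range(1, 60))  # exact partial sum
closed = 1 / (1 - x)**2

# The partial sum agrees with the closed form to enormous precision
assert abs(partial - closed) < Fraction(1, 10**40)
print(closed)  # 100/81
```

Using `Fraction` keeps every step exact, so the comparison is not clouded by floating-point rounding.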