https://walkccc.me/LeetCode/problems/2291/
# 2291. Maximum Profit From Trading Stocks

## Approach 1: 2D DP

• Time: $O(n\cdot\texttt{budget})$
• Space: $O(n\cdot\texttt{budget})$

```cpp
class Solution {
 public:
  int maximumProfit(vector<int>& present, vector<int>& future, int budget) {
    const int n = present.size();
    // dp[i][j] := max profit of buying present[0..i) with j budget
    vector<vector<int>> dp(n + 1, vector<int>(budget + 1));

    for (int i = 1; i <= n; ++i) {
      const int profit = future[i - 1] - present[i - 1];
      for (int j = 0; j <= budget; ++j)
        if (j < present[i - 1])
          dp[i][j] = dp[i - 1][j];
        else
          dp[i][j] = max(dp[i - 1][j],
                         profit + dp[i - 1][j - present[i - 1]]);
    }

    return dp[n][budget];
  }
};
```

```java
class Solution {
  public int maximumProfit(int[] present, int[] future, int budget) {
    final int n = present.length;
    // dp[i][j] := max profit of buying present[0..i) with j budget
    int[][] dp = new int[n + 1][budget + 1];

    for (int i = 1; i <= n; ++i) {
      final int profit = future[i - 1] - present[i - 1];
      for (int j = 0; j <= budget; ++j)
        if (j < present[i - 1])
          dp[i][j] = dp[i - 1][j];
        else
          dp[i][j] = Math.max(dp[i - 1][j],
                              profit + dp[i - 1][j - present[i - 1]]);
    }

    return dp[n][budget];
  }
}
```

```python
class Solution:
  def maximumProfit(self, present: List[int], future: List[int], budget: int) -> int:
    n = len(present)
    # dp[i][j] := max profit of buying present[0..i) with j budget
    dp = [[0] * (budget + 1) for _ in range(n + 1)]

    for i in range(1, n + 1):
      profit = future[i - 1] - present[i - 1]
      for j in range(budget + 1):
        if j < present[i - 1]:
          dp[i][j] = dp[i - 1][j]
        else:
          dp[i][j] = max(dp[i - 1][j],
                         profit + dp[i - 1][j - present[i - 1]])

    return dp[n][budget]
```

## Approach 2: 1D DP

• Time: $O(n\cdot\texttt{budget})$
• Space: $O(\texttt{budget})$

```cpp
class Solution {
 public:
  int maximumProfit(vector<int>& present, vector<int>& future, int budget) {
    const int n = present.size();
    // dp[i] := max profit of buying present so far with i budget
    vector<int> dp(budget + 1);

    for (int i = 0; i < n; ++i)
      for (int j = budget; j >= present[i]; --j)
        dp[j] = max(dp[j], future[i] - present[i] + dp[j - present[i]]);

    return dp[budget];
  }
};
```

```java
class Solution {
  public int maximumProfit(int[] present, int[] future, int budget) {
    final int n = present.length;
    // dp[i] := max profit of buying present so far with i budget
    int[] dp = new int[budget + 1];

    for (int i = 0; i < n; ++i)
      for (int j = budget; j >= present[i]; --j)
        dp[j] = Math.max(dp[j], future[i] - present[i] + dp[j - present[i]]);

    return dp[budget];
  }
}
```

```python
class Solution:
  def maximumProfit(self, present: List[int], future: List[int], budget: int) -> int:
    # dp[i] := max profit of buying present so far with i budget
    dp = [0] * (budget + 1)

    for p, f in zip(present, future):
      for j in range(budget, p - 1, -1):
        dp[j] = max(dp[j], f - p + dp[j - p])

    return dp[budget]
```
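Both approaches are 0/1 knapsack over the purchase costs: item $i$ weighs `present[i]` and is worth `future[i] - present[i]`. A standalone sanity check of the 1D recurrence on a small hand-checked input (the function name and the skip-at-a-loss shortcut are my additions, not part of the solutions above):

```python
from typing import List

def maximum_profit(present: List[int], future: List[int], budget: int) -> int:
    """0/1 knapsack: item i costs present[i] and pays future[i] - present[i]."""
    dp = [0] * (budget + 1)
    for p, f in zip(present, future):
        if f <= p:
            continue  # a stock sold at a loss (or break-even) never helps
        for j in range(budget, p - 1, -1):
            dp[j] = max(dp[j], f - p + dp[j - p])
    return dp[budget]

# Buying stocks 0, 3 and 4 costs 5 + 2 + 3 = 10 and earns (8-5) + (3-2) + (5-3) = 6.
print(maximum_profit([5, 4, 6, 2, 3], [8, 5, 4, 3, 5], 10))  # → 6
```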
https://chemistry.stackexchange.com/questions/107580/reaction-of-urea-and-thiourea-with-nitrous-acid
# Reaction of urea and thiourea with nitrous acid

I was told that urea, on reaction with nitrous acid, gives nitrogen gas, carbon dioxide and water, while thiourea, on reaction with nitrous acid, gives $$\ce{H+}$$, thiocyanate ion and water. I tried looking for the mechanism and thought that this would be related: I carried out the above procedure on both of urea's nitrogens and was able to obtain the products of the first reaction listed above. Now, my questions are:

• Isn't the nucleophilicity of the lone pairs of urea's nitrogens negligible, so that any reaction should proceed via attack by oxygen's lone pairs? (I thought this was similar to amidines and amides.)
• Why are the products of the reactions with urea and thiourea different, given that the carbonyl oxygen (or sulphur) doesn't seem to be involved in the reaction, as far as I can tell?
• More importantly, is this even the correct mechanism?

• It is similar to amidines and amides, and it doesn't matter: any attacks on O are unproductive, so the effect of conjugation is rather kinetic. – Mithoron Jan 12 at 19:17
https://www.eduzip.com/ask/question/in-calculating-a-number-ofnbspintegrals-we-had-to-use-the-method-578567
#### Passage

In calculating a number of integrals we had to use the method of integration by parts several times in succession. The result can be obtained more rapidly and in a more concise form by using the so-called generalized formula for integration by parts:

$\int u(x)\,v(x)\,dx = u(x)\,v_{1}(x) - u'(x)\,v_{2}(x) + u''(x)\,v_{3}(x) - \cdots + (-1)^{n-1}u^{(n-1)}(x)\,v_{n}(x) - (-1)^{n-1}\int u^{(n)}(x)\,v_{n}(x)\,dx,$

where $v_{1}(x) = \int v(x)\,dx,\; v_{2}(x) = \int v_{1}(x)\,dx,\;\ldots,\; v_{n}(x) = \int v_{n-1}(x)\,dx.$

Of course, we assume that all derivatives and integrals appearing in this formula exist. The generalized formula for integration by parts is especially useful when calculating $\int P_{n}(x)\,Q(x)\,dx$, where $P_{n}(x)$ is a polynomial of degree $n$ and the factor $Q(x)$ is such that it can be integrated successively $n + 1$ times.

Mathematics

# If $\int e^{2x}\,x^{4}\,dx = \dfrac{e^{2x}}{2}f(x) + C$, then $f(x)$ is equal to

$x^{4} - 2x^{3} + 3x^{2} - 3x + \dfrac{3}{2}$

##### SOLUTION

$I = \int e^{2x}\,x^{4}\,dx$

Applying integration by parts,

$I = x^{4}\,\dfrac{e^{2x}}{2} - \int 4x^{3}\,\dfrac{e^{2x}}{2}\,dx = x^{4}\,\dfrac{e^{2x}}{2} - 2\int x^{3}e^{2x}\,dx$

Again applying integration by parts,

$I = x^{4}\,\dfrac{e^{2x}}{2} - 2x^{3}\,\dfrac{e^{2x}}{2} + 2\int 3x^{2}\,\dfrac{e^{2x}}{2}\,dx = x^{4}\,\dfrac{e^{2x}}{2} - x^{3}e^{2x} + 3\int x^{2}e^{2x}\,dx$

Again applying integration by parts, we get

$I = x^{4}\,\dfrac{e^{2x}}{2} - x^{3}e^{2x} + 3x^{2}\,\dfrac{e^{2x}}{2} - 3\int 2x\,\dfrac{e^{2x}}{2}\,dx = x^{4}\,\dfrac{e^{2x}}{2} - x^{3}e^{2x} + \dfrac{3}{2}x^{2}e^{2x} - 3\int x\,e^{2x}\,dx$

Applying integration by parts once more, we get

$I = x^{4}\,\dfrac{e^{2x}}{2} - x^{3}e^{2x} + \dfrac{3}{2}x^{2}e^{2x} - \dfrac{3}{2}xe^{2x} + 3\int \dfrac{e^{2x}}{2}\,dx = x^{4}\,\dfrac{e^{2x}}{2} - x^{3}e^{2x} + \dfrac{3}{2}x^{2}e^{2x} - \dfrac{3}{2}xe^{2x} + \dfrac{3}{4}e^{2x} + C$

$\Rightarrow I = \dfrac{e^{2x}}{2}\left(x^{4} - 2x^{3} + 3x^{2} - 3x + \dfrac{3}{2}\right) + C$

On comparing with the given form, we get $f(x) = x^{4} - 2x^{3} + 3x^{2} - 3x + \dfrac{3}{2}$.

Single Correct Hard Published on 17th 09, 2020

Mathematics

# If $\int (x^{3} - 2x^{2} + 3x - 1)\cos 2x\,dx = \dfrac{\sin 2x}{4}u(x) + \dfrac{\cos 2x}{8}v(x) + c$, then

$u(x) = 2x^{3} - 4x^{2} + 3x$

##### SOLUTION

This is a direct application of the generalized formula with $u(x) = x^{3} - 2x^{2} + 3x - 1$ and $Q(x) = \cos 2x$. The successive integrals of $\cos 2x$ are $v_{1} = \dfrac{\sin 2x}{2},\; v_{2} = -\dfrac{\cos 2x}{4},\; v_{3} = -\dfrac{\sin 2x}{8},\; v_{4} = \dfrac{\cos 2x}{16}$, and the derivatives of $u$ are $u' = 3x^{2} - 4x + 3,\; u'' = 6x - 4,\; u''' = 6,\; u^{(4)} = 0$. Hence

$\int (x^{3} - 2x^{2} + 3x - 1)\cos 2x\,dx = u\,\dfrac{\sin 2x}{2} + u'\,\dfrac{\cos 2x}{4} - u''\,\dfrac{\sin 2x}{8} - 6\cdot\dfrac{\cos 2x}{16} + c$

Collecting the $\sin 2x$ and $\cos 2x$ terms,

$= \dfrac{\sin 2x}{4}\left(2x^{3} - 4x^{2} + 3x\right) + \dfrac{\cos 2x}{8}\left(6x^{2} - 8x + 3\right) + c,$

so $u(x) = 2x^{3} - 4x^{2} + 3x$ (and $v(x) = 6x^{2} - 8x + 3$).
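Both antiderivatives above can be checked by differentiating them and comparing with the integrand. A quick numerical check (my own addition, not part of the original solutions; it uses a central finite difference at a few sample points):

```python
import math

def F1(t):
    # claimed antiderivative of e^(2x) * x^4
    return math.exp(2*t)/2 * (t**4 - 2*t**3 + 3*t**2 - 3*t + 1.5)

def F2(t):
    # claimed antiderivative of (x^3 - 2x^2 + 3x - 1) * cos(2x)
    return (math.sin(2*t)/4 * (2*t**3 - 4*t**2 + 3*t)
            + math.cos(2*t)/8 * (6*t**2 - 8*t + 3))

def deriv(F, t, h=1e-6):
    # central difference approximation of F'(t)
    return (F(t + h) - F(t - h)) / (2*h)

for t in [-1.3, 0.2, 0.7, 2.1]:
    assert abs(deriv(F1, t) - math.exp(2*t) * t**4) < 1e-5
    assert abs(deriv(F2, t) - (t**3 - 2*t**2 + 3*t - 1) * math.cos(2*t)) < 1e-5
print("both antiderivatives check out")
```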
https://www.pims.math.ca/scientific-event/180918-scaimsre
## Scientific Computing, Applied and Industrial Mathematics (SCAIM) Seminar : Ron Estrin

• Date: 09/18/2018
• Time: 12:30

Lecturer(s): Ron Estrin, Stanford University

Location: University of British Columbia

Topic: Implementing a Smooth Exact Penalty Function for Nonlinear Optimization

Description: We describe a penalty function for constrained nonlinear programs, originally proposed by Fletcher (1970). This penalty function is smooth and exact, so that minimizers of the original problem are minimizers of the penalty function for a sufficiently large (but finite) penalty parameter. The main computational kernel required to evaluate this penalty function and its derivatives is solving augmented least-squares-like systems. The penalty function can then be efficiently evaluated for problems where good preconditioners exist, such as PDE-constrained optimization problems. We discuss extensions to regularized problems, problems with inequality constraints, and the use of inexact evaluations. We provide some preliminary numerical results on some standard optimization test problems and PDE-constrained problems. This is joint work with Michael Friedlander, Dominique Orban and Michael Saunders.

Other Information: Location: ESB 4133 (PIMS lounge)
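To give a rough feel for the least-squares kernel the abstract mentions: in a simplified variant of Fletcher's penalty, $\min f(x)$ s.t. $c(x)=0$ is replaced by minimizing $\phi(x) = f(x) - c(x)^\top y(x)$, where $y(x)$ is a least-squares multiplier estimate. The toy problem and all names below are my own illustration, not from the talk:

```python
# Toy equality-constrained problem: minimize f(x) = x1^2 + x2^2
# subject to c(x) = x1 + x2 - 1 = 0 (solution: x = (0.5, 0.5)).

def f(x):    return x[0]**2 + x[1]**2
def grad(x): return [2*x[0], 2*x[1]]       # gradient of f
def c(x):    return x[0] + x[1] - 1.0      # single equality constraint
A = [1.0, 1.0]                             # gradient of the constraint

def phi(x):
    # Least-squares multiplier estimate y(x) = argmin_y ||grad f(x) - A*y||_2.
    # For one constraint the normal equations reduce to a scalar solve;
    # this is (a baby version of) the "least-squares like system" that the
    # abstract identifies as the main computational kernel.
    g = grad(x)
    y = sum(a * gi for a, gi in zip(A, g)) / sum(a * a for a in A)
    return f(x) - c(x) * y

# On the feasible set c(x) = 0, the penalty agrees with the objective:
print(phi([0.5, 0.5]), f([0.5, 0.5]))  # both equal 0.5
```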
https://cstheory.stackexchange.com/questions/32560/algorithm-for-finding-heavy-hitters-in-a-weighted-stream/32567
# Algorithm for finding heavy hitters in a weighted stream

The problem of finding heavy hitters in a stream is defined as follows: given an $N$-sized stream of elements, return a set $\mathcal D$ such that every item which arrived at least $N\theta$ times appears in $\mathcal D$, and no element with frequency lower than $N(\theta-\epsilon)$ belongs to $\mathcal D$. $\epsilon$ and $\theta$ are constant thresholds given as input.

The problem is well studied, with many algorithms developed for it, such as Sticky Sampling, Lossy Counting, Batch Decrement, and Space Saving. The last two are optimal, in the sense that they require $O(\frac{1}{\epsilon})$ counters and have constant runtime.

I'm looking for an algorithm for a weighted variant of the problem: every item in the stream is a tuple $(id, weight)$, and the goal is to return the elements with the highest weight. All weights are in $(0,1]$. Formally, a weighted heavy hitters algorithm is required to return all elements whose sum of weights is at least $W\theta$, and no element with weight lower than $W(\theta-\epsilon)$, where $W$ is the sum of weights of the stream elements.

Are there known (preferably deterministic) algorithms for this problem that use $O(\frac{1}{\epsilon})$ counters and have $O(1)$ runtime? Batch Decrement and Space Saving do not seem to have a simple generalization to the weighted case, as both maintain a data structure that allows finding the minimum counter in constant time, which might not be doable in the weighted setting.

• Is the only problem with batch decrement and space saving the lack of constant update time? And is $\log(1/\varepsilon)$ update time too slow? Sep 17, 2015 at 13:05
• @Thomas - correct. Converting SS into $\log(1 / \epsilon)$ time for the weighted case is relatively simple using a skip list of values rather than a simple list. – R B Sep 17, 2015 at 13:33

Here's a generic randomized solution. (Do we even have deterministic solutions in the unweighted case?
Don't Space Saving and Batch Decrement both need hash maps?) This is probably not the ideal solution, but it's a start. Weighted Heavy Hitters Algorithm. Input: $S=\{(\text{id}_i,\text{weight}_i)\}_{i=1}^N$ a weighted stream. 1. Create an unweighted stream $S'=\{\text{id}_j\}_{j=1}^{N'}$ as follows. For every weighted update $(\text{id}_i,\text{weight}_i)$ in $S$, include the unweighted update $\text{id}_i$ in $S'$ independently with probability $\text{weight}_i$. 2. Apply an unweighted heavy hitters algorithm (i.e. Space Saving or Batch Decrement) to $S'$ and output the heavy hitters for $S$. Clearly this algorithm has $O(1)$ update time. To verify that this algorithm is correct we must prove the following claim. Claim. With high probability, for every $\text{id}$, the count of $\text{id}$ in $S'$ is close to the sum of the weights of $\text{id}$ in $S$. Let $w_\text{id}$ be the "true" weight of $\text{id}$ in $S$ and $W_\text{id}$ the weight of $\text{id}$ in $S'$. Let $w=\sum_\text{id} w_\text{id}$ be the total weight of $S$ and $W=\sum_\text{id} W_\text{id}$ be the total weight of $S'$. Our claim is that $|W_\text{id}-w_\text{id}| \leq \varepsilon w$ for all $\text{id}$ with high probability. Clearly $\mathbb{E}\left[W_\text{id}\right]=w_\text{id}$. It remains to show concentration bounds. To this end, we use the following result. Bernstein's Inequality. Let $X_1, \cdots, X_n \in \{0,1\}$ be independent random variables. Then $$\mathbb{P}\left[ \left| \sum_{i=1}^n X_i - \mathbb{E}\left[X_i\right] \right| > t \right] \leq 2 \cdot \exp\left(-\Omega\left(\frac{t^2}{t+\sum_{i=1}^n \mathsf{Var}\left[X_i\right]}\right)\right)$$ for all $t > 0$. Thus $$\mathbb{P}\left[ \left| W_\text{id}-w_\text{id} \right| > \varepsilon w \right] \leq 2 \cdot \exp\left(-\Omega\left(\frac{\varepsilon^2 w^2}{\varepsilon w+w_\text{id}}\right)\right) \leq 2 \cdot \exp\left(-\Omega\left(\varepsilon^2 w\right)\right).$$ Note that if $w_\text{id}=0$, then $W_\text{id}=0$. 
So we need only consider the $\text{id}$s that appear in the stream. In particular, we can take a union bound over at most $N$ $\text{id}$s: If $w \geq O\left(\log(N)/\varepsilon^2\right)$, then the weights in $S'$ are close to the weights in $S$ with high probability and the claim is verified. What about when $w \leq O\left(\log(N)/\varepsilon^2\right)$? Then we can first repeat each weighted update $T$ times. This increases the weight to $Tw$. The good news is that $S'$ is only length $O(Tw)$ and transforming $S$ into $S'$ takes $O(1 + Tw/N)$ (amortized) time per update with high probability. So we just need to find a $T$ with $O\left(\log(N)/\varepsilon^2\right) \leq Tw \leq O(N)$. • I think Misra-Gries, Lossy Counting, and Space Saving are essentially deterministic. A hash function can be used to speed up the update time, but if you are fine with $1/\theta$ update time, then you don't need it. I might be missing something. Sep 17, 2015 at 22:53 • I guess this is the Las Vegas/Monte Carlo distinction. Randomness is used for speed, but not correctness. Sep 17, 2015 at 22:59 • Sasho: I think you really mean $1/\varepsilon$ and not $1/\theta$. In any case, even without randomization (i.e. no hashing), you can get $\log(1/\varepsilon)$ update time using an augmented balanced BST. Also, Thomas: how do you know whether to repeat $T$ times? You don't know all weights a priori, right? My impression from "algorithm is required to return all elements whose sum of weights is at least ..." of OP is that the same item can be updated multiple times. Sep 18, 2015 at 3:34 • @JelaniNelson Right. How to pick the right $T$ is unclear, as it must be done at the start and requires a rough estimate of the total weight $w$. I'm not sure how to solve this problem. Sep 18, 2015 at 4:11 If you allow randomization, the CountMin (CM) sketch can be used with weights without modification, and can also handle negative weights. 
When all weights are positive, the standard analysis of CM shows that with a sketch of size $O(\varepsilon^{-1}\log 1/\delta)$ you can compute a $\tilde{w}_i$ so that $\tilde{w_i} \geq w_i$ always, and $\tilde{w}_i \leq w_i + \varepsilon W$ with probability at least $1-\delta$. Now you can set $\delta < 1/3m$, where $m$ is the length of the stream, so that $\tilde{w}_i$ are accurate for all $i$ you encounter in the stream. As you process the stream, in addition to the sketch at any point you maintain the set $S$ of those $i$ with the $1/\theta$ largest $\tilde{w}_i$. At the end you output the $i$ which have $\tilde{w}_i$ at least $\theta W$ (notice all of them have to be in $S$). The details are a bit more complicated if the weights can be negative, check the paper. This algorithm can be derandomized using CR-precis in place of the CM sketch, but the dependence on $1/\varepsilon$ becomes quadratic, and additional log factors are lost. For a short analysis, you can also check Andrew McGregor's blog post. Once again, with additional work, this can be made to work with negative weights too. • CM sketch will have worse space than OP desired. Also, for CR Precis, you can do better and it's not any harder. The following is in my RANDOM'12 paper with Huy and David (or just see Section 3 of these lecture notes: people.seas.harvard.edu/~minilek/cs229r/fall13/lec/lec4.pdf). The idea is simple. A matrix $\Pi\in\mathbb{R}^{m\times n}$ is said to be $\varepsilon$-incoherent if its columns $\Pi_i$ have unit Euclidean norm, and for $i\neq j$ we have $|\langle \Pi_i, \Pi_j \rangle| < \varepsilon$. The sketch of $x$ will be $\Pi x$. We estimate $x_i$ as $\langle \Pi_i, \Pi x\rangle$. Sep 18, 2015 at 3:25 • Now note the estimate is $\sum_j x_j \langle \Pi_i, \Pi_j\rangle = x_i + \sum_{i\neq j}x_j \langle \Pi_i, \Pi_j\rangle = x_i \pm \varepsilon \|x\|_1$. Now the question is just how small can you make $m$, the sketch size. 
A code with distance $1-\varepsilon$, block length $t$, and alphabet size $q$ implies an incoherent matrix with $m = qt$ (see lecture notes above). A random code can thus be taken with $q = O(1/\varepsilon)$ and $t = O(q\log n)$, giving $m = O(\varepsilon^{-2}\log n)$. Or you can use Reed-Solomon codes, giving $q=t= O(\varepsilon^{-1}\log n/(\log\log n+\log(1/\varepsilon)))$. Sep 18, 2015 at 3:28 • Using either code is better than the bound in CR-Precis (though they morally did what I just said above but used the wrong code). Sep 18, 2015 at 3:30 • @JelaniNelson Thanks Jelani, this makes sense, and gives a much better idea about what's happening. I assume this is the best deterministic algorithm you know for heavy hitters with weights (but still increments only)? I agree none of this is as good as what OP asked for. Sep 18, 2015 at 4:27 • Thanks Sasho ! I've also thought of CMS, but I don't like the memory overhead. It works better to run SS using a skip list of values in the Frequent data structure. This uses $\epsilon^{-1}$ counters with insertions taking $O(\log\epsilon^{-1})$. I'm trying to get to constant run time, while still having $O(\epsilon^{-1})$ counters. I seem to have a method that uses $2\epsilon^{-1}$ counters with $O(1)$ amortized run time, but with $O(\epsilon^{-1})$ time at worst case, which is not something I can afford. – R B Sep 18, 2015 at 8:46 I think "A High-Performance Algorithm for Identifying Frequent Items in Data Streams" by Anderson, et al. shows an answer, though the weights are integral, not real.
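The randomized reduction from the first answer is easy to prototype. Below is a sketch (my own, with Misra–Gries standing in for the unweighted subroutine; it is deterministic and uses $O(1/\epsilon)$ counters, though not the $O(1)$-update variant the question asks about):

```python
import random
from math import ceil

def misra_gries(stream, k):
    """Unweighted heavy hitters with k counters: any id whose count
    exceeds N/(k+1) is guaranteed to survive in the summary."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k:
            counters[item] = 1
        else:
            # decrement every counter; drop the ones that hit zero
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

def weighted_heavy_hitters(weighted_stream, epsilon, rng=random.random):
    """Step 1 of the answer: include each (id, weight) update, with
    weight in (0, 1], in an unweighted stream independently with
    probability weight; then run the unweighted algorithm on it."""
    sampled = [ident for ident, w in weighted_stream if rng() < w]
    return misra_gries(sampled, ceil(1 / epsilon))

# Deterministic check of the subroutine: 'a' has frequency 0.6 > N/(k+1).
stream = ['a'] * 60 + ['b', 'c', 'd', 'e'] * 10
assert 'a' in misra_gries(stream, k=2)
```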
http://mathematica.stackexchange.com/questions/22193/can-anyone-identify-these-plots
# Can anyone identify these plots? [duplicate]

I need to produce some plots that look like these, but I'm not sure what they are called in Mathematica; can anyone identify them?

Update: I want to plot triples of the form (xvalue, yvalue, intensity) in a graph like that.

– marked as duplicate by Jens, Simon Woods, whuber, Oleksandr R., Yves Klett Mar 27 '13 at 19:35

• Try DensityPlot! – PlatoManiac Mar 27 '13 at 15:17
• It's probably ArrayPlot. There's also MatrixPlot, Image, Graphics[Raster[...]], and ListDensityPlot (which can interpolate). For triples you need ListDensityPlot or see this. – Szabolcs Mar 27 '13 at 15:26
https://math.stackexchange.com/questions/3905478/interchange-the-order-of-integration-and-summation?noredirect=1
# Interchange the order of integration and summation

I have read some questions where people discuss interchanging a definite integral and a summation; for example, here, here and here. But I would like to ask about interchanging an indefinite integral with a summation, for a very practical reason. I am not sure that I have read all the similar questions on SE, so if someone finds the answer somewhere else, please show me the link, and we can close this question.

The statement of the question: I have a function $$F(z)$$ which is represented by an indefinite integral $$F(z)=\int d z\; f(z),$$ and the integration cannot be carried out analytically. I would like to study the zeros and singularities of $$F(z)$$ based on knowing the zeros and singularities of $$f(z)$$. For example, suppose I know that $$z=0$$ is a singular point of $$f(z)$$, and $$f(z)$$ can be expanded in a Laurent series, say $$f(z)=\sum_{n=-\infty}^{\infty} a_n z^n.$$ My question is: under what conditions does the following equality hold, $$\int d z\;\sum_{n=-\infty}^{\infty} a_n z^n = \sum_{n=-\infty}^{\infty} \int d z\; a_n z^n+c,$$ where $$c$$ is an arbitrary constant? In general the Laurent series is not essential; the expansion could take other forms, say an asymptotic expansion or a fractional expansion.

• I am afraid that the textbook you might be using does the change of order in the indefinite case. – Kumar Nov 13 '20 at 9:27
• The problem is local, so generally you need local uniform convergence; however, in general, the relation between the zeros of $F$ and $F'$ is not easy to discern (e.g. $F'=e^{z^2}$ has no zeroes, while $F$ has quite a lot of zeroes). In the analytic case it is true that if $F'$ has a singularity, $F$ must have a singularity too, but again the situation is tricky, as isolated singularities may not remain isolated; $1/z, \log z$ are typical examples. – Conrad Nov 13 '20 at 13:20
• "the expansion could be in any other forms, say asymptotic expansion or fractional expansion" So this is not just an indefinite integral, it's an indefinite question. – user436658 Nov 13 '20 at 21:02
• @Conrad, intuitively $F$ and $F'$ may have singularities simultaneously, though both the type and the location may change. But are there any theorems supporting this? – user142288 Nov 14 '20 at 0:05
• Well, if $F$ is non-singular near a point, $F'$ surely is too, so $F$ definitely has singularities where $F'$ does; the other direction is trickier, in the sense that if $F$ has a (non-removable) isolated singularity, $F'$ does too by inspection, but if $F$ has a branch point, $F'$ may have just an isolated singularity. – Conrad Nov 14 '20 at 2:07

You need compact convergence of the series, that is, uniform convergence on compact sets. But we also need some more preliminaries:

First, I'm not going to talk about the indefinite integral, but just about antiderivatives, simply because we will need definite integrals down the line, and I think it will be less confusing if we reserve the integral sign for the definite ones.

Second, the expression $$\sum\int a_nz^n\mathrm dz$$ is a bit icky, since the integrals all come with an arbitrary additive constant attached, which can make the sum diverge. We will have to specify which antiderivative we are choosing specifically.

Third, we will only be considering connected domains, for simplicity's sake. You will soon see why.
Fourth, interchanging sums and integrals or sums and antiderivatives is really about interchanging limits and integrals/antiderivatives, since infinite sums are just sequences/their limits written down in a particular way. So the real question is: If $$f_n\to f$$, does $$F_n\to F$$, where $$F_n,F$$ are suitably chosen antiderivatives of $$f_n$$ and $$f$$? And in what manner do they converge? Now with these remarks out of the way, we can say the following (which is about sequences of functions, but series are sequences, so you can apply it to series exactly the same way): Let $$D\subseteq\mathbb C$$ be a connected domain. Let $$f_n:D\to\mathbb C$$ be holomorphic for all $$n\in\mathbb N$$, and let $$f_n$$ converge to $$f:D\to\mathbb C$$ uniformly on compact subsets of $$D$$. Also, let all $$f_n$$ have an antiderivative on $$D$$. Then the function $$F:D\to \mathbb C,~z\mapsto\int_{z_0}^z f(w)\mathrm dw,$$ where the integral goes along any arbitrary path from a fixed $$z_0\in D$$ to $$z$$, is well defined and an antiderivative of $$f$$. We also have $$F_n\to F$$ uniformly on compact sets, where $$F_n$$ is an analogously defined antiderivative of $$f_n$$. Proof: First note that because $$f_n\to f$$ uniformly on compact subsets, $$f$$ is holomorphic. Also, since $$D$$ is connected and open, it is also path connected, so a path from $$z_0$$ to $$z$$ is guaranteed to exist. And since the functions $$f_n$$ admit an antiderivative, the integral $$F_n(z)=\int_{z_0}^z f_n(w)\mathrm dw$$ does not depend on the path and is thus well-defined. Due to uniform convergence of the integrand to $$f$$ on the arbitrarily chosen path (it's compact), the integral converges to $$F(z)=\int_{z_0}^z f(w)\mathrm dw,$$ which thus also doesn't depend on the path and is then well-defined. If we can show that the convergence is uniform on compact subsets, then $$F$$ is holomorphic and $$F_n'\to F'$$ uniformly on compact subsets, too. 
But since $$F_n'=f_n$$ and $$f_n\to f$$, we will then have $$F'=f$$, so $$F$$ is an antiderivative of $$f$$. So we show this uniform convergence on compact subsets: Note that any compact subset of $$D$$ can be covered by a finite number of compact discs. And if a function converges uniformly on a finite number of sets, then it also converges uniformly on their union. So it is sufficient to show uniform convergence on compact discs. Let $$\overline{U_r}(z_\ast)\subset D$$ be such a disc with radius $$r$$ centered at $$z_\ast$$. On this disc we have \begin{align} \vert F(z)-F_n(z)\vert&=\left\vert\int_{z_0}^z f(w)-f_n(w)\mathrm dw\right\vert\\ &=\left\vert\int_{z_0}^{z_\ast} f(w)-f_n(w)\mathrm dw+\int_{z_\ast}^z f(w)-f_n(w)\mathrm dw\right\vert\\ &=\left\vert F(z_\ast)-F_n(z_\ast)+\int_{z_\ast}^z f(w)-f_n(w)\mathrm dw\right\vert\\ &\leq\vert F(z_\ast)-F_n(z_\ast)\vert + \left\vert\int_{z_\ast}^z f(w)-f_n(w)\mathrm dw\right\vert\\ &\leq \vert F(z_\ast)-F_n(z_\ast)\vert + r\sup_{w\in\overline{U_r(z_\ast)}}\vert f(w)-f_n(w)\vert. \end{align} The last estimation gives a bound that doesn't depend on $$z$$ and goes to $$0$$ (the first term because $$F_n\to F$$ pointwise, the second because $$f_n\to f$$ uniformly on the compact disc). So $$F_n\to F$$ uniformly on the disc, and by the argument above, on any other compact set as well. • First, thanks; second I really read your reply several time to find the answer of my question; third, let me make sure that we understand each other right by giving a simple example, say I have a sequence $\ln^n(x)$,I would like to know the singularities of the function $F(x)=\int \sum \ln^n(x)$. So based on your statement I can not do anything, because $\ln^n(x)$ is not holomorphic at $x=0$, am I right? – user142288 Nov 13 '20 at 10:32 • $0$ is not important for this, since $0$ is not in the domain. You're only concerned with the domain of the functions, which is $D=\mathbb C\backslash(-\infty,0]$. On $D$, the functions are all holomorphic. 
If the series converges uniformly on compact subsets of the domain, then the series itself is also holomorphic and you can interchange the sum and the integral. – Vercassivelaunos Nov 13 '20 at 10:53
• Thanks, I got it. But you did not solve my problem: I want to know whether $z=0$ is a singular point of $F(z)$, given the $f_n(z)$; you simply exclude it from the start by the choice of domain. Thanks anyway. – user142288 Nov 13 '20 at 10:59
• Well, if $f_n$ converges to $f$ uniformly on compact subsets, then the singularities of $f_n$ are also singularities of $f$, since the domains of $f$ and $f_n$ are exactly the same, and singularities are only characterized by the domain of the function (isolated boundary points of the domain). Same goes for the antiderivatives: if $F_n$ are antiderivatives of $f_n$ on the entire domain, then the singularities of $F_n$ are exactly the singularities of $f_n$, and then the singularities of $F$ are exactly those of $F_n$, which are those of $f_n$, which are those of $f$. – Vercassivelaunos Nov 13 '20 at 11:29
• Could you show me how an $f_n$ with singularities can converge to an $f$ uniformly at a singular point included in a compact subset? For example, for $f_n=\ln^n z$ with a compact domain (you can pick any one). – user142288 Nov 13 '20 at 12:03
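The theorem in the answer above can be sanity-checked numerically. A minimal sketch in Python (my own example, not from the thread), using the geometric series $f_n(z)=\sum_{k=0}^n z^k\to 1/(1-z)$ on the compact disc $|z|\le 1/2$, where the term-by-term antiderivatives vanishing at $0$ should converge uniformly to $-\log(1-z)$:

```python
import cmath

def F_n(z, n):
    """Term-by-term antiderivative of 1 + z + ... + z^n, vanishing at 0."""
    return sum(z**(k + 1) / (k + 1) for k in range(n + 1))

def F(z):
    """Antiderivative of 1/(1-z) vanishing at 0 (principal branch)."""
    return -cmath.log(1 - z)

# Sample points filling the compact disc |z| <= 1/2.
points = [0.5 * r * cmath.exp(2j * cmath.pi * t / 40)
          for t in range(40) for r in (0.2, 0.6, 1.0)]

# Approximate sup-norm error of F_60 on the disc.
sup_err = max(abs(F_n(z, 60) - F(z)) for z in points)
print(sup_err)  # tiny: the convergence is uniform on the disc
```

The tail of the antidifferentiated series is bounded by a geometric series on the disc, which is why so few terms already give machine-precision agreement.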
https://admin.clutchprep.com/chemistry/practice-problems/131604/a-2-950-x-10-2-m-solution-of-nacl-in-water-is-at-20-0-c-the-sample-was-created-b
Problem: A 2.950 × 10⁻² M solution of NaCl in water is at 20.0°C.
The sample was created by dissolving a sample of NaCl in water and then bringing the volume up to 1.000 L. It was determined that the volume of water needed to do this was 999.2 mL. The density of water at 20.0°C is 0.9982 g/mL.

Part A. Calculate the molality of the salt solution. Express your answer to four significant figures and include the appropriate units.

Part B. Calculate the mole fraction of salt in this solution. Express the mole fraction to four significant figures.

Part C. Calculate the concentration of the salt solution in percent by mass. Express your answer to four significant figures and include the appropriate units.

Part D. Calculate the concentration of the salt solution in parts per million. Express your answer to four significant figures and include the appropriate units.

Solution: Given the information provided for preparing a 2.950 × 10⁻² M NaCl solution in water at 20.0°C, we are asked to express the concentration of the salt solution four ways: molality (m), mole fraction (χ), percent by mass (%), and parts per million (ppm). In every case we need the amount of NaCl in the sample, which we calculate from the definition of molarity:

$$\text{Molarity (M)} = \frac{\text{moles of solute}}{\text{liters of solution}}$$
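Working those four definitions through with the given numbers can be sketched as follows (the molar masses of NaCl and H2O are standard assumed values, not given in the problem):

```python
# Assumed standard molar masses (g/mol); the rest of the data is from the problem.
M_NACL, M_H2O = 58.44, 18.02

mol_nacl = 2.950e-2 * 1.000          # molarity (mol/L) * volume (L)
g_nacl = mol_nacl * M_NACL           # grams of dissolved salt
g_water = 999.2 * 0.9982             # volume (mL) * density (g/mL)
mol_water = g_water / M_H2O

molality = mol_nacl / (g_water / 1000)        # mol solute per kg of solvent
x_nacl = mol_nacl / (mol_nacl + mol_water)    # mole fraction of NaCl
mass_pct = 100 * g_nacl / (g_nacl + g_water)  # percent by mass of solution
ppm = 1e4 * mass_pct                          # 1 mass percent = 10^4 ppm

print(molality, x_nacl, mass_pct, ppm)
```

The numbers come out near 0.02958 m, χ ≈ 5.33 × 10⁻⁴, 0.1725% by mass, and roughly 1725 ppm, consistent with four-significant-figure rounding.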
https://www.zbmath.org/?q=an%3A0926.33007
# zbMATH — the first resource for mathematics

Some functions that generalize the Krall-Laguerre polynomials. (English) Zbl 0926.33007

The paper deals with at most $\alpha$ successive Darboux transformations of $L(\alpha)$, where $L(\alpha)$ denotes the (semi-infinite) tridiagonal matrix associated with the three-term recursion relation satisfied by the Laguerre polynomials with weight function $\frac{1}{\Gamma(\alpha+1)} z^\alpha e^{-z}$, $\alpha>-1$, on the interval $[0,\infty[$. It is shown that the resulting (bi-infinite) tridiagonal matrix $\widetilde{L}(\alpha)$ is bispectral, i.e. the corresponding functions, called Krall-Laguerre functions, are orthogonal polynomials on $[0,\infty[$ with respect to some weight distribution $w(k,\alpha)$ with $1\leq k\leq\alpha$. Furthermore, as a consequence of the rational character of the Darboux factorization, these polynomials are eigenfunctions of a (finite-order) differential operator. The concept is enlarged to the two-parameter bi-infinite extension $L(\alpha, \varepsilon)$ of the matrix $L(\alpha)$, where $L(\alpha,0)= L(\alpha)$.

##### MSC:
33C45 Orthogonal polynomials and functions of hypergeometric type (Jacobi, Laguerre, Hermite, Askey scheme, etc.)
42C05 Orthogonal functions and polynomials, general theory of nontrigonometric harmonic analysis

##### Keywords:
Krall-Laguerre polynomials; Darboux transformations
https://www.onetransistor.eu/2019/04/https-server-on-esp8266-nodemcu.html
HTTPS Server on the ESP8266 NodeMCU

Generate a self-signed SSL certificate and use it to secure an ESP8266 web server.

NodeMCU is a development board based on the ESP8266. This microcontroller is made for IoT applications and features WiFi connectivity. There is an easy way to program ESP8266 boards using the Arduino IDE, and this is what I will use here too.

Nowadays, internet security is very important. Maybe you'll use the ESP8266 only on the local network, or maybe you'll allow external access to it. Unless it's behind a proxy, leaving it unsecured is not a good idea. In recent years, most websites switched to HTTPS, and modern browsers display warnings when requesting an unsecured HTTP page.

To offer secured content, a server greets the client with a trusted certificate, issued by a known authority. The certificate has a limited validity period and must be renewed from time to time. In this post, we'll generate an SSL certificate and use it on an ESP8266 web server. You can buy the certificate from a known authority or you can generate it for free on your computer. I'll use the second method, although it comes with a glitch: the browser will not trust the certificate. But that's OK; you can trust it as long as you generated it and you keep it private.

SSL encryption makes use of a public certificate (with a public key) and a private key, known only by the server. In order to decrypt traffic between devices, someone would need both the public key and the private one. As long as you keep the private key... private, the connection is safe even if the browser complains that it doesn't trust your self-signed certificate.

Remember that the ESP8266 is not optimized for SSL cryptography. You should set the clock frequency to 160 MHz when using SSL. Even so, some exchanges between server and clients may take too long and trigger a software reset. However, using the latest SDK for the Arduino IDE, I was able to run the HTTPS server without resetting the board.
Certificate and key

You may get the certificate and key from a trusted CA if you want to. For ESP8266 compatibility, the certificate must use SHA256 and the key length must be either 512 or 1024 bits. A 512-bit RSA key will make the ESP8266 respond faster, but it is considered weak by modern browsers. For better security, use a 1024-bit RSA key. The trusted CA should give you both the certificate and the private RSA key.

Like I said, I intend to use OpenSSL to generate the certificate and the private RSA key. Getting OpenSSL on Linux is easy, since most distributions already have it installed, and you can find it in software repositories otherwise. Windows builds are available on slproweb.com; choose the exe, Light version for your system architecture, and run the installer.

Launch openssl on the command line, from the folder where you want the certificate and key to be generated. Here's how it's done in a Linux terminal:

    cd ~/Desktop
    openssl

and in Windows PowerShell:

    cd ~/Desktop
    &"C:/Program Files/OpenSSL-Win64/bin/openssl.exe"

It is possible to generate both key and certificate using a single command at the OpenSSL prompt:

    req -x509 -newkey rsa:1024 -sha256 -keyout key.txt -out cert.txt -days 365 -nodes -subj "/C=RO/ST=B/L=Bucharest/O=OneTransistor [RO]/OU=OneTransistor/CN=esp8266.local" -addext subjectAltName=DNS:esp8266.local

or multiple commands:

    genrsa -out key.txt 1024
    rsa -in key.txt -out key.txt
    req -sha256 -new -nodes -key key.txt -out cert.csr -subj '/C=RO/ST=B/L=Bucharest/O=OneTransistor [RO]/OU=OneTransistor/CN=esp8266.local' -addext subjectAltName=DNS:esp8266.local
    x509 -req -sha256 -days 365 -in cert.csr -signkey key.txt -out cert.txt

In the first command, rsa:1024 specifies the key length in bits, while in the second approach, the last argument of genrsa is used for this. The -days parameter specifies the certificate validity starting from the generation time. You'll find the key in the key.txt file and the certificate in cert.txt.
Before generating them, it is useful to know about the parameters of the -subj argument, which you can set as you want:

• C - country, short name
• ST - state or province
• L - locality or city
• O - organization
• OU - organizational unit
• CN - common name (domain name)

The subjectAltName parameter must contain the domain name(s) where your server is accessible. It can also specify IP addresses, like this: subjectAltName=DNS:esp8266.local,IP:192.168.1.10. When requesting a webpage secured with this certificate, the browser will complain that it does not know the CA ("ERR_CERT_AUTHORITY_INVALID" is the message displayed by Google Chrome).

The web server

Let's get to the Arduino code. If you type just esp8266.local in the browser's address bar, the initial connection attempt will be made over HTTP (port 80). This means that if you only have the HTTPS server running (port 443), you'll get a connection refused over HTTP. Since this is not user friendly, we'll run two servers on the ESP8266: one over HTTP, which will send 301 headers pointing to the HTTPS one. Redirection is instant. The key and certificate must be pasted into the sketch:

    #include <ESP8266WiFi.h>
    #include <ESP8266mDNS.h>
    #include <ESP8266WebServer.h>
    #include <ESP8266WebServerSecure.h>

    const char *ssid = "ssid";
    const char *dname = "esp8266";

    ESP8266WebServer serverHTTP(80);
    BearSSL::ESP8266WebServerSecure server(443);  // HTTPS server on port 443

    static const char serverCert[] PROGMEM = R"EOF(
    paste here content of "cert.txt"
    )EOF";

    static const char serverKey[] PROGMEM = R"EOF(
    paste here content of "key.txt"
    )EOF";

In the setup() function, before configuring the servers, the ESP8266 must know the current date and time from an NTP server. That's easy, since we have the configTime() function, which takes the GMT offset in seconds as its first parameter. After getting the time, we can start the HTTP server and configure it to handle requests by responding with the redirection header. Lastly, we configure the HTTPS server with the key and certificate and turn it on.
    void setup() {
      pinMode(D0, OUTPUT);
      Serial.begin(115200);
      if (!connectToWifi()) {
        delay(60000);
        ESP.restart();
      }
      configTime(3 * 3600, 0, "pool.ntp.org", "time.nist.gov");
      serverHTTP.on("/", secureRedirect);
      serverHTTP.begin();
      // attach the certificate and private key, then start the HTTPS server
      server.getServer().setRSACert(new BearSSL::X509List(serverCert),
                                    new BearSSL::PrivateKey(serverKey));
      server.on("/", showWebpage);
      server.begin();
    }

In loop() we make sure both servers handle requests:

    void loop() {
      serverHTTP.handleClient();
      server.handleClient();
      MDNS.update();
    }

The HTTPS redirection routine is simple. However, no matter whether you reach the server by its multicast DNS name or by its IP, this function will point to the mDNS name, and that could be an issue for clients that do not support mDNS. I could have sent the local IP in the redirection header, but that would raise other certificate errors. Since the ESP8266 is a client in a DHCP-enabled network, it gets an IP from a router. And since I can't know what that IP is in advance, I can't generate a certificate with that IP in the SAN field or CN attribute.

    void secureRedirect() {
      serverHTTP.sendHeader("Location", "https://esp8266.local/");
      serverHTTP.send(301, "text/plain", "");
    }

The server's answer is managed by the showWebpage() function. The LED status is changed using the HTTP GET method.

(Screenshot: the server page viewed in Google Chrome.)

No errors in Chrome

There is a way to get rid of the "Not secure" error in Google Chrome. First of all, you must use a 1024-bit key, since 512 is considered weak. The certificate must be imported into the system's (browser's) trusted list. On Linux you can go straight to chrome://settings/certificates, the Authorities tab. On Windows operating systems, go to Settings - Advanced - Manage Certificates and select the Trusted Root Certification Authorities tab. Click the Import button and select your generated cert.txt file (select the "all files" type in the open dialog to see it). Import it and give it trust for site identification. On Windows, after import, find and select it in the list, then click Advanced and check Client Authentication. Close the browser and reopen it to see the changes. In Windows, the certificate installation is system wide.
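The HTTP-to-HTTPS redirect described above boils down to a 301 status plus a Location header; a minimal Python sketch of the same response logic (the function name is illustrative, not part of the ESP8266 code):

```python
def https_redirect(host: str, path: str = "/") -> tuple[int, dict]:
    """Build the 301 response the plain-HTTP listener sends for any request."""
    return 301, {"Location": f"https://{host}{path}", "Content-Length": "0"}

status, headers = https_redirect("esp8266.local")
print(status, headers["Location"])  # 301 https://esp8266.local/
```

A browser that receives this response immediately retries the same path over HTTPS, which is why the redirection feels instant.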
(Screenshot: checking the site security in Chrome DevTools, F12.)

Resources

The complete source code is available on GitHub. Modify the SSID and password to match your network, and don't forget to use your own certificate, since this one is no longer secure after being made public.
http://mathhelpforum.com/geometry/109025-ellipse-question.html
# Math Help - Ellipse Question

1. ## Ellipse Question

Find the co-ordinates of the centre and foci of the ellipse with equation $25x^2+16y^2-100x-256y+724=0$. What are the coordinates of its vertices and the equations of the directrices?

I've managed to manipulate the equation into the form $\frac{(x-2)^2}{16}+\frac{(y-8)^2}{25}=1$, so I got the centre as (2,8). I did vertex, directrix and foci for parabolas but not for an ellipse; what are the formulae? I've tried looking them up on the web but it's all confusing. So far I think Directrix $=\pm\frac{a^2}{c}$. Can someone help me out with just this one? I've got a whole load more of them to work through that are similar; I'm just stuck at the first hurdle.

2. Originally Posted by Kevlar [the question above]

Have a look here: Ellipse -- from Wolfram MathWorld (you have to scroll down a little bit!)
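For the ellipse in the question, the standard formulae (centre $(h,k)$; $a^2=25$ sits under the $y$-term, so the major axis is vertical; $c^2=a^2-b^2$; directrices $y=k\pm a^2/c$) work out as in this quick check:

```python
import math

# (x-2)^2/16 + (y-8)^2/25 = 1: centre (2, 8), vertical major axis since 25 > 16
h, k = 2.0, 8.0
a2, b2 = 25.0, 16.0
a = math.sqrt(a2)
c = math.sqrt(a2 - b2)          # distance from centre to each focus

foci = [(h, k + c), (h, k - c)]            # (2, 11) and (2, 5)
vertices = [(h, k + a), (h, k - a)]        # (2, 13) and (2, 3)
directrices = [k + a2 / c, k - a2 / c]     # horizontal lines y = 8 +/- 25/3

print(foci, vertices, directrices)
```

Note the asker's $\pm a^2/c$ formula gives the directrix positions relative to the centre, so the actual lines here are $y = 8 \pm 25/3$.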
https://math.stackexchange.com/questions/1186002/show-that-int-1-infty-frac-ln-x2x2x1-dx-frac8-pi-381
# Show that $\int_1^{\infty } \frac{(\ln x)^2}{x^2+x+1} \, dx = \frac{8 \pi ^3}{81 \sqrt{3}}$ I have found myself faced with evaluating the following integral: $$\int_1^{\infty } \frac{(\ln x)^2}{x^2+x+1} \, dx.$$ Mathematica gives a closed form of $8 \pi ^3/(81 \sqrt{3})$, but I have no idea how to arrive at this closed form. I've tried playing around with some methods from complex analysis, but I haven't had much luck (it has been a while). Does anyone have any ideas? Thanks in advance! • So, you've tried contour integration? – Mark Viola Mar 11 '15 at 22:40 Shocked, shocked! that there is no contour integration yet. So, without further ado... Note that $$f(x) = \frac{\log^2{x}}{x^2+x+1} \implies f \left ( \frac1{x} \right ) = x^2 f(x)$$ Thus, $$\int_1^{\infty} dx \frac{\log^2{x}}{x^2+x+1} = \int_0^{1} \frac{\log^2{x}}{x^2+x+1} = \frac12 \int_0^{\infty} dx \frac{\log^2{x}}{x^2+x+1}$$ Now consider $$\oint_C dz \frac{\log^3{z}}{z^2+z+1}$$ where $C$ is a keyhole contour of outer radius $R$ and inner radius $\epsilon$. Taking the limit as $R \to \infty$ and $\epsilon \to 0$, we get that the contour integral is equal to $$\int_0^{\infty} dx \frac{\log^3{x} - (\log{x}+i 2 \pi)^3}{x^2+x+1}$$ or $$-i 6 \pi \int_0^{\infty} dx \frac{\log^2{x}}{x^2+x+1} + 12 \pi^2 \int_0^{\infty} dx \frac{\log{x}}{x^2+x+1} +i 8 \pi^3 \int_0^{\infty} dx \frac{1}{x^2+x+1}$$ Note that the first integral is what we seek, the second integral is zero (by the same trick we applied above), and the third integral is relatively easy to find: $$\int_0^{\infty} \frac{dx}{x^2+x+1} = \int_0^{\infty} \frac{dx}{(x+1/2)^2+3/4} = \frac{2}{\sqrt{3}} \left [\arctan{\frac{2}{\sqrt{3}} \left ( x+\frac12 \right )} \right ]_0^{\infty} = \frac{2 \pi}{3 \sqrt{3}}$$ The contour integral is also equal to $i 2 \pi$ times the sum of the residues at the poles of the integrand, which are at $z_+ = e^{i 2 \pi/3}$ and $z_- = e^{i 4 \pi/3}$. 
The sum of the residues is $$\frac{-i 8 \pi^3/27}{i \sqrt{3}} + \frac{-i 64 \pi^3/27}{-i \sqrt{3}} = \frac{56 \pi^3}{27 \sqrt{3}}$$ Then $$-i 6 \pi \int_0^{\infty} dx \frac{\log^2{x}}{x^2+x+1} = i 2 \pi \frac{56 \pi^3}{27 \sqrt{3}} - i 8 \pi^3 \frac{2 \pi}{3 \sqrt{3}} = -i \frac{32 \pi^4}{27 \sqrt{3}}$$ Thus, $$\int_1^{\infty} dx \frac{\log^2{x}}{x^2+x+1} = \frac12 \int_0^{\infty} dx \frac{\log^2{x}}{x^2+x+1} = \frac{8 \pi^3}{81 \sqrt{3}}$$ • Straight forward!! – Mark Viola Mar 12 '15 at 0:32 • Just what I was looking for, Thanks! – stochasm Mar 12 '15 at 2:42 It can be observed that $x^{2} + x + 1 = (x-a)(x-b)$ where $a = e^{2\pi i/3}$ and $b = e^{-2\pi i/3}$. Now \begin{align} I &= \int_{1}^{\infty} \frac{ (\ln(x))^{2} }{ (x-a)(x-b) } \, dx = \frac{1}{a-b} \, \int_{1}^{\infty} \left( \frac{1}{x-a} - \frac{1}{x-b} \right) \, (\ln(x))^{2} \, dx. \end{align} From Wolfram Alpha the integral \begin{align} \int \frac{ (\ln(x))^{2} }{ x - a } dx = -2 Li_{3}\left( \frac{x}{a} \right) +2 \log(x) \, Li_{2} \left( \frac{x}{a} \right) + \log^{2}(x) \log\left( 1-\frac{x}{a} \right) \end{align} for which the integral in question becomes \begin{align} I &= \left[ \frac{-2}{a-b} \left(Li_{3}\left( \frac{x}{a} \right) - Li_{3}\left(\frac{x}{b} \right) \right) + \frac{2}{a-b} \log(x) \, \left(Li_{2} \left( \frac{x}{a} \right) - Li_{2}\left( \frac{x}{b} \right) \right) + \frac{1}{a-b} \, \log^{2}(x) \log\left( \frac{a-x}{b-x} \right) \right]_{1}^{\infty} \\ &= \frac{-2}{a-b} \left[ Li_{3}\left( \frac{1}{a} \right) - Li_{3}\left(\frac{1}{b} \right) \right]. \end{align} This can then be seen as \begin{align} I &= \frac{-2i}{\sqrt{3}} \left[ Li_{3}\left( e^{2\pi i/3} \right) - Li_{3}\left( e^{- 2\pi i/3} \right) \right] \\ &= \frac{-2 i}{\sqrt{3}} \cdot \frac{4 \pi^{3} i}{81} = \frac{8 \pi^{3}}{81 \sqrt{3}}. \end{align} \begin{align} \int_{1}^{\infty} \frac{ (\ln(x))^{2} }{ x^{2} + x + 1 } \, dx = \frac{8 \pi^{3}}{81 \sqrt{3}}. 
\end{align}

Hint: In general, $~I_n(k)~=~\displaystyle\int_0^\infty\frac{x^{k-1}}{1-x^n}~dx~=~\frac\pi n~\cot\bigg(k~\frac\pi n\bigg),~$ see Cauchy principal value. At the same time, a simple substitution of the form $t=\dfrac1x$ shows that the original integral can be written as $J=\displaystyle\int_1^\infty f(x)~dx~=~\int_0^1f(x)~dx~=~\frac12~\int_0^\infty f(x)~dx.~$ Then, by rewriting the integrand using $\dfrac1{x^2+x+1}=\dfrac{1-x~}{1-x^3}~,~$ we have $J=\dfrac{I_3''(1)-I_3''(2)}2$.

Here is an approach. Observe that $$I:=\int_1^{\infty } \frac{(\ln x)^2}{x^2+x+1} \, dx=\int_1^{\infty } \frac{(x-1)(\ln x)^2}{x^3-1} \, dx$$ and by the change of variable $x \to 1/x$ $$I=\int_0^1 \frac{(1-x)(\ln x)^2}{1-x^3} \, dx. \tag1$$

Since $\displaystyle \partial_s^2 (x^s)\big|_{s=0}=(\ln x)^2$, one may write $$I=\left.\partial_s^2\left(\int_0^1 \frac{x^s(1-x)}{1-x^3} \, dx\right)\right|_{s=0} \tag2$$

Now \begin{align} \int_0^1 \frac{x^s(1-x)}{1-x^3} \, dx&=\frac13\int_0^1 \frac{u^{(s-2)/3}-u^{(s-1)/3}}{1-u} \, du\qquad (u=x^3,\,x=u^{1/3})\\\\ &=\frac13\int_0^1 \frac{(1-u^{(s-1)/3})-(1-u^{(s-2)/3})}{1-u} \, du\\\\ &=\frac13\int_0^1 \frac{1-u^{(s-1)/3}}{1-u} \, du-\frac13\int_0^1 \frac{1-u^{(s-2)/3}}{1-u} \, du\\\\ &=\frac13\psi\left(\frac{s+2}{3}\right)-\frac13\psi\left(\frac{s+1}{3}\right) \tag3 \end{align} where we have used the standard integral representation for the digamma function. Then using $(2)$, we get $$I=\frac1{27}\psi''\!\!\left(\frac{2}{3}\right)-\frac1{27}\psi''\!\!\left(\frac{1}{3}\right)=\frac{8\pi^3}{81\sqrt{3}}$$ taking into account some special values of $\psi''$.
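The residue computation in the contour-integration answer above is easy to check numerically. A minimal sketch in Python — `keyhole_log` is a helper name introduced here for the branch of the logarithm with argument in $[0, 2\pi)$ that the keyhole contour requires:

```python
import cmath
import math

def keyhole_log(z):
    # Branch of log with argument in [0, 2*pi), matching the keyhole contour.
    theta = cmath.phase(z) % (2 * math.pi)
    return math.log(abs(z)) + 1j * theta

# Simple poles of 1/(z^2 + z + 1); the residue of log^3(z)/(z^2 + z + 1)
# at a simple pole z0 is log^3(z0) / (2*z0 + 1).
z_plus = cmath.exp(2j * math.pi / 3)
z_minus = cmath.exp(4j * math.pi / 3)
residue_sum = (keyhole_log(z_plus) ** 3 / (2 * z_plus + 1)
               + keyhole_log(z_minus) ** 3 / (2 * z_minus + 1))

# The answer's claimed sum of residues, 56*pi^3 / (27*sqrt(3)):
expected = 56 * math.pi ** 3 / (27 * math.sqrt(3))
assert abs(residue_sum - expected) < 1e-9
```

Note that the built-in `cmath.log` uses the principal branch with argument in $(-\pi, \pi]$, which would give the wrong value at $z_- = e^{i4\pi/3}$; the helper exists precisely to match the branch cut along the positive real axis.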
Note $$\int_0^1x^m(\ln x)^2dx=\frac{2}{(m+1)^3}$$ So \begin{eqnarray} I&=&\int_1^{\infty } \frac{(\ln x)^2}{x^2+x+1}dx\\ &=&\int_0^1\frac{(1-x)(\ln x)^2}{1-x^3}dx\\ &=&\int_0^1\sum_{n=0}^\infty(1-x)x^{3n}(\ln x)^2dx\\ &=&2\sum_{n=0}^\infty\left(\frac{1}{(3n+1)^3}-\frac{1}{(3n+2)^3}\right)\\ &=&2\sum_{n=-\infty}^\infty\frac{1}{(3n+1)^3} \end{eqnarray} Note \begin{eqnarray} \sum_{n=-\infty}^\infty\frac{1}{(3n+1)^3}=-\pi\,\text{Res}\left(\frac{\cot(\pi z)}{(3z+1)^3},\,-\frac{1}{3}\right)=\frac{4\pi^3}{81\sqrt3} \end{eqnarray} and hence $$I=\frac{8\pi^3}{81\sqrt3}.$$
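The series in the last answer also gives a quick numerical confirmation of the closed form. A sketch in Python (the partial-sum tail decays like $N^{-3}$, so a couple hundred thousand terms is far more than enough at double precision):

```python
import math

# I = 2 * sum_{n>=0} ( 1/(3n+1)^3 - 1/(3n+2)^3 ), from the series answer above.
N = 200_000
total = 2 * math.fsum(1 / (3 * n + 1) ** 3 - 1 / (3 * n + 2) ** 3
                      for n in range(N))

closed_form = 8 * math.pi ** 3 / (81 * math.sqrt(3))
assert abs(total - closed_form) < 1e-12
```

`math.fsum` is used instead of the built-in `sum` so that the only error left is the truncated tail, not accumulated floating-point rounding.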
https://math.stackexchange.com/questions/723914/why-can-i-not-use-an-equation-using-proportions-to-solve-this-triangle-problem
# Why can I not use an equation using proportions to solve this triangle problem?

It is difficult to see the picture of the problem. The question is "What are the lengths of AC and AB?" What is given is a right triangle, ABC. Angle B is 30 degrees and BC is 7.0 units long.

The way I solved it was by using the properties of a 30-60-90 triangle, which I learned from a unit circle: the segment opposite the 30 degree angle is 1/2 the length of the hypotenuse, and the segment opposite the 60 degree angle is the square root of 3 times the length of the side opposite 30 degrees. (I think that's correct.) I got it right.

Though, I originally tried to solve the problem using an equation with proportions. I figured: alright, 90 degrees over 7.0 is proportionate to 30 degrees over x. Thus: 90/7 = 30/x, and I solved it. However, I did not arrive at the correct answers. Can someone help me understand why this does not work?

Thanks, Paige

As long as it's a 30-60-90 triangle, you can always use the proportions. You're thinking about the wrong proportions, however: the lengths are not exactly proportional to the angles. You'll get to that in trig with the Law of Sines, but, for now, the sides are proportional to each other in the way presented in the image above.

Above is a triangle similar to the one you presented (i.e. angles at the same place) with a variable $a$ to describe the proportions. We know that, in your case, the side that's $2a$ is equal to 7. Hence: $$2a=7$$$$a=3.5$$

Now that we know what $a$ is, we can find the lengths easily. AC is the side opposite of the 30-degree angle and is equivalent to $a$. Therefore, it is 3.5. That already eliminates you down to choices C and D. Then we have AB, which is opposite of the 60-degree angle and is equal to $a\sqrt{3}$. This is approximately equal to $6.1$.

I just read your question. First and foremost, I apologize if my answer isn't in the right syntax, mathematical grammar, etc.
This is my first Stack Exchange post. The easiest way I have found to solve these problems is to get the ratio for one leg and work out from there. For instance, the sides of a 30-60-90 triangle are $1, \sqrt{3}, \text{ and } 2$ (2 being the hypotenuse), or multiples of such. In your triangle, there is a hypotenuse of 7. To get the ratio, I divided $7$ by $2$. This gave me $3.5$. Now to get the other sides, I simply multiplied the ratio by the sides of the original 30-60-90 triangle and got the answer, D (i.e. $\sqrt{3} \times 3.5$ = a rounded $6.1$ and $1 \times 3.5 = 3.5$, which are answer choices). I don't know if that is what you meant about proportions, but it's the best way I've found.
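The key point of the answers above — side lengths track the *sines* of the opposite angles, not the angles themselves — can be checked numerically. A small sketch (the 7.0 hypotenuse and the 3.5 / roughly 6.1 answers come from the problem above):

```python
import math

hyp = 7.0
AC = hyp * math.sin(math.radians(30))  # side opposite the 30-degree angle
AB = hyp * math.sin(math.radians(60))  # side opposite the 60-degree angle

assert abs(AC - 3.5) < 1e-9
assert abs(AB - 3.5 * math.sqrt(3)) < 1e-9   # about 6.06, rounds to 6.1
assert abs(AC ** 2 + AB ** 2 - hyp ** 2) < 1e-9  # Pythagorean check

# The naive "angle proportion" 90/7 = 30/x gives x = 7*30/90, about 2.33,
# which is nowhere near the correct 3.5:
x_wrong = 7 * 30 / 90
assert abs(x_wrong - 3.5) > 1
```

This is why the proportion 90/7 = 30/x fails: in a right triangle the side opposite a 30-degree angle is hyp·sin(30°) = hyp/2, not hyp·(30/90).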
https://www.physicsforums.com/threads/homology-of-the-klein-bottle-using-m-v-sequences.298740/
# Homology of the Klein Bottle using M-V sequences

1. Mar 10, 2009

### quasar987

2. Mar 10, 2009

### matt grime

The homology group is Z + Z quotiented by the image of alpha. In the given bases — label them e, f — what is this? It is <e,f>/(2f=0), i.e. Z + Z/2Z

3. Mar 10, 2009

### owlpride

How do you compute the quotient $$\frac{\ker(d_n)}{\text{im}(d_{n+1})}$$ ? If you can express $$\ker(d_n)$$ and $$\text{im}(d_{n+1})$$ in terms of the same basis, then modding out is straightforward. That's why Wiki is choosing a non-standard basis for Z². Why don't you write out the sequence and the maps?

Last edited: Mar 10, 2009

4. Mar 11, 2009

### quasar987

Thanks. I had forgotten that given a short exact sequence 0-->A-f->B-->C-->0, we have C=B/Im(f). Actually, there is no need to talk about bases here since Im(alpha) is clearly just 2Z, so H_1(K)=(Z+Z)/(2Z)=Z+Z/2Z.
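The sequence itself was not captured in the thread (the opening post is missing), so here is a hedged sketch of the Mayer–Vietoris segment presumably under discussion, with the Klein bottle $K$ written as two Möbius bands $A$ and $B$ glued along their common boundary circle, so that $A\cap B \simeq S^1$ and each of $A$, $B$ deformation retracts onto its core circle:

$$\tilde{H}_1(A\cap B) \xrightarrow{\ \alpha\ } \tilde{H}_1(A)\oplus \tilde{H}_1(B) \longrightarrow \tilde{H}_1(K) \longrightarrow \tilde{H}_0(A\cap B) = 0$$

The boundary circle wraps twice around each core, so (up to signs depending on orientation choices) $\alpha(1)=(2,2)$. In the basis $e=(1,0)$, $f=(1,1)$ of $\mathbb{Z}\oplus\mathbb{Z}$ — the kind of basis change matt grime alludes to — we have $\operatorname{im}\alpha = 2\mathbb{Z}f$, and hence

$$H_1(K) \cong (\mathbb{Z}e \oplus \mathbb{Z}f)/\langle 2f \rangle \cong \mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}.$$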
http://mathoverflow.net/questions/103281/what-is-the-history-of-the-notion-of-subdivision-of-categories
# What is the history of the notion of subdivision of categories?

A recent answer by Peter May prompts me to ask a question which I have been considering asking for several months. (The reason why I have not asked it before is that it is not directly related to my research, and I therefore have almost no motivation beyond sheer curiosity.)

According to Peter May, there was a "folklore" notion of categorical subdivision in the 1960's. I learnt about it from Matias del Hoyo's paper, cited by Roman Bruckner in his comment to Peter May's answer. If I am not mistaken, this notion had appeared in Anderson's paper "Fibrations and Geometric Realizations" as well as a paper authored by Dwyer and Kan, "Function complexes for diagrams of simplicial sets".

Who introduced the notion of subdivision of a (small) category? Are there any early references other than the two aforementioned papers?

Del Hoyo claims that performing the subdivision of a small category amounts to taking the nerve, applying Kan's simplicial subdivision functor, and coming back to $Cat$ by applying the nerve's left adjoint. Unfortunately, he does not prove this result. (I have discussed it with him recently. If my memory serves me right, among other things, he proves that Anderson's and Dwyer-Kan's notions are equivalent.) Georges Maltsiniotis has given a rough proof of the verification to me. I was at that time unable to find any published proof. Even if it is an easy "folklore" result, I think it would be useful to have a proof publicly available somewhere.

Is there a published proof of the fact that this categorical subdivision is merely the composition of three well-known functors as above? Was it also "common knowledge" in the 1960's?

Finally, I cannot help asking a question which had come to my mind at that time, but to which I have not devoted much consideration since then.
Using higher categorical nerves, there is an "obvious" definition of what could be analogs of this construction for higher categories. Therefore: have "higher analogs" of this categorical subdivision been studied?

- Sorry to blow my own horn, but I am teaching things related to this in the REU I run at the University of Chicago, and I'm writing a book that will include an exposition of subdivisions of categories and some neat combinatorial relationships (due to students, not me) between that and other notions, certainly including the factorization you mention (which is probably the best definition of the subdivision of a category). Note that as a composite of left and right adjoints, this is not a categorically well-behaved construction. – Peter May Jul 27 '12 at 17:45
- Many thanks to Peter May for blowing such an interesting horn. I am eagerly waiting for this book to appear. I have been wondering for a while why neither Dwyer-Kan nor Thomason mention the fact that the subdivision is merely that composite. I would accept this comment as an answer if I could. (P.S. I had to look up the meaning of "REU". It means "Research Experience for Undergraduates".) – Jonathan Chiche Jul 29 '12 at 3:29
- This is not exactly an answer to your last question, but you may be interested in the work of Barwick and Kan on "relative categories" and "$n$-relative categories" -- I think they use a notion of "relative subdivision". – Mike Shulman Jul 30 '12 at 2:23
- Thanks, Mike. I was not aware of that, yet I was planning to study that work anyway! – Jonathan Chiche Jul 31 '12 at 2:09
http://dei-central.com/enrique-murciano-wjsydzw/451vker.php?99f36c=solving-quadratic-equations%3A-factoring-quizlet
## solving quadratic equations: factoring quizlet December 19, 2020 || POSTED BY || Industry News Solving Quadratic Equations by Factoring. Q. Edit. 7 months ago. Quadratic Functions and Equations | Algebra Study Guide, Quadratic Functions: Terms and Definitions, Critical Thinking Questions: Quadratic Functions…, Solving One-Step Equations Involving Addition an…, Solving One-Step Equations Involving Multiplicat…, Solving Two-Step Equations with Multiplication a…, Quadratic equations, Factoring Quadratics, Quadratic nth Term, nth term of quadratic sequences, Solving Equations, Solving Quadratic Equations: Factoring Learn factoring quadratic equations solving with free interactive flashcards. 10 Qs . 12.2k plays . 62% average accuracy. Mathematics. 8th - 10th grade. PLAY. Edit. The factors are 2x and 3x − 1, . This quiz is incomplete! by marlucchi. Solve an equation of the form a x 2 + b x + c = 0 by using the quadratic formula: x =. 7 months ago. 13.3k plays . 0. a) x² +10x + 25 = 0 b) x² - 81 = 0 c) 2x² + 4x = 0 d) x² - 2x = 0 2) The roots of a quadratic equation are 4 and -3. Edit. Factoring breaks apart the expression into manageable multipliers in order to easily find the zeros of the composite expression. Step 2: Rewrite 5x with −4x and 9x: 6x2 − 4x + 9x − 6. marlucchi. Solving Quadratic Equations by Factoring – Example. x = -4 and x = -5. x = 20 and x = -9. x = 4 and x = 5. x = -4 and x = 5. x = -4 and x = -5. alternatives. In the quadratic equation x2 + 3x + 2 = 0 what does a * c equal? Many quadratic equations cannot be solved by factoring. Convert x2 - 9x + 20 to factored form and solve. 2x(3x − 1) = 0. These are all quadratic equations in disguise: Save. 3x2 + Factoring. 9.4 – 9.6 Factoring Quadratic Equations Study Guide Questions 9.4: Factor Using the GCF: You should be able to: 1. f(x)= x² + 7x + 6. (The * means multiply.) Save. Choose from 500 different sets of factoring quadratic equations solving flashcards on Quizlet. 3 years ago. 
Convert x2 - 9x + 20 to factored form and solve. We're asked to solve for s. And we have s squared minus 2s minus 35 is equal to 0. The questions in this post-quiz will assess your mastery of this topic. Solving Quadratic Equations by Factoring DRAFT. quadratic formula . Solve x²+4x=5. Perform the FOIL method on the left-hand side of the equation. Solving Equations in Quadratic Form Quiz: Solving Equations in Quadratic Form Solving Radical Equations Quiz: Solving Radical Equations Solving Quadratic Inequalities Quiz: Solving Quadratic … There are many ways to solve quadratics. For the following problems, practice choosing the best method by solving for x in the quadratic equation. Click the “Take the Quiz” button below to begin. Preview this quiz on Quizizz. Learn how to solve quadratic equations like (x-1)(x+3)=0 and how to use factorization to solve other forms of equations. 62% average accuracy. 2x is 0 when x = 0; 3x − 1 is zero when x = 13; And this is the graph (see how it is zero at x=0 and x= 13): To solve a quadratic equation by factoring, Put all terms on one side of the equal sign, leaving zero on the other side. Played 304 times. Choose from 500 different sets of factoring solving by quadratic equations flashcards on Quizlet. Make sure that the a or x2 … 9th - 11th grade. Start studying Solving Quadratic Equations: Factoring Assignment. See how well you do in this high school Math quiz! Preview this quiz on Quizizz. F…, D. (The graph curves at 0 and has a point on (2,2) and (-2,-2)), C. Reflect across the X-axis, translate 2 units to the left, t…, B. F…, D. (The graph curves at 0 and has a point on (2,2) and (-2,-2)), C. Reflect across the X-axis, translate 2 units to the left, t…, B. Solve x²-5x=-6. Solve for a: (a + 4)(a – 2) = 7. Solving Quadratic Equations by Factoring, Quadratic equations, Factoring and Solving Quadratics, Solving Quadratic Equations by Graphing, Solving Quadratic Equations and the Quadratic Formula. Edit. 
After you have answered all of the question, click the “Submit Quiz” button at the bottom of the next page. Factoring is an important process in algebra to simplify expressions, simplify fractions, and solve equations. Is it Quadratic? (x - 4)(x + 4)=0. 2 years ago. Mathematics. There are three basic methods for solving quadratic equations: factoring, using the quadratic formula, and completing the square. Edit. Quiz & Worksheet Goals. View 1.5_Quadratic_Quiz_A.pdf from MATH MISC at Governors State University. Edit. 1.2k plays . 1. STUDY. For example: 6x 2 - 28x + 10 = 0 spiveyd_12962. A second method of solving quadratic equations involves the use of the following formula: a, b, and c are taken from the quadratic equation written in its general form of . 1 times. 0. (Graph in all positive and does not touch any axis), Polynomial Solutions Test, Solving Logarithmic and Exponential Equations, Solving Quadratic Equations by Factoring, The volume of a rectangular prism is mc010-1.jpg with height x…, The area of a rectangle is mc011-1.jpg with length x + 3. A quadratic equation is an equation where the highest exponent power of a variable is 2 (ie, x 2). estrella_medina_13216. ... How to Solve a Quadratic Equation by Factoring 7:53 Quizlet flashcards, … Delete Quiz. mechazabal1013. a) x² +10x + 25 = 0 b) x² - 81 = 0 c) 2x² + 4x = 0 d) x² - 2x = 0 2) The roots of a quadratic equation are 4 and -3. This quiz is incomplete! About This Quiz & Worksheet. Save. Quadratic Functions and Equations | Algebra Study Guide, Quadratic Functions: Terms and Definitions, Critical Thinking Questions: Quadratic Functions…, Solving One-Step Equations Involving Addition an…, Solving One-Step Equations Involving Multiplicat…, Solving Two-Step Equations with Multiplication a…, Quadratic equations, Factoring Quadratics, Quadratic nth Term, nth term of quadratic sequences, Solving Equations, Solving Quadratic Equations: Factoring a day ago. 
Use any method of factoring to solve the following quadratic equations below. 0. Mathematics. 8th - 9th grade. Learn how to solve quadratic equations like (x-1)(x+3)=0 and how to use factorization to solve other forms of equations. Look at the following example. Quiz: Solving Quadratic Equations Previous Roots and Radicals. 38 times. DRAFT. 50 times. Save. 6 and 2 have a common factor of 2:. By quadratic formula: As you can see, each method of solving will yield the same result but have different levels of complexity; it takes practice to be able … Edit. 1) Which of the following quadratic equations can be solved easily by extracting square roots? Mathematics. /12 A2T Honors [3] Name: 1.5 Mini Quadratic Quiz A Solve each equation by factoring 1. x2 + 16x + 55 = 0 1. x = 2. For the following problems, practice choosing the best method by solving for x in the quadratic equation. Solving Quadratic Equations by Factoring DRAFT. Created by. Edit. 9th - 12th grade. Edit. About This Quiz & Worksheet. Write. 70% average accuracy. Edit. Set each factor equal to zero. 0. So as I just said, we're going to try to solve the equation 5x squared minus 20x plus 15 is equal to 0. Solving Quadratic Equations by Factoring DRAFT. 10 Qs . Solving Quadratic Equations by Factoring DRAFT. A quadratic equation is a polynomial equation in a single variable where the highest exponent of the variable is 2. Choose from 500 different sets of algebra solving quadratic equations flashcards on Quizlet. Solve by factoring: x2 - 9x + 20 = 0. answer choices. Solving quadratic equations by factoring DRAFT. Test. Save. 8th - 9th grade. Played 38 times. Solving Quadratic Equations by Factoring DRAFT. $(x + \frac{5}{2})^2 - \frac{25}{4}$ \[(x - 5)^2 + … And maybe this will get us into a factor … There are three main ways to solve quadratic equations: 1) to factor the quadratic equation if you can do so, 2) to use the quadratic formula, or 3) to complete the square. 
Quiz 8 Quadratic Roots Solving Quadrartics By Factoring - Displaying top 8 worksheets found for this concept.. And we have done it! DRAFT. Solve x²+4x=5. Solving Quadratic Equations by Factoring DRAFT. Flashcards. Preview this quiz on Quizizz. Only if it can be put in the form ax 2 + bx + c = 0, and a is not zero.. b² + b = 6. View 9.4-9.6_study_guide_(quiz).pdf from MATH 112 at Harvard University. Key Concepts: Terms in this set (15) Two positive integers have a product of 176. Factoring can be used to solve quadratic equations, and this quiz/worksheet combo will help you make sure that you understand how to use this technique. Match. (x + 5)2 - 25. 72% average accuracy. pnelson. Solving Quadratic Equations by Factoring, Quadratic equations, Factoring and Solving Quadratics, Solving Quadratic Equations by Graphing, Solving Quadratic Equations and the Quadratic Formula. 2 years ago. 38 times. Preview this quiz on Quizizz. 100% average accuracy. This quiz and worksheet can help you practice factoring quadratic equations with practice problems. Factor Completely. 11 Qs . The three main ways to solve quadratic equations are: to factor, to use the quadratic formula, or to complete the square. 0. Quadratic equations don’t behave like linear ones – sometimes they don’t even have a solution, yet at other times they can have 2! Played 87 times. About the quadratic formula. 1. Preview this quiz on Quizizz. For example: 2x 2 - 3x - 5 = 0. Completing the Square Move all of the terms to one side of the equation. Assignment, Factoring and Solving Quadratic Equations, Algebra 2 Ch4 Test Review: Solving Quadratic Equations, 1. Played 38 times. 1) Which of the following quadratic equations can be solved easily by extracting square roots? About This Quiz & Worksheet. This quiz and attached worksheet will help gauge your understanding of solving quadratic trinomials by factoring. List the positive factors of ac = −36: 1, 2, 3, 4, 6, 9, 12, 18, 36. 
One integer is 5 less than the other integer. Quadratic equations A quadratic equation contains terms up to \ (x^2\). 2 years ago. Edit. 2.7k plays . 0. In the quadratic equation x2 + 3x + 2 = 0 what does a * c equal? Solving Quadratic Equations by Factoring DRAFT. Factor x² - 16 = 0. x = -4, x = 4. Learn vocabulary, terms, and more with flashcards, games, and other study tools. One of the numbers has to be negative to make −36, so by playing with a few different numbers I find that −4 and 9 work nicely: −4×9 = −36 and −4+9 = 5. Spell. We can help you solve an equation of the form "ax 2 + bx + c = 0" Just enter the values of a, b and c below:. Identify STEP 1: Perform the FOIL Mehtod. Solving Quadratic Equations by Factoring DRAFT. Solve x²-x=6. Quadratic Transformations . To play this quiz, please finish editing it. To play this quiz, please finish editing it. Solve x² - 16 = 0. Quadratic Equation Solver. Mathematics. Gravity. 1 times. Factor. Solve x²-5x=-6. Solving quadratic equations by factoring DRAFT. 9th - 12th grade. What are all the roots o…, According to the Rational Root Theorem, what are all the poten…, Factoring to Solve Quadratic Equations Quiz, A ball is thrown into the air with an initial upward velocity…, Quadratic Factoring to Solve, Factoring to Solve, Solving Quadratic Equations and the Quadratic Formula, Use the zero product property to solve (x + 7)(x + 9) = 0. Convert x2 - 9x + 20 to factored form and solve. Assignment, Factoring and Solving Quadratic Equations, Unit 4 Day 8: Solving Quadratics by Factoring, Algebra 2 Ch4 Test Review: Solving Quadratic Equations, To express a polynomial as the product of monomials and polyno…, 1. Solving Quadratic Equations by Factoring study guide by Shana_Allen_Math includes 24 questions covering vocabulary, terms and more. k.melito. Symmetry . Save. Solving Quadratic Equations by Factoring. Save. Learn. … 0. Learn factoring solving by quadratic equations with free interactive flashcards. 2. 
Write the equation of the quadratic in standard form whose zeros are -6 and -1. f(x)=(x+2)(x-4) Write the equation of the quadratic whose x-intercepts are (-2,0) and (4,0). Write x2 + 5x in completed square form. 6 months ago. by k.melito. by k.melito. Solving quadratic equations Identifying greatest common factor in given quadratic equations Skills practiced. The three main ways to solve quadratic equations are: to factor, to use the quadratic formula, or to complete the square. And x 2 and x have a common factor of x:. 9th - 11th grade. We can now also find the roots (where it equals zero):. The quadratic formula. Next Solving Quadratic Equations. x = 20 and x = -9. Solving Quadratic Equations by Factoring . k.melito. spiveyd_12962. − b ± √ b 2 − 4 a c. 2 a. Preview this quiz on Quizizz. Edit. What are all the roots o…, According to the Rational Root Theorem, what are all the poten…. 87 times. by estrella_medina_13216. If you're seeing this message, it means we're having trouble loading external resources on our website. 2(3x 2 − x) = 0. Preview this quiz on Quizizz. Solving Quadratic Equations: Factoring. Example: what are the factors of 6x 2 − 2x = 0?. (The * means multiply.) 0. If you're seeing this message, it means we're having trouble loading external resources on our website. a day ago. 15 Qs . 6 months ago. Step 1: ac is 6× (−6) = −36, and b is 5. by pnelson. Edit. Usin…, One factor of mc017-1.jpg is (x - 2). 100% average accuracy. Solving Quadratic Equations by Factoring DRAFT. Please select the best answer to these multiple choice questions. Edit. 2 years ago. Learn algebra solving quadratic equations with free interactive flashcards. 3 years ago. Mathematics. Usin…, One factor of mc017-1.jpg is (x - 2). Quadratic curves, called parabolas, occur in nature and in real-life situations, so it’s a good idea to know all the intricacies of them. 
If the equation is not equal to zero, you will need to go about solving quadratic equations by factoring using the steps below. Now the first thing I like to do whenever I see a coefficient out here on the x squared term that's not a 1, is to see if I can divide everything by that term to try to simplify this a little bit. Topics you will need to know to pass the quiz include solving inequalities and various characteristics. Played 50 times. Mathematics. Solve x²+x=6. The name comes from "quad" meaning square, as the variable is squared (in other words x 2).. 304 times. Solve by factoring: x2 - 9x + 20 = 0 . 72% average accuracy. (Graph in all positive and does not touch any axis), Factoring to Solve Quadratic Equations Quiz, A ball is thrown into the air with an initial upward velocity…, To express a polynomial as the product of monomials and polyno…, Quadratic Factoring to Solve, Factoring to Solve, Solving Quadratic Equations and the Quadratic Formula, Unit 4 Day 8: Solving Quadratics by Factoring, Polynomial Solutions Test, Solving Logarithmic and Exponential Equations, Solving Quadratic Equations by Factoring, The volume of a rectangular prism is mc010-1.jpg with height x…, The area of a rectangle is mc011-1.jpg with length x + 3. This is generally true when the roots, or answers, are not rational numbers. Solving Quadratic Equations by Factoring DRAFT. ax 2 + bx + c = 0 5X with −4x and 9x: 6x2 − 4x + 9x − 6 and various.. − 6 to use the quadratic formula, or to complete the square will gauge! 5 less than the other integer left-hand side of the following problems, choosing! Is not zero 6 and 2 have a common factor of mc017-1.jpg is ( x + c = 0.. Practice choosing the best answer to these multiple choice questions see how well you do this... Three basic methods for solving quadratic equations in disguise: View 9.4-9.6_study_guide_ quiz. − 6 of algebra solving quadratic trinomials by factoring study guide questions 9.4: factor using the formula! 
A quadratic equation is an equation of the form $ax^2 + bx + c = 0$, where $a$ is not equal to zero. The name comes from "quad," meaning square, as the variable is squared (in other words, $x^2$). The three ways to go about solving quadratic equations are: to factor, to use the quadratic formula, or to complete the square. Factoring is an important process in algebra used to simplify expressions, simplify fractions, and solve equations; it breaks the expression into manageable multipliers in order to easily find the roots (where the expression equals zero). To solve by factoring, move all of the terms to one side of the equation, rewrite it in factored form, and solve. Some quadratic equations can also be solved easily by extracting square roots or by factoring out the GCF.

Sample questions from the quiz:

- In the quadratic equation $x^2 + 3x + 2 = 0$, what does $a \cdot c$ equal?
- Solve $6x^2 + 5x - 6 = 0$ by factoring. Step 1: $ac$ is $6 \times (-6) = -36$ and $b$ is $5$, so rewrite $5x$ with $-4x$ and $9x$: $6x^2 - 4x + 9x - 6$. Then factor by grouping and set each factor equal to zero.
- Solve $(x + 4)(x - 4) = 0$ and $(a + 4)(a - 2) = 0$.
- Find the zeros of $2(3x^2 - x) = 7$.
- Two positive integers have a product of $176$, and one integer is $5$ less than the other integer. Find the integers.
- According to the Rational Root Theorem, what are all the possible rational roots of a given polynomial?

When an expression does not factor nicely, solve for $x$ using the quadratic formula:

$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$

Roots found this way are generally not rational numbers unless the discriminant is a perfect square.

This study guide by Shana_Allen_Math includes 24 multiple-choice questions covering vocabulary, terms, and more; learn with flashcards on Quizlet. What you need to know to pass the quiz includes solving inequalities and various characteristics of quadratics, and this quiz and its attached worksheet will help gauge your understanding of solving quadratic trinomials by factoring. After you have answered all of the questions, click the "Submit quiz" button at the bottom of the next page. (Related study materials: "9.4-9.6_Study_Guide_(quiz).pdf" from MATH MISC at Governors State University, and "9.6 Factoring Quadratic Equations" from MATH 112 at Harvard University.)
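The quadratic formula above translates directly into code. A minimal Python sketch (the function name `solve_quadratic` and the demo calls are mine, not from the quiz), applied to the two equations discussed:

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of a*x**2 + b*x + c = 0 via the quadratic formula."""
    if a == 0:
        raise ValueError("not quadratic: a must be nonzero")
    disc = b * b - 4 * a * c          # discriminant b^2 - 4ac
    if disc < 0:
        return ()                     # no real roots
    root = math.sqrt(disc)
    return tuple(sorted(((-b - root) / (2 * a), (-b + root) / (2 * a))))

print(solve_quadratic(1, 3, 2))   # x^2 + 3x + 2 = 0   ->  roots -2 and -1
print(solve_quadratic(6, 5, -6))  # 6x^2 + 5x - 6 = 0  ->  roots -3/2 and 2/3
```

The second call agrees with the factoring $(3x - 2)(2x + 3) = 0$ worked out above.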
https://www.physicsforums.com/threads/matlab-gui-changing-multiple-lines-static-text.392731/
# Matlab GUI: changing multiple lines static text

• MATLAB

Hi, I ran into a problem while creating a Matlab GUI and I can't seem to figure out why. I'm trying to get multiple lines in one static text. I set the 'max' property to 12 and then wrote the following code to test:

Code:
A='a';B='b';C='c';D='d';E='e';F='f';G='g';H='h';I='i';J='j';K='k';L='l';
set(handles.text1,'String',[A;B;C;D;E;F;G;H;I;J;K;L]);

That seems to work fine, but when I change A to, for example, 'this is a test', I get an error saying there's something wrong with my set instruction. I can't seem to be able to display more than one letter for each string. Can anyone help me with this?

A={'blah blah whatever'};
https://www.omnimaga.org/ti-basic-language/sidescrolling/msg92109/
### Author Topic: Sidescrolling (Read 2014 times)

0 Members and 1 Guest are viewing this topic.

#### Deep Toaster
• So much to do, so much time, so little motivation
• LV13 Extreme Addict (Next: 9001)
• Posts: 8217
• Rating: +758/-15

##### Sidescrolling
« on: May 28, 2010, 03:37:34 pm »

I'm starting on a platform game called Absolute Insanity II, and I don't know if I should include sidescrolling. It's going to use stat plots for the main graphics portion. I've never used them before, but they seem pretty fast compared to sprites, so I was thinking of including sidescrolling if it ends up running fast enough. I've tried to make a simple sidescrolling engine, but it hasn't worked so far. Is it even possible to have sidescrolling on the graph screen, or is this another one of my overambitious failures? If anyone knows of a program (in pure BASIC) that has sidescrolling on the graph screen, please tell me. I really need examples right now.

#### jsj795
• LV9 Veteran (Next: 1337)
• Posts: 1105
• Rating: +84/-3

##### Re: Sidescrolling
« Reply #1 on: May 28, 2010, 07:37:17 pm »

Elmgon: a nice example. Yes, this is pure BASIC. Serenity: another one.

Spoiler For funny life mathematics:
1. ROMANCE MATHEMATICS: Smart man + smart woman = romance. Smart man + dumb woman = affair. Dumb man + smart woman = marriage. Dumb man + dumb woman = pregnancy.
2. OFFICE ARITHMETIC: Smart boss + smart employee = profit. Smart boss + dumb employee = production. Dumb boss + smart employee = promotion. Dumb boss + dumb employee = overtime.
3. SHOPPING MATH: A man will pay $2 for a $1 item he needs. A woman will pay $1 for a $2 item that she doesn't need.
4. GENERAL EQUATIONS & STATISTICS: A woman worries about the future until she gets a husband. A man never worries about the future until he gets a wife. A successful man is one who makes more money than his wife can spend. A successful woman is one who can find such a man.
5. HAPPINESS: To be happy with a man, you must understand him a lot and love him a little. To be happy with a woman, you must love her a lot and not try to understand her at all.
6. LONGEVITY: Married men live longer than single men do, but married men are a lot more willing to die.
7. PROPENSITY TO CHANGE: A woman marries a man expecting he will change, but he doesn't. A man marries a woman expecting that she won't change, and she does.
8. DISCUSSION TECHNIQUE: A woman has the last word in any argument. Anything a man says after that is the beginning of a new argument.
Girls = Time * Money. Time = Money, so Girls = Money squared. Money = sqrt(Evil), so Girls = sqrt(Evil) squared = Evil. *Girls=Evil credit goes to Compynerd255*

#### Deep Toaster

##### Re: Sidescrolling
« Reply #2 on: May 28, 2010, 07:43:39 pm »

Wow, those are amazing programs. Especially that cable-swing-type movement in Serenity. Sorry I wasn't specific, but what I meant was continuous sidescrolling (i.e., the screen moves with the character, not just when the character reaches an edge). Is this possible? I've done it in one of my earliest games, but that one used homescreen graphics, which were horrible, now that I look back at it.
« Last Edit: September 03, 2013, 07:34:47 pm by Deep Thought »

#### meishe91
• Super Ninja
• LV11 Super Veteran (Next: 3000)
• Posts: 2946
• Rating: +115/-11

##### Re: Sidescrolling
« Reply #3 on: May 28, 2010, 07:46:52 pm »

Technically I believe it is possible, but you have to refresh the screen after every movement that reveals more level. So I think it will just run really slow.
In Axe, however, it may be possible if you're interested in going that route.

Spoiler For Spoiler: For the 51st time, that is not my card! (Magic Joke)

#### Deep Toaster

##### Re: Sidescrolling
« Reply #4 on: May 28, 2010, 07:59:14 pm »

OK, that's it. I give up on AbsIns2.
« Last Edit: September 03, 2013, 07:34:53 pm by Deep Thought »

#### Builderboy
• Physics Guru
• CoT Emeritus
• LV13 Extreme Addict (Next: 9001)
• Posts: 5673
• Rating: +613/-9
• Would you kindly?

##### Re: Sidescrolling
« Reply #5 on: May 28, 2010, 08:04:15 pm »

If you want good speed and continuous sidescrolling in Basic, your best bet is still homescreen graphics, which aren't as bad as you might think with good character choices. Are you doing omnidirectional or just left and right?

#### meishe91

##### Re: Sidescrolling
« Reply #6 on: May 28, 2010, 08:09:51 pm »

Oh, and if you do the home screen approach, then there are ways of actually altering the font to look like good sprites (this is using assembly utilities, though). Depends on your definition of "pure BASIC."

#### Deep Toaster

##### Re: Sidescrolling
« Reply #7 on: May 28, 2010, 08:18:21 pm »

Just left and right. Last time I used homescreen graphics, I used a ? for the character (bad choice), M and W for the spikes, and O for blocks.
I guess those aren't that bad of choices (except for the question mark), but since this is a sequel to Absolute Insanity, I wanted to have somewhat better graphics.

I'm giving up continuous sidescrolling and will instead scroll the screen whenever the character reaches an edge. Unfortunately, this is looking more and more like Contra 83. EDIT: Not that I don't like Contra's graphics; they're awesome. I was just hoping for something more original.
« Last Edit: September 03, 2013, 07:35:00 pm by Deep Thought »

#### DJ Omnimaga
• Former TI programmer
• CoT Emeritus
• LV15 Omnimagician (Next: --)
• Posts: 55877
• Rating: +3151/-232
• CodeWalrus founder & retired Omnimaga founder

##### Re: Sidescrolling
« Reply #8 on: May 29, 2010, 12:39:21 am »

Pokémon Purple is an example of scrolling on the home screen, but on the graph screen it is impossible to have fast scrolling with good graphics without the help of ASM. Also, unless you use Text(-1 on the graph screen, you will need to clear the entire screen every time you move, meaning that a lot of flicker will occur.

#### Deep Toaster
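The approach discussed in the thread, redrawing a window of the map that follows the character, can be sketched outside TI-BASIC. A minimal Python sketch (hypothetical: the map string reuses the O/M/W homescreen characters mentioned above, and `render_viewport` is my own name):

```python
def render_viewport(level, player_col, width=16):
    """Return the width-character slice of a one-row level map that follows
    the player; the camera clamps at the edges so the view stays on the map."""
    left = max(0, min(player_col - width // 2, len(level) - width))
    return level[left:left + width]

# O = block, M/W = spikes, . = empty (character choices from the thread above)
level = "....O....MW...O.......O..MW....."
for col in (2, 16, 30):
    print(render_viewport(level, col))  # the window slides as the player moves
```

Each move redraws the whole window, which is exactly the cost the replies warn about on a calculator.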
https://findformula.co.in/variables-and-constants/
# Variables and Constants

## Variables

### Definition of Variables

A variable may be defined as a quantity that has no fixed value. The value of a variable changes and can take various values over time.

### Symbol of Variables

Usually variables are denoted by the letters $x, y, z, l, m, n$ etc.

## Constants

### Definition of a Constant

A constant is a quantity that has a fixed value. Any number that can be represented on the number line is a constant. The value of a constant never changes over time.

### Examples of Constants

$1, 2, 3, -1, -2, -5, 0, 1.5, 6.5$ etc.

## Difference between Constants and Variables

• The main difference between constants and variables is that constants have a fixed value while variables do not.
• The value of a constant remains the same throughout; the value of a variable changes over time.
• Constants are represented by numbers, while variables are represented by letters or symbols.

## Solved Examples on Constants and Variables

• An algebraic equation is given by $2x+4=0$. Find the variables and constants.

Solution: In the algebraic equation $2x+4=0$, $x$ is the variable and $4$ is a constant. The number $2$ multiplied with $x$ is also a constant, and is termed the coefficient. The solution of the equation is the value of $x$ for which the equation is satisfied:

$2x + 4 = 0$
$2x = -4$
$x = \frac{-4}{2}$
$x = -2$

Therefore, $x = -2$ is the solution of the algebraic equation.

• Find the value of $x$ for the algebraic equation $x + 2 = \frac{5}{2}$.

Solution: Given $x + 2 = \frac{5}{2}$,

$x = \frac{5}{2} - 2$
$x = \frac{5 - 4}{2}$
$x = \frac{1}{2}$
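Both solved examples isolate $x$ by moving the constants to one side. A minimal Python sketch using exact fractions (the helper name `solve_linear` is mine, not from the page):

```python
from fractions import Fraction

def solve_linear(a, b):
    """Solve a*x + b = 0 exactly; a and b are the constants, x the variable."""
    if a == 0:
        raise ValueError("a must be nonzero")
    return Fraction(-b, a)

print(solve_linear(2, 4))                   # 2x + 4 = 0  ->  -2
print(solve_linear(1, 2 - Fraction(5, 2)))  # x + 2 = 5/2 rewritten as x - 1/2 = 0  ->  1/2
```

Using `Fraction` keeps the answer as $\frac{1}{2}$ rather than the float 0.5, matching the hand computation.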
https://csrc.nist.gov/Presentations/2012/Education-is-Key-to-Understanding-CyberBullying-an
# Education is Key to Understanding CyberBullying and the Dangers of Social Network Sites

March 29, 2012

#### Location

National Institute of Standards and Technology, Gaithersburg, Maryland

Created September 22, 2016; updated June 22, 2020
https://direct.mit.edu/neco/article-abstract/30/9/2472/8395/ASIC-Implementation-of-a-Nonlinear-Dynamical-Model?redirectedFrom=PDF
## Abstract

A hippocampal prosthesis is a very large scale integration (VLSI) biochip that needs to be implanted in the biological brain to treat a cognitive dysfunction. In this letter, we propose a novel low-complexity, small-area, and low-power programmable hippocampal neural network application-specific integrated circuit (ASIC) for a hippocampal prosthesis. It is based on the nonlinear dynamical model of the hippocampus, namely the multi-input, multi-output (MIMO) generalized Laguerre-Volterra model (GLVM), and can realize real-time prediction of hippocampal neural activity. A new hardware architecture, a storage space configuration scheme, and low-power convolution and Gaussian random number generator modules are proposed. The ASIC is fabricated in 40 nm technology with a core area of 0.122 mm$^2$ and test power of 84.4 $\mu$W. Compared with a design based on the traditional architecture, experimental results show that the core area of the chip is reduced by 84.94% and the core power is reduced by 24.30%.
https://danielha.tk/2018/07/27/load-projects-eclipse.html
# How to load any Maven Java Project into Eclipse

A well set-up project in your IDE makes development and debugging work much more productive. If you have an existing Maven project, import it using File > Import… > Maven > Import Existing Maven Project and follow the wizard.

Depending on your project structure, Eclipse may not recognise the source directories, and Java packages are not displayed correctly in the Project Explorer. To teach Eclipse where to find Java files and packages, go to the project settings and change the project nature to Java. Then Right Click > Properties > Java Build Path > Source Tab > Add Folder > select the src folder. This makes Eclipse aware of additional Java source directories, and they should show up in the project.

If your project is messed up in Eclipse and you want to start the import from scratch, you have to delete all of the following files in your project folder. (You can use Windows search to find them, then select all and hit Delete.)

- .classpath
- .settings (files and directories)
- .project
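If you prefer a script to hunting the files down by hand, the cleanup step can be automated. A minimal Python sketch (the function name `clean_eclipse_metadata` is mine; it deletes exactly the three entries listed above):

```python
from pathlib import Path
import shutil

# Eclipse's generated project metadata, as listed above
ECLIPSE_FILES = (".classpath", ".project", ".settings")

def clean_eclipse_metadata(project_root):
    """Delete Eclipse project metadata so the next Maven import starts clean.
    Returns the names that were actually removed."""
    removed = []
    for name in ECLIPSE_FILES:
        target = Path(project_root) / name
        if target.is_dir():
            shutil.rmtree(target)   # .settings is a directory
            removed.append(name)
        elif target.exists():
            target.unlink()         # .classpath and .project are files
            removed.append(name)
    return removed
```

Run it as `clean_eclipse_metadata(".")` from the project root; everything else (sources, pom.xml) is left untouched.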
https://conan777.wordpress.com/2011/03/
## Archive for March, 2011

### The Carnot-Carathéodory metric

March 22, 2011

I remember looking at Gromov's notes "Carnot-Carathéodory spaces seen from within" a long time ago and didn't get anywhere. Recently I encountered it again through Professor Guth. This time, with his effort in explaining, I did get some ideas. This thing is indeed pretty cool~ So I decided to write an elementary introduction about it here. We will construct such a metric in $\mathbb{R}^3$.

In general, if we have a Riemannian manifold, the Riemannian distance between two given points $p, q$ is defined as

$\inf_{\gamma \in \Gamma(p,q)} \int_0^1 ||\gamma'(t)|| dt$

where $\Gamma(p,q)$ is the collection of all differentiable curves $\gamma$ connecting the two points.

However, if we have a lower-dimensional sub-bundle $E(M)$ of the tangent bundle (depending continuously on the base point), we may attempt to define the metric

$d(p,q) = \inf_{\gamma \in \Gamma'} \int_0^1 ||\gamma'(t)|| dt$

where $\Gamma'$ is the collection of curves connecting $p, q$ with $\gamma'(t) \in E(M)$ for all $t$ (i.e. we are only allowed to go along directions in the sub-bundle).

Now if we attempt to do this in $\mathbb{R}^3$, the first thing we may try is to let the sub-bundle be, say, the $xy$-plane at all points. It's easy to realize that we are then 'stuck' at the same height: any two points with different $z$-coordinates will have no curve connecting them (hence the distance is infinite). The resulting metric space is continuum many discrete copies of $\mathbb{R}^2$. Of course that's no longer homeomorphic to $\mathbb{R}^3$. Hence for the metric to be finite, we have to require accessibility of the sub-bundle: any point is connected to any other point by a curve with derivatives in $E(M)$.
For the metric to be equivalent to our original Riemannian metric (meaning it generates the same topology), we need $E(M)$ to be locally accessible: any point less than $\delta$ away from the original point $p$ can be connected to $p$ by a curve of length $< \varepsilon$ going along $E(M)$.

At first glance the existence of a (non-trivial) such metric may not seem obvious. Let's construct one on $\mathbb{R}^3$ that generates the same topology.

To start, we first identify our $\mathbb{R}^3$ with the $3 \times 3$ real-entry Heisenberg group $H_3$ (all $3 \times 3$ upper triangular matrices with "1"s on the diagonal), i.e. we have the homeomorphism

$h: (x,y,z) \mapsto \left( \begin{array}{ccc} 1 & x & z \\ 0 & 1 & y \\ 0 & 0 & 1 \end{array} \right)$

Let $g$ be a left-invariant metric on $H_3$. In the Lie algebra $T_e(H_3)$ (the tangent space at the identity element), the elements

$X = \left( \begin{array}{ccc} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right), \quad Y = \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{array} \right) \quad \mbox{and} \quad Z = \left( \begin{array}{ccc} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right)$

form a basis.

At each point, we take the two-dimensional sub-bundle $E(H_3)$ of the tangent bundle generated by infinitesimal left translations by $X, Y$. Since the metric $g$ is left-invariant, we are free to restrict it to $E(H_3)$, i.e. we have $||X_p|| = ||Y_p|| = 1$ for each $p \in H_3$.

The interesting thing about $H_3$ is that all points are accessible from the origin via curves everywhere tangent to $E(H_3)$. In other words, any point can be obtained from any other point by left translations by multiples of the elements $X$ and $Y$.
The "unit grid" in $\mathbb{R}^3$ under this sub-Riemannian metric looks something like the following. Since we have

$\left( \begin{array}{ccc} 1 & x & z \\ 0 & 1 & y \\ 0 & 0 & 1 \end{array} \right) \left( \begin{array}{ccc} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right) = \left( \begin{array}{ccc} 1 & x+1 & z \\ 0 & 1 & y \\ 0 & 0 & 1 \end{array} \right)$,

the original $x$-direction stays the same, i.e. a bunch of horizontal lines connecting the original $yz$-planes orthogonally. However, if we look at a translation by $Y$, we have

$\left( \begin{array}{ccc} 1 & x & z \\ 0 & 1 & y \\ 0 & 0 & 1 \end{array} \right) \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{array} \right) = \left( \begin{array}{ccc} 1 & x & z+x \\ 0 & 1 & y+1 \\ 0 & 0 & 1 \end{array} \right)$

i.e. a unit-length $Y$-vector not only adds $1$ to the $y$-direction but also adds a height $x$ to $z$.

We can now try to see the rough shape of balls by only allowing ourselves to go along the unit grid formed by the $X$ and $Y$ lines constructed above. This corresponds to accessing all matrices with integer entries by words in $X$ and $Y$.

The first question to ask is perhaps how to go from $(0,0,0)$ to $(0,0,1)$, since going along the $z$-axis is disabled. Observe that going through the loop $X Y X^{-1} Y^{-1}$ works. We conclude that $d_C((0,0,0), (0,0,1)) \leq 4$; in fact, up to a constant, going along such a loop gives the actual distance.

At this point one might feel that going along the $z$-axis in the C-C metric always takes longer than the ordinary distance. Giving it a bit more thought, we will find this is NOT the case: imagine what happens if we want to go from $(0,0,0)$ to $(0,0,10000)$. One way to do this is to go along $X$ for 100 steps, then along $Y$ for 100 steps (at this point each step in $Y$ raises the $z$-coordinate by $100$), then $X^{-100} Y^{-100}$. This gives $d_C((0,0,0), (0,0,10000)) \leq 400$.
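The step counting is easy to check by direct matrix multiplication, which in the $(x, y, z)$ coordinates reads $(x,y,z) \cdot (x',y',z') = (x+x',\, y+y',\, z+z'+xy')$. A quick Python check (function names are mine) that the word $X^N Y^N X^{-N} Y^{-N}$ of length $4N$ ends at $(0,0,N^2)$:

```python
def mul(p, q):
    """Multiply two Heisenberg matrices written in coordinates (x, y, z):
    (x, y, z) * (x', y', z') = (x + x', y + y', z + z' + x*y')."""
    return (p[0] + q[0], p[1] + q[1], p[2] + q[2] + p[0] * q[1])

# generators and their inverses, as unit steps along the grid
X, Y = (1, 0, 0), (0, 1, 0)
Xinv, Yinv = (-1, 0, 0), (0, -1, 0)

def walk(start, steps):
    p = start
    for s in steps:
        p = mul(p, s)
    return p

N = 100
word = [X] * N + [Y] * N + [Xinv] * N + [Yinv] * N  # 4N unit steps
print(walk((0, 0, 0), word))  # (0, 0, 10000): height N**2 reached in 4N steps
```

Each of the $N$ steps along $Y$ happens at $x = N$, so it adds $N$ to $z$; the return steps happen at $x = 0$ or with $y' = 0$ and leave $z$ untouched.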
To illustrate, let's first see the loop from $(0,0,0)$ to $(0,0,4)$: the loop $X^2 Y^2 X^{-2} Y^{-2}$ has length $8$ (a much better rate than the length-4 loop, which gains only $1$ unit in the $z$-direction), i.e. for large $z$ it's much more efficient to travel in the C-C metric:

$d_C( (0,0,0), (0,0,N^2)) = 4N$

In fact, we can see that the ball of radius $R$ is roughly a rectangular box with dimensions $R \times R \times R^2$ (meaning bounded from both inside and outside with a constant factor). Hence the volume of balls grows like $R^4$. Balls are very "flat" when they are small and very "long" when they are large.

### Slides for my little Anosov talk

March 14, 2011

As promised~ have fun!

*Actually I'm a strong supporter of the idea that all talks should be done on blackboards… However, this time, since the talk is only 25 minutes long and it takes me 5 minutes to draw a product Cantor set, I had to use slides… Hence I made fake blackboard slides…

### A survey on ergodicity of Anosov diffeomorphisms

March 7, 2011

This is in part a preparation for my 25-minute talk in a workshop here at Princeton next week. (Never given a short talk before… I'm super nervous about this >.<) In this little survey post I wish to list some background and historical results which might appear in the talk. Let me post the (tentative) abstract first:

——————————————————

Title: Volume preserving extensions and ergodicity of Anosov diffeomorphisms

Abstract: Given a $C^1$ self-diffeomorphism of a compact subset in $\mathbb{R}^n$, from Whitney's extension theorem we know exactly when it $C^1$-extends to $\mathbb{R}^n$. How about volume preserving extensions? It is a classical result that any volume preserving Anosov diffeomorphism of regularity $C^{1+\varepsilon}$ is ergodic. The question is open for $C^1$. In 1975 Rufus Bowen constructed a (non-volume-preserving) Anosov map on the 2-torus with an invariant positive-measure Cantor set. Various attempts have been made to make the construction volume preserving.
By studying the above extension problem we conclude, in particular, that the Bowen-type mapping on positive-measure Cantor sets can never be volume-preservingly extended to the torus. This is joint work with Charles Pugh and Amie Wilkinson.

——————————————————

A diffeomorphism $f: M \rightarrow M$ is said to be Anosov if there is a splitting of the tangent space $TM = E^u \oplus E^s$ that is invariant under $Df$, such that vectors in $E^u$ are uniformly expanding and vectors in $E^s$ are uniformly contracting. In his thesis, Anosov gave an argument that proves:

Theorem: (Anosov '67) Any volume preserving Anosov diffeomorphism on a compact manifold with regularity $C^2$ or higher is ergodic.

This result was later generalized to Anosov diffeomorphisms with regularity $C^{1+\varepsilon}$, i.e. $C^1$ with an $\varepsilon$-Hölder condition on the derivative. It is a curious open question whether this is true for maps that are strictly $C^1$.

The methods for proving ergodicity for maps with higher regularity, which rely on the stable and unstable foliations being absolutely continuous, certainly do not carry through to the $C^1$ case: in 1975, Rufus Bowen gave the first example of an Anosov map that's only $C^1$, with non-absolutely continuous stable and unstable foliations. In fact his example is a modification of the classical Smale horseshoe on the two-torus, non-volume-preserving but with an invariant Cantor set of positive Lebesgue measure.

A simple observation is that the Bowen map is in fact volume preserving on the Cantor set. Ever since then, it's been of interest to extend Bowen's example to the complement of the Cantor set in order to obtain a volume preserving Anosov diffeo that's not ergodic. In 1980, Robinson and Young extended the Bowen example to a $C^1$ Anosov diffeomorphism that preserves a measure that's absolutely continuous with respect to the Lebesgue measure.
In a recent paper, Artur Avila showed:

Theorem: (Avila '10) $C^\infty$ volume preserving diffeomorphisms are $C^1$-dense in $C^1$ volume preserving diffeomorphisms.

Together with other facts about Anosov diffeomorphisms, this implies that the generic $C^1$ volume preserving diffeomorphism is ergodic, making the question of whether such an example exists even more curious.

In light of this problem, we study the much more elementary question:

Question: Given a compact set $K \subseteq \mathbb{R}^2$ and a self-map $f: K \rightarrow K$, when can the map $f$ be extended to an area-preserving $C^1$ diffeomorphism $F: \mathbb{R}^2 \rightarrow \mathbb{R}^2$?

Of course, a necessary condition for such an extension to exist is that $f$ extends to a $C^1$ diffeomorphism $F$ (perhaps not volume preserving) and that $DF$ has determinant $1$ on $K$. Whitney's extension theorem gives a necessary and sufficient criterion for this. Hence the unknown part of our question is just:

Question: Given $K \subseteq \mathbb{R}^2$ and $F \in \mbox{Diff}^1(\mathbb{R}^2)$ s.t. $\det(DF_p) = 1$ for all $p \in K$, when is there a $G \in \mbox{Diff}^1_\omega(\mathbb{R}^2)$ with $G|_K = F|_K$?

There are trivial restrictions on $K$: if $K$ separates $\mathbb{R}^2$ and $F$ switches complementary components with different volume, then $F|_K$ can never have a volume preserving extension.

A positive result along this line would be the following slight modification of Moser's theorem:

Theorem: Any $C^{r+1}$ diffeomorphism of $S^1$ can be extended to a $C^r$ area-preserving diffeomorphism of the unit disc $D$.

For more details see this previous post. Applying methods of generating functions and Whitney's extension theorem, as in this paper, we can in fact get rid of the loss of one derivative, i.e.

Theorem: (Bonatti, Crovisier, Wilkinson '08) Any $C^1$ diffeo of the circle can be extended to a volume-preserving $C^1$ diffeo of the disc.
With the above theorem, should we expect the condition of switching complementary components of the same volume to also be sufficient? No. As seen in the previous post, restricting to the case where $F$ only permutes complementary components of the same volume is not enough. In the example, $K$ does not separate the plane, $f: K \rightarrow K$ can be $C^1$ extended, and the extension preserves volume on $K$, yet it is impossible to find an extension preserving the volume on the complement of $K$. The problem here is that there are ‘almost enclosed regions’ with different volumes that are being switched. One might hope this is true at least for Cantor sets (such as in the Bowen case); however, this is still not the case.

Theorem: For any product Cantor set $C = C_1 \times C_2$ of positive measure, the horseshoe map $h: C \rightarrow C$ does not extend to a Hölder continuous area-preserving map on the torus.

Hence in particular we get that no volume preserving extension of the Bowen map is possible (not even a Hölder continuous one).
https://www.r-bloggers.com/2017/05/how-to-pimp-your-rprofile-2/
After you’ve been using R for a little while, you start to notice people talking about their .Rprofile as if it’s some mythical being. There’s nothing magical about it, but it can be a big time-saver if you find yourself typing things like summary() or, the ever-hated, stringsAsFactors = FALSE, ad nauseam.

Where is my .Rprofile?

The simple answer is, if you don’t know, then you probably don’t have one. RStudio doesn’t include one unless you tell it to. On Mac and Linux the .Rprofile is usually a hidden file in your user’s home directory. On Windows the most common place is C:\Program Files\R\Rx.x\etc.

Check to see if I have an .Rprofile

Before creating a new profile, fire up R and check to see if you have an existing .Rprofile lying around. Like I said, it’s usually a hidden file.

How to create an .Rprofile

Assuming you don’t already have one, these files are easy to create. Open a text editor, name your blank file .Rprofile with no trailing extension, and place it in the appropriate directory. After populating the file, you’ll have to restart R for the settings to take effect.

Sample .Rprofile

Below is a snapshot of mine. Of course, you can make this as simple or as complex as you like.

Limitations and gotchas

The major disadvantage to all this is code portability. For example, if you set your .Rprofile to load dplyr on every session, when someone else tries to run your code, it won’t work. For this reason, I’m a little picky about my settings, opting for functions that will only be used in interactive sessions.
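The original snapshot is an image, so here instead is a minimal illustrative .Rprofile; these settings are a hedged sketch of the kind of thing such a file contains, not the author’s actual file:

```r
# Illustrative .Rprofile (not the author's actual file).
# R sources this at the start of every session.

options(stringsAsFactors = FALSE)                        # the ever-hated default
options(repos = c(CRAN = "https://cloud.r-project.org")) # skip the mirror prompt

# Keep conveniences out of scripts: only define them in interactive sessions,
# so code you share doesn't silently depend on your profile.
if (interactive()) {
  .env <- new.env()
  # h(x): quick peek at the head and structure of an object
  .env$h <- function(x, n = 6) {
    print(head(x, n))
    str(x)
  }
  attach(.env, name = "my_rprofile_env")
}
```

The interactive() guard is the key design choice here: it addresses the portability gotcha the post ends on.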
https://stats.stackexchange.com/questions/37870/probability-of-visiting-all-other-states-before-return
# Probability of visiting all other states before return

Question (a) Random walk on a clock. Consider the numbers $1, 2, \dots, 12$ written around a clock. Consider a Markov chain that jumps with equal probability to one of the two adjacent numbers each step.

• What is the expected number of steps that $X_n$ will take to return to its starting position?

(My Work) From a result in class, we know that a doubly stochastic transition matrix $p$ for a Markov Chain with $12$ states has the uniform distribution $\pi(x) = 1/12$ for all $x$ as a stationary distribution. We also know that if the chain is irreducible and there exists a stationary distribution (both hypotheses are satisfied) then $\pi(y) = {1\over E_yT_y}$, so the expected time of first return ($E_yT_y$) is 12.

Question (b)

• What is the probability that $X_n$ will visit all of the other states before returning to its starting position?

My Question I am not sure how to compute this probability. My first intuition was to consider $P(T_y > 12)$, but further considering the problem, this seems incorrect because the chain does not have to visit all states before move 12.

• I don't have a full solution, but my guess is that solving this would involve inverting the problem. Call $P_{*n}$ the probability that it will visit the same state in exactly $n$ stops. $P_{*2}$ = 1/12 $P_{*3}$ = 11/12*2/12 = 22/144 = 11/72 – Peter Flom Sep 24 '12 at 0:27

• Oh, and then, sum from $P_{*2}$ to $P_{*11}$ and subtract that from 1 to get your answer. – Peter Flom Sep 24 '12 at 0:35

• This looks equivalent to computing $1 - P(T_y > 12)$ to me? How is it different? – Moderat Sep 24 '12 at 0:56

• Hmm. Maybe it isn't different. But then why does it assume that the chain has to visit all states before move 12? I don't see that. – Peter Flom Sep 24 '12 at 1:03

• I think that is what you assume when you "sum from 2 to 11"? Why is $P_{*n}$ for $n \ge 12$ not accounted for?
These are the probabilities, now I think I may need to take expectations... – Moderat Sep 24 '12 at 1:05

For part (b), you definitely want to use the structure of the graph. Without loss of generality suppose you start at $12$ and your first step is to $1$. Can you say what the probability is that you hit $11$ before you hit $12$?

• I'm not sure what you mean about the probability of being at $11$. If you are at $1$ after the first step, then with probability $1$ you will either get to $11$ without first returning to $12$, or else you will get to $12$ without first visiting $11$. If you go from $1$ to $11$, a priori that could be two net counterclockwise steps or ten net clockwise steps, but if you know that you don't visit $12$ on the way, you can rule out one of these. – Douglas Zare Sep 24 '12 at 3:46

• So we went over Exit Distributions today, and for this problem, would I consider that the probability of going from $1$ to $11$ before $12$ equal to $g(x) = 1 + \sum_y p(x,y)g(y)$ where $g(x)$ is the expected time to complete the circuit when you are at $x$? – Moderat Sep 26 '12 at 2:15

• Once you get to $1$, think of $11$ and $12$ as absorbing states. – Douglas Zare Sep 26 '12 at 19:51
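Completing Douglas Zare's hint: starting WLOG at 12 with first step to 1, covering all other states before returning is exactly the event of hitting 11 before 12, which by gambler's ruin on the path 12, 1, 2, …, 11 has probability 1/11. Both answers can be checked by simulation; the script below is a sketch, not part of the original thread:

```python
import random

def simulate(n_states=12, trials=100_000, seed=1):
    """Estimate E[return time] and P(all other states visited before return)
    for the symmetric random walk on a cycle of n_states vertices."""
    rng = random.Random(seed)
    total_steps = 0
    covered = 0
    for _ in range(trials):
        pos, visited, steps = 0, {0}, 0
        while True:
            pos = (pos + rng.choice((-1, 1))) % n_states
            steps += 1
            if pos == 0:          # returned to start
                break
            visited.add(pos)
        total_steps += steps
        if len(visited) == n_states:
            covered += 1
    return total_steps / trials, covered / trials

mean_return, p_cover = simulate()
print(mean_return, p_cover)  # expect ≈ 12 and ≈ 1/11 ≈ 0.0909
```

This agrees with $E_yT_y = 1/\pi(y) = 12$ from part (a) and with the gambler's-ruin value $1/11$ for part (b).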
http://mathonline.wikidot.com/riemann-integrable-functions-as-upper-functions
# Riemann Integrable Functions as Upper Functions

We will now classify a large family of very important upper functions. In the following theorem we will see that Riemann integrable functions are also upper functions.

Theorem 1: Let $f$ be a function defined on the closed and bounded interval $I = [a, b]$. If $f$ is bounded and $f$ is continuous almost everywhere on $I$, then $f$ is an upper function on $I$ and furthermore $\displaystyle{\int_I f(x) \: dx = \int_a^b f(x) \: dx}$.

• Proof: For each $n \in \mathbb{N}$ let $P_n = \{ a = x_0, x_1, ..., x_{2^n} = b \} \in \mathscr{P}[a, b]$ denote the partition that subdivides $[a, b]$ into $2^n$ subintervals of equal length, that is, for every $n$, $x_0 = a$ and for every $k \in \{ 1, 2, ..., 2^n \}$:

(1) \begin{align} \quad x_k = a + k \left ( \frac{b - a}{2^n} \right ) \end{align}

• Then for every $n \in \mathbb{N}$, the next partition $P_{n+1}$ of this form can be obtained by equally subdividing the $2^n$ subintervals created by $P_n$ to obtain the $2^{n+1}$ subintervals created by $P_{n+1}$. Now since $f$ is bounded on the interval $[a, b]$, $f$ is also bounded on any subinterval of $[a, b]$. For each fixed $n$ and each $k \in \{ 1, 2, ..., 2^n \}$ let:

(2) \begin{align} \quad m_k = \inf \{ f(x) : x \in [x_{k-1}, x_k] \} \end{align}

• Define the step function $f_n(x)$ as follows:

(3) \begin{align} \quad f_n(x) = \left\{\begin{matrix} f(a) & \mathrm{if} \: x = a \\ m_k & \mathrm{if} \: x \in (x_{k-1}, x_k], k \in \{ 1, 2, ..., 2^n \} \end{matrix}\right. \end{align}

• Then $(f_n(x))_{n=1}^{\infty}$ is a sequence of step functions. Every step function satisfies $f_n(x) \leq f(x)$, and $(f_n(x))_{n=1}^{\infty}$ is clearly an increasing sequence of step functions.

• We need to show that $(f_n(x))_{n=1}^{\infty}$ converges to $f(x)$ almost everywhere on $I$. Let $x_0$ be any point of continuity of $f$.
Since $f$ is continuous at $x_0$ we have that for every $\epsilon > 0$ there exists a $\delta > 0$ such that if $\mid x - x_0 \mid < \delta$ then:

(4) \begin{align} \quad \mid f(x) - f(x_0) \mid < \epsilon \end{align}

• Now choose $N$ sufficiently large such that $\left ( \frac{b - a}{2^N} \right ) < \delta$. Then for $n \geq N$ we see that:

(5) \begin{align} \quad \left ( \frac{b - a}{2^n} \right ) \leq \left ( \frac{b - a}{2^N} \right ) < \delta \end{align}

• So for $n \geq N$ we have, for the partitions $P_n$, that if $x, x_0 \in (x_{k-1}, x_k]$ for some $k \in \{ 1, 2, ..., 2^n \}$ then $\mid x - x_0 \mid \leq x_k - x_{k-1} < \delta$ and so:

(6) \begin{align} \quad \mid f(x) - f(x_0) \mid < \epsilon \end{align}

• So surely $\mid m_k - f(x_0) \mid \leq \epsilon$. But we defined $f_n(x) = m_k$ for all $x \in (x_{k-1}, x_k]$, and so for all $n \geq N$ we see that $\mid f_n(x_0) - f(x_0) \mid \leq \epsilon$.

• Thus $\lim_{n \to \infty} f_n(x_0) = f(x_0)$ at every point of continuity $x_0 \in I$ of $f$. In other words, the sequence $(f_n(x))_{n=1}^{\infty}$ converges to $f$ at every point of continuity of $f$. But $f$ is continuous almost everywhere, which implies that $(f_n(x))_{n=1}^{\infty}$ converges to $f$ almost everywhere on $I$.

• We now show that $\displaystyle{\lim_{n \to \infty} \int_I f_n(x) \: dx}$ is finite. Note that for all $n \in \mathbb{N}$ and for $M$ any upper bound of $f$ on $I$:

(7) \begin{align} \quad \int_I f_n(x) \: dx = \sum_{k=1}^{2^n} m_k(x_k - x_{k-1}) \leq \sum_{k=1}^{2^n} M(x_k - x_{k-1}) = M \sum_{k=1}^{2^n} (x_k - x_{k-1}) = M(b - a) \end{align}

• Therefore the increasing sequence $\displaystyle{\left ( \int_I f_n(x) \: dx \right )_{n=1}^{\infty}}$ is bounded above and converges to a finite number.
• Furthermore, if $L(P_n, f, x)$ denotes the lower Riemann-Stieltjes sum of $f$ associated with the partition $P_n$ then: (8) \begin{align} \quad \int_I f(x) \: dx = \lim_{n \to \infty} \int_I f_n(x) \: dx = \lim_{n \to \infty} \sum_{k=1}^{2^n} m_k(x_k - x_{k-1}) = \lim_{n \to \infty} L(P_n, f, x) \end{align} • We know that $f$ is Riemann integrable on $[a, b]$ since $f$ is continuous almost everywhere on $I$ and so the set of discontinuities of $f$ on $I$ has measure $0$. So by Riemann's condition $\displaystyle{ \underline{\int_a^b} f(x) \: dx = \int_a^b f(x) \: dx}$. But $\displaystyle{\lim_{n \to \infty} L(P_n, f, x) = \underline{\int_a^b} f(x) \: dx}$ since as $n \to \infty$, $P_n$ gets finer and $\| P_n \| \to 0$. This shows that: (9) \begin{align} \quad \int_I f(x) \: dx = \int_a^b f(x) \: dx \quad \blacksquare \end{align}
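The construction in the proof can be illustrated numerically; the sketch below (not part of the original article) computes the integrals of the dyadic lower step functions $f_n$ for $f = \sin$ on $[0, \pi]$ and shows them increasing toward $\int_0^\pi \sin x \, dx = 2$:

```python
import math

def lower_dyadic_sum(f, a, b, n, samples_per_cell=50):
    """Integral of the lower step function f_n built on the dyadic partition
    of [a, b] into 2**n equal subintervals, with each infimum m_k estimated
    by sampling (exact for the unimodal sin used here, since endpoints are
    sampled and each cell attains its minimum at an endpoint)."""
    total = 0.0
    width = (b - a) / 2**n
    for k in range(2**n):
        x0 = a + k * width
        m_k = min(f(x0 + t * width / (samples_per_cell - 1))
                  for t in range(samples_per_cell))
        total += m_k * width
    return total

# Lower sums over the refining partitions P_2, P_4, P_6, P_8 increase toward 2.
approx = [lower_dyadic_sum(math.sin, 0.0, math.pi, n) for n in (2, 4, 6, 8)]
print(approx)
```

Refining the partition can only raise each infimum, so the sequence is increasing and bounded by the integral, exactly as in steps (7)–(9).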
https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00382/101874/Optimizing-over-subsequences-generates-context
## Abstract Phonological generalizations are finite-state. While Optimality Theory is a popular framework for modeling phonology, it is known to generate non-finite-state mappings and languages. This paper demonstrates that Optimality Theory is capable of generating non-context-free languages, contributing to the characterization of its generative capacity. This is achieved with minimal modification to the theory as it is standardly employed. ## 1 Introduction Phonological generalizations are finite-state (Johnson, 1972; Kaplan and Kay, 1994; see Heinz, 2018, for a recent overview); that is, input-output mappings can be modeled using finite-state transducers and phonotactic well-formedness can be modeled using finite-state acceptors. Optimality Theory (OT; Prince and Smolensky, 1993/2004) is a framework that is commonly used to model phonology. While some restricted variants of OT are finite-state (Frank and Satta, 1998; Eisner, 2000, 2002; Riggle, 2004; see Hulden, 2017, for a recent overview), standard OT, as it is employed by practicing phonologists, is known to generate non-finite-state mappings and languages (Eisner, 1997; Frank and Satta, 1998). OT is a special instance of Harmonic Grammar (Legendre et al., 1990), which can model arbitrary computations (Smolensky, 1992). While the exact generative capacity of OT has not yet been characterized, it has recently been shown to produce non-context-free mappings (Lamont, 2019a, b). This paper contributes to the literature on OT by demonstrating its capacity to generate non-context-free languages using constraints defined over subsequences. Subsequences are finite literals composed of ordered symbols that are not necessarily adjacent.1 They contrast with substrings, whose constituent elements are contiguous. Figure 1 illustrates the subsequences of length 2 in the string example. Of these twenty-one subsequences, six are also substrings of length 2: e… x, x… a, a… m, m… p, p… l, l… e. 
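The count in Figure 1 is easy to verify mechanically; a short sketch (not from the paper):

```python
from itertools import combinations

def length2_subsequences(s):
    """Distinct ordered pairs (s[i], s[j]) with i < j: the length-2
    subsequences of s (elements need not be adjacent)."""
    return {(s[i], s[j]) for i, j in combinations(range(len(s)), 2)}

subs = length2_subsequences("example")
print(len(subs))  # 21, matching Figure 1
```

Note that e… e is a subsequence (the two e's of "example" are not adjacent) but not a substring, while e… x is both.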
Figure 1: The string example contains twenty-one subsequences of length 2: e… x, e… a, e… m, e… p, e… l, e… e, x… a, x… m, x… p, x… l, x… e, a… m, a… p, a… l, a… e, m… p, m… l, m… e, p… l, p… e, l… e.

In the literature on phonotactics as formal languages, subsequences have been used to model non-local phenomena (Heinz, 2007, 2010, 2014; Rogers et al., 2010; Graf, 2017). For example, if a language disallows words from surfacing with more than one lateral consonant, it can be modeled as banning the subsequence l…l. Languages defined by banning a finite set of subsequences belong to the Strictly Piecewise languages, which are properly contained within the class of regular languages. Strictly Piecewise languages impose inviolable constraints on subsequences: A string belongs to a language if and only if it does not contain any banned subsequences. In OT, all constraints are violable, and violations are minimized whenever possible. Consequently, in addition to modeling non-local restrictions, constraints on subsequences are often used to minimize the distance between two objects (McCarthy and Prince, 1993; Hyde, 2012, 2016). For example, Figure 2 illustrates a string of syllables, represented as σ, that belong to a prosodic word, whose edges are marked with square brackets. Two syllables are parsed into a foot, indicated by parentheses. The number of )…σ… ] subsequences indicates how far the foot is from the right edge of the prosodic word, calculated over intervening syllables.

Figure 2: Minimizing the number of )…σ… ] subsequences minimizes the distance between the right edge of the foot and the right edge of the prosodic word.
When only one foot is parsed, aligning it to the right edge of the prosodic word eliminates )…σ… ] subsequences: [σσσσσ(σσ)]. However, they are unavoidable when multiple feet are parsed. In these cases, the pressure to minimize )…σ… ] subsequences determines the prosodification. For example, Table 1 illustrates four parses of a seven syllable string that contain three disyllabic feet and one monosyllabic foot. The position of the monosyllabic foot affects the total number of )…σ… ] subsequences.

Table 1: Right-aligning a monosyllabic foot in odd-parity syllable strings minimizes the number of )…σ… ] subsequences.

    Parse                Total
    [(σ)(σσ)(σσ)(σσ)]    12
    [(σσ)(σ)(σσ)(σσ)]    11
    [(σσ)(σσ)(σ)(σσ)]    10
    [(σσ)(σσ)(σσ)(σ)]     9

It is not possible to replicate these effects with inviolable constraints on subsequences. Formally, this is because some strings that are not in the language are themselves subsequences of strings that are in the language. For example, to ban the string *[(σ)(σσ)], one must ban one of its subsequences. However, because *[(σ)(σσ)] is itself a subsequence of [(σσ)(σσ)], the latter contains every subsequence of the former, and *[(σ)(σσ)] cannot be banned without incorrectly banning [(σσ)(σσ)]. This example illustrates that violable constraints on subsequences in OT generate languages more expressive than Strictly Piecewise languages. Along similar lines, Koser and Jardine (2020) demonstrate that violable constraints on substrings in OT are more expressive than inviolable constraints. Optimization and violability contribute much more expressivity than these results suggest.
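The totals in Table 1 can be reproduced by counting, for each foot-closing parenthesis, the syllables between it and the word-final bracket; a sketch (not from the paper):

```python
def right_edge_violations(parse):
    """Count )…σ…] subsequences in a single prosodic word: for each ')',
    one violation per σ between it and the closing ']'."""
    total = 0
    for i, c in enumerate(parse):
        if c == ')':
            total += parse[i + 1:parse.index(']', i)].count('σ')
    return total

parses = ["[(σ)(σσ)(σσ)(σσ)]",
          "[(σσ)(σ)(σσ)(σσ)]",
          "[(σσ)(σσ)(σ)(σσ)]",
          "[(σσ)(σσ)(σσ)(σ)]"]
print([right_edge_violations(p) for p in parses])  # [12, 11, 10, 9]
```

The minimum, 9, is achieved by right-aligning the monosyllabic foot, as the table states.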
Eisner (1997, 2000) demonstrated that OT can generate context-free languages with subsequence constraints, and this paper pushes his result into non-context-free languages. The main result of this paper has wide-reaching consequences for phonologists, as many standard constraint families are defined over subsequences. Examples include alignment constraints (McCarthy and Prince, 1993; Hyde, 2012, 2016), conjoined constraints (Smolensky, 1993, 2006; Alderete, 1997), co-occurrence constraints (Suzuki, 1998; Pulleyblank, 2002), Share constraints (McCarthy, 2010; Mullin, 2011), and the family of surface correspondence constraints (Walker, 2000; Hansson, 2001, 2010; Rose and Walker, 2004; Bennett, 2013, 2015). Eisner’s result, that OT with subsequence constraints generates context-free languages, is presented in section 2, which further argues that restricting the set of subsequence constraints available to OT does not limit its generative capacity. This paper’s contribution, that OT generates context-sensitive languages with constraints on subsequences, is presented in section 3. The result is illustrated with case studies on prosodic parsing and non-local dissimilation, and a proof of the general case is provided.

## 2 Violable Subsequence Constraints Can Divide Strings into Halves

Eisner (1997, 2000) demonstrated that with violable constraints on subsequences, mappings in Optimality Theory can target the centers of strings. In his example, the Midpoint Pathology, a feature in the input shifts so as to surface in the center of the output. Taking stress as that feature, the Midpoint Pathology is defined in (1) as an input-output mapping, where σ represents a syllable, and $σ´$ represents a stressed syllable. The output language is homomorphic to the archetypal non-finite-state language $a^nb^n$, with the allowance of an additional a or b.
• (1) $F_{\text{midpoint}}: \sigma^i \acute{\sigma} \sigma^j \mapsto \sigma^k \acute{\sigma} \sigma^l$, where $i + j = k + l$ and $|k - l| \leq 1$

In OT, a set of candidates is generated from an input string, and evaluated by a ranked set of constraints. Constraints are functions that map candidates onto a number of violation marks, assigning as many violations as there are specific structures in the candidate or specific changes made to the input to generate the candidate. The candidate that is lexicographically minimal in its concatenated violations is returned as output. In the Midpoint Pathology, stress shifts to minimize the violations of the alignment constraint (McCarthy and Prince, 1993; McCarthy, 2003) defined in (2), which assigns a candidate as many violations as $σ´…σ…σ$ and $σ…σ…σ´$ subsequences it contains.

• (2) Align(σ, $σ´$, σ): For every syllable σ, if there is a stressed syllable $σ´$, assign one violation mark for every syllable that intervenes between σ and $σ´$.

The tableau in Table 2 illustrates stress shifting onto the middle syllable of a seven syllable string. The candidate set contains the input string /$σ´σσσσσσ$/ (2a) and the six candidates derived from it by shifting the stress to another syllable (2b–g). The middle column shows the number of violations Align(σ, $σ´$, σ) assigns each candidate; for clarity, the violations incurred by each syllable are shown separately. Violations of Align(σ, $σ´$, σ) decrease as stress approaches the center syllable. For completeness, the rightmost column shows the number of violations assigned by the constraint Ident(stress), which penalizes syllables whose stress value in the input was changed. The ordering between constraints indicates that Align(σ, $σ´$, σ) is ranked above Ident(stress). The candidate with medial stress (2d) is returned as output because its violation vector (6,2) is lexicographically minimal. If Ident(stress) were ranked above Align(σ, $σ´$, σ), candidate (2a) would be returned as output.
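The Align(σ, σ́, σ) totals in Table 2 can be recomputed directly from the definition in (2); a sketch (not from the paper):

```python
def align_sym(word):
    """Violations of Align(σ, σ́, σ): for each unstressed syllable, one mark
    per syllable intervening between it and the stressed syllable.
    A word is a list of 0s (σ) with a single 1 (stressed σ)."""
    s = word.index(1)
    return sum(max(abs(i - s) - 1, 0) for i in range(len(word)) if i != s)

# Candidates (2a)-(2g): stress on each of the seven syllables in turn.
violations = [align_sym([1 if i == k else 0 for i in range(7)])
              for k in range(7)]
print(violations)  # [15, 10, 7, 6, 7, 10, 15]
```

The minimum falls on the middle syllable (index 3), reproducing the Midpoint Pathology winner (2d).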
Table 2: The Midpoint Pathology (Eisner, 1997, 2000): Stress shifts onto the middle syllable in odd-parity words, and onto either of two middle syllables in even-parity words. Violations of Align(σ, $σ´$, σ) are split up by syllable.

    /$σ´σσσσσσ$/         Align(σ, $σ´$, σ)                 Ident(stress)
    a.   $σ´σσσσσσ$      0 + 0 + 1 + 2 + 3 + 4 + 5 = 15    0
    b.   $σσ´σσσσσ$      0 + 0 + 0 + 1 + 2 + 3 + 4 = 10    2
    c.   $σσσ´σσσσ$      1 + 0 + 0 + 0 + 1 + 2 + 3 = 7     2
    → d. $σσσσ´σσσ$      2 + 1 + 0 + 0 + 0 + 1 + 2 = 6     2
    e.   $σσσσσ´σσ$      3 + 2 + 1 + 0 + 0 + 0 + 1 = 7     2
    f.   $σσσσσσ´σ$      4 + 3 + 2 + 1 + 0 + 0 + 0 = 10    2
    g.   $σσσσσσσ´$      5 + 4 + 3 + 2 + 1 + 0 + 0 = 15    2

As this example demonstrates, shifting stress onto the medial syllable minimizes the violations of Align(σ, $σ´$, σ); see section 3 for a proof that this holds for any length input. Hyde (2008, 2012, 2016) argues that the Midpoint Pathology is an artifact of the symmetrical nature of Align(σ, $σ´$, σ). In particular, if the constraint penalized only $σ´…σ…σ$ or $σ…σ…σ´$ subsequences and not both, then stress would be drawn to one edge rather than the center. The tableau in Table 3 illustrates this. Here, only $σ´…σ…σ$ subsequences are penalized, and stress is drawn to the right edge, surfacing on either of the last two syllables (3f–g).

Table 3: Asymmetric alignment does not motivate the Midpoint Pathology.

    /$σ´σσσσσσ$/         *$σ´…σ…σ$    Id(stress)
    a.   $σ´σσσσσσ$      15           0
    b.   $σσ´σσσσσ$      10           2
    c.   $σσσ´σσσσ$      6            2
    d.   $σσσσ´σσσ$      3            2
    e.   $σσσσσ´σσ$      1            2
    → f. $σσσσσσ´σ$      0            2
    → g. $σσσσσσσ´$      0            2

While asymmetrical constraints avoid the Midpoint Pathology specifically, they motivate other mappings that target the centers of strings. For example, the constraint AllFeet-Right (Hyde, 2008, 2012, 2016) penalizes )…σ subsequences that occur within a prosodic word. By restricting its application to a given prosodic word, AllFeet-Right is similar to but distinct from penalizing )…σ… ] subsequences. As discussed in section 1, this constraint pulls monosyllabic feet to the right edge of prosodic words, and as the tableau in Table 4 illustrates, it also balances the size of multiple prosodic words. All the candidates in this tableau are parsed into two prosodic words and are exhaustively footed. To save space, constraints that enforce these conditions are omitted. The violations of AllFeet-Right decrease as the difference in size between the two prosodic words decreases, and candidate (4l) is returned as output. In practice, prosodic words typically reflect morphosyntactic structure. However, because the constraints that enforce such correspondences are violable (such as the family of Match constraints; Selkirk, 2011), mappings like those illustrated in Table 4 are predicted to be possible.

Table 4: AllFeet-Right prefers that when two prosodic words are parsed, they are balanced in size. Violations are separated by prosodic word.

    /σσσσσσσσ/                    AllFt-R
    a.   [(σ)][(σ)(σσ)(σσ)(σσ)]   0 + 12 = 12
    b.   [(σ)][(σσ)(σ)(σσ)(σσ)]   0 + 11 = 11
    c.   [(σ)][(σσ)(σσ)(σ)(σσ)]   0 + 10 = 10
    d.   [(σ)][(σσ)(σσ)(σσ)(σ)]   0 + 9 = 9
    e.   [(σσ)][(σσ)(σσ)(σσ)]     0 + 6 = 6
    f.   [(σ)(σσ)][(σ)(σσ)(σσ)]   2 + 6 = 8
    g.   [(σ)(σσ)][(σσ)(σ)(σσ)]   2 + 5 = 7
    h.   [(σ)(σσ)][(σσ)(σσ)(σ)]   2 + 4 = 6
    i.   [(σσ)(σ)][(σ)(σσ)(σσ)]   1 + 6 = 7
    j.   [(σσ)(σ)][(σσ)(σ)(σσ)]   1 + 5 = 6
    k.   [(σσ)(σ)][(σσ)(σσ)(σ)]   1 + 4 = 5
    → l. [(σσ)(σσ)][(σσ)(σσ)]     2 + 2 = 4
• (3) AllFeet-Right: For every foot in a prosodic word, assign one violation for every syllable it precedes within the same prosodic word.

The balanced prosodic word mapping is defined in (4). Like the Midpoint Pathology, its application depends on identifying the center syllable, and its output language is homomorphic to $a^nb^n$ with an extra a or b. Thus, even though asymmetric alignment cannot target the center of a prosodic word, it can target the center of a string parsed into two prosodic words.

• (4) $F_{\text{balanced}}: \sigma^i \mapsto [(\sigma\sigma)^j(\sigma)^k][(\sigma\sigma)^l(\sigma)^m]$, where $i = 2j + k + 2l + m$, $k \leq 1$, $m \leq 1$, and $j = l$

The mappings in this section depend on the same underlying mechanism: divide a string of syllables into two parts, and minimize the subsequences of at least length 2 that occupy each part. This reduces the difference in size between the two parts to at most one syllable. With the Midpoint Pathology, the two parts are defined by syllables that precede the stressed syllable and syllables that follow the stressed syllable. With balanced prosodic words, the two prosodic words define the parts. This mechanism is independent of the subsequences themselves, provided that they are at least of length 2. Thus, restricting which subsequences a constraint can penalize can only block specific mappings. No restriction on the set of subsequences prevents them from dividing strings in half. This is proved formally at the end of section 3. The mappings in this section only divided strings into two equal parts, generating context-free languages. The next section presents mappings that divide strings into three or more equal parts, generating non-context-free languages.
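AllFeet-Right's preference for balanced prosodic words (Table 4) can likewise be checked by counting )…σ subsequences word by word; a sketch (not from the paper):

```python
def allfeet_right(parse):
    """AllFeet-Right: within each prosodic word [...], each foot-closing ')'
    incurs one violation per following σ in the same word."""
    total = 0
    for word in parse.strip("[]").split("]["):
        for i, c in enumerate(word):
            if c == ')':
                total += word[i + 1:].count('σ')
    return total

candidates = ["[(σσ)][(σσ)(σσ)(σσ)]",    # (4e)
              "[(σσ)(σ)][(σσ)(σσ)(σ)]",  # (4k)
              "[(σσ)(σσ)][(σσ)(σσ)]"]    # (4l), the winner
print([allfeet_right(p) for p in candidates])  # [6, 5, 4]
```

The 4+4 split incurs the fewest violations, matching the tableau's output candidate (4l).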
## 3 Violable Subsequence Constraints Can Divide Strings into Arbitrarily Many Equally Sized Parts

As the previous section demonstrated, subsequence constraints can be used to divide strings into two equal parts, generating context-free languages. Parsing strings into more than two equal parts generates non-context-free languages. With balanced prosodic words, this follows from parsing a string of syllables into more than two prosodic words. This can be motivated by hierarchical prosodic structure (Nespor and Vogel, 1986; Selkirk, 1984), rather than stipulated arbitrarily. Figure 3 illustrates a standard five-level prosodic hierarchy, where prosodic words are dominated by phonological phrases, which are dominated by an intonational phrase. By requiring these top two levels to dominate exactly two daughter nodes, the string of syllables is parsed into four prosodic words. Note that because this hierarchy is finite, it can be represented by a finite-state grammar (Yu, 2019).

Figure 3: Hierarchical prosodic structure: syllables (σ) are parsed into feet (F), which are parsed into prosodic words (ω), which are parsed into phonological phrases (ϕ), which are parsed into an intonational phrase (ι).

As expected, AllFeet-Right has two effects on strings parsed into four prosodic words. First, in odd-parity prosodic words, the monosyllabic foot appears at the right edge. Second, no two prosodic words differ in size by more than one syllable. The tableau in Table 5 illustrates these effects with a fourteen syllable string. A large number of candidates are omitted to reduce the size of this tableau.
These include candidates with more or fewer than four prosodic words, and candidates where monosyllabic feet do not surface at the right edge of their prosodic word. It can be verified that those candidates incur more violations of AllFeet-Right, as in Tables 1 and 4. The other omitted candidates are identical to the presented candidates, except with their prosodic words in another order. For example, the candidate chosen as output (5v) represents a set containing six possible parses, which incur the same number of violations of AllFeet-Right.

Table 5: AllFeet-Right prefers that when four prosodic words are parsed, they are balanced in size. Violations are split up by prosodic word.

/σσσσσσσσσσσσσσ/ — AllFeet-Right
a. [(σ)] [(σ)] [(σ)] [(σσ)(σσ)(σσ)(σσ)(σσ)(σ)] 0 + 0 + 0 + 25 = 25
b. [(σ)] [(σ)] [(σσ)] [(σσ)(σσ)(σσ)(σσ)(σσ)] 0 + 0 + 0 + 20 = 20
c. [(σ)] [(σ)] [(σσ)(σ)] [(σσ)(σσ)(σσ)(σσ)(σ)] 0 + 0 + 1 + 16 = 17
d. [(σ)] [(σ)] [(σσ)(σσ)] [(σσ)(σσ)(σσ)(σσ)] 0 + 0 + 2 + 12 = 14
e. [(σ)] [(σ)] [(σσ)(σσ)(σ)] [(σσ)(σσ)(σσ)(σ)] 0 + 0 + 4 + 9 = 13
f. [(σ)] [(σ)] [(σσ)(σσ)(σσ)] [(σσ)(σσ)(σσ)] 0 + 0 + 6 + 6 = 12
g. [(σ)] [(σσ)] [(σσ)] [(σσ)(σσ)(σσ)(σσ)(σ)] 0 + 0 + 0 + 16 = 16
h. [(σ)] [(σσ)] [(σσ)(σ)] [(σσ)(σσ)(σσ)(σσ)] 0 + 0 + 1 + 12 = 13
i. [(σ)] [(σσ)] [(σσ)(σσ)] [(σσ)(σσ)(σσ)(σ)] 0 + 0 + 2 + 9 = 11
j. [(σ)] [(σσ)] [(σσ)(σσ)(σ)] [(σσ)(σσ)(σσ)] 0 + 0 + 4 + 6 = 10
k. [(σ)] [(σσ)(σ)] [(σσ)(σ)] [(σσ)(σσ)(σσ)(σ)] 0 + 1 + 1 + 9 = 11
l. [(σ)] [(σσ)(σ)] [(σσ)(σσ)] [(σσ)(σσ)(σσ)] 0 + 1 + 2 + 6 = 9
m. [(σ)] [(σσ)(σ)] [(σσ)(σσ)(σ)] [(σσ)(σσ)(σ)] 0 + 1 + 4 + 4 = 9
n. [(σ)] [(σσ)(σσ)] [(σσ)(σσ)] [(σσ)(σσ)(σ)] 0 + 2 + 2 + 4 = 8
o. [(σσ)] [(σσ)] [(σσ)] [(σσ)(σσ)(σσ)(σσ)] 0 + 0 + 0 + 12 = 12
p. [(σσ)] [(σσ)] [(σσ)(σ)] [(σσ)(σσ)(σσ)(σ)] 0 + 0 + 1 + 9 = 10
q. [(σσ)] [(σσ)] [(σσ)(σσ)] [(σσ)(σσ)(σσ)] 0 + 0 + 2 + 6 = 8
r. [(σσ)] [(σσ)] [(σσ)(σσ)(σ)] [(σσ)(σσ)(σ)] 0 + 0 + 4 + 4 = 8
s. [(σσ)] [(σσ)(σ)] [(σσ)(σ)] [(σσ)(σσ)(σσ)] 0 + 1 + 1 + 6 = 8
t. [(σσ)] [(σσ)(σ)] [(σσ)(σσ)] [(σσ)(σσ)(σ)] 0 + 1 + 2 + 4 = 7
u. [(σσ)(σ)] [(σσ)(σ)] [(σσ)(σ)] [(σσ)(σσ)(σ)] 1 + 1 + 1 + 4 = 7
→ v. [(σσ)(σ)] [(σσ)(σ)] [(σσ)(σσ)] [(σσ)(σσ)] 1 + 1 + 2 + 2 = 6

This quartering mapping is defined in (5). The language it generates is homomorphic to the context-sensitive stringset $a^nb^nc^nd^n$ with allowances for an extra a, b, c, or d.

• (5) $F_{quarter}$: $\sigma^i \mapsto [(\sigma\sigma)^j(\sigma)^k][(\sigma\sigma)^l(\sigma)^m][(\sigma\sigma)^n(\sigma)^o][(\sigma\sigma)^p(\sigma)^q]$ where i = j + k + l + m + n + o + p + q, k ≤ 1, m ≤ 1, o ≤ 1, q ≤ 1, and j = l = n = p

As noted in the previous section, AllFeet-Right is distinct from a constraint that penalizes )…σ subsequences.
In particular, because it is restricted to subsequences within prosodic words, it undercounts when multiple prosodic words are parsed, as Figure 4 illustrates. The next case study demonstrates that this property is irrelevant to the generative capacity of OT with subsequence constraints.

Figure 4: AllFeet-Right only penalizes the two )…σ subsequences indicated with solid lines, and not the three indicated with dashed lines.

To that end, consider the constraint *X…X defined in (6). *X…X is a non-local variant of *Geminate that penalizes subsequences of length 2 whose constituent segments are identical. It has not been proposed as a serious phonological constraint, but rather serves as supporting evidence for this paper's result. Unlike AllFeet-Right, there are no restrictions on which subsequences it evaluates.

• (6) *X…X: Assign one violation for every subsequence αβ where α = β.

The tableau in Table 6 illustrates the effect of this constraint on non-local liquid dissimilation. In this example, the class of liquid consonants comprises only alveolar laterals and rhotics, and the constraint Ident(lateral) penalizes changing one into the other. The input contains eight laterals (6a). The candidates shown are derived by changing underlying laterals into rhotics (6b–e). As in Table 5, permutations of these strings incur equal numbers of violations. This tableau demonstrates that as the difference between the number of laterals and the number of rhotics decreases, so does the number of violations of *X…X. Any string with four laterals and four rhotics is returned as output (6e).

Table 6: *X…X prefers that strings that contain both laterals and rhotics have equal numbers of them. Violations are separated into laterals and rhotics.

/llllllll/ — *X…X, Ident(lateral)
a. llllllll 28 + 0 = 28
b. lllllllr 21 + 0 = 21
c. llllllrr 15 + 1 = 16
d. lllllrrr 10 + 3 = 13
→ e. llllrrrr 6 + 6 = 12

This mapping is defined in (7). Its output language is context-free, and homomorphic to permutations of $a^nb^n$ with the usual allowance of an additional a or b.

• (7) $F_{liquid}$: $\{l,r\}^i \mapsto \{l,r\}^i$ where the difference between the number of laterals and the number of rhotics is not greater than 1.

To generate a context-sensitive stringset with this constraint, one just has to consider a third segment type. The tableau in Table 7 illustrates this case with dissimilation targeting major place. Here, the segment inventory comprises voiceless stops specified as labial, coronal, or dorsal. The input is a string of nine labial stops (7a), and candidates derived from it by changing place features are shown (7b–l). The constraint Ident(place) penalizes changes made to major place features. Unsurprisingly, as the differences between the numbers of each stop decrease, so do the violations of *X…X. The output is any string with three labial stops, three coronal stops, and three dorsal stops (7l).

Table 7: *X…X prefers that strings that contain labials, coronals, and dorsals have equal numbers of stops at each place of articulation. Violations are separated into labials, coronals, and dorsals.

/ppppppppp/ — *X…X, Ident(place)
a. ppppppppp 36 + 0 + 0 = 36
b. ppppppppt 28 + 0 + 0 = 28
c. ppppppptt 21 + 1 + 0 = 22
d. ppppppttt 15 + 3 + 0 = 18
e. ppppptttt 10 + 6 + 0 = 16
f. ppppppptk 21 + 0 + 0 = 21
g. pppppptkk 15 + 0 + 1 = 16
h. ppppptkkk 10 + 0 + 3 = 13
i. pppptkkkk 6 + 0 + 6 = 12
j. pppppttkk 10 + 1 + 1 = 12
k. ppppttkkk 6 + 1 + 3 = 10
→ l. ppptttkkk 3 + 3 + 3 = 9

The mapping is defined in (8); it generates a language homomorphic to permutations of $a^nb^nc^n$, with one additional a, b, or c.

• (8) $F_{place}$: $\{p,t,k\}^i \mapsto \{p,t,k\}^i$ where the difference in size between any two sets of stops defined by place is not greater than 1.

*X…X has the same effect as constraints like AllFeet-Right: it divides the string into a fixed number of parts, and requires those parts to be as similar to each other in size as possible by penalizing their subsequences. *X…X differs only in that it does not require that the subparts form contiguous substrings.

The mappings in this section demonstrated constraints over subsequences dividing strings into a fixed number of groups of equal size. In all cases, if the groups were not exactly equal, they could differ by at most one element. This generalizes to strings of all lengths and all numbers of groups: minimizing the difference between group sizes minimizes the number of violating subsequences.

Before presenting a proof of this result, it is necessary to establish that the number of subsequences of length 2 grows quadratically in a string's length; in particular, a string of length n has $\frac{n^2-n}{2}$ subsequences of length 2. As a base case, a string of length 2 has 1 such subsequence: $\frac{2^2-2}{2}=1$. Inductively, assume that a string of length n has $\frac{n^2-n}{2}$ subsequences of length 2. Adding one segment adds n subsequences of length 2, and it can be verified that $\frac{n^2-n}{2}+n=\frac{(n+1)^2-(n+1)}{2}$. The proof of the main result of this paper follows.

Proof. For a string s composed of k disjoint sets, let $s_1, s_2, \ldots, s_k$ denote the cardinality of each set, and define the constraint function $M : s \to \mathbb{N}$ as

$M(s) = \sum_{i=1}^{k} \frac{s_i^2 - s_i}{2}$

Let u be a string composed of k disjoint sets such that one set has two more members than another: $u_i \geq u_j + 2$. By way of contradiction, assume that the composition u minimizes the function M.
Consider an alternate composition v, identical to u except that one element has been moved from the larger set to the smaller set: $v_i = u_i - 1$ and $v_j = u_j + 1$. By assumption, we have M(u) < M(v), from which we derive a contradiction:

$\frac{u_i^2 - u_i}{2} + \frac{u_j^2 - u_j}{2} < \frac{v_i^2 - v_i}{2} + \frac{v_j^2 - v_j}{2}$

$u_i^2 - u_i + u_j^2 - u_j < (u_i - 1)^2 - (u_i - 1) + (u_j + 1)^2 - (u_j + 1)$

$2u_i < 2u_j + 2$

This contradicts the fact that $u_i$ is at least as great as $u_j + 2$. Therefore, no composition of a string whose component sets differ in cardinality by two or more minimizes M. This proves that violable constraints in Optimality Theory that penalize subsequences of length 2 can divide any length input into a fixed number of equally sized parts, generating context-sensitive stringsets.

## 4 Conclusion

Optimality Theory is known to generate non-finite-state mappings and languages (Eisner, 1997; Frank and Satta, 1998), even with constraints defined over strings (Riggle, 2004; Gerdemann and Hulden, 2012; Heinz and Lai, 2013; Hao, 2019; Lamont, 2019a, b). This paper contributes to this literature by demonstrating that constraints over subsequences generate context-sensitive languages under optimization. This result has wide-ranging impacts for the field of phonology, as a number of commonly employed constraint types are defined over subsequences.

## Acknowledgments

This paper has been greatly improved by three anonymous reviewers for TACL and through conversations with Jeffrey Heinz, Brett Hyde, Neil Immerman, and Brandon Prickett. I am especially grateful to Chris Coscia for his guidance on mathematical notation and structuring the proof. All remaining errors are of course my own.

## Notes

1. Trivially, all strings of length 1 are subsequences. In this paper, subsequences refer to subsequences of length ≥ 2.

## References

John Alderete. 1997. Dissimilation as local conjunction. In Kiyomi Kusumoto, editor, Proceedings of NELS 27, pages 17–32, Amherst, MA.

Wm. G. Bennett. 2013. Dissimilation, Consonant Harmony, and Surface Correspondence. Ph.D. thesis, Rutgers, The State University of New Jersey.

Wm. G. Bennett.
2015. The Phonology of Consonants. Cambridge University Press, Cambridge.

Jason Eisner. 1997. What constraints should OT allow? Paper presented at LSA 71. Available at https://roa.rutgers.edu/article/view/215

Jason Eisner. 2000. Directional constraint evaluation in Optimality Theory. In Proceedings of the 18th International Conference on Computational Linguistics, pages 257–263. DOI: https://doi.org/10.3115/990820.990858

Jason Eisner. 2002. Comprehension and compilation in Optimality Theory. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 56–63. DOI: https://doi.org/10.3115/1073083.1073095

Robert Frank and Giorgia Satta. 1998. Optimality Theory and the generative complexity of constraint violability. Computational Linguistics, 24(2):307–315.

Dale Gerdemann and Mans Hulden. 2012. Practical finite state Optimality Theory. In Proceedings of the 10th International Workshop on Finite State Methods and Natural Language Processing, pages 10–19.

Thomas Graf. 2017. The power of locality domains in phonology. Phonology, 34(2):385–405. DOI: https://doi.org/10.1017/S0952675717000197

Gunnar Ólafur Hansson. 2001. Theoretical and Typological Issues in Consonant Harmony. Ph.D. thesis, University of California, Berkeley.

Gunnar Ólafur Hansson. 2010. Consonant Harmony: Long-Distance Interactions in Phonology. University of California Press, Berkeley, CA.

Yiding Hao. 2019. Finite-state Optimality Theory: Non-rationality of Harmonic Serialism. Journal of Language Modelling, 7(2):49–99. DOI: https://doi.org/10.15398/jlm.v7i2.210

Jeffrey Heinz. 2007. Inductive Learning of Phonotactic Patterns. Ph.D. thesis, University of California, Los Angeles.

Jeffrey Heinz. 2010. Learning long-distance phonotactics. Linguistic Inquiry, 41:623–661. DOI: https://doi.org/10.1162/LING_a_00015

Jeffrey Heinz. 2014. Culminativity times harmony equals unbounded stress.
In Harry van der Hulst, editor, Word Stress: Theoretical and Typological Issues, pages 255–275. Cambridge University Press, Cambridge. DOI: https://doi.org/10.1017/CBO9781139600408.012

Jeffrey Heinz. 2018. The computational nature of phonological generalizations. In Larry M. Hyman and Frans Plank, editors, Phonological Typology, pages 126–195. De Gruyter Mouton, Berlin. DOI: https://doi.org/10.1515/9783110451931-005

Jeffrey Heinz and Regine Lai. 2013. Vowel harmony and subsequentiality. In Proceedings of the 13th Meeting on the Mathematics of Language, pages 52–63.

Mans Hulden. 2017. Formal and computational verification of phonological analyses. Phonology, 34(2):407–435. DOI: https://doi.org/10.1017/S0952675717000203

Brett Hyde. 2008. Alignment continued: Distance-sensitivity, order-sensitivity, and the midpoint pathology. Unpublished manuscript, Washington University. Available at https://roa.rutgers.edu/article/view/1028.

Brett Hyde. 2012. Alignment constraints. Natural Language and Linguistic Theory, 30:789–836. DOI: https://doi.org/10.1007/s11049-012-9167-3

Brett Hyde. 2016. Layering and Directionality. Equinox, Sheffield.

C. Douglas Johnson. 1972. Formal Aspects of Phonological Description. Mouton, The Hague. DOI: https://doi.org/10.1515/9783110876000

Ronald Kaplan and Martin Kay. 1994. Regular models of phonological rule systems. Computational Linguistics, 20:331–378.

Nate Koser and Adam Jardine. 2020. The complexity of optimizing over strictly local constraints. In Proceedings of the 43rd Annual Penn Linguistics Conference, pages 125–134.

Andrew Lamont. 2019a. Majority rule in harmonic serialism. In Supplemental Proceedings of the 2018 Annual Meeting on Phonology, Washington, D.C. Linguistic Society of America. DOI: https://doi.org/10.3765/amp.v7i0.4546

Andrew Lamont. 2019b. Precedence is pathological: The problem of alphabetical sorting.
In Proceedings of the 36th West Coast Conference on Formal Linguistics, pages 243–249, Somerville, MA.

Géraldine Legendre, Yoshiro Miyata, and Paul Smolensky. 1990. Harmonic Grammar: a formal multi-level connectionist theory of linguistic well-formedness: an application. In Proceedings of the Twelfth Annual Conference of the Cognitive Science Society, pages 884–891, Hillsdale, NJ. Erlbaum.

John J. McCarthy. 2003. OT constraints are categorical. Phonology, 20(1):75–138. DOI: https://doi.org/10.1017/S0952675703004470

John J. McCarthy. 2010. In John A. Goldsmith, Elizabeth Hume, and W. Leo Wetzels, editors, Tones and Features, pages 195–222. Walter de Gruyter, Berlin.

John J. McCarthy and Alan Prince. 1993. Generalized alignment. In Geert Booij and Jaap van Marle, editors, Yearbook of Morphology, pages 79–153. Kluwer, Dordrecht. DOI: https://doi.org/10.1007/978-94-017-3712-8_4

Kevin Mullin. 2011. Strength in harmony systems: Trigger and directional asymmetries. Unpublished manuscript, University of Massachusetts Amherst.

Marina Nespor and Irene Vogel. 1986. Prosodic Phonology. Foris Publications, Dordrecht.

Alan Prince and Paul Smolensky. 1993/2004. Optimality Theory: Constraint Interaction in Generative Grammar. Blackwell Publishing, Malden, MA.

Douglas Pulleyblank. 2002. Harmony drivers: No disagreement allowed. In Proceedings of the Twenty-Eighth Annual Meeting of the Berkeley Linguistics Society, pages 249–267. DOI: https://doi.org/10.3765/bls.v28i1.3841

Jason Riggle. 2004. Generation, Recognition, and Learning in Finite State Optimality Theory. Ph.D. thesis, University of California, Los Angeles.

James Rogers, Jeffrey Heinz, Gil Bailey, Matt Edlefsen, Molly Visscher, David Wellcome, and Sean Wibel. 2010. On languages piecewise testable in the strict sense. The Mathematics of Language, 10/11:255–265. DOI: https://doi.org/10.1007/978-3-642-14322-9_19

Sharon Rose and Rachel Walker. 2004.
A typology of consonant agreement as correspondence. Language, 80(3):475–531. DOI: https://doi.org/10.1353/lan.2004.0144

Elisabeth Selkirk. 1984. Phonology and Syntax: The Relation between Sound and Structure. MIT Press, Cambridge.

Elisabeth Selkirk. 2011. The syntax-phonology interface. In John A. Goldsmith, Jason Riggle, and Alan C. L. Yu, editors, The Handbook of Phonological Theory, 2nd edition, pages 435–484. Blackwell Publishing, Oxford. DOI: https://doi.org/10.1002/9781444343069.ch14

Paul Smolensky. 1992. Harmonic Grammars for formal languages. In Advances in Neural Information Processing Systems 5, pages 847–854, Morgan Kaufmann Publishers Inc., San Francisco.

Paul Smolensky. 1993. Harmony, markedness, and phonological activity. Paper presented at Rutgers Optimality Workshop 1. Available at http://roa.rutgers.edu/article/view/88.

Paul Smolensky. 2006. Optimality in phonology II: Harmonic completeness, local constraint conjunction, and feature-domain markedness. In Paul Smolensky and Géraldine Legendre, editors, The Harmonic Mind: From Neural Computation to Optimality-Theoretic Grammar, volume II, pages 27–160. MIT Press, Cambridge, MA.

Keiichiro Suzuki. 1998. A Typological Investigation of Dissimilation. Ph.D. thesis, University of Arizona.

Rachel Walker. 2000. Long-distance consonantal identity effects. In Proceedings of WCCFL 19, pages 532–545, Somerville, MA.

Kristine M. Yu. 2019. Parsing with Minimalist Grammars and prosodic trees. In Robert C. Berwick and Edward P. Stabler, editors, Minimalist Parsing, pages 69–109. Oxford University Press, Oxford. DOI: https://doi.org/10.1093/oso/9780198795087.003.0004

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode
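The counting argument in the proof of section 3 can also be checked numerically. The sketch below is my own illustration, not part of the article; the names `M` and `minimizers` are mine. It computes the subsequence-constraint function $M$ over all compositions of a string into k parts and confirms that the unique minimizer is the balanced composition, reproducing the winning violation counts of Tables 6 and 7.

```python
from itertools import combinations_with_replacement
from math import comb

def M(sizes):
    """Violations of a *X…X-style constraint: each pair of segments drawn
    from the same set is one violating length-2 subsequence, C(s, 2) per set."""
    return sum(comb(s, 2) for s in sizes)

def minimizers(n, k):
    """Minimum of M over all compositions of n into k parts (order ignored),
    together with every composition achieving it."""
    comps = [c for c in combinations_with_replacement(range(n + 1), k)
             if sum(c) == n]
    best = min(M(c) for c in comps)
    return best, [c for c in comps if M(c) == best]

# Table 6: eight liquids, two classes -> 4 + 4 laterals/rhotics, 6 + 6 = 12
print(minimizers(8, 2))   # → (12, [(4, 4)])
# Table 7: nine stops, three places -> 3 + 3 + 3, violations 3 + 3 + 3 = 9
print(minimizers(9, 3))   # → (9, [(3, 3, 3)])
```

In both cases the balanced split is the sole minimizer, matching candidates (6e) and (7l) in the tableaux.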
June 2020

# Direct Evidence for Universal Statistics of Stationary Kardar-Parisi-Zhang Interfaces

Physical Review Letters

Iwatsuka, Takayasu; Fukai, Yohsuke T.; Takeuchi, Kazumasa A.

Volume 124, Issue 25, Article 250602. DOI: 10.1103/PhysRevLett.124.250602. American Physical Society.

The nonequilibrium steady state of the one-dimensional (1D) Kardar-Parisi-Zhang (KPZ) universality class is studied in depth by exact solutions, yet no direct experimental evidence of its characteristic statistical properties has been reported so far. This is arguably because, for an infinitely large system, infinitely long time is needed to reach such a stationary state and also to converge to the predicted universal behavior. Here we circumvent this problem in the experimental system of growing liquid-crystal turbulence, by generating an initial condition that possesses a long-range property expected for the KPZ stationary state. The resulting interface fluctuations clearly show characteristic properties of the 1D stationary KPZ interfaces, including the convergence to the Baik-Rains distribution. We also identify finite-time corrections to the KPZ scaling laws, which turn out to play a major role in the direct test of the stationary KPZ interfaces. This paves the way to explore unsolved properties of the stationary KPZ interfaces experimentally, making possible connections to nonlinear fluctuating hydrodynamics and quantum spin chains, as recent studies unveiled relations to the stationary KPZ.

Links:
- DOI: https://doi.org/10.1103/PhysRevLett.124.250602
- arXiv: http://arxiv.org/abs/2004.11652v1
- arXiv PDF: http://arxiv.org/pdf/2004.11652v1 (link to full text available)
# Top arXiv papers

• Quantum-limited amplifiers increase the amplitude of the signal at the price of introducing additional noise. Quantum purification protocols operate in the reverse way, by reducing the noise while attenuating the signal. Here we investigate a scenario that interpolates between these two extremes. We search for the physical process that produces the best approximation of a pure and amplified coherent state, starting from multiple copies of a noisy coherent state with Gaussian modulation. We identify the optimal quantum processes, considering both the case of deterministic and probabilistic processes. And we give benchmarks that can be used to certify the experimental demonstration of genuine quantum-enhanced amplification.

• We introduce a toy holographic correspondence based on the multi-scale entanglement renormalization ansatz (MERA) representation of ground states of local Hamiltonians. Given a MERA representation of the ground state of a local Hamiltonian acting on a one-dimensional 'boundary' lattice, we lift it to a tensor network representation of a quantum state of a dual two-dimensional 'bulk' hyperbolic lattice. The dual bulk degrees of freedom are associated with the bonds of the MERA, which describe the renormalization group flow of the ground state, and the bulk tensor network is obtained by inserting tensors with open indices on the bonds of the MERA. We explore properties of 'copy bulk states'---particular bulk states that correspond to inserting the copy tensor on the bonds of the MERA. We show that entanglement in copy bulk states is organized according to holographic screens, and that expectation values of certain extended operators in a copy bulk state, dual to a critical ground state, are proportional to $n$-point correlators of the critical ground state. We also present numerical results to illustrate e.g.
that copy bulk states, dual to ground states of several critical spin chains, have exponentially decaying correlations, and that the correlation length generally decreases with increase in central charge for these models. Our toy model illustrates a possible approach for deducing an emergent bulk description from the MERA, in light of the on-going dialogue between tensor networks and holography. • Jan 18 2017 stat.ML arXiv:1701.04503v1 The rise and fall of artificial neural networks is well documented in the scientific literature of both computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on multilayer neural networks. Within the last few years, we have seen the transformative impact of deep learning in many domains, particularly in speech recognition and computer vision, to the extent that the majority of expert practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview into the theory of deep neural networks and their unique properties that distinguish them from traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight its ubiquity and broad applicability to a wide range of challenges in the field, including QSAR, virtual screening, protein structure prediction, quantum chemistry, materials design and property prediction. In reviewing the performance of deep neural networks, we observed a consistent outperformance against non-neural-network state-of-the-art models across disparate research topics, and deep neural network based models often exceeded the "glass ceiling" expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a valuable tool for computational chemistry. • Kitaev's quantum double models, including the toric code, are canonical examples of quantum topological models on a 2D spin lattice. Their Hamiltonians define the groundspace by imposing an energy penalty on any nontrivial flux or charge, but treat any such violation in the same way. Thus, their energy spectrum is very simple. We introduce a new family of quantum double Hamiltonians with adjustable coupling constants that allow us to tune the energy of anyons while conserving the same groundspace as Kitaev's original construction. Those Hamiltonians are made of commuting four-body projectors that provide an intricate splitting of the Hilbert space. • We consider a problem introduced by Mossel and Ross [Shotgun assembly of labeled graphs, arXiv:1504.07682]. Suppose a random $n\times n$ jigsaw puzzle is constructed by independently and uniformly choosing the shape of each "jig" from $q$ possibilities. We are given the shuffled pieces. Then, depending on $q$, what is the probability that we can reassemble the puzzle uniquely? We say that two solutions of a puzzle are similar if they only differ by permutation of duplicate pieces, and rotation of rotationally symmetric pieces. In this paper, we show that, with high probability, such a puzzle has at least two non-similar solutions when $2\leq q \leq \frac{2}{\sqrt{e}}n$, all solutions are similar when $q\geq (2+\varepsilon)n$, and the solution is unique when $q=\omega(n)$. • We present a categorical construction for modelling both definite and indefinite causal structures within a general class of process theories that include classical probability theory and quantum theory.
Unlike prior constructions within categorical quantum mechanics, the objects of this theory encode fine-grained causal relationships between subsystems and give a new method for expressing and deriving consequences for a broad class of causal structures. To illustrate this point, we show that this framework admits processes with definite causal structures, namely one-way signalling processes, non-signalling processes, and quantum n-combs, as well as processes with indefinite causal structure, such as the quantum switch and the process matrices of Oreshkov, Costa, and Brukner. We furthermore give derivations of their operational behaviour using simple, diagrammatic axioms. • A large amount of information exists in reviews written by users. This source of information has been ignored by most of the current recommender systems, while it can potentially alleviate the sparsity problem and improve the quality of recommendations. In this paper, we present a deep model to learn item properties and user behaviors jointly from review text. The proposed model, named Deep Cooperative Neural Networks (DeepCoNN), consists of two parallel neural networks coupled in the last layers. One of the networks focuses on learning user behaviors exploiting reviews written by the user, and the other one learns item properties from the reviews written for the item. A shared layer is introduced on the top to couple these two networks together. The shared layer enables latent factors learned for users and items to interact with each other in a manner similar to factorization machine techniques. Experimental results demonstrate that DeepCoNN significantly outperforms all baseline recommender systems on a variety of datasets. • We propose to leverage concept-level representations for complex event recognition in photographs given limited training examples. We introduce a novel framework to discover event concept attributes from the web and use that to extract semantic features from images and classify them into social event categories with few training examples. Discovered concepts include a variety of objects, scenes, actions and event sub-types, leading to a discriminative and compact representation for event images. Web images are obtained for each discovered event concept and we use (pretrained) CNN features to train concept classifiers. Extensive experiments on challenging event datasets demonstrate that our proposed method outperforms several baselines using deep CNN features directly in classifying images into events with limited training examples. We also demonstrate that our method achieves the best overall accuracy on a dataset with unseen event categories using a single training example. • While recent deep neural networks have achieved promising results for 3D reconstruction from a single-view image, these rely on the availability of RGB textures in images and extra information as supervision. In this work, we propose novel stacked hierarchical networks and an end-to-end training strategy to tackle a more challenging task for the first time, 3D reconstruction from a single-view 2D silhouette image. We demonstrate that our model is able to conduct 3D reconstruction from a single-view silhouette image both qualitatively and quantitatively. Evaluation is performed using ShapeNet for the single-view reconstruction, and results are presented in comparison with a single network, to highlight the improvements obtained with the proposed stacked networks and the end-to-end training strategy. Furthermore, 3D reconstruction in terms of IoU is compared with the state of the art in 3D reconstruction from a single-view RGB image, and the proposed model achieves higher IoU than the state of the art for reconstruction from a single-view RGB image.
• Governments and businesses increasingly rely on data analytics and machine learning (ML) for improving their competitive edge in areas such as consumer satisfaction, threat intelligence, decision making, and product efficiency. However, by cleverly corrupting a subset of data used as input to a target's ML algorithms, an adversary can perturb outcomes and compromise the effectiveness of ML technology. While prior work in the field of adversarial machine learning has studied the impact of input manipulation on correct ML algorithms, we consider the exploitation of bugs in ML implementations. In this paper, we characterize the attack surface of ML programs, and we show that malicious inputs exploiting implementation bugs enable strictly more powerful attacks than the classic adversarial machine learning techniques. We propose a semi-automated technique, called steered fuzzing, for exploring this attack surface and for discovering exploitable bugs in machine learning programs, in order to demonstrate the magnitude of this threat. As a result of our work, we responsibly disclosed five vulnerabilities, established three new CVE-IDs, and illuminated a common insecure practice across many machine learning systems. Finally, we outline several research directions for further understanding and mitigating this threat. • Jan 18 2017 math.AG math.AC arXiv:1701.04738v1 The goal of the present article is to survey the general theory of Mori Dream Spaces, with special regard to the question: When is the blow-up of a toric variety at a general point a Mori Dream Space? We translate the question for toric surfaces of Picard number one into an interpolation problem involving points in the projective plane. An instance of such an interpolation problem is the Gonzalez-Karu theorem, which gives new examples of weighted projective planes whose blow-up at a general point is not a Mori Dream Space.
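The steered-fuzzing approach summarized in the adversarial-ML abstract above reduces, at its core, to a mutate-and-observe loop over program inputs. A toy sketch (the `fragile_parser` target and the single-byte mutation strategy are invented for illustration, not taken from the paper):

```python
import random

def fragile_parser(data: bytes) -> int:
    # Stand-in for a buggy ML input-parsing routine (hypothetical).
    if len(data) > 3 and data[0] == 0xFF:
        return 1 // (data[1] - data[2])  # crashes when data[1] == data[2]
    return 0

def fuzz(target, seed: bytes, iterations: int = 2000, rng=None):
    """Minimal mutation fuzzer: flip one random byte per trial and
    collect every input that makes the target raise an exception."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    crashes = []
    for _ in range(iterations):
        data = bytearray(seed)
        pos = rng.randrange(len(data))
        data[pos] = rng.randrange(256)
        try:
            target(bytes(data))
        except Exception as exc:
            crashes.append((bytes(data), type(exc).__name__))
    return crashes

crashes = fuzz(fragile_parser, b"\xff\x10\x10\x00")
```

A real steered fuzzer would bias mutations toward inputs that make progress into the target, but the record-crashing-mutants loop above is the common skeleton.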
• Jan 18 2017 cs.DC arXiv:1701.04733v1 GPUs are dedicated processors used for complex calculations and simulations, and they can be used effectively for tropical algebra computations. Tropical algebra is based on max-plus algebra and min-plus algebra. In this paper we propose and design a library based on tropical algebra, named Basic Tropical Algebra Subroutines (BTAS), which provides standard vector and matrix operations. The BTAS library is tested by implementing a sequential version of the Floyd-Warshall algorithm on the CPU and a parallel version on the GPU. The developed library for tropical algebra delivered substantially better results on a less expensive GPU as compared to the same on a CPU. • Variational Autoencoders (VAEs) are expressive latent variable models that can be used to learn complex probability distributions from training data. However, the quality of the resulting model crucially relies on the expressiveness of the inference model used during training. We introduce Adversarial Variational Bayes (AVB), a technique for training Variational Autoencoders with arbitrarily expressive inference models. We achieve this by introducing an auxiliary discriminative network that allows us to rephrase the maximum-likelihood problem as a two-player game, hence establishing a principled connection between VAEs and Generative Adversarial Networks (GANs). We show that in the nonparametric limit our method yields an exact maximum-likelihood assignment for the parameters of the generative model, as well as the exact posterior distribution over the latent variables given an observation. Contrary to competing approaches which combine VAEs with GANs, our approach has a clear theoretical justification, retains most advantages of standard Variational Autoencoders and is easy to implement. • Jan 18 2017 cs.CV q-bio.NC arXiv:1701.04674v1 Computer vision has made remarkable progress in recent years.
Deep neural network (DNN) models optimized to identify objects in images exhibit unprecedented task-trained accuracy and, remarkably, some generalization ability: new visual problems can now be solved more easily based on previous learning. Biological vision (learned in life and through evolution) is also accurate and general-purpose. Is it possible that these different learning regimes converge to similar problem-dependent optimal computations? We therefore asked whether the human system-level computation of visual perception has DNN correlates and considered several anecdotal test cases. We found that perceptual sensitivity to image changes has DNN mid-computation correlates, while sensitivity to segmentation, crowding and shape has DNN end-computation correlates. Our results quantify the applicability of using DNN computation to estimate perceptual loss, and are consistent with the fascinating theoretical view that properties of human perception are a consequence of architecture-independent visual learning. • We present Convolutional Oriented Boundaries (COB), which produces multiscale oriented contours and region hierarchies starting from generic image classification Convolutional Neural Networks (CNNs). COB is computationally efficient, because it requires a single CNN forward pass for multi-scale contour detection and it uses a novel sparse boundary representation for hierarchical segmentation; it gives a significant leap in performance over the state of the art, and it generalizes very well to unseen categories and datasets. In particular, we show that learning to estimate not only contour strength but also orientation provides more accurate results. We perform extensive experiments for low-level applications on BSDS, PASCAL Context, PASCAL Segmentation, and NYUD to evaluate boundary detection performance, showing that COB provides state-of-the-art contours and region hierarchies in all datasets.
We also evaluate COB on high-level tasks when coupled with multiple pipelines for object proposals, semantic contours, semantic segmentation, and object detection on various databases (MS-COCO, SBD, PASCAL VOC'07), showing that COB also improves the results for all tasks. • Given a vertex-weighted graph $G=(V,E)$ and a set $S \subseteq V$, a subset feedback vertex set $X$ is a set of the vertices of $G$ such that the graph induced by $V \setminus X$ has no cycle containing a vertex of $S$. The \textsc{Subset Feedback Vertex Set} problem takes as input $G$ and $S$ and asks for the subset feedback vertex set of minimum total weight. In contrast to the classical \textsc{Feedback Vertex Set} problem, which is obtained from the \textsc{Subset Feedback Vertex Set} problem for $S=V$, when restricted to graph classes the \textsc{Subset Feedback Vertex Set} problem is known to be NP-complete on split graphs and, consequently, on chordal graphs. However, as \textsc{Feedback Vertex Set} is polynomially solvable for AT-free graphs, no such result is known for the \textsc{Subset Feedback Vertex Set} problem on any subclass of AT-free graphs. Here we give the first polynomial-time algorithms for the problem on two unrelated subclasses of AT-free graphs: interval graphs and permutation graphs. As a byproduct, we show that there exists a polynomial-time algorithm for circular-arc graphs by suitably applying our algorithm for interval graphs. Moreover, towards the unknown complexity of the problem for AT-free graphs, we give a polynomial-time algorithm for co-bipartite graphs. Thus we contribute the first positive results for the \textsc{Subset Feedback Vertex Set} problem when restricted to graph classes for which \textsc{Feedback Vertex Set} is solvable in polynomial time. • The evaluation of a query over a probabilistic database boils down to computing the probability of a suitable Boolean function, the lineage of the query over the database.
The method of query compilation approaches the task in two stages: first, the query lineage is implemented (compiled) in a circuit form where probability computation is tractable; and second, the desired probability is computed over the compiled circuit. A basic theoretical quest in query compilation is that of identifying pertinent classes of queries whose lineages admit compact representations over increasingly succinct, tractable circuit classes. Building on previous work by Jha and Suciu (2012) and Petke and Razgon (2013), we focus on queries whose lineages admit circuit implementations with small treewidth, and investigate their compilability within tame classes of decision diagrams. In perfect analogy with the characterization of bounded circuit pathwidth by bounded OBDD width, we show that a class of Boolean functions has bounded circuit treewidth if and only if it has bounded SDD width. Sentential decision diagrams (SDDs) are central in knowledge compilation, being essentially as tractable as OBDDs but exponentially more succinct. By incorporating constant-width SDDs and polynomial-size SDDs, we refine the panorama of query compilation for unions of conjunctive queries with and without inequalities. • Recently there has been an enormous interest in generative models for images in deep learning. In pursuit of this, Generative Adversarial Networks (GAN) and Variational Auto-Encoder (VAE) have surfaced as the two most prominent and popular models. While VAEs tend to produce excellent reconstructions but blurry samples, GANs generate sharp but slightly distorted images. In this paper we propose a new model called Variational InfoGAN (ViGAN). Our aim is twofold: (i) to generate new images conditioned on visual descriptions, and (ii) to modify an image by fixing its latent representation and varying the visual description.
We evaluate our model on the Labeled Faces in the Wild (LFW), CelebA, and a modified version of MNIST datasets, and demonstrate the ability of our model to generate new images as well as to modify a given image by changing attributes. • Automatic continuous-time, continuous-value assessment of a patient's pain from face video is highly sought after by the medical profession. Despite the recent advances in deep learning that attain impressive results in many domains, pain estimation risks not being able to benefit from this due to the difficulty in obtaining data sets of considerable size. In this work we propose a combination of hand-crafted and deep-learned features that makes the most of deep learning techniques in small sample settings. Encoding shape, appearance, and dynamics, our method significantly outperforms the current state of the art, attaining an RMSE of less than 1 point on a 16-level pain scale, whilst simultaneously scoring a 67.3% Pearson correlation coefficient between our predicted pain level time series and the ground truth. • Most existing community-related studies focus on detection, which aims to find the community membership for each user from user friendship links. However, membership alone, without a complete profile of what a community is and how it interacts with other communities, has limited applications. This motivates us to consider systematically profiling the communities and thereby developing useful community-level applications. In this paper, we for the first time formalize the concept of community profiling. With rich user information on the network, such as user-published content and user diffusion links, we characterize a community in terms of both its internal content profile and external diffusion profile. The difficulty of community profiling is often underestimated. We identify three unique challenges and propose a joint Community Profiling and Detection (CPD) model to address them accordingly.
We also contribute a scalable inference algorithm, which scales linearly with the data size and is easily parallelizable. We evaluate CPD on large-scale real-world data sets, and show that it is significantly better than the state-of-the-art baselines in various tasks. • This volume contains the papers presented at LINEARITY 2016, the Fourth International Workshop on Linearity, held on June 26, 2016 in Porto, Portugal. The workshop was a one-day satellite event of FSCD 2016, the first International Conference on Formal Structures for Computation and Deduction. The aim of this workshop was to bring together researchers who are developing theory and applications of linear calculi, to foster their interaction and provide a forum for presenting new ideas and work in progress, and enable newcomers to learn about current activities in this area. Of interest were new results that made central use of linearity, ranging from foundational work to applications in any field. This included: sub-linear logics, linear term calculi, linear type systems, linear proof-theory, linear programming languages, applications to concurrency, interaction-based systems, verification of linear systems, and biological and chemical models of computation. • In recent times, the use of separable convolutions in deep convolutional neural network architectures has been explored. Several researchers, most notably (Chollet, 2016) and (Ghosh, 2017), have used separable convolutions in their deep architectures and have demonstrated state-of-the-art or near-state-of-the-art performance. However, the underlying mechanism of action of separable convolutions is still not fully understood. Although their mathematical definition is well understood as a depthwise convolution followed by a pointwise convolution, deeper interpretations such as the extreme Inception hypothesis (Chollet, 2016) have failed to provide a thorough explanation of their efficacy.
In this paper, we propose a hybrid interpretation that we believe is a better model for explaining the efficacy of separable convolutions. • 1. Analog forecasting has been successful at producing robust forecasts for a variety of ecological and physical processes. Analog forecasting is a mechanism-free nonlinear method that forecasts a system forward in time by examining how past states deemed similar to the current state moved forward. Previous work on analog forecasting has typically been presented in an empirical or heuristic context, as opposed to a formal statistical context. 2. The model presented here extends the model-based analog method of McDermott and Wikle (2016) by placing analog forecasting within a fully hierarchical statistical framework. In particular, a Bayesian hierarchical spatio-temporal Poisson analog forecasting model is formulated. 3. In comparison to a Poisson Bayesian hierarchical model with a latent dynamical spatio-temporal process, the hierarchical analog model consistently produced more accurate forecasts. By using a Bayesian approach, the hierarchical analog model is able to quantify rigorously the uncertainty associated with forecasts. 4. Forecasting waterfowl settling patterns in the northwestern United States and Canada is conducted by applying the hierarchical analog model to a breeding population survey dataset. Sea Surface Temperature (SST) in the Pacific Ocean is used to help identify potential analogs for the waterfowl settling patterns. • This paper is a tutorial for newcomers to the field of automated verification tools, though we assume the reader to be relatively familiar with Hoare-style verification. In this paper, besides introducing the most basic features of the language and verifier Dafny, we place special emphasis on how to use Dafny as an assistant in the development of verified programs. Our main aim is to encourage the software engineering community to make the move towards using formal verification tools.
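Stripped of its Bayesian hierarchy, the analog forecasting idea from the abstract above — forecast by examining how past states similar to the current one moved forward — can be sketched in a few lines (the Euclidean metric and neighbour count here are illustrative assumptions, not the paper's model):

```python
import numpy as np

def analog_forecast(history, current, k=3):
    """Forecast the next state by averaging the successors of the k past
    states most similar to `current` (Euclidean distance; illustrative).

    history: (T, d) array of past states; current: (d,) state vector.
    """
    past, successors = history[:-1], history[1:]
    dists = np.linalg.norm(past - current, axis=1)
    analogs = np.argsort(dists)[:k]          # indices of closest past states
    return successors[analogs].mean(axis=0)  # average of where they went

# Toy example: a noiseless 1D ramp; the closest analog's successor
# continues the ramp.
hist = np.arange(10, dtype=float).reshape(-1, 1)
print(analog_forecast(hist, np.array([9.0]), k=1))  # -> [9.]
```

The hierarchical model in the abstract replaces this hard nearest-neighbour average with a probability-weighted combination of analogs and quantifies the forecast uncertainty.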
• We formulate three current models of discrete-time quantum walks in a combinatorial way. These walks are shown to be closely related to rotation systems and 1-factorizations of graphs. For two of the models, we compute the traces and total entropies of the average mixing matrices for some cubic graphs. The trace captures how likely a quantum walk is to revisit the state it started with, and the total entropy measures how close the limiting distribution is to uniform. Our numerical results indicate three relations between quantum walks and graph structures: for the first model, rotation systems with higher genera give lower traces and higher entropies, and for the second model, the symmetric 1-factorizations always give the highest trace. • How much can pruning algorithms teach us about the fundamentals of learning representations in neural networks? A lot, it turns out. Neural network model compression has become a topic of great interest in recent years, and many different techniques have been proposed to address this problem. In general, this is motivated by the idea that smaller models typically lead to better generalization. At the same time, the decision of what to prune and when to prune necessarily forces us to confront our assumptions about how neural networks actually learn to represent patterns in data. In this work we set out to test several long-held hypotheses about neural network learning representations and numerical approaches to pruning. To accomplish this we first reviewed the historical literature and derived a novel algorithm to prune whole neurons (as opposed to the traditional method of pruning weights) from optimally trained networks using a second-order Taylor method. We then set about testing the performance of our algorithm and analyzing the quality of the decisions it made. As a baseline for comparison we used a first-order Taylor method based on the Skeletonization algorithm and an exhaustive brute-force serial pruning algorithm. 
Our proposed algorithm worked well compared to a first-order method, but not nearly as well as the brute-force method. Our error analysis led us to question the validity of many widely held assumptions behind pruning algorithms in general and the trade-offs we often make in the interest of reducing computational complexity. We discovered that there is a straightforward way, however expensive, to serially prune 40-70% of the neurons in a trained network with minimal effect on the learning representation and without any re-training. • We describe a new method of 3D image reconstruction of neutron sources that emit correlated gammas (e.g. Cf-252, Am-Be). This category includes a vast majority of neutron sources important in nuclear threat search, safeguards and non-proliferation. Rather than requiring multiple views of the source, this technique relies on the source's intrinsic property of coincident gamma and neutron emission. As a result, only a single-view measurement of the source is required to perform the 3D reconstruction. In principle, any scatter camera sensitive to gammas and neutrons with adequate timing and interaction location resolution can perform this reconstruction. Using a neutron double scatter technique, we can calculate a conical surface of possible source locations. By including the time to a correlated gamma, we further constrain the source location in three dimensions by solving for the source-to-detector distance along the surface of said cone. As a proof of concept we applied these reconstruction techniques on measurements taken with the Mobile Imager of Neutrons for Emergency Responders (MINER). Two Cf-252 sources measured at 50 and 60 cm from the center of the detector were resolved in their varying depth with an average radial-distance relative resolution of 26%.
To demonstrate the technique's potential with an optimized system, we simulated the measurement in MCNPX-PoliMi assuming a timing resolution of 200 ps (from 2 ns in the current system) and a source interaction location resolution of 5 mm (from 3 cm). These simulated improvements in scatter camera performance resulted in the radial-distance relative resolution decreasing to an average of 11%. • We prove a downward separation for $\mathsf{\Sigma}_2$-time classes. Specifically, we prove that if $\Sigma_2$E does not have polynomial size non-deterministic circuits, then $\Sigma_2$SubEXP does not have \textit{fixed} polynomial size non-deterministic circuits. To achieve this result, we use Santhanam's technique on augmented Arthur-Merlin protocols defined by Aydinlioğlu and van Melkebeek. We show that augmented Arthur-Merlin protocols with one bit of advice do not have fixed polynomial size non-deterministic circuits. We also prove a weak unconditional derandomization of a certain type of promise Arthur-Merlin protocols. Using Williams' easy hitting set technique, we show that $\Sigma_2$-promise AM problems can be decided in $\Sigma_2$SubEXP with $n^c$ advice, for some fixed constant $c$. • Recently the repeating fast radio burst (FRB) 121102 has been confirmed to be an extragalactic event and a persistent radio counterpart has been identified. While other possibilities are not ruled out, the emission properties are broadly consistent with theoretical suggestions of Murase et al. (2016) for quasi-steady nebula emission from a pulsar-driven supernova remnant as a counterpart of FRBs. Here we constrain the model parameters of such a young neutron star scenario for FRB 121102.
If the associated supernova has a conventional ejecta mass of $M_{\rm ej}\gtrsim{\rm a \ few}\ M_\odot$, a neutron star with an age of $t_{\rm age} \sim 10-100 \ \rm yrs$, an initial spin period of $P_{\rm i} \lesssim$ a few ms, and a dipole magnetic field of $B_{\rm dip} \sim 10^{12-13} \ \rm G$ can be compatible with the observations. However, in this case, the magnetically-powered scenario may be more favored as an FRB energy source because of the efficiency problem in the rotation-powered scenario. On the other hand, if the associated supernova is an ultra-stripped one with $M_{\rm ej} \sim 0.1 \ M_\odot$, a younger neutron star with $t_{\rm age} \sim 1-10$ yrs can be the persistent radio source and might produce FRBs with the spin-down power. These possibilities could be distinguished by the decline rate of the quasi-steady radio counterpart. • Electric-field noise from the surfaces of ion-trap electrodes couples to the ion's charge, causing heating of the ion's motional modes. This heating limits the fidelity of quantum gates implemented in quantum information processing experiments. The exact mechanism that gives rise to electric-field noise from surfaces is not well understood and remains an active area of research. In this work, we detail experiments intended to measure ion motional heating rates with exchangeable surfaces positioned in close proximity to the ion, as a sensor of electric-field noise. We have prepared samples with various surface conditions, characterized in situ with scanned probe microscopy and electron spectroscopy, ranging in degrees of cleanliness and structural order. The heating-rate data, however, show no significant differences between the disparate surfaces that were probed. These results suggest that the driving mechanism for electric-field noise from surfaces is due to more than thermal excitations alone.
• The black hole information paradox presumes that quantum field theory in curved spacetime can provide unitary propagation from a near-horizon mode to an asymptotic Hawking quantum. Instead of invoking conjectural quantum gravity effects to modify such an assumption, we propose a self-consistency check. We establish an analogy to Feynman's analysis of a double-slit experiment. Feynman showed that unitary propagation of the interfering particles, namely ignoring the entanglement with the double-slit, becomes an arbitrarily reliable assumption when the screen upon which the interference pattern is projected is infinitely far away. We argue for an analogous self-consistency check for quantum field theory in curved spacetime. We apply it to the propagation of Hawking quanta and test whether ignoring the entanglement with the geometry also becomes arbitrarily reliable in the limit of a large black hole. We present curious results to suggest a negative answer, and we discuss how this loss of naïve unitarity in QFT might be related to a solution of the paradox based on the soft-hair-memory effect. • A method for measuring the real part of the weak (local) value of spin is presented using a variant of the original Stern-Gerlach apparatus. The experiment utilises metastable helium in the $\rm 2^{3}S_{1}$ state. A full simulation using the impulsive approximation has been carried out and it predicts a displacement of the beam by $\rm \Delta_{w} = 17 - 33\,\mu m$. This is on the limit of our detector resolution and we will discuss ways of increasing $\rm \Delta_{w}$. The simulation also indicates how we might observe the imaginary part of the weak value. • We study positive solutions to the heat equation on graphs. We prove variants of the Li-Yau gradient estimate and the differential Harnack inequality. For some graphs, we can show the estimates to be sharp.
We establish new computation rules for differential operators on discrete spaces and introduce a relaxation function that governs the time dependency in the differential Harnack estimate. • We provide a formalism to calculate the cubic interaction vertices of the stable string bit model, in which string bits have $s$ spin degrees of freedom but no space to move. With the vertices, we obtain a formula for the one-loop self-energy, i.e., the $\mathcal{O}\left(1/N^{2}\right)$ correction to the energy spectrum. A rough analysis shows that, when the bit number $M$ is large, the ground state one-loop self-energy $\Delta E_{G}$ should scale as $M^{5-s/4}$ for even $s$ and $M^{4-s/4}$ for odd $s$. In particular, in the case of the protostring, where the Grassmann dimension $s=24$, we have $\Delta E_{G}\sim1/M$, which resembles the $1+1$-dimensional Poincaré-invariant relation $P^{-}\sim1/P^{+}$. We calculate analytically the one-loop correction for the ground energies with $M=3$ and $s=1,\,2$. We then numerically confirm that the large $M$ behavior holds for the $s\leq4$ cases. • On an asymptotically flat manifold $M^n$ with nonnegative scalar curvature, with outer minimizing boundary $\Sigma$, we prove a Penrose-like inequality in dimensions $n < 8$, under suitable assumptions on the mean curvature and the scalar curvature of $\Sigma$. • We estimate the possible accuracies of measurements of Higgs and $W^+W^-$ production at centre-of-mass energies up to 3 TeV at the proposed CLIC $e^+e^-$ collider, incorporating also Higgsstrahlung projections at higher energies that had not been considered previously, and use them to explore the prospective CLIC sensitivities to decoupled new physics. We present the resulting constraints on the Wilson coefficients of dimension-6 operators in a model-independent approach based on the Standard Model effective field theory (SM EFT).
The higher centre-of-mass energy of CLIC, compared to other projects such as the ILC and CEPC, gives it greater sensitivity to the coefficients of some of the operators we study. We find that CLIC Higgs measurements may be sensitive to new physics scales $\Lambda = \mathcal{O}(10)$ TeV for individual operators, reduced to $\mathcal{O}(1)$ TeV sensitivity for a global fit marginalising over the coefficients of all contributing operators. We give some examples of the corresponding prospective constraints on specific scenarios for physics beyond the SM, including stop quarks and the dilaton/radion. • A promising approach to designing mesostructured materials with novel physical behavior is to combine the unique optical and electronic properties of solid nanoparticles with the long-range ordering and facile response of soft matter to weak external stimuli. Here we design, practically realize, and characterize orientationally ordered nematic liquid crystalline dispersions of rod-like upconversion nanoparticles. Boundary conditions on particle surfaces, defined through surface functionalization, promote spontaneous unidirectional self-alignment of the dispersed rod-like nanoparticles, mechanically coupled to the molecular ordering direction of the thermotropic nematic liquid crystal host. As the host is electrically switched at low voltages of ~1 V, the nanorods rotate, yielding tunable upconversion and polarized luminescence properties of the composite. We characterize spectral and polarization dependencies, explain them by invoking models of electrical switching and of the upconversion dependence on the crystalline matrices of the nanorods, and discuss potential practical uses. • We report on subarcsecond observations of complex organic molecules (COMs) in the high-mass protostar IRAS20126+4104 with the Plateau de Bure Interferometer in its most extended configurations.
In addition to the simple molecules SO, HNCO and H2-13CO, we detect emission from CH3CN, CH3OH, HCOOH, HCOOCH3, CH3OCH3, CH3CH2CN, CH3COCH3, NH2CN, and (CH2OH)2. SO and HNCO present an X-shaped morphology consistent with tracing the outflow cavity walls. Most of the COMs have their peak emission at the putative position of the protostar, but also show an extension towards the south(east), coinciding with an H2 knot from the jet at about 800-1000 au from the protostar. This is especially clear in the case of H2-13CO and CH3OCH3. We fitted the spectra at representative positions for the disc and the outflow, and found that the abundances of most COMs are comparable at both positions, suggesting that COMs are enhanced in shocks as a result of the passage of the outflow. By coupling a parametric shock model to a large gas-grain chemical network including COMs, we find that the observed COMs should survive in the gas phase for about 2000 yr, comparable to the shock lifetime estimated from the water masers at the outflow position. Overall, our data indicate that COMs in IRAS20126+4104 may arise not only from the disc, but also from dense and hot regions associated with the outflow. • We consider a general primitively polarized K3 surface $(S,H)$ of genus $g+1$ and a 1-nodal curve $\widetilde C\in |H|$. We prove that the normalization $C$ of $\widetilde C$ has a surjective Wahl map provided $g=40,42$ or $\ge 44$. • An analytical, single-parametric, complete and orthonormal basis set consisting of the hydrogen wave-functions is put forward for \textit{ab initio} calculations of observable characteristics of an arbitrary many-electron atom. By introducing a single parameter for the whole basis set of a given atom, namely an effective charge $Z^{*}$, we find a sufficiently good analytical approximation for the atomic characteristics of all elements of the periodic table.
The basis completeness allows us to perform a transition into the secondary-quantized representation for the construction of a regular perturbation theory, which naturally includes correlation effects and allows one to calculate the subsequent corrections easily. The hydrogen-like basis set makes it possible to perform all summations over intermediate states in closed form, with the help of the decomposition of the multi-particle Green function into a convolution of single-electronic Coulomb Green functions. We demonstrate that our analytical zeroth-order approximation provides better accuracy than the Thomas-Fermi model, and already in second-order perturbation theory our results become comparable with those via multi-configuration Hartree-Fock. • The anti-Stokes scattering and Stokes scattering in the stimulated Brillouin scattering (SBS) cascade have been studied via Vlasov-Maxwell simulation. In the high-intensity laser-plasma interaction, stimulated anti-Stokes Brillouin scattering (SABS) occurs after the second-stage SBS rescattering. A mechanism for SABS has been put forward to explain this phenomenon, and SABS competes with the SBS rescattering to determine the total SBS reflectivity. Thus, the SBS rescattering, including SABS, is an important saturation mechanism of SBS, and should be taken into account in the high-intensity laser-plasma interaction. • This paper investigates oscillation-free stability conditions of numerical methods for linear parabolic partial differential equations, with some example extrapolations to nonlinear equations. Numerical oscillations, which are not clearly understood, can produce infeasible results. Since oscillation-free behavior is not ensured by stability conditions, a more precise condition would be useful for accurate solutions. Using von Neumann and spectral analyses, we find and explore oscillation-free conditions for several finite difference schemes.
Further relationships between oscillatory behavior and eigenvalues are supported with numerical evidence and proof. Also, evidence suggests that the oscillation-free stability condition for a consistent linearization may be sufficient to provide oscillation-free stability of the nonlinear solution. These conditions are verified numerically for several example problems by visually comparing the analytical conditions to the behavior of the numerical solution for a wide range of mesh sizes.

• We consider row sequences of (type II) Hermite-Padé approximations with common denominator associated with a vector ${\bf f}$ of formal power expansions about the origin. In terms of the asymptotic behavior of the sequence of common denominators, we describe some analytic properties of ${\bf f}$ and restate some conjectures corresponding to questions once posed by A. A. Gonchar for row sequences of Padé approximants.

• We prove a result on non-clustering of particles in a two-dimensional Coulomb plasma, which holds provided that the inverse temperature $\beta$ satisfies $\beta>1$. As a consequence we obtain a result on crystallization as $\beta\to\infty$: the particles will, on a microscopic scale, appear at a certain distance from each other. The estimation of this distance is connected to Abrikosov's conjecture that the particles should freeze up according to a honeycomb lattice when $\beta\to\infty$.

• In systems having an anisotropic electronic structure, such as the layered materials graphite, graphene and cuprates, impulsive light excitation can coherently stimulate specific bosonic modes, with exotic consequences for the emergent electronic properties. Here we show that the population of E$_{2g}$ phonons in the multiband superconductor MgB$_2$ can be selectively enhanced by femtosecond laser pulses, leading to a transient control of the number of carriers in the $\sigma$-electronic subsystem.
The nonequilibrium evolution of the material optical constants is followed in the spectral region sensitive to both the a- and c-axis plasma frequencies and modeled theoretically, revealing the details of the $\sigma$-$\pi$ interband scattering mechanism in MgB$_2$.

• Jan 18 2017 hep-ph arXiv:1701.04794v1 We propose supersymmetric Majoron inflation in which the Majoron field $\Phi$ responsible for generating right-handed neutrino masses may also be suitable for giving low scale "hilltop" inflation, with a discrete lepton number $Z_N$ spontaneously broken at the end of inflation, while avoiding the domain wall problem. In the framework of non-minimal supergravity, we show that a successful spectral index can result with small running together with small tensor modes. We show that a range of heaviest right-handed neutrino masses can be generated, $m_N\sim 10^1-10^{16}$ GeV, consistent with the constraints from reheating and domain walls.

• Quasirational presentations ($QR$-presentations) of (pro-$p$)groups are studied. Such presentations include, in particular, aspherical presentations of discrete groups and their subpresentations, as well as the still mysterious pro-$p$-groups with a single defining relation. We provide a positive answer to the conjecture of O.V. Melnikov on the existence of a proper envelope $Env^p$ of aspherical presentations by showing a generalized equivalence of $\mathbb{F}_p$ and $\mathbb{Z}_p$ permutationality in the case of $QR$-presentations. Using schematization of $QR$-presentations we answer the question of Serre on one-relator pro-$p$-groups.

• Nowadays the distributed computing approach has become very popular due to several advantages over the centralized computing approach, as it offers high-performance computing at very low cost. Each router implements some queuing mechanism for resource allocation in the best possible optimized manner and governs packet transmission and buffering.
In this paper, different types of queuing disciplines are implemented for packet transmission when bandwidth is allocated and packet dropping occurs due to buffer overflow. This results in latency, as a dropped packet has to wait in a queue before being transmitted again. Some common queuing mechanisms are first in first out, priority queuing, weighted fair queuing, etc. This work targets simulation in a heterogeneous environment through a simulator tool to improve quality of service by evaluating the performance of the said queuing disciplines. This is demonstrated by interconnecting heterogeneous devices through a step topology. The authors compare data, voice, and video traffic by analyzing performance based on packet drop rate, delay variation, end-to-end delay, and queuing delay, and examine how the different queuing disciplines affect the applications and the utilization of network resources at the routers. Before evaluating the performance of the connected devices, a Unified Modeling Language class diagram is designed to represent the static model for evaluating the performance of the step topology. Results are described through various case studies.

• Asteroseismic parameters allow us to measure the basic stellar properties of field giants observed far across the Galaxy. Most such determinations are, up to now, based on simple scaling relations involving the large frequency separation, $\Delta\nu$, and the frequency of maximum power, $\nu_{max}$. In this work, we implement $\Delta\nu$ and the period spacing, $\Delta P$, computed along detailed grids of stellar evolutionary tracks, into stellar isochrones and hence into a Bayesian method of parameter estimation. Tests with synthetic data reveal that masses and ages can be determined with typical precision of 5 and 19 per cent, respectively, provided precise seismic parameters are available.
Adding independent information on the stellar luminosity, these values can decrease down to 3 and 10 per cent, respectively. The application of these methods to NGC 6819 giants produces a mean age in agreement with those derived from isochrone fitting, and no evidence of systematic differences between RGB and RC stars. The age dispersion of NGC 6819 stars, however, is larger than expected, with at least part of the spread ascribable to stars that underwent mass-transfer events.

• In 2010, Joyce et al. defined the leverage centrality of vertices in a graph as a means to analyze functional connections within the human brain. In this metric the degree of a vertex is compared to the degrees of all its neighbors. We investigate this property from a mathematical perspective. We first outline some of the basic properties and then compute leverage centralities of vertices in different families of graphs. In particular, we show there is a surprising connection between the number of distinct leverage centralities in the Cartesian product of paths and the triangle numbers.
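The leverage centrality discussed in the last abstract above can be computed directly from its definition; a minimal sketch, assuming the usual Joyce et al. formula in which the leverage of a vertex $v$ with degree $k_v$ is $\frac{1}{k_v}\sum_{w \in N(v)} \frac{k_v - k_w}{k_v + k_w}$ (the graph representation and example below are illustrative, not from the paper):

```python
# Leverage centrality (as defined by Joyce et al., 2010):
# l(v) = (1/k_v) * sum over neighbors w of (k_v - k_w) / (k_v + k_w),
# where k_v is the degree of vertex v.

def leverage_centrality(adj):
    """adj: dict mapping each vertex to a list of its neighbors."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    return {
        v: sum((deg[v] - deg[w]) / (deg[v] + deg[w]) for w in adj[v]) / deg[v]
        for v in adj
        if deg[v] > 0  # isolated vertices have no defined leverage here
    }

# Path graph P4: 0 -- 1 -- 2 -- 3
p4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
lc = leverage_centrality(p4)
# Endpoints get -1/3, interior vertices get 1/6: only two distinct values,
# a tiny instance of the small number of distinct leverage centralities in paths.
```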
https://www.vedantu.com/chemistry/preparation-alkyl-halides
# Preparation of Alkyl Halides

## Introduction to Alkyl Halides

Alkyl halides (or haloalkanes) are compounds in which one or more hydrogen atoms in an alkane are replaced by halogen atoms (fluorine, chlorine, bromine, or iodine). These are organic compounds with the general formula $RX$, where $R$ denotes the alkyl group and $X$ denotes the halogen (a group 17 element). Alkyl halides and aryl halides (also known as haloarenes) are the two different types of substituted hydrocarbons. The major difference between alkyl halides and aryl halides is that haloalkanes are derived from alkanes (open-chain hydrocarbons), while haloarenes are derived from aromatic hydrocarbons. Now we will discuss the preparation of alkyl halides. Both haloalkanes and haloarenes can be prepared from other organic compounds. Some methods of preparation of alkyl halides and aryl halides are given below:

### 1. Preparation of Alkyl Halides From Alkenes

The addition of hydrogen halides to alkenes follows either Markovnikov’s rule or the Kharasch (anti-Markovnikov) effect. All electrophilic addition reactions of alkenes following Markovnikov’s rule are known as Markovnikov addition reactions. (Simply stated: “hydrogen is added to the carbon with the most hydrogens and the halide to the carbon with the fewest hydrogens.”)
General Reaction

$\underset{\text{Alkene}}{R-CH=CH_2} + \underset{\text{Hydrogen halide}}{H-X} \to \underset{\text{Markovnikov product}}{R-CHX-CH_3} \text{ or } \underset{\text{anti-Markovnikov product}}{R-CH_2-CH_2X}$

Conversion of $-C=C-$ (alkenes) into $-C-X$ (alkyl halides):

$\underset{\text{Symmetric alkene}}{R-CH=CH-R} + \underset{\text{Hydrogen halide}}{H-X} \to \underset{\text{Alkyl halide}}{R-CH_2-CHX-R}$

Preparation of Alkyl Chlorides / Alkyl Bromides / Alkyl Iodides:

$\underset{\text{Symmetric alkene}}{R-CH=CH-R} + \underset{\text{Hydrogen chloride}}{H-Cl} \to \underset{\text{Alkyl chloride}}{R-CH_2-CHCl-R}$

$\underset{\text{Unsymmetric alkene}}{R-CH=CH-R'} + \underset{\text{Hydrogen halide}}{H-X} \to \underset{\text{Mixture of alkyl halides}}{R-CH_2-CHX-R' \text{ or } R-CHX-CH_2-R'}$

### 2. Preparation of Alkyl Halides by Free Radical Halogenation

Free radical halogenation gives a mixture of mono-, di-, tri-, and even tetra-substituted haloalkanes (alkyl halides). Since we usually require only one specific alkyl halide rather than a mixture, this method is rarely used.

$CH_3CH_2CH_2CH_3 \xrightarrow[]{Cl_2,\ UV\ light} CH_3CH_2CHClCH_3 + CH_3CH_2CH_2CH_2Cl$

### 3. Preparation of Alkyl Halides from Alcohols

In this synthesis of alkyl halides, the hydroxyl group of the alcohol is replaced by a halogen atom from the other reagent. The reaction requires a catalyst for primary and secondary alcohols, whereas tertiary alcohols do not require any catalyst.

$CH_3CH_2OH + SOCl_2 \overset{\Delta}{\rightarrow} CH_3CH_2Cl + SO_2 + HCl$

$3CH_3CH_2OH + PCl_3 \overset{\Delta}{\rightarrow} 3CH_3CH_2Cl + P(OH)_3$

$CH_3CH_2OH + PCl_5 \overset{\Delta}{\rightarrow} CH_3CH_2Cl + POCl_3 + HCl$

$3CH_3CH_2OH + PBr_3 \overset{\Delta}{\rightarrow} 3CH_3CH_2Br + P(OH)_3$

### Preparation of Aryl Halides

### 1. Preparation of Aryl Halides by Electrophilic Substitution Reactions

Aryl halides can be prepared by an electrophilic aromatic substitution reaction of arenes with halogens in the presence of a Lewis acid.

### 2. Preparation of Aryl Halides through Sandmeyer’s Reaction

Aryl halides can be prepared by mixing a solution of freshly prepared diazonium salt, obtained from a primary aromatic amine, with cuprous chloride or cuprous bromide. In a Sandmeyer reaction, a diazonium salt is reacted with copper $(I)$ bromide, copper $(I)$ chloride, or potassium iodide $(KI)$ to form the respective aryl halide. The diazonium salt can be prepared from aniline by reaction with nitrous acid at cold temperatures.

Did You Know?

The order of reactivity of halogen acids towards alcohols is $HI > HBr > HCl$. In the case of halogen acids, bond length increases from $HCl$ to $HI$. The longer the bond length, the lower the dissociation energy, and hence the greater the reactivity.

1. What is the difference between primary, secondary, and tertiary alkyl halides in general methods of preparation of alkyl halides?

Primary Alkyl Halide: In a primary (1°) alkyl halide (or haloalkane), the carbon bonded to the halogen atom is attached to only one other alkyl group.

Secondary Alkyl Halide: In a secondary (2°) alkyl halide (or haloalkane), the carbon bonded to the halogen atom is joined directly to two other alkyl groups, which can be the same or different.

Tertiary Alkyl Halide: In a tertiary (3°) alkyl halide (or haloalkane), the carbon atom holding the halogen is directly joined to three alkyl groups, which can be any combination of the same or different groups.
http://math.stackexchange.com/questions/208378/intuitive-explanation-of-why-dim-operatornameim-t-dim-operatornameker-t?answertab=active
# Intuitive explanation of why $\dim\operatorname{Im} T + \dim\operatorname{Ker} T = \dim V$

I'm having a hard time truly understanding the meaning of $\dim\operatorname{Im} T + \dim\operatorname{Ker} T = \dim V$ where $V$ is the domain of a linear transformation $T:V\to W$. I've used this equation several times in many problems, and I've gone over the proof and I believe that I fully understand it, but I don't understand the intuitive reasoning behind it. I'd appreciate an intuitive explanation of it. Just to be clear, I do understand the equation itself, I am able to use it, and I know how to prove it; my question is what is the meaning of this equation from a linear algebra perspective. - It's a combination of the first isomorphism theorem for groups and the fact that $\dim (U \oplus V) = \dim U + \dim V$. So I guess you should seek the meaning of the first isomorphism theorem in group theory. There the proof is pretty conceptual: you draw two short exact sequences and proceed to construct an isomorphism between them. – Alexei Averchenko Oct 6 '12 at 21:43 Geometrically you can understand the theorem by considering fibers over the points of the image. – Alexei Averchenko Oct 6 '12 at 21:45 I like to think of it as some form of conservation of dimension. If you have a linear mapping then it acts on each dimension of the domain (this is a consequence of linear mappings being completely determined by their action on any given basis of a space). There are only two possibilities for each dimension: either it is preserved or it is compressed (i.e. taken to $\mathbf{0}$). The net dimension of the compressed portion of the domain is your nullity, i.e. the dimension of your kernel. The net dimension which is preserved is your rank, i.e. the dimension of your image space. This gives you an intuitive understanding of the rank-nullity theorem.
As a note, if you take a minute and think deeply then you'll realize this argument is essentially the same as the projections that trb456 mentioned. - Wow, this is great! The basis of an image is the linearly independent vectors in the transformation matrix; its dimension is the number of such vectors, or the rank of the matrix! And the kernel is what's left! So the rank-nullity theorem actually corresponds with $\dim\operatorname{Im} T + \dim\operatorname{Ker} T = \dim V$! Because linear transformations are actually matrices :). Your second paragraph made that all very clear, and I thank you!! – Daniel Oct 6 '12 at 21:44 And yes, I realize now that this is essentially what trb456 said, but you made it much more intuitive, which is just what I needed. Bringing the rank-nullity theorem into the mix was especially helpful, I really like it when the relationship between linear transformations and matrices makes itself clear :). I may be getting overexcited, but there's really nothing quite like truly understanding a topic in linear algebra, I mean beyond the "use it to solve a problem" way. Thank you very very much :). – Daniel Oct 6 '12 at 21:46 @Daniel You're very welcome. I'm glad you found the argument intuitive. – EuYu Oct 6 '12 at 21:48 @Daniel: I'd like to offer a word of warning about your statement that "linear transformations are actually matrices". A transformation $T:V\to W$ only "becomes" a matrix once you choose a basis for each of $V$ and $W$. If you choose different bases, then the matrix for $T$ changes. – Brad Oct 6 '12 at 21:50 Also, one remark about EuYu's answer: given a basis for $V$, it's not necessarily true that the number of basis vectors sent to $0$ by $T$ is equal to the nullity of $T$. But it is true that there exists a basis for $V$ that has this property - namely, take a basis for $\ker T$ and extend it to a basis for $V$. – Brad Oct 6 '12 at 21:54 You can think about the Rank-Nullity Theorem geometrically in terms of things called fibers over points.
Think about the case when your mapping $f: U \to V$ is surjective, and consider the mapping $f^{-1}: V \to 2^U$ that takes each point $p \in V$ to its preimage $f^{-1}(p)$ (called the fiber over $p$) in $U$. You can easily check that the fibers are affine subspaces of $U$ parallel to each other (each point of $U$ lies on exactly one fiber). Also, the fiber passing through $0 \in U$ is exactly $\ker f$. You can thus picture $U$ as being separated into an infinite number of thin layers, like a sedimentary rock. From this you can easily see that to uniquely specify a point in $U$ you can first specify a fiber (the set of fibers being parameterized by $V = \operatorname{im} f$) and then specify a point on a fiber (that is (non-uniquely) parameterized by $\ker f$). This gives you the Rank-Nullity Theorem: $$\dim \ker f + \dim \operatorname{im} f = \dim U.$$ For example, in the case of the mapping $f: \mathbb{R}^2 \to \mathbb{R},\; (x, y) \to x + y$, the fibers will satisfy an equation of the form $y = a - x$ for some $a \in \mathbb{R}$. You can check that here $y + x = a - x + x = a$ indeed does not depend on either $y$ or $x$. Now, how many independent variables do you need to specify a point in $\mathbb{R}^2$? You need one variable ($a$) to specify a fiber (equivalently, a point on $\mathbb{R}$), and another one (say, $x$) to specify a point on the fiber - that's two degrees of freedom, as expected! The Rank-Nullity theorem states that for any surjective linear mapping $f: U \to V$, in any dimension you can use the same trick to uniquely parameterize any point in $U$. The same goes for any non-surjective linear mapping, of course; you'll just need to corestrict it to its image. Alternatively, you could draw another line through $0 \in \mathbb{R}^2$ distinct from $\ker f$.
You can easily show that it crosses each fiber of $f$ exactly once, so you can use it to parameterize fibers more explicitly: identify this line with $\operatorname{im} f$, then for any two points on $\ker f$ and $\operatorname{im} f$ you can uniquely obtain the corresponding point of $\mathbb{R}^2$ using the parallelogram rule. Rank-Nullity states that you can do this sort of thing in any dimension and for any $f$ (instead of lines you'll have affine subspaces of different dimensions, though). This is a geometric picture of what's going on. - Perhaps think of it in terms of projections? Whatever $T$ does not project into the image must disappear; i.e. it is in the kernel. This is why it is the domain dimension that matters. The image is an injection into the range, so it has the same dimension as the corresponding preimage in the domain. The image of the kernel is just zero, so it is the dimension of the kernel in the domain that matters. - But isn't $\operatorname{Im}T$ part of $W$, and not $V$? What I mean is, by your explanation, $\operatorname{Im}T$ is based off of elements in $V$, whereas by my understanding (and I might have misunderstood), $\operatorname{Im}T$ is based off of elements in $W$. So if $\dim\operatorname{Im}T$ is based on $W$, it should (intuitively) have no relation with $\dim V$. I hope I was clear. – Daniel Oct 6 '12 at 21:33 I understand now (partly thanks to EuYu's answer) what you were trying to explain, thank you. – Daniel Oct 6 '12 at 21:49 Fiber has the same dimension as kernel, not image. – Alexei Averchenko Oct 7 '12 at 3:07 The fiber of the image of T is not the kernel. Perhaps I'm just phrasing badly--how might you re-word? – trb456 Oct 7 '12 at 9:53 It doesn't make sense to speak of "the fibre of the image of $T$", at least not as a subset of $V$. Given a function $f:X\to Y$, there is a fibre $f^{-1}(y)$ over each point $y\in Y$.
In the case of a linear transformation $T:V\to W$ and $w\in W$, the fibre over $w$ is the empty set if $w\not\in \operatorname{Im}T$, and if $w\in\operatorname{Im}T$ then the fibre $T^{-1}(w)$ is an affine subspace isomorphic to $\ker T$. Namely, it is $\ker T+v_0$ for any given $v_0\in V$ with $T(v_0)=w$. (For example, $\ker T$ is the fibre over $0$.) – Brad Oct 9 '12 at 0:52 Since $T$ is defined over all of $V$, its image must be at most as big as $V$ is; on the other hand there might be something missing. This means that it "goes to nothing", and "nothing", in this case, is the zero vector. As trb456 says, one way to think of it is this (which is the same thing the proof goes through): a vector in $V$ maps into something which is either zero or isn't. When you do this for every vector in $V$, then you can check that all the non-zero images form a vector space (just like the kernel), which means that since everything goes somewhere, a basis of $V$ must go into some zeros and some non-zeros. This might be a little confusing. If it is, let me know and I'll try to clear it up. -
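The identity $\dim\operatorname{Im} T + \dim\operatorname{Ker} T = \dim V$ discussed above can also be checked numerically for any concrete matrix representing $T$; a small sketch, assuming NumPy is available (the particular matrix is just an illustrative choice):

```python
import numpy as np

# For T: V -> W given by an m x n matrix A, dim Im(T) = rank(A) and
# dim Ker(T) = n - rank(A), so rank + nullity = n = dim V.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])  # T(x, y, z) = (x + y, 0), a map R^3 -> R^2

n = A.shape[1]                   # dimension of the domain V
rank = np.linalg.matrix_rank(A)  # dimension of the image of T
nullity = n - rank               # dimension of the kernel of T

print(rank, nullity)             # they sum to dim V = 3
```

Here one dimension of the domain is "preserved" (the rank) and two are "compressed" to zero (the nullity), exactly as in EuYu's conservation-of-dimension picture.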
https://zbmath.org/?q=an:0715.20017
## Buildings. (English) Zbl 0715.20017

New York etc.: Springer-Verlag. viii, 215 p. DM 78.00 (1989). This book gives a good introduction to the theory of buildings. It gives complete proofs and plenty of exercises and examples. Most of the results can be found in the works of Tits (usually in French). Chapter VII gives a survey of some applications. There it is shown how the construction of a building that provides a p-adic analogue of a symmetric space can be used to generalize results on the cohomology of arithmetic groups to the case of S-arithmetic groups. Reviewer: F. Levstein

### MSC:

20E42 Groups with a $BN$-pair; buildings
20G10 Cohomology theory for linear algebraic groups
20-02 Research exposition (monographs, survey articles) pertaining to group theory
51E24 Buildings and the geometry of diagrams
51F15 Reflection groups, reflection geometries

### Keywords:

buildings; cohomology of arithmetic groups
https://obsstudies.org/simultaneous-sensitivity-analysis-in-stata-arsimsens-and-pairsimsens/
July 12, 2015 # Simultaneous Sensitivity Analysis in Stata: arsimsens and pairsimsens A simultaneous sensitivity analysis assesses how sensitive an inference of a non-zero treatment effect is to an unobserved confounder with a specified relationship to the treatment and response. Gastwirth et al. (1998) develops a method of simultaneous sensitivity analysis that can be used after 1:1 matching; Small et al. (2009) modifies the method so that it can be applied after 1:k and full matching. This paper describes the commands pairsimsens and arsimsens, which implement, respectively, the analyses of Gastwirth et al. (1998) and Small et al. (2009) in Stata. The .ado and .hlp files for the software presented in the paper are provided in a .zip file in the supplementary materials. Software Descriptions
https://msp.org/agt/2022/22-4/p02.xhtml
ISSN (electronic): 1472-2739; ISSN (print): 1472-2747

Chromatic (co)homology of finite general linear groups

### Samuel M A Hutchinson, Samuel J Marsh and Neil P Strickland

Algebraic & Geometric Topology 22 (2022) 1511–1614

##### Abstract

We study the Morava $E$-theory (at a prime $p$) of $B\mathrm{GL}_d(F)$, where $F$ is a finite field with $|F| \equiv 1 \pmod{p}$. Taking all $d$ together, we obtain a structure with two products $\times$ and $\bullet$. We prove that it is a polynomial ring under $\times$ and that the module of $\times$-indecomposables inherits a $\bullet$-product, and we describe the structure of the resulting ring. In the process, we prove many auxiliary structural results.

##### Keywords

Morava K-theory, general linear groups

##### Mathematical Subject Classification 2010

Primary: 55N20 Secondary: 14L05, 55N22, 55R35
https://ccssanswers.com/spectrum-math-grade-7-chapter-2-posttest-answer-key/
# Spectrum Math Grade 7 Chapter 2 Posttest Answer Key This handy Spectrum Math Grade 7 Answer Key Chapter 2 Posttest provides detailed answers for the workbook questions ## Spectrum Math Grade 7 Chapter 2 Posttest Answers Key Check What You Learned Multiplying and Dividing Rational Numbers Rewrite each expression using the distributive property. Question 1. a. 7 × (10 + a) = _____________ Answer: (7 × 10) + (7 × a) 7 × (10 + a) = (7 × 10) + (7 × a) The distributive property states that multiplying the sum of two or more addends by a number yields the same outcome as multiplying each addend separately by the number and combining the resulting products. b. (2 × c) + (2 × d) = _____________ Answer: 2 × (c + d) (2 × c) + (2 × d) = 2 × (c + d) The distributive property states that multiplying the sum of two or more addends by a number yields the same outcome as multiplying each addend separately by the number and combining the resulting products. Question 2. a. (y × 2) + (y × 6) = _____________ Answer: y × (2 + 6) (y × 2) + (y × 6) = y × (2 + 6) The distributive property states that multiplying the sum of two or more addends by a number yields the same outcome as multiplying each addend separately by the number and combining the resulting products. b. 5 × (k + 4) = _____________ Answer: (5 × k) + (5 × 4) 5 × (k + 4) = (5 × k) + (5 × 4) The distributive property states that multiplying the sum of two or more addends by a number yields the same outcome as multiplying each addend separately by the number and combining the resulting products. Identify the property described as commutative, associative, identity, or zero. Question 3. When three or more numbers are multiplied together, the product is the same regardless of how the factors are grouped. _________ When three or more numbers are multiplied together, the product is the same regardless of how the factors are grouped is called associative property. 
According to the associative principle of multiplication, when multiplying three integers, the outcome will always be the same regardless of how the numbers are grouped. If there are three numbers, x, y and z, the associative property of multiplication implies that x × (y × z) = (x × y) × z. Question 4. When zero is divided by any number, the quotient is always 0. _________ When zero is divided by any number, the quotient is always 0 is called zero property. According to the zero property of division, if 0(zero) is divided by any other number, the result will be zero. If there is a number, x then the zero property of division implies that 0 ÷ x = 0. Question 5. The product of any number and 1 is that number. ___________ The product of any number and 1 is that number is called Identity Property. According to the identity property of multiplication, if a number is multiplied by 1 (one), the result will be the original number. This property is applied when numbers are multiplied by 1. If there is a number, x then the identity property implies that x × 1 = x. Question 6. When two numbers are multiplied together, the product is the same regardless of the order of the factors. ________ When two numbers are multiplied together, the product is the same regardless of the order of the factors is called Commutative Property. According to the commutative property of multiplication, changing the order of the numbers we are multiplying does not change the product. If there are two numbers, x and y, the commutative property of multiplication implies that x × y = y × x. Question 7. a. y × x = x × y ____________ y × x = x × y The above expression is the example for Commutative Property. When two numbers are multiplied together, the product is the same regardless of the order of the factors is called Commutative Property. According to the commutative property of multiplication, changing the order of the numbers we are multiplying does not change the product. 
If there are two numbers, x and y, the commutative property of multiplication implies that x × y = y × x. b. (a × b) × c = a × (b × c) ____________ (a × b) × c = a × (b × c) The above expression is the example for associative property. According to the associative principle of multiplication, when multiplying three integers, the outcome will always be the same regardless of how the numbers are grouped. If there are three numbers, x, y and z, the associative property of multiplication implies that x × (y × z) = (x × y) × z. Question 8. a. 5 × 1 = 5 ____________ 5 × 1 = 5 The above expression is the example for Identity Property. According to the identity property of multiplication, if a number is multiplied by 1 (one), the result will be the original number. This property is applied when numbers are multiplied by 1. If there is a number, x then the identity property implies that x × 1 = x. b. 0 ÷ 6 = 0 ____________ 0 ÷ 6 = 0 The above expression is the example for zero property. When zero is divided by any number, the quotient is always 0 is called zero property. According to the zero property of division, if 0(zero) is divided by any other number, the result will be zero. If there is a number, x then the zero property of division implies that 0 ÷ x = 0. Change each rational number into a decimal using long division. Place a line over digits which repeat. Question 9. a. $$\frac{2}{9}$$ _________________________ Rational numbers can be converted into decimals using long division. All fractions will be turned into decimals that either terminate or repeat. Repeating decimals can be given as a same pattern of numbers will get when we perform division. A line will be placed above the digits which are repeating. Here, If we divide 2 by 9, we will get repeating decimal 0.222, so a line was indicated above 2. Therefore, $$\frac{2}{9}$$ = 0.222 b. $$\frac{4}{9}$$ = _______________________________ Rational numbers can be converted into decimals using long division. 
All fractions will be turned into decimals that either terminate or repeat. Repeating decimals can be given as a same pattern of numbers will get when we perform division. A line will be placed above the digits which are repeating. Here, If we divide 4 by 9, we will get repeating decimal 0.444, so a line was indicated above 4. Therefore, $$\frac{4}{9}$$ = 0.444 Question 10. a. $$\frac{1}{11}$$ _____________________________ Rational numbers can be converted into decimals using long division. All fractions will be turned into decimals that either terminate or repeat. Repeating decimals can be given as a same pattern of numbers will get when we perform division. A line will be placed above the digits which are repeating. Here, If we divide 1 by 11, we will get repeating decimal 0.0909, so a line was indicated above 09. Therefore, $$\frac{1}{11}$$ = 0.0909 b. $$\frac{2}{5}$$ = ________________ Rational numbers can be converted into decimals using long division. All fractions will be turned into decimals that either terminate or repeat. Repeating decimals can be given as a same pattern of numbers will get when we perform division. A line will be placed above the digits which are repeating. Here, If we divide 2 by 5, we will get terminating decimal 0.4 Therefore, $$\frac{2}{5}$$ = 0.4 Multiply or divide. Write answers in simplest form. Question 11. a. $$\frac{3}{4}$$ × $$\frac{1}{6}$$ = ____ Answer: $$\frac{1}{8}$$ $$\frac{3}{4}$$ × $$\frac{1}{6}$$ Reduce the above fractions into simplest form if possible. Then, multiply the numerators and denominators separately. = $$\frac{3 × 1}{4 × 6}$$ Divide 3 in numerator and 6 in denominator by 3, which is a common factor = $$\frac{1 × 1}{4 × 2}$$ = $$\frac{1}{8}$$ Therefore ,$$\frac{3}{4}$$ × $$\frac{1}{6}$$ = $$\frac{1}{8}$$ b. $$\frac{5}{7}$$ × $$\frac{2}{3}$$ = ____ Answer: $$\frac{10}{21}$$ $$\frac{5}{7}$$ × $$\frac{2}{3}$$ Reduce the above fractions into simplest form if possible. 
Then, multiply the numerators and denominators separately. = $$\frac{5 × 2}{7 × 3}$$ = $$\frac{10}{21}$$ Therefore, $$\frac{5}{7}$$ × $$\frac{2}{3}$$ = $$\frac{10}{21}$$ c. 5$$\frac{1}{2}$$ × 1$$\frac{1}{4}$$ = ____ Answer:6$$\frac{7}{8}$$ 5$$\frac{1}{2}$$ × 1$$\frac{1}{4}$$ Convert the above numbers into improper fractions to make calculations easy =$$\frac{11}{2}$$ × $$\frac{5}{4}$$ Reduce the above fractions into simplest form if possible. Then, multiply the numerators and denominators separately. = $$\frac{11 × 5}{2 × 4}$$ = $$\frac{55}{8}$$ =  6$$\frac{7}{8}$$ Therefore, 5$$\frac{1}{2}$$ × 1$$\frac{1}{4}$$ =6$$\frac{7}{8}$$ Question 12. a. 5$$\frac{1}{4}$$ ÷ $$\frac{1}{6}$$ = ____ Answer: 31$$\frac{1}{2}$$ 5$$\frac{1}{4}$$ ÷ $$\frac{1}{6}$$ Convert the above numbers into improper fractions to make calculations easy = $$\frac{21}{4}$$ ÷ $$\frac{1}{6}$$ To divide by a fraction, multiply by its reciprocal. Here, the reciprocal of $$\frac{1}{6}$$ = $$\frac{6}{1}$$ $$\frac{21}{4}$$ ÷ $$\frac{1}{6}$$ = $$\frac{21}{4}$$ × $$\frac{6}{1}$$ Reduce the above fractions into simplest form if possible. Then, multiply the numerators and denominators separately. = $$\frac{21 × 6}{4 × 1}$$ Divide 6 in numerator and 4 in denominator by 2, which is a common factor = $$\frac{21 × 3}{2 × 1}$$ = $$\frac{63}{2}$$ = 31$$\frac{1}{2}$$ Therefore, 5$$\frac{1}{4}$$ ÷ $$\frac{1}{6}$$ = 31$$\frac{1}{2}$$ b. 6$$\frac{4}{7}$$ ÷ 12 = ____ Answer: $$\frac{23}{42}$$ 6$$\frac{4}{7}$$ ÷ 12 Convert the above numbers into improper fractions to make calculations easy = $$\frac{46}{7}$$ ÷ $$\frac{12}{1}$$ To divide by a fraction, multiply by its reciprocal. Here, the reciprocal of $$\frac{12}{1}$$ = $$\frac{1}{12}$$ So, $$\frac{46}{7}$$ ÷ $$\frac{12}{1}$$ = $$\frac{46}{7}$$ × $$\frac{1}{12}$$ Reduce the above fractions into simplest form if possible. Then, multiply the numerators and denominators separately. 
= $$\frac{46 × 1}{7 × 12}$$ Divide 46 in the numerator and 12 in the denominator by 2, which is a common factor = $$\frac{23 × 1}{7 × 6}$$ = $$\frac{23}{42}$$ Therefore, 6$$\frac{4}{7}$$ ÷ 12 = $$\frac{23}{42}$$
c. 1$$\frac{1}{2}$$ ÷ $$\frac{3}{5}$$ = ____ Answer: 2$$\frac{1}{2}$$ 1$$\frac{1}{2}$$ ÷ $$\frac{3}{5}$$ Convert the above numbers into improper fractions to make the calculations easy = $$\frac{3}{2}$$ ÷ $$\frac{3}{5}$$ To divide by a fraction, multiply by its reciprocal. Here, the reciprocal of $$\frac{3}{5}$$ = $$\frac{5}{3}$$ So, $$\frac{3}{2}$$ ÷ $$\frac{3}{5}$$ = $$\frac{3}{2}$$ × $$\frac{5}{3}$$ Reduce the above fractions into simplest form if possible. Then, multiply the numerators and denominators separately. = $$\frac{3 × 5}{2 × 3}$$ Divide 3 in the numerator and 3 in the denominator by 3, which is a common factor = $$\frac{1 × 5}{2 × 1}$$ = $$\frac{5}{2}$$ = 2$$\frac{1}{2}$$ Therefore, 1$$\frac{1}{2}$$ ÷ $$\frac{3}{5}$$ = 2$$\frac{1}{2}$$
Question 13. a. 7 × (-6) = ____ Answer: -42 The product of two positive integers is always positive. The product of two negative integers is always positive. The product of one positive and one negative integer is always negative. Here -6 is a negative integer and 7 is a positive integer, therefore their product is negative.
b. 3 × (-4) = ____ Answer: -12 The product of two positive integers is always positive. The product of two negative integers is always positive. The product of one positive and one negative integer is always negative. Here -4 is a negative integer and 3 is a positive integer, therefore their product is negative.
c. -5 × (-2) = ____ Answer: 10 The product of two positive integers is always positive. The product of two negative integers is always positive. The product of one positive and one negative integer is always negative. Here -5 and -2 are both negative integers, therefore their product is positive.
Question 14. a.
12 ÷ (-4) = ____ Answer: -3 Let 12 ÷ (-4) = x. As division and multiplication are inverse operations, the above equation can be written as 12 = x × (-4), so x = -3. Therefore, 12 ÷ (-4) = -3. The quotient of two integers with the same sign is positive and the quotient of two integers with different signs is negative. Here 12 and -4 have different signs, hence the result is negative.
b. -15 ÷ (-5) = ____ Answer: 3 Let -15 ÷ (-5) = x. As division and multiplication are inverse operations, the above equation can be written as -15 = x × (-5), so x = 3. Therefore, -15 ÷ (-5) = 3. The quotient of two integers with the same sign is positive and the quotient of two integers with different signs is negative. Here -15 and -5 have the same sign, hence the result is positive.
c. -21 ÷ 7 = ____ Answer: -3 Let -21 ÷ 7 = x. As division and multiplication are inverse operations, the above equation can be written as -21 = x × 7, so x = -3. Therefore, -21 ÷ 7 = -3. The quotient of two integers with the same sign is positive and the quotient of two integers with different signs is negative. Here -21 and 7 have different signs, hence the result is negative.
Solve each problem. Question 15. A bucket that holds 5$$\frac{1}{4}$$ gallons of water is being used to fill a tub that can hold 34$$\frac{1}{8}$$ gallons. How many buckets will be needed to fill the tub? ______ buckets are needed to fill the tub. Answer: 6$$\frac{1}{2}$$ A bucket that holds 5$$\frac{1}{4}$$ gallons of water is being used to fill a tub that can hold 34$$\frac{1}{8}$$ gallons. Number of buckets needed to fill the tub = 34$$\frac{1}{8}$$ ÷ 5$$\frac{1}{4}$$ Convert the above numbers into improper fractions to make the calculations easy. 34$$\frac{1}{8}$$ = $$\frac{273}{8}$$ 5$$\frac{1}{4}$$ = $$\frac{21}{4}$$ So, 34$$\frac{1}{8}$$ ÷ 5$$\frac{1}{4}$$ = $$\frac{273}{8}$$ ÷ $$\frac{21}{4}$$ To divide by a fraction, multiply by its reciprocal.
Here, the reciprocal of $$\frac{21}{4}$$ = $$\frac{4}{21}$$ Hence, $$\frac{273}{8}$$ × $$\frac{4}{21}$$ Reduce the above fractions into simplest form if possible. Then, multiply the numerators and denominators separately. = $$\frac{273 × 4}{8 × 21}$$ Divide 4 in numerator and 8 in denominator with 4, which is a common factor = $$\frac{273 × 1}{2 × 21}$$ Divide 273 in numerator and 21 in denominator with 21, which is a common factor = $$\frac{13 × 1}{2 × 1}$$ = $$\frac{13}{2}$$ = 6$$\frac{1}{2}$$ Therefore, 6$$\frac{1}{2}$$ buckets are needed to fill the tub. Question 16. A black piece of pipe is 8$$\frac{1}{3}$$ centimeters long. A silver piece of pipe is 2$$\frac{3}{5}$$ times longer. How long is the silver piece of pipe? The silver piece is ______ centimeters long. Answer: 21$$\frac{2}{3}$$ A black piece of pipe is 8$$\frac{1}{3}$$ centimeters long. A silver piece of pipe is 2$$\frac{3}{5}$$ times longer. The number of centimeters the silver piece is 8$$\frac{1}{3}$$ × 2$$\frac{3}{5}$$ Convert the above numbers into improper fractions to make the calculations easy. 8$$\frac{1}{3}$$= $$\frac{25}{3}$$ 2$$\frac{3}{5}$$ = $$\frac{13}{5}$$ So, 8$$\frac{1}{3}$$ × 2$$\frac{3}{5}$$ = $$\frac{25}{3}$$ × $$\frac{13}{5}$$ Reduce the above fractions into simplest form if possible. Then, multiply the numerators and denominators separately. = $$\frac{25 × 13}{3 × 5}$$ Divide 25 in numerator and 5 in denominator with 5, which is a common factor = $$\frac{5 × 13}{3 × 1}$$ = $$\frac{65}{3}$$ = 21$$\frac{2}{3}$$ Therefore, The silver piece is21$$\frac{2}{3}$$ centimeters long. Question 17. One section of wood is 3$$\frac{5}{8}$$ meters long. Another section is twice that long. When the two pieces are put together, how long is the piece of wood that is created? The piece of wood is _____ meters long. Answer: 10$$\frac{7}{8}$$ One section of wood is 3$$\frac{5}{8}$$ meters long. Convert the above number into improper fraction. 
Then, 3$$\frac{5}{8}$$ = $$\frac{29}{8}$$ Another section is twice that long, i.e. 2 × 3$$\frac{5}{8}$$ = 2 × $$\frac{29}{8}$$ = $$\frac{2}{1}$$ × $$\frac{29}{8}$$ Reduce the above fractions into simplest form if possible. Then, multiply the numerators and denominators separately. = $$\frac{2 × 29}{1 × 8}$$ = $$\frac{58}{8}$$ When the two pieces are put together, the length of the piece of wood that is created = $$\frac{29}{8}$$ + $$\frac{58}{8}$$ = $$\frac{29+58}{8}$$ = $$\frac{87}{8}$$ = 10$$\frac{7}{8}$$ Therefore, the piece of wood is 10$$\frac{7}{8}$$ meters long.
Question 18. Danielle wants to fill a box with dirt to start a garden. If the box is 2$$\frac{1}{5}$$ feet long, by 1$$\frac{1}{3}$$ feet wide, and 1$$\frac{1}{2}$$ feet deep, how much dirt does Danielle need to fill up the box for her garden? Danielle needs ____ cubic feet of dirt. Answer: 4$$\frac{2}{5}$$ If the box is 2$$\frac{1}{5}$$ feet long, by 1$$\frac{1}{3}$$ feet wide, and 1$$\frac{1}{2}$$ feet deep, the amount of dirt needed to fill the box = 2$$\frac{1}{5}$$ × 1$$\frac{1}{3}$$ × 1$$\frac{1}{2}$$ Convert the above numbers into improper fractions to make the calculations easy. = $$\frac{11}{5}$$ × $$\frac{4}{3}$$ × $$\frac{3}{2}$$ Reduce the above fractions into simplest form if possible. Then, multiply the numerators and denominators separately. = $$\frac{11 × 4 × 3}{5 × 3 × 2}$$ Divide 3 in the numerator and 3 in the denominator by 3, which is a common factor = $$\frac{11 × 4 × 1}{5 × 1 × 2}$$ = $$\frac{44}{10}$$ = $$\frac{22}{5}$$ = 4$$\frac{2}{5}$$ Therefore, Danielle needs 4$$\frac{2}{5}$$ cubic feet of dirt.
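The worked fraction answers above can be double-checked mechanically. A small sketch using Python's exact-rational `fractions` module (the `mixed` helper is ours, not part of the workbook):

```python
from fractions import Fraction

def mixed(whole, num=0, den=1):
    """Build an exact rational from a mixed number such as 5 1/4."""
    return Fraction(whole) + Fraction(num, den)

# Question 11c: 5 1/2 x 1 1/4 = 6 7/8
assert mixed(5, 1, 2) * mixed(1, 1, 4) == mixed(6, 7, 8)
# Question 12a: 5 1/4 divided by 1/6 = 31 1/2
assert mixed(5, 1, 4) / Fraction(1, 6) == mixed(31, 1, 2)
# Question 15: 34 1/8 divided by 5 1/4 = 6 1/2 buckets
assert mixed(34, 1, 8) / mixed(5, 1, 4) == mixed(6, 1, 2)
# Question 17: 3 5/8 plus twice 3 5/8 = 10 7/8 meters
assert mixed(3, 5, 8) * 3 == mixed(10, 7, 8)
```

Because `Fraction` keeps every intermediate result in lowest terms, it reproduces the "reduce, then multiply" steps of the answer key without any rounding.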
https://www.vcalc.com/wiki/KurtHeckman/Silver+Calculator
# Silver Calculator

The Silver Calculator computes the current value of coins, jewelry and bullion. It has equations and data useful for jewelers and other merchants of silver, and for anyone who wants to know the value of their silver. The current silver spot price is ($15.32/Troy Ounce) in U.S. dollars. The SPOT price is updated every two minutes.

Silver Calculator Functions:

• Bullion Silver Value - Computes the value of bullion (.999) silver based on weight and the current spot price.
• Scrap Silver Value - Computes the value of scrap silver based on the purity, weight, refiner fee, merchant profit and current spot price.
• Junk Silver Value - Computes the value of junk silver (U.S. silver coins) based on the face value of the coins and the current spot price.
• Junk Silver Coin Count - Computes the value of junk silver coins; it lets the user enter the number of each type of coin to compute the value.
• Clad Silver Value - Computes the value of U.S. silver half dollars minted between 1965 and 1970 based on the face value, a clad factor and the current SPOT price.
• Spot Price in U.S. dollars per gram
• Spot Price in U.S. dollars per troy ounce

### Silver Scrap Buy Price

The Silver Value (Jeweler's Buy Price) equation lets you:

1. enter the weight of your scrap silver in one of many units;
2. it then asks for the purity (% pure silver) - note: sterling silver is traditionally 92.5% (0.925) pure, while fine silver is 99.9% (0.999);
3. it then asks for a refiner's fee (e.g. 5%); and
4. it asks for the fee (profit) of the buyer.

It then returns the buy price of the silver, accounting for the above factors and the current silver spot price.

### Junk Silver

Junk Silver is comprised of U.S. silver coins with no numismatic value. These are often coins that are very worn and have lost any value outside of the silver content.
Junk Silver coins are usually U.S. coins with dates before 1964 when the U.S. Mint stamped its last coins with .9 silver content. The Junk Silver Value equation calculates current value for Junk Silver based on the dollar Face Value and the current Silver Spot Price ($17.38/Troy Ounce) in U.S. dollars.  The Face Value is the sum of the different coins. Clad Silver consists of U.S. half dollars minted between 1965 and 1970 with no numismatic value.  These are often coins that are worn and have lost any value outside of the silver content.   During this period, 1965 to 1970, the U.S. Mint stamped its Kennedy half dollars with .4 silver content.  Prior to 1965, half dollars and all silver U.S. coins had 90% silver (see Junk Silver).  The Kennedy Half dollar coins during this period had the following specifications: • 40% silver equal to 0.1479 troy ounces • Outer layer: 80% silver and 20% copper • Core: 21.5% silver and 78.5% copper • Total weight of 11.5 grams • Specific gravity: 9.53 • Diameter: 30.60 mm • Thickness: 2.15 mm • Volume: 1.58114316 cm³ ### SPOT Prices This is a simple equation where you choose a precious metal from the list, enter the weight of the refined metal you have and it provides a current value based on the current SPOT price.  The default amount is one Troy ounces so the spot price of the metals is provided by default. # Notes This calculator was compiled with functions geared to meet the daily needs of a professional jeweler or pawn broker.  Requests for new equations and/or corrections to existing equations will be rapidly implemented.  Use the REPORT A PROBLEM button.
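The page does not show vCalc's exact fee arithmetic; the following is a plausible sketch of the scrap and junk silver formulas, assuming the refiner's and buyer's fees are taken multiplicatively off the melt value. The function names and the 0.715 oz-per-dollar junk-silver factor are our assumptions, not taken from the calculator itself:

```python
def scrap_silver_buy_price(weight_ozt, purity, refiner_fee, buyer_fee, spot):
    """Buy price of scrap silver: melt value minus the refiner's and buyer's cuts.

    weight_ozt  - weight in troy ounces
    purity      - fraction of pure silver (0.925 sterling, 0.999 fine)
    refiner_fee - refiner's fee as a fraction (e.g. 0.05 for 5%)
    buyer_fee   - buyer's profit as a fraction
    spot        - spot price in USD per troy ounce
    """
    melt_value = weight_ozt * purity * spot
    return melt_value * (1 - refiner_fee) * (1 - buyer_fee)

def junk_silver_value(face_value, spot):
    # Worn pre-1965 90% U.S. coins are commonly priced at roughly
    # 0.715 troy oz of silver per $1 of face value (assumed constant).
    return face_value * 0.715 * spot

# 10 ozt of sterling at the quoted $15.32 spot, 5% refiner fee, 10% buyer fee
print(round(scrap_silver_buy_price(10, 0.925, 0.05, 0.10, 15.32), 2))  # -> 121.16
```

The multiplicative form means the order of the two fees does not matter; a calculator that subtracts flat fees instead would need a slightly different formula.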
https://quant.stackexchange.com/questions/39619/gamma-pnl-vs-vega-pnl
# Gamma PnL vs Vega PnL

Why does gamma PnL have exposure to realised volatility, but vega PnL only has exposure to implied volatility? I am confused as to why gamma PnL is affected (more) by IV and why vega PnL isn't affected (more) by RV. Essentially, how do you show mathematically what gamma PnL will be, and what vega PnL will be? I believe that gamma PnL is spot x (vega x IV - RV).

Also, does gamma PnL usually dominate (in $ terms) the vega PnL of an option, as most literature is on gamma PnL?

## 2 Answers

For an option with price $C$, the P&L, with respect to changes of the underlying asset price $S$ and volatility $\sigma$, is given by
\begin{align*}
P\&L = \delta \Delta S + \frac{1}{2}\gamma (\Delta S)^2 + \nu \Delta \sigma,
\end{align*}
where $\delta$, $\gamma$, and $\nu$ are respectively the delta, gamma, and vega hedge ratios. Then it is clear that the vega P&L has exposure to the change of the implied volatility $\sigma$. Note that, for the gamma P&L,
\begin{align*}
\frac{1}{2}\gamma (\Delta S)^2 = \frac{1}{2}\gamma S^2 \frac{1}{\Delta t}\left(\frac{\Delta S}{S}\right)^2\Delta t,
\end{align*}
where $\frac{1}{\Delta t}\left(\frac{\Delta S}{S}\right)^2$ is the realized variance, and $\sqrt{\frac{1}{\Delta t}\left(\frac{\Delta S}{S}\right)^2}$ is the realized volatility. To see why $\sqrt{\frac{1}{\Delta t}\left(\frac{\Delta S}{S}\right)^2}$ is the realized volatility, we assume that, heuristically,
\begin{align*}
dS_t = S_t\left(r\, dt + \sigma_{Re}\, dW_t \right),
\end{align*}
where $\sigma_{Re}$ is the realized volatility and $\{W_t, \, t \ge 0\}$ is a standard Brownian motion. Then
\begin{align*}
\sqrt{\frac{1}{\Delta t}\left(\frac{\Delta S}{S}\right)^2} \approx \sigma_{Re}.
\end{align*}

Consider the delta-neutral portfolio $\Pi = C - \frac{\partial C}{\partial S}S$. Assume that the interest rate and volatility do not change during the small time period $\Delta t$. The P&L of the portfolio is given by
\begin{align*}
P\&L_{\Delta t}^{\Pi} &= \frac{1}{2}\gamma (\Delta S)^2 + \theta \Delta t,
\end{align*}
where $\theta$ is the theta hedge ratio. For a small interest rate, which we assume to be zero, $\theta \approx -\frac{1}{2}\gamma S^2 \sigma^2$ and $\gamma = \frac{\nu}{S^2\sigma T}$; see, for example, the Black–Scholes model. Then
\begin{align*}
P\&L_{\Delta t}^{\Pi} &\approx \frac{1}{2}\gamma S^2 \frac{1}{\Delta t}\left(\frac{\Delta S}{S}\right)^2\Delta t - \frac{1}{2}\gamma S^2 \sigma^2 \Delta t\\
&\approx \frac{1}{2}\gamma S^2 \sigma_{Re}^2 \Delta t - \frac{1}{2}\gamma S^2 \sigma^2 \Delta t\\
&= \frac{1}{2}\gamma S^2 (\sigma_{Re} + \sigma)(\sigma_{Re} - \sigma) \Delta t\\
&\approx \gamma S^2 \sigma (\sigma_{Re} - \sigma) \Delta t \hspace{1in} (\text{assuming that } \sigma_{Re}\approx \sigma)\\
&= \frac{\nu}{T}(\sigma_{Re} - \sigma) \Delta t.
\end{align*}
The cumulative P&L, over the interval $[0, T]$, is then $\nu (\sigma_{Re} - \sigma)$.

• I'm still confused about the gamma PnL – Permian May 17 '18 at 7:02
• How could gamma PnL be spot x vega x (IV - RV)? – Permian May 17 '18 at 7:15
• @Permian: See the above updates. Where did you get this? Can you please provide the source? I would like to check the context. – Gordon May 17 '18 at 18:10
• Sorry, but I got it in conversation and was confused myself – Permian May 18 '18 at 11:14

Not sure this is a valid question! Gamma P/L is by definition the P/L due to realized volatility being different from implied. Vega P/L is by definition the P/L due to moves in implied volatility. The second part of the question you have answered yourself: short-dated options have more gamma exposure, long-dated options have more vega exposure.

• It's not clear to me at all how this is both by definition – Permian May 7 '18 at 12:10
• What other explanation is possible? – dm63 May 7 '18 at 22:15
• The connection between gamma and realised volatility, probably mathematically – Permian May 8 '18 at 9:06
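The substitution $\gamma = \frac{\nu}{S^2\sigma T}$ used in the derivation can be checked numerically against the Black–Scholes greeks. A sketch with arbitrary sample parameters, taking $r = 0$ as in the answer:

```python
from math import log, sqrt, exp, pi

def bs_gamma_vega(S, K, T, sigma, r=0.0):
    """Black-Scholes gamma and vega for a European option."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    phi = exp(-0.5 * d1**2) / sqrt(2 * pi)   # standard normal density at d1
    gamma = phi / (S * sigma * sqrt(T))
    vega = S * phi * sqrt(T)
    return gamma, vega

S, T, sigma = 100.0, 0.5, 0.2
gamma, vega = bs_gamma_vega(S, K=105.0, T=T, sigma=sigma)

# The identity gamma = vega / (S^2 * sigma * T) holds exactly here,
# since both greeks share the same phi(d1) factor.
assert abs(gamma - vega / (S**2 * sigma * T)) < 1e-12
```

The identity follows directly from the closed forms: $\nu = S\varphi(d_1)\sqrt{T}$ and $\gamma = \varphi(d_1)/(S\sigma\sqrt{T})$, so dividing $\nu$ by $S^2\sigma T$ recovers $\gamma$.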
http://www.emathzone.com/tutorials/math-results-and-formulas/formulas-of-sequence-and-series.html
# Formulas of Sequence and Series

• The $nth$ term ${a_n}$ of the arithmetic progression (A.P.) $a,{\text{ }}a + d,{\text{ }}a + 2d, \ldots$ is given by ${a_n} = a + (n - 1)d$.
• The arithmetic mean between $a$ and $b$ is given by $A.M = \frac{{a + b}}{2}$.
• If ${S_n}$ denotes the sum of the first $n$ terms of the A.P. $a,{\text{ }}a + d,{\text{ }}a + 2d, \ldots$ then ${S_n} = \frac{n}{2}(a + l)$, where $l$ stands for the last term, and ${S_n} = \frac{n}{2}[2a + (n - 1)d]$.
• The sum of $n$ A.M.s between $a$ and $b$ is $\frac{{n(a + b)}}{2}$.
• The $nth$ term ${a_n}$ of the geometric progression $a,{\text{ }}ar,{\text{ }}a{r^2},{\text{ }}a{r^3}, \ldots$ is ${a_n} = a{r^{n - 1}}$.
• The geometric mean between $a$ and $b$ is $G.M = \pm \sqrt {ab}$.
• If ${S_n}$ denotes the sum of the first $n$ terms of the G.P., then ${S_n} = \frac{{a(1 - {r^n})}}{{1 - r}};{\text{ }}r \ne 1$, or equivalently ${S_n} = \frac{{a - rl}}{{1 - r}}$ with last term $l = a{r^{n - 1}}$.
• The sum $S$ of the infinite geometric series is $S = \frac{a}{{1 - r}};{\text{ }}\left| r \right| < 1$.
• The $nth$ term ${a_n}$ of the harmonic progression is ${a_n} = \frac{1}{{a + (n - 1)d}}$.
• The harmonic mean between $a$ and $b$ is $H.M = \frac{{2ab}}{{a + b}}$.
• ${G^2} = A \cdot H$ and $A > G > H$; where $A,G,H$ are the usual notations.
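These identities are easy to spot-check numerically; a quick sketch with arbitrary sample values:

```python
# Arithmetic progression: S_n = n/2 * (2a + (n-1)d) = n/2 * (a + l)
a, d, n = 3, 4, 10
ap = [a + k * d for k in range(n)]
assert sum(ap) == n * (2 * a + (n - 1) * d) // 2
assert sum(ap) == n * (ap[0] + ap[-1]) // 2      # l is the last term

# Geometric progression: S_n = a(1 - r^n) / (1 - r), r != 1
a, r, n = 5, 3, 8
gp = [a * r**k for k in range(n)]
assert sum(gp) == a * (1 - r**n) // (1 - r)

# Means between a and b: G^2 = A*H and A > G > H (for distinct positives)
a, b = 4.0, 9.0
A, G, H = (a + b) / 2, (a * b) ** 0.5, 2 * a * b / (a + b)
assert abs(G * G - A * H) < 1e-9 and A > G > H
```

For $a = 4$, $b = 9$ the three means are $A = 6.5$, $G = 6$, $H = 72/13 \approx 5.54$, so the chain $A > G > H$ and the product identity are both visible in one example.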
https://mathshistory.st-andrews.ac.uk/OfTheDay/oftheday-12-06/
Mathematicians Of The Day: 6th December

Quotation of the day, from Pierre Boutroux:

"Logic is invincible because in order to combat logic it is necessary to use logic."
https://worldbuilding.stackexchange.com/questions/89566/does-mass-affect-orbit-size
# Does mass affect orbit size?

My question refers to a capital ship. Ideally, the ship wants to stay as low as possible. Does the mass of the ship affect its possible orbits? If so, what is the lowest orbit possible for this ship? I don't have a number for its mass, but imagine something like the Farragut. Dimensions: Length: 2040 m, Width: 806 m, Height: 300 m.

• The mass is not the only number in the equation; speed is the second one. If you can provide enough speed, you can choose any orbit. But because energy for acceleration is relatively scarce, you really aim for a spot just above the atmosphere. The cover aspect is irrelevant, as rockets can curve. – Antoine Hejlík Aug 21 '17 at 12:38
• No, unless your ship's mass is so huge it starts noticeably affecting the orbit of the planet (which is not the case for your "small" ship) – Keelhaul Aug 21 '17 at 12:39
• @AricFowler - if the ship benefits from "line-of-sight" cover, and the intricacies of fuel requirements are neglected - wouldn't it be better to just pick a static point 'behind' the planet and stay there? That way your ship is obscured all the time, not only for part of its orbit... – G0BLiN Aug 21 '17 at 13:37
• Technically, there's the Roche limit. But your ship needs to be much bigger before that becomes a problem. – ths Aug 21 '17 at 17:50

Actually, in practice, a very massive object will be able to orbit at a lower altitude than a very light one. The orbital mechanics are exactly the same (assuming the big body's mass is still negligible compared with the planet's).

BUT

It can skim the fringes of the atmosphere and still keep going, thanks to its much larger inertia, where a lightweight craft would be slowed down into a forced reentry. Of course even very massive objects will be slowed down, but it will take much more time, very likely longer than a space battle will last. In practice, such a massive object can flirt with the atmosphere as long as friction won't heat it up too much.
All this is because friction (the slowing force) is roughly proportional to cross section (the square of the dimensions) while inertia is roughly proportional to volume (the cube of the dimensions). You need more time to drag away all the energy you gave the spaceship to put it into orbit in the first place. As I mentioned initially in the comments: no. To the best of my knowledge, the mass of an object only affects how much energy is needed to get that object into a specific orbit. It doesn't affect the actual distance at which that object can orbit. So realistically, the lowest your ship can orbit is just above the edge of the planet's atmosphere. If the planet doesn't have an atmosphere, then you can practically skim across the surface. • @Keelhaul Got my terminology mixed up, thanks for the correction. – F1Krazy Aug 21 '17 at 12:48 • I would say this is true iff mass(ship) <<< mass(planet) – corsiKa Aug 21 '17 at 19:18 No: stable orbits are purely about speed versus altitude. But an object's bulk dimensions will affect the lowest stable orbit once you get down to atmosphere-skimming altitudes, below say 700 km. As atmospheric pressure increases, larger objects are going to experience more drag at a given altitude than smaller ones. Heavy objects have greater inertia and are less affected by a given amount of drag, so it really comes down to object density and how much fuel you're willing to burn to maintain an inherently unstable position once you get down that low. You didn't mention ship mass so I would estimate it as 100 million tons (10^11 kg). If I take your question literally, then the first thing I would think of is the offset of the barycenter. Basically, if you make two bodies of equal mass orbit each other, they rotate around the center of their masses, which is right between them. If one body is lighter, the barycenter moves toward the heavier body.
Earth weighs 6*10^24 kg; your ship weighs 60 million million times less, so the barycenter would move only 1/10,000 of a mm from Earth's center of mass. Earth's gravitational anomalies would affect the orbit of the ship much more than that. But I suppose you mostly care how much the atmosphere would affect the orbit. Obviously, the bigger the ship, the smaller its surface-to-mass ratio, so the atmosphere affects it less and less. Usually satellites don't go below 300 km, but a ship this big can go much lower, especially for a short time. The calculations are relatively simple. The air resistance force is 1/2 * Cx * p * V^2 * Sx, where p is air density, V is ship speed (~= 8 km/s), Sx is the frontal area (Width: 806 m, Height: 300 m, so about 2.4*10^5 m^2), and Cx is the drag coefficient (= 2, because the speed is so high that we can treat all interactions as inelastic). So it simplifies to 2.4*10^5 * 6.4*10^7 * p ~= 10^13 * p. At 100 km, density is about 5*10^-7 kg/m^3, so the resistance would be 5*10^6 N = 500 tons (2 kg per m^2). The ship would decelerate at 0.05 mm/s^2 and lose only about 0.27 m/s per revolution. 1 meter per second in LEO corresponds to roughly 2 km of orbit height, so it would lose 0.5 km of height per revolution. But the deceleration would speed up quickly as air density grows exponentially, so it would come down after 10-20 revolutions. Or you can accelerate the ship with its engines accordingly - 0.05 mm/s^2 is nothing for a battleship. But as the ship slows down it loses energy. By energy conservation this energy has to go somewhere - it turns to heat. That would be about 100 kW per m^2, enough to burn small parts like antennas and make the ship look very bad. If you move it a little higher - at 120 km air density becomes 4 times less, at 140 km, 60 times less. So at 140 km it would lose only 100 meters per revolution and external parts should be OK, though I think the paint would suffer. tl;dr Below 100 km, heat from air resistance becomes too much and the orbit decays too quickly. But if you have appropriate (force shield?)
protection and powerful engines, you can go as low as you want. At about 150 km heat is not a big problem and a ship that big can orbit for days even without turning the engines on. At 200 km and above, air is not a problem for such a ship. Once the ship is in orbit, if it's going fast enough, it will stay in orbit. The only problem is the atmosphere. If the ship in question is too low, then drag comes into play, slowing the ship down and making it fall out of orbit. Then you have to keep the engines on to maintain speed. Otherwise, there shouldn't be any problems. • Welcome to Worldbuilding, Chris! If you have a moment, please take the tour and visit the help center to learn more about the site. You may also find Worldbuilding Meta and The Sandbox useful. Here is a meta post on the culture and style of Worldbuilding.SE, just to help you understand our scope and methods, and how we do things here. Have fun! – Gryphon Mar 6 at 15:56
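The back-of-the-envelope drag figures in the detailed answer above are easy to check with a short script. This is a rough sketch, not code from the thread; the ship mass (10^11 kg), frontal area, drag coefficient Cx = 2, 100 km air density, and orbital period are the answer's assumed values, not measured data.

```python
import math

# Assumed values from the answer above (order-of-magnitude estimates).
MASS = 1e11          # ship mass, kg
AREA = 806 * 300     # frontal area, m^2 (~2.4e5)
CX = 2.0             # drag coefficient for hypersonic, inelastic impacts
V = 8000.0           # orbital speed, m/s
RHO_100KM = 5e-7     # air density at ~100 km altitude, kg/m^3
PERIOD = 5200.0      # rough orbital period in low orbit, s

drag = 0.5 * CX * RHO_100KM * V**2 * AREA   # drag force, N
decel = drag / MASS                          # deceleration, m/s^2
dv_per_rev = decel * PERIOD                  # speed lost per revolution, m/s
heat_flux = drag * V / AREA                  # heating power per m^2 of frontal area, W

print(f"drag ~ {drag:.1e} N, deceleration ~ {decel:.1e} m/s^2")
print(f"dv per revolution ~ {dv_per_rev:.2f} m/s, heating ~ {heat_flux / 1e3:.0f} kW/m^2")
```

The results land within rounding of the answer's figures (drag of a few million newtons, deceleration of a few hundredths of a mm/s^2, a fraction of a m/s lost per revolution, heating on the order of 100 kW/m^2).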
https://hal.inria.fr/hal-01054441
# On Packing Splittable Items with Cardinality Constraints Abstract : This paper continues the study of the allocation of memory to processors in a pipeline problem. This problem can be modeled as a variation of bin packing where each item corresponds to a different type and the normalized weight of each item can be greater than 1, which is the size of a bin. Furthermore, in this problem, items may be split arbitrarily, but each bin may contain at most k types of items, for any fixed integer k ≥ 2. The case of k = 2 was first introduced by Chung et al., who gave a 3/2-approximation asymptotically. In this paper, we generalize the result of Chung et al. to higher k. We show that NEXT FIT gives a $\left(1+\frac 1 k\right)$-approximation asymptotically, for k ≥ 2. Also, as a minor contribution, we rewrite the strong NP-hardness proof of Epstein and van Stee for this problem for k ≥ 3. Document type: Conference paper. Cristian S. Calude; Vladimiro Sassone. 6th IFIP TC 1/WG 2.2 International Conference on Theoretical Computer Science (TCS) / Held as Part of World Computer Congress (WCC), Sep 2010, Brisbane, Australia. Springer, IFIP Advances in Information and Communication Technology, AICT-323, pp.101-110, 2010, Theoretical Computer Science. 〈10.1007/978-3-642-15240-5_8〉 Cited literature [7 references] https://hal.inria.fr/hal-01054441 Contributor: Hal Ifip <> Submitted on: Wednesday, August 6, 2014 - 16:24:39 Last modified on: Wednesday, August 9, 2017 - 12:03:18 Document(s) archived on: Wednesday, November 26, 2014 - 00:55:45 ### File 03230101.pdf Files produced by the author(s) ### Citation Fouad B. Chedid. On Packing Splittable Items with Cardinality Constraints. Cristian S. Calude; Vladimiro Sassone. 6th IFIP TC 1/WG 2.2 International Conference on Theoretical Computer Science (TCS) / Held as Part of World Computer Congress (WCC), Sep 2010, Brisbane, Australia.
Springer, IFIP Advances in Information and Communication Technology, AICT-323, pp.101-110, 2010, Theoretical Computer Science. 〈10.1007/978-3-642-15240-5_8〉. 〈hal-01054441〉 ### Metrics Record views ## 77 File downloads
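As an illustration of the algorithm the abstract analyzes, here is a minimal NEXT FIT sketch for splittable items with a cardinality constraint, under one natural reading of the problem: pour each item into the current bin, opening a fresh bin whenever the current one is full or already holds k item types. This is my own sketch, not code from the paper.

```python
def next_fit_splittable(weights, k, bin_size=1.0, eps=1e-12):
    """NEXT FIT for splittable items: each bin holds at most k item types.

    `weights` are normalized item sizes (they may exceed bin_size, i.e. be > 1).
    Returns the number of bins opened.
    """
    cap, types = 0.0, k   # pretend the "current" bin is unusable so the first item opens one
    n_bins = 0
    for w in weights:
        while w > eps:
            if types >= k or cap <= eps:
                n_bins += 1            # open a fresh bin
                cap, types = bin_size, 0
            placed = min(w, cap)       # split the item if it doesn't fit entirely
            w -= placed
            cap -= placed
            types += 1                 # this bin now contains one more item type
    return n_bins

# A single item of weight 2.5 must occupy ceil(2.5) = 3 bins.
print(next_fit_splittable([2.5], k=2))       # 3
# Two items of weight 0.6 each, with k = 2: total weight 1.2 forces 2 bins.
print(next_fit_splittable([0.6, 0.6], k=2))  # 2
```

The paper's result is about the asymptotic ratio between the number of bins such a pass opens and the optimum, which the abstract states is 1 + 1/k.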
http://trac.sasview.org/browser/sasview/src/sas/sasgui/perspectives/pr/media/pr_help.rst?annotate=blame&rev=0391dae728cc7e22bcc7ec329200722cd14f829f
# source:sasview/src/sas/sasgui/perspectives/pr/media/pr_help.rst@0391dae

Last change on this file since 0391dae was 0391dae, checked in by butler, 6 years ago: update Pr documentation on how to use and converted equations to LaTeX.

• Property mode set to 100644 File size: 2.8 KB

.. pr_help.rst

.. This is a port of the original SasView html help file to ReSTructured text
.. by S King, ISIS, during SasView CodeCamp-III in Feb 2015.

P(r) Calculation
================

Description
-----------

This tool calculates a real-space distance distribution function, *P(r)*, using
the inversion approach (Moore, 1980).

*P(r)* is set to be equal to an expansion of base functions of the type

.. math::

  \Phi_n(r) = 2 r \sin\left(\frac{\pi n r}{D_{max}}\right)

The coefficient of each base function in the expansion is found by performing
a least squares fit with the following fit function

.. math::

  \chi^2 = \frac{\sum_i (I_{meas}(Q_i) - I_{th}(Q_i))^2}{error^2} + Reg\_term

where $I_{meas}(Q_i)$ is the measured scattering intensity and $I_{th}(Q_i)$ is
the prediction from the Fourier transform of the *P(r)* expansion.

The $Reg\_term$ term is a regularization term set to the second derivative
$d^2P(r)/dr^2$ integrated over $r$. It is used to produce a smooth *P(r)* output.

Using P(r) inversion
--------------------

The user must enter

*  *Number of terms*: the number of base functions in the P(r) expansion.

*  *Regularization constant*: a multiplicative constant to set the size of
   the regularization term.

*  *Maximum distance*: the maximum distance between any two points in the
   system.

P(r) inversion requires that the background be perfectly subtracted.  This is
often difficult to do well and thus many data sets will include a background.
For those cases, the user should check the "estimate background" box and the
module will do its best to estimate it.

The P(r) module is constantly computing in the background what the optimum
*number of terms* should be, as well as the optimum *regularization constant*.
These are constantly updated in the buttons next to the entry boxes on the GUI.
These are almost always close, and unless the user has a good reason to choose
differently they should just click on the buttons to accept both.  $D_{max}$ must
still be set by the user.  However, besides looking at the output, the user can
click the explore button, which will bring up a graph of $\chi^2$ vs $D_{max}$ over
a range around the current $D_{max}$.  The user can change the range and the number
of points to explore in that range.  They can also choose to plot several other
parameters as a function of $D_{max}$, including: I0, Rg, Oscillation parameter,
background, positive fraction, and 1-sigma positive fraction.

Reference
---------

P.B. Moore, *J. Appl. Cryst.*, 13 (1980) 168-175

.. note::  This help document was last modified by Paul Butler, 05 September, 2016
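The base-function expansion described in the help file is easy to sketch numerically. This is an illustrative snippet, not SasView code; the function name and coefficients are made up for the example.

```python
import math

def p_of_r(r, coeffs, d_max):
    """Evaluate P(r) = sum over n of c_n * Phi_n(r), with Phi_n(r) = 2 r sin(pi n r / d_max)."""
    return sum(c * 2.0 * r * math.sin(math.pi * n * r / d_max)
               for n, c in enumerate(coeffs, start=1))

# Every base function vanishes at r = 0 and r = d_max, so P(r) does too,
# which is what a distance distribution for a finite particle should do.
print(p_of_r(0.5, [1.0, 0.5, 0.25], d_max=1.0))
```

The actual inversion additionally fits the coefficients c_n against I(Q) with the regularization term described above; this snippet only evaluates the expansion.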
https://math.stackexchange.com/questions/3020563/i-want-to-solve-the-intersections-of-two-circles-using-matrices?noredirect=1
# I want to solve the intersections of two circles using matrices. The equations of the circles are $$(x-h_1)^2+(y-y_1)^2=r_1^2$$ and $$(x-h_2)^2+(y-y_2)^2=r_2^2$$ If I can use matrices to solve for $$(x,y)$$, how? Also, I know there will be two answers. The one I am looking for is the one with the greater $$y$$ value, if that helps any. CONTEXT: I am trying to bilaterate (altered version of triangulation) and that is basically how. I am also programming this and have already found a working method. However, this program does not calculate decimals accurately. I am hoping that solving this problem with matrices will allow me to calculate decimals with accuracy (hopefully to the thousandths or ten thousandths). • $x$ and $y$ aren't generally linear functions of $h_i, y_i, r_i$, so I'm not sure what you want to do is possible. This question may help: math.stackexchange.com/questions/256100/… – Connor Harris Nov 30 '18 at 20:06 • If your program doesn’t “calculate decimals,” what makes you think that using matrices will somehow do that? Sounds like the real problem might be using ints when you should be using floating-point. – amd Nov 30 '18 at 20:09 • Check edit. Matrices might be a more straightforward approach to the problem instead of the one I am currently using. – ARCS2016 Nov 30 '18 at 20:12 • Connor Harris, I did check that out and am currently using a system similar to what is described in the top answer. – ARCS2016 Nov 30 '18 at 20:15 • If you insist on “using matrices,” then one approach is to reduce to a conic-line intersection problem by subtracting one circle equation from the other and then using one of the methods in the answers to this question. IMO, this is overkill for a circle-circle intersection, which can be computed straightforwardly by intersecting the radical axis with the line through the centers and applying the Pythagorean theorem. – amd Nov 30 '18 at 20:59
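The approach amd sketches in the last comment (subtract the two circle equations to get the radical line, then intersect it with the line through the centers and apply the Pythagorean theorem) is straightforward to implement in floating point, which also addresses the asker's decimal-accuracy concern. This is a rough sketch of that construction, not a tested triangulation library; it returns the intersection with the greater y value first, as the question asks.

```python
import math

def circle_intersections(c1, r1, c2, r2):
    """Intersect two circles via the radical-line construction.

    Returns up to two points, the one with the greater y value first.
    """
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    # No intersection: coincident centers, circles too far apart, or one inside the other.
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance from c1 to the radical line, along the center line
    h = math.sqrt(max(r1**2 - a**2, 0.0))  # half-length of the chord (Pythagoras)
    xm = x1 + a * (x2 - x1) / d            # foot of the chord on the center line
    ym = y1 + a * (y2 - y1) / d
    p1 = (xm + h * (y2 - y1) / d, ym - h * (x2 - x1) / d)
    p2 = (xm - h * (y2 - y1) / d, ym + h * (x2 - x1) / d)
    return sorted([p1, p2], key=lambda p: p[1], reverse=True)

print(circle_intersections((0.0, 0.0), 1.0, (1.0, 0.0), 1.0))
```

For the unit circles centered at (0, 0) and (1, 0) this yields the points (0.5, ±√3/2), with the positive-y point first.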
https://www.bartleby.com/questions-and-answers/an-ideal-gas-at-a-given-state-expands-to-a-fixed-final-volume-first-at-constant-pressure-and-then-at/45330d14-ba4c-4835-b20a-8f88aead6d17
# An ideal gas at a given state expands to a fixed final volume first at constant pressure and then at constant temperature. How would you calculate the work for each case (show the equation and the P-v diagram for each case)? For which case is the work done greater? Justify your answer. Question An ideal gas at a given state expands to a fixed final volume first at constant pressure and then at constant temperature. How would you calculate the work for each case (show the equation and the P-v diagram for each case)? For which case is the work done greater? Justify your answer.
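The question is left unanswered on the page, but the standard boundary-work formulas are W = P(V2 - V1) for the constant-pressure path and W = nRT ln(V2/V1) = P1 V1 ln(V2/V1) for the constant-temperature path (using PV = nRT at the initial state). A quick numerical check with made-up state values shows the isobaric expansion does more work: on the P-v diagram the isobar stays at P1 while pressure falls along the isotherm, so the area under the isobar is larger.

```python
import math

def work_isobaric(p1, v1, v2):
    """Boundary work along a constant-pressure path: W = P (V2 - V1)."""
    return p1 * (v2 - v1)

def work_isothermal(p1, v1, v2):
    """Boundary work along a constant-temperature path: W = n R T ln(V2/V1) = P1 V1 ln(V2/V1)."""
    return p1 * v1 * math.log(v2 / v1)

# Made-up example state: 100 kPa, expanding from 1 m^3 to 2 m^3.
p1, v1, v2 = 100e3, 1.0, 2.0
print(work_isobaric(p1, v1, v2))     # 100000.0 J
print(work_isothermal(p1, v1, v2))   # ~69314.7 J
```

For any expansion to the same final volume the isobaric work exceeds the isothermal work, since P(V) ≤ P1 everywhere along the isotherm.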
http://hal.in2p3.fr/in2p3-01099792
# Proton-induced fission cross sections on $^{208}$Pb at high kinetic energies Abstract : Total fission cross sections of 208Pb induced by protons have been determined at 370A, 500A, and 650A MeV. The experiment was performed at GSI Darmstadt, where the combined use of the inverse kinematics technique with an efficient detection setup allowed us to determine these cross sections with an uncertainty below 6%. This result was achieved by an accurate beam selection and registration of both fission fragments in coincidence, which were also clearly distinguished from other reaction channels. These data solve existing discrepancies between previous measurements, providing new values for the Prokofiev systematics. The data also allow us to investigate the fission process at high excitation energies and small deformations. In particular, some fundamental questions about fission dynamics have been addressed, which are related to dissipative and transient time effects. Document type: Journal article. Physical Review C, American Physical Society, 2014, 90, pp.064606. 〈10.1103/PhysRevC.90.064606〉 http://hal.in2p3.fr/in2p3-01099792 Contributor: Michel Lion <> Submitted on: Monday, January 5, 2015 - 13:00:34 Last modified on: Thursday, February 1, 2018 - 01:26:25 ### Citation J.L. Rodriguez-Sanchez, J. Benlliure, J. Taïeb, A. Chatillon, C. Paradela, et al.. Proton-induced fission cross sections on $^{208}$Pb at high kinetic energies. Physical Review C, American Physical Society, 2014, 90, pp.064606. 〈10.1103/PhysRevC.90.064606〉. 〈in2p3-01099792〉 ### Metrics Record views
https://www.physicsforums.com/threads/hypersurfaces-of-r-n-are-orientable.794750/
# Hypersurfaces of R^n are Orientable Gold Member Anyone know how to prove the result that every closed hypersurface of ## \mathbb R^n ##, i.e., any closed (n-1)-submanifold of ## \mathbb R^n ##, is orientable? Note that if we assume this is true, this shows ## \mathbb RP^n ## cannot be embedded in ## \mathbb R^{n+1} ##. EDIT: there is a result that every hypersurface can be represented as ## f^{-1}(0)##, where I think ##f## is at least an immersion. I heard this has to do with Alexander duality, but I don't see clearly where this duality comes into play, though. Last edited: lavinia Gold Member Correct this if it's wrong, but I think it works. If the compact smooth manifold ##M## is embedded in ##R^{n+1}## then it has a tubular neighborhood ##T## whose boundary is another compact manifold, ##N##. The Z cohomology of the pair ##(T,N)## is equal to the cohomology of ##S^{n+1}##, since ##S^{n+1}## is homeomorphic to the Thom space of the normal bundle of ##M## in ##R^{n+1}##. This is proved using excision. The exact cohomology sequence of the pair ##(T,N)## is ## 0 ← H^{n+1}(S^{n+1}) ← H^n(N) ← H^n(M) ← H^n(S^{n+1}) ← ... ## which is ## 0 ← Z ← H^n(N) ← H^n(M) ← 0 ← ... ## If ##M## is non-orientable then ##N## is connected, so that ##H^n(N)## is equal to ##Z## and ##H^n(M)## is equal to ##Z_{2}##. So the exact sequence is ## 0 ← Z ← Z ← Z_{2} ← 0 ← ... ## which is impossible. If ##M## is orientable then ##N## has two diffeomorphic components, so that ##H^n(N)## is equal to ##Z ⊕ Z## and ##H^n(M)## is equal to ##Z##. So the exact sequence is ## 0 ← Z ← Z ⊕ Z ← Z ← 0 ← ... ## which is possible. Last edited: Gold Member Can you see why/how Alexander duality plays a role here? Sorry, I don't understand enough about Thom spaces to understand the answer. I am aware of excision and the LES associated to it, and I understand the argument otherwise, but maybe if you can see where/if Alexander duality is applied? lavinia Gold Member Can you see why/how Alexander duality plays a role here?
Sorry, I don't understand enough about Thom spaces to understand the answer. I am aware of excision and the LES associated to it, and I understand the argument otherwise, but maybe if you can see where/if Alexander duality is applied? I don't really know the Alexander duality theorem, but here is a simplified version, given as an exercise in Milnor's Characteristic Classes, that answers your question. BTW: I would be happy to learn the proof with you. If K is a compact subset of the sphere ##S^n## that is a retract of some neighborhood in ##S^n##, then using ordinary homology there is an isomorphism between ## H^{i-1}(K,x) ## and ## H_{n-i}(S^n - K,y)##, where ##x## is a point of ##K## and ##y## is a point of ##S^n-K##. An embedded compact smooth submanifold is a retract of a tubular neighborhood, so the conditions are satisfied. If ## i - 1 = n - 1 ##, as in the case of a hypersurface, the isomorphism says ## H^{n-1}(K,x) ## and ## H_{0}(S^n - K,y)## are isomorphic. But ##H_{0}(S^n - K,y)## is a direct sum of copies of ##Z##, one for each connected component of ##S^n - K## beyond the first. Since ##H^{n-1}(K,x) ## is not zero, the number of connected components must be greater than one, so ## H_{0}(S^n - K,y)## is not the zero group but a direct sum of at least one copy of ##Z##. But if ##K## is non-orientable its top cohomology is ##Z_{2}## and so cannot be isomorphic to a torsion-free group. Last edited: Gold Member Nice, thanks. Sure, we can agree to read the proof together. My situation is a bit uncertain at this point; if you can be flexible, I would be glad to work it out with you. BTW, I read that Alexander duality is a generalization of the Jordan curve theorem, in that it deals with the topology of complementary subspaces (with complementary meaning their union is the whole space). Alexander duality studies the homological properties of complementary subspaces.
The homological properties of a set can be defined in terms of those in the complement... http://www.encyclopediaofmath.org/index.php/Alexander_duality
http://network.bepress.com/explore/engineering/?facet=publication_year%3A%221993%22
# Engineering Commons™ Articles 1 - 30 of 1106 ## Full-Text Articles in Engineering Comparison Of Air Traffic Control Candidate Ability With Simulator-Based Training Measures, Lawrence A. Tomaskovic Dec 1993 #### Comparison Of Air Traffic Control Candidate Ability With Simulator-Based Training Measures, Lawrence A. Tomaskovic ##### Master's Theses - Daytona Beach The purpose of this study was to determine if the utilization of an experimental computer-based selection test battery would aid in the prediction of a candidates performance when using an air traffic control computer-based simulation program. Each candidate completed the selection test battery, and then received air traffic control instruction using the air traffic control simulation program incorporated in the TRACON/Pro™ simulator system. The selection test battery results were correlated with the subsequent simulator scoring results. Dec 1993 #### The Effect Of Active And Passive Control On Air Traffic Controller Dynamic Memory, Esa M. Rantanen ##### Master's Theses - Daytona Beach The purpose of this study was to investigate the effect of automated and passive control on air traffic controller dynamic memory. The study consisted of two experiments, each involving a realistic ATC scenario for radar approach control with a mix of arriving and departing traffic. In Experiment I, the subjects performed manual control of the traffic while, in Experiment II, the scenario was highly automated and the subjects were tasked with only monitoring the situation. The dynamic memory performance was measured by interrupting the scenario and having the subjects recall the traffic situation at the moment of simulation interruption. The ... 
#### On Stationary And Moving Interface Cracks With Frictionless Contact In Anisotropic Bimaterials, Xiaomin Deng ##### Faculty Publications The asymptotic structure of near-tip fields around stationary and steadily growing interface cracks, with frictionless crack surface contact, and in anisotropic bimaterials, is analysed with the method of analytic continuation, and a complete representation of the asymptotic fields is obtained in terms of arbitrary entire functions. It is shown that when the symmetry, if any, and orientation of the anisotropic bimaterial is such that the in-plane and out-of-plane deformations can be separated from each other, the in-plane crack-tip fields will have a non-oscillatory, inverse-square-root type stress singularity, with angular variations clearly resembling those for a classical mode II problem when ... Modeling Technique For Optimal Recovery Of Immiscible Light Hydrocarbons As Free Product From Contaminated Aquifer, Grant S. Cooper Jr., Richard C. Peralta, Jagath J. Kaluarachchi Dec 1993 #### Modeling Technique For Optimal Recovery Of Immiscible Light Hydrocarbons As Free Product From Contaminated Aquifer, Grant S. Cooper Jr., Richard C. Peralta, Jagath J. Kaluarachchi ##### Civil and Environmental Engineering Faculty Publications Contamination sites associated with light non-aqueous phase liquids (LNAPL) are numerous and represent difficult cleanup problems. Remediation methods for cleanup of LNAPL fluids in subsurface systems are continuously evolving with the development of various technologies for pump-and-treat, soil venting, and in-situ bioremediation. Evaluating the effectiveness of remediation techniques as well as attempting to improve their efficiency has been a focus of many researchers. These efforts have included the development of computer simulation models to predict and analyze the fluid movement, entrapment, and mobilization of three-phase systems in porous media.
The capability of computer models that not only ... Domino Online Terpercaya, Agen Domino Online Dec 1993 #### Domino Online Terpercaya, Agen Domino Online ##### agen domino online The following is an abstract about trusted online dominoes, which we have already encountered widely across the internet as well as on social media. Online dominoes is a game of 2 or 4 cards, depending on the variant played, such as domino qiu qiu (played with 4 cards) and adu q (played with only 2 cards). The domino game is widely available in app stores, and it also circulates on gambling-related sites where play uses real money through online transactions. For a gambling pro it will not be hard to learn the game agen ... 66. In Memoriam Herman F. Mark, Otto Vogl, Marcel Dekker Dec 1993 #### 66. In Memoriam Herman F. Mark, Otto Vogl, Marcel Dekker ##### Emeritus Faculty Author Gallery No abstract provided. Polymer Science At The Kyoto Institute Of Technology, Kyoto, Japan, Otto Vogl, Shinzo Kohjiya, Takeo Araki Dec 1993 #### Polymer Science At The Kyoto Institute Of Technology, Kyoto, Japan, Otto Vogl, Shinzo Kohjiya, Takeo Araki ##### Emeritus Faculty Author Gallery No abstract provided. ##### Emeritus Faculty Author Gallery No abstract provided. Two-Temperature Discrete Model For Nonlocal Heat Conduction, Sergey Sobolev Dec 1993 #### Two-Temperature Discrete Model For Nonlocal Heat Conduction, Sergey Sobolev ##### Sergey Sobolev The two-temperature discrete model for heat conduction in heterogeneous media is proposed. It is shown that the discrete model contains as limiting cases both hyperbolic and parabolic heat conduction equations for propagative and diffusive regimes, respectively. To obtain these limiting cases two different laws of continuum limit have been introduced.
The evolution of the two-temperature system comprises three stages with distinct time scales: fast relaxation of each subsystem to local equilibrium, energy exchange between the subsystems, and classical hydrodynamics.
https://www.shaalaa.com/question-bank-solutions/answer-the-following-in-one-sentence-calculate-the-ph-of-001-m-sulphuric-acid-the-ph-scale_157225
# Answer the following in one sentence : Calculate the pH of 0.01 M sulphuric acid. - Chemistry

Sum

Answer the following in one sentence: Calculate the pH of 0.01 M sulphuric acid.

#### Solution

Given: Concentration of sulphuric acid = 0.01 M

To find: pH

Formula: $pH = -\log_{10}[\mathrm{H_3O^+}]$

Calculation: Sulphuric acid (H2SO4) is a strong acid. It dissociates almost completely in water as:

$\ce{H2SO4_{(aq)} + 2H2O_{(l)} -> 2H3O^+_{(aq)} + SO^{2-}_{4(aq)}}$

Hence, $[\mathrm{H_3O^+}] = 2 \times c = 2 \times 0.01\ \text{M} = 2 \times 10^{-2}\ \text{M}$

From the formula,

$pH = -\log_{10}[\mathrm{H_3O^+}] = -\log_{10}(2 \times 10^{-2}) = -\log_{10} 2 - \log_{10} 10^{-2} = 2 - 0.3010 = 1.699$

The pH of 0.01 M sulphuric acid is 1.699.

Concept: The pH Scale

#### APPEARS IN

Balbharati Chemistry 12th Standard HSC for Maharashtra State Board, Chapter 3 Ionic Equilibria, Exercises | Q 2.07 | Page 61
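The arithmetic above can be checked with a short script (a sketch, not part of the original solution; like the worked answer, it assumes both protons dissociate completely):

```python
import math

# 0.01 M H2SO4; assuming complete dissociation of both protons,
# as in the worked solution above, [H3O+] = 2 * c.
c = 0.01              # mol/L of H2SO4
h3o = 2 * c           # hydronium concentration, 2e-2 M
pH = -math.log10(h3o) # pH = -log10[H3O+]
print(round(pH, 3))   # → 1.699
```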
http://aas.org/archives/BAAS/v27n4/aas187/S060001.html
Session 60 - Stellar Astrophysics I. Oral session, Tuesday, January 16 Salon del Rey North, Hilton ## [60.01] The Luminosity Functions of the Globular Clusters M5 and M30 E. Sandquist, M. Bolte (UCO/Lick Obs.) We have computed the luminosity functions for the metal-rich cluster M5 and for the metal-poor cluster M30 based on wide-field photometry. The size of the stellar samples (42,000 and 20,000 stars respectively) allows us to finely bin the data. Our samples range in brightness from the tip of the red-giant branch to several magnitudes below the cluster turnoff in the color-magnitude diagram. We find that there is no evidence for the "subgiant excesses" previously seen in metal-poor clusters, and we believe the previous results failed to adequately account for unresolved blends of stars near the cluster's turnoff in the color-magnitude diagram. Incorporation of $\alpha$-element enhancements into theoretical luminosity functions can explain the majority of the discrepancy between the observed and predicted positions of the red-giant clump, reducing the need for convective overshooting. We also find that for the metal-rich clusters the observations agree with theoretical predictions of the relative numbers of stars on the red-giant branch and main sequence. For the metal-poor clusters though, there are relatively more red giants (or alternately, fewer main sequence stars) than predicted. This result introduces the disturbing possibility that evolutionary timescales for the red-giant branch or main-sequence are not correctly predicted as a function of metallicity.
http://physics.stackexchange.com/tags/bose-einstein-condensate/hot?filter=month
Tag Info Hot answers tagged bose-einstein-condensate 3 In the Bogoliubov transformation, it is the case that $$u_{-p} = u^*_p,$$ and $$v_{-p} = v^*_p,$$ so your result is correct. In fact, if you move on to basically the next equation, they assume that both $u$ and $v$ are real, in which case $$u_{-p} = u^*_p = u_p,$$ as mentioned by Mark Mitchison in a comment below this answer. And incidentally, yes it is ... 1 I will try to address the first point raised by the OP, i.e. the occurrence of spontaneous symmetry breaking in Bose-Einstein condensation. The free boson gas is described by the Hamiltonian: $$H_V=\int_V\frac{d^sx}{2m}\big|\nabla\phi(x)\big|^2.$$ The ground state satisfies $H_V\Psi_0 = 0,\ \forall V$ and hence $\nabla \phi(x)\Psi_0=0,\ \forall x.$ ... 1 Superfluid helium finds an application as a coolant in superconducting systems (http://link.springer.com/chapter/10.1007%2F3-540-45542-6_4#page-1) 1 I dug around in the literature a bit and found that this formulation (semiclassical Bose-Hubbard plus Langevin-type dissipation) has been studied before. Here is the relevant reference: http://arxiv.org/abs/1304.5071. What you are trying to do is derive their equation (9). You probably missed it because they refer to their model as the discrete nonlinear ...
https://ncatlab.org/nlab/show/null+subset
# nLab null subset

(Null set redirects here; for the notion in set theory, see empty set.)

# Null and full sets

## Idea

In measure theory, a null set is a subset of a measure space (or measurable space) that is so small that it can be neglected: it might as well be the empty subset; its measure is zero. Similarly, a full set is a subset that is so large that it might as well be the improper subset (the entire space). One also says that a null set has null measure and a full set has full measure. Traditionally, full sets are not usually referred to explicitly; in classical mathematics, they are simply the complements of null sets. However, they are often referred to implicitly through such terminology as ‘almost everywhere’. Also, in constructive mathematics, full sets are more fundamental than null sets; they are not simply the complements of the latter.

## Definitions

The definitions depend on the context.

### In a measure space

In a traditional measure space, we have an abstract set $X$, a $\sigma$-algebra (or similar structure) $\mathcal{M}$ consisting of the measurable subsets of $X$, and a measure $\mu$ mapping each measurable set $A$ to a real number (or similar quantity) $\mu(A)$, the measure of $A$. A measurable subset $B$ of $X$ is full if, given any measurable set $A$, $\mu(A \cap B) = \mu(A)$; an arbitrary subset of $X$ is full if it's a superset of a full measurable set. Dually, a measurable set $B$ is null if, given any measurable set $A$, $\mu(A \cup B) = \mu(A)$; an arbitrary subset of $X$ is null if it's a subset of a null measurable set. Some equivalent characterisations (constructively valid for measures on Cheng spaces except as stated):

• A measurable set $B$ is null iff $\mu(C) = 0$ for every measurable subset $C$ of $B$.
• If $\mu$ is a positive measure, then a measurable set $B$ is null iff $\mu(B) = 0$.
• If $\mu$ is a finite measure with total measure $I$, then a measurable set $B$ is full iff $\mu(C) = I$ for every measurable superset $C$ of $B$.
• If $\mu$ is both positive and finite (so a probability measure up to rescaling), then a measurable set $B$ is full iff $\mu(B) = I$.
• If $\mu$ is complete, then every null set is measurable and every full set is measurable (which is basically the definition of ‘complete’) and consequently the preceding properties continue to hold when the adjective ‘measurable’ is removed.
• Using excluded middle, a set is null iff its complement is full, and a set is full iff its complement is null. (Even constructively, if a set is null, then its complement is full.)
• Even constructively, a measurable set is null iff its measurable complement (the complement in the algebraic structure of complemented pairs in a Cheng measurable space) is full, and a measurable set is full iff its measurable complement is null.

### In a measurable space

Traditionally, a measurable space is simply an abstract set $X$ and a $\sigma$-algebra (or similar structure) $\mathcal{M}$ consisting of the measurable subsets of $X$. There is no notion of null or full subsets of such a space. However, there are two (essentially equivalent) variations of this concept in which null and full subsets do make sense. One variation is to simply equip the space with a $\delta$-filter of measurable subsets, which are declared to be the full measurable subsets. Then a general full subset is a superset of a measurable full subset, and a null subset is any set whose complement is full. (Alternatively, equip the space with a $\sigma$-ideal of measurable subsets, which are declared to be the null measurable subsets.) In particular, a localizable measurable space is a measurable space so equipped such that the Boolean algebra of measurable sets modulo null sets (or modulo full sets if this is done by identifying the full sets with $X$) is complete.
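As a quick sanity check (my own illustration, not part of the nLab article), the two characterisations of null sets for a positive measure — $\mu(A \cup B) = \mu(A)$ for every measurable $A$, versus $\mu(C) = 0$ for every measurable subset $C$ of $B$ — can be verified to agree on a toy finite measure space:

```python
from itertools import chain, combinations

# Toy positive measure on X = {0,1,2,3}: the sigma-algebra is the full
# power set, and the point 2 carries measure zero.
X = {0, 1, 2, 3}
weight = {0: 1.0, 1: 2.0, 2: 0.0, 3: 0.5}

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

measurable = powerset(X)
mu = lambda a: sum(weight[x] for x in a)

def is_null_by_union(b):
    # B is null iff mu(A ∪ B) = mu(A) for every measurable A.
    return all(mu(a | b) == mu(a) for a in measurable)

def is_null_by_subsets(b):
    # For a positive measure: B is null iff mu(C) = 0 for every
    # measurable subset C of B (equivalently, mu(B) = 0).
    return all(mu(c) == 0 for c in measurable if c <= b)

# The two characterisations agree on every measurable set.
assert all(is_null_by_union(b) == is_null_by_subsets(b) for b in measurable)
print(sorted(tuple(sorted(b)) for b in measurable if is_null_by_union(b)))
# → [(), (2,)]  i.e. exactly the empty set and {2} are null
```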
Another variation, used especially in constructive mathematics, is a Cheng measurable space. This consists of a set $X$ equipped with a $\sigma$-semialgebra of disjoint pairs of subsets of $X$, declared to be the complemented pairs. A set is measurable iff it appears as one component of a complemented pair. A measurable subset is full if it appears as one component of a complemented pair whose other component is empty, or equivalently (given the structure of the algebra of complemented pairs) if it is the union of the two components of any complemented pair. Then a general full subset is a superset of a measurable full subset, and a null subset is any set whose complement is full. These are actually equivalent concepts. Given a measurable space equipped with a $\delta$-filter of measurable full subsets, define a complemented pair to be a pair of disjoint measurable subsets whose union is full. Conversely, given a Cheng measurable space, the measurable subsets and measurable full subsets as defined above comprise a $\sigma$-algebra and a $\delta$-filter in it. (But constructively, the algebra of measurable subsets, while closed under the appropriate operations, will generally not be a boolean algebra.) ### In smooth manifolds A subset $A$ of an $n$-dimensional smooth manifold $X$ is null or full (respectively) if its preimage under every coordinate chart is a null or full subset (respectively) of the chart's domain (which is an open subset of the Cartesian space $\mathbb{R}^n$) under Lebesgue measure. This is actually better behaved than it may at first seem. If $A$ is covered by an atlas $(\phi_i\colon U_i \to X)_i$, then $A$ is null or full as soon as $\phi_i^*(A)$ is null/full in $U_i$ for every index $i$. In particular, if $A$ is contained in a single coordinate chart (which is not very likely for a full set but fairly common for null sets), then it is sufficient to check its preimage under that one. 
This fact depends on the smoothness and fails for topological manifolds. As we can define a measurable subset of a smooth manifold similarly, this means that every smooth manifold gives rise to a measurable space equipped with a $\delta$-filter of full subsets (and hence to a Cheng measurable space); this space is always localizable. (Details? Is $C^1$ sufficient? Conversely, is paracompactness necessary to keep the covers manageable?) ## Logic of full/null sets A property of elements of $X$ (given by a subset $S$ of $X$) can be considered modulo null sets. We say that the property $\phi$ is true almost everywhere or almost always if it is true on some full set, that is if $\{X | \phi\}$ is full. Dually, we say that $\phi$ is true almost nowhere or almost never if $\{X | \phi\}$ is null. It is better to use the negation of ‘almost nowhere’, although the terminology for this is not really standard; say that $\phi$ is true somewhere significant if $\{X | \phi\}$ is non-null. Note that being true almost everywhere is a weakening of being true everywhere (given by the universal quantifier $\forall$), while being true somewhere significant is a strengthening of being true somewhere (given by the particular quantifier $\exists$). Indeed we can build a logic out of these. Use $\ess\forall i, \phi[i]$ or $\ess\forall \phi$ to mean that a predicate $\phi$ on $X$ is true almost everywhere, and use $\ess\exists i, \phi[i]$ or $\ess\exists \phi$ to mean that $\phi$ is true somewhere significant. 
Then we have:

$\forall \phi \;\Rightarrow\; \ess\forall \phi$

$\ess\exists \phi \;\Rightarrow\; \exists \phi$

$\ess\forall (\phi \wedge \psi) \;\Leftrightarrow\; \ess\forall \phi \wedge \ess\forall \psi$

$\ess\exists (\phi \wedge \psi) \;\Rightarrow\; \ess\exists \phi \wedge \ess\exists \psi$

$\ess\forall (\phi \vee \psi) \;\Leftarrow\; \ess\forall \phi \wedge \ess\forall \psi$

$\ess\exists (\phi \vee \psi) \;\Leftrightarrow\; \ess\exists \phi \vee \ess\exists \psi$

$\ess\forall \neg{\phi} \;\Leftrightarrow\; \neg\ess\exists \phi$

and other analogues of theorems from predicate logic. Note that the last item listed requires excluded middle even though its analogue from ordinary predicate logic does not. A similar logic is satisfied by ‘eventually’ and its dual (‘frequently’) in filters and nets.
http://crypto.stackexchange.com/questions?page=21&sort=frequent
# All Questions 231 views ### Is it better to encrypt before compression or vice versa? Is it better to encrypt a plain text file before compression, or vice versa? 1k views ### Hash function in PBKDF2 From this excellent answer I learned (correct me if I am wrong) that when writing a block cipher with say key size 128 bit, one has to pad the password given (variable size) so that it becomes exactly ... 474 views ### SHA256 output to 0-99 number range? Is it mathematically possible to take a SHA256 hash and turn it into a 0-99 number where each number in 0-99 range is equally likely to be picked? As a 256 bit hash means the highest value possible ... 643 views ### Why use a timestamp and how can someone know it's the correct one? Let's say A wants to send a message, so everyone who gets the message, can be assured that it's from A. A then sends a message ... 2k views ### Keeping IV secret for AES CFB mode I'm developing a security/encryption software and I'm using AES CFB (block size: 16 and key size: 32 bytes). I want to know, if I also keep IV (32 bytes) secret like the key itself (32 bytes), would ... 1k views ### Complexity of arithmetic in a finite field? I am wondering what the complexities are of adding/subtracting and muliplying/dividing numbers in a finite field $\mathbb{F}_q$. I need it to understand an article I am reading. Thank you 183 views ### understanding pairing $e:G \times G \to G_T$ and ( Decision)BDH assumption From DrLecter's comment, I know that DDH problem can be efficiently solved with this $$e(g^a,g^b)\stackrel{?}{=} e(g,g^z).$$ I have some trouble to understand this map $e:G \times G \to G_T$. Am I ...
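For the SHA256-to-0-99 question listed above, one standard approach (a sketch of my own, not an answer taken from the site) is rejection sampling: interpret the digest as an integer and retry — for example by re-hashing — whenever it lands in the small top range that would bias the modulo:

```python
import hashlib

def hash_to_0_99(data: bytes) -> int:
    """Map a SHA-256 digest to 0..99 with no modulo bias, by rejection
    sampling: re-hash until the 256-bit value falls below the largest
    multiple of 100 that fits in 2**256."""
    limit = (2**256 // 100) * 100  # values >= limit are rejected (tiny probability)
    digest = hashlib.sha256(data).digest()
    while True:
        n = int.from_bytes(digest, "big")
        if n < limit:
            return n % 100
        digest = hashlib.sha256(digest).digest()  # deterministic retry

print(hash_to_0_99(b"example"))  # some value in 0..99
```

Because the accepted values are uniform over a range that is an exact multiple of 100, each of the 100 outputs is equally likely over random inputs.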
https://ltwork.net/the-quantum-efficiency-for-the-formation-of-ethene-from--427574
# The quantum efficiency for the formation of ethene from di-n-propylketone (heptan-4-one) with 313 nm light is 0.21. How many

###### Question:

The quantum efficiency for the formation of ethene from di-n-propylketone (heptan-4-one) with 313 nm light is 0.21. How many molecules of ethene per second, and moles per second, are formed when the sample is irradiated with a 313 nm light operating at 50 W at that wavelength, and under conditions such that all light is absorbed by the sample
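A sketch of the requested calculation (my own working, not taken from the page; constants are the standard CODATA values): with all light absorbed, the photon absorption rate is $P\lambda/(hc)$, and multiplying by the quantum efficiency gives the ethene formation rate.

```python
h = 6.62607015e-34      # Planck constant, J·s
c = 2.99792458e8        # speed of light, m/s
N_A = 6.02214076e23     # Avogadro constant, 1/mol

P = 50.0                # absorbed power, W
wavelength = 313e-9     # m
phi = 0.21              # quantum efficiency for ethene formation

E_photon = h * c / wavelength          # ≈ 6.35e-19 J per photon
photons_per_s = P / E_photon           # ≈ 7.9e19 photons absorbed per second
molecules_per_s = phi * photons_per_s  # ≈ 1.7e19 ethene molecules per second
moles_per_s = molecules_per_s / N_A    # ≈ 2.7e-5 mol per second

print(f"{molecules_per_s:.2e} molecules/s, {moles_per_s:.2e} mol/s")
```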
https://studydaddy.com/question/econ-312-week-6-quiz
QUESTION # ECON 312 Week 6 Quiz This paper of ECON 312 Week 6 Quiz comprises: (TCO 7) If you write a check on a bank to purchase a used Honda Civic, you are using money primarily as (TCO 7) In the United States, the money supply (M1) is comprised of (TCO 7) Answer the question on the basis of the following list of assets: (TCO 7) Assume that Smith deposits $600 in currency into her checking account in the XYZ Bank. Later that same day, Jones negotiates a loan for $1,200 at the same bank. In what direction and by what amount has the supply of money changed (TCO 7) Overnight loans from one bank to another for reserve purposes entail an interest rate called the (TCO 7) When a bank loan is repaid, the supply of money (TCO 7) The transactions demand for money is most closely related to money functioning as a (TCO 7) The equilibrium rate of interest in the market for money is determined by the intersection of the (TCO 7) Which of the following is not a tool of monetary policy? (TCO 7) In the latter end of 2001 the Fed cut the federal funds rate several times. The Fed
https://mathematics.huji.ac.il/eventss/events-seminars?page=8
2020 Jun 08 # Combinatorics: Bannai Eiichi (Kyushu University) 11:00am to 12:45pm ## Location: Zoom Speaker: Bannai Eiichi (Kyushu University) Title: On unitary t-designs Abstract: The purpose of design theory is for a given space $M$ to find good finite subsets $X$ of $M$ that approximate the whole space $M$ well. There are many design theories for various spaces $M$. If $M$ is the sphere $S^{n-1}$ then such $X$ are called spherical designs. If $M$ is the unitary group $U(d)$, then such $X$ are called unitary designs. 2020 May 18 # Combinatorics: Tahl Nowik (BIU) 11:00am to 12:45pm 2020 Jun 15 # Combinatorics: Lior Gishboliner (TAU) 11:00am to 12:45pm Zoom 2020 May 06 # Logic Seminar - Timo Krisam 11:00am to 1:00pm ## Location: Zoom: Meeting ID: 959 8849 6874, Password: 020269 Timo Krisam will speak about distal theories and the type decomposition theorem. Title: Distal Theories and the Type Decomposition Theorem Abstract: The class of NIP-Theories is an important subject of study in pure model theory. It contains many interesting examples like stable theories, o-minimal theories or algebraically closed valued fields. 2020 May 06 # Set Theory E-Seminar: Alejandro Poveda (Universitat de Barcelona) - Sigma-Prikry forcings and their iterations 11:00am to 1:00pm ## Location: Zoom meeting ID 243-676-331 Joint E-seminar of Bar-Ilan University and the Hebrew University Title: Sigma-Prikry forcings and their iterations 2020 May 04 # Basic Set Theory E-seminar: Tzoor Plotnikov - Side conditions forcing of two types and the Proper Forcing Axiom. 11:00am to 1:00pm ## Location: Zoom meeting 995 0029 0990.
Password - 789132 2020 May 14 # Basic Notions: Mike Hochman "Dimension of Bernoulli convolutions" 4:00pm to 5:15pm ## Location: Join Zoom Meeting https://huji.zoom.us/j/98768675115?pwd=WnZOZUpuVmpoNGkrYWQxanNVWkQzUT09, Abstract: The Bernoulli convolution with parameter 1/2 < t < 1 is the distribution of the random variable (+/-)t + (+/-)t^2 + (+/-)t^3 + ..., where the sequence of signs +/- forms an unbiased i.i.d. random sequence. This distribution has been studied since the 1930s, and the main problem is to characterize those parameters t for which the distribution is absolutely continuous, or has full dimension. In these talks I will review the history and recent developments, leading up to P. Varju's proof a little over a year ago, that for all transcendental parameters the dimension is 1. 2020 May 07 # Basic Notions: Mike Hochman "Dimension of Bernoulli convolutions" 4:00pm to 5:15pm ## Location: Join Zoom Meeting https://huji.zoom.us/j/98768675115?pwd=WnZOZUpuVmpoNGkrYWQxanNVWkQzUT09 Abstract: The Bernoulli convolution with parameter 1/2 < t < 1 is the distribution of the random variable (+/-)t + (+/-)t^2 + (+/-)t^3 + ..., where the sequence of signs +/- forms an unbiased i.i.d. random sequence. This distribution has been studied since the 1930s, and the main problem is to characterize those parameters t for which the distribution is absolutely continuous, or has full dimension. In these talks I will review the history and recent developments, leading up to P. Varju's proof a little over a year ago, that for all transcendental parameters the dimension is 1.
2020 Apr 20 # Charlotte Chan [HUJI-BGU AGNT Seminar] 4:30pm to 5:30pm ## Location: https://zoom.us/j/468718180, Join Zoom Meeting https://zoom.us/j/468718180 Meeting ID: 468 718 180 Charlotte Chan (MIT) Title: L-packets of S-unramified regular supercuspidal representations 2020 Apr 27 # Spencer Leslie [HUJI-BGU AGNT Seminar] 2:15pm to 4:00pm ## Location: https://zoom.us/j/468718180, Speaker: Spencer Leslie (Duke) Title: The endoscopic fundamental lemma for unitary symmetric spaces Abstract: Motivated by the study of certain cycles in locally symmetric spaces and periods of automorphic forms on unitary groups, I propose a theory of endoscopy for certain symmetric spaces. The main result is the fundamental lemma for the unit function. After explaining where the fundamental lemma fits into this broader picture, I will describe its proof. Join Zoom Meeting 2020 May 05 # Dynamics Seminar: Matan Seidel (TAU): "Random Walks on Circle Packings" 2:00pm to 3:00pm Abstract: A circle packing is a canonical way of representing a planar graph. There is a deep connection between the geometry of the circle packing and the proababilistic property of recurrence/transience of the simple random walk on the underlying graph, as shown in the famous He-Schramm Theorem. The removal of one of the Theorem's assumptions - that of bounded degrees - can cause the theorem to fail. However, by using certain natural weights that arise from the circle packing for a weighted random walk, (at least) one of the directions of the He-Schramm Theorem remains true. 2020 May 19 # Dynamics seminar : Michael Bersudsky (Technion) "Equidistribution of the image in the torus of sparse points on dilating analytic curves" 2:00pm to 3:00pm Title: Equidistribution of the image in the torus of sparse points on dilating analytic curves
https://socratic.org/questions/how-do-you-multiply-8m-2-8m-3-2m-2-8m-4
# How do you multiply (8m^2+8m+3)(2m^2+8m+4)?

Jun 25, 2017

$16m^4 + 80m^3 + 102m^2 + 56m + 12$

#### Explanation:

Multiply everything in the right brackets by everything in the left:

$8m^2(2m^2+8m+4) \to 16m^4 + 64m^3 + 32m^2$

$8m(2m^2+8m+4) \to 16m^3 + 64m^2 + 32m$

$3(2m^2+8m+4) \to 6m^2 + 24m + 12$

Adding the three partial products gives

$16m^4 + 80m^3 + 102m^2 + 56m + 12$
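The row-by-row expansion above is exactly a convolution of the two coefficient lists, which gives a quick mechanical check of the arithmetic (the helper name below is mine):

```python
def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists, constant term first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj  # coefficient of m**(i + j)
    return out

# (8m^2 + 8m + 3)(2m^2 + 8m + 4), coefficients listed from the constant term up:
print(poly_mul([3, 8, 8], [4, 8, 2]))  # → [12, 56, 102, 80, 16]
```

Reading the result from highest degree down recovers $16m^4 + 80m^3 + 102m^2 + 56m + 12$.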
https://www.aimsciences.org/article/doi/10.3934/amc.2016016
# American Institute of Mathematical Sciences

May 2016, 10(2): 429-436. doi: 10.3934/amc.2016016

## Coherence of sensing matrices coming from algebraic-geometric codes

1 Department of Mathematics, Sookmyung Women's University, Cheongpa-ro 47 gil 100, Yongsan-Ku, Seoul 140-742, South Korea

Received September 2014, Published April 2016

Compressed sensing is a technique used to reconstruct a sparse signal given few measurements of the signal. One of the main problems in compressed sensing is the deterministic construction of the sensing matrix. Li et al. introduced a new deterministic construction via algebraic-geometric codes (AG codes) and gave an upper bound for the coherence of the sensing matrices coming from AG codes. In this paper, we give the exact value of the coherence of the sensing matrices coming from AG codes in terms of the minimum distance of AG codes and deduce the upper bound given by Li et al. We also give formulas for the coherence of the sensing matrices coming from Hermitian two-point codes.

Citation: Seungkook Park. Coherence of sensing matrices coming from algebraic-geometric codes. Advances in Mathematics of Communications, 2016, 10 (2): 429-436. doi: 10.3934/amc.2016016
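For intuition: the coherence studied in the paper is the largest absolute inner product between distinct normalized columns of the sensing matrix, $\mu(A) = \max_{i \neq j} |\langle a_i, a_j\rangle| / (\|a_i\|\,\|a_j\|)$. The sketch below computes it for a generic matrix with nonzero columns; it is not the AG-code construction itself, and the function name is mine.

```python
import numpy as np

def coherence(A):
    """Coherence of a sensing matrix: the largest absolute inner product
    between distinct unit-normalized columns (columns assumed nonzero)."""
    G = A / np.linalg.norm(A, axis=0)  # normalize each column
    M = np.abs(G.T @ G)                # Gram matrix of normalized columns
    np.fill_diagonal(M, 0.0)           # ignore self inner products
    return M.max()

# Orthonormal columns have coherence 0; a repeated column forces coherence 1.
print(coherence(np.eye(4)))  # → 0.0
```

Deterministic constructions such as the AG-code one aim to make this quantity provably small, since small coherence guarantees recovery of sufficiently sparse signals.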
http://www.numdam.org/item/PS_2010__14__1_0/
Process level moderate deviations for stabilizing functionals

ESAIM: Probability and Statistics, Tome 14 (2010), pp. 1-15.

Functionals of spatial point processes often satisfy a weak spatial dependence condition known as stabilization. In this paper we prove process level moderate deviation principles (MDP) for such functionals, which is a level-3 result for empirical point fields as well as a level-2 result for empirical point measures. The level-3 rate function coincides with the so-called specific information. We show that the general result can be applied to prove MDPs for various particular functionals, including random sequential packing, birth-growth models, germ-grain models and nearest neighbor graphs.

DOI: https://doi.org/10.1051/ps:2008027

Classification: 60F05, 60D05

Keywords: moderate deviations, random euclidean graphs, random sequential packing

@article{PS_2010__14__1_0,
  author = {Eichelsbacher, Peter and Schreiber, Tomasz},
  title = {Process level moderate deviations for stabilizing functionals},
  journal = {ESAIM: Probability and Statistics},
  pages = {1--15},
  publisher = {EDP-Sciences},
  volume = {14},
  year = {2010},
  doi = {10.1051/ps:2008027},
  mrnumber = {2640365},
  language = {en},
  url = {http://www.numdam.org/articles/10.1051/ps:2008027/}
}

Eichelsbacher, Peter; Schreiber, Tomasz. Process level moderate deviations for stabilizing functionals. ESAIM: Probability and Statistics, Tome 14 (2010), pp. 1-15. doi: 10.1051/ps:2008027. http://www.numdam.org/articles/10.1051/ps:2008027/
https://jpmccarthymaths.com/2011/07/05/ms-2001-autumn-examination/
Please find the solutions to the Summer exam here. Note that these also include the marking scheme — numbers in bold brackets indicate marks, i.e. [3] implies three marks.

You will need to do exercises to prepare for your repeat exam. All of the following are of exam grade. If you have any questions please do not hesitate to use the comment function at the bottom:

# Exercise Sheets

http://euclid.ucc.ie/pages/staff/wills/teaching/ms2001/exercise1.pdf — Qns. 2, 4, 7-9

http://euclid.ucc.ie/pages/staff/wills/teaching/ms2001/exercise2.pdf — Qns. 1, 5, 6 (but not the questions on singularities), 8, 9

http://euclid.ucc.ie/pages/staff/wills/teaching/ms2001/exercise3.pdf — Qns. 3 (especially the part on where they are differentiable), 5, 9, 10, 11, 14

http://euclid.ucc.ie/pages/staff/wills/teaching/ms2001/exercise4.pdf — Qns. 10, 11-17. (For Q. 10, also consider vertical asymptotes (which are singularities), the domain (where the function is defined), roots and $y$-intercepts. Examine the local maxima & minima using both the second and first derivative tests. Examine concavity by using the method of split points.)

# Past Papers

http://booleweb.ucc.ie/ExamPapers/exams2010/MathsStds/Autumn/MS2001Aut2010.pdf — except Q. 1(d)

http://booleweb.ucc.ie/ExamPapers/exams2010/MathsStds/MS2001Sum2010.pdf — except Q. 1(d)

http://booleweb.ucc.ie/ExamPapers/exams2009/MathsStds/Autumn/MS2001A09.pdf — except Q. 1(d)

http://booleweb.ucc.ie/ExamPapers/exams2009/MathsStds/MS2001s09.pdf — except Q. 1(d)

http://booleweb.ucc.ie/ExamPapers/Exams2008/MathsStds/MS2001a08.pdf

http://booleweb.ucc.ie/ExamPapers/exams2008/Maths_Stds/MS2001Sum08.pdf — except Q. 2

http://booleweb.ucc.ie/ExamPapers/exams2007/Maths_Stds/MS2001Aut07.pdf — except Q. 1(d)

http://booleweb.ucc.ie/ExamPapers/exams2007/Maths_Stds/MS2001Sum2007.pdf — except Q. 1(d),(e)

For these older papers the layout is different to ours:

http://booleweb.ucc.ie/ExamPapers/exams2006/Maths_Stds/Autumn/ms2001Aut.pdf

http://booleweb.ucc.ie/ExamPapers/exams2006/Maths_Stds/MS2001Sum06.pdf

http://booleweb.ucc.ie/ExamPapers/Exams2005/Maths_Stds/MS2001Aut05.pdf

http://booleweb.ucc.ie/ExamPapers/Exams2005/Maths_Stds/MS2001.pdf — except Q. 6(c)

http://booleweb.ucc.ie/ExamPapers/exams2004/Maths_Stds/MS2001aut.pdf

http://booleweb.ucc.ie/ExamPapers/exams2004/Maths_Stds/ms2001s2004.pdf

http://booleweb.ucc.ie/ExamPapers/exams2003/Maths_Studies/ms2001aut.pdf

http://booleweb.ucc.ie/ExamPapers/exams2003/Maths_Studies/MS2001.pdf

http://booleweb.ucc.ie/ExamPapers/exams2002/Maths_Stds/ms2001.pdf — except Q. 2(b)

http://booleweb.ucc.ie/ExamPapers/exams/Mathematical_Studies/MS2001.pdf — except Q. 6(a)

# New Material

The material that was additional to previous years was:

1. Closed Interval Method (note that Wills does define critical points; however, we define critical points on closed intervals and include the endpoints)
2. First Derivative Test
3. Asymptotes
4. Concavity

Past paper questions on maxima and minima on closed intervals can (in general) be answered using the Closed Interval Method. Past paper questions on maxima and minima on more general sets (including the entire real line) can (in general) be answered using the First Derivative Test. Finally, we use asymptotes to help in curve sketching.

Section 4 from Problems has examples of questions on this new material. In particular, questions 4 and 5 are of exam grade.
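The Closed Interval Method from item 1 above is short enough to state in code. This is an illustrative sketch of my own (the helper and the example function are not from the course notes): evaluate the function at the endpoints and at the interior critical points, then compare the values.

```python
def closed_interval_extrema(f, critical_points, a, b):
    """Closed Interval Method: the absolute extrema of a continuous f on
    [a, b] occur either at an endpoint or at an interior critical point.
    Returns ((argmin, min value), (argmax, max value))."""
    candidates = [a, b] + [x for x in critical_points if a < x < b]
    values = {x: f(x) for x in candidates}
    xmax = max(values, key=values.get)
    xmin = min(values, key=values.get)
    return (xmin, values[xmin]), (xmax, values[xmax])

# Example: f(x) = x^3 - 3x on [0, 2]; f'(x) = 3x^2 - 3 vanishes at x = ±1,
# but only x = 1 lies inside the interval.
f = lambda x: x**3 - 3*x
print(closed_interval_extrema(f, [-1.0, 1.0], 0.0, 2.0))
# → ((1.0, -2.0), (2.0, 2.0))
```

So the absolute minimum is -2 at the critical point x = 1 and the absolute maximum is 2 at the endpoint x = 2, which is exactly the comparison the method asks for.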
http://love2d.org/forums/viewtopic.php?f=3&t=81457&p=207931
## Post-0.10.0 feature wishlist

General discussion about LÖVE, Lua, game development, puns, and unicorns.

reno57 (Prole)

### Re: Post-0.10.0 feature wishlist

First, I would like to say that LÖVE is very well designed: it strikes a good balance between ease of access and power of game creation. I haven't been using it for long, but the thing I miss most is basic GUI support for designing my own game editors and tools.

spill (Prole)

### Re: Post-0.10.0 feature wishlist

I'm confused by all this argument about return values vs. erroring. If you want shader:send() to return a status code instead of erroring, why not just use pcall()? Doesn't that accomplish what folks are asking for? It seems totally reasonable to me that the default behavior is to error so the developer is aware of the problem. If you want to suppress an error, you can use

Code:

    local success, error_msg = pcall(function() shader:send("l", 3) end)

This seems better to me than making everyone wrap every call to shader:send() with assert(), since that makes the dangerous behavior of ignoring errors into the path of least resistance.
Also, FWIW: on my platform (Mac), unused variables are optimized out, but if you put the variable on a line by itself, it does not get optimized out, which I find useful for quickly swapping between two commented-out sections of code during development:

Code:

    uniform vec4 I;

    vec4 effect(vec4 vcolor, Image tex, vec2 tc, vec2 pc) {
        /*
        I; // this line guarantees that shader:send("I", ...) won't error
        return vec4(1,1,1,1);
        */
        // You can uncomment the block above, and comment out the following
        // line, and everything will keep working.
        return Texel(tex) + I;
    }

slime (Solid Snayke)

### Re: Post-0.10.0 feature wishlist

spill wrote:
    you can use
    local success, error_msg = pcall(function() shader:send("l", 3) end)

This can be reduced to the following code:

Code:

    local success, error_msg = pcall(shader.send, shader, "l", 3)

pgimeno (Party member)

### Re: Post-0.10.0 feature wishlist

Sorry for posting this here. Bitbucket does not like this browser any longer (probably related to something within its ~1 MB of JS code).

The 0.10.2 love.event.quit("reload") is nice, but as discussed lately, it doesn't fully allow restarting the application automatically. The hurdle is that when love.quit() returns true, it does nothing; therefore the application needs to be aware of whether there was a restart, to avoid returning true and performing shutdown in that case. An example of this pattern is Thrust II Reloaded, which offers a quit confirmation dialog by using the love.quit event. It doesn't save anything on exit, but there may be programs that both ask for confirmation and save on exit. Ignoring the result of love.quit in those cases would be wrong.

The proposed solution is to add two new values to the optional parameter of love.event.quit, namely "force" and "forcereload", and a new parameter to love.quit().
The new parameter would be true if "force" or "forcereload" was used, signalling that the return value of love.quit() will be ignored and that therefore the application needs to take any necessary cleanup actions immediately.

zorg (Party member)

### Re: Post-0.10.0 feature wishlist

pgimeno wrote: Sun Jan 28, 2018 5:36 pm ...

Do we need it to be this complicated though?

slime (Solid Snayke)

### Re: Post-0.10.0 feature wishlist

Can you just do something like this?

Code:

    function restart()
        love.event.quit("restart")
        restarting = true
    end

    function love.quit()
        if restarting then return end
        -- other code does whatever, here.
    end

pgimeno (Party member)

### Re: Post-0.10.0 feature wishlist

The problem with that approach is that it is not transparent. Implementations can then make restart transparent (think an IDE like ZBS), and user programs only need to be aware of the parameter, with the advantage of it being documented as part of the engine. Another problem with that approach is concurrency, in case love.event.quit("force") is issued from a thread, which sounds like the most convenient implementation of a hot-restart library.
https://www.zbmath.org/?q=an%3A1242.65209
# zbMATH — the first resource for mathematics

Numerical solutions of nonlinear Burgers equation with modified cubic B-splines collocation method. (English) Zbl 1242.65209

Summary: A numerical method is proposed to approximate the solution of the nonlinear Burgers' equation. The method is based on collocation of modified cubic B-splines over finite elements, so that we have continuity of the dependent variable and its first two derivatives throughout the solution range. We apply modified cubic B-splines to the spatial variable and its derivatives, which produces a system of first-order ordinary differential equations. We solve this system by using SSP-RK43 or SSP-RK54. These methods need less storage space, which causes less accumulation of numerical errors. The numerical approximate solutions to the Burgers' equation are computed without transforming the equation and without using linearization. Eleven illustrative examples are included to demonstrate the validity and applicability of the technique. Easy and economical implementation is the strength of this method.

##### MSC:

65M70 Spectral, collocation and related methods for initial value and initial-boundary value problems involving PDEs
35Q53 KdV equations (Korteweg-de Vries equations)
65L06 Multistep, Runge-Kutta and extrapolation methods for ordinary differential equations
65M30 Numerical methods for ill-posed problems for initial value and initial-boundary value problems involving PDEs
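To give a feel for what such solvers do, here is a deliberately crude sketch of the same problem: central differences in space and forward Euler in time, applied to the viscous Burgers' equation u_t + u u_x = ν u_xx. This is not the modified cubic B-spline collocation with SSP-RK43/54 that the paper develops; the function name and all parameter choices are mine.

```python
import numpy as np

def burgers_fd(nu=0.1, nx=101, nt=2000, L=1.0, T=0.5):
    """Solve u_t + u*u_x = nu*u_xx on [0, L] with u = 0 at both ends,
    using central differences in space and forward Euler in time.

    A plain finite-difference sketch, not the B-spline collocation scheme
    of the paper. Stable here since nu*dt/dx**2 = 0.25 <= 0.5.
    """
    dx, dt = L / (nx - 1), T / nt
    x = np.linspace(0.0, L, nx)
    u = np.sin(np.pi * x)  # a standard test initial condition
    for _ in range(nt):
        ux = (u[2:] - u[:-2]) / (2 * dx)             # central first derivative
        uxx = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2  # central second derivative
        u[1:-1] += dt * (-u[1:-1] * ux + nu * uxx)
        u[0] = u[-1] = 0.0  # homogeneous Dirichlet boundary conditions
    return x, u
```

The interest of the paper's scheme is precisely that it avoids the tight step-size restrictions and low accuracy of a sketch like this one.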
2021-04-15 14:10:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4675973355770111, "perplexity": 6253.432356658286}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038085599.55/warc/CC-MAIN-20210415125840-20210415155840-00042.warc.gz"}
http://semparis.lpthe.jussieu.fr/public/list.pl?type=seminars&key=13396&language=
Status: Confirmed
Seminar Series: IPHT-MAT
Subjects: hep-th
Date: Wednesday 27 November 2019, 14:15
Institute: IPHT
Seminar Room: Salle Claude Itzykson, Bât. 774
Speaker: Lorenzo Papini
Title: The BPS limit of rotating AdS black hole thermodynamics

Abstract: In the last couple of years it has been proposed that the Bekenstein-Hawking entropy of rotating AdS$_d$ black holes ($4 \leq d \leq 7$) can be reproduced by extremizing the Legendre transform of a homogeneous function of chemical potentials subject to a complex constraint. In this seminar, I will provide a physical interpretation of these extremization principles, showing that in each dimension the entropy function coincides with the on-shell supergravity action when the BPS chemical potentials are obtained by taking a specific BPS limit of black hole thermodynamics. To perform the limit, one starts from finite temperature and reaches the extremal BPS black hole along a supersymmetric trajectory in the space of complexified solutions. We thus provide a generalization of the BPS limit proposed in [arXiv:1810.11442] to multicharge black holes and to every dimension $d$.

Comments: https://www.ipht.fr/Phocea/Vie_des_labos/Seminaires/index.php?id=993968
https://mathoverflow.net/questions/218994/svd-vs-fourier-analysis-for-data
# SVD vs Fourier analysis for data

Fourier analysis is useful for analysis in the frequency domain. SVD, on the other hand, is useful for analyzing data and expressing the noise in the data. I have a problem that needs extensive data analysis; it is in the area of medicine, though it could be generalized to other problems. The problem is that of gene expression in the case of long-term gene mutation. Using Fourier analysis we can get a time-series analysis of the genes (and thereby identify noisy gene expression) and, as time progresses, the changes in a particular organ. On the other hand, we could use Singular Value Decomposition, and the noisy gene expression reveals itself. This is just an outline of the problem. Both SVD and Fourier analysis lend themselves to the problem of expressing noisy genes. Is there any comparison of the two techniques, a qualitative reason why one would be preferred over the other, or references that one can use for the problem of gene expression? Thanks in anticipation.

• Fourier analysis approximates a continuous function using trigonometric polynomials; SVD approximates a matrix using eigenvectors of its left (or right) singular matrix. These are a priori completely different problems, and your question doesn't make much sense unless you can indicate why you think these two problems are related. – Paul Siegel Sep 23 '15 at 2:35
• You might get more feedback on sites like scicomp.stackexchange.com, stats.stackexchange.com, or dsp.stackexchange.com. But if you ask your question in its current form, they might also close it, because it is not really clear what you want to do exactly. For my answer, I just guessed that stochastic processes might be your context, because both Fourier analysis and "optimal" decompositions make sense in that context. – Thomas Klimpel Sep 23 '15 at 20:00
• @PaulSiegel I guess the OP has in mind that the discrete Fourier transform transforms discrete "spatial" data to discrete "frequency" data.
I would say that the SVD decomposes every matrix $A$ as $U^TDV$ with a diagonal $D$ and orthonormal $U$ and $V$, while the discrete Fourier transform $F$ is also orthonormal and gives $C = F^HDF$ with diagonal $D$ for circulant matrices $C$. – Dirk Sep 24 '15 at 7:12
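The contrast in Dirk's comment can be made concrete on synthetic data: SVD acts on a whole matrix of traces (say genes × time points) and separates structure from noise by rank truncation, while the FFT analyzes a single trace in frequency. A minimal sketch; nothing here comes from the OP's actual data — all sizes, signals, and variable names are made-up assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200, endpoint=False)

# Two temporal patterns shared across 50 synthetic "genes".
pattern_a = np.sin(2 * np.pi * 3 * t)   # 3 Hz oscillation
pattern_b = np.exp(-3 * t)              # slow decay
clean = rng.normal(size=(50, 2)) @ np.vstack([pattern_a, pattern_b])  # rank 2
noisy = clean + 0.3 * rng.normal(size=clean.shape)

# SVD view: truncating to the two largest singular values denoises the
# whole matrix at once, because the clean part is exactly rank 2.
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
denoised = (U[:, :2] * s[:2]) @ Vt[:2]
err_before = np.linalg.norm(noisy - clean)
err_after = np.linalg.norm(denoised - clean)

# Fourier view: the FFT of one noisy trace picks out its frequency.
trace = pattern_a + 0.3 * rng.normal(size=t.size)
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
peak_hz = freqs[1:][np.argmax(np.abs(np.fft.rfft(trace))[1:])]  # skip DC

print(err_after < err_before)  # True: rank truncation removed most noise
print(peak_hz)                 # 3.0
```

The qualitative difference this illustrates: SVD needs many aligned traces and exploits correlation across them, with no notion of frequency; the FFT works on one trace and exploits periodicity, with no notion of shared structure.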
https://dsp.stackexchange.com/questions/60740/why-are-low-frequency-peaks-not-sharp-in-scipy-fft
# Why are low frequency peaks not sharp in scipy fft?

I am using numpy/scipy to plot graphs of sine waves. Frequencies at 15 Hz or higher give nice, sharp peaks, but at lower frequencies the peaks are smeared and the actual peak frequency can't be seen in the graph. Why is that happening, and how do I stop it?

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.fft import fft  # missing in the original snippet

tmin = 0
tmax = 0.5
N = 10000  # no. of samples
t = np.linspace(tmin, tmax, N)
T = t[1] - t[0]  # sampling interval
fs = 1 / T       # sampling frequency
print(fs)

# x1 = np.sin(2*np.pi*5*t) + 0.8*np.sin(2*np.pi*10*t)
x1 = (np.sin(2*np.pi*20*t) + np.sin(2*np.pi*30*t)
      + np.sin(2*np.pi*5*t) + np.sin(2*np.pi*3*t))
x2 = x1 + np.sin(2*np.pi*50*t) + 0.8*np.sin(2*np.pi*100*t)
x2_fft = fft(x2)

plt.plot(np.abs(x2_fft))
plt.title(r'fft of $x_2(t)$')
plt.show()

xf = np.linspace(0, fs/2.0, N//2)             # create frequency axis
plt.plot(xf, 2.0/N * np.abs(x2_fft[0:N//2]))  # plot only +ve frequencies
plt.xlim([0, 120])
plt.xlabel('Frequency (Hz)')
plt.ylabel('Amplitude')
plt.title(r'fft of $x_2(t)$')
plt.grid(True)
plt.show()
```

• They seem to get sharper if the interval is increased. – Azhar Mehmood Sep 16 '19 at 20:50
• How many periods of a function must a window have before it becomes clear? – Azhar Mehmood Sep 16 '19 at 20:51
• Define "clear" and "sharp" as per your requirements. – hotpaw2 Sep 17 '19 at 14:59
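The smearing in the question above is a frequency-resolution effect: the FFT bin spacing is 1/(tmax - tmin), so a 0.5 s record gives bins 2 Hz apart, and the 3 Hz and 5 Hz tones (only 1.5 and 2.5 periods long, landing between bins) leak across many bins, while 20 Hz and up look sharp. A small self-contained check of this, where the helper name and sample rate are my own, not from the question:

```python
import numpy as np

def peak_concentration(tmax, f0=3.0, fs=2000):
    """Fraction of spectral energy in the single largest FFT bin
    for a pure f0-Hz sine observed over tmax seconds."""
    n = int(round(tmax * fs))
    t = np.arange(n) / fs
    mag2 = np.abs(np.fft.rfft(np.sin(2 * np.pi * f0 * t))) ** 2
    return mag2.max() / mag2.sum()

# Bin spacing is 1/tmax: 2 Hz for a 0.5 s record, 0.2 Hz for a 5 s one.
short = peak_concentration(0.5)  # 1.5 periods of 3 Hz: energy smeared
long_ = peak_concentration(5.0)  # 15 whole periods: one sharp bin
print(round(short, 2), round(long_, 2))
```

With the longer record the tone completes a whole number of periods and falls exactly on a bin, so essentially all of its energy concentrates there; with the short record it does not, which is the smearing the question describes.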
http://koa.ifa.hawaii.edu/Reference/datalib/stokes.html
## How To Make Archive Tapes of Stokes Polarimeter Data

### Mees to Manoa

Presently, the Haleakala Stokes Polarimeter data is FTP'ed from Mees Solar Observatory to Manoa in the early morning the day after the data was taken, at approximately 4 AM. Data from each day is stored in a separate directory whose name is given by the day number of the year. The magnetograms, calibration files, and preliminary reductions are stored in the directory. For a "normal" day of observing, with three to four magnetograms, the amount of data for a single day is 5-15 Mb. This of course strongly depends on the number of active regions present and the occurrence of special campaign observations. Even after the data is transferred to Manoa, copies are kept on-line at Mees. These on-line files are retained until the data is successfully copied onto exabyte in Manoa. In addition, the Stokes data is also stored on system backup tapes of the computers at Mees. Therefore, there should not be any single point of failure allowing data to be lost. ### Archiving the Data in Manoa Once the amount of data transferred from Mees reaches a critical mass, it can be tarred off to exabyte and stored. This should generally be done when the amount of data reaches 200-300 Mb, or when Tom needs disk space, whichever comes first. It generally takes about two months for this much data to accumulate. The data is stored on milo in /solarm/FTP/pub/stokes in individual directories as detailed above. The data is archived using the tar command to write all the files/directories into a single compressed file on an exabyte tape. However, milo does not have its own exabyte drive, so the tar file must be transferred over the network to a machine that does have an exabyte.
The basic command sequence for tarring the data and transferring it to the exabyte drive on akala is as follows (these commands are explained in the UNIX man pages for the tar command):

akala>102% allocate st0
milo>101% cd /solarm/FTP/pub/stokes
milo>102% tar -cvfb - 20 * | rsh akala dd of=/dev/nrst0 obs=20b

This will take an hour or so. After the tar is complete, always rewind the tape and check it again with tar to make sure it can be read properly, using the following commands:

akala>104% mt -f /dev/nrst0 rew
akala>105% tar -tvf /dev/nrst0 > /solar/Stokes/tape_logs/stokes.tar.list.DATE

where DATE is the current date in the DDmmmYY format (e.g. 06apr95). For the truly paranoid, this check can be done on another exabyte drive on a separate machine in order to guarantee that it is generally readable. The output from this tar listing is saved into a new file. This file is kept on-line in /solar/Stokes/tape_logs (e.g. /solar/Stokes/tape_logs/stokes.tar.list.11oct95) in order to allow quick searches of the tape contents, to see what data is available, etc. In order to also assure that all the data in the directories was actually stored on the tape, one can compare the output of the "tar -t" command above with a general directory listing, produced with the following command:

milo>103% ls -lR > /solar/Stokes/tape_logs/stokes.ls.list.DATE

The numbers of files, their names, and their sizes can be compared between the two files to assure that no data was lost. Comparing the entire listing can be quite time consuming, so generally a spot check of several directories is all that is done. The above command will also store the output of the "ls -lR" command in the Stokes tape log directory (e.g. /solar/Stokes/tape_logs/stokes.ls.list.11oct95). This is simply done for the sake of completeness and to allow one to check in the future the actual contents of the directory against what was archived.
When the integrity of the tar file on exabyte has been checked, a listing of all the archived directories is made and sent to Elaine Kiernan (kiernan@koa), using a command such as:

milo>104% ls -l * > ~/stokes.dir.list

This lets Elaine know that the data has been successfully transferred to exabyte and can be deleted from the machine there. After the data has been archived, it can be deleted from /solarm. I actually like to leave the data for the past week or so on /solarm so that there is some recent data there to look at, should the need arise. To delete the data from /solarm, the following command can be used:

milo>105% \rm -r NN*

where NN is the beginning two numbers of the directories you want to delete (e.g. 21*). The exabyte tape is then labelled and put upstairs in the data archive. The command used to archive the data ("tar -cvfb ... | dd ...") and which machine was used should be written on the card in the tape case. In addition, the days of the year included on that tape, and the range of dates that covers, should also be written on the tape.

### Miscellaneous

There is an automatic email message from polsyn@koa every time data is transferred from Mees to Manoa, sent to the current Stokes archiver and the observers. I have been keeping a file with all these messages (/solar/Stokes/tape_logs/stokes.data.199N) since they may be useful in the future if there is some discrepancy between the taken and archived data (i.e. where did the link break down?). But, as you can tell, I just like to be overly cautious. If there is a problem in the transfer, or if /solarm fills up, then polsyn@koa will automatically send a message detailing any discrepancies between the directories for the data on koa and the directories on milo. If you clear space on /solarm, the missing files will automatically be re-transferred the following night. In the past, a listing of the data that was archived off to tape was stored in a master file that showed the contents of each tape.
This was useful and appropriate when the data was stored on nine-track tape (which only held two or three days) and the notification and backup procedures were more complex. However, the files in /solarm/Stokes/tape_logs (described above) take over most of this functionality. I mostly kept the list up to date, but I don't think it is necessary anymore. The file, which lists tapes back to May, 1991, is in /solar/daily/Mees/tape_logs/stokes.tapes.logs. And that's all there is to it. One thing to watch out for is that new data may come in between the time you made the tar archive and when you go to delete the archived data. This could happen if you start the tar job in the evening, go home, and then delete the archived files the next day. In this case, be careful not to delete files that haven't been archived yet.
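The archive-and-verify loop described above is easy to rehearse as a script. This is a hedged sketch only: it uses modern tar syntax, a scratch directory in place of the real data tree, and a plain file in place of the remote exabyte device, so every path below is a stand-in, not one of the actual hosts or devices.

```shell
#!/bin/sh
# Sketch of the archive-and-verify flow with stand-in paths.
set -e
src=$(mktemp -d)
mkdir -p "$src/286" "$src/287"
echo magnetogram > "$src/286/scan1.dat"
echo calibration > "$src/287/cal1.dat"

# 1. Write the archive (stands in for: tar -cvfb - 20 * | rsh akala dd ...).
tar -cf /tmp/stokes.tar -C "$src" .

# 2. Rewind-and-check step: verify the archive is readable and keep the
#    listing, as with "tar -tvf /dev/nrst0 > .../stokes.tar.list.DATE".
tar -tf /tmp/stokes.tar > /tmp/stokes.tar.list

# 3. Spot-check that every source file made it onto the "tape" before
#    anything is deleted from the source area.
(cd "$src" && find . -type f) | while read -r f; do
    grep -Fqx "$f" /tmp/stokes.tar.list
done
echo OK
```

The ordering mirrors the prose: nothing is removed from the source side until step 3 has confirmed that the listing contains every file.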
http://blog.math.toronto.edu/GraduateBlog/2011/03/30/probability-theory-comprehensive-exam/
## Probability Theory Comprehensive Exam The Probability Theory comprehensive exam is administered by the Department of Statistics and is always held in May. Math students may choose to write this exam and use it as one of the three PhD comprehensive exam requirements for our program. It is not enough to take the Probability Theory courses (STA 2111HF and STA 2211HS) and obtain the minimum grade of A-, as is done with our core material. The exam is an almost all-day affair and this year it will be held on Tuesday, May 24, 2011. Please let me know asap if you intend to register for the exam. I will then pass your name on to the Statistics Dept. Thanks, Ida
http://www.mathworks.com/help/ident/ref/n4sidoptions.html?requestedDomain=www.mathworks.com&nocookie=true
# n4sidOptions

Option set for n4sid

## Syntax

opt = n4sidOptions
opt = n4sidOptions(Name,Value)

## Description

opt = n4sidOptions creates the default options set for n4sid.

opt = n4sidOptions(Name,Value) creates an option set with the options specified by one or more Name,Value pair arguments.

## Input Arguments

### Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

'InitialState' — Handling of initial states during estimation, specified as one of the following values:

• 'zero' — The initial state is set to zero.
• 'estimate' — The initial state is treated as an independent estimation parameter.

'N4Weight' — Weighting scheme used for singular-value decomposition by the N4SID algorithm, specified as one of the following values:

• 'MOESP' — Uses the MOESP algorithm by Verhaegen [2].
• 'CVA' — Uses the Canonical Variate Algorithm by Larimore [1]. Estimation using frequency-domain data always uses 'CVA'.
• 'SSARX' — A subspace identification method that uses an ARX-estimation-based algorithm to compute the weighting. Specifying this option allows unbiased estimates when using data that is collected in closed-loop operation. For more information about the algorithm, see [4].
• 'auto' — The estimating function chooses between the MOESP, CVA and SSARX algorithms.

'N4Horizon' — Forward- and backward-prediction horizons used by the N4SID algorithm, specified as one of the following values:

• A row vector with three elements — [r sy su], where r is the maximum forward prediction horizon, using up to r step-ahead predictors.
sy is the number of past outputs, and su is the number of past inputs that are used for the predictions. See pages 209 and 210 in [3] for more information. These numbers can have a substantial influence on the quality of the resulting model, and there are no simple rules for choosing them. Making 'N4Horizon' a k-by-3 matrix means that each row of 'N4Horizon' is tried, and the value that gives the best (prediction) fit to data is selected. k is the number of guesses of [r sy su] combinations. If you specify N4Horizon as a single column, r = sy = su is used.

• 'auto' — The software uses an Akaike Information Criterion (AIC) for the selection of sy and su.

'Focus' — Error to be minimized in the loss function during estimation, specified as the comma-separated pair consisting of 'Focus' and one of the following values:

• 'prediction' — The one-step-ahead prediction error between measured and predicted outputs is minimized during estimation. As a result, the estimation focuses on producing a good predictor model.
• 'simulation' — The simulation error between measured and simulated outputs is minimized during estimation. As a result, the estimation focuses on making a good fit for simulation of the model response with the current inputs.

The Focus option can be interpreted as a weighting filter in the loss function. For more information, see Loss Function and Model Quality Metrics.

'WeightingFilter' — Weighting prefilter applied to the loss function to be minimized during estimation. To understand the effect of WeightingFilter on the loss function, see Loss Function and Model Quality Metrics. Specify WeightingFilter as one of the following values:

• [] — No weighting prefilter is used.
• Passbands — Specify a row vector or matrix containing frequency values that define desired passbands. You select a frequency band where the fit between estimated model and estimation data is optimized. For example, [wl,wh], where wl and wh represent the lower and upper limits of a passband.
For a matrix with several rows defining frequency passbands, [w1l,w1h;w2l,w2h;w3l,w3h;...], the estimation algorithm uses the union of the frequency ranges to define the estimation passband. Passbands are expressed in rad/TimeUnit for time-domain data and in FrequencyUnit for frequency-domain data, where TimeUnit and FrequencyUnit are the time and frequency units of the estimation data.

• SISO filter — Specify a single-input-single-output (SISO) linear filter in one of the following ways:
  • A SISO LTI model
  • {A,B,C,D} format, which specifies the state-space matrices of a filter with the same sample time as the estimation data.
  • {numerator,denominator} format, which specifies the numerator and denominator of the filter as a transfer function with the same sample time as the estimation data.

This option calculates the weighting function as a product of the filter and the input spectrum to estimate the transfer function.

• Weighting vector — Applicable for frequency-domain data only. Specify a column vector of weights. This vector must have the same length as the frequency vector of the data set, Data.Frequency. Each input and output response in the data is multiplied by the corresponding weight at that frequency.

'EnforceStability' — Control whether to enforce stability of the estimated model, specified as the comma-separated pair consisting of 'EnforceStability' and either true or false.

Data Types: logical

'EstCovar' — Controls whether parameter covariance data is generated, specified as true or false. If EstCovar is true, then use getcov to fetch the covariance matrix from the estimated model.

'Display' — Whether to display the estimation progress, specified as one of the following values:

• 'on' — Information on model structure and estimation results is displayed in a progress-viewer window.
• 'off' — No progress or results information is displayed.
'InputOffset' — Removal of offset from time-domain input data during estimation, specified as the comma-separated pair consisting of 'InputOffset' and one of the following:

• A column vector of length Nu, where Nu is the number of inputs.
• [] — Indicates no offset.
• Nu-by-Ne matrix — For multi-experiment data, specify InputOffset as an Nu-by-Ne matrix. Nu is the number of inputs, and Ne is the number of experiments.

Each entry specified by InputOffset is subtracted from the corresponding input data.

'OutputOffset' — Removal of offset from time-domain output data during estimation, specified as the comma-separated pair consisting of 'OutputOffset' and one of the following:

• A column vector of length Ny, where Ny is the number of outputs.
• [] — Indicates no offset.
• Ny-by-Ne matrix — For multi-experiment data, specify OutputOffset as a Ny-by-Ne matrix. Ny is the number of outputs, and Ne is the number of experiments.

Each entry specified by OutputOffset is subtracted from the corresponding output data.

'OutputWeight' — Weighting of prediction errors in multi-output estimations, specified as one of the following values:

• 'noise' — Minimize det(E'*E/N), where E represents the prediction error and N is the number of data samples. This choice is optimal in a statistical sense and leads to the maximum likelihood estimates in case no data is available about the variance of the noise. This option uses the inverse of the estimated noise variance as the weighting function.
• Positive semidefinite symmetric matrix (W) — Minimize the trace of the weighted prediction error matrix, trace(E'*E*W/N), where:
  • E is the matrix of prediction errors, with one column for each output. W is the positive semidefinite symmetric matrix of size equal to the number of outputs. Use W to specify the relative importance of outputs in multiple-output models, or the reliability of the corresponding data.
  • N is the number of data samples.
• [] — The software chooses between 'noise' and the identity matrix for W.

This option is relevant only for multi-output models.

'Advanced' — Additional advanced options, specified as a structure with the field MaxSize. MaxSize specifies the maximum number of elements in a segment when input-output data is split into segments. MaxSize must be a positive integer.

Default: 250000

## Output Arguments

opt — Option set for n4sid, returned as an n4sidOptions option set.

## Examples

Create the default option set for n4sid:

opt = n4sidOptions;

Create an options set for n4sid using the 'zero' option to initialize the state, and set Display to 'on':

opt = n4sidOptions('InitialState','zero','Display','on');

Alternatively, use dot notation to set the values of opt:

opt = n4sidOptions;
opt.InitialState = 'zero';
opt.Display = 'on';

## References

[1] Larimore, W.E. "Canonical variate analysis in identification, filtering and adaptive control." Proceedings of the 29th IEEE Conference on Decision and Control, pp. 596–604, 1990.

[2] Verhaegen, M. "Identification of the deterministic part of MIMO state space models." Automatica, Vol. 30, 1994, pp. 61–74.

[3] Ljung, L. System Identification: Theory for the User. Upper Saddle River, NJ: Prentice-Hall PTR, 1999.

[4] Jansson, M. "Subspace identification and ARX modeling." 13th IFAC Symposium on System Identification, Rotterdam, The Netherlands, 2003.
https://gamedev.stackexchange.com/questions/157513/setup-golf-ball-physics
# Setup golf ball physics

I am developing a simple golf game as shown in the image below. I am facing the following issues:

1. Even if I apply a small amount of force, the ball keeps moving along the grass; the grass friction is not stopping it.
2. Sometimes the ball's speed increases after colliding with the walls, when it should decrease. The walls have box colliders.
3. Sometimes the ball reverses its direction after colliding with the walls.

Code:

Physics properties of the ball:

ball.physicsBody.affectedByGravity = true;
ball.physicsBody.mass = 0.0450;
ball.physicsBody.restitution = 0.8;
ball.physicsBody.friction = 0.3;
ball.physicsBody.allowsResting = true;

Physics properties of the grass:

golf.physicsBody.friction = 0.8;

Physics properties of the walls:

leftWall.physicsBody.friction = 0;
leftWall.physicsBody.restitution = 0.8;

I have set the physics world gravity value to -9.8. I am looking for suggestions to fix the issues listed above. Thank you.

• Isn't there a separate friction value for rolling? – Bálint Apr 13 '18 at 6:58
• Yes. But should I apply that value to the ball or to the grass surface? – Nimesh Chandramaniya Apr 13 '18 at 8:02
• Apply it to the ball. – Bálint Apr 13 '18 at 8:18
• Even if I increase the rolling friction to 0.8, the ball continues to move along the grass surface. – Nimesh Chandramaniya Apr 13 '18 at 8:51
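One likely cause of issue 1, assuming this is SpriteKit (the property names suggest SKPhysicsBody): friction is a contact property that only resists sliding, so a cleanly rolling ball loses almost no speed to it. What bleeds speed off a rolling body is damping — both linearDamping and angularDamping default to 0.1 and act like rolling resistance. A sketch with illustrative, untuned values:

```
// Damping, not contact friction, slows a rolling body in SpriteKit.
// 0.6 is an illustrative starting point to tune, not a recommendation.
ball.physicsBody.linearDamping = 0.6;
ball.physicsBody.angularDamping = 0.6;
```

On issue 2: with restitution 0.8 on both ball and walls, a bounce should never return more speed than came in, so a speed increase after contact can indicate the ball penetrated the wall collider before the collision was resolved.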
http://eprints.iisc.ernet.in/15964/
Hot deformation mechanisms in metastable beta titanium alloy Ti-10V-2Fe-3Al

Balasubrahmanyam, VV and Prasad, YVRK (2001) Hot deformation mechanisms in metastable beta titanium alloy Ti-10V-2Fe-3Al. In: Materials Science and Technology, 17 (10). pp. 1222-1228.

Abstract

The mechanisms of hot deformation in the $\beta$ titanium alloy Ti-10V-2Fe-3Al have been characterised in the temperature range $650-850^{\circ}C$ and strain rate range $0.001-100\ s^{-1}$ using constant true strain rate isothermal compression tests. The $\beta$ transus for this alloy is $\sim 790^{\circ}C$, below which the alloy has a fine grained duplex $\alpha + \beta$ structure. At temperatures lower than the $\beta$ transus and lower strain rates, the alloy exhibits steady state flow behaviour, while at higher strain rates either continuous flow softening or oscillations are observed, at lower or higher temperatures respectively. The processing maps reveal three different domains. First, in the temperature range $650-750^{\circ}C$ and at strain rates lower than $0.01\ s^{-1}$, the material exhibits fine grained superplasticity marked by abnormal elongation, with a peak at $\sim 700^{\circ}C$. Under conditions within this domain, the stress-strain curves are of the steady state type. The apparent activation energy estimated in the domain of fine grained superplasticity is $\sim 225\ kJ\ mol^{-1}$, which suggests that dynamic recovery in the $\beta$ phase is the mechanism by which the stress concentration at the triple junctions is accommodated. Second, at temperatures higher than $800^{\circ}C$ and strain rates lower than $\sim 0.1\ s^{-1}$, the alloy exhibits large grained superplasticity, with the highest elongation occurring at $850^{\circ}C$ and $0.001\ s^{-1}$; the value of this is about one-half of that recorded at $700^{\circ}C$. The microstructure of specimens deformed under conditions in this domain shows stable subgrain structures within large $\beta$ grains. Third, at strain rates higher than $10\ s^{-1}$ and temperatures lower than $700^{\circ}C$, cracking occurs in the regions of adiabatic shear bands. Also, at strain rates above $3\ s^{-1}$ and temperatures above $700^{\circ}C$, the material exhibits flow localisation.

Item Type: Journal Article. Copyright of this article belongs to Maney Publishing. Division of Mechanical Sciences > Materials Engineering (formerly Metallurgy). http://eprints.iisc.ernet.in/id/eprint/15964
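Processing maps of the kind referenced in this abstract are typically constructed (in Prasad's dynamic materials model) from the strain rate sensitivity $m = \partial \ln\sigma / \partial \ln\dot\varepsilon$, with power-dissipation efficiency $\eta = 2m/(m+1)$ contoured over temperature and strain rate. A minimal sketch of that calculation on synthetic power-law data (the constants below are made up for illustration, not taken from the paper):

```python
import math

def strain_rate_sensitivity(sigma_1, sigma_2, rate_1, rate_2):
    """Finite-difference estimate of m = d(ln sigma) / d(ln strain_rate)."""
    return (math.log(sigma_2) - math.log(sigma_1)) / (math.log(rate_2) - math.log(rate_1))

def dissipation_efficiency(m):
    """Power-dissipation efficiency eta = 2m / (m + 1) used to draw processing maps."""
    return 2.0 * m / (m + 1.0)

# Synthetic power-law flow stress sigma = K * rate**m with m = 0.3
K, m_true = 120.0, 0.3
sigma = lambda rate: K * rate ** m_true
m_est = strain_rate_sensitivity(sigma(0.001), sigma(0.01), 0.001, 0.01)
print(m_est, dissipation_efficiency(m_est))  # recovers m = 0.3, eta ~ 0.46
```

In practice m is estimated from spline fits of log(stress) versus log(strain rate) at each temperature, and the domains described in the abstract correspond to regions of high efficiency.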
https://www.toppr.com/guides/physics/electrostatics/dielectric-properties/
# Dielectric Properties

When a dielectric material is subjected to an electric field, the positive charges in the material are displaced in the direction of the applied field, while the negative charges shift in the opposite direction. This leads to dielectric polarization. Electric charges do not flow through the material; instead, polarization reduces the field inside the dielectric. Learn the dielectric properties here.

## Properties of Dielectric

The term "dielectric" was first introduced by William Whewell. The electrical conductivity of a perfect dielectric material is zero. A dielectric stores and dissipates electrical energy like an ideal capacitor. The main properties are electric susceptibility, dielectric polarization, dielectric dispersion, dielectric relaxation, tunability, and more.

• Electric Susceptibility

A dielectric material can easily be polarized when subjected to an electric field; the ease of polarization is measured by the electric susceptibility, which also determines the electric permittivity of the material.

• Dielectric Polarization

An electric dipole moment is a measure of the separation of negative and positive charge in a system. The relationship between the dipole moment M and the electric field E gives rise to the properties of the dielectric. When the applied electric field is removed, an atom returns to its original state, and this return happens in an exponential decay manner. The time taken by the atom to reach its original state is the relaxation time.

• Total Polarization

The polarization of a dielectric is decided by the formation of dipole moments and their orientation relative to the electric field. Depending on the elementary dipole type, the polarization can be electronic or ionic. Electronic polarization $$P_e$$ occurs when the dipole moments are formed from neutral particles. Ionic polarization $$P_i$$ and electronic polarization are both independent of temperature. Permanent dipole moments arise in molecules with an asymmetrical distribution of charge between different atoms; in such cases, orientational polarization $$P_o$$ is observed. Free charge in the dielectric material may lead to space charge polarization $$P_s$$. Thus, the total polarization of the dielectric material is:

$$P_{Total} = P_i + P_e + P_o + P_s$$

### Dielectric Dispersion

The dielectric polarization process is expressed as $$P(t) = P[1 - \exp(-t/t_r)]$$, where P is the maximum polarization attained by the dielectric material and $$t_r$$ is the relaxation time for the particular polarization process. The relaxation time varies for the different polarization processes: electronic polarization is the fastest, followed by ionic polarization; orientational polarization is slower than ionic polarization, and space charge polarization is very slow.

### Dielectric Breakdown

When a sufficiently high electric field is applied, the insulator starts conducting and behaves as a conductor. In such cases, the dielectric material loses its dielectric properties. This phenomenon is called dielectric breakdown. It is an irreversible process and leads to the failure of dielectric materials.

## FAQs on Dielectric Properties

Question 1: Name the dielectric properties of a material.

Answer: Some of the dielectric properties of a material are:

1. Dielectric Polarization
2. Dielectric Breakdown
3. Electric Susceptibility
4. Dielectric Dispersion
5. Total Polarization
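The exponential approach to the maximum polarization can be evaluated directly; after one relaxation time the material has reached 1 - 1/e, about 63%, of its final polarization, and a slower process (larger relaxation time) lags a faster one at the same instant. A short sketch:

```python
import math

def polarization(t, p_max, t_r):
    """Dielectric polarization P(t) = P_max * (1 - exp(-t / t_r))."""
    return p_max * (1.0 - math.exp(-t / t_r))

# After one relaxation time, polarization reaches ~63% of its maximum.
print(polarization(1.0, 1.0, 1.0))  # ~0.632

# A slower process (larger t_r) is less polarized at the same time t.
print(polarization(1.0, 1.0, 10.0) < polarization(1.0, 1.0, 1.0))  # True
```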
https://www.shaalaa.com/question-bank-solutions/name-oxometal-anions-first-series-transition-metals-which-metal-exhibits-oxidation-state-equal-its-group-number-some-important-compounds-transition-elements-oxides-oxoanions-metals_9217
# Name the Oxometal Anions of the First Series of the Transition Metals in Which the Metal Exhibits the Oxidation State Equal to Its Group Number - Chemistry

Name the oxometal anions of the first series of the transition metals in which the metal exhibits an oxidation state equal to its group number.

#### Solution 1

$Cr_2O_7^{2-}$ and $CrO_4^{2-}$ (group number = oxidation state of Cr = 6). $MnO_4^{-}$ (group number = oxidation state of Mn = 7).

#### Solution 2

Vanadate, $VO_3^{-}$: the oxidation state of V is +5. Chromate, $CrO_4^{2-}$: the oxidation state of Cr is +6. Permanganate, $MnO_4^{-}$: the oxidation state of Mn is +7.

#### APPEARS IN

NCERT Class 12 Chemistry Textbook, Chapter 8, The d-block and f-block Elements, Q 6 | Page 234
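The oxidation states in both solutions follow from simple charge bookkeeping: each oxygen counts as -2, so the metal's oxidation state x satisfies (number of metal atoms) * x + (number of oxygens) * (-2) = ion charge. A quick check:

```python
def metal_oxidation_state(ion_charge, n_oxygen, n_metal=1):
    """Solve n_metal * x + n_oxygen * (-2) = ion_charge for the metal's
    oxidation state x (each oxygen is taken as -2)."""
    return (ion_charge + 2 * n_oxygen) // n_metal

print(metal_oxidation_state(-2, 7, n_metal=2))  # Cr in Cr2O7^2-  -> 6
print(metal_oxidation_state(-2, 4))             # Cr in CrO4^2-   -> 6
print(metal_oxidation_state(-1, 4))             # Mn in MnO4^-    -> 7
print(metal_oxidation_state(-1, 3))             # V  in VO3^-     -> 5
```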
https://scicomp.stackexchange.com/questions/7122/chebyshev-spectral-differentiation-via-fft
# Chebyshev spectral differentiation via FFT

I am using the Chebyshev spectral differentiation technique that is described concisely under "details" here. The idea is to take the initial data $v_0,v_1\,...,v_N$ and store it in union with itself as the vector $$V = [v_0\,v_1\,...\,v_{N-1}\,v_N\,v_{N-1}\,v_{N-2}\,...\,v_1]^\top$$ From there, the Fourier transform of this vector $V$ is taken. However, for the Fourier transform to provide a good interpolation of the data in $V$, $V$ should be smooth and periodic. Although $V$ is continuous and periodic, there is (generally) a discontinuity in its first derivative (around the entries $v_{N-1},v_N,v_{N-1}$). Why, then, is this method of differentiation still so effective?

• The way they arranged the array confused you. It is just packing the data so that U = Re(FFT(V)) can be used later. It would be more efficient to use the Discrete Cosine Transform. They just followed the recipe from Trefethen, ch. 8, since this is what they can implement in Mathematica easily. – Johntra Volta May 9 '13 at 12:16
• Thanks Johntra. I think I may have asked the question unclearly; I figured out the answer, though, and documented it below. – Doubt May 9 '13 at 13:14

It is important to recognize that the initial data $v_0,...,v_N$ is not stored on a uniform grid, but rather at the Chebyshev points $$x_j = \cos\frac{\pi j}{N},\qquad j=0,...,N.$$ Now as long as the initial data has a decent polynomial interpolation, then \begin{align} v_j = p(x_j) &= a_0 + a_1x_j + \cdots + a_Nx_j^N \\ &=a_0 + a_1\cos\frac{\pi j}{N} + \cdots + a_N\cos^N\frac{\pi j}{N} \\ &=a_0 + a_1\cos\theta_j + \cdots + a_N\cos^N\theta_j = f(\theta_j) \end{align} where $\theta_j = \pi j/N\in[0,\pi]$ is a uniform grid. Therefore, on the new uniform grid the data is an even function (hence the powers of cosine), and in particular $df/d\theta|_{\theta=0} = 0$.
Thus the function can easily be extended to $[-\pi,\pi]$, giving a smooth, even, periodic function with data at uniformly-spaced gridpoints: ripe for the Fourier transform.
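The construction described in the question is exactly the `chebfft` recipe from Trefethen's *Spectral Methods in MATLAB* (Program `chebfft.m`); a sketch of a direct translation, differentiating in the even, periodic $\theta$ variable and transforming back to $x$ (with the usual special formulas at the endpoints $x = \pm 1$, where $\sqrt{1-x^2}$ vanishes):

```python
import numpy as np

def chebfft(v):
    """Differentiate data v given at Chebyshev points x_j = cos(pi j / N)."""
    N = len(v) - 1
    if N == 0:
        return np.zeros(1)
    x = np.cos(np.pi * np.arange(N + 1) / N)
    # Even, periodic extension [v_0..v_N, v_{N-1}..v_1]: ripe for the FFT
    V = np.concatenate([v, v[N - 1:0:-1]])
    U = np.real(np.fft.fft(V))
    # Differentiate in theta: multiply mode k by ik (Nyquist mode set to 0)
    k = np.concatenate([np.arange(N), [0], np.arange(1 - N, 0)])
    W = np.real(np.fft.ifft(1j * k * U))
    w = np.zeros(N + 1)
    # Chain rule back to x at interior points: dx = -sin(theta) dtheta
    w[1:N] = -W[1:N] / np.sqrt(1.0 - x[1:N] ** 2)
    ii = np.arange(N)
    w[0] = np.sum(ii ** 2 * U[:N]) / N + 0.5 * N * U[N]
    w[N] = np.sum((-1) ** (ii + 1) * ii ** 2 * U[:N]) / N \
        + 0.5 * (-1) ** (N + 1) * N * U[N]
    return w

# Spectral differentiation is exact (up to rounding) for polynomials of
# degree <= N; check with f(x) = x^3, f'(x) = 3 x^2 on N = 8 points.
N = 8
x = np.cos(np.pi * np.arange(N + 1) / N)
print(np.max(np.abs(chebfft(x ** 3) - 3 * x ** 2)))  # close to machine precision
```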
http://mathoverflow.net/questions/8999/discrete-harmonic-function-on-a-planar-graph/83788
# Discrete harmonic function on a planar graph

Given a graph $G$, we will call a function $f:V(G)\to \mathbb{R}$ discrete harmonic if for all $v\in V(G)$, the value of $f(v)$ is equal to the average of the values of $f$ at all the neighbors of $v$. This is equivalent to saying the discrete Laplacian vanishes. Discrete harmonic functions are sometimes used to approximate harmonic functions, and most of the time they have similar properties. For the plane we have Liouville's theorem, which says that a bounded harmonic function has to be constant. A discrete harmonic function on $\mathbb{Z}^2$ satisfies the same property (it is either constant or unbounded). Now my question is: If we take a planar graph $G$ such that every point in the plane is contained in an edge of $G$ or is inside a face of $G$ that has fewer than $n\in \mathbb{N}$ edges, does a discrete harmonic function necessarily have to be either constant or unbounded? I know the answer is positive if $G$ is $\mathbb{Z}^2$, the hexagonal lattice, or the triangular lattice. I suspect the answer to my question is positive, but I have no idea how to prove it. Edited the condition on the graph so that it must "contain enough cycles" (so trees, for example, are ruled out). - Maybe I am misunderstanding the question, but you can write down (many) nonconstant bounded harmonic functions on a trivalent regular tree. – moonface Dec 15 '09 at 16:37 I think the poster wants to require that there is a constant n such that any face of R^2 \setminus G has at most n edges. The complement of the regular trivalent tree is a single face with infinitely many edges. – David Speyer Dec 15 '09 at 16:44 @moonface: You're right, that's true for most trees. I meant a different restriction on the graph, so I edited the above. @David: Yep :), I should have been more precise. – Gjergji Zaimi Dec 15 '09 at 16:53 I first describe the graph $G$. Let $N_i$ be a sequence of positive integers; we will choose $N_i$ later.
Let $T$ be an infinite tree which has one root vertex, the root has $N_1$ children; the children of that root have $N_2$ children, those children have $N_3$ children and so forth. Let $V_0$ be the set containing the root, $V_1$ be the set of children of the root, $V_2$ the children of the elements of $V_1$, and so forth. To form our graph, take $T$ and add a sequence of cycles, one going through the vertices of $V_1$, one through $V_2$ and so forth. (In the way which is compatible with the obvious planar embedding of $T$.) Every face of $G$ is either a triangle or a quadrilateral. We will build a harmonic function $f$ on $G$ as follows: On the root, $f$ will be $0$. On $V_1$, we choose $f$ to be nonzero, but average to $0$. On $V_i$, for $i \geq 2$, we compute $f$ inductively by the condition that, for every $u \in V_{i-1}$, the function $f$ is constant on the children of $u$. Of course, we may or may not get a bounded function depending on how we choose the $N_i$. I will now show that we can choose the $N_i$ so that $f$ is bounded. Or, rather, I will claim it and leave the details as an exercise for you. Let $a_i$ be a decreasing sequence of positive reals, approaching zero. Take $N_i = 6/(a_{i+1} - a_i)$. Exercise: If $f$ on $V_1$ is taken between $-1+a_1$ and $1-a_1$, then $f$ on $V_i$ will lie between $-1+a_i$ and $1-a_i$. In particular, $f$ will be bounded between $-1$ and $1$ everywhere. - You can probably also take any periodic tiling of the hyperbolic plane. And probably the right condition on $G$ is that it be amenable. – Greg Kuperberg Dec 15 '09 at 17:17 Is there a definition of amenable for general graphs? I only knew it for Cayley graphs. – David Speyer Dec 15 '09 at 17:39 Yes: No infinite Ponzi scheme. It makes sense for general metric spaces too, although that's not really different. Also, in this case amenability could be stronger than strictly necessary. 
The discrete Laplacian is a model of random walks, and you could possibly have a non-amenable structure that is only noticed by non-random walks. – Greg Kuperberg Dec 15 '09 at 18:19 It quickly leaps out that all of these counterexamples are infinite Ponzi schemes, so amenability is a natural condition. It is considered for instance here arxiv.org/abs/0706.2844 – Greg Kuperberg Dec 15 '09 at 20:29 I also stumbled upon jstor.org/stable/119840?seq=1 where they also state that nonamenability is "sort of" necessary in the analogous problem of existence of non-constant harmonic functions. – Gjergji Zaimi Dec 15 '09 at 21:56 Benjamini and Schramm proved that an infinite, bounded degree, planar graph is non-Liouville if and only if it is transient. - For general complete Riemannian manifolds other than the plane, one needs some curvature conditions to guarantee Liouville's theorem. Similarly, for planar graphs, one needs some curvature constraints too. See Geometric analysis aspects of infinite semiplanar graphs with nonnegative curvature, Hua, Jost, Liu, http://arxiv.org/abs/1107.2826, where a Liouville theorem and recurrence of random walks are proved on semiplanar graphs (graphs that can be embedded in a 2-manifold, including planar graphs) with nonnegative Higuchi curvature, an analogue of the sectional curvature (or Ricci curvature) of a 2-manifold. - For instance, any regular $H_{p,q}$ tessellation of the hyperbolic space $\mathbb{H}_2$ with $\frac{1}{p}+\frac{1}{q}<\frac{1}{2}$ does the job. - Another way to ensure that bounded harmonic functions on a graph $G$ are constant is to consider a random walk $X_n$: since $M_n = V(X_n)$ is a bounded martingale, it converges almost surely. Hence, if one can prove that random walks on $G$ are recurrent, this shows that $V$ has to be constant. Of course, it can be difficult to show that random walks on $G$ are recurrent. -
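The mean-value definition at the top of the thread is easy to probe numerically. A hedged sketch using Jacobi relaxation on a finite grid with prescribed boundary values: the linear function f(i, j) = j is discrete harmonic, so relaxation with those boundary values should reproduce it in the interior, and every interior vertex should equal the average of its four neighbours (this illustrates the definition only; it says nothing about the infinite-graph question itself).

```python
def jacobi_harmonic(n, boundary, iters=2000):
    """Relax toward the discrete harmonic function on an n x n grid whose
    boundary values are fixed by boundary(i, j); interior starts at zero."""
    interior = lambda i, j: 0 < i < n - 1 and 0 < j < n - 1
    f = [[0.0 if interior(i, j) else float(boundary(i, j)) for j in range(n)]
         for i in range(n)]
    for _ in range(iters):
        g = [row[:] for row in f]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                g[i][j] = (f[i - 1][j] + f[i + 1][j]
                           + f[i][j - 1] + f[i][j + 1]) / 4.0
        f = g
    return f

# Relaxation recovers the discrete harmonic extension of the boundary data.
f = jacobi_harmonic(5, lambda i, j: j)
print(round(f[2][2], 6))  # 2.0
```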
https://studyadda.com/question-bank/topic-test-chemical-equilibrium-6-5-21_q39/5959562/522125
The reaction between $N_{2}$ and $H_{2}$ to form ammonia has $K_{c}=6\times 10^{-2}$ at a temperature of 500°C. The numerical value of $K_{p}$ for this reaction is

A) $1.5\times 10^{-5}$ B) $1.5\times 10^{5}$ C) $1.5\times 10^{-6}$ D) $1.5\times 10^{6}$

[a] $K_{p}=K_{c}(RT)^{\Delta n}$; $\Delta n=2-4=-2$. With $T = 500 + 273 = 773\ K$ and $R = 0.0821\ L\ atm\ mol^{-1}K^{-1}$: $K_{p}=6\times 10^{-2}\times (0.0821\times 773)^{-2}=\frac{6\times 10^{-2}}{(0.0821\times 773)^{2}}=1.5\times 10^{-5}$.
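The conversion can be checked numerically (R in L atm mol⁻¹ K⁻¹, T = 500 °C = 773 K, and Δn = 2 − 4 = −2 for N₂ + 3 H₂ → 2 NH₃):

```python
def kp_from_kc(kc, delta_n, temp_k, r=0.0821):
    """K_p = K_c * (R T)^(delta_n), with R in L atm mol^-1 K^-1."""
    return kc * (r * temp_k) ** delta_n

kp = kp_from_kc(6e-2, -2, 773)
print(f"{kp:.2e}")  # 1.49e-05, i.e. option A
```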
https://nodus.ligo.caltech.edu:8081/40m/13292
Message ID: 13292     Entry time: Tue Sep 5 09:47:34 2017 Author: Kira Type: Summary Category: PEM Subject: heater circuit calculations

I decided to calculate the fluctuation in power that we will have in the heater circuit. The resistors we ordered have a temperature coefficient of 50 ppm/°C, and it would be useful to know what kind of fluctuation to expect. For this, I assumed that the heater itself is an ideal resistor that has no temperature variation. The circuit diagram is found in Kevin's elog here.

At saturation, the total resistance (we will have a $1\ \Omega$ resistor instead of $6\ \Omega$ for our new design) will be $R_{tot}=R+R_{h}=1\ \Omega +24\ \Omega =25\ \Omega$. Therefore, with a 24 V input, the saturation current should be $I=\frac{V_{in}}{R_{tot}}=\frac{24\ V}{25\ \Omega}=0.96\ A$. The power in the heater should then be (in the ideal case) $P=I^2R_{h}=22.1184\ W$.

Now, in the case where the resistor is not ideal, let's assume the temperature of the resistor changes by 10°C (which is about how much we would like to heat the whole thing). The resistor will then have a new value of $R_{new}=R(1+50\times 10^{-6}/^{\circ}C\times 10^{\circ}C)=1.0005\ \Omega$. The new current will be $I_{new}=\frac{V_{in}}{R_{new}+R_{h}}=0.95998\ A$ and the new power will be $P_{new}=I_{new}^{2}R_{h}=22.1175\ W$. So the difference in power going through the heater is about 0.00088 W.

We can use this power difference to calculate how much the temperature of the metal can we wish to heat up will change: $\Delta T=\Delta P\times (1/\kappa )/x$, where $\kappa$ is the thermal conductivity and $x$ is the thickness of the material. For our seismometer, I calculated it to be 0.012 K.
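The arithmetic in the entry can be reproduced in a few lines (same numbers as above, with the heater treated as ideal):

```python
V_IN = 24.0      # supply voltage [V]
R_HEATER = 24.0  # heater resistance [ohm], treated as ideal
R_SENSE = 1.0    # series resistor [ohm]
TEMPCO = 50e-6   # 50 ppm/C fractional resistance change per degree
DELTA_T = 10.0   # assumed temperature excursion [C]

def heater_power(r_series):
    """Power dissipated in the heater for a given series resistance."""
    i = V_IN / (r_series + R_HEATER)
    return i ** 2 * R_HEATER

p_nominal = heater_power(R_SENSE)
p_hot = heater_power(R_SENSE * (1.0 + TEMPCO * DELTA_T))
print(round(p_nominal, 4))          # 22.1184
print(round(p_nominal - p_hot, 6))  # 0.000885
```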
https://www.physicsforums.com/threads/distance-light-travels-in-a-relatively-moving-frame.543215/
# Distance light travels in a relatively moving frame?

I was wondering: say you have a particle moving at 0.5c in the +x direction and a lightbulb at relative rest to the particle. The particle passes the lightbulb at $t_{0}$. The lightbulb then flashes, the wave reaches the particle at a particular point, and the speed of light is then measured (by the particle) to be c. Does this mean that the light wave will travel a distance of ct from that point in the reference frame of the particle, with t being any point in time at which the particle wishes to measure the distance of the light wave from itself?

Doc Al (Mentor): If the particle and lightbulb are at relative rest, how can they pass each other?

Oh, am I using the wrong terminology? I meant to say that the particle is moving at 0.5c relative to the lightbulb.

Doc Al (Mentor): OK. And yes, the light wave will then travel a distance of ct from that point in the reference frame of the particle.

Thanks Doc Al, that's helped clear up the confusion.
https://livrepository.liverpool.ac.uk/3113235/
# Search for resonances decaying into a weak vector boson and a Higgs boson in the fully hadronic final state produced in proton-proton collisions at $\sqrt{s}=13$ TeV with the ATLAS detector

Aad, G, Abbott, B, Abbott, DC, Abud, A, Abeling, K, Abhayasinghe, DK, Abidi, SH, AbouZeid, OS, Abraham, NL, Abramowicz, H et al (show 2946 more authors) (2020) Search for resonances decaying into a weak vector boson and a Higgs boson in the fully hadronic final state produced in proton-proton collisions at $\sqrt{s}=13$ TeV with the ATLAS detector. PHYSICAL REVIEW D, 102 (11).

Item Type: Article
DOI: 10.1103/PhysRevD.102.112008
https://journals.aps.org/prd/abstract/10.1103/Phys...
https://livrepository.liverpool.ac.uk/id/eprint/3113235
https://www.zbmath.org/?q=an%3A0531.05036
# On the maximum cardinality of a consistent set of arcs in a random tournament. (English) Zbl 0531.05036

Let $$f(T_n)$$ denote the maximum number of arcs possible in an acyclic subgraph of a random tournament $$T_n$$. The author shows that $$f(T_n)<n(n-1)/4+1.73n^{3/2}$$ with probability tending to one as $$n$$ tends to infinity, thereby sharpening a result of J. Spencer [Period. Math. Hung. 11, 131-144 (1980; Zbl 0349.05011)].

Reviewer: J. W. Moon

##### MSC:
05C20 Directed graphs (digraphs), tournaments
05C80 Random graphs (graph-theoretic aspects)
60C05 Combinatorial probability

##### Keywords:
acyclic subgraph; random tournament

##### References:
[1] Chernoff, H, A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations, Ann. Math. Statist., 23, 493-509, (1963)
[2] Erdös, P; Moon, J.W, On sets of consistent arcs in a tournament, Canad. Math. Bull., 8, 269-271, (1965) · Zbl 0137.43301
[3] Erdös, P; Spencer, J, ()
[4] Spencer, J, Optimal ranking of tournaments, Networks, 1, 135-138, (1971) · Zbl 0236.05110
[5] Spencer, J, Optimally ranking unrankable tournaments, Period. Math. Hungar., 11, 2, 131-144, (1980) · Zbl 0349.05011
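For small n, the quantity f(T_n) can be computed by brute force: orient each pair at random, then maximize the number of "forward" arcs over all vertex orderings (a consistent set of arcs is exactly the forward-arc set of some ordering). Note that f(T_n) >= n(n-1)/4 for every tournament, since an ordering and its reverse together account for all n(n-1)/2 arcs. A sketch (exhaustive search, so only feasible for small n):

```python
import itertools
import random

def max_consistent_arcs(n, seed=0):
    """f(T_n) for one random tournament on n vertices, by exhaustive search."""
    rng = random.Random(seed)
    # arc[(i, j)] is True when the arc points i -> j (for i < j)
    arc = {(i, j): rng.random() < 0.5
           for i in range(n) for j in range(i + 1, n)}

    def forward_arcs(order):
        pos = {v: k for k, v in enumerate(order)}
        total = 0
        for (i, j), i_to_j in arc.items():
            u, v = (i, j) if i_to_j else (j, i)
            total += pos[u] < pos[v]
        return total

    return max(forward_arcs(p) for p in itertools.permutations(range(n)))

# For any tournament, an ordering and its reverse split the arcs between
# them, so the maximum is at least half of n(n-1)/2.
best = max_consistent_arcs(6)
print(best)
```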
https://mathematica.stackexchange.com/questions/141401/writing-an-iteration-using-two-functions
# Writing an iteration using two functions [closed]

I have been given two functions with an initial condition. One function becomes the variable of the other. I need to run the program for 10 iterations.

    d = 100  (* initial condition *)
    x = (300*d)/(d + 100)

Next,

    d1 = 200 - x

`d1` should become the variable of the function `x` instead of `d`. Again,

    x2 = (300*d1)/(d1 + 100)
    d2 = 200 - x2
    x3 = (300*d2)/(d2 + 100)
    d3 = 200 - x3

and repeat the process until 10 iterations have been made. How can I write a program to carry out this process?

*Closed as off-topic by george2079, happy fish, gwr, Wjx, Bob Hanlon on Apr 2 '17.*

    f[y_, x_] := {200 - x, 300 y/(y + 100)}
    ic = {100, 150}
    nf[n_] := NestList[f @@ # &, ic, n]
    TableForm[nf[10], TableHeadings -> {Range[0, 10], {"d", "x"}}]

• Thank you very much. It helps a lot. I really needed to display it as a table too. Thank you once again. – prasanthi Mar 31 '17 at 21:44
• My goodness: Isn't that my solution (below)? – David G. Stork Mar 31 '17 at 22:14
• @DavidG.Stork I am sorry I did not see your solution. I agree that they use the same strategy. However, your use of Null rather than the value of the initial x renders your output difficult to interpret. This, I agree, is a trivial difference, but I can only say that it was unintentional. I suggest that you raise it with OP. I will completely accept a change in vote, etc. – ubpdqn Apr 1 '17 at 0:15
Let's do two simple pre-computations.

    With[{d = 200 - x}, (300 d)/(d + 100)]

    (300 (200 - x))/(300 - x)

and

    With[{d = 100}, (300 d)/(d + 100)]

    150

Then the iteration can be written as

    NestList[300 (200 - #)/(300 - #) &, 150, 10]

    {150, 100, 150, 100, 150, 100, 150, 100, 150, 100, 150}

Another approach:

    f1 = Function[d, 300*d/(d + 100)]  (* your first transformation *)
    f2 = Function[x, 200 - x]          (* your second transformation *)
    f = f2 @* f1                       (* your composed transformation *)

    (* using NestList *)
    NestList[f, 100, 10]

    (* using RecurrenceTable *)
    RecurrenceTable[{d[n + 1] == f[d[n]], d[0] == 100}, d, {n, 0, 10}]

One could compose the two component functions, but another trick is to compute {f1[x], f2[f1[x]]}, then take the second component and feed it back as the new x:

    Flatten@NestList[
      {temp = 300 #[[2]]/(#[[2]] + 100), 200 - temp} &, {Null, 100}, 10]

    (* {Null, 100, 150, 50, 100, 100, 150, 50, 100, 100, 150, 50, 100, 100, 150, 50, 100, 100, 150, 50, 100, 100} *)

I note that I get a different sequence than @corey979.

• Yes, the sequence is a little bit different. But thanks a lot for the help. – prasanthi Mar 31 '17 at 21:52
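For readers without Mathematica, the same two-step recurrence is easy to cross-check in plain Python (a sketch; the function names are mine). Tracking the d-values reproduces the period-2 cycle visible in the answers above; the x-values alternate 150, 100 accordingly.

```python
def f1(d):
    # x = 300 d / (d + 100)
    return 300 * d / (d + 100)

def f2(x):
    # next d = 200 - x
    return 200 - x

d = 100  # initial condition
seq = [d]
for _ in range(10):
    d = f2(f1(d))  # compose the two transformations
    seq.append(d)

print(seq)  # the d-values settle into the 2-cycle 100, 50, 100, 50, ...
```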
https://economics.stackexchange.com/tags/variance/hot?filter=year
# Tag Info The following might help, although whether it's simpler than calculating the variances will depend on the particular functions. Suppose the two distributions are of random variables $x_1$ and $x_2$. First find the respective means $\mu_1$ and $\mu_2$. Then replace $x_1$ by $y_1=x_1-\mu_1$ and $x_2$ by $y_2=x_2-\mu_2$, with the effect of shifting the ...
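The centering step described above can be checked numerically (a sketch; the data are made up): subtracting the mean shifts a distribution to mean zero but leaves its variance unchanged.

```python
from statistics import mean, pvariance

x1 = [1.0, 4.0, 4.0, 9.0]        # hypothetical sample
mu1 = mean(x1)                   # 4.5
y1 = [x - mu1 for x in x1]       # centered copy: y1 = x1 - mu1

assert mean(y1) == 0             # the distribution is shifted to mean zero
assert pvariance(y1) == pvariance(x1)  # the variance is unchanged
```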
https://datascience.stackexchange.com/questions/46831/how-will-occams-razor-principle-work-in-machine-learning
# How will Occam's Razor principle work in Machine learning

The following question, displayed in the image, was asked during one of the exams recently. I am not sure if I have correctly understood the Occam's Razor principle or not. According to the distributions and decision boundaries given in the question, and following Occam's Razor, the decision boundary B should be the answer in both cases. Because, as per Occam's Razor, one should choose the simpler classifier which does a decent job rather than the complex one.

Can someone please confirm whether my understanding is correct and the answer chosen is appropriate? Please help, as I am just a beginner in machine learning.

• 3.328 "If a sign is not necessary then it is meaningless. That is the meaning of Occam's Razor." From the Tractatus Logico-Philosophicus by Wittgenstein – Jorge Barrios Mar 7 at 11:25

Occam's razor principle: Given two hypotheses (here, decision boundaries) that have the same empirical risk (here, training error), a short explanation (here, a boundary with fewer parameters) tends to be more valid than a long explanation. In your example, both A and B have zero training error, thus B (the shorter explanation) is preferred.

What if the training error is not the same? If boundary A had a smaller training error than B, selecting one becomes tricky. We need to quantify "explanation size" the same way as "empirical risk" and combine the two in one scoring function, then proceed to compare A and B. An example would be the Akaike Information Criterion (AIC), which combines empirical risk (measured with negative log-likelihood) and explanation size (measured with the number of parameters) in one score. As a side note, AIC cannot be used for all models, and there are many alternatives to AIC too.
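As an illustration of such a scoring function (the numbers below are invented for the sketch, not taken from the question): AIC = 2k − 2 ln L̂, i.e. twice the parameter count plus twice the negative log-likelihood, with lower scores preferred.

```python
def aic(neg_log_likelihood, n_params):
    # AIC = 2 * k + 2 * NLL; lower is better
    return 2 * n_params + 2 * neg_log_likelihood

# hypothetical boundaries: A fits the training data slightly better,
# but spends many more parameters doing so
aic_a = aic(neg_log_likelihood=10.0, n_params=12)  # complex boundary A
aic_b = aic(neg_log_likelihood=12.0, n_params=2)   # simple boundary B

assert aic_b < aic_a  # the simpler model wins the trade-off
```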
## Relation to validation set

In many practical cases, when a model progresses toward more complexity (a larger explanation) to reach a lower training error, AIC and the like can be replaced with a validation set (a set on which the model is not trained). We stop the progress when the validation error (the error of the model on the validation set) starts to increase. This way, we strike a balance between low training error and a short explanation.

Occam's Razor is just a synonym for the parsimony principle (KISS: keep it simple, stupid). Most algorithms work on this principle. In the above question one has to think of designing simple separating boundaries. In the first picture, the answer for D1 is B, as it defines the best line separating the two samples, whereas A is polynomial and may end up over-fitting. (If I had used an SVM, that line would have come out.) Similarly, in the second figure, the answer for D2 is B.

Occam's razor in data-fitting tasks:

1. First try a linear equation.
2. If (1) doesn't help much, choose a non-linear one with fewer terms and/or smaller degrees of variables.

## D2

B clearly wins, because it is a linear boundary which nicely separates the data. (What "nicely" means I can't currently define; you have to develop this feeling with experience.) Boundary A is highly non-linear and looks like a jittered sine wave.

## D1

However, I am not sure about this one. Boundary A is like a circle and B is strictly linear. IMHO, for me the boundary line is neither a circle segment nor a line segment; it's a parabola-like curve. So I opt for C :-)

• I'm still unsure of why you want an in-between line for D1. Occam's Razor says to use the simple solution that works. Absent more data, B is a perfectly valid division that fits the data. If we received more data that suggests more of a curve to B's data set then I could see your argument, but requesting C goes against your point (1), since it's a linear boundary that works.
– Delioth Mar 7 at 20:36
• Because there is a lot of empty space from line B towards the left circular cluster of points. This means that any new random point arriving has a very high chance of being assigned to the circular cluster on the left and a very small chance of being assigned to the cluster on the right. Thus, line B is not an optimal boundary for new random points in the plane. And you can't ignore the randomness of the data, because usually there is always a random displacement of points. – Agnius Vasiliauskas Mar 8 at 9:39
https://math.stackexchange.com/questions/1147152/showing-linear-dependence/1147162
# Showing Linear Dependence

My task is to show that the set of vectors $\bf x_1, x_2, x_3, x_4$, where $\bf x_1=[1,0,0]$, $\bf x_2=[t,1,1]$, $\bf x_3=[1,t,t^2]$ and $\bf x_4=[t+2,t+1,t^2+1]$, is linearly dependent. (Note: $x_i$ can also be written in matrix format.)

To show that they are linearly dependent, I form the equation

$c_1\mathbf{x}_1+c_2\mathbf{x}_2+c_3\mathbf{x}_3+c_4\mathbf{x}_4=\mathbf{0}$

and will show that there is a nonzero solution to it. That is, I will show that aside from $c_1=c_2=c_3=c_4=0$ there is some other solution. However, solving puts me in a system of 3 equations in 4 unknowns, which seems new to me. The equations are:

$c_1+c_2t+c_3+c_4(t+2)=0$
$c_2+c_3t+c_4(t+1)=0$
$c_2+c_3t^2+c_4(t^2+1)=0$

Can someone help me find a nontrivial solution to the given system of equations? Or will you help me show that the 4 vectors above are linearly dependent? Thank you so much for your help.

• Here $x_1+x_2+x_3=x_4$, isn't it? So they are linearly dependent. – Extremal Feb 13 '15 at 22:35
• Yes, I see it, but I don't know whether expressing a vector in terms of the others implies linear dependence. Is that true? Thanks – Jr Antalan Feb 13 '15 at 22:41
• Of course! Because that implies you can find a combination of $c_i$ where $c_i\neq 0$ for all $i$. See the other answers too. – Extremal Feb 13 '15 at 22:43
• We have shown you that substituting $c_1=1, c_2=1, c_3=1, c_4=-1$, the equality will be satisfied with all the $c$'s different from zero; thus the vectors are linearly dependent. (Hint: only one constant $c$ needs to be different from zero for us to say the 4 vectors are L.D.) – Mistos Feb 13 '15 at 22:47
• It is so clear to me now, thanks @Mathi. – Jr Antalan Feb 13 '15 at 22:48

Hint: You can make an easy solution if you use the fact that if some vector in a list of vectors is a linear combination of other vectors in that same list, then the list is linearly dependent.

• If I can express a vector in terms of the others, will that mean linear dependence? Is that true?
Thanks – Jr Antalan Feb 13 '15 at 22:41
• @JrAntalan Yes. If you have a list of vectors $v_0,v_1,\ldots,v_n$, and if you can write, for instance, $$v_0=c_1v_1+\cdots+c_nv_n \quad \text{for some scalars } c_j,$$ then by adding the additive inverse of $v_0$ to both sides we get $$0=-v_0+c_1v_1+\cdots+c_nv_n,$$ which means that the list of those vectors is linearly dependent. – Workaholic Feb 13 '15 at 22:43
• Now I know, thanks Workaholic – Jr Antalan Feb 13 '15 at 22:46
• @JrAntalan You're welcome. – Workaholic Feb 13 '15 at 22:47

Note that $$\mathbf x_4=\mathbf x_1+\mathbf x_2+\mathbf x_3 \Rightarrow -\mathbf x_1-\mathbf x_2-\mathbf x_3+\mathbf x_4=\mathbf 0,$$ so $c_1=c_2=c_3=-1$ and $c_4=+1$.

Notice that $X_2$ is a linear combination of $X_4$, $X_3$ and $X_1$, where $X_2=X_4-X_3-X_1$! That would prove that the four vectors are linearly dependent!

• You're welcome! – Mistos Feb 13 '15 at 22:49

Four vectors in $\mathbb R^3$ are always linearly dependent; you don't need anything more.

• These vectors don't live in $\mathbb R^3$ though, as they have a mixture of variables and real numbers in them. This looks more like it comes from an infinite-dimensional vector space: the polynomial ring over the reals times itself times itself. – Alan Feb 14 '15 at 0:43
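The identity $x_1+x_2+x_3=x_4$ holds for every value of $t$, which can be verified mechanically by storing each component as a coefficient vector in $t$ (a sketch, not from the thread; the representation is mine):

```python
# each vector has 3 components; each component is [c0, c1, c2],
# the coefficients of c0 + c1*t + c2*t^2
x1 = [[1, 0, 0], [0, 0, 0], [0, 0, 0]]   # [1, 0, 0]
x2 = [[0, 1, 0], [1, 0, 0], [1, 0, 0]]   # [t, 1, 1]
x3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # [1, t, t^2]
x4 = [[2, 1, 0], [1, 1, 0], [1, 0, 1]]   # [t+2, t+1, t^2+1]

def vec_add(u, v):
    # add two vectors componentwise, coefficient by coefficient
    return [[a + b for a, b in zip(cu, cv)] for cu, cv in zip(u, v)]

# x1 + x2 + x3 - x4 = 0 identically in t, so the four vectors are dependent
assert vec_add(vec_add(x1, x2), x3) == x4
```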
http://www.ka-journalisme.com/pdf/algebraic-curves-an-introduction-to-algebraic-geometry
# Algebraic Curves: An Introduction to Algebraic Geometry

By William Fulton

Similar geometry and topology books

**Kähler differentials**

This book is based on a lecture course that I gave at the University of Regensburg. The purpose of those lectures was to explain the role of Kähler differential forms in ring theory, to prepare the road for their application in algebraic geometry, and to lead up to some research problems. The text discusses almost exclusively local questions and is therefore written in the language of commutative algebra.

**Non-commutative Algebraic Geometry**

This course was read in the Department of Mathematics at the University of Washington in spring and fall 1999.

Additional resources for Algebraic Curves: An Introduction to Algebraic Geometry

Sample text

Thus M₁ = ∅. Let us then define M₁ by M₁ := {p ∈ M : S_p ≠ 0}. First, we assume that M₁ is not empty. Then M₁ is a non-empty open part of M, and at each point p of M₁ we know that (∇h)_p = 0. The classical Pick-Berwald theorem then implies that M₁ is an open part of a nondegenerate ellipsoid or hyperboloid. Thus det S is a constant different from zero on M₁. The continuity of det S then implies that M₁ = M. Finally, we may assume that S = 0 on the whole of M. Thus by Proposition 2, we can suppose that M is given by the equation z = P(x,y), where P is a polynomial of degree at most k + 1, and that the canonical affine normal vector field is given by (0,0,1).

Amer. Math. Soc. 210, 75-106 (1975).
2. Bolton, J., Rigoli, M. and Woodward, L. M.: On conformal minimal immersions of S² into CPⁿ. Math. Ann. 279, 599-620 (1988).
3. Bolton, J. and Woodward, L. M.: Minimal immersions of S² and RP² into CPⁿ with few higher order singularities. To appear in Math. Proc. Camb. Phil. Soc.
4. Bolton, J. and Woodward, L. M.: On immersions of surfaces into space forms. Soochow J. of Mathematics 14, 11-31 (1988).
5. Bolton, J. and Woodward, L. M.: Minimal immersions with S¹-symmetry.
6. Bolton, J. and Woodward, L. M.: On the Simon conjecture for minimal immersions.

Then it follows from Lemma 5 that a = 0 and λ₁ = λ₂. Thus, we have (i). Therefore, we may assume that det S = 0. But then we know that there exists an eigenvector u of S with eigenvalue zero. Again there are two possibilities. a. h(u,u) = 0. In this case, we can find a vector v such that h(v,v) = 0 and h(u,v) = 1. Using the equation of Ricci, we then obtain that h(Sv,u) = h(v,Su) = 0. Hence Sv has no component in the direction of v. Thus, we have (iii). b. h(u,u) ≠ 0. Here, we may assume, by taking −ξ as normal, that h(u,u) = 1.
http://hal.in2p3.fr/in2p3-00585536
# Measurement of the differential dijet production cross section in proton-proton collisions at sqrt(s)=7 TeV

CMS, IP2I Lyon - Institut de Physique des 2 Infinis de Lyon

Abstract: A measurement of the double-differential inclusive dijet production cross section in proton-proton collisions at sqrt(s)=7 TeV is presented as a function of the dijet invariant mass and jet rapidity. The data correspond to an integrated luminosity of 36 inverse picobarns, recorded with the CMS detector at the LHC. The measurement covers the dijet mass range 0.2 TeV to 3.5 TeV and jet rapidities up to |y|=2.5. It is found to be in good agreement with next-to-leading-order QCD predictions.

Document type: Journal articles

Contributor: Sylvie Flores
Submitted on: Wednesday, April 13, 2011 - 11:17:07 AM
Last modification on: Monday, December 13, 2021 - 9:15:21 AM

### Citation

S. Chatrchyan, D. Sillou, M. Besancon, S. Choudhury, M. Dejardin, et al. Measurement of the differential dijet production cross section in proton-proton collisions at sqrt(s)=7 TeV. Physics Letters B, Elsevier, 2011, 700, pp.187-206. ⟨10.1016/j.physletb.2011.05.027⟩. ⟨in2p3-00585536⟩
https://en.wikipedia.org/wiki/Rank_correlation
# Rank correlation

In statistics, a rank correlation is any of several statistics that measure an ordinal association: the relationship between rankings of different ordinal variables, or between different rankings of the same variable, where a "ranking" is the assignment of the labels "first", "second", "third", etc. to different observations of a particular variable. A rank correlation coefficient measures the degree of similarity between two rankings, and can be used to assess the significance of the relation between them. For example, two common nonparametric methods of significance that use rank correlation are the Mann–Whitney U test and the Wilcoxon signed-rank test.

## Context

If, for example, one variable is the identity of a college basketball program and another variable is the identity of a college football program, one could test for a relationship between the poll rankings of the two types of program: do colleges with a higher-ranked basketball program tend to have a higher-ranked football program? A rank correlation coefficient can measure that relationship, and the measure of significance of the rank correlation coefficient can show whether the measured relationship is small enough to likely be a coincidence.

If there is only one variable, the identity of a college football program, but it is subject to two different poll rankings (say, one by coaches and one by sportswriters), then the similarity of the two different polls' rankings can be measured with a rank correlation coefficient.

As another example, in a contingency table with low income, medium income, and high income in the row variable and educational level (no high school, high school, university) in the column variable,[1] a rank correlation measures the relationship between income and educational level.

## Correlation coefficients

Some of the more popular rank correlation statistics include Spearman's ρ, Kendall's τ, and the rank-biserial correlation, all discussed below. An increasing rank correlation coefficient implies increasing agreement between rankings.
The coefficient is inside the interval [−1, 1] and assumes the value:

• 1 if the agreement between the two rankings is perfect; the two rankings are the same.
• 0 if the rankings are completely independent.
• −1 if the disagreement between the two rankings is perfect; one ranking is the reverse of the other.

Following Diaconis (1988), a ranking can be seen as a permutation of a set of objects. Thus we can look at observed rankings as data obtained when the sample space is (identified with) a symmetric group. We can then introduce a metric, making the symmetric group into a metric space. Different metrics will correspond to different rank correlations.

## General correlation coefficient

Kendall (1944) showed that his $\tau$ (tau) and Spearman's $\rho$ (rho) are particular cases of a general correlation coefficient. Suppose we have a set of $n$ objects, which are being considered in relation to two properties, represented by $x$ and $y$, forming the sets of values $\{x_i\}_{i\leq n}$ and $\{y_i\}_{i\leq n}$. To any pair of individuals, say the $i$-th and the $j$-th, we assign an $x$-score, denoted by $a_{ij}$, and a $y$-score, denoted by $b_{ij}$. (Note: as these are comparisons, $a_{ij}$ and $b_{ij}$ do not exist for $i=j$.) The only requirement for these functions is that they be anti-symmetric, so $a_{ij}=-a_{ji}$ and $b_{ij}=-b_{ji}$.
Then the generalized correlation coefficient $\Gamma$ is defined as

$$\Gamma = \frac{\sum_{i,j=1}^{n} a_{ij} b_{ij}}{\sqrt{\sum_{i,j=1}^{n} a_{ij}^2 \sum_{i,j=1}^{n} b_{ij}^2}}$$

### Kendall's $\tau$ as a particular case

If $r_i$, $s_i$ are the ranks of the $i$-th member according to the $x$-quality and $y$-quality respectively, then we can define

$$a_{ij} = \operatorname{sgn}(r_j - r_i), \quad b_{ij} = \operatorname{sgn}(s_j - s_i).$$

The sum $\sum a_{ij} b_{ij}$ is twice the number of concordant pairs minus the number of discordant pairs (see Kendall tau rank correlation coefficient). The sum $\sum a_{ij}^2$ is just $n(n-1)$, the number of terms $a_{ij}$, as is $\sum b_{ij}^2$. Thus in this case,

$$\Gamma = \frac{2\,\bigl((\text{number of concordant pairs}) - (\text{number of discordant pairs})\bigr)}{\sqrt{n(n-1)\,n(n-1)}} = \text{Kendall's } \tau$$

### Spearman's $\rho$ as a particular case

If $r_i$, $s_i$ are the ranks of the $i$-th member according to the $x$- and the $y$-quality respectively, we can simply define

$$a_{ij} = r_j - r_i, \qquad b_{ij} = s_j - s_i.$$

The sums $\sum a_{ij}^2$ and $\sum b_{ij}^2$ are equal, since both $r_i$ and $s_i$ range from $1$ to $n$.
Then we have:

$$\Gamma = \frac{\sum (r_j - r_i)(s_j - s_i)}{\sum (r_j - r_i)^2}$$

Now

$$\sum_{i,j=1}^{n}(r_j - r_i)(s_j - s_i) = \sum_{i=1}^{n}\sum_{j=1}^{n} r_i s_i + \sum_{i=1}^{n}\sum_{j=1}^{n} r_j s_j - \sum_{i=1}^{n}\sum_{j=1}^{n}(r_i s_j + r_j s_i)$$
$$= 2n\sum_{i=1}^{n} r_i s_i - 2\sum_{i=1}^{n} r_i \sum_{j=1}^{n} s_j$$
$$= 2n\sum_{i=1}^{n} r_i s_i - \tfrac{1}{2} n^2 (n+1)^2$$

since $\sum r_i$ and $\sum s_j$ are both equal to the sum of the first $n$ natural numbers, namely $\tfrac{1}{2}n(n+1)$. We also have

$$S = \sum_{i=1}^{n}(r_i - s_i)^2 = 2\sum r_i^2 - 2\sum r_i s_i$$

and hence

$$\sum (r_j - r_i)(s_j - s_i) = 2n\sum r_i^2 - \tfrac{1}{2} n^2 (n+1)^2 - nS$$

$\sum r_i^2$, being the sum of squares of the first $n$ naturals, equals $\tfrac{1}{6} n(n+1)(2n+1)$. Thus, the last equation reduces to

$$\sum (r_j - r_i)(s_j - s_i) = \tfrac{1}{6} n^2 (n^2 - 1) - nS$$

Further,

$$\sum (r_j - r_i)^2 = 2n\sum r_i^2 - 2\sum r_i r_j = 2n\sum r_i^2 - 2\Bigl(\sum r_i\Bigr)^2 = \tfrac{1}{6} n^2 (n^2 - 1)$$

and thus, substituting these results into the original formula, we get

$$\Gamma_R = 1 - \frac{6\sum d_i^2}{n^3 - n}$$

where $d_i = r_i - s_i$ is the difference between ranks. This is exactly Spearman's rank correlation coefficient $\rho$.

## Rank-biserial correlation

Gene Glass (1965) noted that the rank-biserial can be derived from Spearman's $\rho$. "One can derive a coefficient defined on X, the dichotomous variable, and Y, the ranking variable, which estimates Spearman's rho between X and Y in the same way that biserial r estimates Pearson's r between two normal variables" (p. 91).
The rank-biserial correlation had been introduced nine years before by Edward Cureton (1956) as a measure of rank correlation when the ranks are in two groups.

### Kerby simple difference formula

Dave Kerby (2014) recommended the rank-biserial as the measure to introduce students to rank correlation, because the general logic can be explained at an introductory level. The rank-biserial is the correlation used with the Mann–Whitney U test, a method commonly covered in introductory college courses on statistics. The data for this test consist of two groups; and for each member of the groups, the outcome is ranked for the study as a whole.

Kerby showed that this rank correlation can be expressed in terms of two concepts: the percent of data that support a stated hypothesis, and the percent of data that do not support it. The Kerby simple difference formula states that the rank correlation can be expressed as the difference between the proportion of favorable evidence ($f$) and the proportion of unfavorable evidence ($u$):

$$r = f - u$$

### Example and interpretation

To illustrate the computation, suppose a coach trains long-distance runners for one month using two methods. Group A has 5 runners, and Group B has 4 runners. The stated hypothesis is that method A produces faster runners. The race to assess the results finds that the runners from Group A do indeed run faster, with the following ranks: 1, 2, 3, 4, and 6. The slower runners from Group B thus have ranks of 5, 7, 8, and 9.

The analysis is conducted on pairs, defined as a member of one group compared to a member of the other group. For example, the fastest runner in the study is a member of four pairs: (1,5), (1,7), (1,8), and (1,9). All four of these pairs support the hypothesis, because in each pair the runner from Group A is faster than the runner from Group B. There are a total of 20 pairs, and 19 pairs support the hypothesis.
The only pair that does not support the hypothesis is the pair of runners with ranks 5 and 6, because in this pair the runner from Group B had the faster time. By the Kerby simple difference formula, 95% of the data support the hypothesis (19 of 20 pairs) and 5% do not (1 of 20 pairs), so the rank correlation is r = .95 - .05 = .90.

The maximum value for the correlation is r = 1, which means that 100% of the pairs favor the hypothesis. A correlation of r = 0 indicates that half the pairs favor the hypothesis and half do not; in other words, the sample groups do not differ in ranks, so there is no evidence that they come from two different populations. An effect size of r = 0 can be said to describe no relationship between group membership and the members' ranks.

## References

1. Kruskal, William H. (December 1958). "Ordinal Measures of Association". Journal of the American Statistical Association. Retrieved 2012-11-04.

- Cureton, E. E. (1956). Rank-biserial correlation. Psychometrika, 21, 287–290. doi:10.1007/BF02289138
- Everitt, B. S. (2002). The Cambridge Dictionary of Statistics. Cambridge: Cambridge University Press. ISBN 0-521-81099-X
- Diaconis, P. (1988). Group Representations in Probability and Statistics. Lecture Notes-Monograph Series. Hayward, CA: Institute of Mathematical Statistics. ISBN 0-940600-14-5
- Glass, G. V. (1965). A ranking variable analogue of biserial correlation: implications for short-cut item analysis. Journal of Educational Measurement, 2(1), 91–95. doi:10.1111/j.1745-3984.1965.tb00396.x
- Kendall, M. G. (1970). Rank Correlation Methods. London: Griffin. ISBN 0-85264-199-0
- Kerby, D. S. (2014). The simple difference formula: An approach to teaching nonparametric correlation. Comprehensive Psychology, 3, article 1. doi:10.2466/11.IT.3.1
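As a quick check on the runner example above, the Kerby simple difference formula can be computed directly from the two groups' ranks. This is a minimal sketch; the function name is mine:

```python
from itertools import product

def rank_biserial(group_a, group_b):
    """Kerby simple difference formula: r = f - u, where f and u are the
    proportions of cross-group pairs that favor / contradict the hypothesis
    that Group A outranks Group B (smaller rank number = faster runner)."""
    pairs = list(product(group_a, group_b))
    favorable = sum(1 for a, b in pairs if a < b)    # runner from A is faster
    unfavorable = sum(1 for a, b in pairs if a > b)  # runner from B is faster
    n = len(pairs)
    return favorable / n - unfavorable / n

# Ranks from the coaching example: Group A ran faster overall.
group_a = [1, 2, 3, 4, 6]
group_b = [5, 7, 8, 9]
print(round(rank_biserial(group_a, group_b), 10))  # 0.9
```

The 5 × 4 = 20 cross-group pairs split 19 favorable to 1 unfavorable, reproducing r = .95 - .05 = .90.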
https://byjus.com/questions/water-has-maximum-density-at/
# Water has maximum density at

Water has its maximum density at 4°C, where two opposing effects are in balance. When water is in the form of ice, there is a lot of empty space in the crystal lattice. This structure starts to collapse when the ice melts. As the temperature increases, the water molecules move further apart and the density decreases. At 0°C, liquid water still contains ice-like clusters that are free to move; the empty spaces within these clusters reduce the density. When warm water cools, its temperature falls and its density increases. At 4°C these two effects balance, so the density of water reaches its maximum there.
https://www.vedantu.com/question-answer/in-what-ratio-darjeeling-tea-costing-rs-320-per-class-8-maths-cbse-5f5b85d66e663a29cc4506b0
Question

# In what ratio must Darjeeling tea costing Rs. 320 per kg be mixed with Assam tea costing Rs. 250 per kg so that there is a gain of 20% by selling the mixture at Rs. 324 per kg?

A. 1:2
B. 2:3
C. 3:2
D. 2:5

Verified
129.3k+ views

Hint: Mixture problems are word problems where items or quantities of different values are mixed together. They involve combining two or more things and determining some characteristic of the resulting mixture.

Given: selling price (s.p.) of the mixture = Rs. 324 per kg; gain on the mixture = 20%.

First find the cost price (c.p.) of the mixture from the selling price, using the formula
$c.p = \left[ {\dfrac{{100}}{{100 + gain\% }}.s.p} \right]$ = $\left[ {\dfrac{{100}}{{100 + 20}}.324} \right]$ = $\left[ {\dfrac{{100}}{{120}}.324} \right]$

By solving the above equation, we get c.p. = Rs. 270 per kg.

By the rule of alligation, the ratio of the quantity of Darjeeling tea to the quantity of Assam tea is

(cost of mixture − cost of Assam tea) : (cost of Darjeeling tea − cost of mixture)
= Rs. (270 − 250) per kg : Rs. (320 − 270) per kg
= Rs. 20 per kg : Rs. 50 per kg
= $\dfrac{{20}}{{50}}$ = $\dfrac{2}{5}$

So Darjeeling tea and Assam tea must be mixed in the ratio 2:5 to gain 20% when selling the mixture at Rs. 324 per kg. The correct option is D.

Note: Students may get confused about how to handle 20%. Substitute the percentage figure itself (here, 20) into the cost-price formula. In mixture questions of this type, always find the cost price and selling price first; together with the gain or loss percentage, these are all you need to find the answer.
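The alligation computation above can be sketched in a few lines using exact rational arithmetic. The function name and its signature are mine, not part of the original solution:

```python
from fractions import Fraction

def mixing_ratio(price_dear, price_cheap, sell_price, gain_percent):
    """Rule of alligation: return the mixing ratio (dearer : cheaper) in
    lowest terms, recovering the mixture's cost price from the selling
    price and the desired percentage gain."""
    cost = Fraction(100, 100 + gain_percent) * sell_price  # c.p. of mixture
    ratio = Fraction(cost - price_cheap) / Fraction(price_dear - cost)
    return ratio.numerator, ratio.denominator

# Darjeeling at Rs. 320/kg with Assam at Rs. 250/kg, sold at Rs. 324/kg
# for a 20% gain:
print(mixing_ratio(320, 250, 324, 20))  # (2, 5)
```

Using `Fraction` avoids floating-point rounding, so the ratio comes out exactly as 2:5.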
http://math.stackexchange.com/questions/415853/the-limit-of-lim-limits-x-to-infty-sqrtx23x-4-x
# The limit of $\lim\limits_{x \to \infty}\sqrt{x^2+3x-4}-x$

I tried all I know and I always get to $\infty$; Wolfram Alpha says $\frac{3}{2}$. How should I simplify it?

$$\lim\limits_{x \to \infty}\sqrt{x^2+3x-4}-x$$

I tried multiplying by its conjugate, taking the square root out of the limit, dividing everything by $\sqrt{x^2}$, etc.

Obs.: Without using l'Hôpital's.

-

## 3 Answers

Note that \begin{align} \sqrt{x^2+3x-4} - x & = \left(\sqrt{x^2+3x-4} - x \right) \times \dfrac{\sqrt{x^2+3x-4} + x}{\sqrt{x^2+3x-4} + x}\\ & = \dfrac{(\sqrt{x^2+3x-4} - x)(\sqrt{x^2+3x-4} + x)}{\sqrt{x^2+3x-4} + x}\\ & = \dfrac{x^2+3x-4-x^2}{\sqrt{x^2+3x-4} + x} = \dfrac{3x-4}{\sqrt{x^2+3x-4} + x}\\ & = \dfrac{3-4/x}{\sqrt{1+3/x-4/x^2} + 1} \end{align} Now we get \begin{align} \lim_{x \to \infty}\sqrt{x^2+3x-4} - x & = \lim_{x \to \infty} \dfrac{3-4/x}{\sqrt{1+3/x-4/x^2} + 1}\\ & = \dfrac{3-\lim_{x \to \infty} 4/x}{1 + \lim_{x \to \infty} \sqrt{1+3/x-4/x^2} } = \dfrac{3}{1+1}\\ & = \dfrac32 \end{align}

-

I'm really sorry for taking your time, friend; I realized that I was working with $+1$ instead of $+x$ all the time. Thank you for your kind answer. – Luan Cristian Thums Jun 9 '13 at 20:41

Intuitively you can see this as follows: write $x^2+3x-4$ as $\left(x+\frac32\right)^2-\frac{25}{4}$. For $x$ large this quantity is almost the same as $\left(x+\frac32\right)^2$. Therefore, for $x$ large,

$$\sqrt{x^2+3x-4}-x\sim\sqrt{\left(x+\frac32\right)^2}-x=\frac32$$

-

$$\lim_{x \rightarrow \infty} \left(x^2 + 3x - 4\right)^{ \frac{1}{2}} - x$$ $$= \lim_{x \rightarrow \infty} \left(x^2\left(1+\frac{3}{x}-\frac{4}{x^2}\right)\right)^{ \frac{1}{2}} - x$$ $$= \lim_{x \rightarrow \infty} x\left(1+\frac{3}{x}-\frac{4}{x^2}\right)^{ \frac{1}{2}} - x$$ Then by a Taylor expansion, we get that $$= \lim_{x \rightarrow \infty} x\left(1+\frac{1}{2}\left(\frac{3}{x}-\frac{4}{x^2}\right)+\operatorname{o}\left(\frac{1}{x^2}\right)\right) - x$$ $$= \lim_{x \rightarrow \infty} \frac{3}{2} + \operatorname{o}\left(\frac{1}{x}\right) = \frac{3}{2}$$ as required.

-
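The conjugate computation shows that the expression approaches $\frac32$ from below at a rate of roughly $1/x$, which is easy to see numerically (a quick sketch, not part of the original thread):

```python
import math

def f(x):
    # The expression from the question: sqrt(x^2 + 3x - 4) - x.
    return math.sqrt(x * x + 3 * x - 4) - x

# The gap to 3/2 shrinks roughly like 1/x as x grows.
for x in (1e2, 1e4, 1e6):
    print(x, f(x), 1.5 - f(x))
```

At x = 100 the value is already about 1.469; by x = 10^6 it agrees with 3/2 to several decimal places.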
https://aitopics.org/mlt?cdid=news%3A8819506D&dimension=taxnodes
### On the Difficulty of Achieving Equilibrium in Interactive POMDPs We analyze the asymptotic behavior of agents engaged in an infinite horizon partially observable stochastic game as formalized by the interactive POMDP framework. We show that when agents' initial beliefs satisfy a truth compatibility condition, their behavior converges to a subjective ɛ-equilibrium in a finite time, and subjective equilibrium in the limit. This result is a generalization of a similar result in repeated games, to partially observable stochastic games. However, it turns out that the equilibrating process is difficult to demonstrate computationally because of the difficulty in coming up with initial beliefs that are both natural and satisfy the truth compatibility condition. Our results, therefore, shed some negative light on using equilibria as a solution concept for decision making in partially observable stochastic games. ### Multiagent Stochastic Planning With Bayesian Policy Recognition When operating in stochastic, partially observable, multiagent settings, it is crucial to accurately predict the actions of other agents. In my thesis work, I propose methodologies for learning the policy of external agents from their observed behavior, in the form of finite state controllers. To perform this task, I adopt Bayesian learning algorithms based on nonparametric prior distributions, that provide the flexibility required to infer models of unknown complexity. These methods are to be embedded in decision making frameworks for autonomous planning in partially observable multiagent systems. ### Gaussian-binary Restricted Boltzmann Machines on Modeling Natural Image Statistics We present a theoretical analysis of Gaussian-binary restricted Boltzmann machines (GRBMs) from the perspective of density models. 
The key aspect of this analysis is to show that GRBMs can be formulated as a constrained mixture of Gaussians, which gives a much better insight into the model's capabilities and limitations. We show that GRBMs are capable of learning meaningful features both in a two-dimensional blind source separation task and in modeling natural images. Further, we show that reported difficulties in training GRBMs are due to the failure of the training algorithm rather than the model itself. Based on our analysis we are able to propose several training recipes, which allowed successful and fast training in our experiments. Finally, we discuss the relationship of GRBMs to several modifications that have been proposed to improve the model. ### Delayed acceptance ABC-SMC Approximate Bayesian computation (ABC) is now an established technique for statistical inference used in cases where the likelihood function is computationally expensive or not available. It relies on the use of a model that is specified in the form of a simulator, and approximates the likelihood at a parameter $\theta$ by simulating auxiliary data sets $x$ and evaluating the distance of $x$ from the true data $y$. However, ABC is not computationally feasible in cases where using the simulator for each $\theta$ is very expensive. This paper investigates this situation in cases where a cheap, but approximate, simulator is available. The approach is to employ delayed acceptance Markov chain Monte Carlo (MCMC) within an ABC sequential Monte Carlo (SMC) sampler in order to, in a first stage of the kernel, use the cheap simulator to rule out parts of the parameter space that are not worth exploring, so that the "true" simulator is only run (in the second stage of the kernel) where there is a reasonable chance of accepting proposed values of $\theta$. 
We show that this approach can be used quite automatically, with the only tuning parameter choice additional to ABC-SMC being the number of particles we wish to carry through to the second stage of the kernel. Applications to stochastic differential equation models and latent doubly intractable distributions are presented. ### An unsupervised Bayesian approach for the joint reconstruction and classification of cutaneous reflectance confocal microscopy images This paper studies a new Bayesian algorithm for the joint reconstruction and classification of reflectance confocal microscopy (RCM) images, with application to the identification of human skin lentigo. The proposed Bayesian approach takes advantage of the distribution of the multiplicative speckle noise affecting the true reflectivity of these images and of appropriate priors for the unknown model parameters. A Markov chain Monte Carlo (MCMC) algorithm is proposed to jointly estimate the model parameters and the image of true reflectivity while classifying images according to the distribution of their reflectivity. Precisely, a Metropolis-within-Gibbs sampler is investigated to sample the posterior distribution of the Bayesian model associated with RCM images and to build estimators of its parameters, including labels indicating the class of each RCM image. The resulting algorithm is applied to synthetic data and to real images from a clinical study containing healthy and lentigo patients.
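The core ABC idea described in the delayed-acceptance abstract above — simulate auxiliary data at a proposed $\theta$ and keep $\theta$ when the simulation lands close to the observed data — can be illustrated with a plain rejection sampler. This is a simplified sketch on a toy Gaussian model, not the delayed-acceptance ABC-SMC algorithm of the paper; all names and parameter choices are mine:

```python
import random
import statistics

def abc_rejection(y_obs, simulate, prior_sample, distance, tol, n_props):
    """Plain ABC rejection: keep theta whenever the simulated summary
    statistic falls within tol of the observed one."""
    accepted = []
    for _ in range(n_props):
        theta = prior_sample()
        x = simulate(theta)
        if distance(x, y_obs) < tol:
            accepted.append(theta)
    return accepted

random.seed(0)

# Toy model: data are Normal(theta, 1); a data set is summarised by its mean.
true_theta = 2.0
y_obs = statistics.fmean(random.gauss(true_theta, 1) for _ in range(100))

post = abc_rejection(
    y_obs,
    simulate=lambda t: statistics.fmean(random.gauss(t, 1) for _ in range(100)),
    prior_sample=lambda: random.uniform(-5, 5),
    distance=lambda x, y: abs(x - y),
    tol=0.2,
    n_props=2000,
)
print(len(post), statistics.fmean(post))
```

The accepted thetas concentrate near the true value of 2. The delayed-acceptance idea replaces the single expensive `simulate` call with a cheap first-stage screen that rules out hopeless proposals before the true simulator is run.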
https://www.physicsforums.com/threads/continuity-of-functions.339497/
# Continuity of functions

1. Sep 22, 2009

### dannysaf

1) Let f and g be functions such that f(x) + g(x) and f(x) - g(x) are continuous at x = x0. Must f and g be continuous at x = x0?

2) What can be said about the continuity of f(x) + g(x) at x = x0, if f(x) is continuous and g(x) is discontinuous at x = x0?

3) What can be said about the continuity of f(x)g(x) at x = x0, if f(x) is continuous and g(x) is discontinuous at x = x0?

2. Sep 22, 2009

### HallsofIvy

Re: Continuity

I think the facts that f(x) = ((f+g)(x) + (f-g)(x))/2 and g(x) = ((f+g)(x) - (f-g)(x))/2 will help a lot!
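The identities in the hint can be checked numerically; the sketch below (my own choice of example functions, not part of the thread) recovers f and g from their sum and difference, and also illustrates question 2: adding a discontinuous g to a continuous f preserves g's jump in the sum.

```python
def f(x):
    return x * x                       # continuous everywhere

def g(x):
    return 1.0 if x >= 0 else 0.0      # jump discontinuity at x0 = 0

def s(x):                              # s = f + g
    return f(x) + g(x)

def d(x):                              # d = f - g
    return f(x) - g(x)

# The hint: f and g are half the sum / difference of s and d, so if s and d
# are both continuous at x0, f and g must be continuous there too.
for x in (-0.5, 0.0, 0.5):
    assert (s(x) + d(x)) / 2 == f(x)
    assert (s(x) - d(x)) / 2 == g(x)

# Question 2: f continuous + g discontinuous leaves the sum discontinuous;
# s inherits g's jump of size 1 at x0 = 0.
eps = 1e-8
print(s(0.0) - s(-eps))  # close to 1.0
```

Since sums and differences of continuous functions are continuous, the recovered f and g in question 1 must be continuous — that is the point of the hint.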
https://www.isixsigma.com/ask-tools-techniques/while-calculating-pp-and-ppk-how-do-i-determine-value-short-term-and-long-term-standard-deviation/
One must necessarily understand that the short-term standard deviation reports on the "instantaneous reproducibility" of a process whereas the long-term standard deviation reflects the "sustainable reproducibility."  To this end, the short-term standard deviation is comprised of the "within group" sums-of-squares (SSW).  The long-term standard deviation incorporates the "total" sums-of-squares (SST).  Of course, the difference between the two constitutes the "between group" sums-of-squares (SSB).

By employing a rational sampling strategy it is possible to effectively block the noises due to assignable causes from those due to random causes.  In this context, we recognize that SST = SSW + SSB.  By considering the degrees-of-freedom associated with SST and SSW, we are able to compute the corresponding variances and then establish the respective standard deviations.  In the case of a process characterization study, where g is the number of subgroups and n is the subgroup size, the short-term standard deviation is given by the quantity Sqrt(SSW / (g(n - 1))).  The long-term standard deviation is defined as Sqrt(SST / (ng - 1)).

When computing Cp and Cpk, it is necessary to employ the short-term standard deviation.  This ensures that the given index of capability reports on the instantaneous reproducibility of the process under investigation.  So as to reflect the sustainable reproducibility of the process, the long-term standard deviation must be employed to compute Pp and Ppk.  Oddly enough, many practitioners confuse these two overlapping sets of performance indices.

For more information on this topic, reference Harry, M. J. "The Vision of Six Sigma: A Roadmap for Breakthrough" located at http://www.tristarvisual.com/sixsigma2/index.mgi2.  Also see Harry, M.J. and Lawson, R.J. (1988). Six Sigma Producibility Analysis and Process Characterization. Publication Number 6s-3-03/88. Motorola University Press, Motorola Inc., Schaumburg Illinois.
As an additional post to your question, let us consider how the shift factor figures into the larger scheme of things.  Generally speaking, the shift factor is added to an estimate of long-term capability in order to remove long-term influences, therein providing an approximation of the short-term capability.  Conversely, the shift factor is subtracted from an estimate of the short-term capability in order to inject long-term influences, thereby providing an approximation of the long-term capability. For example, if the long-term capability of a process was known to be 4.5s, and we seek to approximate the short-term capability, then 1.5s would be added to 4.5s, therein providing the short-term estimate of 6.0s.  Conversely, if the short-term capability was known to be 6.0s, and we seek to approximate the long-term capability, then 1.5s must be subtracted from 6.0s, therein providing the long-term estimate of 4.5s.
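The two standard deviations described above can be computed directly from rationally subgrouped data using the stated formulas, short-term = Sqrt(SSW / (g(n - 1))) and long-term = Sqrt(SST / (ng - 1)). A minimal sketch (function and variable names are mine):

```python
def short_and_long_term_sd(groups):
    """groups: g rational subgroups, each of size n.
    Returns (short_term_sd, long_term_sd) from the SSW/SST decomposition."""
    g = len(groups)
    n = len(groups[0])
    all_values = [x for grp in groups for x in grp]
    grand_mean = sum(all_values) / (n * g)

    # Within-group sums of squares (random-cause, short-term variation).
    ssw = sum(sum((x - sum(grp) / n) ** 2 for x in grp) for grp in groups)
    # Total sums of squares (short-term plus between-group variation).
    sst = sum((x - grand_mean) ** 2 for x in all_values)

    short_term = (ssw / (g * (n - 1))) ** 0.5
    long_term = (sst / (n * g - 1)) ** 0.5
    return short_term, long_term

st, lt = short_and_long_term_sd([[1, 2, 3], [4, 5, 6]])
print(st, lt)  # 1.0 and sqrt(3.5) ~ 1.87
```

The first value would then feed Cp = (USL - LSL) / (6 * short_term), and the second the corresponding Pp; the gap between them reflects the between-group (assignable-cause) contribution.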
https://gomathanswerkey.com/go-math-grade-4-answer-key-homework-fl-chapter-7-add-and-subtract-fractions-review-test/
Upgrade your math skills by referring to the Go Math Grade 4 Answer Key Homework FL Chapter 7 Add and Subtract Fractions Review/Test. With the help of this HMH Go Math Grade 4 Review/Test Answer Key you can score good marks in the exam.

Chapter 7: Review/Test

### Review/Test – Page No. 309

Choose the best term from the box.

Question 1. A number represented by a whole number and a fraction is a _________________ .

Answer: A number represented by a whole number and a fraction is a Mixed number.

Question 2. A fraction that always has a numerator of 1 is a _______________ .

Answer: A fraction that always has a numerator of 1 is a Unit Fraction.

Write the fraction as a sum of unit fractions.

Question 3. $$\frac{4}{5}$$ = $$\frac{1}{5}$$+$$\frac{1}{5}$$+$$\frac{1}{5}$$+$$\frac{1}{5}$$

Explanation: In a unit fraction the numerator is 1. Here the numerator is 4, so we add $$\frac{1}{5}$$ four times: $$\frac{4}{5}$$ = $$\frac{1+1+1+1}{5}$$ = $$\frac{1}{5}$$+$$\frac{1}{5}$$+$$\frac{1}{5}$$+$$\frac{1}{5}$$.

Question 4. $$\frac{5}{10}$$ = $$\frac{1}{10}$$+$$\frac{1}{10}$$+$$\frac{1}{10}$$+$$\frac{1}{10}$$+$$\frac{1}{10}$$

Explanation: In a unit fraction the numerator is 1. Here the numerator is 5, so we add $$\frac{1}{10}$$ five times: $$\frac{5}{10}$$ = $$\frac{1+1+1+1+1}{10}$$ = $$\frac{1}{10}$$+$$\frac{1}{10}$$+$$\frac{1}{10}$$+$$\frac{1}{10}$$+$$\frac{1}{10}$$.

Write the mixed number as a fraction.

Question 5. 1 $$\frac{3}{8}$$ = $$\frac{□}{□}$$

Answer: $$\frac{11}{8}$$.

Explanation: To convert a mixed number to a fraction, multiply the whole number by the fraction's denominator, add the numerator, and write the result over the denominator. 1 $$\frac{3}{8}$$: (1×8)+3 = 8+3 = 11, so the answer is $$\frac{11}{8}$$.

Question 6.
4 $$\frac{2}{3}$$ = $$\frac{□}{□}$$

Answer: $$\frac{14}{3}$$.

Explanation: To convert a mixed number to a fraction, multiply the whole number by the fraction's denominator, add the numerator, and write the result over the denominator. 4 $$\frac{2}{3}$$: (4×3)+2 = 12+2 = 14, so the answer is $$\frac{14}{3}$$.

Question 7. 2 $$\frac{3}{5}$$ = $$\frac{□}{□}$$

Answer: $$\frac{13}{5}$$.

Explanation: 2 $$\frac{3}{5}$$: (2×5)+3 = 10+3 = 13, so the answer is $$\frac{13}{5}$$.

Write the fraction as a mixed number.

Question 8. $$\frac{12}{10}$$ = _____ $$\frac{□}{□}$$

Answer: 1 $$\frac{1}{5}$$.

Explanation: To convert a fraction to a mixed number, divide the numerator by the denominator, write the whole number, and put the remainder over the denominator. $$\frac{12}{10}$$ = 12÷10 = 1 $$\frac{2}{10}$$ = 1 $$\frac{1}{5}$$.

Question 9. $$\frac{10}{3}$$ = _____ $$\frac{□}{□}$$

Answer: 3 $$\frac{1}{3}$$.

Explanation: $$\frac{10}{3}$$ = 10÷3 = 3 $$\frac{1}{3}$$.

Question 10. $$\frac{15}{6}$$ = _____ $$\frac{□}{□}$$

Answer: 2 $$\frac{1}{2}$$.

Explanation: $$\frac{15}{6}$$ = 15÷6 = 2 $$\frac{3}{6}$$ = 2 $$\frac{1}{2}$$.

Find the sum or difference.

Question 11. $$2 \frac{3}{8}+1 \frac{6}{8}$$ = _____ $$\frac{□}{□}$$

Answer: $$\frac{33}{8}$$ = 4 $$\frac{1}{8}$$.

Explanation: $$2 \frac{3}{8}+1 \frac{6}{8}$$ = $$\frac{19}{8}$$+$$\frac{14}{8}$$ = $$\frac{33}{8}$$ = 4 $$\frac{1}{8}$$.

Question 12.
$$\frac{9}{12}-\frac{2}{12}$$ = _____ $$\frac{□}{□}$$

Answer: $$\frac{7}{12}$$.

Explanation: $$\frac{9}{12}-\frac{2}{12}$$ = $$\frac{7}{12}$$.

Question 13. $$5 \frac{7}{10}-4 \frac{5}{10}$$ = _____ $$\frac{□}{□}$$

Answer: $$\frac{6}{5}$$ = 1 $$\frac{1}{5}$$.

Explanation: $$5 \frac{7}{10}-4 \frac{5}{10}$$ = $$\frac{57}{10}$$-$$\frac{45}{10}$$ = $$\frac{12}{10}$$ = $$\frac{6}{5}$$.

Question 14. $$4 \frac{1}{6}-2 \frac{5}{6}$$ = _____ $$\frac{□}{□}$$

Answer: $$\frac{4}{3}$$ = 1 $$\frac{1}{3}$$.

Explanation: $$4 \frac{1}{6}-2 \frac{5}{6}$$ = $$\frac{25}{6}$$-$$\frac{17}{6}$$ = $$\frac{8}{6}$$ = $$\frac{4}{3}$$.

Question 15. $$3 \frac{2}{5}-1 \frac{4}{5}$$ = _____ $$\frac{□}{□}$$

Answer: $$\frac{8}{5}$$ = 1 $$\frac{3}{5}$$.

Explanation: $$3 \frac{2}{5}-1 \frac{4}{5}$$ = $$\frac{17}{5}$$-$$\frac{9}{5}$$ = $$\frac{8}{5}$$.

Question 16. $$\frac{4}{12}+\frac{6}{12}$$ = $$\frac{□}{□}$$

Answer: $$\frac{5}{6}$$.

Explanation: $$\frac{4}{12}+\frac{6}{12}$$ = $$\frac{10}{12}$$ = $$\frac{5}{6}$$.

Use the properties and mental math to find the sum.

Question 17. (1 $$\frac{2}{5}$$ + $$\frac{1}{5}$$) + 2 $$\frac{3}{5}$$ = _______ $$\frac{□}{□}$$

Answer: $$\frac{21}{5}$$.

Explanation: (1 $$\frac{2}{5}$$ + $$\frac{1}{5}$$) + 2 $$\frac{3}{5}$$ = ($$\frac{7}{5}$$ + $$\frac{1}{5}$$) + $$\frac{13}{5}$$ = $$\frac{8}{5}$$ + $$\frac{13}{5}$$ = $$\frac{21}{5}$$.

Question 18. 2 $$\frac{4}{6}$$ + (2 $$\frac{3}{6}$$ + 2 $$\frac{2}{6}$$) = _______ $$\frac{□}{□}$$

Answer: $$\frac{45}{6}$$.

Explanation: 2 $$\frac{4}{6}$$ + (2 $$\frac{3}{6}$$ + 2 $$\frac{2}{6}$$) = $$\frac{16}{6}$$ + ($$\frac{15}{6}$$ + $$\frac{14}{6}$$) = $$\frac{16}{6}$$ + $$\frac{29}{6}$$ = $$\frac{45}{6}$$.

Question 19. $$\frac{3}{10}$$ + (2 $$\frac{4}{10}$$ + $$\frac{7}{10}$$) = _______ $$\frac{□}{□}$$

Answer: $$\frac{34}{10}$$.

Explanation: $$\frac{3}{10}$$ + (2 $$\frac{4}{10}$$ + $$\frac{7}{10}$$) = $$\frac{3}{10}$$ + ($$\frac{24}{10}$$ + $$\frac{7}{10}$$) = $$\frac{3}{10}$$ + $$\frac{31}{10}$$ = $$\frac{34}{10}$$.

### Review/Test – Page No. 310

Question 20.
Eddie cut 2 $$\frac{2}{4}$$ feet of balsa wood for the length of a kite. He cut $$\frac{3}{4}$$ foot for the width of the kite. How much longer is the length of the kite than the width? Options: a. 1 $$\frac{1}{4}$$ feet b. 1 $$\frac{3}{4}$$ feet c. 2 feet d. 3 $$\frac{1}{4}$$ feet

Answer: b.

Explanation: The length Eddie cut is 2 $$\frac{2}{4}$$ feet and the width is $$\frac{3}{4}$$ foot, so the difference between the length and the width is 2 $$\frac{2}{4}$$ - $$\frac{3}{4}$$ = $$\frac{10}{4}$$ - $$\frac{3}{4}$$ = $$\frac{7}{4}$$ = 1 $$\frac{3}{4}$$ feet.

Question 21. On a trip to the art museum, Lily rode the subway for $$\frac{7}{10}$$ mile and walked for $$\frac{3}{10}$$ mile. How much farther did she ride on the subway than walk? Options: a. $$\frac{3}{10}$$ mile b. $$\frac{4}{10}$$ mile c. $$\frac{7}{10}$$ mile d. 1 mile

Answer: b.

Explanation: Lily rode $$\frac{7}{10}$$ mile and walked $$\frac{3}{10}$$ mile, so she rode $$\frac{7}{10}$$ - $$\frac{3}{10}$$ = $$\frac{4}{10}$$ mile farther than she walked.

Question 22. Pablo is training for a marathon. He ran 5 $$\frac{4}{8}$$ miles on Friday, 6 $$\frac{5}{8}$$ miles on Saturday, and 7 $$\frac{4}{8}$$ miles on Sunday. How many miles did he run on all three days? Options: a. 1 $$\frac{5}{8}$$ miles b. 12 $$\frac{1}{8}$$ miles c. 19 $$\frac{4}{8}$$ miles d. 19 $$\frac{5}{8}$$ miles

Answer: d.

Explanation: Pablo ran 5 $$\frac{4}{8}$$ miles on Friday, 6 $$\frac{5}{8}$$ miles on Saturday, and 7 $$\frac{4}{8}$$ miles on Sunday, so in total he ran 5 $$\frac{4}{8}$$ + 6 $$\frac{5}{8}$$ + 7 $$\frac{4}{8}$$ = $$\frac{44}{8}$$ + $$\frac{53}{8}$$ + $$\frac{60}{8}$$ = $$\frac{157}{8}$$ = 19 $$\frac{5}{8}$$ miles.

Question 23. Cindy has two jars of paint. Which fraction below represents how much paint Cindy has? Options: a. $$\frac{1}{8}$$ b. $$\frac{4}{8}$$ c. $$\frac{5}{8}$$ d. $$\frac{7}{8}$$

Answer: c.

Explanation: The first jar contains $$\frac{3}{8}$$ and the second jar $$\frac{2}{8}$$ of paint, so in total Cindy has $$\frac{3}{8}$$ + $$\frac{2}{8}$$ = $$\frac{5}{8}$$.
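The fraction arithmetic in these answers can be double-checked with Python's fractions module; a small sketch (the helper name is mine):

```python
from fractions import Fraction

def mixed_to_fraction(whole, num, den):
    """Mixed number -> improper fraction: whole*den + num, over den."""
    return Fraction(whole * den + num, den)

# Question 5: 1 3/8 = 11/8
assert mixed_to_fraction(1, 3, 8) == Fraction(11, 8)

# Question 11: 2 3/8 + 1 6/8 = 33/8
assert mixed_to_fraction(2, 3, 8) + mixed_to_fraction(1, 6, 8) == Fraction(33, 8)

# Question 21: Lily rode 7/10 - 3/10 = 4/10 mile farther than she walked.
assert Fraction(7, 10) - Fraction(3, 10) == Fraction(4, 10)

# Question 22: 5 4/8 + 6 5/8 + 7 4/8 = 157/8 = 19 5/8 miles.
total = sum(mixed_to_fraction(*m) for m in [(5, 4, 8), (6, 5, 8), (7, 4, 8)])
whole, rem = divmod(total.numerator, total.denominator)
print(whole, Fraction(rem, total.denominator))  # 19 5/8
```

Note that `Fraction` reduces automatically (e.g. 4/10 is stored as 2/5), which matches the simplified forms in the answer key.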
### Review/Test – Page No. 311 Question 24. Cole grew 2 $$\frac{3}{4}$$ inches last year. Kelly grew the same amount. Which fraction below represents the number of inches that Kelly grew last year? Options: a. $$\frac{3}{4}$$ b. $$\frac{5}{4}$$ c. $$\frac{11}{4}$$ d. $$\frac{14}{4}$$ Explanation: As Cole grew 2 $$\frac{3}{4}$$ inches and Kelly has same amount which is 2 $$\frac{3}{4}$$ inches, so the fraction is $$\frac{11}{4}$$ inches. Question 25. Olivia’s dog is 4 years old. Her cat is 1 $$\frac{1}{2}$$ years younger. How old is Olivia’s cat? Options: a. 5 $$\frac{1}{2}$$ years old b. 3 $$\frac{1}{2}$$ years old c. 2 $$\frac{1}{2}$$ years old d. 1 $$\frac{1}{2}$$ years old Explanation: Olivia’s dog is 4 years old and her cat is 1 $$\frac{1}{2}$$ years younger, so Olivia’s cat is = 4- 1 $$\frac{1}{2}$$ = $$\frac{8}{2}$$ – $$\frac{3}{2}$$ = $$\frac{5}{2}$$ = 2 $$\frac{1}{2}$$ years old. Question 26. Lisa mixed 4 $$\frac{4}{6}$$ cups of orange juice with 3 $$\frac{1}{6}$$ cups of milk to make a health shake. She drank 3 $$\frac{3}{6}$$ cups of the health shake. How much of the health shake did Lisa not drink? Options: a. $$\frac{2}{6}$$ cup b. 4 $$\frac{2}{6}$$ cups c. 7 $$\frac{5}{6}$$ cups d. 11 $$\frac{2}{6}$$ cups Explanation: Lisa mixed 4 $$\frac{4}{6}$$ cups of orange juice with 3 $$\frac{1}{6}$$ cups of milk to make a health shake, so total health shake is 4 $$\frac{4}{6}$$+3 $$\frac{1}{6}$$ = $$\frac{28}{6}$$+ $$\frac{19}{6}$$ = $$\frac{47}{6}$$ cups of health shake. As she drank 3 $$\frac{3}{6}$$ cups of health shake, so = $$\frac{47}{6}$$– 3 $$\frac{3}{6}$$ = $$\frac{47}{6}$$– $$\frac{21}{6}$$ = $$\frac{26}{6}$$ = 4 $$\frac{2}{6}$$ cups. Question 27. Keiko entered a contest to design a new school flag. Five twelfths of her flag has stars and $$\frac{3}{12}$$ has stripes. What fraction of Keiko’s flag has stars and stripes? Options: a. $$\frac{8}{12}$$ b. $$\frac{8}{24}$$ c. $$\frac{2}{12}$$ d. 
$$\frac{2}{24}$$ Explanation: As Keiko’s flag has Five-twelfths of stars and $$\frac{3}{12}$$ of strips, so the fraction of Keiko’s flag has stars and stripes is $$\frac{5}{12}$$+$$\frac{3}{12}$$ = $$\frac{8}{12}$$. ### Review/Test – Page No. 312 Constructed Response Question 28. Ela is knitting a scarf from a pattern. The pattern calls for 4 $$\frac{2}{12}$$ yards of yarn. She has only 2 $$\frac{11}{12}$$ yards of yarn. How much more yarn does Ela need to finish knitting the scarf? Explain how you found your answer. _____ $$\frac{□}{□}$$ yards Answer: 1 $$\frac{3}{12}$$ yards. Explanation: Ela’s pattern calls for 4 $$\frac{2}{12}$$ yards of yarn and Ela has 2 $$\frac{11}{12}$$ yards of yarn only, so she needs 4 $$\frac{2}{12}$$– 2 $$\frac{11}{12}$$ = $$\frac{50}{12}$$ – $$\frac{35}{12}$$ = $$\frac{15}{12}$$ = 1 $$\frac{3}{12}$$ yards more. Question 29. Miguel’s class went to the state fair. The fairground is divided into sections. Rides are in $$\frac{6}{10}$$ of the fairground. Games are in $$\frac{2}{10}$$ of the fairground. Farm exhibits are in $$\frac{1}{10}$$ of the fairground. A. How much greater is the fraction of the fairground with rides than the fraction with farm exhibits? Draw a model to prove your answer is correct. $$\frac{□}{□}$$ Answer: $$\frac{5}{10}$$. Explanation: As the fairground is divided into sections, rides are in $$\frac{6}{10}$$ of the fairground, games are in $$\frac{2}{10}$$ of the fairground and Farm exhibits are in $$\frac{1}{10}$$ of the fairground. So the fraction of the fairground with rides than the fraction with farm exhibits is $$\frac{6}{10}$$– $$\frac{1}{10}$$ = $$\frac{5}{10}$$ greater than farm exhibits. Question 29. B. What fraction of the fairground has games and farm exhibits? Answer: $$\frac{3}{10}$$. Explanation: The fraction of the fairground has games and farm exhibits is $$\frac{2}{10}$$+$$\frac{1}{10}$$ = $$\frac{3}{10}$$. Question 29. C. The rest of the fairground is refreshment booths. 
What fraction of the fairground is refreshment booths? Describe the steps you follow to solve the problem.

Answer: $$\frac{1}{10}$$.

Explanation: The fairground is divided into sections: rides are in $$\frac{6}{10}$$ of the fairground, games are in $$\frac{2}{10}$$, and farm exhibits are in $$\frac{1}{10}$$. Together these sections take up $$\frac{6}{10}$$ + $$\frac{2}{10}$$ + $$\frac{1}{10}$$ = $$\frac{9}{10}$$ of the fairground. Since the whole fairground is $$\frac{10}{10}$$, the fraction that is refreshment booths is $$\frac{10}{10}$$ – $$\frac{9}{10}$$ = $$\frac{1}{10}$$.

Conclusion: The students of 4th grade can avail all chapters of the Go Math Grade 4 Answer Key in pdf format so that their learning will kick start in an effective manner. We have given a brief explanation of each and every question in our Go Math Grade 4 Answer Key Homework FL Chapter 7 Add and Subtract Fractions Review/Test. We suggest the students understand the concepts and apply them in the real world.
https://math.stackexchange.com/questions/1588996/yet-another-log-sin-integral-int-limits-0-pi-3-log1-sin-x-log1-sin-x?noredirect=1
# Yet another log-sin integral $\int\limits_0^{\pi/3}\log(1+\sin x)\log(1-\sin x)\,dx$

There has been much interest in various log-trig integrals on this site (e.g. see [1][2][3][4][5][6][7][8][9]). Here is another one I'm trying to solve: $$\int\limits_0^{\pi/3}\log(1+\sin x)\log(1-\sin x)\,dx\approx-0.41142425522824105371...$$

I tried to feed it to Maple and Mathematica, but they are unable to evaluate it in this form. After changing the variable $x=2\arctan z,$ and factoring rational functions under logarithms, the integrand takes the form $$\frac{2 \log ^2\left(z^2+1\right)}{z^2+1}-\frac{4 \log (1-z) \log \left(z^2+1\right)}{z^2+1}\\-\frac{4 \log (z+1) \log \left(z^2+1\right)}{z^2+1}+\frac{8 \log (1-z) \log (z+1)}{z^2+1}$$ in which form it can be evaluated by Mathematica. It spits out a huge ugly expression with complex numbers, polylogarithms, polygammas and generalized hypergeometric functions (that indeed matches numerical estimates of the integral). It takes a long time to simplify and with only little improvement (see here if you are curious). I'm looking for a better approach to this integral that can produce the answer in a simpler form.

• A possible approach may be to use a well-known Fourier series (math.stackexchange.com/questions/292468/…): $$\forall x\in(0,\pi),\quad \log(1-\cos(x))= -\log(2)-\sum_{k\geq 1}\frac{2\cos(kx)}{k}\tag{1}$$ $$\forall x\in(0,\pi),\quad \log(1+\cos(x))= -\log(2)-\sum_{k\geq 1}\frac{2(-1)^k\cos(kx)}{k}\tag{2}$$ and: $$\begin{eqnarray*} I = \int_{\pi/6}^{\pi/2}\log(1-\cos(x))\log(1+\cos(x))\,dx \tag{3}\end{eqnarray*}$$ – Jack D'Aurizio Dec 25 '15 at 22:44

• Why are you trying to solve this particular integral?
– Carl Mummert Dec 29 '15 at 1:37 Integral expressed in terms of $F_\pm(x,n)$ For $2x\in(-\pi,\pi)$, one may write the integrand as \begin{align} \prod_\pm\ln(1\pm\sin x) &=2f_-(2\bar x,2)-2f_-(\bar x,2)-2f_+(\bar x,2)-2\ln 2f_-(2\bar x,1)+2\bar x(2\bar x-\pi)+\ln^2 2 \end{align} where $4\bar x=\pi-2x$ and $f_\pm(x,n)=\mathrm{Re}\ln^n(1\pm e^{2ix})$. Now note that for $n=1,2$, $f_\pm(x,n)$ has antiderivatives $F_\pm(x,n)$ which can be obtained through integration by parts. To be specific, \begin{align} F_-(x,1)&=\mathrm{Re}\frac i2\mathrm{Li}_2(e^{2ix})\\ F_-(x,2)&=\mathrm{Re}\frac i2\left(2\mathrm{Li}_3(1-e^{2ix})-2\mathrm{Li}_2(1-e^{2ix})\ln(1-e^{2ix})-\ln(e^{2ix})\ln^2(1-e^{2ix})\right)\\ F_+(x,2)&=\mathrm{Re}\frac i2\left(2\mathrm{Li}_3(z)-2\mathrm{Li}_2(z)\ln(z)-\ln^2 z\ln(1-z)+\frac{\ln^3 z} 3\right) \end{align} where $z=(1+e^{2ix})^{-1}$. As the integrand has no poles in the first quadrant, we are allowed to simply plug in the limits into these antiderivatives. This gives \begin{align} \int^\frac{\pi}{3}_0\prod_\pm\ln(1\pm\sin x)\ dx=&\ 2F_-\left(\tfrac\pi 2,2\right)-2F_-\left(\tfrac\pi 6,2\right)-4F_-\left(\tfrac\pi 4,2\right)+4F_-\left(\tfrac\pi{12},2\right)-4F_+\left(\tfrac\pi 4,2\right)\\&+4F_+\left(\tfrac\pi{12},2\right)-2\ln 2 F_+\left(\tfrac\pi{2},1\right)+2\ln 2 F_+\left(\tfrac\pi{6},1\right)+\frac\pi 3\ln^2 2-\frac{23\pi^3}{324} \end{align} It remains to simplify these polylogarithmic expressions. 
Simplification of $F_-\left(\tfrac\pi{2},1\right)$ and $F_-\left(\tfrac\pi{6},1\right)$ Evidently, $F_+\left(\tfrac\pi{2},1\right)=\mathrm{Re}\left(\frac i2\mathrm{Li}_2(-1)\right)=0$, while the value of $$F_+\left(\tfrac\pi{6},1\right)=\mathrm{Re}\left(\tfrac i2\mathrm{Li}_2(e^{\pi i/3})\right)=\frac{-\psi_1\left(\frac 16\right)-\psi_1\left(\frac 13\right)+\psi_1\left(\frac 23\right)+\psi_1\left(\frac 56\right)}{48\sqrt 3}=\frac{\pi^2}{6\sqrt 3}-\frac{\psi_1\left(\frac 13\right)}{4\sqrt 3}$$ can be deduced by writing it as a sum and applying the duplication formula followed by the reflection formula twice. Simplification of $F_-\left(\tfrac\pi{2},2\right)$ and $F_-\left(\tfrac\pi{6},2\right)$ Use the polylogarithm inversion formulae to deduce that $F_-\left(\tfrac\pi{2},2\right)=0$. Since $1-e^{\pi i/3}=e^{-\pi i/3}$ lies on the unit circle it is easy to verify that $$F_+\left(\tfrac\pi{6},2\right)=\frac{\pi^3}{324}$$ using the known Fourier series identities for $\sum\cos(n\theta)n^{-2}$ and $\sum\sin(n\theta)n^{-3}$. Simplification of $F_-\left(\tfrac\pi{4},2\right)$ and $F_+\left(\tfrac\pi{4},2\right)$ The 3 facts \begin{align} \mathrm{Li}_2(1-i)&=\frac{\pi^2}{16}-i\left(\frac{\pi}{4}\ln 2+G\right)\\ \mathrm{Li}_2\left(\frac{1-i}2\right)&=\frac{5\pi^2}{96}-\frac{\ln^2 2}{8}+i\left(\frac{\pi}{8}\ln 2-G\right)\\ -\mathrm{Im}\ \mathrm{Li}_3\left(\frac{1-i}2\right)&=\mathrm{Im}\ \mathrm{Li}_3(1-i)+\frac{7\pi^3}{128}+\frac{3\pi}{32}\ln^2 2 \end{align} (which respectively follow from the dilogarithm reflection formula and Landen's di/trilogarithm identities) allow us to conclude, after some algebra, $$F_-\left(\tfrac\pi{4},2\right)=-F_+\left(\tfrac\pi{4},2\right)=-\mathrm{Im}\ \mathrm{Li}_3(1-i)-\frac{G}{2}\ln 2-\frac{\pi^3}{32}-\frac{\pi}{16}\ln^2 2$$ So $F_-\left(\tfrac\pi{4},2\right)+F_+\left(\tfrac\pi{4},2\right)=0$ - a surprisingly convenient equality indeed. 
Simplification of $F_-\left(\tfrac\pi{12},2\right)+F_+\left(\tfrac\pi{12},2\right)$ This is the most tedious part of the evaluation. We have the identity \begin{align} \mathrm{Li}_3\left(\frac{1-z}{1+z}\right)-\mathrm{Li}_3\left(-\frac{1-z}{1+z}\right)= &\ 2\mathrm{Li}_3\left(1-z\right)+2\mathrm{Li}_3\left(\frac{1}{1+z}\right)-\frac12\mathrm{Li}_3\left(1-z^2\right)\\ &\ -\frac{\ln^3(1+z)}{3}+\frac{\pi^2}6\ln(1+z)-\frac{7\zeta(3)}4 \end{align} and it so happens that when $z=e^{\pi i/6}$, $(1-z)(1+z)^{-1}=-(2-\sqrt 3)i$ is purely imaginary and $1-z^2$ lies on the unit circle. Therefore \begin{align} 4\mathrm{Im}\left(\mathrm{Li}_3\left(1-e^{\pi i/6}\right)+\mathrm{Li}_3\left(\frac{1}{1+e^{\pi i/6}}\right)\right)&=-4\mathrm{Ti}_3\left(2-\sqrt 3\right)-\frac{17\pi^3}{288}+\frac{\pi}{24}\ln^2(2-\sqrt 3)\\ &=4\mathrm{Ti}_3\left(2+\sqrt 3\right)-\frac{89\pi^3}{288}-\frac{23\pi}{24}\ln^2(2+\sqrt 3) \end{align} since $16\mathrm{Ti}_3(z)+16\mathrm{Ti}_3(z^{-1})=\pi^3+4\pi\ln^2 z$ . Furthermore, it is not hard to get $$\mathrm{Li}_2\left(e^{\pi i/6}\right)=\frac{13\pi^2}{144}+i\left(\frac{\psi_1\left(\frac 13\right)}{8\sqrt 3}+\frac{2G}3-\frac{\pi^2}{12\sqrt 3}\right)$$ by applying its definition, so by the dilogarithm reflection formula, $$\mathrm{Li}_2\left(1-e^{\pi i/6}\right)=\frac{\pi^2}{144}+i\left(\frac{\pi^2}{12\sqrt 3}-\frac{\psi_1\left(\frac 13\right)}{8\sqrt 3}-\frac{2G}3+\frac{\pi}{12}\ln(2+\sqrt 3)\right)$$ By a similar process we obtain $$\mathrm{Li}_2\left(\frac{1}{1+e^{\pi i/6}}\right)=\frac{23\pi^2}{288}-\frac{\ln^2(2+\sqrt 3)}{8}+i\left(\frac{\psi_1\left(\frac 13\right)}{8\sqrt 3}-\frac{\pi^2}{12\sqrt 3}-\frac{2G}3+\frac{\pi}{24}\ln(2+\sqrt 3)\right)$$ after an application of the inversion formula to $z=1+e^{\pi i/6}$. 
After some further manipulations using these values, we eventually arrive at $$4F_-\left(\tfrac\pi{12},2\right)+4F_+\left(\tfrac\pi{12},2\right)=-4\mathrm{Ti}_3\left(2+\sqrt 3\right)+\frac{8G}{3}\ln(2+\sqrt 3)+\frac{5\pi}{6}\ln^2(2+\sqrt 3)+\frac{137\pi^3}{648}$$

The Closed Form

Assimilating all our results, we indeed get \begin{align}\int^\frac{\pi}{3}_0\prod_\pm\ln(1\pm\sin x)\ dx=&-4 \mathrm{Ti}_3\left(2+\sqrt3\right)-\frac{\psi_1\left(\frac13\right)}{2 \sqrt{3}}\ln 2+\frac{8G}3\ln\left(2+\sqrt3\right)+\frac{29\pi^3}{216}\\ &\ \ +\frac{5\pi}6\ln^2\left(2+\sqrt3\right)+\frac\pi3\ln^22+\frac{\pi^2}{3\sqrt3}\ln2\\ \end{align} as Cleo announced. \begin{align}\int_0^{\pi/3}\ln(1+\sin x)\ln(1-\sin x)\,dx=&\frac{29\pi^3}{216}+\frac{5\pi}6\ln^2\left(2+\sqrt3\right)+\frac\pi3\ln^22+\frac{\pi^2}{3\sqrt3}\ln2\\+&\frac{8G}3\ln\left(2+\sqrt3\right)-4 \operatorname{Ti}_3\left(2+\sqrt3\right)-\frac{\psi^{(1)}\!\left(\tfrac13\right)}{2 \sqrt{3}}\ln2,\end{align} where $G$ is the Catalan constant, $\operatorname{Ti}_3(z)=\Im\operatorname{Li}_3(iz)$ is the generalized inverse tangent integral, and $\psi^{(1)}(z)$ is the trigamma function.

Another way is to use the Maclaurin series: $$\log(1\pm \sin x) = \pm\sin x -\dfrac12 \sin^2x\pm \dfrac13 \sin^3x -\dfrac14\sin^4x\pm \dfrac15\sin^5x-\dfrac16\sin^6x\pm\dots,$$which converges since $\sin^2 x\le\dfrac 34<1$ on the interval. The resulting expression $$\int\limits_0^{\dfrac{\pi}3}\left(\dfrac14\sin^4x\left(1+ \dfrac12\sin^2x+\dfrac13\sin^4x+\dfrac14\sin^6x+\dots\right)^2 - {\sin^2x\left(1 + \dfrac13 \sin^2x + \dfrac15 \sin^4x + \dfrac17\sin^6x+\dots\right)^2}\right)dx$$ looks convenient for approximate calculation.
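For anyone who wants to double-check numerically: the sketch below (standard-library Python only) compares the closed form against a direct Simpson-rule evaluation of the integral. Catalan's constant is hard-coded, $\psi^{(1)}\!\left(\tfrac13\right)$ is summed with an Euler–Maclaurin tail, and $\mathrm{Ti}_3(2+\sqrt3)$ is obtained from $\mathrm{Ti}_3(2-\sqrt3)$ via the inversion formula $16\,\mathrm{Ti}_3(z)+16\,\mathrm{Ti}_3(z^{-1})=\pi^3+4\pi\ln^2 z$ quoted above.

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def ti3(z):
    # inverse tangent integral Ti_3(z) for |z| < 1 (alternating series)
    return sum((-1) ** n * z ** (2 * n + 1) / (2 * n + 1) ** 3 for n in range(40))

def trigamma(a, N=200):
    # psi_1(a) = sum_{k>=0} 1/(k+a)^2, with an Euler-Maclaurin tail correction
    s = sum(1.0 / (k + a) ** 2 for k in range(N))
    x = N + a
    return s + 1 / x + 1 / (2 * x ** 2) + 1 / (6 * x ** 3) - 1 / (30 * x ** 5)

G = 0.9159655941772190            # Catalan's constant
L = math.log(2 + math.sqrt(3))
Ti3_big = (math.pi ** 3 + 4 * math.pi * L ** 2) / 16 - ti3(2 - math.sqrt(3))

closed_form = (29 * math.pi ** 3 / 216 + 5 * math.pi / 6 * L ** 2
               + math.pi / 3 * math.log(2) ** 2
               + math.pi ** 2 / (3 * math.sqrt(3)) * math.log(2)
               + 8 * G / 3 * L - 4 * Ti3_big
               - trigamma(1 / 3) / (2 * math.sqrt(3)) * math.log(2))

numeric = simpson(lambda x: math.log(1 + math.sin(x)) * math.log(1 - math.sin(x)),
                  0.0, math.pi / 3)

assert abs(numeric - closed_form) < 1e-8
```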
Too long for a comment : Using the fact that $\sin t=\cos\bigg(\dfrac\pi2-t\bigg),$ together with the well-known formulas for $1\pm\cos(2u),$ and the properties of the natural logarithm, we have \begin{align}I(a)&=\int_0^a\ln(1-\sin x)\ln(1+\sin x)~dx=\\&=(2\pi-a)\ln^22-\dfrac{\pi^3}{12}+2\ln2\displaystyle\int_0^a\ln\cos x~dx-8\int_0^b\ln\sin x\ln\cos x~dx,\end{align} where $a\in\bigg(0,~\dfrac\pi2\bigg)$ and $b=\dfrac\pi4-\dfrac a2.~$ Even in the absence of any particularly bright ideas, the last two integrals are still expressible in terms of the derivatives of the $($ incomplete $)$ beta function . See Wallis' integrals for more information. In this specific case, $a=\dfrac\pi3$ and $b=\dfrac\pi{12}.$ • Substituting $\cos x=t$ and expanding the integrand into its binomial series, followed by reversing the order of summation and integration, we have $\displaystyle\int_0^a\ln\cos x~dx=-a\ln2-\frac{\Im\Big[\text{Li}_2\big(-e^{2ia}\big)\Big]}2$ – Lucian Dec 26 '15 at 16:27
https://ijnaa.semnan.ac.ir/article_455.html
# Perfect $2$-colorings of the Platonic graphs

Document Type: Research Paper

Authors

1 School of Computer Engineering, Iran University of Science and Technology, Narmak, Tehran 16846, Iran

2 School of Mathematics, Iran University of Science and Technology, Narmak, Tehran 16846, Iran

Abstract

In this paper, we enumerate the parameter matrices of all perfect $2$-colorings of the Platonic graphs consisting of the tetrahedral graph, the cubical graph, the octahedral graph, the dodecahedral graph, and the icosahedral graph.
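As a concrete illustration of the notion (illustrative code, not taken from the paper): a $2$-coloring of a graph is perfect when every vertex of color $i$ has exactly $M_{ij}$ neighbors of color $j$, where $M$ is the parameter matrix. The sketch below checks this condition for the tetrahedral graph $K_4$:

```python
def is_perfect_coloring(adj, coloring, M):
    # a 2-coloring is perfect iff every vertex of color i has
    # exactly M[i][j] neighbors of color j
    for v, nbrs in enumerate(adj):
        counts = [0, 0]
        for u in nbrs:
            counts[coloring[u]] += 1
        if counts != M[coloring[v]]:
            return False
    return True

# tetrahedral graph = K4, given as adjacency lists
K4 = [[1, 2, 3], [0, 2, 3], [0, 1, 3], [0, 1, 2]]

# one white vertex (color 0), three black vertices (color 1)
coloring = [0, 1, 1, 1]
# parameter matrix: white vertices see 0 white and 3 black neighbors,
# black vertices see 1 white and 2 black neighbors
M = [[0, 3], [1, 2]]

assert is_perfect_coloring(K4, coloring, M)
```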
https://en.neurochispas.com/calculus/derivative-of-natural-log-lnx-with-proofs-and-graphs/
# Derivative of Natural log (ln(x)) with Proofs and Graphs

The natural logarithm, also denoted as ln(x), is the logarithm of x to base e (Euler's number). The derivative of the natural logarithm is equal to one over x, 1/x. We can prove this derivative using limits or implicit differentiation.

In this article, we will learn how to differentiate the natural logarithmic function. We will review some fundamentals, definitions, formulas, graphical comparisons of ln(x) and its derivative, proofs, and some examples.

## Proofs of the Derivative of Natural Logarithm of x

### Proof of the derivative of ln(x) using the first principle

Before working through this proof, it is recommended to review the first principle of limits, Euler's number, and L'Hôpital's rule as prerequisites.

To review, the derivative of any function can be obtained from the limit $$\frac{d}{dx} f(x) = \lim \limits_{h \to 0} {\frac{f(x+h)-f(x)}{h}}$$

For the derivative of $$f(x) = \ln{(x)}$$ we have $$\frac{d}{dx} f(x) = \lim \limits_{h \to 0} {\frac{ \ln{(x+h)} – \ln{(x)} }{h}}$$

In this form the limit cannot yet be evaluated, since substituting zero for h would leave the expression undefined. Therefore, we check whether applying some properties of logarithms can be useful. The division property of logarithms states that the log of a quotient is the difference of the logs, and the numerator has exactly this form.
Applying this, we have $$\frac{d}{dx} f(x) = \lim \limits_{h \to 0} {\frac{ \ln{(x+h)} – \ln{(x)} }{h}}$$ $$\frac{d}{dx} f(x) = \lim \limits_{h \to 0} { \frac{ \ln{\left(\frac{x+h}{x} \right)} }{h}}$$ $$\frac{d}{dx} f(x) = \lim \limits_{h \to 0} { \ln{\left(\frac{x+h}{x} \right)} \cdot \frac{1}{h}}$$ Rearranging, we have $$\frac{d}{dx} f(x) = \lim \limits_{h \to 0} { \frac{1}{h} \cdot \ln{\left(\frac{x+h}{x} \right)} }$$ $$\frac{d}{dx} f(x) = \lim \limits_{h \to 0} { \frac{1}{h} \cdot \ln{\left(1 + \frac{h}{x} \right)} }$$ We can eliminate the denominator h by substituting $$h = vx$$ where $$v = \frac{h}{x}$$ which shows that as h approaches 0, v also approaches 0. Substituting, we have $$\frac{d}{dx} f(x) = \lim \limits_{v \to 0} { \frac{1}{vx} \cdot \ln{\left(1 + v \right)} }$$ Re-arranging, we have $$\frac{d}{dx} f(x) = \lim \limits_{v \to 0} { \frac{1}{x} \cdot \frac{1}{v} \ln{\left(1 + v \right)} }$$ Since $$\frac{1}{x}$$ does not depend on v, we may factor it out of the limit: $$\frac{d}{dx} f(x) = \frac{1}{x} \cdot \lim \limits_{v \to 0} { \frac{1}{v} \ln{\left(1 + v \right)} }$$ By applying the power property of logarithms to our remaining limit, we have $$\frac{d}{dx} f(x) = \frac{1}{x} \cdot \lim \limits_{v \to 0} { \ln{\left( \left(1 + v \right)^{\frac{1}{v}} \right)} }$$ The expression inside the logarithm is now exactly the limit that defines Euler's number e. Since $$\lim \limits_{v \to 0} (1 + v)^{\frac{1}{v}} = e$$ by definition, we get $$\frac{d}{dx} f(x) = \frac{1}{x} \cdot \lim \limits_{v \to 0} {\ln{(e)}}$$ Evaluating ln(e), we know that it is equal to one.
Hence, we have $$\frac{d}{dx} f(x) = \frac{1}{x} \cdot \lim \limits_{v \to 0} {(1)}$$ $$\frac{d}{dx} f(x) = \frac{1}{x} \cdot (1)$$ Therefore, the derivative of the natural logarithm in the form of $$\ln{(x)}$$ is: $$\frac{d}{dx} (\ln{(x)}) = \frac{1}{x}$$

Alternatively, instead of using the definition of Euler's number, we may also evaluate the same remaining limit by applying L'Hôpital's rule. $$\frac{d}{dx} f(x) = \frac{1}{x} \cdot \lim \limits_{v \to 0} { \frac{1}{v} \ln{\left(1 + v \right)} }$$ $$\frac{d}{dx} f(x) = \frac{1}{x} \cdot \lim \limits_{v \to 0} { \frac{\ln{\left(1 + v \right)}}{v} }$$ This remaining limit has the indeterminate form $$\frac{0}{0}$$, so L'Hôpital's rule applies. Differentiating the numerator and the denominator, we have $$\frac{d}{dx} f(x) = \frac{1}{x} \cdot \lim \limits_{v \to 0} { \frac{ \frac{1}{1+v} }{1} }$$ $$\frac{d}{dx} f(x) = \frac{1}{x} \cdot \lim \limits_{v \to 0} { \frac{1}{1+v} }$$ Evaluating by substituting the approaching value of v, we have $$\frac{d}{dx} f(x) = \frac{1}{x} \cdot \lim \limits_{v \to 0} { \frac{1}{1+(0)} }$$ $$\frac{d}{dx} f(x) = \frac{1}{x} \cdot \lim \limits_{v \to 0} { \frac{1}{1} }$$ $$\frac{d}{dx} f(x) = \frac{1}{x} \cdot \lim \limits_{v \to 0} { 1 }$$ $$\frac{d}{dx} f(x) = \frac{1}{x} \cdot (1)$$ $$\frac{d}{dx} (\ln{(x)}) = \frac{1}{x}$$

### Proof of the derivative of ln(x) using implicit differentiation

For this proof, it is recommended to review the derivatives of exponential functions and implicit differentiation.

Suppose we have the equation $$y = \ln{(x)}$$ In general logarithmic form, it is $$\log_{e}{x} = y$$ And in exponential form, it is $$e^y = x$$ Differentiating the exponential form implicitly with respect to x, we have $$e^y = x$$ $$\frac{d}{dx} (e^y) = \frac{d}{dx} (x)$$ $$e^y \cdot \frac{dy}{dx} = 1$$ Isolating $$\frac{dy}{dx}$$, we have $$\frac{dy}{dx} = \frac{1}{e^y}$$ We recall that in the beginning, $$y = \ln{(x)}$$.
Substituting this into the y of our derivative, we have $$\frac{dy}{dx} = \frac{1}{e^{(\ln{(x)})}}$$ Evaluating, we now have the derivative of $$y = \ln{(x)}$$ $$y' = \frac{1}{x}$$

## Graph of ln(x) vs. its derivative

Given the function $$f(x) = \ln{(x)}$$ its graph is shown below. And as we know by now, by differentiating $$f(x) = \ln{(x)}$$, we get $$f'(x) = \frac{1}{x}$$ which is illustrated graphically below. Comparing both graphs in one plot, we have the following.

Using the graphs, it can be seen that the original function $$f(x) = \ln{(x)}$$ has a domain of $$(0,\infty)$$ or $$x | x > 0$$ and exists within the range of $$(-\infty, \infty)$$ or all real numbers, whereas the derivative $$f'(x) = \frac{1}{x}$$ has a domain of $$(-\infty,0) \cup (0,\infty)$$ or $$x | x \neq 0$$ and exists within the range of $$(-\infty,0) \cup (0,\infty)$$ or $$y | y \neq 0$$

## Examples

The following examples show how to differentiate a composite natural logarithm function.

### EXAMPLE 1

Find the derivative of $latex f(x) = \ln(4x)$

This is a composite natural logarithm function, so we can use the chain rule to differentiate it. Considering $latex u=4x$ as the inner function, we can write $latex f(u)=\ln(u)$. Then, using the chain rule, we have: $$\frac{dy}{dx}=\frac{dy}{du} \frac{du}{dx}$$ $$\frac{dy}{dx}=\frac{1}{u} \times 4$$ Substituting $latex u=4x$ back into the function, we have: $$\frac{dy}{dx}=\frac{4}{4x}$$ $$\frac{dy}{dx}=\frac{1}{x}$$

### EXAMPLE 2

Determine the derivative of $latex F(x) = \ln(4x^2-6x)$. Let's use the chain rule. Then, we consider $latex u=4x^2-6x$ as the inner function and $latex f(u)=\ln(u)$ as the outer function.
Therefore, we start by finding the derivative of the external function: $$\frac{d}{du} ( \ln(u) ) = \frac{1}{u}$$ Now, we find the derivative of the inner function, $latex g(x)$: $$\frac{d}{dx}(g(x)) = \frac{d}{dx}(4x^2-6x)$$ $$\frac{d}{dx}(g(x)) = 8x-6$$ We multiply the derivative of the inner function by the derivative of the outer function: $$\frac{dy}{dx} = \frac{d}{du} (f(u)) \cdot \frac{d}{dx} (g(x))$$ $$\frac{dy}{dx} = \frac{1}{u} \cdot (8x-6)$$ Finally, we use the substitution $latex u=4x^2-6x$ and simplify: $$\frac{dy}{dx} = \frac{1}{4x^2-6x} \cdot (8x-6)$$ $$\frac{dy}{dx} = \frac{8x-6}{4x^2-6x}$$ $$\frac{dy}{dx} = \frac{4x-3}{2x^2-3x}$$

### EXAMPLE 3

What is the derivative of $latex f(x) = \ln(\sin(x))$? In this case, we consider $latex u=\sin(x)$ as the inner function. Therefore, $latex f(u)=\ln(u)$ is the outer function. Using the chain rule, we can write: $$\frac{dy}{dx}=\frac{dy}{du} \frac{du}{dx}$$ $$\frac{dy}{dx}=\frac{1}{u} \times \cos(x)$$ Substituting $latex u=\sin(x)$ back into the function, we have: $$\frac{dy}{dx}=\frac{1}{\sin(x)} \times \cos(x)$$ $$\frac{dy}{dx}=\frac{\cos(x)}{\sin(x)}$$ $$\frac{dy}{dx}=\cot(x)$$
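The limit definition and the three examples can also be verified numerically. The following sketch (standard-library Python, not part of the original article) checks that the difference quotient of ln converges to 1/x, and spot-checks each example's derivative at x = 2 with a central difference:

```python
import math

def forward_difference(f, x, h):
    # first-principles difference quotient (f(x+h) - f(x)) / h
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h=1e-6):
    # symmetric difference quotient, accurate to O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

# the difference quotient of ln at x = 3 approaches 1/3 as h shrinks
errors = [abs(forward_difference(math.log, 3.0, h) - 1 / 3.0)
          for h in (1e-1, 1e-3, 1e-5)]
assert errors[0] > errors[1] > errors[2]

# Example 1: d/dx ln(4x) = 1/x
assert abs(central_diff(lambda x: math.log(4 * x), 2.0) - 1 / 2.0) < 1e-8
# Example 2: d/dx ln(4x^2 - 6x) = (8x - 6)/(4x^2 - 6x) = 2.5 at x = 2
assert abs(central_diff(lambda x: math.log(4 * x**2 - 6 * x), 2.0) - 2.5) < 1e-8
# Example 3: d/dx ln(sin x) = cot(x)
assert abs(central_diff(lambda x: math.log(math.sin(x)), 2.0)
           - math.cos(2.0) / math.sin(2.0)) < 1e-8
```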
https://tkwant.kwant-project.org/doc/dev/tutorial/manybody.html
# 2.6. Solving the many-body problem

Warning The examples in this section take several minutes on a single-core desktop computer. To speed up the computation, the example scripts can be run in parallel, see section Parallelization with MPI.

We would like to study the many-body problem for an infinite one-dimensional chain. In second quantization the Hamiltonian reads

$\hat{H}(t) = \sum_{i,j} \gamma_{ij} \, \hat{c}^\dagger_i \hat{c}_j + \sum_{i} w(t) \theta(i_b - i) \, \hat{c}^\dagger_i \hat{c}_i$

where $$\gamma_{ii} = 1$$ for the on-site elements, $$\gamma_{ij} = -1$$ for nearest-neighbor hoppings, and $$w(t)$$ is taken, similar to section Solving one-body problems, as

$w(t) = \theta(t) v_p e^{- 2 (t / \tau)^2}.$

We are interested in the time evolution of the electron density

$n_i(t) = \langle \hat c^\dagger_i \hat c_i \rangle (t)$

The time-dependent pulse $$w(t)$$, which acts on all lattice sites to the left of $$i_b$$, is taken into account by a gauge transform similar to Time-dependent potentials and pulses and translates to a time-dependent coupling between $$i_b$$ and $$i_b+1$$. The Kwant code to build the system is

import kwant
import cmath
from scipy.special import erf

def gaussian(time, t0=40, A=1.57, sigma=24):
    # integrated pulse: the phase accumulated by the time-dependent coupling
    return A * (1 + erf((time - t0) / sigma))

# time dependent coupling with gaussian pulse
def coupling_nn(site1, site2, time):
    return - cmath.exp(- 1j * gaussian(time))

def make_system(L=400):

    # system building
    lat = kwant.lattice.square(a=1, norbs=1)
    syst = kwant.Builder()

    # central scattering region
    syst[(lat(x, 0) for x in range(L))] = 1
    syst[lat.neighbors()] = -1
    # time dependent coupling between two sites in the center
    syst[lat(L // 2, 0), lat(L // 2 - 1, 0)] = coupling_nn

    # attach two identical leads (restored following the tkwant example
    # scripts; the extracted snippet ended after defining the symmetry)
    sym = kwant.TranslationalSymmetry((-1, 0))
    lead_left = kwant.Builder(sym)
    lead_left[lat(0, 0)] = 1
    lead_left[lat.neighbors()] = -1
    syst.attach_lead(lead_left)
    syst.attach_lead(lead_left.reversed())

    return syst

Note that the code to build the system is basically the same as in the previous examples of the onebody problem, which were treated in first quantization.
The system looks similar to

For representation purposes, the central scattering system has been shrunk to only 20 sites in the plot and the time-dependent coupling is highlighted in red. Two approaches are possible to obtain the density expectation value: either a high-level approach using manybody.State, where the preprocessing is done automatically and which provides additional functionality, or alternatively a low-level approach using manybody.WaveFunction, where the different preprocessing steps must be handled manually. Both ways are shown below.

## 2.6.1. High-level automatic approach

The high-level approach comprises all preprocessing steps. The entire code is:

import tkwant
import kwant
import matplotlib.pyplot as plt

syst = make_system().finalized()
sites = [site.pos[0] for site in syst.sites]
times = [40, 80, 120, 160]

density_operator = kwant.operator.Density(syst)

state = tkwant.manybody.State(syst, max(times))
density0 = state.evaluate(density_operator)

for time in times:
    state.evolve(time=time)
    if time == 40:
        state.refine_intervals()
    error = state.estimate_error()
    print('time={}, error={:10.4e}'.format(time, error))
    density = state.evaluate(density_operator)
    plt.plot(sites, density - density0, label='time={}'.format(time))

plt.legend()
plt.xlabel(r'site position $i$')
plt.ylabel(r'charge density $n$')
plt.show()

time=40, error=5.1436e-07
time=80, error=5.0540e-04
time=120, error=3.6453e-03
time=160, error=1.1463e-02

Note that this approach is much simpler and provides additional methods to facilitate the numerical procedure without the need to fine-tune the quadrature by hand. While the high-level approach is less flexible, it can still be adapted in various ways. In the following we show how to change the lead occupation. The complete example script including MPI directives for parallel execution can be found in 1d_wire_high_level.py.
### Chemical potential and temperature of the leads

By default, the chemical potential and the temperature in all leads are identical and equal to zero. Setting them to the same non-zero value in all leads is possible via

occupations = tkwant.manybody.lead_occupation(chemical_potential=0.5, temperature=0.1)
state = tkwant.manybody.State(syst, max(times), occupations)

One can also set different values in each lead as

occup_left = tkwant.manybody.lead_occupation(chemical_potential=0.5, temperature=0.1)
occup_right = tkwant.manybody.lead_occupation(chemical_potential=-0.5, temperature=0.1)  # example values
occupations = [occup_left, occup_right]
state = tkwant.manybody.State(syst, max(times), occupations)

### Adaptive refinement and error estimate

The class manybody.State provides methods to estimate the quadrature error of the manybody integral and to adaptively refine the approximation to a given accuracy. The error is estimated via

error = state.estimate_error()
print('estimated integration error= {:10.4e}'.format(error))

estimated integration error= 1.3478e-08

By default, the error is estimated on the density expectation value. One can obtain the error also for other expectation values, as e.g. the current:

current_operator = kwant.operator.Current(syst)
error = state.estimate_error(error_op=current_operator)
print('estimated integration error= {:10.4e}'.format(error))

estimated integration error= 6.7801e-10

The quadrature intervals can be refined via

state.refine_intervals();

By default, the refinement is done up to a certain accuracy of the density expectation value. Again, the behavior can be changed:

current_operator = kwant.operator.Current(syst)
state.refine_intervals(rtol=1E-3, atol=1E-3, error_op=current_operator);

Note Adaptive refinement is computationally expensive. Exploring initially at low precision is often a good idea.

## 2.6.2. Low-level manual approach

The low-level approach is close to the algorithm for solving the manybody problem which is described in the Tkwant paper.
The code is:

from tkwant import leads, manybody
import kwant
import kwantspectrum
import functools
import numpy as np
import matplotlib.pyplot as plt

syst = make_system().finalized()
sites = [site.pos[0] for site in syst.sites]
times = [40, 80, 120, 160]

density_operator = kwant.operator.Density(syst)

# calculate the spectrum E(k) for all leads
spectra = kwantspectrum.spectra(syst.leads)

# define the lead occupation
occupations = manybody.lead_occupation(chemical_potential=0)

# estimate the cutoff energy Ecut from T, \mu and f(E)
# All states are effectively empty above E_cut
emin, emax = manybody.calc_energy_cutoffs(occupations)

# define boundary conditions
bdr = leads.automatic_boundary(spectra, tmax=max(times), emin=emin, emax=emax)

# calculate the k intervals for the quadrature
interval_type = functools.partial(manybody.Interval, order=20,
                                  quadrature='gausslegendre')
intervals = manybody.calc_intervals(spectra, occupations, interval_type)
intervals = manybody.split_intervals(intervals, number_subintervals=10)

# calculate all onebody scattering states at t = 0
tasks = manybody.calc_tasks(intervals, spectra, tmax=max(times))
psis = manybody.calc_initial_state(syst, tasks, bdr)

# set up the manybody wave function
wave_function = manybody.WaveFunction(psis, tasks)

density0 = wave_function.evaluate(density_operator)

for time in times:
    wave_function.evolve(time)
    density = wave_function.evaluate(density_operator)
    plt.plot(sites, density - density0, label='time={}'.format(time))

plt.legend()
plt.xlabel(r'site position $i$')
plt.ylabel(r'charge density $n$')
plt.show()

(A few lines of this listing were missing from the extracted text; they have been filled in following the tkwant API as used in the 1d_wire_low_level.py example.) The role of each function can be deduced from the Tkwant paper and the function documentation. While most lines of the above code are generic, a few lines are responsible for the numerical accuracy of the result and must be fine-tuned for each problem in question. The numerical accuracy is controlled by the integration order (given by the variable order) of a quadrature interval and by the number of subintervals (the variable number_subintervals) into which each initial quadrature interval is divided. The actual value of the variable order is less crucial and typically ranges between 10 and 20. The value of number_subintervals is highly system dependent and must be tuned.
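To see how order and number_subintervals shape the sampling, the sketch below (illustrative numpy code, not part of tkwant) builds Gauss-Legendre nodes on a hypothetical interval for two parameter choices with equal total cost:

```python
import numpy as np

def sample_points(a, b, order, nsub):
    # Gauss-Legendre nodes of the given order on each of nsub equal subintervals
    x, _ = np.polynomial.legendre.leggauss(order)
    edges = np.linspace(a, b, nsub + 1)
    pts = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        pts.extend(0.5 * (hi - lo) * x + 0.5 * (hi + lo))
    return np.sort(pts)

p1 = sample_points(-1.0, 1.0, order=2, nsub=10)   # low order, many subintervals
p2 = sample_points(-1.0, 1.0, order=10, nsub=2)   # high order, few subintervals

# same total number of sampling points, very different placement
assert len(p1) == len(p2) == 20
```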
Note: The numerical precision of the manybody expectation value is mainly determined by the integer variable number_subintervals in the above example. Larger values lead to a more precise result at the cost of longer compute time. The actual value is highly system dependent. It is good practice to start with a low value and to gradually increase it until the result converges.

To better understand the logic behind these two parameters, note that for Gaussian quadrature rules such as Gauss-Legendre or Gauss-Kronrod, the sampling points are not distributed equidistantly over the quadrature interval. The purpose of the function manybody.split_intervals() is to split a quadrature interval of a given order equidistantly into number_subintervals subintervals of the same order. It follows that order=2 with number_subintervals=10 and order=10 with number_subintervals=2 both lead to the same number of sampling points (20) to approximate the integral, but with a very different distribution of the points. The complete example script including MPI directives for parallel execution can be found in 1d_wire_low_level.py.

## 2.6.3. Summary

To summarize, we would like to highlight the similarity between the onebody and the manybody approach. The first similarity is the definition of the system using Kwant, which is the same whether the Hamiltonian is written in first quantization (onebody) or in second quantization (manybody). The second similarity is the API of the solvers for the onebody and the manybody Schrödinger equation. We show this using the example of the two classes onebody.ScatteringStates() and manybody.State(). After defining an observable, e.g.
```python
density_operator = kwant.operator.Density(syst)
```

both states can be evolved forward in time and expectation values can be evaluated in a similar way:

Onebody

```python
psi = tkwant.onebody.ScatteringStates(syst, energy=1, lead=0, tmax=10)[0]
psi.evolve(time=5)
density = psi.evaluate(density_operator)
```

Manybody

```python
state = tkwant.manybody.State(syst, tmax=10)
state.evolve(time=5)
density = state.evaluate(density_operator)
```
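The adaptive refinement and error estimation described above follow a generic pattern that can be sketched in plain Python, independent of tkwant (this is a schematic illustration of the idea, not tkwant's actual implementation): estimate an error on each interval by comparing a coarse rule against the same rule on the two halves, always bisect the worst interval, and stop once the summed estimate drops below a tolerance.

```python
import heapq
import math

def adaptive_quadrature(f, a, b, tol=1e-10):
    """Worst-first adaptive refinement of the integral of f over [a, b]."""
    def simpson(lo, hi):
        mid = 0.5 * (lo + hi)
        return (hi - lo) / 6.0 * (f(lo) + 4.0 * f(mid) + f(hi))

    def entry(lo, hi):
        coarse = simpson(lo, hi)
        mid = 0.5 * (lo + hi)
        fine = simpson(lo, mid) + simpson(mid, hi)
        err = abs(fine - coarse) / 15.0   # standard Simpson error estimate
        return (-err, lo, hi, fine, err)  # negated error -> max-heap behavior

    heap = [entry(a, b)]
    while sum(e[4] for e in heap) > tol:
        _, lo, hi, _, _ = heapq.heappop(heap)  # refine the worst interval first
        mid = 0.5 * (lo + hi)
        heapq.heappush(heap, entry(lo, mid))
        heapq.heappush(heap, entry(mid, hi))
    # return the refined value together with the final error estimate
    return sum(e[3] for e in heap), sum(e[4] for e in heap)

value, error = adaptive_quadrature(math.exp, 0.0, 1.0)
```

As with state.refine_intervals(), the refinement loop concentrates the computational effort where the error estimate is largest, so smooth regions of the integrand stay cheap.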
http://www.abisen.com/blog/metaprogramming/
# Metaprogramming in Julia

Recently I came across a clever application of metaprogramming in Julia by John Myles White and Randy Zwitch. I am not much of a coder and have only dabbled a bit in Python, so it's possible that this is a standard approach. Nevertheless, I found it inspiring, so I am writing about it :)

Wouldn't it be convenient to just create a specification list that provides the name, type, and default value for the fields, and call some function that would automatically create a corresponding composite type and its constructor method(s)? And also handle default values and more housekeeping tasks automagically.

Sidebar: If you have not tried them yet, I would recommend you check out two cool visualization libraries, ECharts.jl and Vega.jl.

The procedure explained below allows one to create a specification (spec) in the form of a list of tuples and call a function makespec to parse it. This approach comes in handy especially when writing code that requires maintaining many different "composite types". It also makes the code clean and easy to extend (check out ECharts.jl).

```julia
spec = [
    (:name, AbstractString, nothing),
    (:title, AbstractString, nothing),
    (:height, Number, 400),
    (:width, Number, 600),
    (:x, AbstractArray, nothing),
    (:y, AbstractArray, nothing),
]

makespec(:Scatter, spec)
```

This generates and executes the following code (or some version of it, depending upon your needs):

```julia
type Scatter
    name::Union{AbstractString, Void}
    title::Union{AbstractString, Void}
    height::Number
    width::Number
    x::Union{AbstractArray, Void}
    y::Union{AbstractArray, Void}
end

function Scatter()
    Scatter(nothing, nothing, 400, 600, nothing, nothing)
end
```

This is possible by exploiting the powerful metaprogramming capabilities in Julia. The code block below is responsible for dynamically creating the type block and function above.
```julia
# Create the composite type
function maketype(_name::Symbol, spec)
    n = length(spec)
    lines = Array{Expr}(n)
    for idx in 1:n
        entry = spec[idx]
        # Create a Union{} of the type specified in the spec and Void
        # to be able to handle missing values
        lines[idx] = Expr(:(::), entry[1], Union{entry[2], Void})
    end
    return Expr(:type, true, _name,
                Expr(:block, lines...))
end

# Create the constructor for the type
function makefunc(_name::Symbol, spec)
    return Expr(:function,
                Expr(:call, _name),
                Expr(:block,
                     Expr(:call, _name, map(entry -> entry[3], spec)...)))
end

# Wrapper function for calling the two functions above
function makespec(_name::Symbol, spec)
    eval(maketype(_name, spec))
    eval(makefunc(_name, spec))
end
```

The function maketype() creates the composite type taking the spec as input, and makefunc() creates the constructor using the defaults. I hope the code is readable enough that it is easy to understand what each piece is doing. But writing complex code using metaprogramming is still not my cup of tea, due to the prefix notation that metaprogramming expects.

Below I describe the approach I took to generate the code for one simple modification that I wanted to perform on the code block above (makefunc). The objective was to add the ability to process optional keyword arguments such that:

- Scatter(): would use the default values from the specification
- Scatter(;width=100): would use the specified value of width, and all other values would default to the spec

The challenge here is that writing code in prefix notation is error prone and quite confusing for me. I used a workaround to get to the desired result, where I wrote the end result and worked my way backwards. Below is the function that I would like the new makefunc to generate.

```julia
function Scatter(;args...)
    obj = Scatter(nothing, nothing, 400, 600, nothing, nothing)
    for entry in args
        if isdefined(obj, entry[1])             # If the argument is defined
            setfield!(obj, entry[1], entry[2])  # in the composite type (spec),
        end                                     # use the value provided as the argument
    end
    return obj
end
```

Then, in the Julia console, I fed the code through parse() followed by Meta.show_sexpr().

```julia
julia> instr = """function Scatter(;args...)
           obj=Layout(nothing, nothing, 400, nothing)
           for entry in args
               if isdefined(obj, entry[1])
                   setfield!(obj, entry[1], entry[2])
               end
           end
           return obj
       end"""

julia> parsed = parse(instr)

julia> sexpr = Meta.show_sexpr(parsed)
```

The call Meta.show_sexpr(parsed) produces the following code in S-expression form. Converting that to Expr() form is as simple as converting each ( ) to Expr( ).

```julia
(:function, (:call, :Scatter, (:parameters, (:..., :args))),
  (:block,
    :( # none, line 2:),
    (:(=), :obj, (:call, :Layout, :nothing, :nothing, 400, :nothing)),
    :( # none, line 3:),
    (:for, (:(=), :entry, :args),
      (:block,
        :( # none, line 4:),
        (:if, (:call, :isdefined, :obj, (:ref, :entry, 1)),
          (:block,
            :( # none, line 5:),
            (:call, :setfield!, :obj, (:ref, :entry, 1), (:ref, :entry, 2))
          ))
      )),
    :( # none, line 8:),
    (:return, :obj)
  ))
```

This gave me some code to work with, where I could make simple modifications to get the function to do what we desired.
```julia
function makefunc(_name::Symbol, spec)
    return Expr(:function,
        Expr(:call, _name, Expr(:parameters, Expr(:..., :args))),
        Expr(:block,
            Expr(:(=), :obj, Expr(:call, _name, map(entry -> entry[3], spec)...)),
            Expr(:for, Expr(:(=), :entry, :args),
                Expr(:block,
                    Expr(:if, Expr(:call, :isdefined, :obj, Expr(:ref, :entry, 1)),
                        Expr(:block,
                            Expr(:call, :setfield!, :obj, Expr(:ref, :entry, 1), Expr(:ref, :entry, 2))
                        )
                    )
                )
            ),
            Expr(:return, :obj)
        ))
end
```

Executing makefunc(:Layout, spec) produces the following code, which can be inspected to validate what will be executed when the constructor is created with eval(makefunc(:Layout, spec)):

```julia
julia> makefunc(:Layout, spec)
:(function Layout(; args...)
      obj = Layout(nothing,nothing,400,600,nothing,nothing)
      for entry = args
          if isdefined(obj,entry[1])
              setfield!(obj,entry[1],entry[2])
          end
      end
      return obj
  end)
```

I hope this post was useful to somebody :) Do drop me a line if the approach I took could be improved upon.
https://gateoverflow.in/211671/what-would-be-the-message-sent-by-the-sender-in-rsa-algo?show=211727
RSA public key cryptography is used with the public key pair (e, n) given as (5, 35) and the private key pair (d, n) as (29, 35). If the receiver receives the message 22, what message was sent by the sender?

1) 22          2) 29
3) 35          4) 31

+2 22?

0 Please explain how you calculated $22^{29} \bmod 35$. How do you decide that 29 should be broken into 14 and 15, rather than 16 and 17 or 28 and 1? Is there any rule for this?

0 1. If the power is even, we can directly break it into equal parts: 22 is divisible by 2, so we can take the pair (11, 11).
2. If the power is odd, like 23, then the largest even number below it is 22, giving (11, 11) with 1 remaining, so we take the pair (11, 12).

$22^{28} = 22^{14} \cdot 22^{14}$

$22^{29} = 22^1 \cdot 22^{14} \cdot 22^{14} = 22^{14} \cdot 22^{15}$

Hope this clears your doubt.

Answer:

$(e, n) \rightarrow (5, 35)$, $(d, n) \rightarrow (29, 35)$, encrypted message $c = 22$.

We need to find the original message. Let the original message be $p$. We know that

$p = c^d \bmod n = 22^{29} \bmod 35$

Now, how to calculate $22^{29} \bmod 35$?

Approach 1: simple divide and conquer strategy. Divide the power into two parts:

$22^{14+15} \bmod 35 = (22^{14} \cdot 22^{15}) \bmod 35 = ((22^{14} \bmod 35) \cdot (22^{15} \bmod 35)) \bmod 35$

because $(A \cdot B) \bmod n = ((A \bmod n) \cdot (B \bmod n)) \bmod n$.

$22^{14} \bmod 35 = (22^7 \cdot 22^7) \bmod 35 = ((22^7 \bmod 35) \cdot (22^7 \bmod 35)) \bmod 35 = (8 \cdot 8) \bmod 35 = 64 \bmod 35 = 29$

Similarly, for $22^{15}$ we get $8$. Therefore

$((29 \bmod 35) \cdot (8 \bmod 35)) \bmod 35 = (29 \cdot 8) \bmod 35 = 22$

Approach 2: write the power $B$ in binary form:

$29_{10} = 11101_2$

Start from the rightmost digit with $k = 0$; for each digit:
1. if the digit is 1, then we need $2^k$; otherwise, we do not
2. add 1 to $k$ and move left to the next digit

$29_{10} = 2^4 + 2^3 + 2^2 + 2^0 = 16 + 8 + 4 + 1$

$22^{29} = 22^{16+8+4+1} = 22^{16} \cdot 22^8 \cdot 22^4 \cdot 22^1$

$22^{16} \bmod 35 = 1$, $22^{8} \bmod 35 = 1$, $22^{4} \bmod 35 = 1$, $22^{1} \bmod 35 = 22$

$22^{29} \bmod 35 = (1 \cdot 1 \cdot 1 \cdot 22) \bmod 35 = 22$

selected

0 Thanks a lot for such a brief and precise solution. Please discuss if any doubt :)

+1 How did you calculate $22^{14}$?

0 Please explain how you calculated $22^{14} \bmod 35$; for $22^{14}$ alone I am getting a very large value in the calculator. Did you use any trick?

+1 I just used the GATE calculator together with the mod operation. Basically I took, by trial, any value that does not go out of bounds of the calculator. I don't know if it's a correct way, but it gives the correct answer in less time, and the mod rules are applied.

0 When I did it from that calculator directly by taking $22^{29} \bmod 35$, I got 14 as the answer. There is definitely some logic for calculating $x^a \bmod n$, because we can break $a$ into many different power terms; how do you get a hint for breaking the power?

0 $22^{29}$ does not fit in the calculator (you can see an error term there), hence that's not the correct way and will lead to an incorrect solution. My way of thinking: the power is 29, so it cannot be broken into 2 factors that multiply to give 29; instead break it so that the sum of 2 numbers gives 29. Trying randomly: 20 and 9, but $22^{20}$ is out of bounds; 15 and 14, yes, correct. Hope this way of thinking helps :)

0 Please see this; it calculated $22^{29}$; it's the GATE virtual calculator only.

+1 Don't use the virtual calculator for calculating mod or powers of large numbers; it gives the correct answer only up to a certain point. This link will solve some of these problems: https://crypto.stackexchange.com/questions/5889/calculating-rsa-private-exponent-when-given-public-exponent-and-the-modulus-fact
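Approach 2 above is exactly the square-and-multiply algorithm. A short Python sketch of it (the function name is my own; Python's built-in `pow(base, exp, mod)` computes the same thing):

```python
def mod_pow(base, exp, mod):
    """Square-and-multiply: scan the exponent's bits from the right,
    squaring the base at each step and multiplying it into the result
    whenever the current bit is 1."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:               # current bit of the exponent is 1
            result = result * base % mod
        base = base * base % mod  # square for the next bit
        exp >>= 1
    return result

# decrypting c = 22 with the private key (d, n) = (29, 35):
print(mod_pow(22, 29, 35))  # → 22, matching pow(22, 29, 35)
```

Every intermediate value stays below mod**2, which is why this avoids the calculator-overflow problem discussed in the comments.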
https://direct.mit.edu/view-large/992855
Table 1: Summary of Main Notations.

| Notation | Meaning |
|---|---|
| Capital letter | A matrix |
| $m$, $n$ | Size of the data matrix $M$ |
| $\log(\cdot)$ | Natural logarithm |
| $I$, $\mathbf{0}$, $\mathbf{1}$ | The identity matrix, all-zero matrix, and all-one vector |
| $e_i$ | Vector whose $i$th entry is 1 and others are 0 |
| $M_{:j}$ | The $j$th column of matrix $M$ |
| $M_{ij}$ | The entry at the $i$th row and $j$th column of matrix $M$ |
| $M^T$ | Transpose of matrix $M$ |
| $M^{\dagger}$ | Moore-Penrose pseudo-inverse of matrix $M$ |
| $\|\cdot\|_2$ | Euclidean norm of a vector |
| $\|\cdot\|_*$ | Nuclear norm of a matrix (the sum of its singular values) |
| $\|\cdot\|_0$ | $\ell_0$ norm of a matrix (the number of nonzero entries) |
| $\|\cdot\|_{2,0}$ | $\ell_{2,0}$ norm of a matrix (the number of nonzero columns) |
| $\|\cdot\|_F$ | Frobenius norm of a matrix |
| $\|\cdot\|$ | Matrix operator norm, the largest singular value of a matrix |
https://appliedcombinatorics.org/book/s_posets_subset-lattice.html
## Section 6.5 The Subset Lattice

When $X$ is a finite set, the family of all subsets of $X$, partially ordered by inclusion, forms a subset lattice.¹ We illustrate this in Figure 6.26, where we show the lattice of all subsets of $\{1,2,3,4\}$. In this figure, note that we are representing sets by bit strings, and we have further abbreviated the notation by writing strings without commas and parentheses.

¹ A lattice is a special type of poset. You do not have to concern yourself with the definition and can safely replace "lattice" with "poset" as you read this chapter.

For a positive integer $t$, we let $\mathbf{2}^t$ denote the subset lattice consisting of all subsets of $\{1,2,\dots,t\}$ ordered by inclusion. Some elementary properties of this poset are:

1. The height is $t+1$ and all maximal chains have exactly $t+1$ points.
2. The size of the poset $\mathbf{2}^t$ is $2^t$ and the elements are partitioned into ranks (antichains) $A_0, A_1,\dots, A_t$ with $|A_i|=\binom{t}{i}$ for each $i=0,1,\dots,t$.
3. The maximum size of a rank in the subset lattice occurs in the middle, i.e., if $s=\lfloor t/2\rfloor$, then the largest binomial coefficient in the sequence $\binom{t}{0}, \binom{t}{1},\binom{t}{2},\dots,\binom{t}{t}$ is $\binom{t}{s}$. Note that when $t$ is odd, there are two ranks of maximum size, but when $t$ is even, there is only one.

### Subsection 6.5.1 Sperner's Theorem

For the width of the subset lattice, we have the following classic result of Sperner.

The width of the poset $\mathbf{2}^t$ is at least $\binom{t}{\lfloor t/2\rfloor}$, since the set of all $\lfloor t/2\rfloor$-element subsets of $\{1,2,\dots,t\}$ is an antichain.
We now show that the width of $\mathbf{2}^t$ is at most $\binom{t}{\lfloor t/2\rfloor}$. Let $w$ be the width of $\mathbf{2}^t$ and let $\{S_1,S_2,\dots, S_w\}$ be an antichain of size $w$ in this poset, i.e., each $S_i$ is a subset of $\{1,2,\dots,t\}$ and if $1\le i\lt j\le w$, then $S_i\nsubseteq S_j$ and $S_j\nsubseteq S_i$.

For each $i$, consider the set $\mathcal{S}_i$ of all maximal chains which pass through $S_i$. It is easy to see that if $|S_i|=k_i$, then $|\mathcal{S}_i|=k_i!(t-k_i)!$. This follows from the observation that to form such a maximal chain with $S_i$ as an intermediate point, you delete the elements of $S_i$ one at a time to form the sets of the lower part of the chain, and to form the upper part of the chain, you add the elements not in $S_i$ one at a time. Note further that if $1\le i \lt j\le w$, then $\mathcal{S}_i\cap \mathcal{S}_j =\emptyset$, for if a maximal chain belonged to both $\mathcal{S}_i$ and $\mathcal{S}_j$, then one of $S_i$ and $S_j$ would be a subset of the other.

Altogether, there are exactly $t!$ maximal chains in $\mathbf{2}^t$. This implies that
\begin{equation*}
\sum_{i=1}^{w} k_i!(t-k_i)!\le t!,
\end{equation*}
and therefore
\begin{equation*}
\sum_{i=1}^{w}\frac{k_i!(t-k_i)!}{t!}= \sum_{i=1}^{w}\frac{1}{\binom{t}{k_i}}\le 1.
\end{equation*}
Since every binomial coefficient $\binom{t}{k_i}$ is at most the middle coefficient $\binom{t}{\lceil t/2\rceil}$, it follows that
\begin{equation*}
\sum_{i=1}^{w}\frac{1}{\binom{t}{\lceil\frac{t}{2}\rceil}}\le 1.
\end{equation*}
Thus
\begin{equation*}
w\le \binom{t}{\lceil\frac{t}{2}\rceil}=\binom{t}{\lfloor\frac{t}{2}\rfloor}.
\end{equation*}
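For small $t$, Sperner's theorem is easy to check by brute force (a quick illustrative sketch, not part of the original text; names are my own): enumerate every family of subsets of $\{1,\dots,t\}$, keep only the antichains, and compare the maximum size with $\binom{t}{\lfloor t/2\rfloor}$.

```python
from itertools import combinations
from math import comb

def subset_lattice_width(t):
    """Maximum antichain size in the lattice of subsets of {0, ..., t-1},
    found by exhaustive search over all 2**(2**t) families of subsets."""
    subsets = [frozenset(c) for r in range(t + 1)
               for c in combinations(range(t), r)]
    n = len(subsets)  # 2**t subsets in total
    best = 0
    for mask in range(1 << n):
        family = [subsets[i] for i in range(n) if mask >> i & 1]
        # an antichain: no member contains another (frozenset <= is subset test)
        if len(family) > best and all(
                not (a <= b or b <= a)
                for a, b in combinations(family, 2)):
            best = len(family)
    return best

# the width equals the middle binomial coefficient, as the theorem states
for t in range(1, 4):
    assert subset_lattice_width(t) == comb(t, t // 2)
```

The double-exponential search limits this check to very small $t$, which is exactly why the counting argument in the proof above is needed in general.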