https://m.habr.com/en/company/microsoft/blog/439672/
18 February 2019

# PowerShell Basics: Detecting if a String Ends with a Certain Character

Did you know you can detect if a string ends or starts with a specific character in PowerShell? Thomas Rayner previously shared on CANITPRO.NET how this can easily be done using regular expressions, more commonly known as regex. Original in blog.

Consider the following examples:

```
'something\' -match '\\$'  # returns true
'something' -match '\\$'   # returns false
'\something' -match '^\\'  # returns true
'something' -match '^\\'   # returns false
```

In the first two examples, the script checks whether the string ends in a backslash. In the last two examples, it checks whether the string starts with one.

The regex pattern matched in the first two is `\\$`. What does that mean? The first part, `\\`, means "a backslash": because `\` is the escape character, we're basically escaping the escape character. The last part, `$`, is the signal for the end of the line. Effectively what we have is "anything at all, where the last thing on the line is a backslash", which is exactly what we're looking for. In the second two examples, the `\\` moves to the start of the pattern, which begins with `^` instead of ending with `$`, because `^` is the signal for the start of the line.

Now you can do things like this:

```
$dir = 'c:\temp'
if ($dir -notmatch '\\$') {
    $dir += '\'
}
$dir  # returns 'c:\temp\'
```

Here, the script checks whether the string 'c:\temp' ends in a backslash, and appends one if it doesn't.
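The same anchor patterns carry over to other regex engines. As a cross-check (not part of the original post), here is the equivalent in Python's `re` module; note that in Python, too, the backslash must be doubled inside the pattern:

```python
import re

# The regex \\$ ("a backslash at end of line") is written as the raw
# string r'\\$'; ^\\ anchors a backslash to the start of the string.
print(bool(re.search(r'\\$', 'something\\')))   # True: ends with a backslash
print(bool(re.search(r'\\$', 'something')))     # False
print(bool(re.search(r'^\\', '\\something')))   # True: starts with a backslash
print(bool(re.search(r'^\\', 'something')))     # False
```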
https://www.physicsforums.com/threads/1-10-100-1000-1-9.383493/
# 1 + 10 + 100 + 1000 + ... = -1/9

## Main Question or Discussion Point

$$S = 1 + 10 + 100 + 1000 + 10000 + ...$$ $$10S = 10 + 100 + 1000 + 10000 + 100000 + ...$$ $$S - 10S = (1 + 10 + 100 + 1000 + 10000 + ...) - (10 + 100 + 1000 + 10000 + ...)$$ $$-9S = 1 + (10 - 10) + (100 - 100) + (1000 - 1000) + (10000 - 10000) + ...$$ $$-9S = 1 + 0 + 0 + 0 + 0 + ...$$ $$-9S = 1$$ $$S = -1/9$$

What's wrong (or right) with this? Thanks, Unit

Hurkyl (Staff Emeritus, Gold Member): Also, I believe that sum converges as an ordinary infinite sum in the 2-adics and the 5-adics. (And, of course, it does not converge as an ordinary infinite sum in the reals!)

CRGreathouse (Homework Helper): I would have [intuitively] expected it to converge in all the p-adics. Am I wrong?

Hurkyl: In any other p-adic field, the terms don't converge to zero!

Char. Limit (Gold Member): I'm pretty sure that you can't pair up terms in an infinite sum.

Redbelly98 (Staff Emeritus, Homework Helper): The way I remember it, proofs like this actually say something like: if S exists, then S = 1 + 10 + 100 + ... So if S does not exist, then the remaining statements do not necessarily hold true.

Unit: Brilliant! I had completely forgotten about variables and their related hypothetical syllogisms. Thanks!
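Hurkyl's remark about 2-adic (and 5-adic) convergence can be checked numerically: the partial sums satisfy $$9 S_n + 1 = 10^n = 2^n 5^n$$, so the 2-adic distance from $$S_n$$ to $$-1/9$$ shrinks without bound. A small sketch of this (my own illustration, not from the thread):

```python
# Partial sums S_n = 1 + 10 + ... + 10^(n-1) = (10^n - 1) // 9.
# In the 2-adic metric a number is "small" when it is divisible by a
# high power of 2.  Since 9*S_n + 1 = 10^n = 2^n * 5^n, the 2-adic
# distance between S_n and -1/9 goes to zero as n grows.

def v2(x):
    """2-adic valuation: the largest v with 2^v dividing x."""
    v = 0
    while x % 2 == 0:
        x //= 2
        v += 1
    return v

for n in (1, 5, 10, 20):
    S = (10**n - 1) // 9
    print(n, v2(9 * S + 1))  # the valuation equals n, growing without bound
```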
https://brilliant.org/discussions/thread/what-is-going-on-here-would-like-an-answer-please/
So I was trying to derive a way to approximate $$\pi$$ using my compass and straightedge, and then use algebra. I began by creating a $$30$$ degree angle: so $$CAD$$ is $$30$$ degrees. Then, I imagined bisecting this angle into infinity. Notice that if we draw a segment from $$C$$ to $$D$$, we get an isosceles triangle. We can calculate this length using the law of cosines: $$x = \sqrt{2-2\cos(\frac{30}{2^{r}})}$$, where $$r$$ is a reiteration (another bisection). So, if I am correct in my assumption, $$\pi$$ should be about $$x \cdot n$$, if $$n$$ is the number of divisions in the circle. We can find $$n$$ easily enough: $$n = 24 \cdot 2^{r-2}$$. We know this because, as we bisect, we get a table of values: $$\left \{ 24, 48, 96, \ldots \right \}$$. Likewise, we find $$\theta$$ thusly: $$\theta = \frac{30}{2^{r}}$$. This table of values is found by dividing 360 by $$n$$: $$\left \{ 30, 15, 7.5, ... \right \}$$. So to find $$x$$, we use the law of cosines, take the square root, and multiply by $$n$$. This gives me the equation: $\pi = \sqrt{2- 2 \cdot \cos \dfrac{30}{2^{r}}} \cdot 24 \cdot 2^{r-2}$ This makes sense from the way I constructed it, but not here: $\lim_{r \rightarrow \infty} \sqrt{2- 2 \cdot \cos \frac{30}{2^{r}}} \cdot 24 \cdot 2^{r-2}$ This becomes: $\lim_{r \rightarrow \infty} \sqrt{2- 2 \cdot \cos(30 \cdot 0)} \cdot 24 \cdot 2^{r-2} = 0$ Obviously, $$\pi \neq 0$$. So, I tested with two values of $$r$$ that my calculator could handle. For $$r =17$$, $$\pi = 3.14159265358$$ (correct to 11 decimal places). However, at $$r = 18$$, $$\pi = 3.1415$$ (correct to only 4 decimal places). So why does this equation get close to $$\pi$$, as it is supposed to, and then stop, and then appear to approach zero? Thanks for the help, I am not all that great at math, and would really appreciate it! Note by Drex Beckman, 1 year ago

Extremely interesting! How are you saying that limit is 0?
(Hint: it's an indeterminate form, $$0 \cdot \infty$$.) The correct limit is $$\pi$$, just like you wanted. · 1 year ago

Well, it seemed like as r approached infinity, the argument of the cosine would approach 0; of course, cos(0) = 1, and so we would get $$0 \cdot 24 \cdot 2^{r-2}$$. Since the limit is $$\pi$$, is there something wrong with my calculations? The precision seemed to degrade for higher r's. Thanks, I never had the chance to learn limits in school, so I did not realize there was such a thing as an indeterminate form, and I was unsure what to do with the $$0 \cdot \infty$$ case. I just assumed that multiplying by zero gives zero for any number. Thanks for the help! :) · 1 year ago

Oh okay, I'll try to explain then. Suppose you had two functions, $$f(x)$$ and $$g(x)$$, with $$\lim_{x \to \infty} f(x) = \infty$$ and $$\lim_{x \to \infty} g(x) = 0$$. Now what is $$\lim_{x \to \infty} f(x)\cdot g(x)$$? If you think about it, you really can't say, because it could be $$0$$ or $$\infty$$ or some value in between. Why? It depends on the functions $$f(x)$$ and $$g(x)$$ themselves. Let me give you some examples. Let $$L = \lim_{x \to \infty} f(x)\cdot g(x)$$.

1. $$f(x) = x$$ and $$g(x) = \frac{1}{x} \Rightarrow L = 1$$ as $$f(x)g(x) = 1$$ always.
2. $$f(x) = x^2$$ and $$g(x) = \frac{1}{x} \Rightarrow L = \infty$$ as $$f(x)g(x) = x$$ always.
3. $$f(x) = x$$ and $$g(x) = \frac{1}{x^2} \Rightarrow L = 0$$ as $$f(x)g(x) = \frac{1}{x}$$ always.
4. $$f(x) = 5x$$ and $$g(x) = \frac{1}{x} \Rightarrow L = 5$$ as $$f(x)g(x) = 5$$ always.

So, we've seen that the limit can be anything, really. This is why we call $$0 \cdot \infty$$ an indeterminate form: it can 'evaluate' to anything. Now, to your question: how do we evaluate $$\lim_{r \rightarrow \infty} \sqrt{2- 2 \cdot \cos \frac{30}{2^{r}}} \cdot 24 \cdot 2^{r-2}$$? Here, $$f(r) = 24 \cdot 2^{r - 2}$$ and $$g(r) = \sqrt{2 - 2 \cdot \cos \frac{30}{2^r}}$$ (understand why). So what is L here?
(Convert $$30$$ degrees to $$\frac{\pi}{6}$$ radians.) L = $$\lim_{r \rightarrow \infty} \sqrt{2- 2 \cdot \cos \frac{\pi}{6 \cdot 2^{r}}} \cdot 6 \cdot 2^{r}$$. Let $$t = \dfrac{1}{6\cdot 2^r}$$. Then L = $$\lim_{t \rightarrow 0} \dfrac{\sqrt{2- 2 \cdot \cos \pi t }}{t} = \lim_{t \rightarrow 0} \dfrac{\sqrt{2}\sqrt{1 - \cos \pi t }}{t} = \lim_{t \rightarrow 0} \dfrac{\sqrt{2}\sqrt{2 \sin^2 \frac{\pi t}{2}}}{t} = \lim_{t \rightarrow 0} \dfrac{2 \sin \frac{\pi t}{2}}{t} = \pi$$. I used the well-known facts that $$1 - \cos 2x = 2 \sin^2 x$$ and $$\lim_{x \to 0} \frac{\sin x}{x} = 1$$. · 1 year ago

Well explained. It's a very common misconception to "let $$n \rightarrow \infty$$ in a specific portion of the expression, while ignoring the rest of it". Staff · 12 months ago

Thanks a lot, I think I understand it now! Very clear explanation, also! +1 · 1 year ago

Thanks! Also, the reason your calculator drops off in precision is because of rounding errors and approximate values for the cosines and sines. · 1 year ago
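The rounding-error point in the last reply can be made concrete. Evaluating the chord as $$\sqrt{2 - 2\cos\theta}$$ subtracts two nearly equal numbers for tiny $$\theta$$ (catastrophic cancellation), while the algebraically identical form $$2\sin(\theta/2)$$ stays accurate. A sketch using the formulas from the discussion (the function names are mine):

```python
import math

def pi_naive(r):
    # sqrt(2 - 2*cos(30/2^r degrees)) * 24 * 2^(r-2), as in the post
    theta = math.radians(30 / 2**r)
    return math.sqrt(2 - 2 * math.cos(theta)) * 24 * 2**(r - 2)

def pi_stable(r):
    # chord = 2*sin(theta/2) equals sqrt(2 - 2*cos(theta)) exactly,
    # but avoids the cancellation in the subtraction
    theta = math.radians(30 / 2**r)
    return 2 * math.sin(theta / 2) * 24 * 2**(r - 2)

print(pi_naive(17))   # still close to pi
print(pi_naive(30))   # 0.0: cos(theta) rounds to exactly 1, so 2 - 2*cos = 0
print(pi_stable(30))  # accurate to machine precision
```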
https://harmony.cs.cornell.edu/docs/textbook/consensus/
# Distributed Consensus

Distributed consensus is the problem of having a collection of processors agree on a single value over a network. For example, in state machine replication, the state machines have to agree on which operation to apply next. Without failures, this can be solved using leader election: first elect a leader, then have that leader decide a value. But consensus often has to be done in adverse circumstances, for example in the face of processor failures. Each processor proposes a value, which we assume here to be from the set { 0, 1 }. By the usual definition of consensus, we want the following three properties:

1. Validity: a processor can only decide a value that has been proposed;
2. Agreement: if two processors decide, then they decide the same value;
3. Termination: each processor eventually decides.

The consensus problem is impossible to solve in the face of processor failures without making assumptions about how long it takes to send and receive a message. Here we will not worry about Termination.

consensus.hny:

```
const N = 4

proposals = [ choose({0, 1}) for i in {0..N-1} ]
decision = choose { x for x in proposals }

def processor(proposal):
    if choose { False, True }:
        print decision

print proposals
for i in {0..N-1}:
    spawn processor(proposals[i])
```

Figure 29.1 presents a specification for binary consensus---the proposals are from the set { 0, 1 }. In this case there are four processors. The proposal of processor i is in proposals[i]. The decision is chosen from the set of proposals. Each processor may or may not print the decision---capturing the absence of the Termination property. It may be that no decisions are made, but that does not violate either Validity or Agreement. Thus the behavior of the program is to first print the array of proposals, followed by some subset of processors printing their decision.
Notice the following properties:

• there are $$16 = 2^4$$ possible proposal configurations;
• all processors that decide decide the same value;
• if all processors propose 0, then all processors that decide decide 0;
• if all processors propose 1, then all processors that decide decide 1.

This is just the specification---in practice we do not have a shared variable in which we can store the decision a priori. We will present a simple consensus algorithm that can tolerate fewer than $$1/3$$ of the processors failing by crashing. More precisely, constant F contains the maximum number of failures, and we will assume there are N = 3F + 1 processors.

bosco.hny:

```
import bag

const F = 1
const N = (3 * F) + 1
const NROUNDS = 3

proposals = [ choose({0, 1}) for i in {0..N-1} ]
network = bag.empty()

def receive(round, k):
    let msgs = { e:c for (r,e):c in network where r == round }:
        result = bag.combinations(msgs, k)

def processor(proposal):
    var estimate, decided = proposal, False
    for round in {0..NROUNDS-1}:
        atomically when exists quorum in receive(round, N - F):
            let count = [ bag.multiplicity(quorum, i) for i in { 0..1 } ]:
                assert count[0] != count[1]
                estimate = 0 if count[0] > count[1] else 1
                if count[estimate] == (N - F):
                    if not decided:
                        print estimate
                        decided = True
    assert estimate in proposals   # check validity

print proposals
for i in {0..N-1}:
    spawn processor(proposals[i])
```

Figure 29.2 presents our algorithm. Besides the network variable, it uses a shared list of proposals and a shared set of decisions. In this particular algorithm, all messages are broadcast to all processors, so they do not require a destination address. The N processors go through a sequence of rounds in which they wait for N - F messages, update their state based on the messages, and broadcast messages containing their new state. The reason that a processor waits for N - F rather than N messages is because of failures: up to F processors may never send a message, so it would be unwise to wait for all N.
You might be tempted to use a timer and time out on waiting for a particular processor. But how would you initialize that timer? While we will assume that the network is reliable, there is no guarantee that messages arrive within a particular time. We call a set of N - F processors a quorum. A quorum must suffice for the algorithm to make progress. The state of a processor consists of its current round number (initially 0) and an estimate (initially the proposal). Therefore, messages contain a round number and an estimate. To start things, each processor first broadcasts its initial round number and initial estimate. The number of rounds that are necessary to achieve consensus is not bounded. But Harmony can only check finite models, so there is a constant NROUNDS that limits the number of rounds. In Line 21, a processor waits for N - F messages using the Harmony atomically when exists statement. Since Harmony has to check all possible executions of the protocol, the receive(round, k) method returns all subbags of messages for the given round that have size k = N - F. The method uses a dictionary comprehension to filter out all messages for the given round and then uses the bag.combinations method to find all combinations of size k. The atomically when exists statement waits until there is at least one such combination and then chooses an element, which is bound to the quorum variable. The body of the statement is then executed atomically. This is usually how distributed algorithms are modeled, because processes can only interact through the network. There is no need to interleave the different processes other than when messages are delivered. By executing the body atomically, a lot of unnecessary interleavings are avoided, and this significantly reduces the state space that must be explored by the model checker. The body of the atomically when exists statement contains the core of the algorithm.
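The decision rule inside that body is small enough to restate outside Harmony. A Python sketch of one round's update (my own paraphrase of the algorithm's core, assuming the crash-fault, binary-value setting):

```python
def round_update(quorum):
    """quorum: the N - F binary estimates received for this round.
    Returns (new_estimate, decided).  Since N - F = 2F + 1 is odd,
    the majority is always well defined."""
    count = [quorum.count(0), quorum.count(1)]
    assert count[0] != count[1]               # odd quorum size: no ties
    estimate = 0 if count[0] > count[1] else 1
    decided = count[estimate] == len(quorum)  # unanimous quorum -> decide
    return estimate, decided

print(round_update([0, 0, 1]))  # (0, False): majority 0, not unanimous
print(round_update([1, 1, 1]))  # (1, True): unanimous, so decide 1
```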
Note that N - F = 2F + 1, so the number of messages is guaranteed to be odd. Also, because there are only 0 and 1 values, there must exist a majority of zeroes or ones. Variable count[0] stores the number of zeroes and count[1] stores the number of ones received in the round. The rules of the algorithm are simple:

• update estimate to be the majority value;
• if the quorum is unanimous, decide the value.

After that, proceed with the next round. To check for correct behavior, run the following two commands:

```
$ harmony -o consensus.hfa code/consensus.hny
$ harmony -B consensus.hfa code/bosco.hny
```

Note that the second command prints a warning: "behavior warning: strict subset of specified behavior." Thus, the set of behaviors that our algorithm generates is a subset of the behavior that the specification allows. Figure 29.3 shows the behavior, and indeed it is not the same as the behavior of Figure 29.1. This is because in our algorithm the outcome is decided a priori if more than two thirds of the processors have the same proposal, whereas in the consensus specification the outcome is only decided a priori if the processors are initially unanimous. Another difference is that if the outcome is decided a priori, all processors are guaranteed to decide.

bosco2.hny:

```
import bag

const F = 1
const N = (3 * F) + 1
const NROUNDS = 3

let n_zeroes = choose { 0 .. N / 2 }:
    proposals = ([0,] * n_zeroes) + ([1,] * (N - n_zeroes))
network = bag.empty()

def receive(round):
    let msgs = { e:c for (r,e):c in network where r == round }:
        result = {} if bag.size(msgs) < N else { msgs }

def processor(proposal):
    var estimate, decided = proposal, False
    for round in {0..NROUNDS-1}:
        atomically when exists msgs in receive(round):
            let choices = bag.combinations(msgs, N - F)
            let quorum = choose(choices)
            let count = [ bag.multiplicity(quorum, i) for i in { 0..1 } ]:
                assert count[0] != count[1]
                estimate = 0 if count[0] > count[1] else 1
                if count[estimate] == (N - F):
                    if not decided:
                        print estimate
                        decided = True
    assert estimate in proposals   # validity

print proposals
for i in {0..N-1}:
    spawn processor(proposals[i])
```

While one can run this code in little time for F = 1, for F = 2 the state space to explore is already quite large. One way to reduce the state space is the following realization: each processor only considers messages for the round that it is in. If a message is for an old round, the processor ignores it; if a message is for a future round, the processor buffers it. So one can simplify the model and have each processor wait for all N messages in a round instead of N - F. It would still have to choose to consider just N - F out of those N messages, but executions in which some processors are left behind in all rounds are no longer considered. The model still includes executions where some subset of N - F processors only choose each other's messages and essentially ignore the messages of the remaining F processors, so the resulting model is just as good. Another way to reduce the state space is to leverage symmetry. First of all, it does not matter who proposes a particular value. Also, the values 0 and 1 are not important to how the protocol operates. So, with 5 processors (F = 2), say, we only need to explore the cases where no processor proposes 1, where exactly one processor proposes 1, and where two processors propose 1.
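The symmetry argument shrinks the number of initial configurations the model checker must consider from $$2^N$$ proposal vectors down to $$\lfloor N/2 \rfloor + 1$$ cases. A quick count (illustration only, not part of the Harmony model):

```python
from itertools import product

N = 5  # F = 2, so N = 3F - 1 ... here simply 5 processors
full = len(list(product((0, 1), repeat=N)))  # all 0/1 proposal vectors
# Up to relabeling the processors and swapping the roles of 0 and 1,
# only the number of 1-proposals, from 0 up to N // 2, matters.
reduced = N // 2 + 1
print(full, reduced)  # 32 configurations collapse to 3 symmetry classes
```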
Figure 29.4 shows the code for this optimized model. Running this with F = 2 does not take very long, and this approach is a good blueprint for testing other round-based protocols (of which there are many).

## Exercises

29.1 The algorithm as given works in the face of crash failures. A more challenging class to tolerate is arbitrary failures, in which up to F processors may send arbitrary messages, including conflicting messages to different peers (equivocation). The algorithm can tolerate those failures if you use $$\texttt{N} = 5\texttt{F} + 1$$ processors instead of $$\texttt{N} = 3\texttt{F} + 1$$. Check that.

29.2 In 1983, Michael Ben-Or presented a randomized algorithm that can tolerate crash failures with just $$\texttt{N} = 2\texttt{F} + 1$$ processors. Implement this algorithm.
https://stacks.math.columbia.edu/tag/05T3
## 13.15 Derived functors on derived categories In practice derived functors come about most often when given an additive functor between abelian categories. Situation 13.15.1. Here $F : \mathcal{A} \to \mathcal{B}$ is an additive functor between abelian categories. This induces exact functors $F : K(\mathcal{A}) \to K(\mathcal{B}), \quad K^{+}(\mathcal{A}) \to K^{+}(\mathcal{B}), \quad K^{-}(\mathcal{A}) \to K^{-}(\mathcal{B}).$ See Lemma 13.10.6. We also denote $F$ the composition $K(\mathcal{A}) \to D(\mathcal{B})$, $K^{+}(\mathcal{A}) \to D^{+}(\mathcal{B})$, and $K^{-}(\mathcal{A}) \to D^-(\mathcal{B})$ of $F$ with the localization functor $K(\mathcal{B}) \to D(\mathcal{B})$, etc. This situation leads to four derived functors we will consider in the following. 1. The right derived functor of $F : K(\mathcal{A}) \to D(\mathcal{B})$ relative to the multiplicative system $\text{Qis}(\mathcal{A})$. 2. The right derived functor of $F : K^{+}(\mathcal{A}) \to D^{+}(\mathcal{B})$ relative to the multiplicative system $\text{Qis}^{+}(\mathcal{A})$. 3. The left derived functor of $F : K(\mathcal{A}) \to D(\mathcal{B})$ relative to the multiplicative system $\text{Qis}(\mathcal{A})$. 4. The left derived functor of $F : K^{-}(\mathcal{A}) \to D^{-}(\mathcal{B})$ relative to the multiplicative system $\text{Qis}^-(\mathcal{A})$. Each of these cases is an example of Situation 13.14.1. Some of the ambiguity that may arise is alleviated by the following. Lemma 13.15.2. In Situation 13.15.1. 1. Let $X$ be an object of $K^{+}(\mathcal{A})$. The right derived functor of $K(\mathcal{A}) \to D(\mathcal{B})$ is defined at $X$ if and only if the right derived functor of $K^{+}(\mathcal{A}) \to D^{+}(\mathcal{B})$ is defined at $X$. Moreover, the values are canonically isomorphic. 2. Let $X$ be an object of $K^{+}(\mathcal{A})$. 
Then $X$ computes the right derived functor of $K(\mathcal{A}) \to D(\mathcal{B})$ if and only if $X$ computes the right derived functor of $K^{+}(\mathcal{A}) \to D^{+}(\mathcal{B})$. 3. Let $X$ be an object of $K^{-}(\mathcal{A})$. The left derived functor of $K(\mathcal{A}) \to D(\mathcal{B})$ is defined at $X$ if and only if the left derived functor of $K^{-}(\mathcal{A}) \to D^{-}(\mathcal{B})$ is defined at $X$. Moreover, the values are canonically isomorphic. 4. Let $X$ be an object of $K^{-}(\mathcal{A})$. Then $X$ computes the left derived functor of $K(\mathcal{A}) \to D(\mathcal{B})$ if and only if $X$ computes the left derived functor of $K^{-}(\mathcal{A}) \to D^{-}(\mathcal{B})$. Proof. Let $X$ be an object of $K^{+}(\mathcal{A})$. Consider a quasi-isomorphism $s : X \to X'$ in $K(\mathcal{A})$. By Lemma 13.11.5 there exists a quasi-isomorphism $X' \to X''$ with $X''$ bounded below. Hence we see that $X/\text{Qis}^+(\mathcal{A})$ is cofinal in $X/\text{Qis}(\mathcal{A})$. Thus it is clear that (1) holds. Part (2) follows directly from part (1). Parts (3) and (4) are dual to parts (1) and (2). $\square$ Given an object $A$ of an abelian category $\mathcal{A}$ we get a complex $A[0] = ( \ldots \to 0 \to A \to 0 \to \ldots )$ where $A$ is placed in degree zero. Hence a functor $\mathcal{A} \to K(\mathcal{A})$, $A \mapsto A[0]$. Let us temporarily say that a partial functor is one that is defined on a subcategory. Definition 13.15.3. In Situation 13.15.1. 1. The right derived functors of $F$ are the partial functors $RF$ associated to cases (1) and (2) of Situation 13.15.1. 2. The left derived functors of $F$ are the partial functors $LF$ associated to cases (3) and (4) of Situation 13.15.1. 3. An object $A$ of $\mathcal{A}$ is said to be right acyclic for $F$, or acyclic for $RF$, if $A[0]$ computes $RF$. 4. An object $A$ of $\mathcal{A}$ is said to be left acyclic for $F$, or acyclic for $LF$, if $A[0]$ computes $LF$. 
The following few lemmas give some criteria for the existence of enough acyclics. Lemma 13.15.4. Let $\mathcal{A}$ be an abelian category. Let $\mathcal{P} \subset \mathop{\mathrm{Ob}}\nolimits (\mathcal{A})$ be a subset containing $0$ such that every object of $\mathcal{A}$ is a quotient of an element of $\mathcal{P}$. Let $a \in \mathbf{Z}$. 1. Given $K^\bullet$ with $K^ n = 0$ for $n > a$ there exists a quasi-isomorphism $P^\bullet \to K^\bullet$ with $P^ n \in \mathcal{P}$ and $P^ n \to K^ n$ surjective for all $n$ and $P^ n = 0$ for $n > a$. 2. Given $K^\bullet$ with $H^ n(K^\bullet ) = 0$ for $n > a$ there exists a quasi-isomorphism $P^\bullet \to K^\bullet$ with $P^ n \in \mathcal{P}$ for all $n$ and $P^ n = 0$ for $n > a$. Proof. Proof of part (1). Consider the following induction hypothesis $IH_ n$: There are $P^ j \in \mathcal{P}$, $j \geq n$, with $P^ j = 0$ for $j > a$, maps $d^ j : P^ j \to P^{j + 1}$ for $j \geq n$, and surjective maps $\alpha ^ j : P^ j \to K^ j$ for $j \geq n$ such that the diagram $\xymatrix{ & & P^ n \ar[d]^\alpha \ar[r] & P^{n + 1} \ar[d]^\alpha \ar[r] & P^{n + 2} \ar[d]^\alpha \ar[r] & \ldots \\ \ldots \ar[r] & K^{n - 1} \ar[r] & K^ n \ar[r] & K^{n + 1} \ar[r] & K^{n + 2} \ar[r] & \ldots }$ is commutative, such that $d^{j + 1} \circ d^ j = 0$ for $j \geq n$, such that $\alpha$ induces isomorphisms $H^ j(K^\bullet ) \to \mathop{\mathrm{Ker}}(d^ j)/\mathop{\mathrm{Im}}(d^{j - 1})$ for $j > n$, and such that $\alpha : \mathop{\mathrm{Ker}}(d^ n) \to \mathop{\mathrm{Ker}}(d_ K^ n)$ is surjective. Then we choose a surjection $P^{n - 1} \longrightarrow K^{n - 1} \times _{K^ n} \mathop{\mathrm{Ker}}(d^ n) = K^{n - 1} \times _{\mathop{\mathrm{Ker}}(d_ K^ n)} \mathop{\mathrm{Ker}}(d^ n)$ with $P^{n - 1}$ in $\mathcal{P}$. 
This allows us to extend the diagram above to $\xymatrix{ & P^{n - 1} \ar[d]^\alpha \ar[r] & P^ n \ar[d]^\alpha \ar[r] & P^{n + 1} \ar[d]^\alpha \ar[r] & P^{n + 2} \ar[d]^\alpha \ar[r] & \ldots \\ \ldots \ar[r] & K^{n - 1} \ar[r] & K^ n \ar[r] & K^{n + 1} \ar[r] & K^{n + 2} \ar[r] & \ldots }$ The reader easily checks that $IH_{n - 1}$ holds with this choice. We finish the proof of (1) as follows. First we note that $IH_ n$ is true for $n = a + 1$ since we can just take $P^ j = 0$ for $j > a$. Hence we see that proceeding by descending induction we produce a complex $P^\bullet$ with $P^ n = 0$ for $n > a$ consisting of objects from $\mathcal{P}$, and a termwise surjective quasi-isomorphism $\alpha : P^\bullet \to K^\bullet$ as desired. Proof of part (2). The assumption implies that the morphism $\tau _{\leq a}K^\bullet \to K^\bullet$ (Homology, Section 12.15) is a quasi-isomorphism. Apply part (1) to find $P^\bullet \to \tau _{\leq a}K^\bullet$. The composition $P^\bullet \to K^\bullet$ is the desired quasi-isomorphism. $\square$ Lemma 13.15.5. Let $\mathcal{A}$ be an abelian category. Let $\mathcal{I} \subset \mathop{\mathrm{Ob}}\nolimits (\mathcal{A})$ be a subset containing $0$ such that every object of $\mathcal{A}$ is a subobject of an element of $\mathcal{I}$. Let $a \in \mathbf{Z}$. 1. Given $K^\bullet$ with $K^ n = 0$ for $n < a$ there exists a quasi-isomorphism $K^\bullet \to I^\bullet$ with $K^ n \to I^ n$ injective and $I^ n \in \mathcal{I}$ for all $n$ and $I^ n = 0$ for $n < a$, 2. Given $K^\bullet$ with $H^ n(K^\bullet ) = 0$ for $n < a$ there exists a quasi-isomorphism $K^\bullet \to I^\bullet$ with $I^ n \in \mathcal{I}$ and $I^ n = 0$ for $n < a$. Proof. This lemma is dual to Lemma 13.15.4. $\square$ Lemma 13.15.6. In Situation 13.15.1. Let $\mathcal{I} \subset \mathop{\mathrm{Ob}}\nolimits (\mathcal{A})$ be a subset with the following properties: 1. every object of $\mathcal{A}$ is a subobject of an element of $\mathcal{I}$, 2. 
for any short exact sequence $0 \to P \to Q \to R \to 0$ of $\mathcal{A}$ with $P, Q \in \mathcal{I}$, then $R \in \mathcal{I}$, and $0 \to F(P) \to F(Q) \to F(R) \to 0$ is exact. Then every object of $\mathcal{I}$ is acyclic for $RF$. Proof. We may add $0$ to $\mathcal{I}$ if necessary. Pick $A \in \mathcal{I}$. Let $A[0] \to K^\bullet$ be a quasi-isomorphism with $K^\bullet$ bounded below. Then we can find a quasi-isomorphism $K^\bullet \to I^\bullet$ with $I^\bullet$ bounded below and each $I^ n \in \mathcal{I}$, see Lemma 13.15.5. Hence we see that these resolutions are cofinal in the category $A[0]/\text{Qis}^{+}(\mathcal{A})$. To finish the proof it therefore suffices to show that for any quasi-isomorphism $A[0] \to I^\bullet$ with $I^\bullet$ bounded below and $I^ n \in \mathcal{I}$ we have $F(A)[0] \to F(I^\bullet )$ is a quasi-isomorphism. To see this suppose that $I^ n = 0$ for $n < n_0$. Of course we may assume that $n_0 < 0$. Starting with $n = n_0$ we prove inductively that $\mathop{\mathrm{Im}}(d^{n - 1}) = \mathop{\mathrm{Ker}}(d^ n)$ and $\mathop{\mathrm{Im}}(d^{-1})$ are elements of $\mathcal{I}$ using property (2) and the exact sequences $0 \to \mathop{\mathrm{Ker}}(d^ n) \to I^ n \to \mathop{\mathrm{Im}}(d^ n) \to 0.$ Moreover, property (2) also guarantees that the complex $0 \to F(I^{n_0}) \to F(I^{n_0 + 1}) \to \ldots \to F(I^{-1}) \to F(\mathop{\mathrm{Im}}(d^{-1})) \to 0$ is exact. The exact sequence $0 \to \mathop{\mathrm{Im}}(d^{-1}) \to I^0 \to I^0/\mathop{\mathrm{Im}}(d^{-1}) \to 0$ implies that $I^0/\mathop{\mathrm{Im}}(d^{-1})$ is an element of $\mathcal{I}$. The exact sequence $0 \to A \to I^0/\mathop{\mathrm{Im}}(d^{-1}) \to \mathop{\mathrm{Im}}(d^0) \to 0$ then implies that $\mathop{\mathrm{Im}}(d^0) = \mathop{\mathrm{Ker}}(d^1)$ is an element of $\mathcal{I}$ and from then on one continues as before to show that $\mathop{\mathrm{Im}}(d^{n - 1}) = \mathop{\mathrm{Ker}}(d^ n)$ is an element of $\mathcal{I}$ for all $n > 0$. 
Applying $F$ to each of the short exact sequences mentioned above and using (2) we observe that $F(A)[0] \to F(I^\bullet)$ is a quasi-isomorphism as desired. $\square$

Lemma 13.15.7. In Situation 13.15.1, let $\mathcal{P} \subset \mathop{\mathrm{Ob}}\nolimits(\mathcal{A})$ be a subset with the following properties:

1. every object of $\mathcal{A}$ is a quotient of an element of $\mathcal{P}$,
2. for any short exact sequence $0 \to P \to Q \to R \to 0$ of $\mathcal{A}$ with $Q, R \in \mathcal{P}$ we have $P \in \mathcal{P}$, and $0 \to F(P) \to F(Q) \to F(R) \to 0$ is exact.

Then every object of $\mathcal{P}$ is acyclic for $LF$.

Proof. Dual to the proof of Lemma 13.15.6. $\square$

Comment #455 by Keenan Kidwell: In 05T4, there is a superscript minus missing in the target of the third functor of the sentence beginning "We also denote $F$..."

Comment #512 by Keenan Kidwell: In 05TA, the roles of $\mathcal{B}$ and $\mathcal{A}$ should be switched.

Comment #657 by Fan Zheng: Why do additive functors induce exact functors on the homotopy category of chains?

Comment #661: @#657. This is true because an additive functor $F$ transforms a termwise split short exact sequence of complexes into a termwise split short exact sequence of complexes, and an "exact functor between triangulated categories" is an additive functor which transforms distinguished triangles into distinguished triangles. Anyway, the precise statement is Lemma 13.10.6.

Comment #7068: Proof of Lemma 13.15.6, line 5: the complex $I^\bullet$ is bounded below and not above.

Comment #7248: Thanks and fixed here. Please in the future leave the comment on the page of the lemma.
http://blog.twimager.com/2017/05/
## Sunday, May 28, 2017

### Access Windows Environment Variables from within Bash in WSL

I have been using my MacBook a lot now that it is my main computer at work. So much so that I found it necessary to invert the scroll wheel on the mouse of my Windows desktop to behave like my MacBook. I've also been using bash a lot more, since I can have a similar experience between the two machines.

While writing the scripts to configure my bash environment on my Windows machine, I found the need to access environment variables that are set in Windows. With WSL, the only environment variable that really comes over to bash is PATH. I googled around for a bit, but didn't find any way to actually do this. Then I remembered that WSL has interop between Windows and WSL. This means that I can execute a Windows executable and redirect the output back to bash, which means I should be able to execute powershell.exe to get the information I need. I first started with a test of just doing:

```
$ echo $(powershell.exe -Command "gci ENV:")
```

And that gave me what I wanted back. Now, there are some differences in the paths between WSL and Windows, so I knew I would also have to adjust for that. What I did was put a file called ~/.env.ps1 in my home path.
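The path adjustment is easy to try on its own before wiring everything together. Here is a minimal, Windows-free sketch of the drive-letter rewrite (the function name is hypothetical, and the `\L`/`\E` case-conversion escapes assume GNU sed):

```shell
#!/usr/bin/env bash
# Sketch only: convert a Windows path like C:\foo\bar to the WSL form /mnt/c/foo/bar.
# First sed expression lowercases the captured drive letter (GNU sed \L...\E),
# second converts backslashes to forward slashes.
to_wsl_path() {
    printf '%s\n' "$1" | sed -e 's|^\([A-Za-z]\):\(.*\)|/mnt/\L\1\E\2|' -e 's|\\|/|g'
}

to_wsl_path 'C:\Users\rconr\OneDrive'   # → /mnt/c/Users/rconr/OneDrive
```

The script that follows performs this same transformation in PowerShell, plus escaping and PATH-style splitting on `;`.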
```
#!~/bin/powershell

# Will return all the environment variables in KEY=VALUE format
function Get-EnvironmentVariables {
    return (Get-ChildItem ENV: | foreach {
        "WIN_$(Get-LinuxSafeValue -Value ($_.Name -replace '\(|\)','').ToUpper())=$(Convert-ToWSLPath -Path $_.Value)"
    })
}

# Converts the C:\foo\bar path to the WSL counterpart of /mnt/c/foo/bar
function Convert-ToWSLPath {
    param ( [Parameter(Mandatory=$true)] $Path )
    (Get-LinuxSafeValue -Value (($Path -split ';' | foreach {
        if ($_ -ne $null -and $_ -ne '' -and $_.Length -gt 0) {
            (( (Fix-Path -Path $_) -replace '(^[A-Za-z])\:(.*)', '/mnt/$1$2') -replace '\\','/')
        }
    }) -join ':'));
}

# Lowercases the drive letter so the /mnt/<drive> path comes out right
function Fix-Path {
    param ( [Parameter(Mandatory=$true)] $Path )
    if ( $Path -match '^[A-Z]\:' ) {
        return $Path.Substring(0,1).ToLower() + $Path.Substring(1);
    } else {
        return $Path
    }
}

# Outputs a string of exports that can be evaluated
function Import-EnvironmentVariables {
    return (Get-EnvironmentVariables | foreach { "export $_;" }) | Out-String
}

# Just escapes special characters
function Get-LinuxSafeValue {
    param ( [Parameter(Mandatory=$true)] $Value )
    process {
        return $Value -replace '(\s|''|"|\$|#|&|!|~|`|\*|\?|\(|\)|\|)', '\$1';
    }
}
```

Now in my .bashrc I have the following:

```
#!/usr/bin/env bash
source ~/.wsl_helper.bash
eval $(winenv)
```

If I run env now, I get output like the following:

```
WIN_ONEDRIVE=/mnt/d/users/rconr/onedrive
PATH=~/bin:/foo:/usr/bin
WIN_PATH=/mnt/c/windows:/mnt/c/windows/system32
```

Notice the environment variables that are prefixed with WIN_? These come directly from Windows. I can now add additional steps to my .bashrc using these variables:

```
ln -s "$WIN_ONEDRIVE" ~/OneDrive
```

Additionally, I added a script called powershell to my ~/bin folder, which is in my path. This allows me to make "native" style calls to powershell from within bash scripts.

```
#!/usr/bin/env bash
# rename to powershell
# chmod +x powershell
. ~/.wsl_helper.bash

PS_WORKING_DIR=$(lxssdir)
if [ -f "$1" ] && [[ "$1" =~ \.ps1$ ]]; then
    powershell.exe -NoLogo -ExecutionPolicy ByPass -Command "Set-Location '${PS_WORKING_DIR}'; Invoke-Command -ScriptBlock ([ScriptBlock]::Create((Get-Content $1))) ${*:2}"
elif [ -f "$1" ] && [[ ! "$1" =~ \.ps1$ ]]; then
    powershell.exe -NoLogo -ExecutionPolicy ByPass -Command "Set-Location '${PS_WORKING_DIR}'; Invoke-Command -ScriptBlock ([ScriptBlock]::Create((Get-Content $1))) ${*:2}"
else
    powershell.exe -NoLogo -ExecutionPolicy ByPass ${*:1}
fi
unset PS_WORKING_DIR
```

In the powershell file, you will see a call to source a file called .wsl_helper.bash. This script has some helper functions that will do things like transform a Windows style path to a Linux WSL path, and the opposite as well.

```
#!/usr/bin/env bash
# This is the translated path to where the LXSS root directory is
export LXSS_ROOT=/mnt/c/Users/$USER/AppData/Local/lxss

# translate a windows path to a linux path
function windir() {
    echo "$1" | sed -e 's|^\([A-Za-z]\):\(.*\)|/mnt/\L\1\E\2|' -e 's|\\|/|g'
}

# translate the path back to a windows path
function wsldir() {
    echo "$1" | sed -e 's|^/mnt/\([a-z]\)/\(.*\)|\U\1:\\\E\2|' -e 's|/|\\|g'
}

# gets the lxss path from windows
function lxssdir() {
    if [ $# -eq 0 ]; then
        if echo "$PWD" | grep "^/mnt/[a-zA-Z]/" > /dev/null 2>&1; then
            echo "$PWD"
        else
            echo "$LXSS_ROOT$PWD"
        fi
    else
        echo "$LXSS_ROOT$1"
    fi
}

function printwinenv() {
    _winenv --get
}

# this will load the output exports of the windows environment variables
function winenv() {
    _winenv --import
}

function _winenv() {
    if [ $# -eq 0 ]; then
        CMD_VERB="Get"
    else
        while test $# -gt 0; do
            case "$1" in
                -g|--get)
                    CMD_VERB="Get"
                    shift
                    ;;
                -i|--import)
                    CMD_VERB="Import"
                    shift
                    ;;
                *)
                    CMD_VERB="Get"
                    break
                    ;;
            esac
        done
    fi
    CMD_DIR=$(wsldir "$LXSS_ROOT$HOME/.env.ps1")
    echo $(powershell.exe -Command "Import-Module -Name $CMD_DIR; $CMD_VERB-EnvironmentVariables") | sed -e 's|\r|\n|g' -e 's|^[ \t]*||g'
}
```

## Wednesday, May 17, 2017

### Jenkins + NPM Install + Git

I have been working on setting up Jenkins Pipelines for some projects and ran into an issue that I think others have had, but I could not find a clear answer on how to handle it. We have some NPM packages that are pulled from a private git repo, and all of the accounts have MFA enabled, including the CI user account. This means that SSH authentication is mandatory for the CI user.

If there is only one host that Jenkins needs to authenticate with over ssh, or you use the exact same ssh key for all hosts, then you can just put the private key on your Jenkins server at ~/.ssh/id_rsa. If you need to specify a key dependent upon the host, which is the situation I was in, that alone is not enough to pull the package.

The solution I found was to use ~/.ssh/config. In there you specify the hosts, the user, and what identity file to use. It can look something like this:

```
Host github.com
  User git
  IdentityFile ~/.ssh/github.key

Host bitbucket.org
  User git
  IdentityFile ~/.ssh/bitbucket.key

Host tfs.myonprem-domain.com
  User my-ci-user
  IdentityFile ~/.ssh/onprem-tfs.key
```

So now, when running npm install, ssh will know what identity file to use.

Bonus tip: Not everyone uses ssh, so package.json may not be configured to use ssh. You can put options in the global .gitconfig on the Jenkins server that will redirect https protocol requests to ssh:

```
[url "ssh://git@github.com/"]
    insteadOf = "https://github.com/"
[url "ssh://git@bitbucket.org/"]
    insteadOf = "https://bitbucket.org/"
[url "ssh://tfs.myonprem-domain.com:22/"]
    insteadOf = "https://tfs.myonprem-domain.com/"
```

So with that, when git detects an https request, it will switch to use ssh.
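You can check an insteadOf rewrite without touching the real global config. A small sketch using a throwaway config file (the URL values mirror the example above; `--file` keeps the test isolated):

```shell
#!/usr/bin/env bash
# Sketch: write the insteadOf rewrite to a temporary git config file and
# read it back, instead of editing ~/.gitconfig directly.
cfg=$(mktemp)
git config --file "$cfg" url."ssh://git@github.com/".insteadOf "https://github.com/"
git config --file "$cfg" --get url."ssh://git@github.com/".insteadOf   # → https://github.com/
rm -f "$cfg"
```

`git config --get` confirming the stored value is a quick sanity check that the section/key names are spelled the way git expects before you rely on the rewrite in a build.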
https://physics.stackexchange.com/questions/355399/why-does-mwi-need-decoherence-theory-and-how-can-these-models-be-combined
# Why does MWI need decoherence theory and how can these models be combined?

As far as I understand decoherence theory, it explains why we are not able to measure superpositions of macroscopic objects in some specific basis, which turns out to be the position basis in most cases. As I am currently trying to understand the many-worlds interpretation (MWI), I often find texts saying that MWI needs decoherence theory as a completion. What I don't understand is (1) why MWI needs decoherence and (2) how MWI can be combined with decoherence. I'm not even sure if this question is simply too coarse-grained, or if there is one simple thought which I have overlooked.

The MWI is basically a reinterpretation of the von Neumann measurement scheme. The latter tries to include an idealized measurement apparatus into the full quantum description used by some external observer. Let's try to measure the observable $\hat{A}=\sum_{k} A_k\hat{\mathcal{P}}_k$ where $\hat{\mathcal{P}}_k=|\psi_k\rangle\langle\psi_k|$. The probabilities are given by the Born rule, and subsequent measurements deal with the transformed wavefunction obtained from the projection postulate, commonly known as wavefunction collapse: $$P(A=A_k)=\langle\psi|\hat{\mathcal{P}}_k|\psi\rangle,\quad |\psi\rangle\mapsto |\psi_k\rangle$$ The interaction of this ideal measurement apparatus with the measured object looks, more or less by definition, like this: $$|\Psi(t_0)\rangle=|0\rangle_{app}\otimes\sum_{k}\alpha_k|\psi_{k}\rangle_{obj}\mapsto|\Psi(t_{meas})\rangle=\sum_{k}\alpha_k|A_k\rangle_{app}\otimes|\psi_{k}\rangle_{obj}$$ So the measurement apparatus becomes entangled with the object in such a way that its registered value is perfectly correlated with the initial observable $\hat{A}$. Von Neumann used this simply to show that if such an ideal measurement apparatus is included as an extra layer of the quantum description, and the observer actually measures the apparatus, i.e. the observable $\hat{A}_{app}\equiv \sum_k A_k|A_k\rangle_{app}\langle A_k|_{app}$, rather than the object itself, then the probabilities of the outcomes and the projection rules remain consistent with the description without the measurement apparatus. Please note that the whole scheme assumes that a measurement apparatus can be described by a pure quantum state, thereby neglecting its interaction with the environment completely (a flaw that was already understood back then).

The MWI appears when you:

• Replace the sole measurement apparatus with the whole macroscopic environment, with all macroscopic objects entangled with each other in such a way that they are all perfectly correlated. As a simplification, if you were able to assign a quantum state not only to the measurement apparatus but also to the scientist Alice, the scientist Bob, their cat and Alice's chair, you would replace $|A_k\rangle_{app}$ with $$|A_k\rangle_{env}=|A_k\rangle_{app}|A_k\rangle_{Alice}|A_k\rangle_{Bob}|A_k\rangle_{cat}|A_k\rangle_{chair}$$ As a result, $|\Psi\rangle$ is interpreted as an unobservable quantum state of the whole universe, which is assumed to evolve as a closed quantum system. The step fundamental for MWI is that after the measurement the universal state becomes a superposition of branches which, because of the linearity of quantum evolution, evolve independently of each other.

• Consider then a series of von Neumann-like measurements, with the device and the environment making records.
So if we assume that after the first measurement the eigenstates of the first observable transform as $$|\psi_k\rangle\mapsto \sum_l \beta_{kl}|\phi_l\rangle$$ where $|\phi_l\rangle$ are eigenstates of some second observable $\hat{B}$, then two subsequent von Neumann measurements yield $$\begin{aligned} |0\rangle_{env}\sum_k\alpha_k|\psi_k\rangle&\mapsto\sum_k\alpha_k|A_k\rangle_{env}|\psi_k\rangle\\ &\mapsto \sum_k\alpha_k|A_k\rangle_{env}\sum_l\beta_{kl}|\phi_l\rangle \mapsto\sum_{k,l}\alpha_k\beta_{kl}|A_k,B_l\rangle_{env}|\phi_l\rangle \end{aligned}$$ The branching structure then appears in the coefficients: you get $\alpha_k\beta_{kl}$, not some non-decomposable $\gamma_{kl}$. Because of that, if you look at branches like $|A_k,\ldots\rangle$, their evolution doesn't care whether you omit all branches with different values of $A$ after the first measurement. So this is the attempt to explain the projection postulate: an observer who is part of such a quantum universe perceives events as if a "collapse" happened, whereas the evolution of the whole universe is unitary.

Everett and his followers attempted to derive the Born rule, i.e. to show that $|\alpha_k\beta_{kl}|^2$ actually gives the probability of the branch. However, all those derivations are circular and only show that if you assume such a probability for a branch, it is consistent with the statistics predicted for many measurements.

The issue is that both the von Neumann measurement scheme and MWI were formulated at an extremely heuristic level, without any demonstration of how this sort of thing appears in actual systems. MWI in its original formulation didn't explain why all the macroscopic objects are correlated in such a perfect way. It didn't explain why this entanglement occurs in such a way that we get a semiclassical macroscopic world with some preference for localization in both coordinate and momentum space. Because of this lack of detail, many people developed a lot of extremely naive ideas.
For example, that the universal quantum state is a superposition of strictly classical "worlds". Decoherence can explain how the universal quantum state may actually develop an MWI-like structure. If you don't care about purely philosophical discussions and about trying to get rid of the Born rule, then the consistent histories (or decoherent histories) approach is "the way to do MWI right", i.e. the way to show that the vague ideas of the Copenhagen school about the macroscopic world can actually get solid justification if we assume that quantum theory can be applied on cosmological scales. With enough coarse-graining, decoherence may become extremely strong. In such an idealized limit the universal quantum state can indeed be seen as experiencing MWI-like branching: the probabilities for the coarse-grained histories will be the same as if you calculated them on the branched state. But you should understand that this is only a simplified picture. Distinct classical worlds are not actually predicted by decoherence; that's why more refined versions of MWI talk about "many minds" rather than "many worlds" (though a distinct "mind" can only exist as a coarse-grained approximation too).
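To make the "circular but consistent" point above concrete: if one postulates that the branch $|A_k, B_l\rangle$ carries weight $|\alpha_k\beta_{kl}|^2$, that weight factors exactly as the textbook Born rule plus projection postulate would prescribe for two sequential measurements,

$$P(A_k, B_l) = |\alpha_k \beta_{kl}|^2 = |\alpha_k|^2 \, |\beta_{kl}|^2 = P(A = A_k)\, P(B = B_l \mid A = A_k),$$

so assuming branch weights of this form reproduces the collapse statistics, without those weights being derived from anything more fundamental.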
https://el-zet.pl/leveraged-gold-khimzj/qy8024.php?ccf9bd=iron-reaction-with-steam
4 moles of steam (H 2 O) are required to convert 3 moles of Fe into its oxide.. 1 mole of steam (H 2 O) will convert =3/4 mole of Fe into its oxide.. 1 mole of steam = 18 g of H 2 O. There are a number of other ways to stop or slow down rust. Iron Slow with steam. Iron reacting with steam produces a familiar result, creating a layer of hydrated iron oxide over the iron, which then releases hydrogen into the surrounding area. 4 The metal oxide cannot be reduced using carbon. Both are possible oxides of iron. We investigated hydrogen production by the steam–iron reaction using iron oxide modified with very small amounts of palladium and/or zirconia at a temperature of 723 K and under atmospheric pressure. First steam decomposes very rapidly on the iron surface generating the adsorbed species O (ads) and gaseous hydro-gen (Eq. The reaction between iron and steam is much slower than the reactions between steam and metals higher in the reactivity series. (a) FeO (b) Fe2O3 (c) Fe3O4 (d) Fe2O3 and Fe3O4 NCERT Class X Science - Exemplar Problems Chapter_Metals and Non Metals 2 See answers someone someone The answer is C as the equation is as follows: 3Fe+4H2O →Fe3O4+4H2 MuskanAhuja MuskanAhuja hola mate. Heat often has to be applied to the iron in the test tube to initiate the reaction with steam and speed it up. Iron + steam (water vapor) would produce iron oxide + hydrogen. Iron reacts with steam according to the reaction 3Fe +4 H2O → Fe3O4 + 4H2O. An equation for the reaction of: Iron with steam When a metal reacts with steam then the products formed are metal oxide and hydrogen gas. The influence of the calcination temperature of the iron–alumina support was also explored for the steam reforming reaction. Even when this is done, the iron does not glow and burn like magnesium does. In this reaction, the products formed are hydrogen gas and magnetic oxide. Aluminium is unusual, because it is a reactive metal that does not react with water. 
Thanks for contributing an answer to Chemistry Stack Exchange! Why have we combined $\ce{FeO}$ and $\ce{Fe2O3}$ resulting in the formation of $\ce{Fe3O4}$? Active 2 years, 9 months ago. (a) Mg (s) + Cl 2 (g) → MgCl 2 (s) (d) TiCl 4 (l) + Mg (s) → Ti (s) + MgCl 2 (s) your … Metals in the reactivity series from magnesium to iron react with steam - H 2 O (g) but not water - H 2 O (l). Steam first reacts with the carbon to give oxygen and hydrogen atoms separately adsorbed on neighbouring sites. Reaction of aluminium metal with water: Reaction of aluminium metal with cold water is too slow to come into notice. Connecting a compact subset by a simple curve, Looking for title/author of fantasy book where the Sun is hidden by pollution and it is always winter. The detailed mechanism of the reaction between steam and coconut shell charcoal has been studied by the method described in the preceding paper. The chemical equation for the above reaction is as under: 3 Fe(s) + 4H 2 O(g) <-----> Fe 3 O 4 (s) + 4H 2 (g) Iron Steam Tri - iron tetraoxide Hydrogen. The Reaction of Metals with Water. While studying about reversible reactions, my professor stated that the following reaction is a reversible reaction: She also stated that the product $\ce{Fe3O4}$ is formed by a mixture of $\ce{FeO}$ and $\ce{Fe2O3}$. The reaction forms the metal oxide and hydrogen gas. Iron on reaction with steam gives iron (II, III) oxide. 90g c. 168g d. 210g**** Chemistry. Magnesium reacts with oxygen to form magnesium oxide: 2 Mg + O 2 → 2 MgO. site design / logo © 2021 Stack Exchange Inc; user contributions licensed under cc by-sa. Note: You will not need to know this information but it may help to provide you with a better understanding of what happens during this reaction. Why have we combined $\ce{FeO}$ and $\ce{Fe2O3}$ resulting in the formation of $\ce{Fe3O4}$? Regarding whether or not Fe3O4 is a compound, it surely is as stated by MaxW and Ivan Neretin. 
The compound obtained on reaction of iron with steam is/are : Options. (i) Iron react with steam to form the metal oxide and hydrogen. We are given that iron on reaction with steam forms iron 2, 3 oxide and oxygen. Fe 2 O 3 and Fe 3 O 4. 3Fe(s) + 4H2O(g) → Fe3O4(s) + 4H2(g) (ii) The reaction of calcium with water is exothermic but the heat evolved is not sufficient for the hydrogen to catch fire. How many grams of iron will react with 5 moles of steam to convert to iron oxide with 100% product yield? Is iron (III) oxide-hydroxide the same as iron (III) hydroxide? Copper No reaction with steam. The reaction of magnesium with steam. 4.40 When iron and steam react at high temperatures, the following reaction takes place: 3 Fe ( s ) + 4 H 2 O ( g ) → Fe 3 O ­ 4 ( s ) + 4 H 2 ( g ) How much iron must react with excess steam to form 897 g of Fe 3 O 4 if the reaction yield is 69%? Reaction of iron with steam. I'd go with the Fe2O3 reaction assuming it is just being hit with steam and not underwater. Iron–alumina-supported nickel–iron alloy catalysts were tested in a fixed-bed reactor for steam reforming of toluene as a biomass tar model compound. Now, only one man can put a stop to Silver and his minions. Share Tweet Send [Deposit Photos] Iron is the sec­ond most wide­spread met­al on Earth. The reaction between iron and steam occurs as:. How to run a whole mathematica notebook within a for loop? Ca(s) + 2H2O(l) → Ca(OH)2(aq) + H2(g) (iii) Calcium starts floating because the bubbles of hydrogen gas formed stick to the surface of the metal. Do sinners directly get moksha if they die in Varanasi? 1. By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy. Answer: Fe 2 O 4 3Fe(s) + 4H 2 O(g) → Fe 3 O 4 (s) + 4H 2 (g) Question 14. Fe3O4 usually forms when iron is submerged in water and Fe2O3 is just regular, everyday rust. Originally Answered: What is the chemical reaction for iron with steam? 
We investigated hydrogen production by the steam–iron reaction using iron oxide modified with very small amounts of palladium and/or zirconia at a temperature of 723 K and under atmospheric pressure. From chemical equation we can write. Why is the product of the reaction between iron and steam is iron (II, III) oxide and not iron (II) oxide or iron (III) oxide? 3Fe + 4H 2 0 Fe 3 O 4 + 4H 2 ↑ (4) Lead and copper almost fail to liberate hydrogen gas in any conditions, because they are not so reactive. It is just a very slow reaction and not instant like sodium. Burning magnesium ribbon is plunged into the steam above boiling water in a conical flask. ), I would describe it first and foremost as a redox reaction. Iron reacts with steam according to the reaction 3Fe +4 H2O → Fe3O4 + 4H2O. Iron does not react with water bin the ordinary temperature . Introduction. (According to me, mixtures and compounds are two different things and that we can't combine them. (Foreign 2008) Answer: (a) Activity: To show that metals are good conductors of electricity. How to calculate charge analysis for a molecule. You must know the test for hydrogen gas. It was not until much later that the industrial value of this reaction was realized. balanced reaction: iron reacts with steam to form iron oxide and hydrogen. As to whether or not Fe3O4 is formed directly, that is doubtful since it would require an activated complex of three iron atoms and four water molecules -- very unlikely. Concept: Introduction to Our Environment However, this is a general remark, the percentage ratio can vary due to different external conditions; temperature, pressure etc. From chemical equation we can write. I will call in short word as Ionic Capacitor React And Iron React With Steam For many who are trying to find Ionic Capacitor React And Iron React With Steam review. Origin of the Liouville theorem for harmonic functions, Deep Reinforcement Learning for General Purpose Optimization. 
Worked example: how many grams of iron will react with 5 moles of steam, converting to the oxide with 100% product yield? From the balanced equation, 4 moles of steam (H2O) are required to convert 3 moles of Fe into its oxide, so 1 mole of steam (18 g of H2O) converts 3/4 mole of Fe. Five moles of steam therefore convert 5 × 3/4 = 3.75 mol of Fe; taking 1 mole of Fe as 56 g, that is 3.75 × 56 = 210 g of iron.

A related textbook problem (4.40): when iron and steam react at high temperatures, the following reaction takes place:

3Fe(s) + 4H2O(g) → Fe3O4(s) + 4H2(g)

How much iron must react with excess steam to form 897 g of Fe3O4 if the reaction yield is 69%? ("Excess steam" means however much steam is necessary to go along with the given quantity of iron.)

Mechanistically, one would describe the iron–steam reaction first and foremost as a redox reaction, probably proceeding through a number of steps involving various iron oxide and hydroxide intermediates. As a classroom demonstration of the magnesium analogue, burning magnesium ribbon is plunged into the steam above boiling water in a conical flask; in these reactions a metal oxide and hydrogen gas are produced.
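The mole-ratio arithmetic for the 5-moles-of-steam question can be sketched in a few lines of Python (an illustration, not from the original source; the rounded molar mass of 56 g/mol for iron is the value used in the text):

```python
# Stoichiometry of 3Fe(s) + 4H2O(g) -> Fe3O4(s) + 4H2(g):
# 4 mol of steam converts 3 mol of Fe.
MOLAR_MASS_FE = 56.0  # g/mol, the rounded value used in the text

def grams_iron_consumed(moles_steam):
    """Grams of Fe converted by `moles_steam` of steam at 100% yield."""
    moles_fe = moles_steam * 3 / 4
    return moles_fe * MOLAR_MASS_FE

print(grams_iron_consumed(5))  # -> 210.0
```

The same helper reproduces other common answer choices, e.g. 4 mol of steam gives 168 g.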
Chemistry Stack Exchange is a question and answer site for scientists, academics, teachers, and students in the field of chemistry; the answer there to the "why Fe3O4?" question runs as follows.

$\ce{Fe3O4}$ has a cubic inverse spinel structure which consists of a cubic close-packed array of oxide ions, where the $\ce{Fe^2+}$ ions occupy half of the octahedral sites and the $\ce{Fe^3+}$ ions are split evenly across the remaining octahedral sites and the tetrahedral sites. $\ce{Fe3O4}$ contains both iron(II) and iron(III) ions and is sometimes written as $\ce{FeO \cdot Fe2O3}$; the dot in the middle signifies that there is some form of bonding between the two formula units (another common example of this notation is water of crystallisation). That formula can be misleading, however, in that it might seem to suggest that $\ce{Fe3O4}$ is made of $\ce{FeO}$ and $\ce{Fe2O3}$ "molecules" in chemical combination. In fact $\ce{Fe3O4}$ is a compound and not a mixture, because it does not consist of two separate $\ce{FeO}$ and $\ce{Fe2O3}$ phases but rather is a single crystal structure containing $\ce{Fe^2+}$, $\ce{Fe^3+}$ and $\ce{O^2-}$ ions. So if a professor states that the product is a mixture (excluding the hydrogen gas), the professor is wrong, or was misunderstood: iron on reaction with steam gives the compound iron(II,III) oxide.

In the reactivity series, metals such as lithium, sodium, potassium, rubidium and caesium (the alkali metals) react violently with water, too violently to do experimentally, though the reactions can be demonstrated. A related application of iron oxide: the reaction of iron(III) oxide, Fe2O3, with aluminium is used to join cracked iron parts of machines.
Metals from magnesium to iron in the reactivity series react with steam, H2O(g), but not with liquid water; the reaction forms the metal oxide and hydrogen:

magnesium + steam → magnesium oxide + hydrogen: Mg + H2O → MgO + H2
zinc + steam → zinc oxide + hydrogen: Zn + H2O → ZnO + H2
iron + steam → iron oxide + hydrogen: 3Fe + 4H2O → Fe3O4 + 4H2

(Some introductory notes write the iron product as iron(III) oxide, 2Fe + 3H2O → Fe2O3 + 3H2, but the product of red-hot iron with steam is usually given as Fe3O4.) Copper is too unreactive to show any reaction with steam, and potassium, sodium, lithium and calcium react with cold water instead (see alkali metals and alkaline earth metals).

Exercise: write the equation for the reaction of (i) iron with steam and (ii) calcium and potassium with water. For calcium: Ca(s) + 2H2O(l) → Ca(OH)2(aq) + H2(g). For the multiple-choice version of the 5-moles-of-steam problem, the listed answer choices included 90 g, 168 g and 210 g; the marked answer is 210 g, consistent with the mole-ratio calculation.
Of the metals that will react with steam, iron is the least reactive. Heat often has to be applied to the iron in the test tube to initiate the reaction and speed it up, and the reaction is only achieved with red-hot iron (about 700 °C or higher). Even then, the iron does not glow and burn like magnesium does; instead, the iron turns black as the iron oxide is formed. Under the conditions of one oxidation study, FeO made up 90–95% of the total thickness of the oxide scale. In redox terms, iron is oxidised and hydrogen is reduced; balanced against the Fe3O4 stoichiometry, the half equations would be:

$$\ce{3Fe -> Fe^2+ + 2Fe^3+ + 8e-}$$
$$\ce{4H2O + 8e- -> 4H2 + 4O^2-}$$

Which oxide of iron would be obtained on prolonged reaction of iron with steam? Fe3O4, as above. A related quick question: in nature, metal A is found in a free state while metal B is found in the form of its compounds; which of the two will be nearer to the top of the activity series of metals? (Metal B, since metals found free in nature are the least reactive.) Iron also reacts with both concentrated and dilute hydrochloric acid, while metals like lead, copper, silver and gold do not react with water or steam at all.

The steam–iron route to hydrogen has been studied quantitatively: iron-oxide samples were reduced, changes in the weight of the samples were monitored using a tapered element oscillating microbalance (TEOM) to control the degree of reduction, and experimental results with a fluidized-bed reactor supported the feasibility of three coupled processes: direct reduction of iron oxide by char, H2 production by the steam–iron process, and oxidation of the Fe3O4 resulting from the steam–iron process back to the original Fe2O3 by air.
In industrial form this chemistry is the Steam-Iron process, which produces high-purity hydrogen by separating the hydrogen-production and feedstock-oxidation steps using iron oxides subjected to redox cycles. Iron oxide is reduced to metallic iron in the first reactor with a reducing gas, such as carbon monoxide, syngas (a mixture of carbon monoxide and hydrogen) and various other fuels; steam then re-oxidises the iron, releasing hydrogen. Simplistically, the chemistry of the Steam-Iron process involves two subsequent reactions, as shown schematically in Figure 1. The underlying water gas shift chemistry was discovered long before its industrial value was realized. Mechanistically, the authors of one study proposed that the steam–iron reaction occurs in two sub-processes: first, steam decomposes very rapidly on the iron surface, giving O(ads) and hydrogen atoms separately adsorbed on neighbouring sites; the pressures studied ranged from 10 to 760 mm. In the test-tube version, the hydrogen evolved is collected over water and will burn at the mouth of the tube when tested with a lighted spill.

Worked solution to problem 4.40: at a 69% yield, the theoretical mass x of Fe3O4 required satisfies 0.69 = 897 g / x; solving for x, we get x = 1300 g. Converting to iron (Fe3O4 is about 231.5 g/mol, with three Fe per formula unit) gives roughly 941 g of Fe.

Finally, some notes on rust and its prevention. Covering the iron surface does not by itself protect it, because oxygen molecules can still reach the iron and react with it. Practical protections include alloying (iron can be transformed to steel, an alloy more resistant to rust), plating (taps and bathroom fittings are often made of iron that has been 'chromed', i.e. coated with chromium), and passivation (concentrated nitric acid, HNO3, reacts on the surface of iron and passivates it). By contrast, the reaction of aluminium metal with cold water is too slow to come into notice, because its surface forms a protective layer.
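The percent-yield working can be scripted the same way (an illustration, not from the original source; the molar masses Fe ≈ 55.85 g/mol and O ≈ 16.00 g/mol are standard rounded values):

```python
# 3Fe + 4H2O -> Fe3O4 + 4H2 at 69% yield, target 897 g of Fe3O4.
M_FE = 55.85                    # g/mol
M_FE3O4 = 3 * M_FE + 4 * 16.00  # = 231.55 g/mol

def iron_needed(actual_g, yield_fraction):
    """Mass of Fe (g) that must react to obtain `actual_g` of Fe3O4."""
    theoretical_g = actual_g / yield_fraction  # 897 / 0.69 = 1300 g of Fe3O4
    moles_fe3o4 = theoretical_g / M_FE3O4
    return moles_fe3o4 * 3 * M_FE              # 3 mol Fe per mol Fe3O4

print(round(iron_needed(897, 0.69)))  # -> 941
```

The intermediate value reproduces the 1300 g of theoretical Fe3O4 from the worked solution.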
https://math.stackexchange.com/questions/556154/farmers-pen-part-a-part-b
# Farmer's pen, part A and part B

A) A rectangular pen is built with one side against a barn; 200 meters of fencing are used for the other three sides of the pen. What dimensions maximize the area of the pen?

B) A rancher plans to make four identical and adjacent rectangular pens against a barn, each with an area of $100\space \text{m}^2$. What are the dimensions of each pen that minimize the amount of fence to be used?

• Is this homework? What have you tried? – Jaycob Coleman Nov 7 '13 at 22:09
• I drew a picture of a barn and the fence with three sides; I labeled two sides x and one side y, so 2x+y=200. I'm a bit stuck. I know I have to use an application of the derivative, and I'm familiar with a similar problem, but this is a 3-sided fence. – ashabasha Nov 7 '13 at 22:19
• Keep in mind that $A=xy$ still, and we can express $y$ in terms of $x$. – Jaycob Coleman Nov 7 '13 at 22:23

A) Let the fence form the $3$ sides of a rectangle, with the side of the barn being the $4^{th}$ side. Also let $x$ be the width of the rectangle and $y$ be its length. Clearly $2x+y=200\ (1)$, and the area enclosed is given by $A=xy\ (2)$. Replacing $y$ in (2) by the expression from (1):
\begin{align} A=xy=x(200-2x)&=200x-2x^2 \\ \frac{dA}{dx}&=200-4x \end{align}
To maximize the area, set $\frac{dA}{dx}=0$, which gives $x=50$ and $y=100$.

B) Same situation here, except $A=4xy=400$:
\begin{align} L=\text{total length of fence needed}&=5x+4y \\ &= 5x+\frac{400}{x} \\ \frac{dL}{dx}&=5-\frac{400}{x^2} \end{align}
To minimize $L$, set $\frac{dL}{dx}=0$, which gives $x=\sqrt{80}=4\sqrt{5} \approx 8.94$ and $y=5\sqrt{5} \approx 11.18$.

The question is not uniquely solvable because we don't know the barn's dimensions.

• That's not true. @K.Rmth gives the correct solution. The trick is putting your $y$ lengths in terms of the given perimeter and $x$, then finding critical points. – Jaycob Coleman Nov 7 '13 at 22:37
• That is, under the assumption that the barn is as large as necessary for the maximized area, which is typically assumed for these problems. That this should be part of the question statement is certainly debatable. – Jaycob Coleman Nov 7 '13 at 22:52
• So the farmer's barn is 100 meters long, maybe even longer? Lucky him! Btw: from $A=x(200-2x)$ you can conclude $x=50$ immediately. That's the "trick." No calculus needed. – Michael Hoppe Nov 8 '13 at 6:11
• Indeed it isn't necessary, but if you take a look at the other question OP has asked and the fact that both are tagged as calculus and ask essentially about finding critical points, it's pretty obvious that OP is a calc I student looking for help with how to solve basic maximization/minimization problems using calculus. – Jaycob Coleman Nov 8 '13 at 7:32

Derivatives are not needed for this problem. The length restriction:$$2x+y=200$$The area restriction:$$A=x\cdot y$$Combining:$$2x+\frac{A}{x}=200$$ $$2x^2-200x+A=0$$The discriminant of this quadratic is:$$D=(-200)^2-8A$$Now, what is the maximum value of $A$ that still results in a real solution?

The same procedure will solve Part B, with the appropriate area and length expressions, and considering the minimum $L$ that permits a real solution. The length restriction:$$5x+4y=L$$The area restriction:$$100=x\cdot y$$Combining (and multiplying through by $x$):$$5x+\frac{400}{x}=L$$ $$5x^2-Lx+400=0$$The discriminant of this quadratic is:$$D=L^2-4\cdot 5\cdot 400$$Now, what is the minimum value of $L$ that still results in a real solution?
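As a sanity check, both optima can also be confirmed numerically without any calculus by scanning candidate widths on a grid (a sketch, not part of the original answers; the 0.01 m grid step is an arbitrary choice):

```python
# Part A: 2x + y = 200, maximize A(x) = x * (200 - 2x).
def area(x):
    return x * (200 - 2 * x)

# Part B: each of 4 pens has area 100, so y = 100 / x; minimize L(x) = 5x + 4y.
def fence(x):
    return 5 * x + 400 / x

xs = [i / 100 for i in range(1, 10000)]  # candidate widths 0.01 .. 99.99 m

best_a = max(xs, key=area)
print(best_a, area(best_a))  # -> 50.0 5000.0

best_b = min(xs, key=fence)
print(best_b)                # close to sqrt(80) = 4*sqrt(5), about 8.94
```

The grid answers agree with the calculus and discriminant methods above: x = 50 for part A, and x near √80 with a minimum fence length near 2√2000 ≈ 89.44 m for part B.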
https://cakebake.az/sq9bz7h/219300-a-matrix-which-is-symmetric-and-skew-symmetric
What is a Skew-Symmetric Matrix?

A symmetric matrix is a matrix whose transpose is equal to the matrix itself, whereas a skew-symmetric matrix is a matrix whose transpose is equal to the negative of the matrix. Both symmetric and skew-symmetric matrices are square matrices.

A square matrix A of size n × n is said to be skew-symmetric if

[aij] = −[aji], for 1 ≤ i ≤ n and 1 ≤ j ≤ n,

where [aij] is the element at position (i, j), i.e. the i-th row and j-th column of A, and [aji] is the element at position (j, i). There are some rules that come from the concept of symmetric and skew-symmetric matrices; they are discussed below. A related exercise: prove that if A^T A = A, then A is a symmetric idempotent matrix.

Question 35. If A is a matrix of order m × n and B is a matrix such that AB′ and B′A are both defined, then the order of matrix B is (a) m × m (b) n × n (c) n × m (d) m × n. Answer: (d) m × n.
Any square matrix can be written as the sum of a symmetric and a skew-symmetric matrix, and such a sum is of course again a square matrix. For a square matrix A,

A = (1/2)(A + A^T) + (1/2)(A − A^T),

where (1/2)(A + A^T) is symmetric and (1/2)(A − A^T) is skew-symmetric. To verify, check that the sum of (1/2)(A + A^T) and (1/2)(A − A^T) is the same as A. For a concrete matrix the steps are:

Step 1: find the transpose A^T.
Step 2: calculate A + A^T.
Step 3: calculate A − A^T.

The required symmetric part is then (1/2)(A + A^T) and the skew-symmetric part is (1/2)(A − A^T); in other words, our job is to write A = B + C, where B is symmetric and C is a skew-symmetric matrix.

The general form of a 3 × 3 skew-symmetric matrix is

 0  −b  −c
 b   0  −d
 c   d   0

and the general form of a 3 × 3 symmetric matrix is

 a  b  c
 b  e  d
 c  d  f

Since the eigenvalues of a real skew-symmetric matrix are imaginary, it is not possible to diagonalize one by a real matrix.

Example question: for what value of x is the matrix A = [(0, 1, −2), (−1, 0, 3), (x, −3, 0)] a skew-symmetric matrix? (x = 2, since the (3,1) entry must equal the negative of the (1,3) entry.)
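The "for what value of x" question can be settled by testing the skew-symmetry condition directly; here is a minimal Python sketch using plain nested lists (the helper name is my own, not from the source):

```python
def is_skew_symmetric(m):
    """True if m is square and m[j][i] == -m[i][j] for all i, j."""
    n = len(m)
    if any(len(row) != n for row in m):
        return False  # not square, so it cannot be skew-symmetric
    return all(m[j][i] == -m[i][j] for i in range(n) for j in range(n))

A = [[ 0,  1, -2],
     [-1,  0,  3],
     [ 2, -3,  0]]  # x = 2 placed in the (3,1) position

print(is_skew_symmetric(A))  # -> True
```

Note that the condition automatically forces every diagonal entry to be zero, since m[i][i] must equal -m[i][i].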
A is symmetric ⟺ A = A^T, and A is skew-symmetric ⟺ A^T = −A. The entries of a symmetric matrix are symmetric with respect to the main diagonal. Because equal matrices have equal dimensions, only square matrices can be symmetric or skew-symmetric.

Theorem: for any square matrix A with real number entries, A + A^T is a symmetric matrix and A − A^T is a skew-symmetric matrix. This follows since (A + A^T)^T = A^T + A and (A − A^T)^T = A^T − A = −(A − A^T).

All diagonal elements of a skew-symmetric matrix are zero (the condition a_ii = −a_ii forces a_ii = 0), while for a symmetric matrix the diagonal entries can take any value.

Sample problem: show that the product A^T A is always a symmetric matrix. (Sketch: (A^T A)^T = A^T (A^T)^T = A^T A.)
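The sample problem about A^T A can be spot-checked numerically; a minimal Python sketch with an arbitrary example matrix (the helper names and the example matrix are illustrative, not from the source):

```python
def transpose(m):
    return [list(row) for row in zip(*m)]

def matmul(a, b):
    # naive row-by-column product for nested-list matrices
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

A = [[1, 2, 3],
     [4, 5, 6]]               # arbitrary 2x3 example; A itself is not square

P = matmul(transpose(A), A)   # A^T A is 3x3
print(P == transpose(P))      # -> True
```

Note that A need not be square for A^T A to be defined; the product is always square and, as the identity above shows, always symmetric.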
What is a symmetric and a skew-symmetric matrix? A symmetric matrix is a matrix whose transpose is equal to the matrix itself; a skew-symmetric matrix is a matrix whose transpose is equal to the negative of the matrix (transpose of A = -A). A square matrix A is said to be symmetric if A^T = A, and skew-symmetric if A^T = -A. If matrix A is a square matrix then (A - A^T) is always skew-symmetric. Also, this means that each odd degree skew-symmetric matrix has the eigenvalue $0$. Two related facts: the determinant of a skew-symmetric matrix is equal to zero if its order is odd, and the determinant of a matrix is equal to the determinant of its transpose. To check whether a matrix is symmetric or not, we first find the transposed form of the given matrix and compare it with the original. To express a given matrix A as the sum B + C of a symmetric and a skew-symmetric matrix:

Step 1: find the transpose of A.
Step 2: calculate $$A+A^{T}$$.
Step 3: calculate $$A-A^{T}$$.

The required symmetric matrix is half of $$A+A^{T}$$, and the required skew-symmetric matrix is half of $$A-A^{T}$$.
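The three-step decomposition above can be checked numerically. A minimal sketch in plain Python — the matrix A is an arbitrary example, not one taken from the text:

```python
def transpose(M):
    return [list(row) for row in zip(*M)]

# Step 1: transpose of A; Step 2: symmetric part (A + A^T)/2;
# Step 3: skew-symmetric part (A - A^T)/2.
A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
T = transpose(A)
n = len(A)
B = [[(A[i][j] + T[i][j]) / 2 for j in range(n)] for i in range(n)]
C = [[(A[i][j] - T[i][j]) / 2 for j in range(n)] for i in range(n)]

assert B == transpose(B)                                  # B is symmetric
assert C == [[-x for x in row] for row in transpose(C)]   # C is skew-symmetric
assert all(B[i][j] + C[i][j] == A[i][j] for i in range(n) for j in range(n))
```

The halves are needed because (A + A^T) + (A - A^T) = 2A, so each part must be scaled by 1/2 for the two pieces to sum back to A.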
If a_ij denotes the entry in the i-th row and j-th column, i.e. A = (a_ij), then the skew-symmetric condition is a_ji = -a_ij. Let $\textbf A$ denote the space of symmetric $(n\times n)$ matrices over the field $\mathbb K$, and $\textbf B$ the space of skew-symmetric $(n\times n)$ matrices over the field $\mathbb K$. The result implies that every odd degree skew-symmetric matrix is not invertible, or equivalently singular. A matrix is skew-symmetric if the transpose of the matrix is the negative of itself. Real skew-symmetric matrices are normal matrices (they commute with their adjoints) and are thus subject to the spectral theorem, which states that any real skew-symmetric matrix can be diagonalized by a unitary matrix. Again, the difference between the two kinds is that a symmetric matrix equals its transpose while a skew-symmetric matrix's transpose equals its negative. This implies A - A^T is a skew-symmetric matrix.

Exercise: if A is a skew-symmetric matrix and n is an odd natural number, state whether A^n is symmetric, skew-symmetric, or neither of the two.

Any square matrix can be expressed as the sum of a symmetric matrix and a skew-symmetric matrix. A matrix is symmetric if, when we transform all the rows of the matrix into the respective columns, we get the same matrix back.
Notice that an n × n matrix A is symmetric if and only if a_ij = a_ji, and A is skew-symmetric if and only if a_ij = -a_ji, for all i, j such that 1 ≤ i, j ≤ n. In other words, the entries above the main diagonal are reflected into equal (for symmetric) or opposite (for skew-symmetric) entries below the diagonal. From Theorem 7.1, it follows that (A + A^T) and (A - A^T) are symmetric and skew-symmetric, respectively.

Example. Input matrix:

0 5 -4
-5 0 1
4 -1 0

Transpose:

0 -5 4
5 0 -1
-4 1 0

The transpose is the negative of the original, so this is a skew-symmetric matrix.

Question: if the matrix A is both symmetric and skew-symmetric, is A necessarily a diagonal matrix? In fact more is true: as shown in the earlier derivation, A must be the zero matrix.

To test whether a matrix is symmetric or skew-symmetric, write A as a sum of a symmetric and a skew-symmetric part: (A + A') + (A - A') = 2A, so 1/2 [(A + A') + (A - A')] = A, i.e. 1/2 (A + A') + 1/2 (A - A') = A. Here 1/2 (A + A') is the symmetric part and 1/2 (A - A') is the skew-symmetric part.
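The odd-order determinant fact can be sanity-checked on the 3×3 skew-symmetric example above. A small sketch using a naive cofactor determinant (illustrative only, fine for tiny matrices):

```python
# The 3x3 skew-symmetric matrix from the example above.
M = [[0, 5, -4],
     [-5, 0, 1],
     [4, -1, 0]]

def det(A):
    """Cofactor expansion along the first row (naive, for tiny matrices)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

# The transpose equals the negative of M, so M is skew-symmetric ...
assert all(M[i][j] == -M[j][i] for i in range(3) for j in range(3))
# ... and, being of odd order, its determinant vanishes.
assert det(M) == 0
```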
Symmetry leads to the condition $a_{ij} = a_{ji}$. Here we discuss symmetric and skew-symmetric matrices. If matrix A is symmetric, then A^T = A; if matrix A is skew-symmetric, then A^T = -A and the diagonal elements are zero. Now, given that a matrix A is both symmetric and skew-symmetric, A = A^T = -A, which is only possible if A is the zero matrix. If we write A = P + Q, then P is symmetric, Q is skew-symmetric, and A is the sum of P and Q. Since for any matrix A, (kA)' = kA', it follows that 1/2 (A + A') is a symmetric matrix and 1/2 (A - A') is a skew-symmetric matrix.

Exercises:
1. Show that a matrix which is both symmetric and skew-symmetric is a zero matrix.
2. If A = ((3,5),(7,9)) is written as A = P + Q, where P is a symmetric matrix and Q is a skew-symmetric matrix, write the matrix P.
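Exercise 2 above can be answered with the same half-sum/half-difference split. A quick sketch, with the matrix values taken from the exercise:

```python
# Split A = ((3,5),(7,9)) into P (symmetric) and Q (skew-symmetric),
# using P = (A + A')/2 and Q = (A - A')/2.
A = [[3, 5],
     [7, 9]]
At = [[A[j][i] for j in range(2)] for i in range(2)]   # transpose of A
P = [[(A[i][j] + At[i][j]) / 2 for j in range(2)] for i in range(2)]
Q = [[(A[i][j] - At[i][j]) / 2 for j in range(2)] for i in range(2)]

print(P)   # [[3.0, 6.0], [6.0, 9.0]]
print(Q)   # [[0.0, -1.0], [1.0, 0.0]]
```

So the required symmetric matrix is P = ((3,6),(6,9)).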
A = A', or, equivalently, (a_ij) = (a_ji). That is, a symmetric matrix is a square matrix that is equal to its transpose. Skew-symmetry leads to the condition on the matrix components $a_{ij} = -a_{ji}$, where $i, j$ refer to the row and column of the particular component; a matrix is skew-symmetric if the transpose of the matrix is the negative of itself.

Exercise: let A = [a_ij] be the 3×3 matrix with a_ij = i - j. Show that A + A^T and AA^T are symmetric matrices, and that A - A^T is a skew-symmetric matrix.
http://codeforces.com/problemset/problem/346/E
E. Doodle Jump
time limit per test: 2 seconds
memory limit per test: 256 megabytes
input: standard input
output: standard output

In Doodle Jump the aim is to guide a four-legged creature called "The Doodler" up a never-ending series of platforms without falling. — Wikipedia.

It is a very popular game and xiaodao likes it very much. One day when playing the game she wondered whether there exists a platform that the doodler couldn't reach due to the limits of its jumping ability. Consider the following problem.

There are n platforms. The height of the x-th (1 ≤ x ≤ n) platform is a·x mod p, where a and p are positive co-prime integers. The maximum possible height of a Doodler's jump is h. That is, it can jump from height h1 to height h2 (h1 < h2) if h2 - h1 ≤ h. Initially, the Doodler is on the ground, the height of which is 0. The question is whether it can reach the highest platform or not.

For example, when a = 7, n = 4, p = 12, h = 2, the heights of the platforms are 7, 2, 9, 4. With the first jump the Doodler can jump to the platform at height 2, with the second one the Doodler can jump to the platform at height 4, but then it can't jump to any of the higher platforms. So, it can't reach the highest platform.

User xiaodao thought about the problem for a long time but didn't solve it, so she asks you for help. Also, she has a lot of instances of the problem. Your task is solve all of these instances.

Input
The first line contains an integer t (1 ≤ t ≤ 10^4) — the number of problem instances. Each of the next t lines contains four integers a, n, p and h (1 ≤ a ≤ 10^9, 1 ≤ n < p ≤ 10^9, 0 ≤ h ≤ 10^9). It's guaranteed that a and p are co-prime.

Output
For each problem instance, if the Doodler can reach the highest platform, output "YES", otherwise output "NO".

Example
Input
3
7 4 12 2
7 1 9 4
7 4 12 3

Output
NO
NO
YES
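For intuition, the statement can be brute-forced on tiny inputs: sort the platform heights and check that no consecutive gap (starting from the ground at height 0) exceeds h. This is only a testing aid — at O(n log n) per instance it is far too slow for n up to 10^9, so it is not the intended solution:

```python
def can_reach(a, n, p, h):
    """Brute-force check: feasible iff every gap in the sorted
    height sequence (starting from the ground, 0) is at most h."""
    heights = sorted((a * x) % p for x in range(1, n + 1))
    prev = 0  # the Doodler starts on the ground
    for cur in heights:
        if cur - prev > h:
            return False
        prev = cur
    return True

# The three sample instances from the statement.
for a, n, p, h in [(7, 4, 12, 2), (7, 1, 9, 4), (7, 4, 12, 3)]:
    print("YES" if can_reach(a, n, p, h) else "NO")
```

On the samples this prints NO, NO, YES, matching the expected output.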
https://proofwiki.org/wiki/Product_of_Ring_Negatives
# Product of Ring Negatives

## Theorem

Let $\struct {R, +, \circ}$ be a ring.

Then:

$\forall x, y \in \struct {R, +, \circ}: \paren {-x} \circ \paren {-y} = x \circ y$

where $\paren {-x}$ denotes the negative of $x$.

## Proof

We have:

\begin{align}
\paren {-x} \circ \paren {-y} &= -\paren {x \circ \paren {-y} } && \text{Product with Ring Negative} \\
&= -\paren {-\paren {x \circ y} } && \text{Product with Ring Negative} \\
&= x \circ y && \text{Negative of Ring Negative}
\end{align}

$\blacksquare$
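As a numeric sanity check (not a proof), the identity can be exercised in the concrete ring Z/12Z, where the negative of x is (-x) mod 12 and the ring product is multiplication mod 12:

```python
# Check (-x)(-y) = xy for every pair of elements of Z/12Z.
# This only illustrates the theorem in one finite ring; the proof
# above covers arbitrary rings.
n = 12
for x in range(n):
    for y in range(n):
        assert (((-x) % n) * ((-y) % n)) % n == (x * y) % n
print("identity holds in Z/%d" % n)
```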
https://zbmath.org/?q=an%3A1022.91045
## Robust preferences and convex measures of risk. (English) Zbl 1022.91045

Sandmann, Klaus (ed.) et al., Advances in finance and stochastics. Essays in honour of Dieter Sondermann. Berlin: Springer. 39-56 (2002).

The reviewed paper presents robust representation theorems for monetary measures of risk in a situation of uncertainty, where no probability measure is given a priori (in the case of a measurable space as well as in the case of a topological space of scenarios). The problem of computing the monetary measure of risk induced by a subjective loss functional, which appears in the robust Savage representation (of the preference order), is discussed.

For the entire collection see [Zbl 0986.00085].

### MSC:

91B82 Statistical methods; economic indices and measures
91B28 Finance etc. (MSC2000)
http://forum.allaboutcircuits.com/threads/please-help-me-understand-this.25020/
Discussion in 'General Electronics Chat' started by tjterblanche, Jun 23, 2009.

1. ### tjterblanche (Thread Starter)
Hi All,
Referring to Question 9 of the worksheet titled "Series and parallel AC circuits" (http://www.allaboutcircuits.com/worksheets/ac_s_p.html), I get the phase angle at -42.08° by doing the following steps:
1.) Calculated Xc = 0 - j1354.51Ω
2.) Calculated the phase difference at -42.08° using θ = arctan(Xc/R)
The worksheet answer is -47.9°, which seems like it's a complementary angle to -42.08°, not -42.08° as calculated in step 2 above?
Sincerely, tjterblanche

2. ### GetDeviceInfo (Senior Member)
Because you are basically measuring across the capacitor, you will be referring to the reactive component voltage, which for the angle you would then use arcsine(Xc/Z). Oops, that's wrong — you would use arcsine(R/Z).
Last edited: Jun 23, 2009

3. ### tjterblanche (Thread Starter)
It seems like you're calculating the complementary angle to 42.08° when you use θ = arcsine(R/Z)? IMHO there are two ways to draw the phasor triangle for any circuit with a single reactive component.
1.) You start by drawing the reactive component's phasor first (from the origin); you then draw the resistive phasor with the end of the reactive phasor as its origin.
2.) You draw the resistive phasor first; you then draw the reactive phasor with the end of the resistive phasor as its origin.
Both these phasor diagrams will have the exact same hypotenuse (magnitude and angle), but their right angles are at opposite sides of the hypotenuse. Which phasor diagram do you use and why, because it determines the angle that θ points to?

4.
### tjterblanche (Thread Starter)
I wanted to include this as an example: the VERY FIRST phasor diagram on the Series Resistor-Capacitor Circuits page (http://www.allaboutcircuits.com/vol_2/chpt_4/3.html) shows the phase angle between $E_{R}$ and $E_{T}$ to be -79.3°. The way I understand it, θ = arcsine(R/Z) would calculate the phase angle between $E_{C}$ and $E_{T}$.

5. ### GetDeviceInfo (Senior Member)
I think you're reading too much into it. There is only one reactive component, and by convention, the right angle is to the left.
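The two angles being discussed are indeed complementary. A quick numeric sketch — Xc is taken from the original post, while R = 1500 Ω is an assumed value (not stated in the thread), chosen because it reproduces the poster's -42.08°:

```python
import math

# Xc from the original post; R = 1500 ohm is an assumption.
Xc = 1354.51
R = 1500.0

theta_arctan = math.degrees(math.atan(Xc / R))   # the poster's arctan(Xc/R)
Z = math.hypot(R, Xc)                            # impedance magnitude
theta_arcsin = math.degrees(math.asin(R / Z))    # the arcsine(R/Z) suggestion

print(round(theta_arctan, 2))   # ~42.08, the poster's angle
print(round(theta_arcsin, 2))   # ~47.92, the worksheet's angle
print(round(theta_arctan + theta_arcsin, 6))   # the two sum to 90 degrees
```

So the two computations pick out the two non-right angles of the same phasor triangle, which is exactly why they differ by a complement.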
https://edurev.in/studytube/Math-Solutions--Exercise-1-4--Real-Numbers--Class-/8e558b4f-8ba0-4419-98c3-694f832c4e56_t
# Math Solutions (Exercise 1.4) - Real Numbers, Class 10, Mathematics

NCERT Math Solutions (Exercise 1.4) (Page 17)

Q1: Without actually performing the long division, state whether the following rational numbers will have a terminating decimal expansion or a non-terminating repeating decimal expansion:

Solution:
(i) The denominator can be written in the form 5^m. Hence, this decimal expansion is terminating.
(ii) Since the denominator is of the form 2^m, this decimal expansion is terminating.
(iii) Since the denominator cannot be written in the form 2^m × 5^n and it also contains the factors 7 and 13, its decimal expansion will be non-terminating repeating.
(iv) As the denominator is of the form 2^m × 5^n, this decimal expansion is terminating.
(v) Since the denominator cannot be written in the form 2^m × 5^n and it has 7 as a factor, this decimal expansion is non-terminating repeating.
(vi) The denominator is of the form 2^m × 5^n; therefore, it is a terminating decimal expansion.
(vii) Since the denominator is not of the form 2^m × 5^n, and it also contains another factor 7, this decimal expansion is non-terminating repeating.
(viii) The denominator can be written in the form 2^m × 5^n. Hence, this decimal expansion is terminating.
(ix) Since the denominator is of the form 2^m × 5^n, this decimal expansion is terminating.
(x) Since the denominator is not of the form 2^m × 5^n and also contains 3 as a factor, this decimal expansion is non-terminating repeating.

Q2: Write down the decimal expansions of those rational numbers in Question 1 above which have terminating decimal expansions.
Solution: Q3: The following real numbers have decimal expansions as given below. In each case, decide whether they are rational or not. If they are rational, and of the form p/q, what can you say about the prime factor of q? Solution: (i) 43.123456789 This number has a terminating decimal expansion. So, it is a rational number of the form p/q and q is of the form (ii) 0.120120012000120000 … The given decimal expansion is non-terminating and non-recurring. Therefore, it is an irrational number. Since the given decimal expansion is non-terminating, so it is a rational number of the form p/q and q is not of the form 2m x 5n i.e., the prime factors of q will also have a factor other than 2 or 5. Offer running on EduRev: Apply code STAYHOME200 to get INR 200 off on our premium plan EduRev Infinity! , , , , , , , , , , , , , , , , , , , , , , , , , , , ;
2021-09-28 11:28:09
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8537872433662415, "perplexity": 727.2995596981108}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060677.55/warc/CC-MAIN-20210928092646-20210928122646-00509.warc.gz"}
https://qibo.readthedocs.io/en/latest/api-reference/qibo.html
# Models Qibo provides models for both the circuit based and the adiabatic quantum computation paradigms. Circuit based models include Circuit models which allow defining arbitrary circuits and Quantum Fourier Transform (QFT) such as the Quantum Fourier Transform (qibo.models.QFT) and the Variational Quantum Eigensolver (qibo.models.VQE). Adiabatic quantum computation is simulated using the Time evolution of state vectors. The general purpose model is called Circuit and holds the list of gates that are applied to the state vector or density matrix. All Circuit models inherit the qibo.abstractions.circuit.AbstractCircuit which implements basic properties of the circuit, such as the list of gates and the number of qubits. In order to perform calculations and apply gates to a state vector a backend has to be used. The main Circuit used for simulation is defined in qibo.core.circuit.Circuit. This uses an abstract backend object K to perform calculation which can be one of the backends defined in qibo/backends. Qibo uses big-endian byte order, which means that the most significant qubit is the one with index 0, while the least significant qubit is the one with the highest index. ## Circuit models ### Abstract circuit class qibo.abstractions.circuit.AbstractCircuit(nqubits) Circuit object which holds a list of gates. This circuit is symbolic and cannot perform calculations. A specific backend has to be used for performing calculations. All backend-based circuits should inherit AbstractCircuit. Qibo provides the following circuits: All circuits use core as the computation backend. Parameters nqubits (int) – Total number of qubits in the circuit. on_qubits(*qubits) Generator of gates contained in the circuit acting on specified qubits. Useful for adding a circuit as a subroutine in a larger circuit. Parameters qubits (int) – Qubit ids that the gates should act. 
Example

from qibo import gates, models
# create small circuit on 4 qubits
smallc = models.Circuit(4)
smallc.add((gates.RX(i, theta=0.1) for i in range(4)))
# create large circuit on 8 qubits
largec = models.Circuit(8)
largec.add((gates.RY(i, theta=0.1) for i in range(8)))
# add the small circuit to the even qubits of the large one
largec.add(smallc.on_qubits(*range(0, 8, 2)))

light_cone(*qubits)

Reduces the circuit to the qubits relevant for an observable. Useful for calculating expectation values of local observables without requiring simulation of large circuits. Uses the light cone construction described in issue #571.

Parameters

qubits (int) – Qubit ids that the observable has support on.

Returns

circuit (qibo.models.Circuit): Circuit that contains only the qubits that are required for calculating the expectation value involving the given observable qubits.
qubit_map (dict): Dictionary mapping the qubit ids of the original circuit to the ids in the new one.

copy(deep: bool = False)

Creates a copy of the current circuit as a new Circuit model.

Parameters

deep (bool) – If True copies of the gate objects will be created for the new circuit. If False, the same gate objects of circuit will be used.

Returns

The copied circuit object.

invert()

Creates a new Circuit that is the inverse of the original. Inversion is obtained by taking the dagger of all gates in reverse order. If the original circuit contains measurement gates, these are included in the inverted circuit.

Returns

The circuit inverse.

decompose(*free: int)

Decomposes circuit's gates to gates supported by OpenQASM.

Parameters

free – Ids of free (work) qubits to use for gate decomposition.

Returns

Circuit that contains only gates that are supported by OpenQASM and has the same effect as the original circuit.

with_noise(noise_map: Union[Tuple[int, int, int], Dict[int, Tuple[int, int, int]]])

Creates a copy of the circuit with noise gates after each gate.
If the original circuit uses state vectors then noise simulation will be done using sampling and repeated circuit execution. In order to use density matrices the original circuit should be created with the density_matrix flag set to True. For more information we refer to the How to perform noisy simulation? example.

Parameters

noise_map (dict) – Dictionary that maps qubit ids to noise probabilities (px, py, pz). If a tuple of probabilities (px, py, pz) is given instead of a dictionary, then the same probabilities will be used for all qubits.

Returns

Circuit object that contains all the gates of the original circuit and additional noise channels on all qubits after every gate.

Example

from qibo.models import Circuit
from qibo import gates
# use density matrices for noise simulation
c = Circuit(2, density_matrix=True)
noise_map = {0: (0.1, 0.0, 0.2), 1: (0.0, 0.2, 0.1)}
noisy_c = c.with_noise(noise_map)
# noisy_c will be equivalent to the following circuit
c2 = Circuit(2, density_matrix=True)

check_measured(gate_qubits: Tuple[int])

Checks if the qubits that a gate acts on are already measured and raises a NotImplementedError if they are, because currently we do not allow measured qubits to be reused.

add(gate)

Add a gate to the circuit queue.

Parameters

gate (qibo.abstractions.gates.Gate) – the gate object to add. See Gates for a list of available gates. gate can also be an iterable or generator of gates. In this case all gates in the iterable will be added to the circuit.

Returns

If the circuit contains measurement gates with collapse=True, a sympy.Symbol that parametrizes the corresponding outcome.

property ngates: int

Total number of gates/operations in the circuit.

property depth: int

Circuit depth if each gate is placed at the earliest possible position.

property gate_types: collections.Counter

collections.Counter with the number of appearances of each gate type. The QASM names are used as gate identifiers.
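The bookkeeping behind gate_types can be sketched without Qibo: it is simply a collections.Counter over QASM-style gate names. The queue list below is a hypothetical stand-in for a circuit's gate queue:

```python
from collections import Counter

# Hypothetical gate queue, represented by QASM-style gate names,
# mimicking how gate_types tallies the appearances of each gate type.
queue = ["h", "h", "h", "cx", "cx", "ccx"]
gate_types = Counter(queue)

print(gate_types["h"], gate_types["cx"], gate_types["ccx"])  # 3 2 1
```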
gates_of_type(gate: Union[str, type]) Finds all gate objects of specific type. Parameters gate (str, type) – The QASM name of a gate or the corresponding gate class. Returns List with all gates that are in the circuit and have the same type with the given gate. The list contains tuples (i, g) where i is the index of the gate g in the circuit’s gate queue. set_parameters(parameters) Updates the parameters of the circuit’s parametrized gates. For more information on how to use this method we refer to the How to use parametrized gates? example. Parameters parameters – Container holding the new parameter values. It can have one of the following types: List with length equal to the number of parametrized gates and each of its elements compatible with the corresponding gate. Dictionary with keys that are references to the parametrized gates and values that correspond to the new parameters for each gate. Flat list with length equal to the total number of free parameters in the circuit. A backend supported tensor (for example np.ndarray or tf.Tensor) may also be given instead of a flat list. Example from qibo.models import Circuit from qibo import gates # create a circuit with all parameters set to 0. c = Circuit(3) # set new values to the circuit's parameters using list params = [0.123, 0.456, (0.789, 0.321)] c.set_parameters(params) # or using dictionary params = {c.queue[0]: 0.123, c.queue[1]: 0.456, c.queue[3]: (0.789, 0.321)} c.set_parameters(params) # or using flat list (or an equivalent np.array/tf.Tensor) params = [0.123, 0.456, 0.789, 0.321] c.set_parameters(params) get_parameters(format: str = 'list', include_not_trainable: bool = False) Returns the parameters of all parametrized gates in the circuit. Inverse method of qibo.abstractions.circuit.AbstractCircuit.set_parameters(). Parameters • format (str) – How to return the variational parameters. Available formats are 'list', 'dict' and 'flatlist'. 
See qibo.abstractions.circuit.AbstractCircuit.set_parameters() for more details on each format. Default is 'list'.

• include_not_trainable (bool) – If True it includes the parameters of non-trainable parametrized gates in the returned list or dictionary. Default is False.

summary() → str

Generates a summary of the circuit. The summary contains the circuit depth, the total number of qubits and all gates sorted by decreasing number of appearances.

Example

from qibo.models import Circuit
from qibo import gates
c = Circuit(3)
print(c.summary())
# Prints
'''
Circuit depth = 5
Total number of gates = 6
Number of qubits = 3
Most common gates:
h: 3
cx: 2
ccx: 1
'''

abstract property final_state

Returns the final state after full simulation of the circuit. If the circuit is executed more than once, only the last final state is returned.

abstract execute(initial_state=None, nshots=None)

Executes the circuit. Exact implementation depends on the backend. See qibo.core.circuit.Circuit.execute() for more details.

to_qasm()

Convert circuit to QASM.

Parameters

filename (str) – The filename where the code is saved.

classmethod from_qasm(qasm_code: str, **kwargs)

Constructs a circuit from QASM code.

Parameters

qasm_code (str) – String with the QASM script.

Returns

A qibo.abstractions.circuit.AbstractCircuit that contains the gates specified by the given QASM script.

Example

from qibo import models, gates
qasm_code = '''OPENQASM 2.0;
include "qelib1.inc";
qreg q[2];
h q[0];
h q[1];
cx q[0],q[1];'''
c = models.Circuit.from_qasm(qasm_code)
# is equivalent to creating the following circuit
c2 = models.Circuit(2)

draw(line_wrap=70, legend=False) → str

Draw text circuit using unicode symbols.

Parameters

• line_wrap (int) – maximum number of characters per line. This option splits the circuit text diagram in chunks of line_wrap characters.
• legend (bool) – If True prints a legend below the circuit for callbacks and channels. Default is False.

Returns

String containing text circuit diagram.
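The big-endian convention stated in the introduction can be checked numerically without any Qibo code: qubit 0 is the most significant bit of the computational-basis index, so the two-qubit state |q0=1, q1=0> occupies index 0b10 = 2 of the state vector:

```python
import numpy as np

zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])

# Big-endian ordering: the left factor of the Kronecker product is qubit 0,
# the most significant bit of the basis-state index.
state = np.kron(one, zero)   # |q0=1, q1=0>
print(int(np.argmax(np.abs(state))))  # 2
```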
### Circuit class qibo.core.circuit.Circuit(nqubits) Backend implementation of qibo.abstractions.circuit.AbstractCircuit. Performs simulation using state vectors. Example from qibo import models, gates c = models.Circuit(3) # initialized circuit with 3 qubits Parameters nqubits (int) – Total number of qubits in the circuit. fuse(max_qubits=2) Creates an equivalent circuit by fusing gates for increased simulation performance. Parameters max_qubits (int) – Maximum number of qubits in the fused gates. Returns A qibo.core.circuit.Circuit object containing qibo.abstractions.gates.FusedGate gates, each of which corresponds to a group of some original gates. For more details on the fusion algorithm we refer to the Circuit fusion section. Example from qibo import models, gates c = models.Circuit(2) # create circuit with fused gates fused_c = c.fuse() # now fused_c contains a single FusedGate that is # equivalent to applying the five original gates compile() Compiles the circuit as a Tensorflow graph. execute(initial_state=None, nshots=None) Propagates the state through the circuit applying the corresponding gates. If channels are found within the circuits gates then Qibo will perform the simulation by repeating the circuit execution nshots times. If the circuit contains measurements the corresponding noisy measurement result will be returned, otherwise the final state vectors will be collected to a (nshots, 2 ** nqubits) tensor and returned. The latter usage is memory intensive and not recommended. If the circuit is created with the density_matrix = True flag and contains channels, then density matrices will be used instead of repeated execution. Note that some channels (qibo.abstractions.gates.KrausChannel) can only be simulated using density matrices and not repeated execution. For more details on noise simulation with and without density matrices we refer to How to perform noisy simulation? 
Parameters

• initial_state (array) – Initial state vector as a numpy array of shape (2 ** nqubits,). A Tensorflow tensor with shape nqubits * (2,) is also allowed as an initial state but must have the dtype of the circuit. If initial_state is None the |000...0> state will be used.
• nshots (int) – Number of shots to sample if the circuit contains measurement gates. If nshots is None the measurement gates will be ignored.

Returns

A qibo.abstractions.states.AbstractState object which holds the final state vector as a tensor of shape (2 ** nqubits,) or the final density matrix as a tensor of shape (2 ** nqubits, 2 ** nqubits). If nshots is given and the circuit contains measurements the returned circuit object also contains the measured bitstrings.

property final_state

Final state as a tensor of shape (2 ** nqubits,). The circuit has to be executed at least once before accessing this property, otherwise a ValueError is raised. If the circuit is executed more than once, only the last final state is returned.

qibo.abstractions.circuit.AbstractCircuit objects support addition. For example

from qibo import models
from qibo import gates
c1 = models.QFT(4)
c2 = models.Circuit(4)
c2.add((gates.RZ(i, theta=0.1) for i in range(4)))
c = c1 + c2

will create a circuit that performs the Quantum Fourier Transform on four qubits followed by Rotation-Z gates.

### Circuit fusion

The gates contained in a circuit can be fused into gates of up to two qubits using the qibo.core.circuit.Circuit.fuse() method. This returns a new circuit for which the total number of gates is smaller than in the original circuit, as groups of gates have been fused into a single qibo.abstractions.gates.FusedGate gate. Simulating the new circuit is equivalent to simulating the original one but in most cases more efficient, since fewer gates need to be applied to the state vector.

The fusion algorithm works as follows: First all gates in the circuit are transformed to unmarked qibo.abstractions.gates.FusedGate.
The gates are then processed in the order they were added to the circuit. For each gate we identify its neighbors forth and back in time and attempt to fuse them to the gate. Two gates can be fused if their total number of target qubits is smaller than the fusion maximum qubits (specified by the user) and there are no other gates in between that act on the same target qubits. Gates that are fused to others are marked. The new circuit queue contains the gates that remain unmarked after the above operations finish.

Gates are processed in the original order given by the user. There are no additional simplifications performed, such as commuting gates acting on the same qubit or canceling gates, even when such simplifications are mathematically possible.

The user can specify the maximum number of qubits in a fused gate using the max_qubits flag in qibo.core.circuit.Circuit.fuse(). For example the following:

from qibo import models, gates
c = models.Circuit(2)
fused_c = c.fuse()

will create a new circuit with a single qibo.abstractions.gates.FusedGate acting on (0, 1), while the following:

from qibo import models, gates
c = models.Circuit(3)
fused_c = c.fuse()

will give a circuit with two fused gates, the first of which will act on (0, 1) corresponding to [H(0), H(1), CZ(0, 1), X(0), H(0)] and the second will act on (1, 2) corresponding to [Y(1), Z(2), CNOT(1, 2), H(1), H(2)].

### Density matrix circuit

class qibo.core.circuit.DensityMatrixCircuit(nqubits)

Backend implementation of qibo.abstractions.circuit.AbstractCircuit. Performs simulation using density matrices. Can be initialized using the density_matrix=True flag and supports the use of channels. For more information on the use of density matrices we refer to the Using density matrices? example.

Example

from qibo import models, gates
c = models.Circuit(2, density_matrix=True)

Parameters

nqubits (int) – Total number of qubits in the circuit.
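Why channels require density matrices can be seen in a short numpy sketch, independent of Qibo: a bit-flip channel with an arbitrary probability p maps the pure state |0> to a mixed state, whose purity tr(ρ²) drops below 1 and which therefore cannot be represented by any single state vector:

```python
import numpy as np

# The pure state |0> written as a density matrix rho = |0><0|.
rho = np.outer([1.0, 0.0], [1.0, 0.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

p = 0.2  # hypothetical bit-flip probability, chosen arbitrarily
rho_noisy = (1 - p) * rho + p * X @ rho @ X

# The trace is preserved, but the purity tr(rho^2) drops below 1:
# the output is a mixed state.
purity = np.trace(rho_noisy @ rho_noisy)
print(float(np.trace(rho_noisy)), float(purity))
```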
### Distributed circuit

class qibo.core.distcircuit.DistributedCircuit(nqubits: int, accelerators: Dict[str, int])

Distributed implementation of qibo.abstractions.circuit.AbstractCircuit in Tensorflow. Uses multiple accelerator devices (GPUs) for applying gates to the state vector. The full state vector is saved in the given memory device (usually the CPU) during the simulation. A gate is applied by splitting the state to pieces and copying each piece to an accelerator device that is used to perform the matrix multiplication. An accelerator device can be used more than once, resulting in logical devices that are more than the physical accelerators in the system.

Distributed circuits currently do not support native tensorflow gates, compilation and callbacks.

Example

from qibo.models import Circuit
# The system has two GPUs and we would like to use each GPU twice
# resulting in four total logical accelerators
accelerators = {'/GPU:0': 2, '/GPU:1': 2}
# Define a circuit on 32 qubits to be run in the above GPUs keeping
# the full state vector in the CPU memory.
c = Circuit(32, accelerators)

Parameters

• nqubits (int) – Total number of qubits in the circuit.
• accelerators (dict) – Dictionary that maps device names to the number of times each device will be used. The total number of logical devices must be a power of 2.

on_qubits(*q)

Generator of gates contained in the circuit acting on specified qubits. Useful for adding a circuit as a subroutine in a larger circuit.

Parameters

qubits (int) – Qubit ids that the gates should act on.

Example

from qibo import gates, models
# create small circuit on 4 qubits
smallc = models.Circuit(4)
smallc.add((gates.RX(i, theta=0.1) for i in range(4)))
# create large circuit on 8 qubits
largec = models.Circuit(8)
largec.add((gates.RY(i, theta=0.1) for i in range(8)))
# add the small circuit to the even qubits of the large one
largec.add(smallc.on_qubits(*range(0, 8, 2)))

copy(deep: bool = True)

Creates a copy of the current circuit as a new Circuit model.
Parameters deep (bool) – If True copies of the gate objects will be created for the new circuit. If False, the same gate objects of circuit will be used. Returns The copied circuit object. fuse() Creates an equivalent circuit by fusing gates for increased simulation performance. Parameters max_qubits (int) – Maximum number of qubits in the fused gates. Returns A qibo.core.circuit.Circuit object containing qibo.abstractions.gates.FusedGate gates, each of which corresponds to a group of some original gates. For more details on the fusion algorithm we refer to the Circuit fusion section. Example from qibo import models, gates c = models.Circuit(2) # create circuit with fused gates fused_c = c.fuse() # now fused_c contains a single FusedGate that is # equivalent to applying the five original gates with_noise(noise_map, measurement_noise=None) Creates a copy of the circuit with noise gates after each gate. If the original circuit uses state vectors then noise simulation will be done using sampling and repeated circuit execution. In order to use density matrices the original circuit should be created using the density_matrix flag set to True. For more information we refer to the How to perform noisy simulation? example. Parameters noise_map (dict) – Dictionary that maps qubit ids to noise probabilities (px, py, pz). If a tuple of probabilities (px, py, pz) is given instead of a dictionary, then the same probabilities will be used for all qubits. Returns Circuit object that contains all the gates of the original circuit and additional noise channels on all qubits after every gate. 
Example

from qibo.models import Circuit
from qibo import gates
# use density matrices for noise simulation
c = Circuit(2, density_matrix=True)
noise_map = {0: (0.1, 0.0, 0.2), 1: (0.0, 0.2, 0.1)}
noisy_c = c.with_noise(noise_map)
# noisy_c will be equivalent to the following circuit
c2 = Circuit(2, density_matrix=True)

execute(initial_state=None, nshots=None)

Equivalent to qibo.core.circuit.Circuit.execute().

Returns

A qibo.core.states.DistributedState object corresponding to the final state of execution. Note that this state contains the full state vector scattered to pieces and does not create a single tensor unless the user explicitly calls the tensor property. This avoids creating multiple copies of large states in CPU memory.

### Quantum Fourier Transform (QFT)

class qibo.models.circuit.QFT(nqubits: int, with_swaps: bool = True, accelerators: Optional[Dict[str, int]] = None)

Creates a circuit that implements the Quantum Fourier Transform.

Parameters

• nqubits (int) – Number of qubits in the circuit.
• with_swaps (bool) – Use SWAP gates at the end of the circuit so that the qubit order in the final state is the same as in the initial state.
• accelerators (dict) – Accelerator device dictionary in order to use a distributed circuit. If None a simple (non-distributed) circuit will be used.

Returns

A qibo.models.Circuit that implements the Quantum Fourier Transform.

Example

import numpy as np
from qibo.models import QFT
nqubits = 6
c = QFT(nqubits)
# Random normalized initial state vector
init_state = np.random.random(2 ** nqubits) + 1j * np.random.random(2 ** nqubits)
init_state = init_state / np.sqrt((np.abs(init_state)**2).sum())
# Execute the circuit
final_state = c(init_state)

### Variational Quantum Eigensolver (VQE)

class qibo.models.variational.VQE(circuit, hamiltonian)

This class implements the variational quantum eigensolver algorithm.
Parameters

• circuit (qibo.abstractions.circuit.AbstractCircuit) – variational ansatz.
• hamiltonian (qibo.hamiltonians.Hamiltonian) – problem Hamiltonian object.

Example

import numpy as np
from qibo import gates, models, hamiltonians
# create circuit ansatz for two qubits
circuit = models.Circuit(2)
# create XXZ Hamiltonian for two qubits
hamiltonian = hamiltonians.XXZ(2)
# create VQE model for the circuit and Hamiltonian
vqe = models.VQE(circuit, hamiltonian)
# optimize using random initial variational parameters
initial_parameters = np.random.uniform(0, 2, 1)
vqe.minimize(initial_parameters)

minimize(initial_state, method='Powell', jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None, compile=False, processes=None)

Search for parameters which minimize the Hamiltonian expectation value.

Parameters

• initial_state (array) – an initial guess for the parameters of the variational circuit.
• method (str) – the desired minimization method. See qibo.optimizers.optimize() for available optimization methods.
• jac (dict) – Method for computing the gradient vector for scipy optimizers.
• hess (dict) – Method for computing the hessian matrix for scipy optimizers.
• hessp (callable) – Hessian of objective function times an arbitrary vector for scipy optimizers.
• bounds (sequence or Bounds) – Bounds on variables for scipy optimizers.
• constraints (dict) – Constraints definition for scipy optimizers.
• tol (float) – Tolerance of termination for scipy optimizers.
• callback (callable) – Called after each iteration for scipy optimizers.
• options (dict) – a dictionary with options for the different optimizers.
• compile (bool) – whether the TensorFlow graph should be compiled.
• processes (int) – number of processes when using the parallel BFGS method.

Returns

The final expectation value. The corresponding best parameters. The optimization result object. For scipy methods it returns the OptimizeResult, for 'cma' the CMAEvolutionStrategy.result, and for 'sgd' the options used during the optimization.
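The optimization performed by minimize can be illustrated on a toy landscape without Qibo: for the hypothetical single-qubit ansatz |ψ(θ)> = RY(θ)|0> and Hamiltonian H = Z, the expectation value is cos(θ), so the minimum energy -1 is reached at θ = π. A brute-force grid search stands in for the scipy optimizers listed above:

```python
import numpy as np

# Toy VQE landscape: ansatz |psi(theta)> = RY(theta)|0>, Hamiltonian H = Z.
Z = np.diag([1.0, -1.0])

def ansatz(theta):
    # RY(theta)|0> = [cos(theta/2), sin(theta/2)]
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    psi = ansatz(theta)
    return float(psi @ Z @ psi)   # equals cos(theta)

# Grid search stands in for a proper optimizer on this 1D landscape.
thetas = np.linspace(0, 2 * np.pi, 1001)
best = thetas[np.argmin([energy(t) for t in thetas])]
print(best, energy(best))  # close to pi and -1
```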
### Adiabatically Assisted Variational Quantum Eigensolver (AAVQE) class qibo.models.variational.AAVQE(circuit, easy_hamiltonian, problem_hamiltonian, s, nsteps=10, t_max=1, bounds_tolerance=1e-07, time_tolerance=1e-07) This class implements the Adiabatically Assisted Variational Quantum Eigensolver algorithm. See https://arxiv.org/abs/1806.02287. Parameters • circuit (qibo.abstractions.circuit.AbstractCircuit) – variational ansatz. • easy_hamiltonian (qibo.hamiltonians.Hamiltonian) – initial Hamiltonian object. • problem_hamiltonian (qibo.hamiltonians.Hamiltonian) – problem Hamiltonian object. • s (callable) – scheduling function of time that defines the adiabatic evolution. It must verify boundary conditions: s(0) = 0 and s(1) = 1. • nsteps (float) – number of steps of the adiabatic evolution. • t_max (float) – total time evolution. • bounds_tolerance (float) – tolerance for checking s(0) = 0 and s(1) = 1. • time_tolerance (float) – tolerance for checking if time is greater than t_max. Example import numpy as np from qibo import gates, models, hamiltonians # create circuit ansatz for two qubits circuit = models.Circuit(2) # define the easy and the problem Hamiltonians. easy_hamiltonian=hamiltonians.X(2) problem_hamiltonian=hamiltonians.XXZ(2) # define a scheduling function with only one parameter # and boundary conditions s(0) = 0, s(1) = 1 s = lambda t: t # create AAVQE model aavqe = models.AAVQE(circuit, easy_hamiltonian, problem_hamiltonian, s, nsteps=10, t_max=1) # optimize using random initial variational parameters np.random.seed(0) initial_parameters = np.random.uniform(0, 2*np.pi, 2) ground_energy, params = aavqe.minimize(initial_parameters) set_schedule(func) Set scheduling function s(t) as func. schedule(t) Returns scheduling function evaluated at time t: s(t/Tmax). hamiltonian(t) Returns the adiabatic evolution Hamiltonian at a given time. 
minimize(params, method='BFGS', jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, options=None, compile=False, processes=None)

Performs minimization to find the ground state of the problem Hamiltonian.

Parameters

• params (np.ndarray or list) – initial guess for the parameters of the variational circuit.
• method (str) – optimizer to employ.
• jac (dict) – Method for computing the gradient vector for scipy optimizers.
• hess (dict) – Method for computing the hessian matrix for scipy optimizers.
• hessp (callable) – Hessian of objective function times an arbitrary vector for scipy optimizers.
• bounds (sequence or Bounds) – Bounds on variables for scipy optimizers.
• constraints (dict) – Constraints definition for scipy optimizers.
• tol (float) – Tolerance of termination for scipy optimizers.
• options (dict) – a dictionary with options for the different optimizers.
• compile (bool) – whether the TensorFlow graph should be compiled.
• processes (int) – number of processes when using the parallel BFGS method.

### Quantum Approximate Optimization Algorithm (QAOA)

class qibo.models.variational.QAOA(hamiltonian, mixer=None, solver='exp', callbacks=[], accelerators=None)

Quantum Approximate Optimization Algorithm (QAOA) model. The QAOA is introduced in arXiv:1411.4028.

Parameters

Example

import numpy as np
from qibo import models, hamiltonians
# create XXZ Hamiltonian for four qubits
hamiltonian = hamiltonians.XXZ(4)
# create QAOA model for this Hamiltonian
qaoa = models.QAOA(hamiltonian)
# optimize using random initial variational parameters
# and default options and initial state
initial_parameters = 0.01 * np.random.random(4)
best_energy, final_parameters, extra = qaoa.minimize(initial_parameters, method="BFGS")

set_parameters(p)

Sets the variational parameters.

Parameters

p (np.ndarray) – 1D-array holding the new values for the variational parameters. Length should be an even number.
execute(initial_state=None)

Applies the QAOA exponential operators to a state.

Parameters

initial_state (np.ndarray) – Initial state vector.

Returns

State vector after applying the QAOA exponential gates.

minimize(initial_p, initial_state=None, method='Powell', jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None, compile=False, processes=None)

Optimizes the variational parameters of the QAOA.

Parameters

• initial_p (np.ndarray) – initial guess for the parameters.
• initial_state (np.ndarray) – initial state vector of the QAOA.
• method (str) – the desired minimization method. See qibo.optimizers.optimize() for available optimization methods.
• jac (dict) – Method for computing the gradient vector for scipy optimizers.
• hess (dict) – Method for computing the hessian matrix for scipy optimizers.
• hessp (callable) – Hessian of objective function times an arbitrary vector for scipy optimizers.
• bounds (sequence or Bounds) – Bounds on variables for scipy optimizers.
• constraints (dict) – Constraints definition for scipy optimizers.
• tol (float) – Tolerance of termination for scipy optimizers.
• callback (callable) – Called after each iteration for scipy optimizers.
• options (dict) – a dictionary with options for the different optimizers.
• compile (bool) – whether the TensorFlow graph should be compiled.
• processes (int) – number of processes when using the parallel BFGS method.

Returns

The final energy (expectation value of the hamiltonian). The corresponding best parameters. The optimization result object. For scipy methods it returns the OptimizeResult, for 'cma' the CMAEvolutionStrategy.result, and for 'sgd' the options used during the optimization.

### Feedback-based Algorithm for Quantum Optimization (FALQON)

class qibo.models.variational.FALQON(hamiltonian, mixer=None, solver='exp', callbacks=[], accelerators=None)

Feedback-based ALgorithm for Quantum OptimizatioN (FALQON) model.
The FALQON is introduced in arXiv:2103.08619. It inherits the QAOA class. Parameters • hamiltonian (qibo.abstractions.hamiltonians.Hamiltonian) – problem Hamiltonian whose ground state is sought. • mixer (qibo.abstractions.hamiltonians.Hamiltonian) – mixer Hamiltonian. If None, qibo.hamiltonians.X is used. • solver (str) – solver used to apply the exponential operators. Default solver is ‘exp’ (qibo.solvers.Exponential). • callbacks (list) – List of callbacks to calculate during evolution. • accelerators (dict) – Dictionary of devices to use for distributed execution. See qibo.tensorflow.distcircuit.DistributedCircuit for more details. This option is available only when hamiltonian is a qibo.abstractions.hamiltonians.SymbolicHamiltonian. Example import numpy as np from qibo import models, hamiltonians # create XXZ Hamiltonian for four qubits hamiltonian = hamiltonians.XXZ(4) # create FALQON model for this Hamiltonian falqon = models.FALQON(hamiltonian) # optimize using random initial variational parameters # and default options and initial state delta_t = 0.01 max_layers = 3 best_energy, final_parameters, extra = falqon.minimize(delta_t, max_layers) minimize(delta_t, max_layers, initial_state=None, tol=None, callback=None) Optimizes the variational parameters of the FALQON. Parameters • delta_t (float) – initial guess for the time step. A too large delta_t will make the algorithm fail. • max_layers (int) – maximum number of layers allowed for the FALQON. • initial_state (np.ndarray) – initial state vector of the FALQON. • tol (float) – Tolerance of energy change. If not specified, no check is done. • callback (callable) – Called after each iteration for scipy optimizers. • options (dict) – a dictionary with options for the different optimizers. Returns The final energy (expectation value of the hamiltonian). The corresponding best parameters. extra: variable with historical data for the energy and callbacks. 
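Both QAOA and FALQON build their circuit from alternating problem and mixer exponentials. A single-qubit numpy sketch of one such layer, with arbitrary placeholder angles gamma and beta, verifying that the combined layer is unitary:

```python
import numpy as np

# One QAOA-style layer on a single qubit: problem exponential exp(-i*gamma*Z)
# followed by mixer exponential exp(-i*beta*X). Angles are arbitrary placeholders.
gamma, beta = 0.4, 0.7
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

Uz = np.diag([np.exp(-1j * gamma), np.exp(1j * gamma)])  # exp(-i*gamma*Z)
Ux = np.cos(beta) * I - 1j * np.sin(beta) * X            # exp(-i*beta*X)
layer = Ux @ Uz

# The layer is unitary, so it preserves the norm of any state vector.
print(np.allclose(layer.conj().T @ layer, I))  # True
```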
### Style-based Quantum Generative Adversarial Network (style-qGAN) class qibo.models.qgan.StyleQGAN(latent_dim, layers=None, circuit=None, set_parameters=None, discriminator=None) Model that implements and trains a style-based quantum generative adversarial network. For original manuscript: arXiv:2110.06933 Parameters • latent_dim (int) – number of latent dimensions. • layers (int) – number of layers for the quantum generator. Provide this value only if not using a custom quantum generator. • circuit (qibo.core.circuit.Circuit) – custom quantum generator circuit. If not provided, the default quantum circuit will be used. • set_parameters (function) – function that creates the array of parameters for the quantum generator. If not provided, the default function will be used. Example import numpy as np import qibo from qibo.models.qgan import StyleQGAN # set qibo backend to tensorflow which supports gradient descent training qibo.set_backend("tensorflow") # Create reference distribution. # Example: 3D correlated Gaussian distribution normalized between [-1,1] reference_distribution = [] samples = 10 mean = [0, 0, 0] cov = [[0.5, 0.1, 0.25], [0.1, 0.5, 0.1], [0.25, 0.1, 0.5]] x, y, z = np.random.multivariate_normal(mean, cov, samples).T/4 s1 = np.reshape(x, (samples,1)) s2 = np.reshape(y, (samples,1)) s3 = np.reshape(z, (samples,1)) reference_distribution = np.hstack((s1,s2,s3)) # Train qGAN with your particular setup train_qGAN = StyleQGAN(latent_dim=1, layers=2) train_qGAN.fit(reference_distribution, n_epochs=1) define_discriminator(alpha=0.2, dropout=0.2) Define the standalone discriminator model. set_params(circuit, params, x_input, i) Set the parameters for the quantum generator circuit. generate_latent_points(samples) Generate points in latent space as input for the quantum generator. train(d_model, circuit, hamiltonians_list, save=True) Train the quantum generator and classical discriminator. 
fit(reference, initial_params=None, batch_samples=128, n_epochs=20000, lr=0.5, save=True) Execute qGAN training. Parameters • reference (array) – samples from the reference input distribution. • initial_params (array) – initial parameters for the quantum generator. If not provided, the default initial parameters will be used. • discriminator (tensorflow.keras.models) – custom classical discriminator. If not provided, the default classical discriminator will be used. • batch_samples (int) – number of training examples utilized in one iteration. • n_epochs (int) – number of training iterations. • lr (float) – initial learning rate for the quantum generator. It controls how much to change the model each time the weights are updated. • save (bool) – If True the results of training (trained parameters and losses) will be saved on disk. Default is True. ### Grover’s Algorithm class qibo.models.grover.Grover(oracle, superposition_circuit=None, initial_state_circuit=None, superposition_qubits=None, superposition_size=None, number_solutions=None, target_amplitude=None, check=None, check_args=(), iterative=False) Model that performs Grover’s algorithm. For Grover’s original search algorithm: arXiv:quant-ph/9605043 For the iterative version with unknown solutions: arXiv:quant-ph/9605034 For the Grover algorithm with any superposition: arXiv:quant-ph/9712011 Parameters • oracle (qibo.core.circuit.Circuit) – quantum circuit that flips the sign using a Grover ancilla initialized with -X-H-. The Grover ancilla is expected to be the last qubit of the oracle circuit. • superposition_circuit (qibo.core.circuit.Circuit) – quantum circuit that takes an initial state to a superposition. Expected to use the first set of qubits to store the relevant superposition. • initial_state_circuit (qibo.core.circuit.Circuit) – quantum circuit that initializes the state. If empty defaults to |000..00> • superposition_qubits (int) – number of qubits that store the relevant superposition. 
Leave empty if the superposition does not use ancillas. • superposition_size (int) – how many states are in the superposition. Leave empty if it is an equal superposition of quantum states. • number_solutions (int) – number of expected solutions. Needed for normal Grover. Leave empty for the iterative version. • target_amplitude (float) – absolute value of the amplitude of the target state. Only for advanced use and known systems. • check (function) – function that returns True if the solution has been found. Required for the iterative approach. First argument should be the bitstring to check. • check_args (tuple) – arguments needed for the check function. The found bitstring is not included. • iterative (bool) – force the use of the iterative Grover algorithm. Example import numpy as np from qibo import gates from qibo.models import Circuit from qibo.models.grover import Grover # Create an oracle. Ex: Oracle that detects state |11111> oracle = Circuit(5 + 1) # Create superposition circuit. Ex: Full superposition over 5 qubits. superposition = Circuit(5) # Generate and execute Grover class grover = Grover(oracle, superposition_circuit=superposition, number_solutions=1) solution, iterations = grover() initialize() Initialize the Grover algorithm with the superposition and Grover ancilla. diffusion() Construct the diffusion operator out of the superposition circuit. step() Combine oracle and diffusion for a Grover step. circuit(iterations) Creates a circuit that performs Grover’s algorithm with a set number of iterations. Parameters iterations (int) – number of times to repeat the Grover step. Returns qibo.core.circuit.Circuit that performs Grover’s algorithm. iterative_grover(lamda_value=1.2) Iterative approach of Grover for when the number of solutions is not known. Parameters lamda_value (real) – parameter that controls the evolution of the iterative method. Must be between 1 and 4/3. Returns bitstring measured and checked as a valid solution. 
total_iterations (int): number of times the oracle has been called. Return type measured (str) execute(nshots=100, freq=False, logs=False) Execute Grover’s algorithm. If the number of solutions is given, it calculates the required iterations, otherwise it uses the iterative approach. Parameters • nshots (int) – number of shots in order to get the frequencies. • freq (bool) – print the full frequencies after the exact Grover algorithm. Returns bitstring (or list of bitstrings) measured as solution of the search. iterations (int): number of oracle calls done to reach a solution. Return type solution (str) ## Time evolution ### State evolution class qibo.models.evolution.StateEvolution(hamiltonian, dt, solver='exp', callbacks=[], accelerators=None) Unitary time evolution of a state vector under a Hamiltonian. Parameters • hamiltonian (qibo.abstractions.hamiltonians.Hamiltonian) – Hamiltonian to evolve under. • dt (float) – Time step to use for the numerical integration of Schrödinger’s equation. • solver (str) – Solver to use for integrating Schrödinger’s equation. Available solvers are ‘exp’ which uses the exact unitary evolution operator and ‘rk4’ or ‘rk45’ which use Runge-Kutta methods to integrate the time-dependent Schrödinger equation. When the ‘exp’ solver is used to evolve a qibo.core.hamiltonians.SymbolicHamiltonian then the Trotter decomposition of the evolution operator will be calculated and used automatically. If ‘exp’ is used on a dense qibo.core.hamiltonians.Hamiltonian the full Hamiltonian matrix will be exponentiated to obtain the exact evolution operator. Runge-Kutta solvers use simple matrix multiplications of the Hamiltonian to the state and no exponentiation is involved. • callbacks (list) – List of callbacks to calculate during evolution. • accelerators (dict) – Dictionary of devices to use for distributed execution. See qibo.core.distcircuit.DistributedCircuit for more details. 
This option is available only when the Trotter decomposition is used for the time evolution. Example import numpy as np from qibo import models, hamiltonians # create critical (h=1.0) TFIM Hamiltonian for three qubits hamiltonian = hamiltonians.TFIM(3, h=1.0) # initialize evolution model with step dt=1e-2 evolve = models.StateEvolution(hamiltonian, dt=1e-2) # initialize state to |+++> initial_state = np.ones(8) / np.sqrt(8) # execute evolution for total time T=2 final_state2 = evolve(final_time=2, initial_state=initial_state) execute(final_time, start_time=0.0, initial_state=None) Runs unitary evolution for a given total time. Parameters • final_time (float) – Final time of evolution. • start_time (float) – Initial time of evolution. Defaults to t=0. • initial_state (np.ndarray) – Initial state of the evolution. Returns Final state vector as a tf.Tensor or a qibo.core.distutils.DistributedState when a distributed execution is used. class qibo.models.evolution.AdiabaticEvolution(h0, h1, s, dt, solver='exp', callbacks=[], accelerators=None) Adiabatic evolution of a state vector under the following Hamiltonian: $H(t) = (1 - s(t)) H_0 + s(t) H_1$ Parameters • h0 (qibo.abstractions.hamiltonians.Hamiltonian) – Easy Hamiltonian. • h1 (qibo.abstractions.hamiltonians.Hamiltonian) – Problem Hamiltonian. These Hamiltonians should be time-independent. • s (callable) – Function of time that defines the scheduling of the adiabatic evolution. Can be either a function of time s(t) or a function with two arguments s(t, p) where p corresponds to a vector of parameters to be optimized. • dt (float) – Time step to use for the numerical integration of Schrödinger’s equation. • solver (str) – Solver to use for integrating Schrödinger’s equation. Available solvers are ‘exp’ which uses the exact unitary evolution operator and ‘rk4’ or ‘rk45’ which use Runge-Kutta methods to integrate the time-dependent Schrödinger equation. 
When the ‘exp’ solver is used to evolve a qibo.core.hamiltonians.SymbolicHamiltonian then the Trotter decomposition of the evolution operator will be calculated and used automatically. If the ‘exp’ is used on a dense qibo.core.hamiltonians.Hamiltonian the full Hamiltonian matrix will be exponentiated to obtain the exact evolution operator. Runge-Kutta solvers use simple matrix multiplications of the Hamiltonian to the state and no exponentiation is involved. • callbacks (list) – List of callbacks to calculate during evolution. • accelerators (dict) – Dictionary of devices to use for distributed execution. See qibo.core.distcircuit.DistributedCircuit for more details. This option is available only when the Trotter decomposition is used for the time evolution. property schedule Returns scheduling as a function of time. set_parameters(params) Sets the variational parameters of the scheduling function. get_initial_state(state=None) Casts initial state as a tensor. If initial state is not given the ground state of h0 is used, which is the common practice in adiabatic evolution. minimize(initial_parameters, method='BFGS', options=None, messages=False) Optimize the free parameters of the scheduling function. Parameters • initial_parameters (np.ndarray) – Initial guess for the variational parameters that are optimized. The last element of the given array should correspond to the guess for the total evolution time T. • method (str) – The desired minimization method. One of "cma" (genetic optimizer), "sgd" (gradient descent) or any of the methods supported by scipy.optimize.minimize. • options (dict) – a dictionary with options for the different optimizers. • messages (bool) – If True the loss evolution is shown during optimization. # Gates All supported gates can be accessed from the qibo.gates module and inherit the base gate object qibo.abstractions.gates.Gate. Read below for a complete list of supported gates. 
All gates support the controlled_by method that allows controlling the gate on an arbitrary number of qubits. For example • gates.X(0).controlled_by(1, 2) is equivalent to gates.TOFFOLI(1, 2, 0), • gates.RY(0, np.pi).controlled_by(1, 2, 3) applies the Y-rotation to qubit 0 when qubits 1, 2 and 3 are in the |111> state. • gates.SWAP(0, 1).controlled_by(3, 4) swaps qubits 0 and 1 when qubits 3 and 4 are in the |11> state. ## Gate models ### Abstract gates class qibo.abstractions.gates.Gate The base class for gate implementation. All base gates should inherit this class. property qubits: Tuple[int] Tuple with ids of all qubits (control and target) that the gate acts on. property target_qubits: Tuple[int] Tuple with ids of target qubits. property control_qubits: Tuple[int] Tuple with ids of control qubits sorted in increasing order. property nstates: int Size of the state vectors that this gate acts on. property nqubits: int Number of qubits that this gate acts on. property density_matrix: bool Controls if the gate acts on state vectors or density matrices. commutes(gate: qibo.abstractions.abstract_gates.Gate) bool Checks if two gates commute. Parameters gate – Gate to check if it commutes with the current gate. Returns True if the gates commute, otherwise False. on_qubits(qubit_map) Creates the same gate targeting different qubits. Parameters qubit_map (dict) – Dictionary mapping original qubit indices to new ones. Returns A qibo.abstractions.gates.Gate object of the original gate type targeting the given qubits. 
Example from qibo import models, gates c = models.Circuit(4) c.add(gates.CNOT(2, 3).on_qubits({2: 2, 3: 3})) # equivalent to gates.CNOT(2, 3) c.add(gates.CNOT(2, 3).on_qubits({2: 3, 3: 0})) # equivalent to gates.CNOT(3, 0) c.add(gates.CNOT(2, 3).on_qubits({2: 1, 3: 3})) # equivalent to gates.CNOT(1, 3) c.add(gates.CNOT(2, 3).on_qubits({2: 2, 3: 1})) # equivalent to gates.CNOT(2, 1) print(c.draw()) q0: ───X───── q1: ───|─o─X─ q2: ─o─|─|─o─ q3: ─X─o─X─── dagger() Returns the dagger (conjugate transpose) of the gate. Returns A qibo.abstractions.gates.Gate object representing the dagger of the original gate. decompose(*free) Decomposes multi-control gates to gates supported by OpenQASM. Decompositions are based on arXiv:9503016. Parameters free – Ids of free qubits to use for the gate decomposition. Returns List with gates that have the same effect as applying the original gate. ### Abstract backend gates class qibo.abstractions.abstract_gates.BaseBackendGate Abstract class for gate objects that can be used in calculations. property matrix Unitary matrix representing the gate in the computational basis. ## Single qubit gates class qibo.abstractions.gates.H(q) Parameters q (int) – the qubit id number. ### Pauli X (X) class qibo.abstractions.gates.X(q) The Pauli X gate. Parameters q (int) – the qubit id number. decompose(*free: int, use_toffolis: bool = True) Decomposes multi-control X gate to one-qubit, CNOT and TOFFOLI gates. Parameters • free – Ids of free qubits to use for the gate decomposition. • use_toffolis – If True the decomposition contains only TOFFOLI gates. If False a congruent representation is used for TOFFOLI gates. See qibo.abstractions.gates.TOFFOLI for more details on this representation. Returns List with one-qubit, CNOT and TOFFOLI gates that have the same effect as applying the original multi-control gate. ### Pauli Y (Y) class qibo.abstractions.gates.Y(q) The Pauli Y gate. Parameters q (int) – the qubit id number. 
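The controlled_by equivalences listed earlier (e.g. gates.X(0).controlled_by(1, 2) acting as a TOFFOLI) can be checked at the matrix level: adding a control qubit to a unitary U yields the block matrix diag(I, U). A small numpy sketch of this construction, independent of qibo:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)

def controlled(u):
    # one control qubit: identity on the |0x> block, u on the |1x> block
    dim = u.shape[0]
    c = np.eye(2 * dim, dtype=complex)
    c[dim:, dim:] = u
    return c

# controlling X once gives CNOT; controlling CNOT again gives TOFFOLI
cnot = controlled(X)
toffoli = controlled(cnot)

# TOFFOLI flips the target only when both controls are |1>
basis_110 = np.zeros(8); basis_110[6] = 1   # |110>
print(np.argmax(np.abs(toffoli @ basis_110)))  # -> 7, i.e. |111>
```

The same block construction extends to any number of controls, which is exactly why controlled_by can wrap an arbitrary gate.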
### Pauli Z (Z) class qibo.abstractions.gates.Z(q) The Pauli Z gate. Parameters q (int) – the qubit id number. ### S gate (S) class qibo.abstractions.gates.S(q) The S gate. Corresponds to the following unitary matrix $\begin{split}\begin{pmatrix} 1 & 0 \\ 0 & i \\ \end{pmatrix}\end{split}$ Parameters q (int) – the qubit id number. ### T gate (T) class qibo.abstractions.gates.T(q) The T gate. Corresponds to the following unitary matrix $\begin{split}\begin{pmatrix} 1 & 0 \\ 0 & e^{i \pi / 4} \\ \end{pmatrix}\end{split}$ Parameters q (int) – the qubit id number. ### Identity (I) class qibo.abstractions.gates.I(*q) The identity gate. Parameters *q (int) – the qubit id numbers. ### Measurement (M) class qibo.abstractions.gates.M(*q, register_name: Optional[str] = None, collapse: bool = False, p0: Optional[ProbsType] = None, p1: Optional[ProbsType] = None) The Measure Z gate. Parameters • *q (int) – id numbers of the qubits to measure. It is possible to measure multiple qubits using gates.M(0, 1, 2, ...). If the qubits to measure are held in an iterable (eg. list) the * operator can be used, for example gates.M(*[0, 1, 4]) or gates.M(*range(5)). • register_name (str) – Optional name of the register to distinguish it from other registers when used in circuits. • collapse (bool) – Collapse the state vector after the measurement is performed. Can be used only for single shot measurements. If True the collapsed state vector is returned. If False the measurement result is returned. • p0 (dict) – Optional bitflip probability map. Can be: A dictionary that maps each measured qubit to the probability that it is flipped, a list or tuple that has the same length as the tuple of measured qubits or a single float number. If a single float is given the same probability will be used for all qubits. • p1 (dict) – Optional bitflip probability map for asymmetric bitflips. Same as p0 but controls the 1->0 bitflip probability. If p1 is None then p0 will be used both for 0->1 and 1->0 bitflips. 
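The bitflip maps p0 and p1 describe classical readout noise applied to the sampled outcomes. A hedged numpy sketch of that post-processing step (illustrative only — this is not qibo's internal sampling code):

```python
import numpy as np

rng = np.random.default_rng(0)

# measurement probabilities of a single qubit in state (|0> + |1>)/sqrt(2)
amplitudes = np.array([1.0, 1.0]) / np.sqrt(2)
probs = np.abs(amplitudes) ** 2

# ideal shots sampled from the Born-rule probabilities
shots = rng.choice([0, 1], size=10000, p=probs)

# asymmetric readout noise: flip 0->1 with prob p0, flip 1->0 with prob p1
p0, p1 = 0.1, 0.02
flips = rng.random(shots.size)
noisy = np.where(shots == 0,
                 np.where(flips < p0, 1, 0),
                 np.where(flips < p1, 0, 1))

print(noisy.mean())  # ~ 0.5*(1 - p1) + 0.5*p0, shifted by the asymmetry
```

Passing a single float for p0 (with p1 left as None) corresponds to applying the same symmetric flip probability in both directions.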
static einsum_string(qubits, nqubits, measuring=False) Generates einsum string for partial trace of density matrices. Parameters • qubits (list) – Set of qubit ids that are traced out. • nqubits (int) – Total number of qubits in the state. • measuring (bool) – If True non-traced-out indices are multiplied and the output has shape (nqubits - len(qubits),). If False the output has shape 2 * (nqubits - len(qubits),). Returns String to use in einsum for performing the partial trace of a density matrix. symbol() Returns symbol containing measurement outcomes for collapse=True gates. Adds target qubits to a measurement gate. This method is only used for creating the global measurement gate used by the models.Circuit. The user is not supposed to use this method and a ValueError is raised if they do so. Parameters gate – Measurement gate to add its qubits in the current gate. ### Rotation X-axis (RX) class qibo.abstractions.gates.RX(q, theta, trainable=True) Rotation around the X-axis of the Bloch sphere. Corresponds to the following unitary matrix $\begin{split}\begin{pmatrix} \cos \frac{\theta }{2} & -i\sin \frac{\theta }{2} \\ -i\sin \frac{\theta }{2} & \cos \frac{\theta }{2} \\ \end{pmatrix}\end{split}$ Parameters ### Rotation Y-axis (RY) class qibo.abstractions.gates.RY(q, theta, trainable=True) Rotation around the Y-axis of the Bloch sphere. Corresponds to the following unitary matrix $\begin{split}\begin{pmatrix} \cos \frac{\theta }{2} & -\sin \frac{\theta }{2} \\ \sin \frac{\theta }{2} & \cos \frac{\theta }{2} \\ \end{pmatrix}\end{split}$ Parameters ### Rotation Z-axis (RZ) class qibo.abstractions.gates.RZ(q, theta, trainable=True) Rotation around the Z-axis of the Bloch sphere. Corresponds to the following unitary matrix $\begin{split}\begin{pmatrix} e^{-i \theta / 2} & 0 \\ 0 & e^{i \theta / 2} \\ \end{pmatrix}\end{split}$ Parameters ### First general unitary (U1) class qibo.abstractions.gates.U1(q, theta, trainable=True) First general unitary gate. 
Corresponds to the following unitary matrix $\begin{split}\begin{pmatrix} 1 & 0 \\ 0 & e^{i \theta} \\ \end{pmatrix}\end{split}$ Parameters ### Second general unitary (U2) class qibo.abstractions.gates.U2(q, phi, lam, trainable=True) Second general unitary gate. Corresponds to the following unitary matrix $\begin{split}\frac{1}{\sqrt{2}} \begin{pmatrix} e^{-i(\phi + \lambda )/2} & -e^{-i(\phi - \lambda )/2} \\ e^{i(\phi - \lambda )/2} & e^{i (\phi + \lambda )/2} \\ \end{pmatrix}\end{split}$ Parameters ### Third general unitary (U3) class qibo.abstractions.gates.U3(q, theta, phi, lam, trainable=True) Third general unitary gate. Corresponds to the following unitary matrix $\begin{split}\begin{pmatrix} e^{-i(\phi + \lambda )/2}\cos\left (\frac{\theta }{2}\right ) & -e^{-i(\phi - \lambda )/2}\sin\left (\frac{\theta }{2}\right ) \\ e^{i(\phi - \lambda )/2}\sin\left (\frac{\theta }{2}\right ) & e^{i (\phi + \lambda )/2}\cos\left (\frac{\theta }{2}\right ) \\ \end{pmatrix}\end{split}$ Parameters ## Two qubit gates ### Controlled-NOT (CNOT) class qibo.abstractions.gates.CNOT(q0, q1) The Controlled-NOT gate. Corresponds to the following unitary matrix $\begin{split}\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ \end{pmatrix}\end{split}$ Parameters • q0 (int) – the control qubit id number. • q1 (int) – the target qubit id number. decompose(*free, use_toffolis: bool = True) Decomposes multi-control gates to gates supported by OpenQASM. Decompositions are based on arXiv:9503016. Parameters free – Ids of free qubits to use for the gate decomposition. Returns List with gates that have the same effect as applying the original gate. ### Controlled-phase (CZ) class qibo.abstractions.gates.CZ(q0, q1) The Controlled-Phase gate. 
Corresponds to the following unitary matrix $\begin{split}\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \\ \end{pmatrix}\end{split}$ Parameters • q0 (int) – the control qubit id number. • q1 (int) – the target qubit id number. ### Controlled-rotation X-axis (CRX) class qibo.abstractions.gates.CRX(q0, q1, theta, trainable=True) Controlled rotation around the X-axis for the Bloch sphere. Corresponds to the following unitary matrix $\begin{split}\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \cos \frac{\theta }{2} & -i\sin \frac{\theta }{2} \\ 0 & 0 & -i\sin \frac{\theta }{2} & \cos \frac{\theta }{2} \\ \end{pmatrix}\end{split}$ Parameters ### Controlled-rotation Y-axis (CRY) class qibo.abstractions.gates.CRY(q0, q1, theta, trainable=True) Controlled rotation around the Y-axis for the Bloch sphere. Corresponds to the following unitary matrix $\begin{split}\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \cos \frac{\theta }{2} & -\sin \frac{\theta }{2} \\ 0 & 0 & \sin \frac{\theta }{2} & \cos \frac{\theta }{2} \\ \end{pmatrix}\end{split}$ Note that this differs from the qibo.abstractions.gates.RZ gate. Parameters ### Controlled-rotation Z-axis (CRZ) class qibo.abstractions.gates.CRZ(q0, q1, theta, trainable=True) Controlled rotation around the Z-axis for the Bloch sphere. Corresponds to the following unitary matrix $\begin{split}\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & e^{-i \theta / 2} & 0 \\ 0 & 0 & 0 & e^{i \theta / 2} \\ \end{pmatrix}\end{split}$ Parameters ### Controlled first general unitary (CU1) class qibo.abstractions.gates.CU1(q0, q1, theta, trainable=True) Controlled first general unitary gate. Corresponds to the following unitary matrix $\begin{split}\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & e^{i \theta } \\ \end{pmatrix}\end{split}$ Note that this differs from the qibo.abstractions.gates.CRZ gate. 
Parameters ### Controlled second general unitary (CU2) class qibo.abstractions.gates.CU2(q0, q1, phi, lam, trainable=True) Controlled second general unitary gate. Corresponds to the following unitary matrix $\begin{split}\frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & e^{-i(\phi + \lambda )/2} & -e^{-i(\phi - \lambda )/2} \\ 0 & 0 & e^{i(\phi - \lambda )/2} & e^{i (\phi + \lambda )/2} \\ \end{pmatrix}\end{split}$ Parameters ### Controlled third general unitary (CU3) class qibo.abstractions.gates.CU3(q0, q1, theta, phi, lam, trainable=True) Controlled third general unitary gate. Corresponds to the following unitary matrix $\begin{split}\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & e^{-i(\phi + \lambda )/2}\cos\left (\frac{\theta }{2}\right ) & -e^{-i(\phi - \lambda )/2}\sin\left (\frac{\theta }{2}\right ) \\ 0 & 0 & e^{i(\phi - \lambda )/2}\sin\left (\frac{\theta }{2}\right ) & e^{i (\phi + \lambda )/2}\cos\left (\frac{\theta }{2}\right ) \\ \end{pmatrix}\end{split}$ Parameters ### Swap (SWAP) class qibo.abstractions.gates.SWAP(q0, q1) The swap gate. Corresponds to the following unitary matrix $\begin{split}\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ \end{pmatrix}\end{split}$ Parameters • q0 (int) – the first qubit to be swapped id number. • q1 (int) – the second qubit to be swapped id number. ### f-Swap (FSWAP) class qibo.abstractions.gates.FSWAP(q0, q1) The fermionic swap gate. Corresponds to the following unitary matrix $\begin{split}\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ \end{pmatrix}\end{split}$ Parameters • q0 (int) – the first qubit to be f-swapped id number. • q1 (int) – the second qubit to be f-swapped id number. ### fSim class qibo.abstractions.gates.fSim(q0, q1, theta, phi, trainable=True) The fSim gate defined in arXiv:2001.08343. 
Corresponds to the following unitary matrix $\begin{split}\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos \theta & -i\sin \theta & 0 \\ 0 & -i\sin \theta & \cos \theta & 0 \\ 0 & 0 & 0 & e^{-i \phi } \\ \end{pmatrix}\end{split}$ Parameters ### fSim with general rotation class qibo.abstractions.gates.GeneralizedfSim(q0, q1, unitary, phi, trainable=True) The fSim gate with a general rotation. Corresponds to the following unitary matrix $\begin{split}\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & R_{00} & R_{01} & 0 \\ 0 & R_{10} & R_{11} & 0 \\ 0 & 0 & 0 & e^{-i \phi } \\ \end{pmatrix}\end{split}$ Parameters property parameters Returns a tuple containing the current value of gate’s parameters. ## Special gates ### Toffoli class qibo.abstractions.gates.TOFFOLI(q0, q1, q2) The Toffoli gate. Parameters • q0 (int) – the first control qubit id number. • q1 (int) – the second control qubit id number. • q2 (int) – the target qubit id number. decompose(*free, use_toffolis: bool = True) Decomposes multi-control gates to gates supported by OpenQASM. Decompositions are based on arXiv:9503016. Parameters free – Ids of free qubits to use for the gate decomposition. Returns List with gates that have the same effect as applying the original gate. congruent(use_toffolis: bool = True) Congruent representation of TOFFOLI gate. This is a helper method for the decomposition of multi-control X gates. The congruent representation is based on Sec. 6.2 of arXiv:9503016. The sequence of the gates produced here has the same effect as TOFFOLI with the phase of the |101> state reversed. Parameters use_toffolis – If True a single TOFFOLI gate is returned. If False the congruent representation is returned. Returns List with RY and CNOT gates that have the same effect as applying the original TOFFOLI gate. ### Arbitrary unitary class qibo.abstractions.gates.Unitary(unitary, *q, trainable=True, name=None) Arbitrary unitary gate. Parameters • unitary – Unitary matrix as a tensor supported by the backend. 
Note that there is no check that the matrix passed is actually unitary. This allows the user to create non-unitary gates. • *q (int) – Qubit id numbers that the gate acts on. • trainable (bool) – whether gate parameters can be updated using qibo.abstractions.circuit.AbstractCircuit.set_parameters() (default is True). • name (str) – Optional name for the gate. on_qubits(qubit_map) Creates the same gate targeting different qubits. Parameters qubit_map (dict) – Dictionary mapping original qubit indices to new ones. Returns A qibo.abstractions.gates.Gate object of the original gate type targeting the given qubits. Example from qibo import models, gates c = models.Circuit(4) c.add(gates.CNOT(2, 3).on_qubits({2: 2, 3: 3})) # equivalent to gates.CNOT(2, 3) c.add(gates.CNOT(2, 3).on_qubits({2: 3, 3: 0})) # equivalent to gates.CNOT(3, 0) c.add(gates.CNOT(2, 3).on_qubits({2: 1, 3: 3})) # equivalent to gates.CNOT(1, 3) c.add(gates.CNOT(2, 3).on_qubits({2: 2, 3: 1})) # equivalent to gates.CNOT(2, 1) print(c.draw()) q0: ───X───── q1: ───|─o─X─ q2: ─o─|─|─o─ q3: ─X─o─X─── ### Variational layer class qibo.abstractions.gates.VariationalLayer(qubits: List[int], pairs: List[Tuple[int, int]], one_qubit_gate, two_qubit_gate, params, params2=None, trainable: bool = True, name=None) Layer of one-qubit parametrized gates followed by two-qubit entangling gates. Performance is optimized by fusing the variational one-qubit gates with the two-qubit entangling gates that follow them and applying a single layer of two-qubit gates as 4x4 matrices. Parameters • qubits (list) – List of one-qubit gate target qubit IDs. • pairs (list) – List of pairs of qubit IDs on which the two-qubit gates act. • one_qubit_gate – Type of one qubit gate to use as the variational gate. • two_qubit_gate – Type of two qubit gate to use as entangling gate. • params (list) – Variational parameters of one-qubit gates as a list that has the same length as qubits. 
These gates act before the layer of entangling gates. • params2 (list) – Variational parameters of one-qubit gates as a list that has the same length as qubits. These gates act after the layer of entangling gates. • trainable (bool) – whether gate parameters can be updated using qibo.abstractions.circuit.AbstractCircuit.set_parameters() (default is True). • name (str) – Optional name for the gate. If None the name "VariationalLayer" will be used. Example import numpy as np from qibo.models import Circuit from qibo import gates # generate an array of variational parameters for 8 qubits theta = 2 * np.pi * np.random.random(8) # define qubit pairs that two qubit gates will act on pairs = [(i, i + 1) for i in range(0, 7, 2)] # define a circuit of 8 qubits and add the variational layer c = Circuit(8) c.add(gates.VariationalLayer(range(8), pairs, gates.RY, gates.CZ, theta)) # this will create an optimized version of the following circuit c2 = Circuit(8) c2.add((gates.RY(i, th) for i, th in enumerate(theta))) c2.add((gates.CZ(i, i + 1) for i in range(7))) property parameters Returns a tuple containing the current value of gate’s parameters. ### Flatten class qibo.abstractions.gates.Flatten(coefficients) Passes an arbitrary state vector in the circuit. Parameters coefficients (list) – list of the target state vector components. This can also be a tensor supported by the backend. ### Callback gate class qibo.abstractions.gates.CallbackGate(callback: Callback) Calculates a qibo.core.callbacks.Callback at a specific point in the circuit. This gate performs the callback calculation without affecting the state vector. Parameters callback (qibo.core.callbacks.Callback) – Callback object to calculate. property nqubits: int Number of qubits that this gate acts on. ### Fusion gate class qibo.abstractions.gates.FusedGate(*q) Collection of gates that will be fused and applied as a single gate during simulation. This gate is constructed automatically by qibo.core.circuit.Circuit.fuse() and should not be used by the user. 
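Fusion amounts to multiplying the matrices of consecutive gates once and applying the product, trading several matrix-vector products for a single one. A minimal numpy sketch of the idea behind FusedGate (not qibo's actual fusion machinery):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Z = np.diag([1.0 + 0j, -1.0])
S = np.diag([1.0 + 0j, 1.0j])

state = np.array([1.0, 0.0], dtype=complex)

# sequential application: three matrix-vector products
seq = S @ (Z @ (H @ state))

# fused application: pre-compute one 2x2 unitary, then one product
fused = S @ Z @ H
print(np.allclose(fused @ state, seq))  # -> True
```

For larger circuits the same trick is applied blockwise (e.g. fusing one-qubit gates into the neighbouring two-qubit gate as a 4x4 matrix, as the VariationalLayer description above notes).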
can_fuse(gate, max_qubits) Check if two gates can be fused. fuse(gate) Fuses two gates. # Channels Channels are implemented in Qibo as additional gates and can be accessed from the qibo.gates module. Channels can be used on density matrices to perform noisy simulations. Channels that inherit qibo.abstractions.gates.UnitaryChannel can also be applied to state vectors using sampling and repeated execution. For more information on the use of channels to simulate noise we refer to How to perform noisy simulation? The following channels are currently implemented: ## Partial trace class qibo.abstractions.gates.PartialTrace(*q) Collapses a density matrix by tracing out selected qubits. Works only with density matrices (not state vectors) and implements the following transformation: $\mathcal{E}(\rho ) = (|0\rangle \langle 0|) _A \otimes \mathrm{Tr} _A (\rho )$ where A denotes the subsystem of qubits that are traced out. Parameters q (int) – Qubit ids that will be traced-out and collapsed to the zero state. More than one qubit can be given. ## Kraus channel class qibo.abstractions.gates.KrausChannel(ops) General channel defined by arbitrary Kraus operators. Implements the following transformation: $\mathcal{E}(\rho ) = \sum _k A_k \rho A_k^\dagger$ where A are arbitrary Kraus operators given by the user. Note that the set of Kraus operators should be trace preserving; however, this is not checked. Simulation of this gate requires the use of density matrices. For more information on channels and Kraus operators please check J. Preskill’s notes. Parameters ops (list) – List of Kraus operators as pairs (qubits, Ak) where qubits refers to the qubit ids that Ak acts on and Ak is the corresponding matrix as a np.ndarray or tf.Tensor. 
Example

```python
import numpy as np
from qibo.models import Circuit
from qibo import gates

# initialize circuit with 3 qubits
c = Circuit(3, density_matrix=True)
# define a sqrt(0.4) * X gate
a1 = np.sqrt(0.4) * np.array([[0, 1], [1, 0]])
# define a sqrt(0.6) * CNOT gate
a2 = np.sqrt(0.6) * np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                              [0, 0, 0, 1], [0, 0, 1, 0]])
# define the channel rho -> 0.4 X{1} rho X{1} + 0.6 CNOT{0, 2} rho CNOT{0, 2}
channel = gates.KrausChannel([((1,), a1), ((0, 2), a2)])
# add the channel to the circuit
c.add(channel)
```

## Unitary channel

class qibo.abstractions.gates.UnitaryChannel(p, ops, seed=None)

Channel that is a probabilistic sum of unitary operations. Implements the following transformation:

$\mathcal{E}(\rho ) = \left (1 - \sum _k p_k \right )\rho + \sum _k p_k U_k \rho U_k^\dagger$

where U are arbitrary unitary operators and p are floats between 0 and 1. Note that unlike qibo.abstractions.gates.KrausChannel, which requires density matrices, it is possible to simulate the unitary channel using state vectors and probabilistic sampling. For more information on this approach we refer to Using repeated execution.

Parameters

• p (list) – List of floats that correspond to the probability that each unitary Uk is applied.

• ops (list) – List of operators as pairs (qubits, Uk) where qubits refers to the qubit ids that Uk acts on and Uk is the corresponding matrix as a np.ndarray/tf.Tensor. Must have the same length as the given probabilities p.

• seed (int) – Optional seed for the random number generator when sampling instead of density matrices is used to simulate this gate.

## Pauli noise channel

class qibo.abstractions.gates.PauliNoiseChannel(q, px=0, py=0, pz=0, seed=None)

Noise channel that applies Pauli operators with given probabilities. Implements the following transformation:

$\mathcal{E}(\rho ) = (1 - p_x - p_y - p_z) \rho + p_x X\rho X + p_y Y\rho Y + p_z Z\rho Z$

which can be used to simulate phase flip and bit flip errors.
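The Pauli channel transformation above can be verified with a small standalone NumPy calculation (independent of Qibo): with px = 1 the channel reduces to a perfect bit flip, and for any valid probabilities the trace of the density matrix is preserved.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

def pauli_channel(rho, px=0.0, py=0.0, pz=0.0):
    """E(rho) = (1 - px - py - pz) rho + px X rho X + py Y rho Y + pz Z rho Z."""
    return ((1 - px - py - pz) * rho
            + px * X @ rho @ X + py * Y @ rho @ Y + pz * Z @ rho @ Z)

rho = np.array([[1.0, 0.0], [0.0, 0.0]])  # |0><0|
# px = 1 flips |0><0| to |1><1|
assert np.allclose(pauli_channel(rho, px=1.0), np.array([[0, 0], [0, 1]]))
# the channel is trace preserving for any valid probabilities
out = pauli_channel(rho, px=0.1, py=0.2, pz=0.3)
assert np.isclose(np.trace(out).real, 1.0)
```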
This channel can be simulated using either density matrices or state vectors and sampling with repeated execution. See How to perform noisy simulation? for more information.

Parameters

• q (int) – Qubit id that the noise acts on.

• px (float) – Bit flip (X) error probability.

• py (float) – Y-error probability.

• pz (float) – Phase flip (Z) error probability.

• seed (int) – Optional seed for the random number generator when sampling instead of density matrices is used to simulate this gate.

## Reset channel

class qibo.abstractions.gates.ResetChannel(q, p0=0.0, p1=0.0, seed=None)

Single-qubit reset channel. Implements the following transformation:

$\mathcal{E}(\rho ) = (1 - p_0 - p_1) \rho + p_0 (|0\rangle \langle 0| \otimes \tilde{\rho }) + p_1 (|1\rangle \langle 1| \otimes \tilde{\rho })$

with

$\tilde{\rho } = \frac{\langle 0|\rho |0\rangle }{\mathrm{Tr}\langle 0|\rho |0\rangle}$

Parameters

• q (int) – Qubit id that the channel acts on.

• p0 (float) – Probability to reset to 0.

• p1 (float) – Probability to reset to 1.

• seed (int) – Optional seed for the random number generator when sampling instead of density matrices is used to simulate this gate.

## Thermal relaxation channel

class qibo.abstractions.gates.ThermalRelaxationChannel(q, t1, t2, time, excited_population=0, seed=None)

Single-qubit thermal relaxation error channel.
Implements the following transformation:

If $$T_1 \geq T_2$$:

$\mathcal{E} (\rho ) = (1 - p_z - p_0 - p_1)\rho + p_zZ\rho Z + p_0 (|0\rangle \langle 0| \otimes \tilde{\rho }) + p_1 (|1\rangle \langle 1| \otimes \tilde{\rho })$

with

$\tilde{\rho } = \frac{\langle 0|\rho |0\rangle }{\mathrm{Tr}\langle 0|\rho |0\rangle}$

while if $$T_1 < T_2$$:

$\mathcal{E}(\rho ) = \mathrm{Tr} _\mathcal{X}\left [\Lambda _{\mathcal{X}\mathcal{Y}}(\rho _\mathcal{X} ^T \otimes \mathbb{I}_\mathcal{Y})\right ]$

with

$\begin{split}\Lambda = \begin{pmatrix} 1 - p_1 & 0 & 0 & e^{-t / T_2} \\ 0 & p_1 & 0 & 0 \\ 0 & 0 & p_0 & 0 \\ e^{-t / T_2} & 0 & 0 & 1 - p_0 \end{pmatrix}\end{split}$

where $$p_0 = (1 - e^{-t / T_1})(1 - \eta )$$, $$p_1 = (1 - e^{-t / T_1})\eta$$ and $$p_z = 1 - e^{-t / T_1} + e^{-t / T_2} - e^{t / T_1 - t / T_2}$$. Here $$\eta$$ is the excited_population and $$t$$ is the time, both controlled by the user. This gate is based on Qiskit’s thermal relaxation error channel.

Parameters

• q (int) – Qubit id that the noise channel acts on.

• t1 (float) – T1 relaxation time. Should satisfy t1 > 0.

• t2 (float) – T2 dephasing time. Should satisfy t1 > 0 and t2 < 2 * t1.

• time (float) – the gate time for relaxation error.

• excited_population (float) – the population of the excited state at equilibrium. Default is 0.

• seed (int) – Optional seed for the random number generator when sampling instead of density matrices is used to simulate this gate.

# Noise

In Qibo it is possible to create a custom noise model using the class qibo.noise.NoiseModel. This enables the user to create circuits where the noise is gate and qubit dependent. For more information on the use of qibo.noise.NoiseModel see How to perform noisy simulation?

class qibo.noise.NoiseModel

Class for the implementation of a custom noise model.

Example:

```python
from qibo import models, gates
from qibo.noise import NoiseModel, PauliError

# Build specific noise model with 2 quantum errors:
# - Pauli error on H only for qubit 1.
# - Pauli error on CNOT for all the qubits.
noise = NoiseModel()
noise.add(PauliError(px=0.5), gates.H, 1)
noise.add(PauliError(py=0.1), gates.CNOT)

# Generate noiseless circuit.
c = models.Circuit(2)
c.add([gates.H(0), gates.H(1), gates.CNOT(0, 1)])

# Apply noise to the circuit according to the noise model.
noisy_c = noise.apply(c)
```

add(error, gate, qubits=None)

Add a quantum error for a specific gate and qubit to the noise model.

Parameters

• error – quantum error to associate with the gate.

• gate (qibo.abstractions.gates.Gate) – gate after which the noise will be added.

• qubits (tuple) – qubits where the noise will be applied.

apply(circuit)

Generate a noisy quantum circuit according to the noise model built.

Parameters circuit (qibo.core.circuit.Circuit) – quantum circuit

Returns A (qibo.core.circuit.Circuit) which corresponds to the initial circuit with noise gates added according to the noise model.

## Quantum errors

The quantum errors available to build a noise model are the following:

class qibo.noise.PauliError(px=0, py=0, pz=0, seed=None)

Quantum error associated with the qibo.abstractions.gates.PauliNoiseChannel.

Parameters options (tuple) – see qibo.abstractions.gates.PauliNoiseChannel

class qibo.noise.ThermalRelaxationError(t1, t2, time, excited_population=0, seed=None)

Quantum error associated with the qibo.abstractions.gates.ThermalRelaxationChannel.

Parameters options (tuple) – see qibo.abstractions.gates.ThermalRelaxationChannel

class qibo.noise.ResetError(p0, p1, seed=None)

Quantum error associated with the qibo.abstractions.gates.ResetChannel.

Parameters options (tuple) – see qibo.abstractions.gates.ResetChannel

# Hamiltonians

The main abstract Hamiltonian object of Qibo is:

class qibo.abstractions.hamiltonians.AbstractHamiltonian

Qibo abstraction for Hamiltonian objects.

abstract eigenvalues(k=6)

Computes the eigenvalues for the Hamiltonian.

Parameters k (int) – Number of eigenvalues to calculate if the Hamiltonian was created using a sparse matrix. This argument is ignored if the Hamiltonian was created using a dense matrix. See qibo.backends.abstract.AbstractBackend.eigvalsh() for more details.

abstract eigenvectors(k=6)

Computes a tensor with the eigenvectors for the Hamiltonian.

Parameters k (int) – Number of eigenvalues to calculate if the Hamiltonian was created using a sparse matrix.
This argument is ignored if the Hamiltonian was created using a dense matrix. See qibo.backends.abstract.AbstractBackend.eigh() for more details.

ground_state()

Computes the ground state of the Hamiltonian. Uses qibo.abstractions.hamiltonians.AbstractHamiltonian.eigenvectors() and returns the eigenvector corresponding to the lowest energy.

abstract exp(a)

Computes a tensor corresponding to exp(-1j * a * H).

Parameters a (complex) – Complex number to multiply Hamiltonian before exponentiation.

abstract expectation(state, normalize=False)

Computes the real expectation value for a given state.

Parameters

• state (array) – the expectation state.

• normalize (bool) – If True the expectation value is divided by the state’s norm squared.

Returns Real number corresponding to the expectation value.

## Matrix Hamiltonian

The first implementation of Hamiltonians uses the full matrix representation of the Hamiltonian operator in the computational basis. This matrix has size (2 ** nqubits, 2 ** nqubits) and therefore its construction is feasible only when the number of qubits is small.

Alternatively, the user can construct this Hamiltonian using sparse matrices. Sparse matrices from the scipy.sparse module are supported by the numpy and qibojit backends, while tf.sparse matrices can be used for tensorflow and qibotf. Scipy sparse matrices support algebraic operations (addition, subtraction, scalar multiplication), linear algebra operations (eigenvalues, eigenvectors, matrix exponentiation) and multiplication with dense or other sparse matrices. All these properties are inherited by qibo.core.hamiltonians.Hamiltonian objects created using sparse matrices. Tensorflow sparse matrices support only multiplication with dense matrices. Both backends support calculating Hamiltonian expectation values using a sparse Hamiltonian matrix.
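To illustrate why the sparse representation matters (a standalone SciPy sketch with a made-up diagonal Hamiltonian, not Qibo code): a Hamiltonian can be stored in scipy.sparse and its lowest eigenvalues obtained with scipy.sparse.linalg.eigsh, analogous to calling eigenvalues(k) on a Qibo Hamiltonian built from a sparse matrix.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import eigsh

n = 8  # number of qubits; the dense matrix would be 256 x 256
dim = 2 ** n
# an illustrative diagonal Hamiltonian counting the 1-bits of each basis state
diag = np.array([bin(i).count("1") for i in range(dim)], dtype=float)
ham = sparse.diags(diag).tocsc()

# k lowest eigenvalues without forming the dense matrix
vals = eigsh(ham, k=6, which="SA", return_eigenvectors=False)
assert np.isclose(min(vals), 0.0)  # the |00...0> state has zero 1-bits
```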
class qibo.core.hamiltonians.Hamiltonian(nqubits, matrix)

Hamiltonian based on a dense or sparse matrix representation.

Parameters

• nqubits (int) – number of quantum bits.

• matrix (np.ndarray) – Matrix representation of the Hamiltonian in the computational basis as an array of shape (2 ** nqubits, 2 ** nqubits). Sparse matrices based on scipy.sparse for numpy/qibojit backends or on tf.sparse for tensorflow/qibotf backends are also supported.

classmethod from_symbolic(symbolic_hamiltonian, symbol_map)

Creates a Hamiltonian from a symbolic Hamiltonian. We refer to the How to define custom Hamiltonians using symbols? example for more details.

Parameters

• symbolic_hamiltonian (sympy.Expr) – The full Hamiltonian written with symbols.

• symbol_map (dict) – Dictionary that maps each symbol that appears in the Hamiltonian to a pair of (target, matrix).

Returns A qibo.abstractions.hamiltonians.SymbolicHamiltonian object that implements the Hamiltonian represented by the given symbolic expression.

eigenvalues(k=6)

Computes the eigenvalues for the Hamiltonian.

Parameters k (int) – Number of eigenvalues to calculate if the Hamiltonian was created using a sparse matrix. This argument is ignored if the Hamiltonian was created using a dense matrix. See qibo.backends.abstract.AbstractBackend.eigvalsh() for more details.

eigenvectors(k=6)

Computes a tensor with the eigenvectors for the Hamiltonian.

Parameters k (int) – Number of eigenvalues to calculate if the Hamiltonian was created using a sparse matrix. This argument is ignored if the Hamiltonian was created using a dense matrix. See qibo.backends.abstract.AbstractBackend.eigh() for more details.

exp(a)

Computes a tensor corresponding to exp(-1j * a * H).

Parameters a (complex) – Complex number to multiply Hamiltonian before exponentiation.

expectation(state, normalize=False)

Computes the real expectation value for a given state.

Parameters

• state (array) – the expectation state.
• normalize (bool) – If True the expectation value is divided by the state’s norm squared.

Returns Real number corresponding to the expectation value.

## Symbolic Hamiltonian

Qibo allows the user to define Hamiltonians using sympy symbols. In this case the full Hamiltonian matrix is not constructed unless it is required. This makes the implementation more efficient for larger qubit numbers. For more information on constructing Hamiltonians using symbols we refer to the How to define custom Hamiltonians using symbols? example.

class qibo.abstractions.hamiltonians.SymbolicHamiltonian(ground_state=None)

Abstract Hamiltonian based on symbolic representation. Unlike qibo.abstractions.hamiltonians.MatrixHamiltonian, this object does not create the full (2 ** nqubits, 2 ** nqubits) Hamiltonian matrix, leading to more efficient calculations. Note that the full matrix is still created automatically if methods that require it, such as .eigenvectors() or .exp(), are called.

property dense

Creates the equivalent qibo.abstractions.hamiltonians.MatrixHamiltonian.

property matrix

Returns the full (2 ** nqubits, 2 ** nqubits) matrix representation.

eigenvalues(k=6)

Computes the eigenvalues for the Hamiltonian.

Parameters k (int) – Number of eigenvalues to calculate if the Hamiltonian was created using a sparse matrix. This argument is ignored if the Hamiltonian was created using a dense matrix. See qibo.backends.abstract.AbstractBackend.eigvalsh() for more details.

eigenvectors(k=6)

Computes a tensor with the eigenvectors for the Hamiltonian.

Parameters k (int) – Number of eigenvalues to calculate if the Hamiltonian was created using a sparse matrix. This argument is ignored if the Hamiltonian was created using a dense matrix. See qibo.backends.abstract.AbstractBackend.eigh() for more details.

ground_state()

Computes the ground state of the Hamiltonian.
Uses qibo.abstractions.hamiltonians.AbstractHamiltonian.eigenvectors() and returns the eigenvector corresponding to the lowest energy.

exp(a)

Computes a tensor corresponding to exp(-1j * a * H).

Parameters a (complex) – Complex number to multiply Hamiltonian before exponentiation.

class qibo.core.hamiltonians.SymbolicHamiltonian(form=None, symbol_map={}, ground_state=None)

Backend implementation of qibo.abstractions.hamiltonians.SymbolicHamiltonian.

Calculations using symbolic Hamiltonians are either done directly using the given sympy expression as it is (form) or by parsing the corresponding terms (which are qibo.core.terms.SymbolicTerm objects). The latter approach is more computationally costly as it uses a sympy.expand call on the given form before parsing the terms. For this reason the terms are calculated only when needed, for example during Trotterization. The dense matrix of the symbolic Hamiltonian can be calculated directly from form without requiring terms calculation (see qibo.core.hamiltonians.SymbolicHamiltonian.calculate_dense() for details).

Parameters

• form (sympy.Expr) – Hamiltonian form as a sympy.Expr. Ideally the Hamiltonian should be written using Qibo symbols. See the How to define custom Hamiltonians using symbols? example for more details.

• symbol_map (dict) – Dictionary that maps each sympy.Symbol to a tuple of (target qubit, matrix representation). This feature is kept for compatibility with older versions where Qibo symbols were not available and may be deprecated in the future. It is not required if the Hamiltonian is constructed using Qibo symbols. The symbol_map can also be used to pass non-quantum operator arguments to the symbolic Hamiltonian, such as the parameters in the qibo.hamiltonians.MaxCut() Hamiltonian.

• ground_state (Callable) – Function with no arguments that returns the ground state of this Hamiltonian.
This is useful in cases where the ground state is trivial and is used for initialization, for example the easy Hamiltonian in adiabatic evolution, however we would like to avoid constructing and diagonalizing the full Hamiltonian matrix only to find the ground state.

property terms

List of qibo.core.terms.HamiltonianTerm objects of which the Hamiltonian is a sum.

expectation(state, normalize=False)

Computes the real expectation value for a given state.

Parameters

• state (array) – the expectation state.

• normalize (bool) – If True the expectation value is divided by the state’s norm squared.

Returns Real number corresponding to the expectation value.

apply_gates(state, density_matrix=False)

Applies gates corresponding to the Hamiltonian terms to a given state. Helper method for __matmul__.

circuit(dt, accelerators=None)

Circuit that implements a Trotter step of this Hamiltonian for a given time step dt.

When a qibo.core.hamiltonians.SymbolicHamiltonian is used for time evolution, Qibo will automatically perform this evolution using the Trotter decomposition of the evolution operator. This is done by automatically splitting the Hamiltonian into sums of commuting terms, following the description of Sec. 4.1 of arXiv:1901.05824. For more information on time evolution we refer to the How to simulate time evolution? example.

In addition to the abstract Hamiltonian models, Qibo provides the following pre-coded Hamiltonians:

## Heisenberg XXZ

class qibo.hamiltonians.XXZ(nqubits, delta=0.5, dense=True)

Heisenberg XXZ model with periodic boundary conditions.

$H = \sum _{i=0}^N \left ( X_iX_{i + 1} + Y_iY_{i + 1} + \delta Z_iZ_{i + 1} \right ).$

Parameters

Example

```python
from qibo.hamiltonians import XXZ

# initialize XXZ model with 3 qubits
h = XXZ(3)
```

## Non-interacting Pauli-X

class qibo.hamiltonians.X(nqubits, dense=True)

Non-interacting Pauli-X Hamiltonian.
$H = - \sum _{i=0}^N X_i.$

Parameters

## Non-interacting Pauli-Y

class qibo.hamiltonians.Y(nqubits, dense=True)

Non-interacting Pauli-Y Hamiltonian.

$H = - \sum _{i=0}^N Y_i.$

Parameters

## Non-interacting Pauli-Z

class qibo.hamiltonians.Z(nqubits, dense=True)

Non-interacting Pauli-Z Hamiltonian.

$H = - \sum _{i=0}^N Z_i.$

Parameters

## Transverse field Ising model

class qibo.hamiltonians.TFIM(nqubits, h=0.0, dense=True)

Transverse field Ising model with periodic boundary conditions.

$H = - \sum _{i=0}^N \left ( Z_i Z_{i + 1} + h X_i \right ).$

Parameters

## Max Cut

class qibo.hamiltonians.MaxCut(nqubits, dense=True)

Max Cut Hamiltonian.

$H = - \sum _{i,j=0}^N \frac{1 - Z_i Z_j}{2}.$

Parameters

Note

All pre-coded Hamiltonians can be created as qibo.core.hamiltonians.Hamiltonian using dense=True or as qibo.core.hamiltonians.SymbolicHamiltonian using dense=False. In the first case the Hamiltonian is created using its full matrix representation of size (2 ** n, 2 ** n) where n is the number of qubits that the Hamiltonian acts on. This matrix is used to calculate expectation values by direct matrix multiplication to the state and for time evolution by exact exponentiation. In contrast, when dense=False the Hamiltonian contains a more compact representation as a sum of local terms. This compact representation can be used to calculate expectation values via a sum of the local term expectations and time evolution via the Trotter decomposition of the evolution operator. This is useful for systems that contain many qubits for which constructing the full matrix is intractable.

# Symbols

Qibo provides a basic set of symbols which inherit the sympy.Symbol object and can be used to construct qibo.abstractions.hamiltonians.SymbolicHamiltonian objects as described in the previous section.

class qibo.symbols.Symbol(q, matrix=None, name='Symbol', commutative=False)

Qibo specialization for sympy symbols.
These symbols can be used to create qibo.core.hamiltonians.SymbolicHamiltonian. See How to define custom Hamiltonians using symbols? for more details.

Example

```python
from qibo import hamiltonians
from qibo.symbols import X, Y, Z

# construct a XYZ Hamiltonian on two qubits using Qibo symbols
form = X(0) * X(1) + Y(0) * Y(1) + Z(0) * Z(1)
ham = hamiltonians.SymbolicHamiltonian(form)
```

Parameters

• q (int) – Target qubit id.

• matrix (np.ndarray) – 2x2 matrix represented by this symbol.

• name (str) – Name of the symbol which defines how it is represented in symbolic expressions.

• commutative (bool) – If True the constructed symbols commute with each other. Default is False. This argument should be used with caution because quantum operators are not commutative objects and therefore switching this to True may lead to wrong results. It is useful for improving performance in symbolic calculations in cases where the user is sure that the operators participating in the Hamiltonian form are commuting (for example when the Hamiltonian consists of Z terms only).

property gate

Qibo gate that implements the action of the symbol on states.

full_matrix(nqubits)

Calculates the full dense matrix corresponding to the symbol as part of a bigger system.

Parameters nqubits (int) – Total number of qubits in the system.

Returns Matrix of dimension (2^nqubits, 2^nqubits) composed of the Kronecker product between identities and the symbol’s single-qubit matrix.

class qibo.symbols.X(q, commutative=False)

Qibo symbol for the Pauli-X operator.

Parameters q (int) – Target qubit id.

class qibo.symbols.Y(q, commutative=False)

Qibo symbol for the Pauli-Y operator.

Parameters q (int) – Target qubit id.

class qibo.symbols.Z(q, commutative=False)

Qibo symbol for the Pauli-Z operator.

Parameters q (int) – Target qubit id.

# States

Qibo circuits return qibo.abstractions.states.AbstractState objects when executed.
By default, Qibo works as a wave function simulator in the sense that it propagates the state vector through the circuit, applying the corresponding gates. In this default usage the result of a circuit execution is the full final state vector, which can be accessed via the tensor property of states. However, for specific applications it is useful to have measurement samples from the final wave function instead of its full vector form. To that end, qibo.abstractions.states.AbstractState provides the qibo.abstractions.states.AbstractState.samples() and qibo.abstractions.states.AbstractState.frequencies() methods.

The state vector (or density matrix) is saved in memory as a tensor supported by the currently active backend (see Backends for more information). A copy of the state can be created using qibo.abstractions.states.AbstractState.copy(). The new state will point to the same tensor in memory as the original one unless the deep=True option was used during the copy call. Note that some backends (qibojit, qibotf) perform in-place updates when the state is used as input to a circuit or time evolution. This will modify the state’s tensor and the tensor of all shallow copies, and the current state vector values will be lost. If you intend to keep the current state values, we recommend creating a deep copy before using it as input to a qibo model.

In order to perform measurements the user has to add the measurement gate qibo.core.gates.M to the circuit and then execute providing a number of shots. If this is done, the qibo.abstractions.states.AbstractState returned by the circuit will contain the measurement samples. For more information on measurements we refer to the How to perform measurements? example.

## Abstract state

class qibo.abstractions.states.AbstractState(nqubits=None)

Abstract class for quantum states returned by model execution.

Parameters nqubits (int) – Optional number of qubits in the state.
If None then the number is calculated automatically from the tensor representation of the state.

property nqubits

Number of qubits in the state.

abstract property shape

Shape of the state’s tensor representation.

abstract property dtype

Type of state’s tensor representation.

property tensor

Tensor representation of the state in the computational basis.

abstract symbolic(decimals=5, cutoff=1e-10, max_terms=20)

Dirac notation representation of the state in the computational basis.

Parameters

• decimals (int) – Number of decimals for the amplitudes. Default is 5.

• cutoff (float) – Amplitudes with absolute value smaller than the cutoff are ignored from the representation. Default is 1e-10.

• max_terms (int) – Maximum number of terms to print. If the state contains more terms they will be ignored. Default is 20.

Returns A string representing the state in the computational basis.

abstract numpy()

State’s tensor representation as a numpy array.

state(numpy=False, decimals=-1, cutoff=1e-10, max_terms=20)

State’s tensor representation as a backend tensor.

Parameters

• numpy (bool) – If True the returned tensor will be a numpy array, otherwise it will follow the backend tensor type. Default is False.

• decimals (int) – If positive, the Dirac representation of the state in the computational basis will be returned as a string. decimals will be the number of decimals of each amplitude. Default is -1.

• cutoff (float) – Amplitudes with absolute value smaller than the cutoff are ignored from the Dirac representation. Ignored if decimals < 0. Default is 1e-10.

• max_terms (int) – Maximum number of terms in the Dirac representation. If the state contains more terms they will be ignored. Ignored if decimals < 0. Default is 20.

Returns If decimals < 0 a tensor representing the state in the computational basis, otherwise a string with the Dirac representation of the state in the computational basis.

classmethod from_tensor(x, nqubits=None)

Constructs state from a tensor.
Parameters

• x – Tensor representation of the state in the computational basis.

• nqubits (int) – Optional number of qubits in the state. If None it is calculated automatically from the tensor representation shape.

abstract classmethod zero_state(nqubits)

Constructs the |00...0> state.

Parameters nqubits (int) – Number of qubits in the state.

abstract classmethod plus_state(nqubits)

Constructs the |++...+> state.

Parameters nqubits (int) – Number of qubits in the state.

abstract copy(deep=False)

Creates a copy of the state.

Parameters deep (bool) – If True it creates a deep copy of the state by duplicating the tensor in memory, otherwise the copied state references the same tensor object. Default is False for memory efficiency.

Returns A qibo.abstractions.states.AbstractState object that represents the same state as the original.

abstract to_density_matrix()

Transforms a pure quantum state to its density matrix form.

Returns A qibo.abstractions.states.AbstractState object that contains the state in density matrix form.

abstract probabilities(qubits=None, measurement_gate=None)

Calculates measurement probabilities by tracing out qubits. Exactly one of the following arguments should be given.

Parameters

abstract measure(gate, nshots, registers=None)

Measures the state using a measurement gate.

Parameters

• gate (qibo.abstractions.gates.M) – Measurement gate to use for measuring the state.

• nshots (int) – Number of measurement shots.

• registers (dict) – Dictionary that maps register names to the corresponding tuples of qubit ids.

abstract set_measurements(qubits, samples, registers=None)

Sets the state’s measurements using decimal samples.

Parameters

• qubits (tuple) – Measured qubit ids.

• samples (Tensor) – Tensor with decimal samples of the measurement results.

• registers (dict) – Dictionary that maps register names to the corresponding tuples of qubit ids.

abstract samples(binary=True, registers=False)

Returns raw measurement samples.
Parameters

Returns If binary is True samples are returned in binary form as a tensor of shape (nshots, n_measured_qubits). If binary is False samples are returned in decimal form as a tensor of shape (nshots,). If registers is True samples are returned in a dict where the keys are the register names and the values are the samples tensors for each register. If registers is False a single tensor is returned which contains samples from all the measured qubits, independently of their registers.

abstract frequencies(binary=True, registers=False)

Returns the frequencies of measured samples.

Parameters

Returns A collections.Counter where the keys are the observed values and the values the corresponding frequencies, that is the number of times each measured value/bitstring appears. If binary is True the keys of the Counter are in binary form, as strings of 0s and 1s. If binary is False the keys of the Counter are integers. If registers is True a dict of Counter s is returned where keys are the name of each register. If registers is False a single Counter is returned which contains samples from all the measured qubits, independently of their registers.

abstract apply_bitflips(p0, p1=None)

Applies bitflip noise to the measured samples.

Parameters

• p0 – Bitflip probability map. Can be: a dictionary that maps each measured qubit to the probability that it is flipped, a list or tuple that has the same length as the tuple of measured qubits, or a single float number. If a single float is given the same probability will be used for all qubits.

• p1 – Probability of asymmetric bitflip. If p1 is given, p0 will be used as the probability for 0->1 and p1 as the probability for 1->0. If p1 is None the same probability p0 will be used for both bitflips.

abstract expectation(hamiltonian, normalize=False)

Calculates Hamiltonian expectation value with respect to the state.
Parameters

• hamiltonian (qibo.abstractions.hamiltonians.Hamiltonian) – Hamiltonian object to calculate the expectation value of.

• normalize (bool) – Normalize the result by dividing with the norm of the state. Default is False.

## Distributed state

class qibo.core.states.DistributedState(circuit)

Data structure that holds the pieces of a state vector.

This is created automatically by qibo.core.distcircuit.DistributedCircuit which uses state pieces instead of the full state vector tensor to allow distribution to multiple devices. Using the DistributedState instead of the full state vector as a tensor avoids creating two copies of the state in the CPU memory and allows simulation of one more qubit. The full state vector can be accessed using the state.vector or state.numpy() methods of the DistributedState. The DistributedState supports indexing as a standard array.

property dtype

Type of state’s tensor representation.

property tensor

Returns the full state vector as a tensor of shape (2 ** nqubits,). This is done by merging the state pieces to a single tensor. Using this method will double memory usage.

create_pieces()

Creates qibo.core.states.DistributedState pieces on CPU.

assign_pieces(tensor)

Assigns state pieces from a given full state vector.

Parameters tensor (K.Tensor) – The full state vector as a tensor supported by the underlying backend.

classmethod from_tensor(full_state, circuit)

Constructs a distributed state from a full state vector tensor.

Parameters

• full_state – Tensor representation of the full state in the computational basis.

• circuit (qibo.core.distcircuit.DistributedCircuit) – Distributed circuit that the state belongs to.

classmethod zero_state(circuit)

Creates |00...0> as a distributed state.

classmethod plus_state(circuit)

Creates |++...+> as a distributed state.

copy(deep=False)

Creates a copy of the state.
Parameters deep (bool) – If True it creates a deep copy of the state by duplicating the tensor in memory, otherwise the copied state references the same tensor object. Default is False for memory efficiency.

Returns A qibo.abstractions.states.AbstractState object that represents the same state as the original.

probabilities(qubits=None, measurement_gate=None)

Calculates measurement probabilities by tracing out qubits. Exactly one of the following arguments should be given.

Parameters

# Callbacks

Callbacks provide a way to calculate quantities on the state vector as it propagates through the circuit. An example of such a quantity is the entanglement entropy, which is implemented in qibo.abstractions.callbacks.EntanglementEntropy. The user can create custom callbacks by inheriting the qibo.abstractions.callbacks.Callback class. The point at which each callback is calculated inside the circuit is defined by adding a qibo.abstractions.gates.CallbackGate. This can be added similarly to a standard gate and does not affect the state vector.

class qibo.abstractions.callbacks.Callback

Base callback class. Callbacks should inherit this class and implement its _state_vector_call and _density_matrix_call methods. Results of a callback can be accessed by indexing the corresponding object.

property nqubits

Total number of qubits in the circuit that the callback was added in.

## Entanglement entropy

class qibo.abstractions.callbacks.EntanglementEntropy(partition: Optional[List[int]] = None, compute_spectrum: bool = False)

Von Neumann entanglement entropy callback.

$S = - \mathrm{Tr} \left ( \rho \log _2 \rho \right )$

Parameters

• partition (list) – List with qubit ids that defines the first subsystem for the entropy calculation. If partition is not given then the first subsystem is the first half of the qubits.

• compute_spectrum (bool) – Compute the entanglement spectrum. Default is False.
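The entropy formula can also be evaluated directly with NumPy (a standalone sketch, not the Qibo callback): for one half of a Bell state the reduced density matrix is maximally mixed, so the entanglement entropy is exactly 1 bit, while a product state gives 0.

```python
import numpy as np

def entanglement_entropy(state, dim_a):
    """S = -Tr(rho_A log2 rho_A) for the first subsystem of dimension dim_a."""
    psi = state.reshape(dim_a, -1)
    rho_a = psi @ psi.conj().T        # partial trace over the second subsystem
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]      # convention: 0 * log 0 = 0
    return -np.sum(evals * np.log2(evals))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # (|00> + |11>) / sqrt(2)
assert np.isclose(entanglement_entropy(bell, 2), 1.0)

product = np.array([1.0, 0, 0, 0])          # |00> has no entanglement
assert np.isclose(entanglement_entropy(product, 2), 0.0)
```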
Example

```python
from qibo import models, gates, callbacks

# create entropy callback where qubit 0 is the first subsystem
entropy = callbacks.EntanglementEntropy([0], compute_spectrum=True)

# initialize circuit with 2 qubits and add gates
c = models.Circuit(2)
# add callback gates between normal gates
c.add(gates.CallbackGate(entropy))
c.add(gates.H(0))
c.add(gates.CallbackGate(entropy))
c.add(gates.CNOT(0, 1))
c.add(gates.CallbackGate(entropy))
# execute the circuit
final_state = c()

print(entropy[:])
# Should print [0, 0, 1] which is the entanglement entropy
# after every gate in the calculation.
print(entropy.spectrum)  # Print the entanglement spectrum.
```

## Norm

class qibo.abstractions.callbacks.Norm State norm callback. $\mathrm{Norm} = \left \langle \Psi | \Psi \right \rangle = \mathrm{Tr} (\rho )$

## Overlap

class qibo.abstractions.callbacks.Overlap State overlap callback. Calculates the overlap between the circuit state and a given target state: $\mathrm{Overlap} = |\left \langle \Phi | \Psi \right \rangle |$ Parameters • state (np.ndarray) – Target state to calculate overlap with. • normalize (bool) – If True the states are normalized for the overlap calculation.

## Energy

class qibo.abstractions.callbacks.Energy(hamiltonian: hamiltonians.Hamiltonian) Energy expectation value callback. Calculates the expectation value of a given Hamiltonian as: $\left \langle H \right \rangle = \left \langle \Psi | H | \Psi \right \rangle = \mathrm{Tr} (\rho H)$ assuming that the state is normalized. Parameters hamiltonian (qibo.hamiltonians.Hamiltonian) – Hamiltonian object to calculate its expectation value.

## Gap

class qibo.abstractions.callbacks.Gap(mode: Union[str, int] = 'gap', check_degenerate: bool = True) Callback for calculating the gap of adiabatic evolution Hamiltonians. Can also be used to calculate the Hamiltonian eigenvalues at each time step during the evolution. Note that this callback can only be added in qibo.evolution.AdiabaticEvolution models. Parameters • mode (str/int) – Defines which quantity this callback calculates.
If mode == 'gap' then the difference between ground state and first excited state energy (gap) is calculated. If mode is an integer, then the energy of the corresponding eigenstate is calculated. • check_degenerate (bool) – If True the excited state number is increased until a non-zero gap is found. This is used to find the proper gap in the case of degenerate Hamiltonians. This flag is relevant only if mode is 'gap'. Default is True.

Example

```python
from qibo import callbacks, hamiltonians, models

# define easy and hard Hamiltonians for adiabatic evolution
h0 = hamiltonians.X(3)
h1 = hamiltonians.TFIM(3, h=1.0)

# define callbacks for logging the ground state, first excited
# and gap energy
ground = callbacks.Gap(0)
excited = callbacks.Gap(1)
gap = callbacks.Gap()

# define and execute the AdiabaticEvolution model
evolution = models.AdiabaticEvolution(h0, h1, lambda t: t, dt=1e-1,
                                      callbacks=[gap, ground, excited])
final_state = evolution(final_time=1.0)

# print results
print(ground[:])
print(excited[:])
print(gap[:])
```

# Solvers

Solvers are used to numerically calculate the time evolution of state vectors. They perform steps in time by integrating the time-dependent Schrodinger equation. class qibo.solvers.BaseSolver(dt, hamiltonian) Basic solver that should be inherited by all solvers. Parameters • dt (float) – Time step size. • hamiltonian (qibo.abstractions.hamiltonians.Hamiltonian) – Hamiltonian object that the state evolves under. property t Solver’s current time. class qibo.solvers.TrotterizedExponential(dt, hamiltonian) Solver that uses Trotterized exponentials. Created automatically from the qibo.solvers.Exponential if the given Hamiltonian object is a qibo.abstractions.hamiltonians.TrotterHamiltonian. class qibo.solvers.Exponential(dt, hamiltonian) Solver that uses the matrix exponential of the Hamiltonian: $U(t) = e^{-i H(t) \delta t}$ Calculates the evolution operator in every step and thus is compatible with time-dependent Hamiltonians.
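The single step performed by the Exponential solver can be sketched outside Qibo with plain numpy: for a Hermitian $H$, the propagator $U = e^{-iH\,\delta t}$ follows from the eigendecomposition of $H$. The helper name expm_hermitian below is ours, chosen for illustration; this is not Qibo's implementation.

```python
import numpy as np

def expm_hermitian(h, dt):
    """exp(-1j * h * dt) for a Hermitian matrix h via eigendecomposition."""
    w, v = np.linalg.eigh(h)          # h = v @ diag(w) @ v^dagger
    return (v * np.exp(-1j * w * dt)) @ v.conj().T

# single-qubit example: evolve |0> under H = X for dt = pi / 2
h = np.array([[0, 1], [1, 0]], dtype=complex)
u = expm_hermitian(h, np.pi / 2)      # equals -1j * X
state = np.array([1, 0], dtype=complex)
evolved = u @ state
print(np.abs(evolved) ** 2)           # probability fully transferred to |1>
```

A Trotterized solver would instead apply such small exponentials term by term for each local part of the Hamiltonian, rather than exponentiating the full matrix.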
class qibo.solvers.RungeKutta4(dt, hamiltonian) Solver based on the 4th order Runge-Kutta method. class qibo.solvers.RungeKutta45(dt, hamiltonian) Solver based on the 5th order Runge-Kutta method. # Optimizers Optimizers are used automatically by the minimize methods of qibo.models.VQE and qibo.evolution.AdiabaticEvolution models. The user does not have to use any of the optimizer methods included in the current section, however the required options of each optimization method can be passed when calling the minimize method of the respective Qibo variational model. qibo.optimizers.optimize(loss, initial_parameters, args=(), method='Powell', jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None, compile=False, processes=None) Main optimization method. Selects one of the following optimizers: Parameters • loss (callable) – Loss as a function of parameters and optional extra arguments. Make sure the loss function returns a tensor for method=sgd and numpy object for all the other methods. • initial_parameters (np.ndarray) – Initial guess for the variational parameters that are optimized. • args (tuple) – optional arguments for the loss function. • method (str) – Name of optimizer to use. Can be 'cma', 'sgd' or one of the Newtonian methods supported by qibo.optimizers.newtonian() and 'parallel_L-BFGS-B'. sgd is only available for backends based on tensorflow. • jac (dict) – Method for computing the gradient vector for scipy optimizers. • hess (dict) – Method for computing the hessian matrix for scipy optimizers. • hessp (callable) – Hessian of objective function times an arbitrary vector for scipy optimizers. • bounds (sequence or Bounds) – Bounds on variables for scipy optimizers. • constraints (dict) – Constraints definition for scipy optimizers. • tol (float) – Tolerance of termination for scipy optimizers. • callback (callable) – Called after each iteration for scipy optimizers. 
• options (dict) – Dictionary with options. See the specific optimizer below for a list of the supported options. • compile (bool) – If True the Tensorflow optimization graph is compiled. This is relevant only for the 'sgd' optimizer. • processes (int) – number of processes when using the parallel BFGS method. Returns Final best loss value; best parameters obtained by the optimizer; extra: optimizer-specific return object. For scipy methods it returns the OptimizeResult, for 'cma' the CMAEvolutionStrategy.result, and for 'sgd' the options used during the optimization. Return type (float, float, custom)

Example

```python
import numpy as np
from qibo import gates, models
from qibo.optimizers import optimize

# create custom loss function
# make sure the return type matches the optimizer requirements.
def myloss(parameters, circuit):
    circuit.set_parameters(parameters)
    return np.square(np.sum(circuit()))  # returns numpy array

# create circuit ansatz for two qubits
circuit = models.Circuit(2)
circuit.add(gates.RY(0, theta=0))

# optimize using random initial variational parameters
initial_parameters = np.random.uniform(0, 2, 1)
best, params, extra = optimize(myloss, initial_parameters, args=(circuit,))

# set parameters to circuit
circuit.set_parameters(params)
```

qibo.optimizers.cmaes(loss, initial_parameters, args=(), options=None) Genetic optimizer based on pycma. Parameters • loss (callable) – Loss as a function of variational parameters to be optimized. • initial_parameters (np.ndarray) – Initial guess for the variational parameters. • args (tuple) – optional arguments for the loss function. • options (dict) – Dictionary with options accepted by the cma optimizer. The user can use import cma; cma.CMAOptions() to view the available options. qibo.optimizers.newtonian(loss, initial_parameters, args=(), method='Powell', jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None, processes=None) Newtonian optimization approaches based on scipy.optimize.minimize.
For more details check the scipy documentation. Note When using the method parallel_L-BFGS-B the processes option controls the number of processes used by the parallel L-BFGS-B algorithm through the multiprocessing library. By default processes=None, in which case the total number of logical cores is used. Make sure to select the appropriate number of processes for your computer specification, taking into consideration memory and physical cores. In order to obtain optimal results you can control the number of threads used by each process with the qibo.set_threads method. For example, for small-medium size circuits you may benefit from a single thread per process, thus set qibo.set_threads(1) before running the optimization. Parameters • loss (callable) – Loss as a function of variational parameters to be optimized. • initial_parameters (np.ndarray) – Initial guess for the variational parameters. • args (tuple) – optional arguments for the loss function. • method (str) – Name of method supported by scipy.optimize.minimize and 'parallel_L-BFGS-B' for a parallel version of the L-BFGS-B algorithm. • jac (dict) – Method for computing the gradient vector for scipy optimizers. • hess (dict) – Method for computing the hessian matrix for scipy optimizers. • hessp (callable) – Hessian of objective function times an arbitrary vector for scipy optimizers. • bounds (sequence or Bounds) – Bounds on variables for scipy optimizers. • constraints (dict) – Constraints definition for scipy optimizers. • tol (float) – Tolerance of termination for scipy optimizers. • callback (callable) – Called after each iteration for scipy optimizers. • options (dict) – Dictionary with options accepted by scipy.optimize.minimize. • processes (int) – number of processes when using the parallel BFGS method. qibo.optimizers.sgd(loss, initial_parameters, args=(), options=None, compile=False) Stochastic Gradient Descent (SGD) optimizer using Tensorflow backpropagation.
See tf.keras.Optimizers for a list of the available optimizers. Parameters • loss (callable) – Loss as a function of variational parameters to be optimized. • initial_parameters (np.ndarray) – Initial guess for the variational parameters. • args (tuple) – optional arguments for the loss function. • options (dict) – Dictionary with options for the SGD optimizer. Supports the following keys: • 'optimizer' (str, default: 'Adagrad'): Name of optimizer. • 'learning_rate' (float, default: 1e-3): Learning rate. • 'nepochs' (int, default: 1e6): Number of epochs for optimization. • 'nmessage' (int, default: 1e3): How often (in epochs) to print the current value of the loss function.

# Parallelism

We provide CPU multi-processing methods for evaluating a circuit on multiple input states, and for evaluating multiple parameter sets on a fixed input state. When using the methods below the processes option controls the number of processes used by the parallel algorithms through the multiprocessing library. By default processes=None, in which case the total number of logical cores is used. Make sure to select the appropriate number of processes for your computer specification, taking into consideration memory and physical cores. In order to obtain optimal results you can control the number of threads used by each process with the qibo.set_threads method. For example, for small-medium size circuits you may benefit from a single thread per process, thus set qibo.set_threads(1) before running the optimization. Resources for parallel circuit evaluation. qibo.parallel.parallel_execution(circuit, states, processes=None) Execute circuit for multiple states.
Example

```python
import qibo
original_backend = qibo.get_backend()
qibo.set_backend("qibotf")
from qibo import models
from qibo.parallel import parallel_execution
import numpy as np

# create circuit
nqubits = 22
circuit = models.QFT(nqubits)

# create random states
states = [np.random.random(2**nqubits) for i in range(5)]

# set threads to 1 per process (optional, requires tuning)
qibo.set_threads(1)

# execute in parallel
results = parallel_execution(circuit, states, processes=2)
qibo.set_backend(original_backend)
```

Parameters • circuit (qibo.models.Circuit) – the input circuit. • states (list) – list of states for the circuit evaluation. • processes (int) – number of processes for parallel evaluation. Returns Circuit evaluation for input states. qibo.parallel.parallel_parametrized_execution(circuit, parameters, initial_state=None, processes=None) Execute circuit for multiple parameters and fixed initial_state.

Example

```python
import qibo
original_backend = qibo.get_backend()
qibo.set_backend("qibotf")
from qibo import models, gates, set_threads
from qibo.parallel import parallel_parametrized_execution
import numpy as np

# create circuit
nqubits = 6
nlayers = 2
circuit = models.Circuit(nqubits)
for l in range(nlayers):
    circuit.add((gates.RY(q, theta=0) for q in range(nqubits)))
    circuit.add((gates.CZ(q, q+1) for q in range(0, nqubits-1, 2)))
    circuit.add((gates.RY(q, theta=0) for q in range(nqubits)))
    circuit.add((gates.CZ(q, q+1) for q in range(1, nqubits-2, 2)))
circuit.add((gates.RY(q, theta=0) for q in range(nqubits)))

# create random parameters
size = len(circuit.get_parameters())
parameters = [np.random.uniform(0, 2*np.pi, size) for _ in range(10)]

# set threads to 1 per process (optional, requires tuning)
set_threads(1)

# execute in parallel
results = parallel_parametrized_execution(circuit, parameters, processes=2)
qibo.set_backend(original_backend)
```

Parameters • circuit (qibo.models.Circuit) – the input circuit. • parameters (list) – list of parameters for the circuit evaluation.
• initial_state (np.array) – initial state for the circuit evaluation. • processes (int) – number of processes for parallel evaluation. Returns Circuit evaluation for input parameters.

# Backends

The main calculation engine is defined in the abstract backend object qibo.backends.abstract.AbstractBackend. This object defines the methods required by all Qibo models to perform simulation. Qibo currently provides two different calculation backends, one based on numpy and one based on Tensorflow. It is possible to define new backends by inheriting qibo.backends.abstract.AbstractBackend and implementing its abstract methods. Both backends are supplemented by custom operators which can be used to efficiently apply gates to state vectors or density matrices. These custom operators are shipped as the separate libraries qibojit and qibotf. We refer to the Packages section for a complete list of the available computation backends and instructions on how to install each of these libraries on top of qibo. Custom operators are much faster than implementations based on numpy or Tensorflow primitives (such as einsum) but do not support some features, such as automatic differentiation for backpropagation of variational circuits, which is only supported by the native tensorflow backend. The user can switch backends using

```python
import qibo
qibo.set_backend("qibotf")
qibo.set_backend("numpy")
```

before creating any circuits or gates. The default backend is the first available from qibojit, qibotf, tensorflow, numpy. Some backends support different platforms. For example, the qibojit backend provides two platforms (cupy and cuquantum) when used on GPU. The active platform can be switched using

```python
import qibo
qibo.set_backend("qibojit", platform="cuquantum")
qibo.set_backend("qibojit", platform="cupy")
```

For developers, we provide a configuration file in qibo/backends/profiles.yml containing the technical specifications for all backends supported by the Qibo team.
If you are planning to introduce a new backend module for simulation or hardware, you can simply edit this profile file and include the reference to your new module. Alternatively, you can set a custom profile file by storing the file path in the QIBO_PROFILE environment variable before executing the code. class qibo.backends.abstract.AbstractBackend test_regressions(name) Correct outcomes for tests that involve random numbers. The outcomes of such tests depend on the backend. abstract set_platform(platform) Sets the platform used by the backend. Not all backends support different platforms. ‘qibojit’ GPU supports two platforms (‘cupy’, ‘cuquantum’). ‘qibolab’ supports multiple platforms depending on the quantum hardware. abstract get_platform() Returns the name of the activated platform. See qibo.backends.abstract.AbstractBackend.set_platform() for more details on platforms. get_cpu() Returns the default CPU device to use for OOM fallback. cpu_fallback(func, *args) Executes a function on CPU if the default device raises an out-of-memory (OOM) error. circuit_class(accelerators=None, density_matrix=False) Returns the class used to create the circuit model. Useful for hardware backends which use different circuit models. Parameters • accelerators (dict) – Dictionary that maps device names to the number of times each device will be used. See qibo.core.distcircuit.DistributedCircuit for more details. • density_matrix (bool) – If True it creates a circuit for density matrix simulation. Default is False which corresponds to state vector simulation. create_gate(cls, *args, **kwargs) Creates gate objects supported by the backend. Useful for hardware backends which use different gate objects. abstract to_numpy(x) Converts a tensor to numpy. abstract to_complex(re, img) Creates a complex number from real numbers. abstract cast(x, dtype='DTYPECPX') Casts a tensor to the given dtype. abstract issparse(x) Checks if the given tensor is sparse. abstract reshape(x, shape) Reshapes a tensor to the given shape.
abstract stack(x, axis=None) Stacks a list of tensors to a single tensor. abstract concatenate(x, axis=None) Concatenates a list of tensors along a given axis. abstract expand_dims(x, axis) Creates a new axis of dimension one. abstract copy(x) Creates a copy of the tensor in memory. abstract range(start, finish, step, dtype=None) Creates a tensor of integers from start to finish. abstract eye(dim, dtype='DTYPECPX') Creates the identity matrix as a tensor. abstract zeros(shape, dtype='DTYPECPX') Creates tensor of zeros with the given shape and dtype. abstract ones(shape, dtype='DTYPECPX') Creates tensor of ones with the given shape and dtype. abstract zeros_like(x) Creates tensor of zeros with shape and dtype of the given tensor. abstract ones_like(x) Creates tensor of ones with shape and dtype of the given tensor. abstract real(x) Real part of a given complex tensor. abstract imag(x) Imaginary part of a given complex tensor. abstract conj(x) Elementwise complex conjugate of a tensor. abstract mod(x) Elementwise mod operation. abstract right_shift(x, y) Elementwise bitwise right shift. abstract exp(x) Elementwise exponential. abstract sin(x) Elementwise sin. abstract cos(x) Elementwise cos. abstract pow(base, exponent) Elementwise power. abstract square(x) Elementwise square. abstract sqrt(x) Elementwise square root. abstract log(x) Elementwise natural logarithm. abstract abs(x) Elementwise absolute value. abstract expm(x) Matrix exponential. abstract trace(x) Matrix trace. abstract sum(x, axis=None) Sum of tensor elements. abstract dot(x, y) Dot product of two tensors. abstract matmul(x, y) Matrix multiplication of two tensors. abstract outer(x, y) Outer (Kronecker) product of two tensors. abstract kron(x, y) Outer (Kronecker) product of two tensors. abstract einsum(*args) Generic tensor operation based on Einstein’s summation convention. abstract tensordot(x, y, axes=None) Generalized tensor product of two tensors. abstract transpose(x, axes=None) Tensor transpose.
abstract inv(x) Matrix inversion. abstract eigh(x, k=6) Hermitian matrix eigenvalues and eigenvectors. Parameters • x – Tensor to calculate the eigenvectors of. • k (int) – Number of eigenvectors to calculate if a sparse matrix is given. The eigenvectors corresponding to the k algebraically smallest eigenvalues are calculated. This argument is ignored if the given tensor is not sparse and all eigenvectors are calculated. abstract eigvalsh(x, k=6) Hermitian matrix eigenvalues. Parameters • x – Tensor to calculate the eigenvalues of. • k (int) – Number of eigenvalues to calculate if a sparse matrix is given. The k algebraically smallest eigenvalues are calculated. This argument is ignored if the given tensor is not sparse and all eigenvalues are calculated. abstract unique(x, return_counts=False) Identifies unique elements in a tensor. abstract less(x, y) Compares the values of two tensors element-wise. Returns a bool tensor. abstract array_equal(x, y) Checks if two arrays are equal element-wise. Returns a single bool. Used in qibo.tensorflow.hamiltonians.TrotterHamiltonian.construct_terms(). abstract squeeze(x, axis=None) Removes axes of unit length. abstract gather(x, indices=None, condition=None, axis=0) Indexing of a one-dimensional tensor. abstract gather_nd(x, indices) Indexing of a multi-dimensional tensor. abstract initial_state(nqubits, is_matrix=False) Creates the default initial state |00...0> as a tensor. abstract random_uniform(shape, dtype='DTYPE') Samples an array of the given shape from a uniform distribution in [0, 1]. abstract sample_shots(probs, nshots) Samples measurement shots from a given probability distribution. Parameters • probs (Tensor) – Tensor with the probability distribution over the measured bitstrings. • nshots (int) – Number of measurement shots to sample. Returns Measurements in decimal as a tensor of shape (nshots,). abstract sample_frequencies(probs, nshots) Samples measurement frequencies from a given probability distribution.
Parameters • probs (Tensor) – Tensor with the probability distribution over the measured bitstrings. • nshots (int) – Number of measurement shots to sample. Returns Frequencies of measurements as a collections.Counter. abstract compile(func) Compiles the graph of a given function. Relevant for Tensorflow, not numpy. abstract device(device_name) Used to execute code on a specific device if supported by the backend. executing_eagerly() Checks if we are in eager or compiled mode. Relevant for the Tensorflow backends only. abstract set_seed(seed) Sets the seed for random number generation. Parameters seed (int) – Integer to use as seed. abstract create_gate_cache(gate) Calculates data required for applying gates to states. These can be einsum index strings or tensors of qubit ids, depending on the underlying backend. Parameters gate (qibo.abstractions.abstract_gates.BackendGate) – Gate object to calculate its cache. Returns Custom cache object that holds all the required data as attributes. abstract state_vector_matrix_call(gate, state) Applies a gate to a state vector using the gate’s unitary matrix representation. This method is useful for the custom backend, where some gates do not require the unitary matrix. Parameters • gate (qibo.abstractions.abstract_gates.BackendGate) – Gate object to apply to the state. • state (Tensor) – State vector as a Tensor supported by this backend. Returns State vector after applying the gate as a Tensor. abstract density_matrix_matrix_call(gate, state) Applies a gate to a density matrix using the gate’s unitary matrix representation. This method is useful for the custom backend, where some gates do not require the unitary matrix. Parameters • gate (qibo.abstractions.abstract_gates.BackendGate) – Gate object to apply to the state. • state (Tensor) – Density matrix as a Tensor supported by this backend. Returns Density matrix after applying the gate as a Tensor.
abstract density_matrix_half_matrix_call(gate, state) Half gate application to a density matrix using the gate’s unitary matrix representation. abstract state_vector_collapse(gate, state, result) Collapses a state vector to a given result. abstract density_matrix_collapse(gate, state, result) Collapses a density matrix to a given result. abstract on_cpu() Used as with K.on_cpu(): to perform the following operations on CPU. abstract cpu_tensor(x, dtype=None) Creates backend tensors to be cast on CPU only. Used by qibo.core.states.DistributedState to save state pieces on CPU instead of GPUs during a multi-GPU simulation. abstract cpu_cast(x, dtype='DTYPECPX') Forces tensor casting on CPU, in contrast to simply calling K.cast, which uses the current default device. abstract cpu_assign(state, i, piece) Assigns an updated piece to a state object by transferring it from GPU to CPU. abstract transpose_state(pieces, state, nqubits, order) Transposes distributed state pieces to obtain the full state vector. Used by qibo.backends.abstract.AbstractMultiGpu.calculate_tensor(). abstract assert_allclose(value, target, rtol=1e-07, atol=0.0) Checks that two arrays are equal. Useful for testing.
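As an illustration of how a concrete backend might fill in the sampling-related methods above, here is a minimal numpy sketch. The function names mirror the abstract interface (initial_state, sample_shots, sample_frequencies), but the bodies are illustrative assumptions, not Qibo's actual implementations.

```python
import collections
import numpy as np

def initial_state(nqubits):
    """Default initial state |00...0> as a dense state vector."""
    state = np.zeros(2 ** nqubits, dtype=complex)
    state[0] = 1
    return state

def sample_shots(probs, nshots, seed=0):
    """Measurement shots in decimal, sampled from a probability distribution."""
    rng = np.random.default_rng(seed)
    return rng.choice(len(probs), size=nshots, p=probs)

def sample_frequencies(probs, nshots, seed=0):
    """Measurement frequencies as a collections.Counter."""
    return collections.Counter(sample_shots(probs, nshots, seed).tolist())

# probabilities of a two-qubit Bell state: only |00> (0) and |11> (3) occur
probs = np.array([0.5, 0.0, 0.0, 0.5])
freqs = sample_frequencies(probs, nshots=100)
print(sorted(freqs))  # only outcomes 0 and 3 can appear
```

A real backend would express the same operations with its own tensor primitives (e.g. Tensorflow ops) so that they run on the active device.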
# Test for Carboxyl Group

## What is Carboxylic Acid?

Carboxylic acids are compounds containing the carboxyl functional group in their molecules. The carboxyl group is made up of carbonyl and hydroxyl groups and therefore, the name carboxyl is derived from carbo (from carbonyl) and oxyl (from hydroxyl). The carboxylic acids may be aliphatic or aromatic depending upon whether the -COOH group is attached to an aliphatic alkyl chain or an aryl group respectively.

The functional group consisting of a carbonyl group (C=O) and a hydroxyl group (O-H) attached to the same carbon atom is known as the carboxyl group. The formula of the carboxyl group is -C(=O)OH.

Acids with one carboxyl group are termed carboxylic acids and, since they are proton donors, they are also known as Bronsted-Lowry acids. Acids with two carboxyl groups are known as dicarboxylic acids and those with three carboxyl groups are known as tricarboxylic acids. Salts and esters of carboxylic acids are known as carboxylates. Though IUPAC nomenclature gives carboxylic acids the suffix ‘-oic acid’, trivial names ending in ‘-ic acid’ are used more commonly.

## Qualitative Test for Carboxylic Acid

The following tests are performed for the identification of carboxylic acid.

### 1. Action with Blue Litmus

All carboxylic acids turn blue litmus red.

Procedure-

• Place a drop of the liquid (or a crystal of the solid) on a moist blue litmus paper and observe the colour change.
• If the blue colour changes to red, it indicates the presence of a carboxylic group.

### 2. Action with Carbonates and Bicarbonates

Carboxylic acids decompose carbonates and bicarbonates evolving carbon dioxide with brisk effervescence. Carboxyl groups react with sodium hydrogen carbonate releasing carbon dioxide gas, which can be identified by the effervescence produced. This test can be used to distinguish carboxylic acids from phenols.
$RCOOH + NaHCO_3 \rightarrow RCOONa + CO_2 \uparrow + H_2O$

Procedure-

• Take one ml of the organic liquid in a test tube and add a pinch of sodium bicarbonate $(NaHCO_{3})$ to it.
• If carboxylic acid is present in the organic compound, a brisk effervescence is observed.

### 3. Carboxylic Acid NaHCO3 Mechanism

The reaction of carboxylic acids with aqueous sodium carbonate solution leads to the evolution of carbon dioxide, producing brisk effervescence. However, most phenols do not produce effervescence with an aqueous solution of sodium bicarbonate. Therefore, this reaction may be used to distinguish carboxylic acids from phenols. Note that the carbon dioxide evolved comes from the Na2CO3 or NaHCO3, not from the carboxyl group.

### 4. Formation of Ester

When carboxylic acids are heated with alcohols in the presence of concentrated sulphuric acid (or dry hydrogen chloride gas), esters are formed. The reaction is reversible and is called esterification. Carboxylic acids react with alcohols in the presence of sulphuric acid to form esters that have a fruity smell.

Procedure-

• To 0.1 g of the organic compound add 1 ml ethyl alcohol and one or two drops of concentrated sulphuric acid in a test tube. After heating the mixture in a water bath for about five minutes, pour it into a beaker containing water. If a fruity smell is observed, it indicates the presence of the carboxyl group in the organic compound.

$RCOOH + C_2H_5OH \rightleftharpoons RCOOC_2H_5 + H_2O$

### 5. Fluorescein Test

This test is used for the identification of the dicarboxylic group. When the dicarboxylic compound is heated, it produces an acid anhydride. This anhydride reacts with resorcinol in the presence of conc. H2SO4 and produces a fluorescent dye.

Procedure-

• Take a small amount of the organic compound and heat it with resorcinol and one or two drops of concentrated sulphuric acid in a clean and dry test tube.
• After a few minutes, the solution turns dark-brown and a liquid is formed.
• Add a few drops of this solution to a dilute NaOH solution.
• If the solution turns red with green fluorescence, it indicates the presence of dicarboxylic acid.

### 6. Reaction with FeCl3

Some carboxylic acids give precipitates when they react with iron trichloride. For example, acetic acid gives a buff coloured precipitate.

### Did You know?

• Methanoic acid is used in leather tanning.
• Methanoic acid is used as an antiseptic.
• Benzoic acid and some of its salts are used as urinary antiseptics.
• Carboxylic acid esters are used in perfumery.

1. How can you identify carboxylic acids?

Carboxylic acids are commonly known by their trivial names such as formic acid, acetic acid, etc., which carry the suffix ‘-ic acid’. By the guidelines of IUPAC, carboxylic acids have ‘-oic acid’ in their names. For example, butyric acid is known as butanoic acid according to the guidelines of IUPAC. The carboxylate anions are usually named with the suffix ‘-ate’: names of conjugate acids end in ‘-ic acid’ and names of conjugate bases end in ‘-ate’. For example, the conjugate base of acetic acid is acetate.

2. What are the properties of carboxylic acids?

The properties of carboxylic acids are-

• Carboxylic acids are polar and act as both hydrogen-bond donors and hydrogen-bond acceptors, so they participate in hydrogen bonding.
• The boiling points of carboxylic acids are higher than that of water owing to their strong intermolecular hydrogen bonding.
• Carboxylic acids are proton (H+) donors. They are also known as Bronsted-Lowry acids.
• Carboxylic acids are considered to be weak acids.
• The esters of carboxylic acids have very pleasant odours and are commonly used in perfumes.

3. Mention some of the applications of carboxylic acids.
Carboxylic acids have a wide and significant role in society as-

• Carboxylic acids and their derivatives are often used in the production of biopolymers, polymers, adhesives, coatings, pharmaceutical drugs, etc.
• Carboxylic acids are also used as food additives, food solvents, flavourings, antimicrobials, etc.
• The esters of carboxylic acids can be used in the production of perfumes since their smell is very pleasant.
• Carboxylic acids play an important role in the field of medicine as well.

4. How are carboxylic acids used in the pharmaceutical industry?

Carboxylic acids have a very important role in the pharmaceutical industry because-

• They act as solubilizers, modulating the solubility and cell permeation of drugs.
• As bioprecursors and prodrugs, compounds that are not biologically active can be converted into active ones under specific conditions (examples are drugs from the antithrombotic, antiviral and anti-hypertensive classes).
• As pharmacophores, they provide specific interactions with enzymes, triggering or blocking a biological response.

Carboxylic acids have a wide variety of applications in the cosmetic industry as well.

5. Mention some carboxylic acids that one encounters in daily life.

Some of the carboxylic acids that we encounter in our daily lives are-

• Salicylic Acid- Salicylic acid is most commonly used in skin care products to exfoliate dead skin cells.
• Lactic Acid- Lactic acid accumulates in the muscles of the body during anaerobic exercise.
• Citric Acid- Citric acid is highly acidic and is therefore used in industrial applications.
• Acetylsalicylic Acid- Acetylsalicylic acid (aspirin) is derived from salicylic acid, which was originally extracted from willow bark.

6. Write the Tests that Can Show the Difference Between Alcohol and Carboxylic Acid.

1. Acetic acid gives effervescence with NaHCO3 due to the liberation of carbon dioxide.
2. Ethanol does not give effervescence with NaHCO3.
3.
Ethanol gives a yellow precipitate (iodoform) with an alkaline solution of iodine, while acetic acid does not respond to this test.

7. What is a carboxylic acid?

Carboxylic acids are compounds containing the carboxyl functional group in their molecules. Carboxylic acids may be aliphatic or aromatic depending upon whether the -COOH group is attached to an aliphatic alkyl chain or an aryl group, respectively.
# Applications of sheaf theory to the computation of invariants of LS-category type

Asked by Mark Grant on MathOverflow, 2010-10-03.

I would like to know if sheaf theory can be applied to a particular class of questions in topology.

The *Schwarz genus* (also known as sectional category) of a continuous map $p\colon E\to B$ is the smallest integer $k$ such that $B$ can be covered by open subsets $U_1,\ldots ,U_k$ over each of which $p$ admits a *local section*, i.e. a continuous map $s_i\colon U_i\to E$ such that $p\circ s_i$ equals the inclusion $U_i\hookrightarrow B$. If no such cover exists (e.g. if $p$ is not surjective) we set the genus to be $\infty$.

Several important numerical invariants in topology are special cases of this genus. For instance, the Lusternik-Schnirelmann category $\mathrm{cat}(X)$ of a space $X$, defined to be the smallest $k$ such that $X$ can be covered by open subsets $U_1,\ldots ,U_k$ such that each inclusion $U_i\hookrightarrow X$ is null-homotopic, is easily seen to be the genus of the Serre fibration $PX\to X$ of based paths on $X$. More recently, Farber has defined the *topological complexity* $\mathrm{TC}(X)$ of a space $X$ to be the genus of the fibration $X^I\to X\times X$ which takes a free path in $X$ to its pair of initial and final points; this is relevant to the motion planning problem in robotics.

On the other hand, one of the most natural examples of a sheaf (at least for a topologist) is the sheaf of sections of a continuous map $p\colon E\to B$. This is the sheaf $\Gamma(p)$ on $B$ whose sections over an open set $U\subseteq B$ are the set $\Gamma(p)(U)$ of local sections $s\colon U\to E$ of $p$, as defined above.

This leads me to the following (perhaps naive) question.

> **Definition.** Let $\mathcal{F}$ be a sheaf of sets over $X$. Define the *Schwarz genus* of $\mathcal{F}$ to be the least $k$ such that $X$ has a cover by open subsets $U_1,\ldots, U_k$ such that each $\mathcal{F}(U_i)\neq\emptyset$.
>
> Do there exist techniques in sheaf theory to approximate (bound from above or below) the Schwarz genus of $\mathcal{F}$? Has this invariant been considered before?

I suspect the answer is no, but I would like to hear this from a sheafy person (of which there seem to be plenty on MO). Also I would be interested to hear of any extra conditions you would impose to make the question more interesting or tractable.

**Edit:** As Ben noted below, we can apply the free abelian group functor to $\mathcal{F}$ to get a sheaf $\mathcal{G}$ of abelian groups over $X$. Define the Schwarz genus of $\mathcal{G}$ to be the least $k$ such that $X$ has a cover by open subsets $U_1,\ldots, U_k$ such that each $\mathcal{G}(U_i)\neq 0$. Can we obtain bounds on this genus using the Čech spectral sequence, or something similar?
@miloyip has published a post recently which mentioned the Alias Method to generate a discrete random variable in O(1). After some research, I found that it is a neat and clever algorithm. Following are some notes from my study of it.

## What is the Alias Method

The alias method is an efficient algorithm to generate a discrete random variable with a specified probability mass function using a uniformly distributed random variable.

Let $Z$ be the discrete random variable which has n possible outcomes $z_0,z_1,\ldots,z_{n-1}$. To make the discussion below simple, we study another variable $Y$, where $P\{Y=i\}=P\{Z=z_i\}$. And when $Y$ takes on value $i$, let $Z$ be $z_i$. So $Z$ can be generated from $Y$.

Random variable $X$ is uniformly distributed in $(0, n)$, whose probability density function is

$f(x) = \left\{ \begin{array}{rl} 1/n & \text{if } 0 < x < n\\ 0 & \text{otherwise}\\ \end{array} \right.$

Now generate a variable $Y'$ such that

$Y' = \left\{ \begin{array}{rl} \lfloor x \rfloor & \text{if } (x - \lfloor x \rfloor) < F(\lfloor x \rfloor)\\ A(\lfloor x \rfloor) & \text{otherwise}\\ \end{array} \right.$

$A(i)$ is the alias function. When $x$ falls in the range $[i, i + 1)$ ($i$ is an integer), $y$ has probability $F(i)$ of being $i$, and probability $1 - F(i)$ of being $A(i)$. Because $x$ is uniformly distributed,

\begin{aligned} P\{x \in [i, i + F(i))\} &= \displaystyle\int_i^{i+F(i)}\frac{1}{n}dx\\ &= (i + F(i) - i) \times 1/n\\ &= F(i)/n,\\ \\ P\{x \in [i + F(i), i + 1)\} &= \displaystyle\int_{i+F(i)}^{i+1}\frac{1}{n}dx\\ &= (i + 1 - (i + F(i))) \times 1/n\\ &= (1-F(i))/n \end{aligned}

Let's denote the set of values $j$ that satisfy $A(j) = i$ as $A^{-1}(i)$. The generated variable $Y'$ has the following probability mass function:

$P\{Y' = i\} = F(i)/n + \sum_{j \in A^{-1}(i)}\frac{1-F(j)}{n}$

The alias method is the algorithm to construct $A$ and $F$ so that $P\{Y' = i\}$ equals $P\{Y = i\}$ for all $i$.
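Once $A$ and $F$ are known, drawing $Y'$ takes constant time. Here is a minimal C++ sketch; the function name and the flat-array representation are my own choices for illustration, not the post's `AliasItem` layout:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Generate Y' from a single uniform draw x in [0, n):
// return floor(x) if the fractional part of x is below F(floor(x)),
// otherwise return the alias A(floor(x)).
std::size_t SampleAlias(const std::vector<std::size_t>& A,
                        const std::vector<double>& F, double x) {
    std::size_t i = static_cast<std::size_t>(x);
    return (x - static_cast<double>(i)) < F[i] ? i : A[i];
}
```

In practice `x` comes from a uniform random number generator scaled to `[0, n)`; the sample itself is one comparison and at most one array lookup.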
Because the domain of both $A$ and $F$ is the integers $0,1,\ldots,n-1$, they can be stored in arrays and their values looked up in O(1), with space efficiency in O(n). In miloyip's implementation, $A$ and $F$ are stored in std::vector<AliasItem> mAliasTable, where $A$'s values are stored in AliasItem::index and $F$'s values in AliasItem::prob.

## Algorithm

### Construction Steps

Initialize the set $S$ to be $\{0,1,\ldots,n-1\}$ and n variables $p_i$ with values:

$p_i = P\{Y=i\}, i \in S$

Denote the number of elements in $S$ as $\|S\|$. We have an important invariant:

$\sum_{i \in S}{p_i} = \|S\| / n$

At the beginning of the algorithm, the invariant holds because the sum of all probabilities must equal 1. The algorithm is performed using the following steps.

1. If there is an element $i$ in set $S$ such that $p_i < 1/n$, there must be a $j$ in set $S$ such that $p_j > 1/n$.[1] Let $A(i) = j$ and $F(i) = p_i / (1/n) = p_i \times n$. Remove $i$ from $S$ and subtract $1/n - p_i$ from $p_j$. It is easy to verify that the invariant still holds after these changes.[2]
2. Repeat step 1 until $S$ is empty or there are no more elements $i$ in $S$ with $p_i < 1/n$. If $S$ is empty, the algorithm finishes. Otherwise, for all remaining $i$ in $S$, we must have $p_i = 1/n$.[3] Let $A(i)=i$ and $F(i)=p_i\times n=1$ for all remaining $i$, and remove them from the set $S$.

The algorithm finishes when $S$ becomes empty, and an element is removed only when its corresponding $A$ and $F$ values have been determined, so all values of $A$ and $F$ have been generated.

In miloyip's implementation, $p_i$ is stored in AliasItem::prob before $i$ is removed from the set. When $i$ is removed from the set, AliasItem::prob is set to $F(i)$.

### Correctness

The invariant holds at the beginning and at the end of each step, which guarantees that the algorithm can finish. It is easy to prove this using mathematical induction.
So we only need to prove $P\{Y'=i\}=P\{Y=i\}$ for any $i$, i.e.,

$P\{Y = i\} = F(i)/n + \sum_{j \in A^{-1}(i)}\frac{1-F(j)}{n}$

Denote by $p'_i$ the value of $p_i$ at the moment $i$ is removed from set $S$. Checking the construction steps again, we get the following properties:

1. No $p_i$ can increase. Thus $p_i \leq P\{Y=i\}$ in all steps, and $p'_i \leq P\{Y=i\}$.
2. $p_i$ decreases only when its initial value $P\{Y=i\} > 1/n$. So if $P\{Y=i\} \leq 1/n$, then $p_i = P\{Y=i\}$ throughout the algorithm and $p'_i = P\{Y=i\}$.
3. $F(i) = p'_i \times n$.
4. $i$ is removed only when $p_i \leq 1/n$, i.e., $p'_i \leq 1/n$, thus $F(i)=p'_i \times n \leq 1$.
5. $A(j)$ is set to a value $i \neq j$ only if $p_i > 1/n$ (see step 1), i.e., $P\{Y=i\}>1/n$.

Now consider the value $i$ in the cases $P\{Y=i\}<1/n$, $P\{Y=i\}=1/n$ and $P\{Y=i\}>1/n$.

#### P{Y=i} < 1/n

If $P\{Y=i\} < 1/n$, from properties 2 and 3, $F(i) = p'_i \times n = P\{Y=i\} \times n$. Also $A^{-1}(i) = \emptyset$, because $A(j)$ is only ever set to a value $i$ with $p_i > 1/n$ in step 1, or with $p_i = 1/n$ in step 2. Thus

\begin{aligned} &F(i)/n + \sum_{j \in A^{-1}(i)}\frac{1-F(j)}{n}\\ =&F(i)/n\\ =&P\{Y=i\} \times n / n\\ =&P\{Y=i\} \end{aligned}

which completes the proof.

#### P{Y=i} = 1/n

If $P\{Y=i\} = 1/n$, clearly $A(i) = i$. If another value $j\neq i$ also satisfied $A(j) = i$, then by property 5 we would have $P\{Y=i\} > 1/n$, contradicting the assumption. So

$A^{-1}(i) = \{i\}$

Thus

\begin{aligned} &F(i)/n + \sum_{j \in A^{-1}(i)}\frac{1-F(j)}{n}\\ =&F(i)/n + (1-F(i))/n\\ =&1/n \end{aligned}

which completes the proof.

#### P{Y=i} > 1/n

When $P\{Y=i\} > 1/n$, clearly $i$ is not in $A^{-1}(i)$. Consider each value $j$ in set $A^{-1}(i)$. Once $j$ is removed from $S$, $A(j)$ is set to $i$ and $1/n - p'_j$ is subtracted from $p_i$.
Thus

$p'_i = P\{Y=i\} - \sum_{j \in A^{-1}(i)}(1/n - p'_j)$

Then

\begin{aligned} &F(i)/n + \sum_{j \in A^{-1}(i)}\frac{1-F(j)}{n}\\ =&p'_i \times n / n + \sum_{j \in A^{-1}(i)}\frac{1-(p'_j \times n)}{n}\\ =&P\{Y=i\} - \sum_{j \in A^{-1}(i)}(1/n - p'_j) + \sum_{j \in A^{-1}(i)}(1/n - p'_j)\\ =&P\{Y=i\} \end{aligned}

For all $i$, $P\{Y'=i\} = P\{Y=i\}$, and the proof is complete.

## Intuitive Presentation

The algorithm can be presented intuitively. The range $(0, n]$ is split into n consecutive sub-ranges $(i, i + 1]$ for $i = 0, 1, \ldots, n - 1$. The probability that $X$ falls into any one of these ranges is $(i + 1 - i) \times 1/n = 1/n$.

If $P\{Y=i\} = 1/n$, we can allocate the whole slot $i$ to it: let $Y=i$ when $x$ falls in $(i, i + 1]$, which has probability $1/n$.

If $P\{Y=i\} < 1/n$, we can allocate the starting part $(i, i+n\times P\{Y=i\}]$ of $(i, i+1]$: let $Y = i$ when $x$ falls in $(i, i + n\times P\{Y=i\}]$, whose probability is $n\times P\{Y=i\}\times(1/n)=P\{Y=i\}$.

If $P\{Y=i\} > 1/n$, we can also allocate the unused ranges $(j + n\times P\{Y=j\}, j + 1]$ for any $j$ with $P\{Y=j\} < 1/n$. However, an unused range is not allowed to be split again.

See the figure below, which demonstrates how to generate $Y$ ($n = 5$) with the probability mass function

- $P\{Y=0\} = 0.16$
- $P\{Y=1\} = 0.1$
- $P\{Y=2\} = 0.32$
- $P\{Y=3\} = 0.22$
- $P\{Y=4\} = 0.2$

$P\{Y=4\}=1/n$, so let $Y = 4$ only when $x$ falls in $(4, 5]$, whose probability is $(5-4)\times 0.2 = 0.2$.

$P\{Y=0\}=0.16<0.2$, so let $Y = 0$ only when $x$ falls in $(0,0.16\times 5]$, i.e., $(0,0.8]$, whose probability is $(0.8-0)\times 0.2=0.16$. $(0.8,1]$ is unused.

$P\{Y=1\}$ is handled the same way: $(1,1.5]$ is allocated and $(1.5,2]$ is unused.

$P\{Y=2\} = 0.32 > 0.2$, so it needs ranges with total length $0.32\times 5=1.6$. We allocate the unused ranges $(0.8, 1]$ and $(1.5, 2]$. The remaining length is $1.6-0.2-0.5=0.9<1$, so we can allocate a part of its own slot.
Finally, three ranges have been allocated for $Y=2$: $(0.8,1]$, $(1.5,2]$ and $(2,2.9]$. $(2.9,3]$ is unused.

Follow the same steps to handle $Y=3$. The final allocation is depicted in panel $D$ of the figure; the allocation is not unique, and panel $F$ depicts another solution.

## References

1. If $p_j \leq 1/n$ for all $j$ other than $i$, summing these inequalities together with $p_i < 1/n$ gives $\sum_{i \in S}{p_i} < \|S\| / n$, which contradicts the invariant. ↩︎
2. The right side has decreased by $1/n$ because $\|S\|$ has decreased by 1. The left side has decreased by $p_i + (1/n - p_i) = 1/n$, because $i$ is removed from the set and $(1/n - p_i)$ is subtracted from $p_j$. Both sides decrease by the same amount, so the equality still holds. ↩︎
3. Because no $p_i < 1/n$ remains, $p_i \geq 1/n$ for all $i$ in $S$. To satisfy the invariant, no $p_i$ can be larger than $1/n$. Thus for all $i$ in $S$, $p_i = 1/n$. ↩︎
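As a rough, self-contained C++ sketch of the construction steps described above (the struct and function names here are mine, not from miloyip's actual code), together with a check that the resulting table reproduces the n = 5 example distribution:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// alias[i] stores A(i); prob[i] stores F(i).
struct AliasTable {
    std::vector<std::size_t> alias;
    std::vector<double> prob;
};

// Build A and F from a probability mass function p (entries sum to 1).
AliasTable BuildAliasTable(const std::vector<double>& p) {
    const std::size_t n = p.size();
    AliasTable t;
    t.alias.resize(n);
    t.prob.assign(n, 1.0);                  // entries with p_i = 1/n keep F = 1
    std::vector<double> scaled(n);          // p_i * n, i.e. p_i relative to 1/n
    std::vector<std::size_t> small, large;  // indices below / above 1/n
    for (std::size_t i = 0; i < n; ++i) {
        t.alias[i] = i;
        scaled[i] = p[i] * static_cast<double>(n);
        if (scaled[i] < 1.0) small.push_back(i);
        else if (scaled[i] > 1.0) large.push_back(i);
    }
    while (!small.empty() && !large.empty()) {
        std::size_t i = small.back(); small.pop_back();
        std::size_t j = large.back(); large.pop_back();
        t.alias[i] = j;                     // A(i) = j
        t.prob[i] = scaled[i];              // F(i) = p_i * n
        scaled[j] -= 1.0 - scaled[i];       // subtract (1/n - p_i), in scaled units
        if (scaled[j] < 1.0) small.push_back(j);
        else if (scaled[j] > 1.0) large.push_back(j);
    }
    // Leftovers have scaled ~= 1 (up to rounding): A(i) = i and F(i) = 1 stand.
    return t;
}

// P{Y' = i} implied by a table: F(i)/n + sum over j in A^-1(i), j != i, of (1 - F(j))/n.
double ImpliedProb(const AliasTable& t, std::size_t i) {
    const double n = static_cast<double>(t.prob.size());
    double q = t.prob[i] / n;
    for (std::size_t j = 0; j < t.alias.size(); ++j)
        if (j != i && t.alias[j] == i) q += (1.0 - t.prob[j]) / n;
    return q;
}
```

Sampling from the finished table then costs one uniform draw plus one comparison and at most one lookup, exactly as in the definition of $Y'$ above.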
# Earth's magnetic field is about 5 × 10⁻⁵ T and points 60° below the horizon (towards the north). If a …

##### Find the z-score that has 12.3% of the distribution's area to its left. Find the z-score that has 2.5% of the distribution's area to its right. Find the z-scores for which 86% of the distribution's area lies between them. The annual per capita consumption of fresh bananas (in pounds) …

##### Using Charles Darwin's four premises of the hypothesis of natural selection, how does an antibiotic-resistant strain of bacteria develop when a naturally occurring population of bacteria is exposed to an antibiotic?

##### The trial balance of Suzhou Tech Ltd at 31 December 20X9 is given below (£000s): Purchases 258,000 (Dr); Sales 424,600 (Cr); Inventory at 1 January 20X9 64,500; Warehouse wages 12,850; Salespersons' salaries and commission 25,850; Adm…; remaining balances 43,870; 8,580; 6,890; 12,470; 4,100; 440.

##### Find the monthly house payments necessary to amortize an 8% loan of $182,000 over 30 years. The payment size is $____. (Round to the nearest cent.)

##### The following information pertains to Ming Corp. at January 1, 2018: Common stock, $11 par, 44,000 shares authorized, 3,100 shares issued and outstanding, $34,100; Paid-in capital in excess of par, common stock, $76,400; Retained earnings, $76,400. Ming Corp. completed the following transactions during 201…

##### If a solution containing 36.51 g of lead(II) chlorate is allowed to react completely with a solution containing 5.102 g of sodium sulfide, how many grams of solid precipitate will be formed? How many grams of the reactant in excess will remain after the reaction?

##### For f(x) = ½(x + 5)² − 8, find f⁻¹(x). Answer choices include f⁻¹(x) = √(2x + 8) − 5, f⁻¹(x) = √(2(x + 8)) − 5, and f⁻¹(x) = 2x − 5 + 8.

##### Find the density of U = Y₁ + Y₂, where Y₁ and Y₂ are independent random variables with densities f_{Y₁}(y₁) = (1/6)(y₁ + 2) for 0 < y₁ < 2, and f_{Y₂}(y₂) for 2 < y₂ < …, each zero otherwise.

##### Question 13 (2 points): The doctor orders Ceclor 100 mg p.o. t.i.d. The child weighs 33 lb. The recommended dosage on the drug label reads "Usual dose: children, 20 mg per kg per day in three divided doses." Is this dosage safe? Question 14 (2 points): A child weighing 30 kg has been prescribed benazepril PO once daily; the drug label recommends … per day. What is the safe weight-based recommended daily dose? Question 15 (2 points): Acetaminophen pres…

##### This is in C++. (10 points) Write a de-duplication function that iteratively sanitizes (removes) all consecutive duplicates in a C++ string. Consecutive duplicates are a pair of duplicate English alphabets sitting next to each other in the input string. Example: "AA", "KK", etc., ar…

##### Consider the differential equation 4y″ − 4y′ + y = 0. Verify that the functions e^{x/2} and xe^{x/2} form a fundamental set of solutions of the differential equation on the interval (−∞, ∞). The functions satisfy the differential equation and are linearly independent since W(e^{x/2}, xe^{x/2}) = ____ ≠ 0 for ____. Form the general solution.

##### Today, interest rates on 1-year T-bonds yield 1.7%, interest rates on 2-year T-bonds yield 2.5%, and interest rates on 3-year T-bonds yield 3.4%. a. If the pure expectations theory is correct, what is the yield on 1-year T-bonds one year from now? Be sure to use a geometric average in your calculat…

##### Choose the assumption of the Hardy-Weinberg equilibrium that best applies to the following statement: "The founder effect occurs when a new area is colonized by a small population that has a different gene frequency than the parent population." Options: Large Population Size; No Gene Flow; Random Mating; No Natural Selection; No Mutation.
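For the C++ de-duplication exercise above: assuming "sanitize" means both members of each adjacent duplicate pair are removed, and that removal repeats until no pair remains (so "abba" collapses entirely), a single stack-style pass gives the same result as iterative re-scanning:

```cpp
#include <cassert>
#include <string>

// Remove adjacent duplicate pairs, repeating until none remain.
// Scanning left to right with the output acting as a stack is equivalent:
// cancelling a pair may expose a new pair ending at the stack top.
std::string RemoveConsecutiveDuplicates(const std::string& s) {
    std::string out;
    for (char c : s) {
        if (!out.empty() && out.back() == c)
            out.pop_back();   // cancel the pair "cc"
        else
            out.push_back(c);
    }
    return out;
}
```

If the intended semantics were instead to keep one character of each run (so "AA" becomes "A"), the condition would simply skip pushing instead of popping.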
# Question Regarding Different Editions of Spacetime Physics by Wheeler • Relativity

MidgetDwarf: Greetings! I have misplaced my copy of Spacetime Physics by Wheeler (1966 ed/red cover) and wanted to read it again. Was wondering if there is a major difference between the earlier blue hardback?

Staff Emeritus: It's Taylor & Wheeler (and was more Taylor than Wheeler). The 2nd edition is usually blue. (And many feel the 1st, red, edition is better.) I have never seen a hardback first edition. I know they are out there, but they are rare. Keith_McClary

Staff Emeritus, Gold Member: Eleven-and-a-half years ago, I wrote ... Taylor and Wheeler, but I like the (red) paperback version of the first edition. I forget why I prefer the first edition over later edition(s) (I have compared editions). I prefer the paperback version over the hardcover version of the first edition because the paperback edition has solutions (not just answers) to the problems. My battered and beaten copy (I got it while in high school) ... My memory about the differences between versions is even more fuzzy now than it was then.

Staff Emeritus: Now free!: 2nd edition.

Daverz: The 2nd edition dropped the use of rapidity. I suppose they felt it didn't work pedagogically for their students, but it seemed like a shortsighted mistake. The fire-engine red paperback edition had the solutions in the back. The blue hardback did not have solutions.

A recent book that emphasizes rapidity (and hyperbolic geometry) is Tevian Dray, The Geometry of Special Relativity, 2nd Edition https://www.amazon.com/dp/1138063924/?tag=pfamazon01-20 PhDeezNutz

Homework Helper: my blue hardback is apparently the first edition, copyright 1963 and 1966, and is only about 208 pages, lacking the 61 pages of solutions at the end. There is no ISBN but there is a Library of Congress catalog card # 65-13566.

Homework Helper, Gold Member: At an AAPT conference, I asked Edwin Taylor about rapidity being dropped from the 2nd edition.
He told me that he dropped rapidity because its users (teachers) reported to him that they didn't use it. A few of us (including Tevian) politely protested and suggested that he put it back in a future edition. In my opinion, the maroon edition that has worked solutions (and rapidity) is the best version. (The second edition has some nice touches, but lacks the worked solutions and the use of rapidity.)

(By the way, special relativity uses hyperbolic trigonometry in a flat spacetime, not curved hyperbolic geometry [unless you are studying the mass-shell or the space of velocities].)

> A recent book that emphasizes rapidity (and hyperbolic geometry) is Tevian Dray, The Geometry of Special Relativity, 2nd Edition https://www.amazon.com/dp/1138063924/?tag=pfamazon01-20

I'd like to see what Tevian added in the second edition. He asked to include my "clock diamonds" area-approach on rotated graph paper. (It looks like it's in Ch 15... from the contents, preface, and overview. See p. 143, 147 in the preview/sample.) https://www.routledge.com/The-Geometry-of-Special-Relativity/Dray/p/book/9781138063921

Last edited: dextercioby and vanhees71

MidgetDwarf: Thank you all those who replied. I settled on the blue hardback, since I was able to get it for $15 at a local book sale. The content is similar to the red (maroon) edition minus the solutions. Mondayman

Mondayman:

> Thank you all those who replied. I settled on the blue hardback, since I was able to get it for $15 at a local book sale. The content is similar to the red (maroon) edition minus the solutions.

I cannot find any edition for cheaper than a few hundred bucks. What a steal.

> my blue hardback is apparently the first edition, copyright 1963 and 1966, and is only about 208 pages, lacking the 61 pages of solutions at the end. There is no ISBN but there is a Library of Congress catalog card # 65-13566.

I too have a blue hardback, 208 page version.
But it also shows ISBN 0-7167-0314-9 the printer's key indicates mine is from the 3rd printing ("987654"). I think "Copyright 1963, 1966" means this is the second edition (with the first ed being 1963), but I'm not sure about that Staff Emeritus Gold Member I think "Copyright 1963, 1966" means this is the second edition (with the first ed being 1963), but I'm not sure about that I think that 1966 is when the red paperback version (including solutions) of the first edition was released, and that this date was included in later printings of the hardcover first edition. I think that the second edition was published much later. gmax137 Homework Helper Gold Member The hardback may be more expensive. 0-7167-0336-X 269 or: 071670336X 269 might help located an edition with the solutions, where 269 is the number of pages. My maroon softcover has 208pg+61pg (text+solutions) with isbn 0-7167-0336-X and LOC 65-13566. Here's my preface page (which is a little different from the one from @gmax137 ). By the way, the first 26 pages of the solutions are available at Ed’s site https://www.eftaylor.com/pub/stp/STP1stEdExercSolns.pdf gmax137 Gold Member Right now the independent 61 page solution manual (1966) is for sale on abebooks. Of course it is $181.26 … I guess this is what was added to red paperback. Last edited: Science Advisor Homework Helper speaking of startling prices, a hard copy like mine is offered on abebooks for over$1400, but 1971 (apparently red) paperbacks are available for about $30. Science Advisor offered on abebooks for over$1400 Used book selling is a mystery to me. OTOH, I have a friend who says "there's an ass for every seat..." Oh and I like it when they want $1000+ and then want another$4 for shipping. Really? mathwonk Gold Member a hard copy like mine is offered on abebooks for over $1400 Who needs a retirement plan when you own textbooks Science Advisor Homework Helper unfortunately they don't buy them for that, they only sell them. 
i logged on long ago to a bookstore in portland that offered many of my books at well over$100 each, only to find they were offering me less than \$5 or more often nothing, even for "like new" copies of them. Frabjous
# Manuali I Cmimeve Te Ndertimitl

Jun 10, 2018

The instructions are in English and German, but that's not a big deal. Anyway, you can download the manual in Russian.

I have a problem. I need to calculate the line equation of a triangle and that of a rectangle. The equation will be in the form (y − a)b + c = 0, and the width of the triangle is given. I have tried the formula (y + a)b + c = 0, but there are some questions in my mind: 1) Is it possible to calculate the line equation of a triangle using the equation I have? 2) Or should we use the concept of area? Thanks in advance. The line equation of a triangle should be y = (x + a)(x + b) − ab, and the line equation of a rectangle is y = x + a. Could anyone help me find the line equation of a rectangle? Thank you very much.

A: I think you're looking for the width-height ratio. An easier way to find the ratio would be by solving the equation $$\frac{y-a}{b-a} = \frac{x}{c-x}$$ for $x$. If you can't figure out what the unknown is, it is equivalent to finding the length of the intersection of a perpendicular bisector and a base (or a side in the case of a triangle).

People who were stranded on the roof of a parking garage in Prince George's County, Md., for three hours have been rescued, officials said. Several helicopters and a police helicopter were dispatched to the parking garage in the 13000 block of Tuckerman Lane at around 7:30 p.m. Thursday, Sgt. Michael Hennessy with Prince George's County Fire/EMS said. The garage is located on top of a parking structure. NBC10 learned there were about 120 people on the roof. A police helicopter with rescue crews on board arrived on scene. "They were all able to get down safely in one piece. No one was hurt," Hennessy said. The cause of the fire is still under investigation. No one from the garage was injured, Hennessy said. Crews will go back on scene this morning to look at the fire's cause.
2022-12-04 08:18:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2428160160779953, "perplexity": 866.7835340678356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710968.29/warc/CC-MAIN-20221204072040-20221204102040-00387.warc.gz"}
http://salesianipinerolo.it/lzua/linear-programming-excel-solver-template.html
The 3 major blocks in any optimization are shown here in an Excel sheet. Okay, let's go straight to the steps. Under Add-ins, select Solver Add-in and click on the Go button. Using Solver, I also need to perform sensitivity analysis to determine how much the manager should be willing to pay for 1. Decision Variables B. Kolmogorov and gained its popularity ever since the development of the Simplex method by George B. Fortunately, Microsoft offers Solver, a numerical optimization add-in to help with this task. It can handle nonlinear problems as well. In Excel, using the Solver, it's easy: I can choose a cell with a formula, then pick the cells I need to change, add constraints, and then minimize. SOLVING A LINEAR PROGRAM USING EXCEL'S "SOLVER" TOOL 1. (Solution): Linear programming in Excel Solver. The Quantity & Gross Margin chart is a simple representation of the results of running the linear programming equations. Excel has an add-in called the Solver which can be used to solve systems of equations or inequalities. Consider this problem: Click on Add-Ins, and then in the Manage box, select Excel Add-ins. Schedule your workforce to meet labor demands (example): the following example demonstrates how you can use Solver to calculate staffing requirements. The resource requirements for each product and the total resources available are as follows: Resource Requirements. To set up the model, you need to track the number of employees working each day. Check the box for Solver Add-in, then click OK; click the Data tab and verify that Solver shows in the Analysis section. Set up a table in an Excel workbook with the following rows (there will be one column in your table for each variable, one column for your right-hand-side coefficients, and one column for equations): Topic: Solve the following linear programming problems.
If it's not selected, click on it. A First Linear Programming Example: The Farmer Problem with Excel Solver (REVISITED) - Duration: 17:44. Question: In Using Excel To Solve Linear Programming Problems, The Target Cell Represents The: A. Keywords: game theory, linear programming, zero-sum games, primal and dual solutions, linear program formulations, Excel, Solver, Excel Nash equilibrium. LP_Solve solves linear programming (LP), mixed-integer programming (MIP), and semi-continuous and special ordered sets (SOS) problems. Solver is not a Microsoft product. We have done it by applying a formula on profit with a changing variable in Excel Solver, using the GRG Nonlinear algorithm. Launch Microsoft Excel. Tool for Solving a Linear Program: Excel has the capability to solve linear (and often nonlinear) programming problems. The first step is to make sure you have Solver installed in your Excel file. Excel demonstration of the effect of random experimental variations - see video Replicate Measurements C Analysis 2: Experimental uncertainty (error) in a simple linear data plot. A typical set of linear data can be described by the change of the pressure, p, (in pascals) of an ideal gas as a function of the temperature, T, in degrees kelvin. Linear Programming is one of the important concepts in statistics. Excel opens the Solver Parameters dialog box. To track the number of employees working each day, enter a 1 or a 0 in each cell in the range C5:I11. To be called a "solver" doesn't do it justice, though, because it is really a powerful optimization algorithm. We can use algorithms for linear programming to solve the max-flow problem, solve the min-cost max-flow problem, find minimax-optimal.
The mathematical technique of linear programming is instrumental in solving a wide range of operations management problems. With an optimization-modeling problem, you want to optimize an objective function but at the same time recognize that there are constraints, or limits. Set up a table in an Excel workbook with the following rows (there will be one column in your table for each variable, one column for your right-hand-side coefficients, and one column for equations):. This function and its arguments correspond to the options in the Solver Options dialog box. Let's take our linear program from above and remove the constraint $$y\leq 4$$ to obtain a nonnegative linear program. However, with any modern spread-sheet, linear optimization problems of any size can be solved very quickly. Enter the X values in column A. Once this is complete go back to the developer tab and stop recording. Then just go to Tools, Solver to open up the add in. A construction schedule is a timeline that is expected to be followed by a construction team to be able to provide the needed project result of the client. xls file (18 KB) (This file contains the example described below. There are no built-in limits for model size. -Buy Decision Problem Here we present a simple hypothetical example to demonstrate basic Linear Programming optimization concepts. ‘Solver’ should then appear as an item on the Tools menu. MS-Excel and a Solver Macro template designed as a technology assistant. Allows you to specify advanced options for your Solver model. A First Linear Programming Example: The Farmer Problem with Excel Solver (REVISITED) - Duration: 17:44. The DP Models add-in uses the DP Solver add-in to find solutions. Check Solver Add-in and click OK. 
• Excel has the capability to solve linear (and often nonlinear) programming problems with the SOLVER tool, which: - May be used to solve linear and nonlinear optimization problems - Allows integer or binary restrictions to be placed on decision variables - Can be used to solve problems with up to 200 decision variables • SOLVER is an. 5 A Linear Programming Problem with Unbounded Feasible Region: Note that we can continue to make level. Linear programming, as demonstrated by applying Excel's Solver feature, is a viable and cost-effective tool for analysing multi-variable financial and operational problems. Linear and Integer Programming: With Excel Examples. Budget, Speedometers, etc. The main difficulty when using the solver is at the level of information layout in the worksheet. GIPALS32 is a linear programming library that incorporates the power of linear programming solver and simplicity of integration to any software tools like Ms Visual C++, Ms Visual C#. Excel - Linear Programming (Minimize Cost) - Duration: 14:19. Linear Programming - single objective to either max. Linear Programming. To solve a standard form linear program use Microsoft Excel and the Excel Solver add-in. Using Solver for LP Problems. How to specify an IF-THEN constraint with an Integer Linear Programming (ILP) solver How to specify an IF-THEN constraint with an Integer Linear Programming (ILP) solver How to specify an IF-THEN constraint with an Integer Linear Programming (ILP) solver. In the Microsoft Office button, go to excel options to click Add-ins 2. Model A requires 50 pounds of special alloy steel per unit, 130 minutes of machining time per unit, and 60 minutes of assembly time per unit. Microsoft Excel 14. Let us look at the steps of defining a Linear Programming problem generically:. 
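As a language-agnostic sketch of what Solver does with a small linear model (this is not Excel's implementation, and the profit and resource numbers below are hypothetical, not the Model A/B/C data from the text), a tiny two-variable LP can be solved by checking every corner point of the feasible region, since an LP optimum always lies at a vertex:

```python
from itertools import combinations

def solve_lp_2d(c, constraints):
    """Maximize c[0]*x + c[1]*y subject to a*x + b*y <= rhs for each
    (a, b, rhs) in constraints, plus x >= 0 and y >= 0.

    Enumerates candidate vertices (intersections of constraint boundaries);
    fine for tiny textbook LPs, not a general-purpose solver.
    """
    # Treat the non-negativity conditions as two extra boundary lines.
    lines = list(constraints) + [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
    best = None
    for (a1, b1, r1), (a2, b2, r2) in combinations(lines, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:              # parallel boundaries: no vertex
            continue
        x = (r1 * b2 - r2 * b1) / det
        y = (a1 * r2 - a2 * r1) / det
        # Keep the point only if it satisfies every constraint.
        if x < -1e-9 or y < -1e-9:
            continue
        if any(a * x + b * y > r + 1e-9 for a, b, r in constraints):
            continue
        value = c[0] * x + c[1] * y
        if best is None or value > best[0]:
            best = (value, x, y)
    return best

# Hypothetical model: maximize 3x + 5y subject to
#   x <= 4, 2y <= 12, 3x + 2y <= 18, x >= 0, y >= 0
value, x, y = solve_lp_2d((3, 5), [(1, 0, 4), (0, 2, 12), (3, 2, 18)])
```

For this sample model the optimum lands at the vertex x = 2, y = 6 with objective value 36. With many variables the number of vertices explodes, which is why real solvers use the simplex or interior-point methods instead of enumeration.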
Linear programming is a mathematical technique used in solving a variety of problems related with management, from scheduling, media selection, financial planning to capital budgeting, transportation and many others, with the special characteristic that linear programming expect always to maximize or minimize some quantity. 12) Choose desired output reports. • Under Options: Select "Assume Linear Model", because this is an Linear Programming problem (an optimization problem with a linear objective function and linear constraints). Each product must undergo an assembly process and a finishing process. If an antenna is installed in a sector, it will give that sector wireless access, but it will also give the direct neighbor sectors [the one to the North, East, West and. xla!” text in these procedures or add a test for the Excel version. a method of non-linear regression using the SOLVER function of Excel. Assembly involves two major steps: winding the motor's armature (winding copper wire. Sometimes, though, you might have to draw a chart or graph to help with understanding or solving these problems. Don’t let your day-to-day responsibilities stifle you! In difficult projects, innovation can flourish when you generate. Let's take our linear program from above and remove the constraint $$y\leq 4$$ to obtain a nonnegative linear program. However, I’ve included it here because it provides some understanding into the way that the previous linear regression methods. Linear Programming. • find feasible solutions for maximization and minimization linear programming problems using. In the Manage drop-down box, select Excel Add-ins, and then click Go. 1) An auto parts manufacturer produces three different parts: Model A, Model B, and Model C. solves linear systems, including systems with parameters. Excel Solver's default algorithm. 
The mathematical programming technology of CPLEX Optimizer enables decision optimization for improving efficiency, reducing costs and increasing profitability. Solved by Expert Tutors. Within the Kellogg School, it is used in multiple courses, including Operations Management, Finance I and II, Strategic Decision Making, and Pricing Strategies, among others. From the Add-ins dialog, check the box for Solver Add-in. Look down to the bottom right side for a field called Manage: select Excel Add-ins from the drop-down list. Click on Keep Solver Solution and OK, then the Reports will be. Solver has now tried every possible combination of numbers in cells C4 through E4. Cell: each rectangle in Excel. (Four types of information can be typed into a cell: Number, Fraction, Function, and Text.) Total Cost Of The Model. Set up this problem as a linear programming problem and find…. A comparison of the features available in these solvers can be found here. Software comes with many textbooks. (Solution): Linear Programming/Excel Solver/Sensitivity Report. If anything is invested, it must meet the fund minimum. Linear Programming Notes IV: Solving Linear Programming Problems Using Excel. 1 Introduction. Software that solves moderately large linear programming problems is readily available. Contact [email protected] The Solver optimization add-in that ships with Excel is used extensively in our books.
Suggested by Laurent Godard. If this is not the case, the linear solvers are very likely to return meaningless results. 1 Overview In this lecture we describe a very general problem called linear programming that can be used to express a wide variety of different kinds of problems. Linear and Integer Programming: With Excel Examples. Linear Programming is one of the important concepts in statistics. mixed integer-linear programming. To remind you of it we repeat below the problem and our formulation of it. To formulate this linear programming model, answer the following three questions. Leave other settings at their defaults. It was developed by Frontline Systems, which has developed a number of Solver products, some much more powerful than the version of Solver that comes with Excel. Linear Programming Staff Scheduling Problem - Duration: Excel Solver example and step-by-step explanation. Solver now solves the optimization model. make the required equation. Click the Solver command button in the Analyze group at the end of the Ribbon’s Data tab. 3 Solver Screen:. Our Solver Tutorial takes you. Solver is an Add-in of Excel that can be used to find the best solution, such as allocate scarce resources, maximizing profits, or minimizing costs. Please see Excel Solver algorithms for more details. If you’ve ever ventured into the Excel Solver add-in, you probably noticed that there are many options and it can be a little overwhelming. To avoid a backlog, all … Continue reading (Solution): Linear programming with excel →. Setup the problem by entering the proper formulas in an Excel Worksheet. To do this, I need 'solver' tool which i can get free from the internet. Linear and Integer Programming: With Excel Examples. See more: google linear programming solver, 100 templates wordpress 271, free script automatic youtube videos adding wordpress,. https://www. Now go to Data and open solver. 
Solving Linear Programs in Excel 11) Excel will solve LP problem based on the formulas you inputted. Click on Keep Solver Solution and OK then the Reports will be. Question: In Using Excel To Solve Linear Programming Problems, The Target Cell Represents The: A. The project is worth 80 points. Excel has an Add-In called Solver that can solve mathematical programming models (linear, nonlinear and integer). Linear Solvers – handling linear optimization problems in both continuous and integer variables. Solving the linear model using Excel Solver. The solution will be put here. Use Solver's linear optimization capabilities. Solved by Expert Tutors. The lifting schedule is also a bit more flexible than the 6 week program. The Fly-Right Airplane Company builds small jet airplanes to sell to corporations for use by their executives. Solved by Expert Tutors. If you get the message "Solver found a message. All Constraints and optimality conditions are satisfied. Now, we have all the steps that we need for solving linear programming problems, which are: Step 1: Interpret the given situations or constraints into inequalities. 70 #14F: Report Created: 21/2/2020 6:48:28 PM. Press the chart button in the toolbar, OR under Insert in the menu, select Chart. A First Linear Programming Example: The Farmer Problem with Excel Solver (REVISITED) - Duration: 17:44. Linear Program Solver (Solvexo) is an optimization package intended for solving linear programming problems. (Mixed Integer Linear Programming) problem -- but Excel's solver can handle those with no problem (as long. If x has a non-negative constraint, check the box 'Make Unconstrained Variables Non-Negative. In case you have no clue a good deal about Linear programming it's solving for x, this article will give you a little insight about that as well. Linear Programming Staff Scheduling Problem - Duration: Excel Solver example and step-by-step explanation. 
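Step 1 above — interpreting the given situation as inequalities — produces a model of the following general shape. The numbers here are a hypothetical two-variable illustration, not taken from any problem in the text:

```latex
\begin{aligned}
\text{maximize}\quad   & z = 3x_1 + 5x_2 \\
\text{subject to}\quad & x_1 \le 4 \\
                       & 2x_2 \le 12 \\
                       & 3x_1 + 2x_2 \le 18 \\
                       & x_1,\; x_2 \ge 0
\end{aligned}
```

In the spreadsheet, the objective row becomes the target cell, the $x_1, x_2$ cells become the changing cells, and each inequality becomes one Solver constraint.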
It also includes a series of formulas that constrain the objective formula's coefficients. In addition to solving equations, the Excel solver allows us to find solutions ot optimization problems of all kinds (single or multiple variables, with or without constraints). Which means the values for decision variables should be greater than or equal to 0. Linear Programming is one of the important concepts in statistics. The Simplex Algorithm developed by Dantzig (1963) is used to solve linear programming problems. See below for some example timeline templates to help you get started. Linear Program Solver is suitable for linear, integer and goal programming. For a quick start, click on the following titles to view/download the Excel setups for these two problems: The Product-Mix Problem , The Investment Problem. One of… Read more about Excel Solver: Which Solving Method Should I Choose?. Easy steps on how to solve linear programming and transportation problems with Excel Solver. Solver Engine: Simplex LP Solution Time: 0. The purpose of this project is to obtain hands on experience with a software product for solving linear programs. The Operation Tools of Kutools for Excel can help you to solve this problem quickly and easily. Excel's Solver add-in gives a very simple approach to address problems involving such formulas. Also, if you like to show the equation on the chart, tick the ‘Display Equation on chart’ box. In such cases, one option is to use an open-source linear programming solver. Excel & Data Entry Projects for $10 -$30. Solved by Expert Tutors. It also includes a series of formulas that constrain the objective formula's coefficients. Linear programming is a simple technique where we depict complex relationships through linear functions and then find the optimum points. Maximum budget is 25$for 2 questions. Excel's Solver can help. (Mathematical Modeling and Optimization Using AMPL: A Cutting-Stock Problem) [6 points] This problem is the same as Exercise 2. 
For a given problem, Excel Solver can run various permutations and combinations and find the best possible solution for you. Highlight the range you want to do the exponential calculation on. In Excel, we have Excel Solver, which helps us solve Linear Programming Problems. Most math majors have some exposure to regression in their studies. • Click on the "OK" button to return to the original dialogue box. Once this is complete, go back to the Developer tab and stop recording. Solve your model in the cloud. If you go to your Data tab, you should now see Solver in the Analyze section. 175 there is no longer any slack at the optimal solution and the constraint becomes binding. Finally, under the select a solving method, choose the Simplex LP option. As you know, an LP problem can be solved with the Excel extension, Excel Solver. Applications 1. Select plot type "XY scatter". I am trying to use the Solver in Excel to create a linear program to minimize mutual fund expenses. Additionally, it is imperative that all formulas and equations are. Optimize pick path in a warehouse.
A series of specialized symbols or boxes connected with arrows represents the steps of the flow chart. The procedure for solving this type of problem is basically the same as the procedure explained for solving nonlinear equations or unconstrained optimization problems in the previous two sections. Make sure the Solver function is enabled in your Excel (see step 2 if it is already active); next, we continue with how to enable the Solver function in Excel. The graph is a fairly typical one showing a linear range up to ~0. I need a constraint that says: X has to be within the range of 150 - 250, or X can equal 0. Assembly involves two major steps: winding the motor's armature (winding copper wire. Solving Linear Programming Problems By Using Excel's Solver Salim A. Specifying the decision variables, the objective function and the relevant constraints. Create an Excel spreadsheet model for this LP and use Excel Solver to solve it. General Nonlinear Solver. The feasible region of the linear programming problem is empty; that is, there are no values for x1 and x2 that can simultaneously satisfy all the constraints. The purpose of this essay is to show how Geometer's Sketch Pad (GSP) can be used to enhance an introduction to linear programming in a classroom environment.
I'll start by showing you how to install Solver, how to organize a worksheet for use in Solver, and how to find a solution to an optimization problem. Learn how to graph linear regression, a data plot that graphs the linear relationship between an independent and a dependent variable, in Excel. Then you run SOLVER and that’s it: done! Examples. The linear programming model presented in this case will illustrate how Excel Solver can be used in project management time-cost trade off (crashing). Southern Sporting Goods Company makes basketballs and footballs. Solved by Expert Tutors. Plus you can maximize or minimize a target, designate changeable cells, and establish constraints. The resource requirements for each product and the total resources available are as follows: Resource Requirements. Click on any of the data points and right-click. As the name implies, the functions must be linear in order for linear programming techniques to be used. Adding constraints should be ready well in advance. I have been able to solve this problem by analyzing each of the 3 possible locations separately (therefore, creating linear programming models with the 2 old factories + a new one), and then comparing the minimum costs that I would incur in each of these three cases. Curt Frye explains how to install Solver, organize worksheets to make the data and summary operations clear, and find a solution using Solver. Solver—a Microsoft Office Excel add-in—can help you analyze your data more efficiently. Game Theory Linear programming help: Advanced Algebra: Jan 4, 2018: Linear Programming - a constraint: Pre-Calculus: Jul 16, 2017: Production/Storage Linear Programming Question - Solver Excel: Business Math: Feb 7, 2011: linear programming question solver excel: Business Math: Oct 3, 2010. This function and its arguments correspond to the options in the Solver Options dialog box. 
Model A requires 50 pounds of special alloy steel per unit, 130 minutes of machining time per unit, and 60 minutes of assembly time per unit. The wall cabinet sells for$300 and the base sells for \$450. Linear and non- linear solvers are available in most commercially available spreadsheets to include Excel, LibreOffice, GoogleSheets, etc. Nonlinear curve fitting using Excel's SOLVER with step-by-step operations has been reported previously [1,4,13,14]. & Tafamel, A. 3 OD units or ~1ng/ml, with a curve of increasing gradient after that. Specifying the parameters to apply to the model in the Solver Parameters dialog box. Excel - Linear Programming (Minimize Cost) - Duration: 14:19. If you haven’t installed Excel Solver in your Microsoft Excel, then follow the steps below: a. A First Linear Programming Example: The Farmer Problem with Excel Solver (REVISITED) - Duration: 17:44. In Excel, we have Excel Solver which helps us solving the Linear Programming Problems a. To access it just click on the icon on the left, or «PHPSimplex» in the top menu. The resource requirements for each product and the total resources available are as follows: Resource Requirements. From the Add-ins dialog, check the box for Solver Add-in. If 'Solver' does not appear on the 'Tools' menu in Excel, then you need to enable it as follows: ¾ Select the 'Tools' menu in Excel, and then choose 'Add-ins'. The first node cannot receive a path and the last node cannot have a path from it. Linear programming is a technique used to solve models with linear objective function and linear constraints. Microsoft Excel 16. • Excel has the capability to solve linear (and often nonlinear) programming problems with the SOLVER tool, which: • 1. mixed integer-linear programming. Linear Programming Topics Linear programming is a quantitative analysis technique for optimizing an objective function given a set of constraints. The example involves a company that assembles three types of electric motor. 
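The nonlinear curve fitting described above boils down to minimizing a sum of squared errors over the model's parameters. Excel's GRG Nonlinear engine does this with gradient information; purely as an illustration of the same objective (and with a made-up exponential model y = a·e^(b·x), not the assay data mentioned in the text), here is a simple shrinking-grid minimizer:

```python
import math

def sse(a, b, xs, ys):
    """Sum of squared errors for the model y = a * exp(b * x)."""
    return sum((a * math.exp(b * x) - y) ** 2 for x, y in zip(xs, ys))

def fit_exponential(xs, ys, a_range=(0.0, 5.0), b_range=(0.0, 1.0), rounds=6):
    """Minimize SSE by scanning a coarse grid of (a, b) values and
    repeatedly shrinking the grid around the best point found so far."""
    (alo, ahi), (blo, bhi) = a_range, b_range
    steps = 20
    best = None
    for _ in range(rounds):
        for i in range(steps + 1):
            for j in range(steps + 1):
                a = alo + (ahi - alo) * i / steps
                b = blo + (bhi - blo) * j / steps
                err = sse(a, b, xs, ys)
                if best is None or err < best[0]:
                    best = (err, a, b)
        # Re-center a smaller search window on the current best point.
        _, a, b = best
        da = (ahi - alo) / steps
        db = (bhi - blo) / steps
        alo, ahi = a - da, a + da
        blo, bhi = b - db, b + db
    return best[1], best[2]

xs = [0, 1, 2, 3, 4]
ys = [2.0 * math.exp(0.5 * x) for x in xs]   # synthetic "observed" data
a, b = fit_exponential(xs, ys)               # recovers a = 2, b = 0.5
```

Solver automates exactly this kind of objective-minimization, but with a much smarter gradient-based search than a brute-force grid.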
Saleh1, Thekra I. Learn Programming. That missing gap is now filled by the Solver for Nonlinear Programming extension. Solver uses a group of cells that are related to a formula in the target cell. Click on any of the data points and right-click. Solver is not a Microsoft product. Use of this system is pretty intuitive: Press "Example" to see an example of a linear programming problem already set up. 3 Solver Screen:. The Simplex Algorithm developed by Dantzig (1963) is used to solve linear programming problems. Solved by Expert Tutors. It's fast, memory efficient, and numerically stable. I need a constraint that says: X has to be within the range of 150 - 250, or X can equal 0. You can use linear programming, quadratic programming, mixed integer linear programming, second order sonic programming, compact quasi-Newton programming, or constraint satisfaction programming in your models. The 'Solver' add-in is an Excel optimization and equation solving tool commonly used in solving various business, programming, and engineering problems. I know that I cannot use if, or, and statements to constrain a solver. Prerequisites. com), you can define and solve many types of optimization problems in Google Sheets, just as you can with the Excel Solver and with Frontline's Solver App for Excel Online. mixed integer-linear programming. To formulate this linear programming model, answer the following three questions. Studying these examples is one of the best ways to learn how to use NMath libraries. We will use excel to get an initial solution. Which means the values for decision variables should be greater than or equal to 0. Numbers in sum. " Do not use commas in large numbers. We will see in this article how to use Excel Solver to optimize the resources associated with business problems with the help of Linear Programming. Check the box titled ‘Solver Add-ins’ and then click ‘OK’. 
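The tableau mechanics behind the Simplex Algorithm mentioned above can be sketched in a few dozen lines. This is a bare-bones teaching version (only ≤ constraints with non-negative right-hand sides, and no anti-cycling rule), not production code and not what Excel ships:

```python
def simplex_max(c, A, b):
    """Maximize c.x subject to A x <= b and x >= 0, assuming every b[i] >= 0
    (so the all-slack basis is feasible and no two-phase start is needed)."""
    m, n = len(A), len(c)
    # Tableau: one row per constraint plus the objective row; columns are
    # the n original variables, m slack variables, and the right-hand side.
    T = [[float(v) for v in A[i]] + [1.0 if j == i else 0.0 for j in range(m)]
         + [float(b[i])] for i in range(m)]
    T.append([-float(ci) for ci in c] + [0.0] * (m + 1))
    basis = list(range(n, n + m))          # slack variables start basic
    while True:
        # Entering variable: most negative coefficient in the objective row.
        piv_col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][piv_col] >= -1e-9:
            break                          # optimal: no improving direction
        # Leaving variable: minimum-ratio test keeps the basis feasible.
        ratios = [(T[i][-1] / T[i][piv_col], i)
                  for i in range(m) if T[i][piv_col] > 1e-9]
        if not ratios:
            raise ValueError("LP is unbounded")
        _, piv_row = min(ratios)
        piv = T[piv_row][piv_col]
        T[piv_row] = [v / piv for v in T[piv_row]]
        for i in range(m + 1):             # eliminate the pivot column
            if i != piv_row:
                f = T[i][piv_col]
                T[i] = [v - f * w for v, w in zip(T[i], T[piv_row])]
        basis[piv_row] = piv_col
    x = [0.0] * n
    for i, bv in enumerate(basis):
        if bv < n:
            x[bv] = T[i][-1]
    return T[-1][-1], x
```

On the hypothetical model "maximize 3x + 5y subject to x ≤ 4, 2y ≤ 12, 3x + 2y ≤ 18", two pivots reach the optimum of 36 at (2, 6) — the same answer Solver's Simplex LP engine would report for this model.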
The steps must be repeated every time you start Excel if you wish to generate linear programming reports. Usage is free. So, when using Excel Solver, the task is to find the solution that best achieves the objective while satisfying all of the relevant constraints. The important word in the previous sentence is depict. Linear Programming Calculator is a free online tool that displays the best optimal solution for the given constraints. Microsoft Excel doesn't have a component that can help you identify the critical path of your project. Solving Linear Programming problems using Microsoft Excel (Solver), problem 2. Excel's Solver can help. When the dialogue box appears, make sure the box is ticked, as shown below. Now type in a new title that describes the chart. When you see the Solver Parameters dialog box, click the Solve button to find the optimal solution. Cell F4 is our equation P, which has to be minimized, and F6, F7, F8 are the constraints. Basic types of these optimization problems are called linear programming (LP). (Linear Programming) Solver engine for this optimization problem. 1 Linear Programming 0. If you are installing two add-ins, Excel prompts you to install an add-in twice, once for the Analysis. The demand function contained in cell C4 is = 1500-24. Obviously we can only solve very simple problems graphically. com for more information. Solver is a free application that helps solve linear programming and nonlinear optimization problems with Excel 2008.
Method: The method described in this paper, to conduct a curve fitting protocol in an Excel spreadsheet, was carried out on a Gateway Pentium II computer running Microsoft Windows 98 and Excel 97. Clausen Algebra II, STEP 1, Define Your Coordinates. WHAT TO DO: Set up your Excel spreadsheet to make a chart of points for, and a graph of, a linear equation. MIDACO is a solver for general optimization problems. It calculates eigenvalues and eigenvectors to obtain the diagonal form of a symmetric matrix. Most corporations want to undertake projects that contribute the greatest net present value (NPV), subject to limited resources (usually capital and labor). Numbers in sum. Open Solver Interface. Figure 4 shows the constraint added to Excel Solver. Mixed integer linear programming formulation techniques. The cells in yellow specify that each node can only have one path from it and one path to it. NASBA Field(s) of Study and Credits: Personal Development (16). Individual and group practical exercises, self-assessment, discussion, and application planning. Lecture 4: Examples of when to use linear programming. Step 3: At the bottom, you will see Excel Add. We can enter this set of constraints directly in the Solver dialogs along with the non-negativity conditions: B4:E4 >= 0. Then modify the example or enter your own linear programming problem in the space below using the same format as the example, and press "Solve". (Solution): Southern Sporting Goods: Linear Programming using EXCEL Solver.
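For the straight-line case described above (a chart of points and a graph of a linear equation), the fitted line comes from ordinary least squares, which has a simple closed form. A quick sketch with made-up sample points — the same computation Excel's SLOPE and INTERCEPT worksheet functions perform:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = m*x + c (the line a linear
    trendline, or SLOPE/INTERCEPT, would report for the same data)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance of x and y divided by variance of x.
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    m = sxy / sxx
    return m, mean_y - m * mean_x

# Hypothetical data lying exactly on y = 2x + 1:
slope, intercept = linear_fit([1, 2, 3, 4], [3, 5, 7, 9])
```

With noisy experimental data (such as the pressure-versus-temperature example), the same formula returns the best-fit slope and intercept in the least-squares sense rather than an exact line through the points.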
A pop-up will then appear asking whether you want to keep the new values or revert to your original values. Consider a judicial staffing problem with the following monthly caseload hours: Jan 400, Feb 300, Mar 200, Apr 600, May 800, Jun 300, Jul 200, Aug 400, Sep 300, Oct 200, Nov 100, Dec 300. Suppose each judge works all 12 months and can handle up to 120 hours per month of casework. Linear programming, as demonstrated by applying Excel's Solver feature, is a viable and cost-effective tool for analysing multi-variable financial and operational problems. Note that you cannot use IF, OR, and AND statements to constrain the Solver. Welcome to Solving Optimization and Scheduling Problems in Excel; parts of this tutorial borrow from Prof. In this post, I'd like to provide some practical information to help you choose the correct solving method in Excel to efficiently find an optimum solution to your problem. There are several ways to open Excel in this lab. Step 3 shows the completed problem, with decision variables that have been optimized by the Solver to maximize the objective while staying within the problem's constraints. Linear programming is a method to achieve the best outcome of a given function under a series of constraints.
Use one of these ways to open Excel: click the Start menu, select 'All Programs', select 'Microsoft Office', and then select 'Microsoft Office Excel 2003'. The Simplex algorithm is a popular method for numerical solution of the linear programming problem. Suppose we want to apply a profit threshold as a percentage of the revenue generated by 200 items only. Background: OpenSolver is an add-in that extends Excel's Solver with a more powerful linear solver suitable for handling linear programming and mixed integer programming. To activate the built-in Solver, go to the Data tab in Excel and check the Analyze section at the far end of the ribbon; if Solver is missing, enable it from File -> Options -> Add-Ins -> Solver Add-in -> Go, click Solver Add-in, and then click OK. A typical linear programming problem consists of a linear objective function which is to be maximized or minimized, subject to a finite number of linear constraints. These models have a goal (minimize or maximize some value) expressed as a linear function. You can use Excel's Solver add-in to create a staffing schedule based on staffing requirements. Adding a constraint brings you back to the Solver window. The goal is always the same: to make optimal decisions and save resources such as money, time and materials. Note that here "programming" means "planning"; the term predates computer programming.
An Excel demonstration of the effect of random experimental variations is given in the video on replicate measurements. Analysis 2 covers experimental uncertainty (error) in a simple linear data plot: a typical set of linear data can be described by the change of the pressure, p (in pascals), of an ideal gas as a function of the temperature, T, in kelvin. The spreadsheet solver is an add-in feature found in recent versions of spreadsheet software such as Lotus 1-2-3, Microsoft Excel, and Borland's Quattro Pro. The SOLVER add-in must be installed before nonlinear regression with a series of elective settings can be performed; Solver then varies the values of the parameters A, C and k to minimize the sum of chi squared. Another application is using linear programming to solve a make-vs-buy decision. Linear programming is a mathematical modeling technique in which a linear function is maximized or minimized when subjected to various constraints. Microsoft has a partnership with Gurobi Optimization to provide their MIP solver in Solver Foundation. To export results, highlight both the Answer Report and the Sensitivity Report (hold down the Control key). In some cases, one option is to use an open-source linear programming solver. This problem of optimization under constraints falls in the general category of (linear) programming, and can be solved mathematically with methods such as the Simplex Algorithm. The goal is to formulate algebraic models for linear programming problems. Linear programming is a technique used to solve models with a linear objective function and linear constraints, and Excel Solver can be used to solve such problems as well.
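The curve-fitting step mentioned above (varying A, C and k to minimize the sum of chi squared) can be reproduced outside Excel. Below is a sketch using SciPy, assuming an exponential-decay model y = A*exp(-k*x) + C; the model form is a guess based on the parameter names, and the data is synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed model, guessed from the parameter names A, C and k in the text:
# exponential decay y = A * exp(-k * x) + C. Minimizing the sum of squared
# residuals is what Excel's Solver does when it varies A, C and k to
# minimize chi squared (with unit weights).
def model(x, A, k, C):
    return A * np.exp(-k * x) + C

# synthetic data in place of the spreadsheet's measured values
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = model(x, 2.5, 0.8, 1.0) + rng.normal(0.0, 0.02, x.size)

popt, pcov = curve_fit(model, x, y, p0=(1.0, 1.0, 0.0))
A_fit, k_fit, C_fit = popt
print(A_fit, k_fit, C_fit)  # close to the true values 2.5, 0.8, 1.0
```

Like Solver's GRG engine, `curve_fit` needs a reasonable starting guess (`p0`) for nonlinear models.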
See below for some example timeline templates to help you get started. You should see the problem we considered above set out in the worksheet; we will use Excel to get an initial solution. Click the target cell in the worksheet, or enter its cell reference or range name in the Set Objective text box. For a problem with a quadratic objective and linear constraints, use the QP solver instead. The real relationships might be much more complex, but we can simplify them to linear relationships. SolverStudio is an add-in for Excel 2007 and later on Windows that allows you to build and solve optimisation models in Excel using any of several optimisation modelling languages, including PuLP, an open-source Python-based COIN-OR modelling language developed by Stu Mitchell. With some tricks you can also perform least squares fitting of polynomials using Excel. Linear programming is a special case of mathematical programming. You should be able to see the Solver button in your Excel Data ribbon; now go to Data and open Solver. What is linear programming? LP is a mathematical method for determining a way to achieve the best outcome (such as maximum profit or lowest cost) in a given mathematical model for some list of requirements represented as linear relationships.
Drawing these charts can be done in Excel as well. A paper from the University of Tikrit (received 19/2/2008, accepted 29/6/2008) describes advanced methods for finding a verified global optimum and finding all solutions of a system. Solver is an add-in of Excel that can be used to find the best solution: for example, to allocate scarce resources, maximize profits, or minimize costs. This last method is more complex than both of the previous methods. All equations on the Excel spreadsheet are linear (first order), so we can use the Simplex LP (linear programming) Solver engine for this optimization problem. Excel's Solver add-in can solve mathematical programming models (linear, nonlinear and integer). Take this spreadsheet and look at Sheet A. If a linear solver is used, there is the option to run a "Linearity Check" after the solve, which tries to make sure the problem was indeed linear. Solving the linear model using Excel Solver: at 175 there is no longer any slack at the optimal solution and the constraint becomes binding. Linear solvers handle linear optimization problems in both continuous and integer variables. Linear programming is a simple technique where we depict complex relationships through linear functions and then find the optimum points. Step 2: select Add-Ins after Options. A nonlinear programming model, by contrast, consists of a nonlinear objective function and nonlinear constraints.
The following problems are to be done in Excel using the Solver add-in: first formulate the problems, and then solve them using Solver in MS Excel (do not try to solve them manually). A linear programming model takes the following form. Objective function: Z = a1X1 + a2X2 + ... + anXn. A construction schedule is a timeline that is expected to be followed by a construction team to be able to provide the needed project result to the client. The objective function has its optimal value at one of the vertices of the region determined by the constraints. To specify the size of the system, select it from the popup menus. Now click on "Solve". Make sure the Solver function is already enabled in your Excel (see step 2 if it is already active); next we continue with how to enable the Solver function in Excel. See also: Modelling Linear Programming Problems Using Microsoft Excel Solver, by Adekunle, S. To chart the data, select plot type "XY scatter". There is also an easy video for learning to use Microsoft Excel Solver for linear programming. Of course, this isn't the only method, but I think it's probably the most straightforward one.
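The objective function Z = a1X1 + a2X2 + ... + anXn together with linear constraints maps directly onto a linear-programming solver's inputs, just as in the Solver dialog. Here is a minimal sketch with SciPy's `linprog`; the coefficients are invented for illustration:

```python
from scipy.optimize import linprog

# Illustrative LP in the form Z = a1*X1 + a2*X2 (all coefficients invented):
#   maximize Z = 3*X1 + 2*X2
#   subject to X1 +   X2 <= 4
#              X1 + 3*X2 <= 6
#              X1, X2 >= 0
# linprog minimizes, so the objective coefficients are negated.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, -res.fun)  # optimal (X1, X2) and the maximal Z
```

As the text notes, the optimum sits at a vertex of the feasible region; here it is (X1, X2) = (4, 0) with Z = 12.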
This technique has been useful for guiding quantitative decisions in business planning, in industrial engineering, and, to a lesser extent, in the social and physical sciences. Standard-form linear programs generally use an LP Simplex solving method; Solver uses a special, efficient algorithm called the simplex method to solve this kind of problem (see Excel Solver algorithms for more details). An Excel Solver sensitivity report for a linear programming model is given below. Excel provides us with a couple of tools to perform least squares calculations, but they are all centered around simple linear functions of the shape y = ax + b. Refresher, setting up a linear problem on paper: recall the cargo problem from the last homework assignment. Inserting a scatter diagram into Excel can help visualize the data. Within the Kellogg School, Solver is used in multiple courses, including Operations Management, Finance I and II, Strategic Decision Making, and Pricing Strategies, among others. To enable the add-in, click the Excel Options button, check the "Solver Add-in" box, and click OK. Example: Southern Sporting Goods Company makes basketballs and footballs; each product is produced from two resources, rubber and leather. OpenSolver for Google Sheets uses the excellent, open-source SCIP and Glop optimization engines to quickly solve large linear and integer problems. Models developed for optimization can be either linear or non-linear.
When Excel finds an optimal solution, the following report appears. GIPALS32 is a linear programming library that combines the power of a linear programming solver with simplicity of integration into software tools such as MS Visual C++ and MS Visual C#. A flow chart template refers to a template used for creating a flow chart. (The Premium Solver can be installed from the course CD.) A nonlinear programming model consists of a nonlinear objective function and nonlinear constraints. Consider an antenna-placement example: if an antenna is installed in a sector, it will give that sector wireless access, but it will also cover the direct neighbor sectors to the north, east, west and south. Linear programming is a special case of mathematical programming (also known as mathematical optimization). I was using the binary constraint to incorporate options into the model and to turn on additional calculations within the options that would run if the model turned on that constraint. That makes it a mixed integer linear programming (MILP) problem, but Excel's Solver can handle those with no problem (as long as the model stays linear). Select Solver and click OK. Solver allows users to solve optimization problems using three methods, the most common being the Simplex LP method for linear programming problems. There is also a video teaching how to solve linear programs using the Excel Solver function and the simple graphical method. Example: an auto parts manufacturer produces three different parts: Model A, Model B, and Model C.
Excel uses a tool named Solver to find the solutions to linear programming problems, as well as integer and noninteger programming problems. If you have formulated an optimization problem in traditional linear programming form, Excel's Solver add-in gives a very simple approach to solving it: the Solver add-in is an easy way to solve relatively small and simple linear, nonlinear, and integer programming problems. Note, however, that the Simplex solver in MS Excel is not able to solve nested formulas that are tied to a binary constraint. Using the Solver function for linear programming (LP), together with concepts from the Theory of Constraints, a case study was carried out in a furniture factory, Colliseu Indústria de Móveis Ltda. Click OK when you have entered all constraints. The Excel Solver add-in is particularly helpful for solving linear programming issues, also known as linear optimization problems, and therefore is sometimes called a linear programming solver. A related example is the staff scheduling problem, with an Excel Solver example and step-by-step explanation. The linear programming model presented in this case illustrates how Excel Solver can be used in project management time-cost trade-off (crashing). Microsoft Excel can be a very useful program.
With an optimization-modeling problem, you want to optimize an objective function while recognizing that there are constraints, or limits. Nonlinear curve fitting using Excel's SOLVER with step-by-step operations has been reported previously [1,4,13,14]. Lecture 5 covers how to identify optimal solutions in linear programming graphically: determine the gradient of the line representing the objective function and slide it across the feasible region. Let's take our linear program from above and remove the constraint $$y\leq 4$$ to obtain a nonnegative linear program. Excel has the capability to solve linear (and often nonlinear) programming problems; below we solve this LP with the Solver add-in that comes with Microsoft Excel. Otherwise, you end up with an unbounded objective function, and the problem must be solved by other methods. Identifying limited resources and optimising value is a major part of the work of a management accountant. MIDACO is suitable for problems with up to several hundreds to some thousands of optimization variables, and features parallelization in Matlab, Python, R, C/C++ and Fortran. Finally, add your information to the timeline in Excel.
In this unit you will explore the model of the simplified dietary problem through what-if questions, use Excel Solver to solve the problem, and learn how to develop an LP model. In the diet problem, they would like to offer some combination of milk, beans, and oranges. Cbc (COIN-OR branch and cut) is an open-source mixed integer programming solver written in C++. If Excel displays a message that states it can't run this add-in and prompts you to install it, click Yes to install the add-ins. In this post I will discuss solving the simplex method in linear programming with the Solver function in Excel. You can also solve your model in the cloud. Another example: schedule your workforce to meet labor demands. The Simplex Algorithm, developed by Dantzig (1963), is used to solve linear programming problems; Solver has shipped as an add-in with Excel since Excel 97. Step 2: plot the inequalities graphically and identify the feasible region. The toolkit also offers complex arithmetic, matrix arithmetic, polynomial arithmetic, numeric integration and derivation, root finding, linear systems solvers, date conversion functions, and list manipulation (sorting, cutting, selecting, etc.). To use Solver from the classic menu, choose Tools, then Solver; this brings up the Solver Parameters box, which is discussed next. Put a check in the box next to 'Solver Add-in' and click the Go… button.
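The diet problem mentioned above (choosing a combination of milk, beans, and oranges) is a classic cost-minimization LP. A sketch in SciPy follows; every cost and nutrient value is invented for illustration, since the text does not give the actual data:

```python
from scipy.optimize import linprog

# Diet-problem sketch: pick servings of milk, beans and oranges that meet
# nutrient minimums at least cost. All numbers below are made up.
cost = [0.23, 0.10, 0.15]        # dollars per serving: milk, beans, oranges
# nutrient content per serving; rows are protein and vitamin C,
# written as <= constraints by negating the ">= minimum" rows
A_ub = [[-8.0, -7.0, -1.0],      # protein   >= 50
        [-2.0, -1.0, -70.0]]     # vitamin C >= 100
b_ub = [-50.0, -100.0]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * 3, method="highs")
milk, beans, oranges = res.x
print(milk, beans, oranges, res.fun)
```

The negation trick is needed because `linprog` only accepts "less than or equal" rows; Excel's Solver lets you enter >= constraints directly.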
This quantity will be minimized. The text also describes a method of non-linear regression using the SOLVER function of Excel. From the Add-ins dialog, check the box for Solver Add-in. You are going to find out how to develop a macro the basic way, by using the integrated macro recorder. In case you do not know much about linear programming, this article will also give you a little insight into it. Click the Microsoft Office Button and then click "Excel Options". Goal programming extends the approach to multiple objectives. Select "Keep Solver Solution" and click the "OK" button. (Timo Salmi, [email protected]) Linear programming is a quantitative analysis technique for optimizing an objective function given a set of constraints; this is the key advantage of the Excel Solver.
http://qnet.readthedocs.io/en/latest/API/qnet.algebra.ordering.html
# qnet.algebra.ordering module¶

The ordering package implements the default canonical ordering for sums and products of operators, states, and superoperators. To the extent that commutativity rules allow this, the ordering defined here groups objects of the same Hilbert space together, and orders these groups in the same order that the Hilbert spaces occur in a ProductSpace (lexicographically/by order_index/by complexity). Objects within the same Hilbert space (again, assuming they commute) are ordered by the KeyTuple value that expr_order_key returns for each object. Note that expr_order_key defers to the object's _order_key property, if available. This property should be defined for all QNET Expressions, generally ordering objects according to their type, then their label (if any), then their pre-factor, then any other properties. We assume that quantum operations have either full commutativity (sums, or products of states), or commutativity of objects only in different Hilbert spaces (e.g. products of operators). The former is handled by FullCommutativeHSOrder, the latter by DisjunctCommutativeHSOrder. These classes serve as the order_key for sums and products (e.g. OperatorPlus and similar classes). A user may implement a custom ordering by subclassing (or replacing) FullCommutativeHSOrder and/or DisjunctCommutativeHSOrder, and assigning their replacements to all the desired algebraic classes.

## Summary¶

Classes:

DisjunctCommutativeHSOrder Auxiliary class that generates the correct pseudo-order relation for operator products.

FullCommutativeHSOrder Auxiliary class that generates the correct pseudo-order relation for operator sums.

KeyTuple A tuple that allows for ordering, facilitating the default ordering of Operations.

Functions:

expr_order_key A default order key for arbitrary expressions

## Reference¶

class qnet.algebra.ordering.KeyTuple[source] Bases: tuple

A tuple that allows for ordering, facilitating the default ordering of Operations.
It differs from a normal tuple in that it falls back to string comparison if any elements are not directly comparable.

qnet.algebra.ordering.expr_order_key(expr)[source]

A default order key for arbitrary expressions.

class qnet.algebra.ordering.DisjunctCommutativeHSOrder(op, space_order=None, op_order=None)[source] Bases: object

Auxiliary class that generates the correct pseudo-order relation for operator products. Only operators acting on disjoint Hilbert spaces are commuted, to reflect the order the local factors have in the total Hilbert space. I.e., sorted(factors, key=DisjunctCommutativeHSOrder) achieves this ordering.

class qnet.algebra.ordering.FullCommutativeHSOrder(op, space_order=None, op_order=None)[source] Bases: object

Auxiliary class that generates the correct pseudo-order relation for operator sums. Operators are first ordered by their Hilbert space, then by their order-key; sorted(factors, key=FullCommutativeHSOrder) achieves this ordering.
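The fall-back comparison of KeyTuple can be sketched in plain Python. This is a simplified illustration of the idea, not QNET's actual implementation:

```python
class KeyTuple(tuple):
    """Tuple whose ordering falls back to string comparison whenever two
    elements are not directly comparable (simplified sketch)."""

    def __lt__(self, other):
        for a, b in zip(self, other):
            if a == b:
                continue
            try:
                return a < b
            except TypeError:
                # e.g. int vs. str in Python 3: compare string representations
                return str(a) < str(b)
        # all shared positions equal: the shorter tuple sorts first
        return len(self) < len(other)

keys = [KeyTuple((1, "b")), KeyTuple((1, 2)), KeyTuple(("0", 5))]
print(sorted(keys))  # [('0', 5), (1, 2), (1, 'b')]
```

`sorted` only needs `<`, so overriding `__lt__` is enough to make mixed-type keys sortable, which is exactly why such a tuple can serve as a universal order key for heterogeneous expressions.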
https://www.eclipse.org/n4js/spec/functions.html
## 6. Functions

Functions, be they function declarations, expressions or even methods, are internally modeled by means of a function type. In this chapter, the general function type is described along with its semantics and type constraints. Function definitions and expressions are then introduced in terms of statements and expressions. Method definitions and special usages are described in Methods.

### 6.1. Function Type

A function type is modeled as an Object in ECMAScript (see [ECMA11a(p.S13, p.p.98)]). Function types can be defined by means of:

#### 6.1.1. Properties

In any case, a function type declares the signature of a function and allows validation of calls to that function. A function type has the following properties:

typePars (0-indexed) list of type parameters (i.e. type variables) for generic functions.

fpars (0-indexed) list of formal parameters.

returnType (possibly inferred) return type (expression) of the function or method.

name Name of the function or method; may be empty or automatically generated (for messages).

body The body of the function; it contains the statements stmts. The body is null if a function type is defined in a type expression; otherwise it is the last argument in the case of a function object constructor, or the content of the function definition body.

Additionally, the following pseudo-properties are defined for functions:

thisTypeRef The this type ref is the type to which the this keyword evaluates if used inside the function or member. The inference rules are described in This Keyword.

tfpars List of formal parameters together with the this type ref. This is only used for subtyping rules. If this is not used inside the function, then any is set instead of the inferred thisTypeRef to allow for more usages.
The property is computed as follows:

$$tfpars = \begin{cases} thisTypeRef + fpars & \text{if } \mathit{this} \text{ is used or explicitly declared} \\ any + fpars & \text{otherwise} \end{cases}$$

Parameters (in fpars) have the following properties:

name Name of the parameter.

type Type (expression) of the parameter. Note that only parameter types can be variadic or optional.

The function definition can be annotated similarly to Methods, except that the final and abstract modifiers are not supported for function declarations. A function declaration is always final and never abstract. Also, a function has no property advice set.

#### Semantics

Req. IDE-79: Function Type (ver. 1)

Given a function type $F$, the following constraints must be true:

1. Optional parameters must be defined at the end of the (formal) parameter list. In particular, an optional parameter must not be followed by a non-optional parameter: $F.fpars_i.optional \to \nexists\, k > i : \neg F.fpars_k.optional$

2. Only the last parameter of a method may be defined as a variadic parameter: $F.fpars_i.variadic \to i = |F.fpars| - 1$

3. If a function explicitly defines a return type, the last statement of the transitive closure of statements of the body must be a return statement: $F.typeRef \neq \text{Undefined} \to |F.body.stmts| > 0$

4.
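Constraints 1 and 2 can be sketched as a small validity check. This is illustrative Python; representing each formal parameter as a (name, optional, variadic) triple is an assumption made for the example, not N4JS's actual data model:

```python
# Check the two structural constraints on a formal-parameter list:
# 1. an optional parameter must not be followed by a non-optional one
# 2. only the last parameter may be variadic
def check_fpars(fpars):
    """fpars: list of (name, optional, variadic) triples."""
    n = len(fpars)
    for i, (_name, optional, variadic) in enumerate(fpars):
        if optional and any(not opt for _, opt, _ in fpars[i + 1:]):
            return False          # violates constraint 1
        if variadic and i != n - 1:
            return False          # violates constraint 2
    return True

print(check_fpars([("a", False, False), ("b", True, False)]))  # True
print(check_fpars([("a", True, False), ("b", False, False)]))  # False
print(check_fpars([("a", False, True), ("b", False, False)]))  # False
```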
If a function explicitly defines a return type, all return statements must return a type conforming to that type: $F.typeRef \ne Undefined \to \forall r \in returns(F): r.expr \ne null \wedge [[r.expr.typeRef]] <: [[F.typeRef]]$ #### 6.1.2. Type Inference For the given non-parameterized function types ${F}_{left}$ with $F_{left}.tfpars = L_0, L_1, \ldots, L_k$ and $|F_{left}.typePars| = 0$, ${F}_{right}$ with $F_{right}.tfpars = R_0, R_1, \ldots, R_n$ and $|F_{right}.typePars| = 0$, we say ${F}_{left}$ conforms to ${F}_{right}$, written as ${F}_{left} <: {F}_{right}$, if and only if: • the return types conform: $F_{right}.returnType = \text{void}$ $\vee\ (F_{left}.returnType = \text{void} \wedge F_{right}.opt)$ $\vee\ (F_{left}.returnType <: F_{right}.returnType \wedge \neg(F_{left}.opt \wedge \neg F_{right}.opt))$ • if $k \le n$: the parameter types conform contravariantly, $\forall\, 0 \le i \le k: [[R_i]] <: [[L_i]]$ • else ($k > n$): $\forall\, 0 \le i \le n: [[R_i]] <: [[L_i]]$, and all remaining parameters $L_{n+1}, \ldots, L_k$ must be optional or variadic (optional and variadic parameters are matched as shown in the examples below). Function Variance Chart shows a simple example with the function type conformance relations. Figure 6. Function Variance Chart {function()} $<:$ {function(A)} $<:$ {function(A, A)} might be surprising for Java programmers. However, in JavaScript it is possible to call a function with any number of arguments, independently of how many formal parameters the function defines. If a function does not define a return type, any is assumed if at least one of the (indirectly) contained return statements contains an expression. Otherwise void is assumed. This is also true if there is an error due to other constraint violations.
with $returns(F) := \{ r \in F.body.statements \mid \mu(r) = \text{ReturnStatement} \} \cup \bigcup_{s \in F.body.statements} returns(s)$ $returns(s) := \{ sub \in s.statements \mid \mu(sub) = \text{ReturnStatement} \} \cup \bigcup_{sub \in s.statements} returns(sub)$ Example 61. Function type conformance The following incomplete snippet demonstrates the usage of two function variables $f1$ and $f2$, in which $[[f2]] <: [[f1]]$ must hold true according to the aforementioned constraints. A function bar declares a parameter $f1$, which is actually a function itself. $f2$ is a variable to which a function expression is assigned. Function bar is then called with $f2$ as an argument. Thus, the type of $f2$ must be a subtype of $f1$’s type. function bar(f1: {function(A,B):C}) { ... } var f2: {function(A,B):C} = function(p1,p2){...}; bar(f2); The type of this can be explicitly set via the @This annotation. Example 62. Function Subtyping function f(): A {..} function p(): void {..} function fAny(log: {function():any}) {...} function fVoid(f: {function():void}) {..} function fA(g: {function():A}) {...} fAny(f); // --> ok A <: any fVoid(f); // --> error A !<: void fA(f); // --> ok (easy) A <: A fAny(p); // --> ok void <: any fVoid(p); // --> ok void <: void fA(p); // --> error void !<: A Example 63. Subtyping with function types If classes A, B, and C are defined as previously mentioned, i.e.
$C<:B<:A$, then the following subtyping relations with function types are to be evaluated as follows: {function(B):B} <: {function(B):B} -> true {function():A} <: {function():B} -> false {function():C} <: {function():B} -> true {function(A)} <: {function(B)} -> true {function(C)} <: {function(B)} -> false {function():void} <: {function():void} -> true {function():undefined} <: {function():void} -> true {function():void} <: {function():undefined} -> true (!) {function():B} <: {function():void} -> true (!) {function():B} <: {function():undefined} -> false (!) {function():void} <: {function():B} -> false {function():undefined} <: {function():B} -> true The following examples demonstrate the effect of optional and variadic parameters: {function(A)} <: {function(B)} -> true {function(A...)} <: {function(A)} -> true {function(A, A)} <: {function(A)} -> false {function(A)} <: {function(A,A)} -> true (!) {function(A, A...)} <: {function(A)} -> true {function(A)} <: {function(A,A...)} -> true (!) {function(A, A...)} <: {function(B)} -> true {function(A?)} <: {function(A?)} -> true {function(A...)} <: {function(A...)} -> true {function(A?)} <: {function(A)} -> true {function(A)} <: {function(A?)} -> false {function(A...)} <: {function(A?)} -> true {function(A?)} <: {function(A...)} -> true (!) {function(A,A...)} <: {function(A...)} -> false {function(A,A?)} <: {function(A...)} -> false {function(A?,A...)} <: {function(A...)} -> true {function(A...)} <: {function(A?,A...)} -> true {function(A...)} <: {function(A?)} -> true {function(A?,A?)} <: {function(A...)} -> true (!) {function(A?,A?,A?)} <: {function(A...)} -> true (!) {function(A?)} <: {function()} -> true (!) {function(A...)} <: {function()} -> true (!) 
The following examples demonstrate the effect of optional return types: {function():void} <: {function():void} -> true {function():X} <: {function():void} -> true {function():X?} <: {function():void} -> true {function():void} <: {function():Y} -> false {function():X} <: {function():Y} -> X <: Y {function():X?} <: {function():Y} -> false (!) {function():void} <: {function():Y?} -> true (!) {function():X} <: {function():Y?} -> X <: Y {function():X?} <: {function():Y?} -> X <: Y {function():B?} <: {function():undefined} -> false (!) {function():undefined} <: {function():B?} -> true The following examples show the effect of the @This annotation: {@This(A) function():void} <: {@This(X) function():void} -> false {@This(B) function():void} <: {@This(A) function():void} -> false {@This(A) function():void} <: {@This(B) function():void} -> true {@This(any) function():void} <: {@This(X) function():void} -> true {function():void} <: {@This(X) function():void} -> true {@This(A) function():void} <: {@This(any) function():void} -> false {@This(A) function():void} <: {function():void} -> false For the given function types ${F}_{left}$ with ${F}_{left}.tfpars={L}_{0},{L}_{1},.\phantom{\rule{1.0mm}{0ex}}.\phantom{\rule{1.0mm}{0ex}}.\phantom{\rule{1.0mm}{0ex}}{L}_{k}$ ${F}_{right}$ with ${F}_{right}.tfpars={R}_{0},{R}_{1},.\phantom{\rule{1.0mm}{0ex}}.\phantom{\rule{1.0mm}{0ex}}.\phantom{\rule{1.0mm}{0ex}}{R}_{n}$, we say ${F}_{left}$ conforms to ${F}_{right}$, written as ${F}_{left}<:{F}_{right}$, if and only if: • if $|{F}_{left}.typePars|=|{F}_{right}.typePars|=0$: • else if $|{F}_{left}.typePars|>0\wedge |{F}_{right}.typePars|=0$: • $\exists \theta :\left(\Gamma ←\theta \right)⊢{F}_{left}<:{F}_{right}$ (cf. Function Type Conformance Non-Parameterized ) (i.e. 
there exists a substitution $\theta$ of type variables in ${F}_{left}$ so that after substitution it becomes a subtype of ${F}_{right}$ as defined by Function Type Conformance Non-Parameterized) • else if $|F_{left}.typePars| = |F_{right}.typePars|$: • $\Gamma \leftarrow \{ V_i^r \leftarrow V_i^l \mid 0 \le i \le n \} \vdash F_{left} <: F_{right}$ (Function Type Conformance Non-Parameterized accordingly) • $\forall\, 0 \le i \le n: intersection\{ V_i^l.upperBounds \} :> intersection\{ V_i^r.upperBounds \}$ with $F_{left}.typePars = V_0^l, V_1^l, \ldots, V_n^l$ and $F_{right}.typePars = V_0^r, V_1^r, \ldots, V_n^r$ (i.e. we replace each type variable in ${F}_{right}$ by the corresponding type variable at the same index in ${F}_{left}$ and check the constraints from Function Type Conformance Non-Parameterized as if ${F}_{left}$ and ${F}_{right}$ were non-parameterized functions and, in addition, the upper bounds on the left side need to be supertypes of the upper bounds on the right side). Note that the upper bounds on the left must be supertypes of the right-side upper bounds (for similar reasons why types of formal parameters on the left are required to be supertypes of the formal parameters’ types in Function Type Conformance Non-Parameterized). Whether a particular type variable is used on a co- or contra-variant position is not relevant: Example 64. Bounded type variable at co-variant position in function type class A {} class B extends A {} class X { <T extends B> m(): T { return null; } } class Y extends X { @Override <T extends A> m(): T { return null; } } Method m in Y may return an A, thus breaking the contract of m in X, but only if it is parameterized to do so, which is not allowed for clients of X, only those of Y. Therefore, the override in the above example is valid.
The subtype relation for function types is also applied for method overriding to ensure that an overriding method’s signature conforms to that of the overridden method, see [Req-IDE-72] (applies to method consumption and implementation accordingly, see [Req-IDE-73] and [Req-IDE-74]). Note that this is very different from Java, which is far more restrictive when checking overriding methods since Java also supports method overloading: given two types $A, B$ with $B <: A$ and a superclass method void m(B param), it is valid to override m as void m(A param) in N4JS but not in Java. In Java this would be handled as method overloading and therefore an @Override annotation on m would produce an error. The upper bound of a function type $F$ is a function type with the lower bound types of the parameters and the upper bound of the return type: $upper(\text{function}(P_1, \ldots, P_n): R) := \text{function}(lower(P_1), \ldots, lower(P_n)): upper(R)$ The lower bound of a function type $F$ is a function type with the upper bound types of the parameters and the lower bound of the return type: $lower(\text{function}(P_1, \ldots, P_n): R) := \text{function}(upper(P_1), \ldots, upper(P_n)): lower(R)$ #### 6.1.3. Autoboxing of Function Type Function types, compared to other types like String, come only in one flavour: the Function object representation. There is no primitive function type. Nevertheless, for function type expressions and function declarations, it is possible to call the properties of the Function object directly.
This is similar to autoboxing for strings. Access of Function properties on functions // function expression var param: number = function(a,b){}.length // 2 function a(x: number) : number { return x*x; } // function reference a.length; // 1 // function variable var f = function(m,l,b){/*...*/}; f.length; // 3 class A { s: string; sayS(): string { return this.s; } } var objA: A = new A(); objA.s = "A"; var objB = {s:"B"} // function variable var m = objA.sayS; // method as function, detached from objA var mA: {function(any)} = m.bind(objA); // bind to objA var mB: {function(any)} = m.bind(objB); // bind to objB m() // returns: undefined mA() // returns: A mB() // returns: B m.call(objA,1,2,3); // returns: A m.apply(objB,[1,2,3]); // returns: B m.toString(); // returns: function sayS(){ return this.s; } #### 6.1.4. Arguments Object A special arguments object is defined within the body of a function. It is accessible through the implicitly-defined local variable named arguments, unless it is shadowed by a local variable, a formal parameter or a function named arguments, or in the rare case that the function itself is called ’arguments’ [ECMA11a(p.S10.5, p.p.59)]. The arguments object has array-like behavior even though it is not of type Array: • All actually passed-in parameters of the current execution context can be retrieved by 0-based index access. • The length property of the arguments object stores the actual number of passed-in arguments, which may differ from the number of formally defined parameters $fpars$ of the containing function. • It is possible to store custom values in the arguments object, even outside the original index boundaries. • All values obtained from the arguments object are of type any. In non-strict ES mode the callee property holds a reference to the function being executed [ECMA11a(p.S10.6, p.p.61)]. Req. IDE-81: Arguments.callee (ver. 1) In N4JS and in ES strict mode the use of arguments.callee is prohibited. Req.
IDE-82: Arguments as formal parameter name (ver. 1) In N4JS, the formal parameters of the function cannot be named arguments. This applies to all variable execution environments like field accessors (getter/setter, Field Accessors (Getter/Setter)), methods (Methods) and constructors (Constructor and Classifier Type), where FormalParameter type is used. // regular function function a1(s1: string, n2: number) { var l: number = arguments.length; var s: string = arguments[0] as string; } class A { // property access get s(): string { return ""+arguments.length; } // 0 set s(n: number) { console.log( arguments.length ); } // 1 // method m(arg: string) { var l: number = arguments.length; var s: string = arguments[0] as string; } } // property access in object literals var x = { a:5, get b(): string { return ""+arguments.length } } // invalid: function z(){ arguments.length // illegal, see next lines // define arguments to be a plain variable of type number: var arguments: number = 4; } ### 6.2. ECMAScript 5 Function Definition #### 6.2.1. Function Declaration ##### 6.2.1.1. Syntax A function can be defined as described in [ECMA11a(p.S13, p.p.98)] and additional annotations can be specified. Since N4JS is based on [ECMA15a], the syntax contains constructs not available in [ECMA11a]. The newer constructs defined only in [ECMA15a] and proposals already implemented in N4JS are described in ECMAScript 2015 Function Definition and ECMAScript Proposals Function Definition. In contrast to plain JavaScript, function declarations can be used in blocks in N4JS. This is only true, however, for N4JS files, not for plain JS files. Syntax Function Declaration and Expression FunctionDeclaration <Yield>: => ({FunctionDeclaration} annotations+=Annotation* (declaredModifiers+=N4Modifier)* -> FunctionImpl <Yield,Yield,Expression=false> ) => Semi? 
; fragment AsyncNoTrailingLineBreak *: (declaredAsync?='async' NoLineTerminator)?; fragment FunctionImpl<Yield, YieldIfGenerator, Expression>*: 'function' ( ) ; TypeVariables? name=BindingIdentifier<Yield>? StrictFormalParameters<Yield=Generator> (-> ':' returnTypeRef=TypeRef)? ; fragment FunctionBody <Yield, Expression>*: <Expression> body=Block<Yield> | <!Expression> body=Block<Yield>? ; Properties of the function declaration and expression are described in Function Type. For this specification, we introduce a supertype $FunctionDefinition$ for both, $FunctionDeclaration$ and $FunctionExpression$. This supertype contains all common properties of these two subtypes, that is, all properties of $FunctionExpression$. Example 65. Function Declaration with Type Annotation // plain JS function f(p) { return p.length } // N4JS function f(p: string): number { return p.length } ##### 6.2.1.2. Semantics A function defined in a class’s method (or method modifier) builder is a method, see Methods for details and additional constraints. The metatype of a function definition is function type (Function Type), as a function declaration is only a different syntax for creating a Function object. Constraints for function type are described in Function Type. Another consequence is that the inferred type of a function definition $fdecl$ is simply its function type $F$. $\frac{\left[\phantom{\rule{-0.167em}{0ex}}\left[fdecl\right]\phantom{\rule{-0.167em}{0ex}}\right]}{\left[\phantom{\rule{-0.167em}{0ex}}\left[F\right]\phantom{\rule{-0.167em}{0ex}}\right]}$ Note that the type of a function definition is different from its return type $f.decl$! 1. In plain JavaScript, function declarations must only be located on top-level, that is they must not be nested in blocks. Since this is supported by most JavaScript engines, only a warning is issued. #### 6.2.2. Function Expression A function expression [ECMA11a(p.S11.2.5)] is quite similar to a function declaration. 
Thus, most details are explained in ECMAScript 5 Function Definition. ##### 6.2.2.1. Syntax FunctionExpression: ({FunctionExpression} FunctionImpl<Yield=false,YieldIfGenerator=true,Expression=true> ) ; ##### 6.2.2.2. Semantics and Type Inference In general, the inferred type of a function expression simply is the function type as described in Function Type. Often, the signature of a function expression is not explicitly specified but it can be inferred from the context. The following context information is used to infer the full signature: • If the function expression is used on the right hand side of an assignment, the expected return type can be inferred from the left hand side. • If the function expression is used as an argument in a call to another function, the full signature can be inferred from the corresponding type of the formal parameter declaration. Although the signature of the function expression may be inferred from the formal parameter if the function expression is used as argument, this inference has some conceptual limitations. This is demonstrated in the next example. Example 66. Inference Of Function Expression’s Signature In general, {function():any} is a subtype of {function():void} (cf. Function Type). When the return type of a function expression is inferred, this relation is taken into account which may lead to unexpected results as shown in the following code snippet: function f(cb: {function():void}) { cb() } f(function() { return 1; }); No error is issued: The type of the function expression actually is inferred to {function():any}, because there is a return statement with an expression. It is not inferred to {function():void}, even if the formal parameter of f suggests that. Due to the previously-stated relation {function():any} <: {function():void} this is correct – the client (in this case function f) works perfectly well even if cb returns something. 
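The runtime behavior behind this inference rule can be seen in a plain JavaScript sketch (not N4JS; names are illustrative): a caller expecting a void callback simply discards whatever the callback returns, so passing a value-returning function is harmless.

```javascript
// f mirrors a function with a {function():void} formal parameter:
// it calls the callback and ignores its result.
function f(cb) {
  cb(); // any return value is discarded
}

let called = false;
f(function () { called = true; return 1; }); // inferred {function():any}
console.log(called); // true -- f works fine although cb returns a value
```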
The contract of arguments states that the type of the argument is a subtype of the type of the formal parameter. This is what the inferencer takes into account! ### 6.3. ECMAScript 2015 Function Definition #### 6.3.1. Formal Parameters Parameter handling has been significantly upgraded in ECMAScript 6. It now supports parameter default values, rest parameters (variadics) and destructuring. Formal parameters can be modified to be either default or variadic. In case a formal parameter has no modifier, it is called normal. Modified parameters also become optional. Modifiers of formal parameters such as default or rest are neither evaluated nor rewritten in the transpiler. ##### 6.3.1.1. Optional Parameters An optional formal parameter can be omitted when calling a function/method. An omitted parameter has the value undefined. In case the omitted parameter is variadic, the value is an empty array. Parameters can not be declared as optional explicitly. Instead, being optional is true when a parameter is declared as default or variadic. Note that any formal parameter that follows a default parameter is itself also a default thus an optional parameter. ##### 6.3.1.2. Default Parameters A default parameter value is specified for a parameter via an equals sign (=). If a caller doesn’t provide a value for the parameter, the default value is used. Default initializers of parameters are specified at a formal parameter of a function or method after the equal sign using an arbitrary initializer expression, such as var = "s". However, this default initializer can be omitted. When a formal parameter has a declared type, the default initializer is specified at the end, such as: var : string = "s". The initializer expression is only evaluated in case no actual argument is given for the formal parameter. Also, the initializer expression is evaluated when the actual argument value is undefined. 
Formal parameters become default parameters implicitly when they are preceded by an explicit default parameter. In such cases, the default initializer is undefined. Req. IDE-14501: Default parameters (ver. 1) Any normal parameter which is preceded by a default parameter also becomes a default parameter. Its initializer is undefined. When a method is overridden, its default parameters are not part of the overriding method. Consequently, initializers of default parameters in abstract methods are obsolete. Variadic parameters are also called rest parameters. Marking a parameter as variadic indicates that the method accepts a variable number of parameters. A variadic parameter implies that the parameter is also optional, as the cardinality is defined as $[0..*]$. No further parameter can be defined after a variadic parameter. When no argument is given for a variadic parameter, an empty array is provided when using the parameter in the body of the function or method. Req. IDE-16: Variadic and optional parameters (ver. 1) For a parameter $p$, the following condition must hold: $p.var \to p.opt$. A parameter cannot be declared both variadic and with a default value. That is to say, one can either write name=expr (default) or ...name (variadic), but not ...name=expr. Declaring a variadic parameter of type $T$ causes the type of the method parameter to become Array<T>. That is, declaring function(...tags : string) causes tags to be an Array<string> and not just a scalar string value. To make this work at runtime, the compiler will generate code that constructs the parameter from the arguments object implicitly passed to the function. Req. IDE-17: Variadic at Runtime (ver. 1) At runtime, a variadic parameter is never set to undefined. Instead, the array may be empty.
This must be true even if preceding parameters are optional and no arguments are passed at runtime. For more constraints on using the variadic modifier, see Function-Object-Type. #### 6.3.2. Generator Functions Generators come together with the yield expression and can play three roles: the role of an iterator (data producer), of an observer (data consumer), and a combined role which is called coroutines. When calling a generator function or method, the returned generator object of type Generator<TYield,TReturn,TNext> can be controlled by its methods (cf. [ECMA15a(p.S14.4)], also see [Kuizinas14a]). ##### 6.3.2.1. Syntax Generator functions and methods differ from ordinary functions and methods only in the additional * symbol before the function or method name. The following syntax rules are extracted from the real syntax rules. They only display parts relevant to declaring a function or method as a generator. GeneratorFunctionDeclaration <Yield>: (declaredModifiers+=N4Modifier)* 'function' generator?='*' FunctionBody<Yield=true,Expression=false> ; GeneratorFunctionExpression: 'function' generator?='*' FunctionBody<Yield=true,Expression=true> ; GeneratorMethodDeclaration: annotations+=Annotation+ (declaredModifiers+=N4Modifier)* TypeVariables? generator?='*' NoLineTerminator LiteralOrComputedPropertyName<Yield> MethodParamsReturnAndBody<Generator=true> ##### 6.3.2.2. Semantics The basic idea is to make code dealing with Generators easier to write and more readable without changing their functionality. Take this example: Example 67. 
Two simple generator functions // explicit form of the return type function * countTo(iMax: int) : Generator<int,string,undefined> { for (var i=0; i<=iMax; i++) yield i; return "finished"; } var genObj1 = countTo(3); var values1 = [...genObj1]; // is [0,1,2,3] var lastObj1 = genObj1.next(); // is {value="finished", done=true} // shorthand form of the return type function * countFrom(start: int) : int { for (var i=start; i>=0; i--) yield i; return "finished"; } var genObj2 = countFrom(3); var values2 = [...genObj2]; // is [3,2,1,0] var lastObj2 = genObj2.next(); // is {value="finished", done=true} In the example above, two generator functions are declared. The first declares its return type explicitly whereas the second uses a shorthand form. Generator functions and methods return objects of the type Generator<TYield,TReturn,TNext>, which is a subtype of the Iterable<TYield> and Iterator<TYield> interfaces. Moreover, it provides the methods throw(exception: any) and return(value: TNext?) for advanced control of the generator object. The complete interface of the generator class is given below. The generator class public providedByRuntime interface Generator<out TYield, out TReturn, in TNext> extends Iterable<TYield>, Iterator<TYield> { public abstract next(value: TNext?): IteratorEntry<TYield> public abstract [Symbol.iterator](): Generator<TYield, TReturn, TNext> public abstract throw(exception: any): IteratorEntry<TYield>; public abstract return(value: TNext?): IteratorEntry<TReturn>; } Req. IDE-14370: Modifier * (ver. 1) 1. * may be used on declared functions and methods, and for function expressions. 2. A function or method f with a declared return type R that is declared * has an actual return type of Generator<TYield,TReturn,TNext>. 3. A generator function or method can have no declared return type, a shorthand form of a return type or an explicitly declared return type. 1.
The explicitly declared return type is of the form Generator<TYield,TReturn,TNext> with the type variables: 1. TYield as the expected type of the yield expression argument, 2. TReturn as the expected type of the return expression, and 3. TNext as both the return type of the yield expression. 2. The shorthand form only declares the type of TYield which implicitly translates to Generator<TYield,TReturn,any> as the return type. 1. The type TReturn is inferred to either undefined or any from the body. 2. In case the declared type is void, actual return type evaluates to Generator<undefined,undefined,any>. 3. If no return type is declared, both TYield and TReturn are inferred from the body to either any or undefined. TNext is any. 4. Given a generator function or method f with an actual return type Generator<TYield,TReturn,TNext>: 1. all yield statements in f must have an expression of type TYield. 2. all return statements in f must have an expression of type TReturn. 5. Return statements in generator functions or methods are always optional. 1. yield and yield* may only be in body of generator functions or methods. 2. yield expr takes only expressions expr of type TYield in a generator function or methods with the actual type Generator<TYield,TReturn,TNext>. 3. The return type of the yield expression is TNext. 4. yield* fg() takes only iterators of type Iterator<TYield>, and generator functions or methods fg with the actual return type Generator<? extends TYield,? extends TReturn,? super TNext>. 5. The return type of the yield* expression is any, since a custom iterator could return an entry {done=true,value} and any value for the variable value. Similar to async functions, shorthand and explicit form * function():int{}; and * function():Generator<int,TResult,any> are equal, given that the inferred TResult of the former functions equals to TResult in the latter function). 
In other words, the return type of generator functions or methods is wrapped when it is not explicitly defined as Generator already. Thus, whenever a nested generator type is desired, it has to be defined explicitly. Consider the example below. Type variables with generator methods. class C<T> { * genFoo(): T {} // equals to * genFoo(): Generator<T, undefined, any>; // note that TReturn depends on the body of genFoo() } function fn(c1: C<int>, c2: C<Generator<int,any,any>>) { c1.genFoo(); // returns Generator<int, undefined, any> c2.genFoo(); // returns Generator<Generator<int,any,any>, undefined, any> } ##### 6.3.2.3. Generator Arrow Functions As of now, generator arrow functions are not supported by ECMAScript 6, nor is support planned. However, introducing generator arrow functions in ECMAScript is still under discussion. For more information, please refer to ESDiscuss.org and StackOverflow.com. #### 6.3.3. Arrow Function Expression This is an ECMAScript 6 expression (see [ECMA15a(p.S14.2)]) for simplifying the definition of anonymous function expressions, a.k.a. lambdas or closures. The ECMAScript specification calls this a function definition even though they may only appear in the context of expressions. Along with assignments, arrow function expressions have the least precedence, i.e. they serve as the entry point for the expression tree. Arrow function expressions can be considered syntactic window-dressing for old-school function expressions and therefore do not support the benefits regarding parameter annotations, although parameter types may be given explicitly. The return type can be given as a type hint if desired, but this is not mandatory (if left out, the return type is inferred). The notation @=> stands for an async arrow function (Asynchronous Arrow Functions). ##### 6.3.3.1.
Syntax The simplified syntax reads like this: ArrowExpression returns ArrowFunction: =>( {ArrowFunction} ( '(' ( fpars+=FormalParameterNoAnnotations ( ',' fpars+=FormalParameterNoAnnotations )* )? ')' (':' returnTypeRef=TypeRef)? | fpars+=FormalParameterNoType ) '=>' ) ( (=> hasBracesAroundBody?='{' body=BlockMinusBraces '}') | body=ExpressionDisguisedAsBlock ) ; FormalParameterNoAnnotations returns FormalParameter: ; FormalParameterNoType returns FormalParameter: name=JSIdentifier; BlockMinusBraces returns Block: {Block} statements+=Statement*; ExpressionDisguisedAsBlock returns Block: {Block} statements+=AssignmentExpressionStatement ; AssignmentExpressionStatement returns ExpressionStatement: expression=AssignmentExpression; ##### 6.3.3.2. Semantics and Type Inference Generally speaking, the semantics are very similar to those of function expressions, but the devil’s in the details: • arguments: Unlike normal function expressions, an arrow function does not introduce an implicit arguments variable (Arguments Object); therefore any occurrence of it in the arrow function’s body always has the same binding as an occurrence of arguments in the lexical context enclosing the arrow function. • this: An arrow function does not introduce a binding of its own for the this keyword. That explains why uses in the body of an arrow function have the same meaning as occurrences in the enclosing lexical scope. As a consequence, an arrow function at the top level has both usages of arguments and this flagged as errors (the outer lexical context doesn’t provide definitions for them). • super: As with function expressions in general, whether of the arrow variety or not, the usage of super isn’t allowed in the body of arrow functions. In N4JS, a top-level arrow function can’t refer to this as there’s no outer lexical context that provides a binding for it. In N4JS, a top-level arrow function can’t include usages of arguments in its body, again because of the missing binding for it. ### 6.4.
ECMAScript Proposals Function Definition #### 6.4.1. Asynchronous Functions To improve language-level support for asynchronous code, there exists an ECMAScript proposal [45] based on Promises which are provided by ES6 as built-in types. N4JS implements this proposal. This concept is supported for declared functions and methods (Asynchronous Methods) as well as for function expressions and arrow functions (Asynchronous Arrow Functions). ##### 6.4.1.1. Syntax The following syntax rules are extracted from the real syntax rules. They only display parts relevant to declaring a function or method as asynchronous. AsyncFunctionDeclaration <Yield>: (declaredModifiers+=N4Modifier)* declaredAsync?='async' NoLineTerminator 'function' FunctionBody<Yield=false,Expression=false> ; AsyncFunctionExpression: declaredAsync?='async' NoLineTerminator 'function' FunctionBody<Yield=false,Expression=true> ; AsyncArrowExpression <In, Yield>: declaredAsync?='async' NoLineTerminator '(' (fpars+=FormalParameter<Yield> (',' fpars+=FormalParameter<Yield>)*)? ')' (':' returnTypeRef=TypeRef)? '=>' ( '{' body=BlockMinusBraces<Yield> '}' | body=ExpressionDisguisedAsBlock<In> ) ; AsyncMethodDeclaration: annotations+=Annotation+ (declaredModifiers+=N4Modifier)* TypeVariables? declaredAsync?='async' NoLineTerminator LiteralOrComputedPropertyName<Yield> MethodParamsReturnAndBody ’async’ is not a reserved word in ECMAScript and it can therefore be used either as an identifier or as a keyword, depending on the context. When used as a modifier to declare a function as asynchronous, then there must be no line terminator after the async modifier. This enables the parser to distinguish between using async as an identifier reference and a keyword, as shown in the next example. Example 68. 
Async as keyword and identifier

async                     (1)
function foo() {}
// vs
async function bar();     (2)

(1) In this snippet, the async on line 1 is an identifier reference (referencing a variable or parameter), and the function defined on line 2 is a non-asynchronous function. Automatic semicolon insertion adds a semicolon after the reference on line 1.

(2) In contrast, the async on line 4 is recognized as a modifier declaring the function as asynchronous.

##### 6.4.1.2. Semantics

The basic idea is to make code dealing with Promises easier to write and more readable without changing the functionality of Promises. Take this example:

A simple asynchronous function using async/await.

// some asynchronous legacy API using promises
interface DB {}
interface DBAccess {
    getDataBase(): Promise<DB,?>
    loadEntry(db: DB, id: string): Promise<string,?>
}
var access: DBAccess;

// our own function using async/await
async function loadAddress(id: string): string {
    try {
        var db: DB = await access.getDataBase();
        var entry: string = await access.loadEntry(db, id);
        return entry;
    } catch(err) {
        // either getDataBase() or loadEntry() failed
        throw err;
    }
}

The modifier async changes the return type of loadAddress() from string (the declared return type) to Promise<string,?> (the actual return type). For code inside the function, the return type is still string: the value in the return statement of the last line will be wrapped in a Promise. For client code outside the function, and in case of recursive invocations, the return type is Promise<string,?>. To raise an error, simply throw an exception; its value will become the error value of the returned Promise.

If the expression after an await evaluates to a Promise, execution of the enclosing asynchronous function will be suspended until either a success value is available (which will then make the entire await-expression evaluate to this success value and continue execution) or until the Promise is rejected (which will then cause an exception to be thrown at the location of the await-expression).
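The wrapping and suspension described above can be observed in plain ECMAScript async functions, on which the N4JS feature is modeled. The following standalone sketch is illustrative only (ordinary JavaScript with made-up names, not N4JS code):

```javascript
// The body returns a plain string, but callers receive a Promise:
// the return value is wrapped automatically, as described for loadAddress().
async function loadGreeting(name) {
  // await suspends until the promise settles, then yields its success value
  const base = await Promise.resolve("Hello");
  return base + ", " + name; // plain string inside; Promise<string> outside
}

const wrapped = loadGreeting("N4JS");
console.log(wrapped instanceof Promise); // true: the result is wrapped
wrapped.then(greeting => console.log(greeting)); // "Hello, N4JS"
```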
If, on the other hand, the expression after an await evaluates to a non-promise, the value will simply be passed through. In addition, a warning is shown to indicate the unnecessary await expression.

Note how the method loadAddress() above can be implemented without any explicit references to the built-in type Promise. In the above example we handle the errors of the nested asynchronous calls to getDataBase() and loadEntry() for demonstration purposes only; if we were not interested in the errors, we could simply remove the try/catch block and any errors would be forwarded to the caller of loadAddress().

Invoking an async function commonly takes one of two forms:

• var p: Promise<successType,?> = asyncFn()
• await asyncFn()

These patterns are so common that a warning is issued whenever both

1. Promise is omitted as the expected type; and
2. await is also omitted.

The warning aims at hinting that the caller may have forgotten to wait for the result, while remaining non-noisy.

1. async may be used on declared functions and methods as well as on function expressions and arrow functions.
2. A function or method that is declared async can have no declared return type, a shorthand form of a return type, or an explicitly declared return type.
   1. The explicitly declared return type is of the form Promise<R,E>, where R is the type of all return statements in the body and E is the type of exceptions thrown in the body.
   2. The shorthand form only declares the type R, which implicitly translates to Promise<R,?> as the actual return type.
   3. In case no return type is declared, the type R of Promise<R,?> is inferred from the body.
3. A function or method f with a declared return type R that is declared async has an actual return type of
   1. R if R is a subtype of Promise<?,?>,
   2. Promise<undefined,?> if R is type void,
   3. Promise<R,?> in all other cases (i.e. the declared return type R is wrapped in a Promise).
4. Return type inference is only performed when no return type is declared.
   1. The return type R of Promise<R,?> is inferred either as void or as any.
5. Given a function or method f that is declared async with a declared return type R, or with a declared return type Promise<R,?>, all return statements in f must have an expression of type R (and not of type Promise<R,?>).
6. await can be used in expressions directly enclosed in an async function, and behaves like a unary operator with the same precedence as yield in ES6.
7. Given an expression expr of type T, the type of (await expr) is inferred as T if T is not a Promise, or as S if T is a Promise with a success value of type S, i.e. T <: Promise<S,?>.

In other words, the return type R of async functions and methods will always be wrapped to Promise<R,?> unless R is a Promise already. As a consequence, nested Promises as the return type of an async function or method have to be stated explicitly, like Promise<Promise<R,?>,?>. When a type variable T is used to define the return type of an async function or method, it will always be wrapped. Consider the example below.

Example 69. Type variables with async methods.

interface I<T> {
    async foo(): T; // amounts to foo(): Promise<T,?>
}

function snafu(i1: I<int>, i2: I<Promise<int,?>>) {
    i1.foo(); // returns Promise<int,?>
    i2.foo(); // returns Promise<Promise<int,?>,?>
}

##### 6.4.1.3. Asynchronous Arrow Functions

An await expression is allowed in the body of an async arrow function but not in the body of a non-async arrow function. The semantics here are intentional and are in line with the similar constraint for function expressions.

### 6.5. N4JS Extended Function Definition

#### 6.5.1. Generic Functions

A generic function is a function with a list of generic type parameters. These type parameters can be used in the function signature to declare the types of formal parameters and the return type. In addition, the type parameters can be used in the function body, for example when declaring the type of a local variable.
In the following listing, a generic function foo is defined that has two type parameters S and T. Thereby S is used to declare the parameter type Array<S>, and T is used as the return type and to construct the returned value in the function body.

Generic Function Definition

function <S,T> foo(s: Array<S>): T {
    return new T(s);
}

If a generic type parameter is not used as a formal parameter type or the return type, a warning is generated.

#### 6.5.2. Promisifiable Functions

In many existing libraries, which were developed in pre-ES6-promise-API times, callback methods are used for asynchronous behavior. Such an asynchronous function follows this convention:

'function' name '(' arbitraryParameters ',' callbackFunction ')'

Usually the function returns nothing (void). The callback function usually takes two arguments, in which the first is an error object and the other is the result value of the asynchronous operation. The callback function is called from the asynchronous function, leading to nested function calls (aka 'callback hell').

In order to simplify the usage of this pattern, it is possible to mark such a function or method as @Promisifiable. It is then possible to 'promisify' an invocation of this function or method, which means no callback function argument has to be provided and a Promise will be returned. The function or method can then be used as if it were declared with async. This is particularly useful in N4JS definition files (.n4jsd) to allow using an existing callback-based API from N4JS code with the more convenient await.

Example 70. Promisifiable

Given a function with an N4JS signature

f(x: int, cb: {function(Error, string)}): void

this method can be annotated with Promisifiable as follows:

@Promisifiable f(x: int, cb: {function(Error, string)}): void

With this annotation, the function can be invoked in four different ways:

f(42, function(err, result1) { /* ...
*/ });                                                    // traditional
var promise: Promise<string,Error> = @Promisify f(42);    // promise
var result3: string = await @Promisify f(42);             // long
var result4: string = await f(42);                        // short

The first line is only provided for completeness and shows that a promisifiable function can still be used in the ordinary way by providing a callback; no special handling will occur in this case. The second line shows how f can be promisified using the @Promisify annotation: no callback needs to be provided and, instead, a Promise will be returned. We can either use this promise directly or immediately await on it, as shown in line 3. The syntax shown in line 4 is merely shorthand for await @Promisify, i.e. the annotation is optional after await.

Req. IDE-87: Promisifiable (ver. 1)

A function or method $f$ can be annotated with @Promisifiable if and only if the following constraints hold:

1. The last parameter of $f$ is a function (the callback).
2. The callback has a signature of
   • {function(E, T0, T1, …, Tn): V}, or
   • {function(T0, T1, …, Tn): V}

   in which $E$ is type Error or a subtype thereof, and $T_0, \ldots, T_n$ are arbitrary types except Error or its subtypes. $E$, if given, is then the type of the error value, and $T_0, \ldots, T_n$ are the types of the success values of the asynchronous operation. Since the return value of the synchronous function call is not available when using @Promisify, $V$ is recommended to be void, but it can be any type.
3. The callback parameter may be optional.[46]

According to [Req-IDE-87], a promisifiable function or method may or may not have a non-void return type, and only the first parameter of the callback is allowed to be of type Error; all other parameters must be of other types.
A promisifiable function $f$ with one of the two valid signatures given in [Req-IDE-87] can be promisified with Promisify or used with await if and only if the following constraints hold:

1. Function $f$ must be annotated with @Promisifiable.
2. Using @Promisify f() without await returns a promise of type Promise<S,F> where
   • $S$ is IterableN<T0,…,Tn> if $n\ge 2$, T if $n=1$, and undefined if $n=0$.
   • $F$ is E if given, undefined otherwise.
3. Using await @Promisify f() returns a value of type IterableN<T0,…,Tn> if $n\ge 2$, of type T if $n=1$, and of type undefined if $n=0$.
4. In case of using an await, the annotation can be omitted, i.e. await @Promisify f() is equivalent to await f().
5. Only call expressions using f as target can be promisified; in other words, this is illegal:

var pf = @Promisify f; // illegal code!
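Outside N4JS, the effect of @Promisify on a call can be approximated by hand. The wrapper below is an illustrative plain-JavaScript sketch (the promisify helper and this f are made up here, not part of N4JS) of turning a callback-last, error-first function into one that returns a Promise:

```javascript
// Generic promisifier for error-first callback APIs.
function promisify(fn) {
  return (...args) =>
    new Promise((resolve, reject) => {
      fn(...args, (err, result) => {
        if (err) reject(err); // error value rejects the Promise
        else resolve(result); // success value resolves it
      });
    });
}

// A callback-style function shaped like the promisifiable f(x, cb) above.
function f(x, cb) {
  cb(null, "got " + x);
}

const fAsync = promisify(f);
fAsync(42).then(result => console.log(result)); // "got 42"
```

Node.js ships a similar helper as util.promisify for exactly this callback convention.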
https://math.stackexchange.com/questions/663518/idempotents-in-a-ring-without-unity-rng-and-no-zero-divisors
# Idempotents in a ring without unity (rng) and no zero divisors.

Question: Given a ring without unity and with no zero divisors, is it possible that there are idempotents other than zero?

Def: $a$ is idempotent if $a^2 = a$.

Originally the problem was to show that $1$ and $0$ are the only idempotents in a ring with unity and no zero divisors, but I wonder what happens if we remove the unity condition. I am trying to find a ring with idempotents not equal to $0$ or $1$. So far my biggest struggle has been coming up with examples of rings with the given properties. Does anyone have any hints? How should I attack this problem?

• Every rng (a ring without unity) can be embedded in a ring, but I don't know if this can be done while preserving the no-zero-divisors property. – Jim Feb 4 '14 at 16:59
• @Jim It is actually mentioned here that one might have trouble preserving the no-zero-divisors property when embedding a rng into a ring. – Improve Feb 4 '14 at 20:17

Proposition: In a rng $R$ which does not have nonzero zero divisors, a nonzero idempotent of $R$ must be an identity for the rng.

Proof: Let $e$ be a nonzero idempotent. For every $r \in R$ we have $e(er-r) = e^2r - er = er - er = 0$ and likewise $(re-r)e = re^2 - re = 0$. Since $e$ is nonzero and $R$ has no nonzero zero divisors, we conclude $er-r = 0 = re-r$, and so $e$ is an identity element.
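A quick concrete check, for illustration: the even integers 2Z form a rng without unity and without zero divisors, and, consistent with the proposition (a nonzero idempotent would have to be an identity, which 2Z lacks), a brute-force scan over a small range finds no nonzero idempotent. Sketched in JavaScript:

```javascript
// Scan the even integers in [-100, 100] for idempotents (e*e === e).
const evens = [];
for (let e = -100; e <= 100; e += 2) evens.push(e);

const idempotents = evens.filter(e => e * e === e);
console.log(idempotents); // [ 0 ] -- zero is the only idempotent found
```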
https://codereview.stackexchange.com/questions/41732/checking-if-vector-is-normalized
# Checking if vector is normalized

I tried to check if my vector struct is normalized, and I ended up with this code:

public bool IsNormalized
{
    get
    {
        double len = Length; // Math.Sqrt((X * X) + (Y * Y) + (Z * Z)) -- X, Y, Z are in double format
        const double min = 1 - 1e-14;
        const double max = 1 + 1e-14;
        return (len >= min && len <= max);
    }
}

Is this solution OK? I read that double has 15 digits of precision, but 1 + 1E-15 gives 1, so I changed to E14. Is this all good? I need the best accuracy.

This seems OK, depending on how precise you want it to be (some margin of error will always be needed since this is floating point, but how much will depend on your needs).

Alternatively, you could check Math.Abs(1 - len) < 1e-14, but I suspect the same precision problems will remain.

I was somewhat surprised that 1 + 1e-15 equals 1, though, because doubles have 52 bits of precision, which should be more than enough to store a difference of 15 digits (-log2(1/10^15) gives me around 49.8, which is less than 52 bits). So I double-checked, and 1 - 1.0e-15 does not give me 1.

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine(1 - 1.0e-15); // Does NOT print 1
    }
}

Same thing for 1 + 1.0e-15 (though this one is trickier):

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine(1 + 1.0e-15);      // Prints 1
        Console.WriteLine(1 + 1.0e-15 == 1); // ...but this prints False
    }
}

As a final note, Sqrt(1) == 1, so instead of using the Length you could use the SquaredLength and save a (potentially expensive) Sqrt operation.
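The SquaredLength suggestion in the final note can be sketched as follows (JavaScript here purely for illustration; the C# version is analogous). One caveat worth adding: near 1, |len^2 - 1| is about 2*|len - 1|, so the tolerance roughly doubles when applied to the squared length:

```javascript
// Checks normalization via the squared length, skipping the sqrt entirely.
// Epsilon is doubled relative to the 1e-14 used on the plain length.
function isNormalized(x, y, z, epsilon = 2e-14) {
  const squaredLength = x * x + y * y + z * z;
  return Math.abs(squaredLength - 1) < epsilon;
}

console.log(isNormalized(1, 0, 0));     // true
console.log(isNormalized(0.6, 0.8, 0)); // true (0.36 + 0.64 = 1)
console.log(isNormalized(1, 1, 0));     // false (squared length is 2)
```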
https://realnfo.com/toc/Electrical_Circuit_Analysis/Frequency_Response/Bode_Plots
# Bode Plots

The frequency range required in frequency response is often so wide that it is inconvenient to use a linear scale for the frequency axis. Also, there is a more systematic way of locating the important features of the magnitude and phase plots of the transfer function. For these reasons, it has become standard practice to use a logarithmic scale for the frequency axis and a linear scale in each of the separate plots of magnitude and phase. Such semilogarithmic plots of the transfer function, known as Bode plots, have become the industry standard.

Bode plots are semilog plots of the magnitude (in decibels) and phase (in degrees) of a transfer function versus frequency.

Bode plots contain the same information as the nonlogarithmic plots discussed in the previous section, but they are much easier to construct, as we shall see shortly.

The transfer function can be written as
$$\mathbf{H} = H \angle \phi = H e^{j\phi}$$
Taking the natural logarithm of both sides,
$$\ln \mathbf{H} = \ln H + \ln e^{j\phi} = \ln H + j\phi$$
Thus, the real part of $\ln \mathbf{H}$ is a function of the magnitude while the imaginary part is the phase. In a Bode magnitude plot, the gain
$$H_{\text{dB}} = 20 \log_{10} H$$
is plotted in decibels (dB) versus frequency. Table 1 provides a few values of $H$ with the corresponding values in decibels. In a Bode phase plot, $\phi$ is plotted in degrees versus frequency. Both magnitude and phase plots are made on semilog graph paper.

Table 1: Specific gains and their decibel values.
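The decibel conversion above is easy to check numerically. The sample gains below are illustrative picks (not necessarily the rows of Table 1), sketched in JavaScript:

```javascript
// H_dB = 20 * log10(H): every factor of 10 in gain adds 20 dB,
// and a factor of 2 adds roughly 6 dB.
const toDecibels = (H) => 20 * Math.log10(H);

for (const H of [0.01, 0.1, 1, 2, 10, 100]) {
  console.log(`H = ${H} -> ${toDecibels(H).toFixed(2)} dB`);
}
// H = 1 maps to 0 dB, H = 10 to 20 dB, and H = 100 to 40 dB.
```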
http://loominate.net/2012/03/19/brainmess-extract-jump-methods/?replytocom=21
# Brainmess: Extract Jump Methods

Today, I'll start to refactor the Brainmess program. In the first post I gave an "all-in-one" solution. Next I added some automated tests to give me some confidence that I don't break anything during the process. The last time that I spoke about Brainmess, I just explained my implementation.

Notice that in the switch statement every case, except the last two cases, is one line of code (followed by a break). The last two cases are several lines long. These two are prime candidates for Extract Method. Why? The first reason is to reduce the length and nesting level of the program. The second is that I suspect these methods, which are concerned with finding matching brackets, are good candidates for unit testing. So I extract out the JumpForward and JumpBackward methods. The main method now looks like this:

// skip lines
case '[':
    if (tape[tc] == 0)
    {
        pc = JumpForward(program, pc);
    }
    break;
case ']':
    if (tape[tc] != 0)
    {
        pc = JumpBackward(program, pc);
    }
    break;
// skip lines

I think this is already cleaner. The methods make it clear that in one case we are jumping forward, and in the other we are jumping backward. In both cases, the methods scan the program starting from the current position and return the location of the matching bracket. The methods look like this:

private static int JumpForward(string program, int pc)
{
    int nestLevel = 1;
    while (nestLevel > 0)
    {
        char instruction = program[pc];
        if (instruction == '[')
        {
            nestLevel++;
        }
        else if (instruction == ']')
        {
            nestLevel--;
        }
        pc++;
    }
    return pc;
}

private static int JumpBackward(string program, int pc)
{
    pc -= 2;
    int nestLevel = 1;
    while (nestLevel > 0)
    {
        char instruction = program[pc];
        if (instruction == '[')
        {
            nestLevel--;
        }
        else if (instruction == ']')
        {
            nestLevel++;
        }
        pc--;
    }
    pc++;
    return pc;
}

You can see the change I made (and the full files) by visiting my GitHub repository and viewing commit 4b15b4ca.
After I made these changes, I ran my tests and found that they still passed.

Now, I'm not totally satisfied with the new methods. The first problem I want to address is that the two methods look almost identical. The main difference between the two is whether we increment or decrement a variable. The second problem has to do with the pc -= 2 in the JumpBackward method. What is going on there? And why is nestLevel initialized to 1 in both cases?

In the Run method we always increment the pc variable before executing the instruction. Therefore, when we go to execute the jump instructions, the pc variable is pointing to the instruction immediately after the jump instruction. In the case of JumpForward this means that the nestLevel is indeed 1. We are nested one level deep relative to the current jump instruction. In the case of JumpBackward we are in the same position, but only if we back up two instructions.

The third problem with this code is that the jump instructions have this strange precondition that the pc variable needs to be positioned 1 after the bracket that caused the jump. That seems odd. I'm going to create one new method named FindMatch that fixes all of these problems.

private static int JumpForward(string program, int pc)
{
    const int increment = 1;
    return FindMatch(program, pc - 1, increment) + 1;
}

private static int JumpBackward(string program, int pc)
{
    const int increment = -1;
    return FindMatch(program, pc - 1, increment);
}

/// <summary>
/// Finds the match for the bracket pointed to by
/// pc in the program. Increment tells the algorithm
/// which way to search.
/// </summary>
private static int FindMatch(string program, int pc, int increment)
{
    int nestLevel = 1;
    pc += increment;
    while (nestLevel > 0)
    {
        char instruction = program[pc];
        if (instruction == '[') nestLevel += increment;
        else if (instruction == ']') nestLevel -= increment;
        pc += increment;
    }
    return pc - increment;
}

It solves the first problem because it takes an increment variable that indicates which way to search through the program string. This allows us to have just one method that knows how to find matching brackets. (I still don't like this exactly, but I'll talk more about this later.) It solves the second and third problems by stating the fact that it expects pc to point to an actual bracket. This method then finds the matching bracket. Like before, we know that the nestLevel is 1. This is only true, however, because on the next line we either move forward (or backward) to get "inside" of the loop.

I then updated the jump methods to delegate to FindMatch. They pass in 1 or -1 as appropriate for the increment parameter. In addition, they don't pass in the current pc value. They pass in pc - 1, which makes sure we are telling FindMatch to start with a bracket. This change can be found at commit abe37577.

Again, I ran my tests and they passed.

Now the last refactoring for this post. I'm going to convert FindMatch into an extension method that can be used on any string, and I'm going to remove the increment parameter. This is what the Run method looks like after the change:

// skip lines
case '[':
    if (tape[tc] == 0)
    {
        pc = program.FindMatch(pc - 1) + 1;
    }
    break;
case ']':
    if (tape[tc] != 0)
    {
        pc = program.FindMatch(pc - 1);
    }
    break;
// skip lines

Why did I remove the increment parameter? It didn't make sense to me. The method FindMatch should find the match and determine which way to search. So here is the implementation.
using System;

namespace BrainmessShort
{
    public static class StringExtensions
    {
        /// <summary>
        /// Finds the match for the bracket pointed to by
        /// pc in the program. Increment tells the algorithm
        /// which way to search.
        /// </summary>
        private static int FindMatch(this string program, int pc, int increment)
        {
            int nestLevel = 1;
            pc += increment;
            while (nestLevel > 0)
            {
                char instruction = program[pc];
                if (instruction == '[') nestLevel += increment;
                else if (instruction == ']') nestLevel -= increment;
                pc += increment;
            }
            return pc - increment;
        }

        public static int FindMatch(this string program, int pc)
        {
            if (program[pc] == '[') return program.FindMatch(pc, 1);
            if (program[pc] == ']') return program.FindMatch(pc, -1);
            throw new ArgumentException("The character at specified location is not a square bracket");
        }
    }
}

You can see that there is one public version of FindMatch that determines the value of increment and then delegates to the private one. All the code for this change can be found at commit abe37577.

Finally, I reran all my tests and they passed.

I am a software developer and part-time professor. I enjoy studying and discussing mathematics, computer science and software development. This entry was posted in software development. Bookmark the permalink.

### 7 Responses to Brainmess: Extract Jump Methods

1. Jmaxxz says:

   The all-in-one implementation in your first post is a great example of the loop-switch sequence antipattern. http://en.wikipedia.org/wiki/Loop-switch_sequence

2. Michael says:

   Based on my reading of the link you posted, it isn't actually an example of that anti-pattern. (It's specifically exempted.)

3. Jmaxxz says:

   I suppose if you view this as an event handler, or an event-driven finite state machine loop, then I would agree.
   I would not have called the file stream an event stream, though I can see how this is one possible way to model it. If the intent was to model it as an event-driven finite state machine or an event-handler loop, then the issue would be the business logic being handled inline instead of by dedicated event handlers (with the switch just being the event router). The reason I called it this antipattern is that when I read the initial switch-driven architecture, I mentally modeled it as a command pattern. Since it was not modeled this way in the code in the article, I was incorrect in calling it this antipattern.

4. Michael says:

   This is the key quote, I think: "…it is only considered incorrect when used to model a known sequence of steps". We are never executing a known sequence of steps in that implementation. (I agree it's got problems, hence my series of posts; but not necessarily this anti-pattern.)

5. Pingback: Loop Invariant Proofs | Loominate
https://www.physicsforums.com/threads/rotating-disk-translation.52314/
Rotating Disk translation

1. Nov 11, 2004 Zenshin

Hello. Could anyone give me a hint on this problem? There's a disk (mass m and radius a) rotating with angular velocity w0 (only rotation). If this disk is translating in the xy plane, parallel with the y axis, with its center aligned at x0, how can I describe the angular momentum L(t) and its components Lx, Ly and Lz (with respect to the xyz coordinate system)? Any ideas? (I think it'll have varying Lx and Ly and a fixed Lz.)

2. Nov 11, 2004 Staff: Mentor

I assume you are given the translational speed? In any case, the total angular momentum is the sum of:

(1) the angular momentum of the disk due to its rotation about the center of mass
(2) the angular momentum of the disk due to the translation of its center of mass (consider the mass concentrated at the center of mass)

3. Nov 11, 2004 Zenshin

Oh yes, I forgot, the translation velocity is also given (v). But, since the disc translates in a linear way (it only rotates about itself), how can I define an angular momentum for its translation? I think the parallel axis theorem (Steiner) doesn't apply here. Thanks

4. Nov 11, 2004 Staff: Mentor

The angular momentum (with respect to the origin) of a moving particle is defined as $\vec{L} = \vec{r}\times\vec{p}$, where $\vec{p}$ is the linear momentum and $\vec{r}$ is the position vector.
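Using Doc Al's definition, a quick numerical sketch (illustrative values only, JavaScript just for the arithmetic) shows that the translational part r × p of the disk's angular momentum points along z and stays constant as the center moves parallel to the y axis:

```javascript
// L = r x p for the center of mass: r = (x0, y(t), 0), p = (0, m*v, 0).
function crossProduct(a, b) {
  return [
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0],
  ];
}

const m = 2.0, v = 3.0, x0 = 1.5; // illustrative values
for (const y of [0, 1, 2]) {      // center position at different times
  const L = crossProduct([x0, y, 0], [0, m * v, 0]);
  console.log(L); // always [0, 0, x0*m*v] = [0, 0, 9]
}
```

If the disk spins about the z axis, the spin contribution I*w0 = (1/2)*m*a^2*w0 adds a second constant z component, so under that reading Lx and Ly are zero and Lz is fixed.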
https://www.jobilize.com/physics/section/problem-solving-strategies-for-wave-optics-by-openstax?qcr=www.quizover.com
# 27.7 Thin film interference (Page 4/6)

The wings of certain moths and butterflies have nearly iridescent colors due to thin film interference. In addition to pigmentation, the wing's color is affected greatly by constructive interference of certain wavelengths reflected from its film-coated surface. Car manufacturers are offering special paint jobs that use thin film interference to produce colors that change with angle. This expensive option is based on the variation of thin film path length differences with angle. Security features on credit cards, banknotes, driving licenses and similar items prone to forgery use thin film interference, diffraction gratings, or holograms. Australia led the way with dollar bills printed on polymer with a diffraction grating security feature making the currency difficult to forge. Other countries such as New Zealand and Taiwan are using similar technologies, while United States currency includes a thin film interference effect.

## Making connections: take-home experiment—thin film interference

One feature of thin film interference and diffraction gratings is that the pattern shifts as you change the angle at which you look or move your head. Find examples of thin film interference and gratings around you. Explain how the patterns change for each specific example. Find examples where the thickness changes, giving rise to changing colors. If you can find two microscope slides, then try observing the effect shown in [link]. Try separating one end of the two slides with a hair or maybe a thin piece of paper and observe the effect.

## Problem-solving strategies for wave optics

Step 1. Examine the situation to determine that interference is involved. Identify whether slits or thin film interference are considered in the problem.

Step 2. If slits are involved, note that diffraction gratings and double slits produce very similar interference patterns, but that gratings have narrower (sharper) maxima.
Single slit patterns are characterized by a large central maximum and smaller maxima to the sides.

Step 3. If thin film interference is involved, take note of the path length difference between the two rays that interfere. Be certain to use the wavelength in the medium involved, since it differs from the wavelength in vacuum. Note also that there is an additional $\lambda /2$ phase shift when light reflects from a medium with a greater index of refraction.

Step 4. Identify exactly what needs to be determined in the problem (identify the unknowns). A written list is useful. Draw a diagram of the situation. Labeling the diagram is useful.

Step 5. Make a list of what is given or can be inferred from the problem as stated (identify the knowns).

Step 6. Solve the appropriate equation for the quantity to be determined (the unknown), and enter the knowns. Slits, gratings, and the Rayleigh limit involve equations.

Step 7. For thin film interference, you will have constructive interference for a total shift that is an integral number of wavelengths. You will have destructive interference for a total shift of a half-integral number of wavelengths. Always keep in mind that crest to crest is constructive whereas crest to trough is destructive.
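The bookkeeping in Steps 3 and 7 can be sketched numerically. Here is a minimal JavaScript sketch for a soap film in air, assuming hypothetical values for the wavelength and index; only the ray reflecting off the top surface picks up the $\lambda/2$ phase shift:

```javascript
// Minimum soap-film thickness for constructive reflection.
// One ray reflects off the top surface (lambda/2 shift: low -> high n);
// the other travels an extra 2t inside the film, where the wavelength
// is lambda / n. Constructive interference then requires
//   2 * t * n = (m + 1/2) * lambda,  m = 0, 1, 2, ...
function minConstructiveThickness(lambdaVacuumNm, nFilm) {
  // m = 0 gives the thinnest film: t = lambda / (4 n)
  return lambdaVacuumNm / (4 * nFilm);
}

const lambda = 532; // nm, green light (assumed value)
const n = 1.33;     // soap-film index of refraction (assumed value)
console.log(minConstructiveThickness(lambda, n).toFixed(1)); // -> "100.0"
```

The same function evaluated for other wavelengths shows why a film of varying thickness reflects varying colors.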
# GATE2016-1-14

For a floating body, buoyant force acts at the

1. centroid of the floating body
2. center of gravity of the body
3. centroid of the fluid vertically below the body
4. centroid of the displaced fluid

## Related questions

A fluid (Prandtl number, $P_r=1$) at $500\:K$ flows over a flat plate of $1.5\:m$ length, maintained at $300\: K$. The velocity of the fluid is $10 \: m/s$. Assuming kinematic viscosity, $v=30\times 10^{-6}$ $m^2/s$, the thermal boundary layer thickness (in $mm$) at $0.5 \:m$ from the leading edge is __________

Oil (kinematic viscosity, $v_{\text{oil}}=1.0\times 10^{-5} \:m^2/s$) flows through a pipe of $0.5$ $m$ diameter with a velocity of $10$ $m/s$. Water (kinematic viscosity, $v_w=0.89\times 10^{-6}\:m^2/s$) is flowing through a model pipe of diameter $20 \:mm$. For satisfying the dynamic similarity, the velocity of water (in $m/s$) is __________
## 7. More philosophical questions 1. ### Can we design Boltzmann machines that are interpretable? #### Problem 7.1. [Jason Morton] Can we design Boltzmann machines that are interpretable? In other words, can we construct useful Boltzmann machines that admit a simple explanation in human terms, and behaves in a predictable manner with certainty? • ### Are there biologically plausible extensions of Boltzmann Machines for neural networks? #### Problem 7.2. Are there biologically plausible extensions of Boltzmann Machines for neural networks? • ### How to use RBMs to do statistical inference for a physical system, or properties of a network? #### Problem 7.3. How to use RBMs to do statistical inference for a physical system, or properties of a network? • ### The differences between RBM variants #### Problem 7.4. What are the differences between classical RBMs, Quantum RBMs, and Complex valued RBMs with quantum states? Cite this as: AimPL: Boltzmann Machines, available at http://aimpl.org/boltzmann.
# Find point in 3D space based on start point, three angles and a distance (need example)

I know this has been asked before but the answer wasn't very helpful, sorry. I need an example and to see each step of the equation being solved. Let's say we have 45 degree angles to each axis and starting at point A(0,0,0). If the distance is 2 what are the coordinates of point B?

- you can't have 45 degree angles to each axis: the direction cosines must satisfy $\cos^2\alpha+\cos^2\beta+\cos^2\gamma=1$, and three $45^\circ$ angles would give $3/2$. –  Robert Mastragostino May 24 '12 at 12:50

A point in 3D space is specified by three pieces of data. In your case you need two angles and a distance. In spherical coordinates two angles are sufficient to specify a location on the sphere, and the radius of the sphere scales each coordinate in cartesian space. –  half-integer fan Jan 2 '13 at 1:53

First, let's put B'' 2 units away at angle 0; explicitly B''(2,0,0). Since we know B is $45^\circ$ away from the $x$-axis, we know by basic trig that B'$(\sqrt{2},\sqrt{2},0)$. From there, we say, okay; if it's also $45^\circ$ from the $z$-axis, then how do we get that? The $z$-coordinate gets $2\cos(45^\circ)=\sqrt{2}$ (If you draw the picture, you can see the side adjacent to the angle is on the $z$-axis). Similarly, the radius in the $xy$-plane needs to be scaled to "lift" B' into B; the scale factor naturally should be $\sin(45^\circ)=\sqrt{2}/2$; because the scaling must happen in both directions, we have: B$(\sqrt{2}\cdot\tfrac{\sqrt{2}}{2}, \sqrt{2}\cdot\tfrac{\sqrt{2}}{2}, \sqrt{2}) =$ B$(1,1,\sqrt{2})$, which is indeed 2 units from the origin.

The trick is being careful about which angle represents each direction, and making sure your sines and cosines work under the conventions you're using. Of course, with $45^\circ$, you can be a lot sloppier; however the method I used will work for general angles (with adjustments for sign).
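The construction above is just the spherical-to-Cartesian conversion; a quick JavaScript check (function and variable names are mine) confirms the numbers:

```javascript
// Convert (distance, azimuth measured from the x-axis in the xy-plane,
// polar angle measured from the z-axis) to Cartesian coordinates.
function toCartesian(r, azimuth, polar) {
  return {
    x: r * Math.sin(polar) * Math.cos(azimuth),
    y: r * Math.sin(polar) * Math.sin(azimuth),
    z: r * Math.cos(polar),
  };
}

const deg = Math.PI / 180;
const B = toCartesian(2, 45 * deg, 45 * deg);
console.log(B.x.toFixed(3), B.y.toFixed(3), B.z.toFixed(3)); // -> 1.000 1.000 1.414
```

The printed point is $(1, 1, \sqrt{2})$, at distance $\sqrt{1+1+2}=2$ from the origin, as expected.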
The object initializer spreads properties from defaults and unsafeOptions source objects. The order in which the source objects are specified is important: later source object properties overwrite earlier ones.

Filling an incomplete object with default property values is an efficient strategy to make your code safe and durable. No matter the situation, the object always contains the full set of properties, and undefined values cannot be generated.

The function parameters implicitly default to undefined. Normally a function that is defined with a specific number of parameters should be invoked with the same number of arguments. In such a case the parameters get the values you expect: the invocation multiply(5, 3) makes the parameters a and b receive the corresponding 5 and 3 values. The multiplication is calculated as expected: 5 * 3 = 15.

What happens when you omit an argument on invocation? The parameter inside the function becomes undefined. Let's slightly modify the previous example by calling the function with just one argument: function multiply(a, b) { } is defined with two parameters a and b. The invocation multiply(5) is performed with a single argument: as a result the a parameter is 5, but the b parameter is undefined.

Tip 6: Use default parameter value

Sometimes a function does not require the full set of arguments on invocation. You can simply set defaults for parameters that don't have a value. Recalling the previous example, let's make an improvement. If the b parameter is undefined, it gets assigned a default value of 2: The function is invoked with a single argument multiply(5). Initially the a parameter is 5 and b is undefined. The conditional statement verifies whether b is undefined. If it happens, the b = 2 assignment sets a default value.
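A runnable version of the multiply example using the ES2015 default parameter syntax described above (a minimal sketch of the pattern):

```javascript
// ES2015 default parameter: b falls back to 2 when the argument
// is omitted or explicitly undefined.
function multiply(a, b = 2) {
  return a * b;
}

console.log(multiply(5, 3));         // -> 15 (both arguments supplied)
console.log(multiply(5));            // -> 10 (b defaults to 2)
console.log(multiply(5, undefined)); // -> 10 (undefined also triggers the default)
// Note: null does NOT trigger the default; multiply(5, null) evaluates 5 * null -> 0
```

The undefined-vs-null distinction is worth remembering: default parameters only fill in for undefined.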
While the provided way to assign default values works, I don't recommend comparing directly against undefined. It's verbose and looks like a hack. A better approach is to use the ES2015 default parameters feature. It's short, expressive, and involves no direct comparisons with undefined. Modifying the previous example with a default parameter for b indeed looks great: b = 2 in the function signature makes sure that if b is undefined, the parameter is defaulted to 2.

Probability textbooks tend to be too simple, ignoring many important concepts and succumbing to the pedagogical issues we have discussed, or focus on the myriad technical details of probability theory and hence quickly fall beyond the proficiency of many readers. My favorite treatment of the more formal details of probability theory, and its predecessor measure theory, is Folland (1999), who spends significant time discussing concepts between the technical details.

## 2.1 Probability Distributions

From an abstract perspective, probability is a positive, conserved quantity which we want to distribute across a space, $X$. We take the total amount of this conserved quantity to be 1 with arbitrary units, but the mathematical consequences are the same regardless of this scaling. From this perspective probability is simply any abstract conserved quantity – in particular it does not refer to anything inherently random or uncertain.

A probability distribution defines a mathematically self-consistent allocation of this conserved quantity across $X$. Letting $A$ be a sufficiently well-defined subset of $X$, we write $\mathbb{P}_{\pi}[A]$ as the probability assigned to $A$ by the probability distribution $\pi$. Importantly, we want this allocation to be self-consistent – the allocation to any collection of disjoint sets, $A_{n} \cap A_{m} = \emptyset$, $n \neq m$, should be the same as the allocation to the union of those sets, $$\mathbb{P}_{\pi}\left[ \bigcup_{n=1}^{N} A_{n} \right] = \sum_{n=1}^{N} \mathbb{P}_{\pi}[A_{n}].$$ In other words, no matter how we decompose the space $X$, or any well-defined subsets of $X$, we conserve probability.
For a finite collection of sets this self-consistency property is known as finite additivity and would be sufficient if there were only a finite number of well-defined subsets in $X$. If we want to distribute probability across spaces with an infinite number of subsets, such as the real numbers, however, then we need to go a bit further and require self-consistency over any countable collection of disjoint sets, $$\mathbb{P}_{\pi}\left[ \bigcup_{n=1}^{\infty} A_{n} \right] = \sum_{n=1}^{\infty} \mathbb{P}_{\pi}[A_{n}].$$ In particular, this property allows us to cover complex neighborhoods, such as that enclosed by a smooth surface, with an infinite collection of sets and then calculate the probability allocated to that neighborhood.

In addition to self-consistency we have to ensure that we assign all of the total probability in our allocation. This requires that all of the probability is allocated to the full space, $$\mathbb{P}_{\pi}[X] = 1.$$

These three conditions completely specify a valid probability distribution, although to be formal we have to be careful about what we mean by "well-defined" subsets of $X$. Somewhat unnervingly we cannot construct an object that self-consistently allocates probability to every subset of $X$ because of some very weird, pathological subsets. Fortunately the same properties that make these subsets pathological also prevent them from belonging to any $\sigma$-algebra; consequently we can construct our probability distribution relative to a given $\sigma$-algebra, $\mathcal{X}$. Formally, then, probability theory is defined by positivity, countable additivity, and normalization, which we can write as: $$\mathbb{P}_{\pi}[A] \geq 0, \quad \mathbb{P}_{\pi}\left[ \bigcup_{n=1}^{\infty} A_{n} \right] = \sum_{n=1}^{\infty} \mathbb{P}_{\pi}[A_{n}] \text{ for disjoint } A_{n}, \quad \mathbb{P}_{\pi}[X] = 1.$$

The more familiar rules of probability theory can all be derived from these axioms. For example the last self-consistency condition implies that $\mathbb{P}_{\pi}[A] + \mathbb{P}_{\pi}[A^{c}] = \mathbb{P}_{\pi}[X] = 1$, or $\mathbb{P}_{\pi}[A] = 1 - \mathbb{P}_{\pi}[A^{c}]$. A probability distribution is then completely specified by the triple $(X, \mathcal{X}, \pi)$, which is often denoted more compactly as $x \sim \pi$, where $x \in X$ denotes the space, $\pi$ denotes the probability distribution, and a valid $\sigma$-algebra is assumed.
## 2.2 Expectation Values

The allocation of probability across a space immediately defines a way to summarize how functions of the form $f : X \rightarrow \mathbb{R}$ behave. Expectation values, $\mathbb{E}_{\pi}[f]$, reduce a function to a single real number by averaging the function output at every point, $f(x)$, weighted by the probability assigned around that point. This weighting process emphasizes how the function behaves in neighborhoods of high probability while diminishing its behavior in neighborhoods of low probability.

How exactly, however, do we formally construct these expectation values? The only expectation values that we can immediately calculate in closed form are the expectations of an indicator function that vanishes outside of a given set, $$\mathbb{I}_{A}[x] = \begin{cases} 1, & x \in A \\ 0, & x \notin A. \end{cases}$$ The expectation of an indicator function is simply the weight assigned to $A$, which is just the probability allocated to that set, $$\mathbb{E}_{\pi}[\mathbb{I}_{A}] \equiv \mathbb{P}_{\pi}[A].$$ We can then build up the expectation value of an arbitrary function with a careful approximation in terms of these indicator functions in a process known as Lebesgue integration. For more detail see the following optional section.

When our space is a subset of the real line, $X \subseteq \mathbb{R}$, there is a natural embedding of $X$ into $\mathbb{R}$, $$\iota : X \rightarrow \mathbb{R}, \quad x \mapsto x.$$ For example this embedding associates the natural numbers, $\{0, 1, 2, \ldots\}$, with the corresponding values in the real line, or the interval $[0, 1]$ with the corresponding interval in the full real line. In this circumstance we define the mean of the probability distribution as $$m_{\pi} = \mathbb{E}_{\pi}[\iota],$$ which quantifies the location around which the probability distribution is focusing its allocation. Similarly we define the variance of the probability distribution as $$V_{\pi} = \mathbb{E}_{\pi}[(\iota - m_{\pi})^{2}],$$ which quantifies the breadth of the allocation around the mean. We will also refer to the variance of an arbitrary function as $$V_{\pi}[f] = \mathbb{E}_{\pi}[(f - \mathbb{E}_{\pi}[f])^{2}].$$
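On a finite space the construction above becomes concrete arithmetic: the expectation of an indicator recovers the probability of its set, and the mean and variance are probability-weighted sums. A minimal JavaScript sketch, using a made-up distribution:

```javascript
// A toy probability distribution on the finite space X = {0, 1, 2, 3}.
const probs = new Map([[0, 0.1], [1, 0.2], [2, 0.3], [3, 0.4]]);

// Expectation of f under the distribution: sum over x of f(x) * p(x).
function expectation(f) {
  let total = 0;
  for (const [x, p] of probs) total += f(x) * p;
  return total;
}

// Indicator of a set A; its expectation is just P[A].
const indicator = (A) => (x) => (A.has(x) ? 1 : 0);
const pA = expectation(indicator(new Set([2, 3]))); // ≈ 0.7 = P[{2, 3}]

// Mean and variance via the identity embedding iota(x) = x.
const mean = expectation((x) => x);                  // ≈ 2.0
const variance = expectation((x) => (x - mean) ** 2); // ≈ 1.0
```

This is the finite shadow of the Lebesgue construction: indicator expectations are probabilities, and weighted sums of indicators (simple functions) extend expectation linearly.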
While we can always define expectation values of a function $f : X \rightarrow \mathbb{R}$, a probability distribution will not have a well-defined mean and variance unless there is some function whose expectation has a particular meaning. For example, if our space is a multidimensional real space, $X \subseteq \mathbb{R}^{N}$, then there is no single natural function whose expectation value defines a scalar mean. We can, however, define means and variances as expectations of the coordinate functions, $\hat{x}_{n} : \mathbb{R}^{N} \rightarrow \mathbb{R}$, that project a point $x \in X$ onto each of the component axes. These component means and variances then provide some quantification of how the probability is allocated along each axis.

## 2.3 Extra Credit: Lebesgue Integration

As we saw in Section 2.2 only the indicator functions have immediate expectation values in terms of probabilities. In order to define expectation values of more general functions we have to build increasingly more complex functions out of these elementary ingredients.

The countable sum of indicator functions weighted by real numbers defines a simple function, $$\phi = \sum_{n} a_{n} \mathbb{I}_{A_{n}}.$$ If we require that expectation is linear over this summation then the expectation value of any simple function is given by $$\mathbb{E}_{\pi}[\phi] = \mathbb{E}_{\pi}\left[ \sum_{n} a_{n} \mathbb{I}_{A_{n}} \right] = \sum_{n} a_{n} \mathbb{E}_{\pi}[\mathbb{I}_{A_{n}}] = \sum_{n} a_{n} \mathbb{P}_{\pi}[A_{n}].$$ Because of the countable additivity of $\pi$ and the boundedness of probability, the expectation of a simple function will always be finite provided that each of the coefficients $a_{n}$ are themselves finite.

We can then use simple functions to approximate an everywhere-positive function, $f : X \rightarrow \mathbb{R}^{+}$. A simple function with only a few terms defined over only a few sets will yield a poor approximation to $f$, but as we consider more terms and more sets we can build an increasingly accurate approximation. In particular, because of countable additivity we can construct a simple function bounded above by $f$ that approximates $f$ with arbitrary accuracy.
Consequently we define the expectation of an everywhere-positive function as the expectation of this approximating simple function. Because we were careful to consider only simple functions bounded above by $f$, we can also define the expectation of $f$ as the largest expectation of all such bounded simple functions, $$\mathbb{E}_{\pi}[f] = \sup \left\{ \mathbb{E}_{\pi}[\phi] \mid \phi \text{ simple}, \; \phi \leq f \right\}.$$

For functions that aren't everywhere-positive we can decompose $X$ into a collection of neighborhoods where $f$ is entirely positive, $A^{+}_{n}$, and entirely negative, $A^{-}_{m}$. In those neighborhoods where $f$ is entirely positive we apply the above procedure to define $\mathbb{E}_{\pi}[f \cdot \mathbb{I}_{A^{+}_{n}}]$, while in the neighborhoods where $f$ is entirely negative we apply the above procedure on the negation of $f$ to define $\mathbb{E}_{\pi}[-f \cdot \mathbb{I}_{A^{-}_{m}}]$. Those regions where $f$ vanishes yield zero expectation values and can be ignored. We then define the expectation value of an arbitrary function $f$ as the sum of these contributions, $$\mathbb{E}_{\pi}[f] = \sum_{n = 0}^{\infty} \mathbb{E}_{\pi}[f \cdot \mathbb{I}_{A^{+}_{n}}] - \sum_{m = 0}^{\infty} \mathbb{E}_{\pi}[-f \cdot \mathbb{I}_{A^{-}_{m}}].$$ Formally this procedure is known as Lebesgue integration and is a critical tool in the more general measure theory of which probability theory is a special case.

## 2.4 Measurable Transformations

Once we have defined a probability distribution on a space, $X$, and a well-behaved collection of subsets, $\mathcal{X}$, we can then consider how the probability distribution transforms when $X$ transforms. In particular, let $f : X \rightarrow Y$ be a transformation from $X$ to another space $Y$. Can this transformation also transform our probability distribution on $X$ onto a probability distribution on $Y$, and if so under what conditions? The answer is straightforward once we have selected a $\sigma$-algebra for $Y$ as well, which we will denote $\mathcal{Y}$.
In order for $f$ to induce a probability distribution on $Y$ we need the two $\sigma$-algebras to be compatible in some sense. In particular we need every subset $B \in \mathcal{Y}$ to correspond to a unique subset $f^{-1}(B) \in \mathcal{X}$. If this holds for all subsets in $\mathcal{Y}$ then we say that the transformation $f$ is measurable and we can define a pushforward distribution, $\pi_{*}$, by $$\mathbb{P}_{\pi_{*}}[B] = \mathbb{P}_{\pi}[f^{-1}(B)].$$ In other words, if $f$ is measurable then a self-consistent allocation of probability over $X$ induces a self-consistent allocation of probability over $Y$.

One especially important class of measurable functions are those for which $f(A) \in \mathcal{Y}$ for any $A \in \mathcal{X}$ in addition to $f^{-1}(B) \in \mathcal{X}$ for any $B \in \mathcal{Y}$. In this case $f$ transforms not only a probability distribution on $X$ into a probability distribution on $Y$ but also a probability distribution on $Y$ into a probability distribution on $X$. In this case we actually have one unique probability distribution that is just being defined over two different manifestations of the same abstract system. The two manifestations, for example, might correspond to different choices of coordinate system, or different choices of units, or different choices of language capable of the same descriptions. These transformations then serve as translations from one equivalent manifestation to another.

Measurable transformations can also be used to project a probability distribution over a space onto a probability distribution over a lower-dimensional subspace. Let $\varpi : X \rightarrow Y$ be a projection operator that maps points in a space $X$ to points in the subspace $Y \subset X$. It turns out that in this case a $\sigma$-algebra on $X$ naturally defines a $\sigma$-algebra on $Y$ and the projection operator is measurable with respect to this choice. Consequently any probability distribution on $X$ will transform into a unique probability distribution on $Y$. More commonly we say that we marginalize out the complementary subspace, $Y^{C}$.
Marginalization is a bit more straightforward when we are dealing with a product space, $X \times Y$, which is naturally equipped with the component projection operators $\varpi_{X} : X \times Y \rightarrow X$ and $\varpi_{Y} : X \times Y \rightarrow Y$. In this case by pushing a distribution over $(X \times Y, \mathcal{X} \times \mathcal{Y})$ forwards along $\varpi_{X}$ we marginalize out $Y$ to give a probability distribution over $(X, \mathcal{X})$. At the same time by pushing that same distribution forwards along $\varpi_{Y}$ we can marginalize out $X$ to give a probability distribution over $(Y, \mathcal{Y})$.

Consider, for example, the three-dimensional space, $\mathbb{R}^{3}$, where the coordinate functions serve as projection operators onto the three axes, $X$, $Y$, and $Z$. Marginalizing out $X$ transforms a probability distribution over $X \times Y \times Z$ to give a probability distribution over the two-dimensional space, $Y \times Z = \mathbb{R}^{2}$. Marginalizing out $Y$ then gives a probability distribution over the one-dimensional space, $Z = \mathbb{R}$.

## 2.5 Conditional Probability Distributions

As we saw in the previous section, projection operators allow us to transform a probability distribution over a space to a probability distribution on some lower-dimensional subspace. Is it possible, however, to go the other way? Can we take a given marginal probability distribution on a subspace and construct a joint probability distribution on the total space that projects back to the marginal? We can if we can define an appropriate probability distribution on the complement of the given subspace.

Consider an $N$-dimensional space, $X$, with the projection, $\varpi : X \rightarrow Y$, onto a $K < N$-dimensional subspace, $Y$. By pushing a probability distribution on $X$ along the projection operator we compress all of the information about how probability is distributed along the fibers $\varpi^{-1}(y)$ for each $y \in Y$.
In order to reconstruct the original probability distribution from a marginal probability distribution we need to specify this lost information. Every fiber takes the form of an $(N - K)$-dimensional space, $F$, and, like subspaces, these fiber spaces inherit a natural $\sigma$-algebra, $\mathcal{F}$, from the $\sigma$-algebra over the total space, $\mathcal{X}$. A conditional probability distribution defines a probability distribution over each fiber that varies with the base point, $y$, $$\mathbb{P}_{F \mid Y} : \mathcal{F} \times Y \rightarrow [0, 1], \quad (A, y) \mapsto \mathbb{P}_{F \mid Y}[A, y].$$ Evaluated at any $y \in Y$ the conditional probability distribution defines a probability distribution over the corresponding fiber space, $(F, \mathcal{F})$. On the other hand, when evaluated at a given subset $A \in \mathcal{F}$ the conditional probability distribution becomes a measurable function from $Y$ into $[0, 1]$ that quantifies how the probability of that set varies as we move from one fiber to the next.

Given a marginal distribution, $\pi_{Y}$, we can then define a probability distribution over the total space by taking an expectation value, $$\mathbb{P}_{X}[A] = \mathbb{E}_{Y}[\mathbb{P}_{F \mid Y}[A \cap \varpi^{-1}(y), y]].$$ The induced joint distribution on the total space is consistent in the sense that if we transform it back along the projection operator we recover the marginal distribution with which we started.

This construction becomes significantly easier when we consider a product space, $X \times Y$, and the projection $\varpi : X \times Y \rightarrow Y$. In this case the fiber space is just $X$. The conditional probability distribution becomes $$\mathbb{P}_{X \mid Y} : \mathcal{X} \times Y \rightarrow [0, 1], \quad (A, y) \mapsto \mathbb{P}_{X \mid Y}[A, y],$$ with joint distribution $$\mathbb{P}_{X \times Y}[A] = \mathbb{E}_{Y}[\mathbb{P}_{X \mid Y}[A \cap X, y]].$$
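For a finite product space $X \times Y$ the constructions in sections 2.4 and 2.5 reduce to arithmetic: a joint distribution is built as $p(x, y) = p(x \mid y)\, p(y)$, and pushing forward along a projection just sums out the other coordinate. A JavaScript sketch with made-up numbers:

```javascript
// Marginal over Y = {0, 1} and a conditional over X = {0, 1} for each y.
const pY = [0.6, 0.4];
const pXgivenY = [
  [0.9, 0.1], // p(x | y = 0)
  [0.2, 0.8], // p(x | y = 1)
];

// Joint on the product space: joint[y][x] = p(x | y) * p(y).
const joint = pY.map((py, y) => pXgivenY[y].map((pxy) => pxy * py));

// Marginalizing out X (pushforward along the projection onto Y)
// recovers the marginal we started with: ≈ [0.6, 0.4].
const recoveredY = joint.map((row) => row.reduce((a, b) => a + b, 0));

// Marginalizing out Y gives the pushforward onto X: ≈ [0.62, 0.38].
const pX = [0, 1].map((x) => joint.reduce((a, row) => a + row[x], 0));
```

The round trip `marginal → joint → marginal` returning the original `pY` is exactly the consistency property stated above.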
Conditional probability distributions are especially useful when we want to construct a complex probability distribution over a high-dimensional space. We can reduce the specification of the ungainly joint probability distribution to a sequence of lower-dimensional conditional probability distributions and marginal probability distributions about which we can more easily reason. In the context of modeling an observational process, this method of constructing a complicated distribution from intermediate conditional probability distributions is known as generative modeling. In particular, each intermediate conditional probability distribution models some fragment of the full observational process.

As we saw in the previous section, formal probability theory is simply the study of probability distributions that allocate a finite, conserved quantity across a space, the expectation values that such an allocation induces, and how the allocation behaves under transformations of the underlying space. While there is myriad complexity in the details of that study, the basic concepts are relatively straightforward.

Before reading this section, it is recommended that you become familiar with the information in the earlier sections of this guide.

## What to consider when designing your pipeline

When designing your Beam pipeline, consider a few basic questions:

- Where is your input data stored?
- What does your data look like?
- What do you want to do with your data?
- What does your output data look like, and where should it go?

The simplest pipelines represent a linear flow of operations, as shown in figure 1.

Figure 1: A linear pipeline.

However, your pipeline can be significantly more complex. A pipeline represents a Directed Acyclic Graph of steps. It can have multiple input sources, multiple output sinks, and its operations (PTransforms) can both read and output multiple PCollections.
The following examples show some of the different shapes your pipeline can take. It's important to understand that transforms do not consume PCollections; instead, they consider each individual element of a PCollection and create a new PCollection as output. This way, you can do different things to different elements in the same PCollection. You can use the same PCollection as input for multiple transforms without consuming the input or altering it. The pipeline in figure 2 is a branching pipeline. The pipeline reads its input (first names represented as strings) from a database table and creates a PCollection of table rows. Then, the pipeline applies multiple transforms to the same PCollection. Transform A extracts all the names in that PCollection that start with the letter 'A', and Transform B extracts all the names in that PCollection that start with the letter 'B'. Both transforms A and B have the same input PCollection. Figure 2: A branching pipeline. Two transforms are applied to a single PCollection of database table rows. The following example code applies two transforms to a single input collection. Another way to branch a pipeline is to have a single transform output to multiple PCollections by using tagged outputs. Transforms that produce more than one output process each element of the input once, and output to zero or more PCollections. Figure 3 illustrates the same example described above, but with one transform that produces multiple outputs. Names that start with 'A' are added to the main output PCollection, and names that start with 'B' are added to an additional output PCollection.
Local UFO film 'The Maury Island Incident' to be IndieFlix 'original series'
"The Maury Island Incident" – a short film that was shot locally in the Burien area last summer – has been turned into a 6-part original series and will premiere on Seattle-based IndieFlix, an independent film streaming service, on Aug. 19, 2014. Based on declassified FBI documents, the film tells the incredible, tragic, and forgotten story of Harold Dahl, who on June 21, 1947, alleged a UFO sighting over Puget Sound, Washington. This sparked 'the summer of the saucers,' the modern era of UFO obsession, the first appearance of a 'Man in Black,' as well as a governmental battle over UFO sighting jurisdiction reaching directly to FBI Executive Director J. Edgar Hoover. The Aug. 19 date coincides with an FBI document sent to Executive Director J. Edgar Hoover, explaining how Dahl's original claim that the sighting was a hoax was only said to avoid any further damage to his family. This historic document will also be released on Aug. 19 by the filmmakers as a downloadable PDF on the official website, www.mauryislandincident.com. "IndieFlix viewers will not only learn new information about a lost, historic UFO case, they'll also find out just how interested J. Edgar Hoover was with these 'flying disc' occurrences," Producer/Director Scott Schaefer said.
"And we will also be releasing some fascinating declassified FBI documents that show his personal interest in UFOs, specifically The Maury Island Incident." Initially shot as a short in the south Puget Sound area, with local talent and crew, The Maury Island Incident has been a labor of love for Producer/Writer Edmiston and Producer/Director Schaefer, and for Washington FilmWorks, which gave the production an Innovation Lab Award. The Lab is a groundbreaking new program offering funding assistance to Washington filmmakers and filmmakers using emerging technologies. In its comments, the jury said this of the film: "Equal parts mystery and documentary, The Maury Island Incident exposes a fascinating hidden history: the first recorded UFO incidents in the US didn't occur in Roswell, but in Washington. This captivating project brings a spooky Seattle area legend to light and sets the stage for ongoing storytelling on the subject." Thanks to the help and support of Washington FilmWorks' Innovation Lab, Edmiston and Schaefer got the opportunity to turn their content into a series.
http://openstudy.com/updates/50be16eae4b0de42629ffc2a
## anonymous 3 years ago Convert the equation to the standard form for a hyperbola by completing the square on x and y. y^2 - 25x^2 + 4y + 50x - 46 = 0 1. anonymous lol another multiple choice? 2. anonymous Yes A. (x+2)^2/25 - (y-1)^2 = 1 B. (x-1)^2 - (x+2)^2/25 = 1 C. (x+2)^2/25 - (x-1)^2 = 1 3. anonymous ok well that makes it much easier 4. anonymous actually there must be something wrong here, because the $$y^2$$ term comes first 5. anonymous $y^2 - 25x^2 + 4y + 50x - 46 = 0$ 6. anonymous these are my only options :/ 7. anonymous if the last one is really $\frac{(y+2)^2}{25}-(x-1)^2=1$ then go with that one 8. anonymous thank you, i'm going to go with it.
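For reference, completing the square makes the intended answer explicit (this is the form the last reply guesses at):

```latex
\begin{aligned}
(y^2 + 4y) - 25(x^2 - 2x) &= 46\\
(y+2)^2 - 4 - 25\bigl[(x-1)^2 - 1\bigr] &= 46\\
(y+2)^2 - 25(x-1)^2 &= 25\\
\frac{(y+2)^2}{25} - (x-1)^2 &= 1
\end{aligned}
```

Since the $y^2$ term has the positive coefficient, the $y$ term must indeed come first in the standard form.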
https://manual.q-chem.com/5.3/A2.S9.html
Linear scaling Coulomb and SCF exchange/correlation algorithms are not the end of the story, as the ${\cal{O}}({N^{3}})$ diagonalization step has been rate limiting in semi-empirical techniques and has been predicted to become rate limiting in ab initio approaches in the medium term.926 However, divide-and-conquer techniques1082, 1083, 1081, 564 and the recently developed quadratically convergent SCF algorithm701 show great promise for reducing this problem.
https://cstheory.stackexchange.com/questions/31156/how-many-different-huffman-encoding-for-a-given-number-of-symbols
# How many different Huffman encodings for a given number of symbols

In Huffman coding, if we have two symbols to be encoded, we will get the result either 01 or 10. If we have three symbols, we will get 12 different encodings. I am wondering, if I give an arbitrary number of symbols, is there a formula to calculate how many different encodings we will have from Huffman coding? • Why on earth are people downvoting and closevoting this as "not a research question"? I am willing to bet none of you figured out an answer (I give one below), and decided that it was too easy to be a research question. If you had closevoted this as "not enough effort on the part of the OP", I might have supported that; the question should at least have explained the 12 different encodings for three symbols, and given the number for four symbols. Apr 21 '15 at 20:41 • @PeterShor, I voted to close because it is not research-level. I figured it out under a min and decided that this is at the level of a simple undergrad assignment. Please be more respectful when disagreeing with close votes. Apr 22 '15 at 3:15 The answer is $C_{n-1} n!$. That is, the $(n-1)$st Catalan number times $n$ factorial. There are $C_{n-1}$ ways of making a complete binary tree with $n$ leaves, and there are $n!$ ways of assigning these leaves to the symbols to get a Huffman code. This sequence goes 2, 12, 120, 1680, 30240, and is listed in the Online Encyclopedia of Integer Sequences as Quadruple Factorial Numbers, with a simpler formula than I give above. The above argument shows that the quadruple factorial numbers give the number of ways of assigning $n$ symbols to codewords in a complete binary prefix code. It's not hard to show that you can assign probabilities to the symbols to make this encoding optimal (so it satisfies the definition of a Huffman code). It's not clear that a given Huffman coding algorithm will generate all of these, however. The answer by Peter Shor is correct.
But for an optimal case when the symbols can only be placed at unique leaf nodes the number of possible Huffman codes drops to $C_{n-1}2^{n-1}$.
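For completeness, the simpler formula alluded to in the accepted answer drops out by cancelling the $n!$ in the Catalan number $C_{n-1} = \frac{(2n-2)!}{(n-1)!\,n!}$:

```latex
C_{n-1}\, n!
  = \frac{(2n-2)!}{(n-1)!\, n!} \cdot n!
  = \frac{(2n-2)!}{(n-1)!}
  = n (n+1) \cdots (2n-2),
```

which reproduces 2, 12, 120, 1680, 30240 for $n = 2, \dots, 6$.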
https://devio.wordpress.com/2012/09/24/web-config-error-in-build-after-publish/
## Web.config Error in Build after Publish

If you publish an ASP.NET (MVC) application in VS 2010, you need to define a Target Location in the Publish… dialog. VS then builds and publishes. Another manual build will result in the error message Error 3 It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level. This error can be caused by a virtual directory not being configured as an application in IIS. For better entertainment, the file name that causes the error is displayed as "web.config" in the Error List window. Right-click and select Copy will fill the clipboard with the real file path, which is C:\path\to\project\obj\debug\package\packagetmp\web.config which is a relic of the publishing process. The solution is to delete everything from the obj\Debug and obj\Release directories (depending on your build configuration) and build again. Update Executing "Clean Solution" or "Clean Project" also solves the problem. ### One Response to Web.config Error in Build after Publish 1. […] I encountered this error message before, and in that case you should be able to solve the problem by running Clean on the project or the […]
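A minimal command-line version of the manual fix (a sketch assuming a Unix-style shell run from the project directory; on Windows cmd the equivalent is `rd /s /q obj\Debug obj\Release`):

```shell
# Delete the stale publish output under obj/ so the leftover
# package\packagetmp\web.config no longer breaks the next build.
rm -rf obj/Debug obj/Release
```

Running Clean from Visual Studio achieves the same thing without leaving the IDE.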
http://clay6.com/qa/21034/evaluate-lim-limits-large-frac
Q) # Evaluate: $\lim\limits_{x\to \Large\frac{\pi}{4}}\large\frac{\sqrt 2-\cos x-\sin x}{(4x-\pi)^2}$ $(a)\;1/16\sqrt 2\qquad(b)\;1/\sqrt 2\qquad(c)\;1/16\qquad(d)\;1$ ## 1 Answer A) The limit has the indeterminate form 0/0, so applying L'Hospital's rule: $\Rightarrow \lim\limits_{x\to \large\frac{\pi}{4}}\large\frac{\sin x-\cos x}{8(4x-\pi)}$ This is again of the form 0/0, so applying the rule once more: $\Rightarrow \lim\limits_{x\to\large\frac{\pi}{4}}\large\frac{\cos x+\sin x}{8\times 4}$ $\Rightarrow \large\frac{2}{\sqrt 2}\cdot\frac{1}{32}$ $\Rightarrow \large\frac{1}{16\sqrt 2}$ Hence (a) is the correct answer.
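The result can be cross-checked without L'Hospital's rule by collapsing the numerator into a single cosine:

```latex
\begin{aligned}
\cos x + \sin x &= \sqrt{2}\,\cos\!\Bigl(x - \frac{\pi}{4}\Bigr),
  \qquad u = x - \frac{\pi}{4},\\
\lim_{x \to \pi/4} \frac{\sqrt{2} - \cos x - \sin x}{(4x - \pi)^2}
  &= \lim_{u \to 0} \frac{\sqrt{2}\,\bigl(1 - \cos u\bigr)}{16\,u^{2}}
   = \frac{\sqrt{2}}{16} \cdot \frac{1}{2}
   = \frac{1}{16\sqrt{2}},
\end{aligned}
```

using the standard limit $(1 - \cos u)/u^{2} \to 1/2$.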
https://financetrain.com/exchange-traded-funds-market-structure-and-the-flash-crash
# Exchange-Traded Funds, Market Structure and the Flash Crash

This reading is a part of the syllabus for the FRM Part 2 Exam in the section 'Current Issues in Financial Markets'. The "Flash Crash" of May 6, 2010 saw some stocks and exchange-traded funds traded at pennies, only to rapidly recover in price. We show that the impact of the Flash Crash across stocks is systematically related to prior market fragmentation. Interestingly, fragmentation measured based on quote competition – reflective of higher-frequency activity – has explanatory power beyond a more standard volume-based definition. Using intraday trade data from January 1994 to September 2011, we find that fragmentation now is at the highest level recorded. We also show divergent intraday behavior of trade and quote fragmentation on the day of the Flash Crash itself. Madhavan, Ananth, Exchange-Traded Funds, Market Structure and the Flash Crash (October 10, 2011). Available at SSRN: http://ssrn.com/abstract=1932925 or http://dx.doi.org/10.2139/ssrn.1932925
https://math.stackexchange.com/questions/2570327/calculate-probability-p-min-left-x-y-right-leq-x-and-p-max-left-x-y-r
# Calculate probability $P(\min\left\{X,Y\right\} \leq x)$ and $P(\max\left\{X,Y\right\} \leq x)$ $X,Y$ are independent, identically distributed with $$P(X=k) = P(Y=k)=\frac{1}{2^k} \,\,\,\,\,\,\,\,\,\,\,\, (k=1,2,...,n,...)$$ Calculate the probabilities $P(\min\left\{X,Y\right\} \leq x)$ and $P(\max\left\{X,Y\right\} \leq x)$. For the minimum I do like this: $$\begin{split}F_M(x) &= P(\min\left\{X,Y\right\} \leq x) \\ &= 1-P(x<\min\left\{X,Y\right\} ) \\ &= 1-P(x<X, x<Y) \\ & = 1-P(X>x)\,P(Y>x)\\ & = 1-(1-P(X \leq x))\,(1-P(Y \leq x))\\ & = 1-(1-F_X(x))\,(1-F_Y(x))\end{split}$$ Is this correct for the minimum? I'm not sure how to do it for $\max$? Maybe I do it too complicated because these $P(X=k)=P(Y=k)$ are equal; maybe you can do it more elegantly? But I don't know how? • @Harry49 Thank you for saying I do $\min$ correct! :) Maybe you can make answer pls for the $\max$? I'm a bit confused because of other answer? Dec 17 '17 at 12:11 • @conime Harry did give you the answer for $\max$. Dec 17 '17 at 12:25 $$\begin{split}F_{\min}(x) &= P(\min\left\{X,Y\right\} \leq x) \\[0.5ex] &= 1-P(x<\min\left\{X,Y\right\} ) \\[0.5ex] &= 1-P(x<X,\, x<Y) \\[0.5ex] & = 1-P(X>x)\,P(Y>x)\\[0.5ex] & = 1-(1-P(X \leq x))\,(1-P(Y \leq x))\\[0.5ex] & = 1-(1-F_X(x))\,(1-F_Y(x))\end{split}$$ Is this correct for minimum? I'm not sure how do it for $\max$? It is correct, and for max it is even simpler. $$\begin{split}F_{\max}(x) &= P(\max\left\{X,Y\right\} \leq x) \\[0.5ex] &= P(X\leq x, Y\leq x) \\[0.5ex] & = P(X\leq x)\,P(Y\leq x)\\[0.5ex] & = F_X(x)\,F_Y(x)\end{split}$$ Also, recall the expansion of a Geometric Series: $$F_X(x) = F_Y(x) = \Bigl(\sum_{k=1}^{x} {2}^{-k}\Bigr)\,\mathbf 1_{x\in\{1,2,\ldots\}} = (1-{2}^{-x})\,\mathbf 1_{x\in\{1,2,\ldots\}}$$ Yes. It is. You can do the same for the maximum: $$\mathbb{P}(\max(X,Y)\leq x) = \mathbb{P}(X\leq x) \times \mathbb{P}(Y \leq x)$$ • pls don't give him down vote he try for help also!!!! Dec 17 '17 at 12:59
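For the specific distribution in this question the geometric sums close nicely; for integer $k \ge 1$:

```latex
\begin{aligned}
F_X(k) = F_Y(k) &= \sum_{j=1}^{k} 2^{-j} = 1 - 2^{-k},\\
P(\max\{X,Y\} \le k) &= \bigl(1 - 2^{-k}\bigr)^{2},\\
P(\min\{X,Y\} \le k) &= 1 - \bigl(2^{-k}\bigr)^{2} = 1 - 4^{-k}.
\end{aligned}
```

The last line follows directly from the $1-(1-F_X)(1-F_Y)$ formula derived in the question.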
http://www.ask.com/question/chemical-formula-for-sodium-bicarbonate
# Chemical Formula for Sodium Bicarbonate?

The chemical formula for sodium bicarbonate is NaHCO3, with a molar mass of 84.01 g/mol. (Na2CO3, with a molar mass of 105.99 g/mol, is sodium carbonate, a different compound that is used to make glass, in detergent boosters, and to soften hard water.) Sodium bicarbonate is better known as baking soda and can be found in household products such as deodorants and toothpaste. Reference: Sodium Bicarbonate SODIUM BICARBONATE is an antacid. It is used to treat acid indigestion and heartburn caused by too much acid in the stomach. This medicine may be used for other purposes; ask your health care provider or pharmacist if you have questions. Source: healthline.com Q&A Related to "Chemical Formula for Sodium Bicarbonate?" Sodium bicarbonate, also called sodium hydrogen carbonate and commonly known as baking soda, has NaHCO3 for a chemical formula. Its reaction with hydrochloric acid (HCl) is NaHCO3 + HCl → NaCl + CO2 + H2O. The chemical formula for sodium carbide is Na2C2. Baking soda, or sodium bicarbonate, is a salt that forms solid crystals and has many common uses in baking and around the household.
https://mathstodon.xyz/@JordiGH/103092117133949702
OH: I work with a woman who said recently "I am not a data scientist, I am a statistician. I don't know what data scientist means and neither does anyone else." @JordiGH That's a very silly thing to say, everyone knows that "data scientist" means an extra $30k on your salary. @mcmoots It also means more computer and less proof, no? @JordiGH Often, yeah, but IME that's not as strong a consensus as the salary differential. @JordiGH this was my Probability teacher as well he was like big data?? No man that's just a fancy name for stat! @Luchtspieg Yeah, they're all acting they're discovering something new or like stats is just descriptive stats. Data has always been big. Neoliberalism constantly yearns not only to be the end of history, but the beginning as well. Nothing has a context or prior arts. We live in an eternal present. Our world was invented this morning, by billionaires. @celesteh Excuse me what are you talking about? @JordiGH it's like every part of tech that ignores all prior arts. They give it a new name and pretend that is a new idea. In many domains, this is also linked with an extremely neoliberal ideology and it reminds me of the assertion that we are at the end of history. @celesteh Oh, like when they tried to invent "Spotify for books"? @JordiGH I like the taxonomy that Cassie Kozyrkov from Google puts forward: "data science" as an umbrella term, and "statistician," "ML engineer," "analyst" as three major branches. @bmreiniger Sooooooo, what's the difference between a data scientist and a machine learning engineer? Are there data engineers? Machine learning scientists? @JordiGH An ML engineer (according to this grouping) is one flavor of data scientist. You have rather stronger opinion on what qualifies as engineering, so I'll leave that distinction out. Note that Kozyrkov is (probably) thinking in terms of a business team. 
"Data engineer" is a term often used for people working on collecting/storing/retrieving data efficiently, but that seems to split the use of "data XYZ" into "working on" vs "working with" data rather than properly using "engineer." @JordiGH @mwlucas @JuliePercival has said more or less this more than once. @sng It appears there is more than one woman out there who holds this view, then. @JordiGH a data scientist is a statistician who wants a $100k raise. @mhoye And/or lives in San Francisco and wants to afford rent. But I repeat you.
https://humanreadablemag.com/issues/2/articles/theres-a-mathematician-in-your-compiler
Language Features # There's a Mathematician In Your Compiler Illustrated by Skye Bolluyt People with an interest or education in computer science may know about the Curry-Howard isomorphism, the correspondence between (pure) types in typed languages and logic: statements, propositions, proofs. A pure function is one with no side effects, no exceptions, and for the purposes of this article, no infinite recursion. A pure value would just be a value. But what use is that to me, your average hard-working, yet underpaid, developer? I hear you ask. Is there a practical use of this in everyday programming? It may not be of use in everyday programming when you are figuring out how to implement a Google API. But recognizing it and how it relates to patterns the compiler forces you into will certainly make you a better programmer, and we can discover cool new language features using it. We'll be taking a whistlestop tour of the Curry-Howard isomorphism and how it relates to Scala. By the end of this article we will have deduced surprising new Scala functionality (type negation) using only basic knowledge of Scala and logic. In this article I will endeavor to refer to logical true and false as True and False to distinguish them from the Scala values true and false. If you need a quick refresher on logical True and False, here it is: The logical proposition True simply represents something that is true, like the statement 1 < 5. Any true statement like that by definition implies True. If we can start from an assumption and perform a series of logical deductions and reach True, then we know our original assumption is True, that it is provable. Conversely, any statement we know or can prove to be wrong (such as "bananas are a myth") implies False. If we can reach False at the end of a series of logical deductions from an assumption, our assumption is incorrect. ## They're the Type That Gets Propositioned The basic concept of the Curry-Howard isomorphism is really very simple. 
A logical proposition A can be equated to some type, say trait A. Logical propositions are made up of propositions P, Q, and more, and the operations => (implies), & (and), | (or), and ! (negation). There are more formal (unicode) symbols we could use, but these are universal enough and get the point across just as well. The logical proposition A is considered True if and only if it can be proved, a true theorem following the rules of mathematical logic (reaching your conclusion A through a series of valid logical deductions and substitutions, starting from assumed truths). Equivalently, proposition A evaluates to True (or is provable) if and only if the equivalent Scala type trait A representing proposition A has at least one value. We say the type trait A is inhabited if it has at least one value. The same idea extends to logical implication statements: the logical implication A => B holds true if and only if there exists a pure function of type Function1[A, B], where A and B here are the Scala types equivalent to our original logical propositions. In Scala, this function type can of course be rewritten as A => B. It's no coincidence the authors of Scala chose the symbol for logical implication as their symbol for function definition. ### Truth Here we'll go through a quick example to see how to turn logical statements into pure Scala code via the Curry-Howard isomorphism. We'll be considering logical True. In real life, any 'true' statement is considered True-"Apples exist," for example. What does True translate to in Scala under the Curry-Howard isomorphism? It needs to be a type that is inhabited, that has a value. By convention we choose Unit, the type with one member whose value is denoted (). This is the same as Java's void. We could choose any inhabited type we like, including Int or Boolean. But it's tidier to choose Unit because you can always write the function A => Unit uniquely, for any A, by simply ignoring the argument and returning (). 
This makes it more canonical; there are fewer choices to make. To summarize: by convention we'll say that True maps to Unit.

### And Falsehood

And another, slightly more subtle example of how to reason about this correspondence: how do we translate False into pure Scala code? We need to map it to some type in Scala, but remember that a proposition is provable if the corresponding type has any values associated with it. And False certainly is not provable. A false statement would be "bananas are a myth."

Luckily there is one uninhabited type in Scala: Nothing. It is the type of a thrown exception, the bottom of the type tree. (Or the top, if you're the sort who views the type tree upside down.) You can never implement a pure function A => Nothing, for an inhabited type A, as there are no values to return. A function of this signature, when called, would either terminate the program (i.e., throw an exception) or never finish computing. In the same way, you can never prove a statement A => False for any non-False A. A real-life example of such absurdity would be "Apples exist, therefore the dairy industry is poisoning our children."

Interestingly, the logical statement False => False is provable. (If a false statement is assumed true, such as "bananas are mythical," then you can prove any statement you like, including false ones.) This means there must be a well-defined and inhabited pure Scala type associated with it. What could this be? It is simply the identity function of type Nothing => Nothing. This is, a little surprisingly, well-defined and pure. Even though it returns something that cannot exist, it also accepts an argument that cannot exist and therefore can never be called. It certainly exists, but it can never cause you any problems.

### Writing a Proof

Well, the upshot of all of this is that every time you write a pure function of type A => B in Scala, you are actually writing a proof: A implies B.
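To make the functions-as-proofs idea concrete, here are a few classic logical laws written as pure Scala functions. This is a sketch of ours; the names are not from the article:

```scala
object Proofs {
  // A => A: every proposition implies itself.
  def refl[A](a: A): A = a

  // (A & (A => B)) => B: modus ponens is just function application.
  def modusPonens[A, B](a: A, f: A => B): B = f(a)

  // ((A => B) & (B => C)) => (A => C): the syllogism is function composition.
  def syllogism[A, B, C](f: A => B, g: B => C): A => C = a => g(f(a))

  // False => A: "ex falso quodlibet". Well-typed because Nothing is a
  // subtype of every type, and safely uncallable, as no value of Nothing exists.
  def exFalso[A](n: Nothing): A = n
}
```

Each of these compiles precisely because the corresponding proposition is provable; a function for an unprovable proposition simply cannot be written without cheating (exceptions, infinite loops).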
This makes sense: in a function you are assuming you have been given a value of type A (i.e., assuming type A is inhabited, and thus assuming proposition A is provable) and you are producing a value of type B, which would make B inhabited, and thus also provable. For example, a simple tuple manipulation:

```scala
def swap[A, B](tuple: (A, B)): (B, A) = (tuple._2, tuple._1)
```

It's important to note above that A and B are type parameters; they are not fixed. Curry-Howard is a very general theorem. This function is equated to the proposition

A & B => B & A

which is obviously true to us, as normal, functioning humans (if I have an apple and a banana, then I have a banana and an apple). Because it's not obvious to mathematicians, you could also write a formal logical proof of this statement, but that's tangential to this article, so I won't go into it. The proof could eventually be equated to the implementation of swap anyway.

How about a statement that is not provable?

A => A & B

We simply don't know if this is provable for a general B. If I have an apple, it does not imply I have an apple and a banana. I do actually happen to have an apple and a banana, but you can't prove it from just the apple. It would map to the Scala function:

```scala
def beify[A, B](a: A): (A, B) = ???
```

When is this implementable? You don't have a B! You cannot implement it for a general type B, only when B evaluates to True:

```scala
// Perfectly well-defined
def beify[A](a: A): (A, Unit) = (a, ())
```

And this of course corresponds to

A => A & True

So things are looking pretty consistent so far. Let's do something a little more fun.

## Prove It

The Scala compiler is, in rather loose terms, a theorem prover. We saw above that the compiler will check and verify your proofs for you, if you write the functions out yourself. But the compiler can also write the proofs for you. The implicit keyword is the key to this functionality.
When you ask the compiler for an implicit value of type A, it will search all available implicit values and functions, and try to fit them together to create a value of type A, to prove the proposition A, automatically. A brief example:

```scala
implicit val i: Int = 7

implicitly[Int] // Returns 7
```

Pretty trivial: if you provide a value of type Int to the compiler, it can 'prove' Int is true by simply returning that value. A slightly more complex example:

```scala
implicit def foo(implicit i: Int): String = i.toString
implicit val i: Int = 19

implicitly[String] // Returns "19"
```

What have we done here? We've defined a proposition Int by providing a value, 19, and defined an implication Int => String with the functionality _.toString. And, therefore, the compiler can prove the proposition String by providing the value "19".

This is pretty powerful. It looks a little vapid because we hard-coded Int at 19. But it needn't be hard-coded; it could instead rely on very complex machinery under the hood. Let's try to do something cool.

## We Don't Want Your Type in Here

Let's try to make a function that accepts any type except B (for some given B) in a general way. We want to be able to write:

```scala
def foo[A](a: A)(implicit proof: IsNot[A, B]): String =
  a.toString + " is not a B!"
```

Every time we call foo, we want the compiler to provide us a proof that A is not B. We're wrapping this up inside a type class we're calling IsNot. If the compiler cannot define IsNot[A, B], that is, cannot prove A is not B, we expect compilation to fail. We only want this to happen when they are indeed equal.

We don't have many choices about how IsNot will look. It must be something like the following:

```scala
trait IsNot[A, B]
```

Now we just need to give the compiler some predicates (implicits) about how to find proofs that show types aren't equal to each other. It turns out this is "difficult" in a computer language. We can't just implement an implicit instance of IsNot for every unequal pair of types in existence.
There are many types, after all; it would take too long. Some even say there are an infinite number of them.

Thankfully, because the Scala compiler is a pure and deterministic (though not necessarily predictable) process, it will always produce the same output. It will make only well-defined and repeatable choices. If it can't make such a choice, it will exit. One such scenario in which compilation will fail is if there are two implicits for the same type at the same level, and you request the compiler find a proof of the type:

```scala
trait A

implicit val intProof: Int = 17
implicit def instance1(implicit b: Int): A = new A {}
implicit def instance2(implicit b: Int): A = new A {}

implicitly[A] // fails, warning about 'ambiguous' implicits
```

How can the compiler choose between instance1 and instance2? It can't; there's literally nothing to choose between. They are the same. It fails compilation, saying there are ambiguous implicits.

In this scenario, when there is more than one proposition implying A and you ask it to prove A, the compiler takes them all and combines their left-hand sides using XOR to get one single proposition implying A. (XOR being an operation that resolves to True if and only if precisely one of its two operands is True.)

(In reality there is a hierarchy of implicit search locations the compiler tries in order, with ranks for specificity and all sorts of exotic things, forming more complex compiled propositions with more verbs than just XOR. But in this scenario, the left-hand sides are indeed joined inside an XOR.)

In the above example, the compiler compiles the following:

(Int XOR Int) => A

And this of course reduces to just False => A, which is simply True, which is emphatically not a proof of A (it is a tautological proof of True). And thus, the compiler declares A unprovable. It tried its best, but it was unable to find a canonical proof of A.

How does this apply to IsNot?
We're going to reason about some logic, and then from that implement IsNot and see it working. We're trying to prove the proposition A IsNot B when A is not equal to B. That is:

(A != B) => A IsNot B

We have to manipulate this in some way to make it something we can backport to Scala. We can't use A != B directly, since that's what we're trying to implement. Well, there's a handy identity that

P <=> True XOR !P

for any proposition P. Making that substitution on the left (with P being A != B, so that !P is A = B) we get

(True XOR A = B) => A IsNot B

This is suddenly in a form we can work with: we can turn these two terms into Scala implicits. We'll end up with two implicits, both of which return A IsNot B. One will take an implicit proof that A = B (provided to us by Scala), and one will take a vacuous implicit True; in other words, no argument list needed (or the always-available and fantastically named DummyImplicit type that native Scala supplies us).

And here they are; the code by this point writes itself:

```scala
implicit def instanceAll[A, B]: A IsNot B = new IsNot[A, B] {}

// Equivalently, the above could be
implicit def instanceAll[A, B](implicit True: DummyImplicit): A IsNot B =
  new IsNot[A, B] {}

implicit def instanceEquality[A, B](implicit proof: A =:= B): A IsNot B =
  new IsNot[A, B] {}
```

And here it is working, proof (pun intended) that I haven't been lying to you:

```scala
// The forbidden type:
trait B

def foo[A](a: A)(implicit proof: IsNot[A, B]): String =
  a.toString + " is not a B!"

foo(5)          // compiles
foo("a string") // compiles
foo(new B {})   // Does not compile: compiler cannot prove B is not B!
```

And as a little bonus, you can even chain the required proofs and have more than one requirement:

```scala
def bar[A](a: A)(implicit
  proof1: A IsNot Int,
  proof2: A IsNot String
): String = a.toString

bar(true) // compiles
bar(4)    // no bueno
bar("4")  // no bueno
```

### Author's Note

We can obviously just create a value of type IsNot[A, B] for any A and B we like, whenever we like, if we wanted to thoroughly ruin things.
It only works because the implicit modifiers are markers to the future coder: let the compiler provide these proofs for us.

### End Author's Note

We've managed to prove a negative here, working at a purely logical level and with only a very basic understanding of how Scala interprets logical statements. Hardly any actual Scala knowledge needed at all!

If you look closely at the implicits we did end up defining, they don't make much sense on their own. The first one, after all, rather nonsensically says A IsNot B for all A and B. And the second one causes a compilation failure (with a very bad compile error message) in order to short-circuit the compiler and end compilation in certain circumstances. The runtime interpretation of the code we've written feels very hacky, but the maths behind it, as demonstrated above, is solid, and hopefully shows we've discovered functionality of the language itself. That's pretty cool.

##### James Phillips (author)

James is a functional programming and Scala enthusiast with a keen interest in type-level programming. He runs a consultancy in London and Brighton, and occasionally cycles to Paris.

##### Skye Bolluyt (illustrator)

Skye Bolluyt is an NYC-based illustrator who thrives on giving invisible moods visual expression, through a bold yet detailed style that packs a punch without loss of nuance. Skye is insatiably curious, so her work has been published in a variety of magazines, advertising campaigns, and children's books: the likes of Communication Arts, Juxtapoz, Portland Mercury, Pinna podcasts, 3x3 Magazine, Growing IQ LLC, among others.
2022-12-06 11:56:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4486539959907532, "perplexity": 1476.2536546143183}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711077.50/warc/CC-MAIN-20221206092907-20221206122907-00361.warc.gz"}
https://research.birmingham.ac.uk/en/publications/the-xxl-survey-ii-the-bright-cluster-sample-catalogue-and-luminos
# The XXL Survey: II. The bright cluster sample: catalogue and luminosity function

F. Pacaud, N. Clerc, P. A. Giles, C. Adami, T. Sadibekova, M. Pierre, B. J. Maughan, M. Lieu, J. P. Le Fèvre, S. Alis, B. Altieri, F. Ardila, I. Baldry, C. Benoist, M. Birkinshaw, L. Chiappetti, J. Démoclès, D. Eckert, A. E. Evrard, L. Faccioli, F. Gastaldello, L. Guennou, C. Horellou, A. Iovino, E. Koulouridis, V. Le Brun, C. Lidman, J. Liske, S. Maurogordato, F. Menanteau, M. Owers, B. Poggianti, D. Pomarède, E. Pompei, T. J. Ponman, D. Rapetti, T. H. Reiprich, G. P. Smith, R. Tuffs, P. Valageas, I. Valtchanov, J. P. Willis, F. Ziparo

Research output: Contribution to journal › Article › peer-review

80 Citations (Scopus)

## Abstract

Context. The XXL Survey is the largest survey carried out by the XMM-Newton satellite and covers a total area of 50 square degrees distributed over two fields. It primarily aims at investigating the large-scale structures of the Universe using the distribution of galaxy clusters and active galactic nuclei as tracers of the matter distribution.

Aims. This article presents the XXL bright cluster sample, a subsample of 100 galaxy clusters selected from the full XXL catalogue by setting a lower limit of $3\times 10^{-14}\,\mathrm{erg \,s^{-1}cm^{-2}}$ on the source flux within a 1$^{\prime}$ aperture.

Methods. The selection function was estimated using a mixture of Monte Carlo simulations and analytical recipes that closely reproduce the source selection process. An extensive spectroscopic follow-up provided redshifts for 97 of the 100 clusters. We derived accurate X-ray parameters for all the sources. Scaling relations were self-consistently derived from the same sample in other publications of the series. On this basis, we study the number density, luminosity function, and spatial distribution of the sample.

Results.
The bright cluster sample consists of systems with masses between $M_{500}=7\times 10^{13}$ and $3\times 10^{14} M_\odot$, mostly located between $z=0.1$ and 0.5. The observed sky density of clusters is slightly below the predictions from the WMAP9 model, and significantly below the predictions from the Planck 2015 cosmology. In general, within the current uncertainties of the cluster mass calibration, models with higher values of $\sigma_8$ and/or $\Omega_m$ appear more difficult to accommodate. We provide tight constraints on the cluster differential luminosity function and find no hint of evolution out to $z\sim1$. We also find strong evidence for the presence of large-scale structures in the XXL bright cluster sample and identify five new superclusters.

Original language: English
Article number: A2
Number of pages: 25
Journal: Astronomy and Astrophysics
Volume: 592
Online: 15 Jun 2016
DOI: https://doi.org/10.1051/0004-6361/201526891
Publication status: Published - Aug 2016

## Keywords

• cosmological parameters
• Surveys
• X-rays: galaxies: clusters
• galaxies: clusters: intracluster medium
• large-scale structure of Universe
2022-01-29 13:04:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4189145565032959, "perplexity": 7294.85591650276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320306181.43/warc/CC-MAIN-20220129122405-20220129152405-00365.warc.gz"}
http://wikieducator.org/Thread:Comments_on_Your_User_Page_(37)
Fragment of a discussion from User talk:Yanubha

Nice to hear from you. It's a great compliment for me, as I have seen your user page and I was really surprised to see the activities you are involved in... great job!
2017-11-25 02:29:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41578829288482666, "perplexity": 716.6620400286329}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809229.69/warc/CC-MAIN-20171125013040-20171125033040-00085.warc.gz"}
https://math.stackexchange.com/questions/3075697/complex-conjugate-and-field-extension
# Complex conjugate and Field Extension

Let $$E$$ be a subfield of $$\mathbb{C}$$ and let $$\overline{E}=\{\overline{z} \, |\, z \in E \}$$, with $$\overline{z}$$ being the complex conjugate of $$z$$. Let $$K$$ be a subfield of $$\mathbb{C}$$ with $$\overline{K}=K$$, and let $$w\in \mathbb{C}$$ with $$w^2 \in K$$. Is (and when is) $$\overline{K(w)}=K(w)$$?

I have no idea how to show it and would be thankful for hints (please no solutions at this point). Some things I know: let $$w\notin K$$. Since $$w\in \mathbb{C}$$ and $$w^2 \in K$$, we have $$[K(w):K]=2$$, so the minimal polynomial of $$w$$ over $$K$$ has degree $$2$$. I also know that $$\mathbb{C}$$ is algebraically closed.

Hint(s): You know that $$[K(\omega):K]=2$$; what does that mean for what the elements of $$K(\omega)$$ will look like? Also, notice that the function that takes conjugates has some nice algebraic properties. Finally, if $$\omega$$ is a root of a quadratic over $$K$$, say $$x^2+ax+b$$ (with $$a$$ and $$b\in K$$), what can you say about $$\overline\omega$$? (Extra hint below.)

What field can you find $$\overline\omega$$ in? Look at the specific fact that $$\omega^2\in K$$ and find a condition on $$\omega$$'s components.

First of all, thank you for taking the time and helping:

1.) $$[K(w):K]=2 \Rightarrow \{1,w\}$$ is a $$K$$-basis of $$K(w)$$. Every element of $$K(w)$$ can be written as $$a+bw$$ with $$a,b\in K$$.

2.) The function that takes conjugates is a field homomorphism. It's easy to verify that $$\overline{a+b}=\overline{a}+\overline{b}$$ and $$\overline{a\cdot b}=\overline{a}\cdot \overline{b}$$. This might also mean that conjugation gives an embedding $$K(w)\rightarrow \mathbb{C}$$, because $$\overline{K}=K$$?

3.) If $$w$$ is a root of that polynomial, isn't $$\overline{w}$$ also a root, since $$a,b \in K$$ and $$\overline{K}=K$$?

From here on I just don't know what to do...

• For item 3: Consider the case $K=\Bbb{Q}(i)$.
Let $w=\sqrt{2+i}$ (to be specific, let's use the square root in the first quadrant). Then $w^2\in K$. Furthermore, $w$ is a zero of the polynomial $x^2-(2+i)$ (which has coefficients in $K$). But $\overline{w}$ is not a zero of that polynomial. Instead, it is a zero of the polynomial $x^2-(2-i)$. – Jyrki Lahtonen Jan 17 at 6:43
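For readers following along, the key computation behind this counterexample (our addition, not part of the original comment) uses only that conjugation is a ring homomorphism:

```latex
\overline{w}^{\,2} \;=\; \overline{w^{2}} \;=\; \overline{2+i} \;=\; 2-i \;\neq\; 2+i ,
```

so $\overline{w}$ is a root of $x^{2}-(2-i)$ rather than of $x^{2}-(2+i)$. In general, conjugation sends a root of a polynomial to a root of the *conjugated* polynomial; the hypothesis $\overline{K}=K$ only guarantees the conjugated polynomial still has coefficients in $K$, not that it is the same polynomial.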
2019-05-26 05:38:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 43, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9561716914176941, "perplexity": 108.72883773873923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258849.89/warc/CC-MAIN-20190526045109-20190526071109-00421.warc.gz"}
https://ftp.aimsciences.org/article/doi/10.3934/dcdsb.2020347
# American Institute of Mathematical Sciences

## Optimal control strategies for an online game addiction model with low and high risk exposure

School of Science, Guilin University of Technology, Guilin, Guangxi 541004, China

* Corresponding author: Tingting Li

Received May 2020, Revised August 2020, Published November 2020

Fund Project: The second author is supported by the Basic Competence Promotion Project for Young and Middle-aged Teachers in Guangxi, China (2019KY0269)

In this paper, we establish a new online game addiction model with low and high risk exposure. With the help of the next generation matrix, the basic reproduction number $R_{0}$ is obtained. By constructing a suitable Lyapunov function, the equilibria of the model are shown to be globally asymptotically stable. We use optimal control theory to study the optimal solution problem with three kinds of control measures (isolation, education, and treatment) and obtain the expression of the optimal control. In the simulation, we first verify the global asymptotic stability of the disease-free equilibrium and the endemic equilibrium, and observe that trajectories with different initial values converge to the equilibria. Then the simulations of nine control strategies are obtained by the forward-backward sweep method, and each is compared with the situation without control. The results show that the three kinds of control measures should be implemented simultaneously according to the optimal control strategy, which can effectively reduce game addiction.

Citation: Youming Guo, Tingting Li. Optimal control strategies for an online game addiction model with low and high risk exposure.
Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2020347

Figure captions:

- Transfer diagram of model
- DFE $D_{0} = (829,0,0,0,0,0)$ is Globally Asymptotically Stable when $R_{0} = 0.5778 < 1$ and $\beta = 0.2$
- EE $D^{*} = (358.829, 20.903, 31.354, 67.916, 25.648, 324.35)$ is Globally Asymptotically Stable when $R_{0} = 2.3111 > 1$ and $\beta = 0.8$
- Dynamical behavior of infected when $R_{0} = 0.5778$ and $\beta = 0.2$
- Dynamical behavior of infected when $R_{0} = 2.3111$ and $\beta = 0.8$
- Graphical results for strategies A, B, C, D, E, F, G, H, and I (one figure per strategy)

Table: Estimation of parameters

| Parameter | Description | Value |
|---|---|---|
| $\mu$ | Natural supplementary and death rate | 0.05 per week |
| $\theta$ | Proportion of individuals who become low risk exposed | 0.4 per week |
| $\beta$ | Contact transmission rate | 0.1 $\sim$ 0.8 per week |
| $v_{1}$ | Proportion of $E_{1}$ who become infected | 0.2 per week |
| $v_{2}$ | Proportion of $E_{1}$ who become professional | 0.2 per week |
| $w_{1}$ | Proportion of $E_{2}$ who become infected | 0.3 per week |
| $w_{2}$ | Proportion of $E_{2}$ who become professional | 0.1 per week |
| $k_{1}$ | Proportion of $I$ who become quitting | 0.05 per week |
| $k_{2}$ | Proportion of $I$ who become professional | 0.1 per week |
| $\delta$ | Proportion of $P$ who become quitting | 0.5 per week |
| $u_{1}$ | The decreased proportion by isolation | Variable |
| $u_{2}$ | The decreased proportion in $E_{1}$ by prevention | Variable |
| $u_{3}$ | The decreased proportion in $E_{2}$ by prevention | Variable |
| $u_{4}$ | The decreased proportion in $I$ by treatment | Variable |

Table: Results of different control strategies

| Strategy | Total infectious individuals ($\int_{0}^{t_f}(E_{1}+E_{2}+I)\,dt$) | Averted infectious individuals | Objective function $J$ |
|---|---|---|---|
| Without control | 7461.1302 | $-$ | $8.5947\times 10^{6}$ |
| Strategy A | 526.3468 | 6934.7835 | $1.3646\times 10^{6}$ |
| Strategy B | 1426.9073 | 6034.2229 | $2.5242\times 10^{6}$ |
| Strategy C | 701.3874 | 6759.7428 | $1.7413\times 10^{6}$ |
| Strategy D | 524.2143 | 6936.9159 | $1.3592\times 10^{6}$ |
| Strategy E | 525.4126 | 6935.7176 | $1.3619\times 10^{6}$ |
| Strategy F | 525.0718 | 6936.0585 | $1.3618\times 10^{6}$ |
| Strategy G | 579.8124 | 6881.3178 | $4.784\times 10^{6}$ |
| Strategy H | 1626.7971 | 5834.3331 | $2.7511\times 10^{6}$ |
| Strategy I | 658.0017 | 6803.1286 | $2.6232\times 10^{6}$ |
https://www.researchgate.net/scientific-contributions/Matthew-OKelly-2105494295
# Matthew O'Kelly's research while affiliated with University of Pennsylvania and other places

## Publications (21)

- **Article**: Autonomous vehicles (AVs) are already driving on public roads around the US; however, their rate of deployment far outpaces quality assurance and regulatory efforts. Consequently, even the most elementary tasks, such as automated lane keeping, have not been certified for safety, and operations are constrained to narrow domains. First, due to the li...
- **Preprint**: Learning-based methodologies increasingly find applications in safety-critical domains like autonomous driving and medical robotics. Due to the rare nature of dangerous events, real-world testing is prohibitively expensive and unscalable. In this work, we employ a probabilistic approach to safety evaluation in simulation, where we are concerned wit...
- **Preprint**: Balancing performance and safety is crucial to deploying autonomous vehicles in multi-agent environments. In particular, autonomous racing is a domain that penalizes safe but conservative policies, highlighting the need for robust, adaptive strategies. Current approaches either make simplifying assumptions about other agents or lack robust mechanis...
- **Article**: Teaching autonomous systems is challenging because it is a rapidly advancing cross-disciplinary field that requires theory to be continually validated on physical platforms. For an autonomous vehicle (AV) to operate correctly, it needs to satisfy safety and performance properties that depend on the operational context and interaction with environme...
- **Preprint**: While autonomous vehicle (AV) technology has shown substantial progress, we still lack tools for rigorous and scalable testing. Real-world testing, the *de-facto* evaluation method, is dangerous to the public. Moreover, due to the rare nature of failures, billions of miles of driving are needed to statistically validate performance claims....
- **Chapter** (full-text available): The testing of Autonomous Vehicles (AVs) requires driving the AV billions of miles under varied scenarios in order to find bugs, accidents and otherwise inappropriate behavior. Because driving a real AV that many miles is too slow and costly, this motivates the use of sophisticated 'world simulators', which present the AV's perception pipeline with...
- **Preprint** (full-text available): In 2005 DARPA labeled the realization of viable autonomous vehicles (AVs) a grand challenge; a short time later the idea became a moonshot that could change the automotive industry. Today, the question of safety stands between reality and solved. Given the right platform the CPS community is poised to offer unique insights. However, testing the lim...
- **Preprint**: Modern treatments for Type 1 diabetes (T1D) use devices known as artificial pancreata (APs), which combine an insulin pump with a continuous glucose monitor (CGM) operating in a closed-loop manner to control blood glucose levels. In practice, poor performance of APs (frequent hyper- or hypoglycemic events) is common enough at a population level tha...
- **Preprint**: While recent developments in autonomous vehicle (AV) technology highlight substantial progress, we lack tools for rigorous and scalable testing. Real-world testing, the *de facto* evaluation environment, places the public in danger, and, due to the rare nature of accidents, will require billions of miles in order to statistically validate...
- **Article**: 2018 Curran Associates Inc. All rights reserved. While recent developments in autonomous vehicle (AV) technology highlight substantial progress, we lack tools for rigorous and scalable testing. Real-world testing, the de facto evaluation environment, places the public in danger, and, due to the rare nature of accidents, will require billions of mile...
- **Article** (full-text available): This article elaborates the approaches that can be used to verify an autonomous vehicle (AV) before giving it a driver's license. Formal methods applied to the problem of AV verification include theorem proving, reachability analysis, synthesis, and maneuver design. Theorem proving is an interactive technique in which the computer is largely respon...
- **Article** (full-text available): The testing of Autonomous Vehicles (AVs) requires driving the AV billions of miles under varied scenarios in order to find bugs, accidents and otherwise inappropriate behavior. Because driving a real AV that many miles is too slow and costly, this motivates the use of sophisticated 'world simulators', which present the AV's perception pipeline with...
- **Conference Paper** (full-text available)
- **Article** (full-text available): This paper details the design of an autonomous vehicle CAD toolchain, which captures formal descriptions of driving scenarios in order to develop a safety case for an autonomous vehicle (AV). Rather than focus on a particular component of the AV, like adaptive cruise control, the toolchain models the end-to-end dynamics of the AV in a formal way su...
- **Conference Paper** (full-text available): Relaxed notions of decidability widen the scope of automatic verification of hybrid systems. In quasi-decidability and δ-decidability, the fundamental compromise is that if we are willing to accept a slight error in the algorithm's answer, or a slight restriction on the class of problems we verify, then it is possible to obtain practically useful a...
- **Article** (full-text available): Autonomous vehicles (AVs) have already driven millions of miles on public roads, but even the simplest scenarios have not been certified for safety. Current methodologies for the verification of AV's decision and control systems attempt to divorce the lower level, short-term trajectory planning and trajectory tracking functions from the behavioral...
- **Article** (full-text available): Diabetes associated complications are affecting an increasingly large population of hospitalized patients. Since glucose physiology is significantly impacted by patient-specific parameters, it is critical to verify that a clinical glucose control protocol is safe across a wide patient population. A safe protocol should not drive the glucose level i...

## Citations

- ... AdvSim [18] benchmarks several black-box optimization algorithms to search adversarial trajectories to obtain safety-critical scenarios for the full autonomy stack, but its adversarial objective is targeted for planning, and it is designed for the single vehicle system. Another stream of work [31], [32] formulates the scenario generation problem as the rare event simulation to sample failure scenarios for the single-agent autonomy system. In contrast, we focus on producing challenging scenarios for the LiDAR-based multiagent V2X perception system where both the agents' poses and the selection of collaborators are searched to optimize an adversarial objective customized for perception. ...
- ... Motorsport racing has proven to enable knowledge transfer of cutting-edge research to the automotive industry [1,2,3]. In particular, autonomous racing presents a new frontier that promises to revolutionize autonomous driving by enabling and stress-testing new technologies and algorithms in the field of Self Driving Cars (SDC) [4,5,6,7]. For this reason, many autonomous racing competitions have recently emerged, featuring different platforms and form-factors, from full-scaled Indy Autonomous [6] and Formula Student Driverless [8] to scaled F1TENTH [5,9]. ...
- ... In an attempt to push the limits towards the development of new technologies, numerous competitions are organized and held in major international conferences. Above all, the F1/10 Autonomous Racing competition [2], [3] is one of the most popular; its name derives from the use of 1:10 scaled-down car models.
Depending on the task objective, the problem is faced with different levels of detail and approximations [4], [5]. ...
- ... Recent courses on edX share similar motivations as ours but differ in the selection of topics [23], [24]. It is also worth mentioning broader open-source initiatives, such as the F1Tenth initiative [25] and BWSI (Beaver Works Summer Institute) [26], that provide introductory-level courses for seniors and K-12 students. ...
- ... Park et al. [41] developed scenarios for evaluating safety measures during take-over situations on virtually simulated highways. Similarly, Abbas et al. [42] demonstrated various dangerous situations during autonomous driving through a virtual simulator based on the Grand Theft Auto V game. Recently, simulation methods have been studied in conjunction with AI technologies. ...
- ... Research on the safety of autonomous vehicles (AVs) has so far focused on evaluating their impacts using simulation models (Morando et al. 2018; Papadoulis, Quddus, and Imprialou 2019), computational analysis (Kalra and Paddock 2016), scenario planning (Millard-Ball 2016), public perceptions (Moody, Bailey, and Zhao 2020), communication networks (Hussain and Zeadally 2019), liability and privacy issues (Lim and Taeihagh 2018), and predictions for potential economic savings (Clements and Kockelman 2017). Most of the existing literature that examined the safety implications of AVs has adopted modelling and quantitative approaches used in the engineering and mathematics fields (Morando et al. 2018; Papadoulis, Quddus, and Imprialou 2019; Abbas et al. 2019). Research by Kassens-Noor et al. (2020) shows that there are over 100,000 engineering articles on AVs, compared with only 200 articles covering the planning aspects. ...
- ... Second, formal specifications enable automated testing and monitoring for AV, e.g., see [8], [15], for requirements-based testing.
Third, formal specifications on the perception system can also function as a requirements language between original equipment manufacturers (OEM) and suppliers. ...
- ... Testing scenario search and generation has also been widely applied to the testing and verification of autonomous systems [20], [21], [22], [23], [24]. Some approaches employ optimization or adaptive sampling techniques to accelerate finding test cases with highest risk to the system [25], [26], [27], [28], [29], [30], [31]. ...
- ... For example, manufacturers have used the technology to make dental implants (Dawood et al., 2015) or even bone tissues (Bose et al., 2013). CAD tools have also evolved with regards to visualisation features, with photo-realistic renderings becoming more commonplace, for instance to better picture violations of requirements in autonomous vehicle safety assessment (O'Kelly et al., 2017) or to simulate coating appearance depending on lighting (Jhamb et al., 2020). ...
- ... The idea behind [13] is that we can first search the set of behaviors to find those executions with low robustness. Assuming continuity of behavior, low-robustness executions are surrounded by other low-robustness executions, and possibly by executions with negative robustness (Figure 4). ...
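The last snippet describes robustness-guided falsification: search over scenario parameters for executions with low robustness, since an execution with negative robustness violates the specification. A minimal generic sketch under toy assumptions (the scenario parameter, robustness function, and random-search strategy below are illustrative, not taken from any cited paper):

```python
import random

def robustness(gap: float) -> float:
    """Toy robustness: signed safety margin (metres) for a scenario
    parameterised by the initial gap to a lead vehicle.
    Positive means the execution is safe; negative means a violation."""
    return gap - 1.5  # hypothetical 1.5 m required stopping distance

def falsify(trials: int = 200, seed: int = 0):
    """Random search for the lowest-robustness execution.
    Returns the parameter found and its robustness value."""
    rng = random.Random(seed)
    best_param, best_rob = None, float("inf")
    for _ in range(trials):
        gap = rng.uniform(0.0, 10.0)  # sample a candidate scenario
        rob = robustness(gap)
        if rob < best_rob:
            best_param, best_rob = gap, rob
    return best_param, best_rob

param, rob = falsify()
print(rob < 0)  # a falsifying (negative-robustness) execution was found
```

Because robustness varies continuously with the scenario parameter, a practical tool would typically refine the lowest-robustness sample with a local search rather than rely on random sampling alone.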
https://aacrjournals.org/cebp/article/22/9/1577/69739/Culturally-Targeted-Patient-Navigation-for
Background: Patient navigation has been an effective intervention to increase cancer screening rates. This study focuses on predicting outcomes of screening colonoscopy for colorectal cancer among African Americans using different patient navigation formats.

Methods: In a randomized clinical trial, patients more than 50 years of age without significant comorbidities were randomized into three navigation groups: peer-patient navigation (n = 181), pro-patient navigation (n = 123), and standard (n = 46). Pro-patient navigators were health care professionals who conducted culturally targeted navigation, whereas peer-patient navigators were community members trained in patient navigation who also discussed their personal experiences with screening colonoscopy. Two assessments gathered sociodemographic, medical, and intrapersonal information.

Results: The screening colonoscopy completion rate was 75.7% across all groups, with no significant differences in completion between the three study arms. Annual income of more than $10,000 was an independent predictor of screening colonoscopy adherence. Unexpectedly, low social influence also predicted screening colonoscopy completion.

Conclusions: In an urban African American population, patient navigation was effective in increasing screening colonoscopy rates to 15% above the national average, regardless of patient navigation type or content.

Impact: Because patient navigation successfully increases colonoscopy adherence, cultural targeting may not be necessary in some populations. Cancer Epidemiol Biomarkers Prev; 22(9); 1577–87. ©2013 AACR.

Colorectal cancer is the third most commonly diagnosed cancer in African Americans, and its incidence and mortality rates are higher than those of all other ethnic groups. One factor that may contribute to this trend is the lower rate of participation in colorectal cancer screening, which is critical to the prevention and early detection of colorectal cancer.
If precancerous polyps in the colon and rectum are identified (through colonoscopy or flexible sigmoidoscopy screening) and removed (through polypectomy), patients can live normally with no further treatment required. Current data indicate that the removal of precancerous polyps decreases colorectal cancer incidence by 75% to 90% (1). Although screening colonoscopy (one of several methods of screening normal-risk adults ages 50 years or more) is recommended by the American Cancer Society, the U.S. Multisociety Task Force on Colorectal Cancer, and the American College of Radiology (2), colorectal cancer screening rates in general, and colonoscopy rates specifically, remain low, especially among African Americans (3). Patient navigation (Freeman and colleagues; ref. 4), which involves a specifically trained person within the health care setting who helps the patient obtain medical care, has received considerable attention as a way to improve cancer care among minority patients. Most published patient navigation programs assist patients in obtaining follow-up of suspicious findings and treatment. Previous studies and national programs have reported that patient navigation for individuals with abnormal findings or cancer diagnoses is beneficial and results in more timely treatment and resolution (5, 6). Recently, patient navigation has been expanded to assist with obtaining cancer screening. Studies, mainly focused on breast and cervical screening, report that patient navigation increases screening adherence (see review; ref. 7). Although a handful of recent studies have examined the effectiveness of patient navigation for colorectal cancer screening, few have focused solely on patient navigation for screening colonoscopy. Related studies (e.g., Lasser and colleagues; ref. 8 and Percac-Lima and colleagues; ref. 9) showed significantly higher rates of colonoscopy completion in navigated over nonnavigated groups; however, completion rates for both groups were still below 40%.
Our group was among the first to introduce patient navigation to facilitate colonoscopy completion among minority primary care patients, increasing adherence from 40% to 66% (10).

### Peers as navigators

Research in public health and health education confirms the benefits of peer educators in healthcare interventions (11–13). In cancer education, peers increased smoking cessation and were more cost-effective (14). For breast cancer, peer-led education programs increased mammography and self-examination among African Americans (15, 16). We hypothesize that racially matched peer navigators can model ways of coping with anxiety about colonoscopy screening, and successful engagement with mainstream health care. This hypothesis was informed by reference group-based social influence theory (17); an important element is informational social influence (the extent to which referents or peers from one's racial group, age group, or gender serve as a source of credible information). In the context of colorectal cancer screening, one source of information is a peer's own experience with colonoscopy. Through a peer navigator's self-disclosure about colonoscopy as a “similar other,” the patient may obtain information relevant to his or her own screening expectations. The information provided by a peer navigator may serve to model attitudes and behaviors associated with successful adherence, such as effective communication with healthcare providers and screening self-efficacy. Peer navigators can also model strategies to overcome barriers identified among African Americans such as limited colorectal cancer knowledge, low perceived colorectal cancer risk, colorectal cancer fatalism, and medical mistrust (18–24). Targeted interventions have been developed on the basis of demographic, behavioral, and psychosocial characteristics shared by members of subgroups (25).
Our conceptualization of patient navigation for increasing screening colonoscopy adherence suggests the importance of determining intrapersonal barriers which affect understanding the consequences of adherence to screening colonoscopy (26), guided by cognitive-behavioral theory (27–29). Thus, patient navigation is a strategy to reduce the aversive consequences associated with screening behavior. Our patient navigation approach systematically addresses the consequences or “punishments” as represented by intrapersonal barriers, including colonoscopy-specific fear, worry, anxiety, and perceived disadvantages of colonoscopy (30–36). Thus, combining patient navigation with culturally targeted messages (CTPN) to overcome system barriers and help people understand the importance of screening colonoscopy may have a greater impact than patient navigation alone. This study sought to examine the impact of three forms of patient navigation. The standard of care (STD) focused on the basic facts of screening and provided logistical assistance to patients (e.g., making an appointment, reminder calls). We investigated enhancing STD through cultural targeting including: (i) emphasis on the colorectal cancer problem among African Americans and the relevance of colonoscopy, (ii) discussion of culturally specific facts (for African Americans) and personal colonoscopy barriers, and (iii) modeling effective coping by a peer navigator (someone who has completed colonoscopy) to increase self-efficacy of a patient. In addition, we examined the effectiveness of a peer delivering the CTPN (peer-patient navigation) versus professional (health educator) navigation (pro-patient navigation). Thus, in this randomized clinical trial (RCT), we examined patient navigation, delivered in three ways (peer-patient navigation, pro-patient navigation, and STD), to address the low adherence to physician recommended screening colonoscopy by African American patients. 
We also examined the potential impact of sociodemographic, medical, and intrapersonal factors as predictors of screening completion.

### Study setting and recruitment

In this Institutional Review Board-approved RCT, African American primary care patients referred for screening colonoscopy by their primary care physician (PCP) at a nonacute medical visit were recruited at Mount Sinai's primary care clinic between May 2008 and December 2011. PCPs and medical assistants referred their patients. Interested patients met with a research assistant to discuss the study and to sign informed consent. The baseline assessment was also conducted as an interview during this meeting. African American patients more than 50 years of age without active gastrointestinal symptoms, significant comorbidities, or a history of inflammatory bowel disease or colorectal cancer were included. Patients must not have undergone colonoscopy within the past 5 years (on the basis of the clinical practice at our institution) or have been current with other forms of colorectal cancer screening (e.g., FOBT, flexible sigmoidoscopy). After recruitment, referrals were reviewed by the Division of Gastroenterology to confirm medical eligibility and evaluate any contraindications to colonoscopy or sedation. We received 589 referrals to the study. Of these, 532 (90.3%) consented and were enrolled.

### Nonnavigated participants

Of the 532 enrolled patients, 15 were ineligible (e.g., no working phone). Furthermore, during the medical clearance process, some patients were deemed ineligible for direct referral (e.g., uncontrolled diabetes, cardiac concerns), were referred to our gastroenterology clinic, and were not randomized (N = 106). Participants with medical clearance who were randomized to one of the study arms but were never reached for their scheduling call had their referral returned to their PCP (nonnavigated; N = 61) and were excluded from further analyses.
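The enrollment flow just described can be tallied as a quick consistency check; all counts below come directly from the text:

```python
referred = 589
consented = 532       # 90.3% of referrals
ineligible = 15       # e.g., no working phone
not_cleared = 106     # referred to the gastroenterology clinic, not randomized
never_reached = 61    # referral returned to PCP, excluded from analyses

navigated = consented - ineligible - not_cleared - never_reached
print(navigated)                              # 350 participants navigated
print(round(100 * consented / referred, 1))   # 90.3 (% consent rate)
```

The arithmetic reproduces both the reported 350 navigated participants and the 90.3% consent rate.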
### Navigated participants

Randomization and patient navigation assignments were made by the project coordinator using our statistician's randomization chart. All navigation services (and subsequent assessments) were conducted by telephone. There were two navigation call scripts. The first included a culturally targeted message designed to convey the importance of colorectal cancer prevention for African Americans and asked about patients' concerns. The second was an STD script to simply schedule the procedure and answer any questions. The protocol also included being navigated by either a professional (pro-patient navigation) or community member (peer-patient navigation). Overall, 350 participants were navigated. On the basis of our preliminary data on the projected screening colonoscopy completion rates for each group, we used a priori power calculations to determine that participants should be randomized in a ratio of 3:2:1 (peer-patient navigation, N = 181; pro-patient navigation, N = 123; and STD, N = 46) to best ensure statistical power for the anticipated effects. For STD, we assumed that screening uptake would be 40%, whereas pro-patient navigation would be 66% and peer-patient navigation would be 68%. With this size sample, power for the comparison of peer-patient navigation with STD would be 0.94 and pro-patient navigation to STD would be 0.87.

### Patient navigators

Five African American peer-patient navigators and four African American pro-patient navigators were recruited and trained (37). Peer-patient navigators (paid hourly) were eligible for the position if they were more than 50 years old and had recently undergone colonoscopy screening. All pro-patient navigators (salaried staff) held a Bachelor's degree, had research experience, and had worked with minority communities. Additional details about the training of the navigators, their characteristics, and payments have previously been published (see Shelton and colleagues; ref.
37).

### Intervention protocols

#### Culturally targeted message.

For the two culturally targeted groups (peer-patient navigation and pro-patient navigation), all navigators were African American to maintain racial concordance. Each call included information about how colorectal cancer specifically impacts African Americans (e.g., “black Americans are more likely to get colon cancer than people in other racial and ethnic groups”) and asked participants about any concerns. The calls made by the peer-patient navigators also included their own story of completing their colonoscopy to model effective coping. In the STD group, there was no mention of culture or barriers. Everyone received information about the importance of colorectal cancer screening and specific instructions for colonoscopy preparation.

#### Telephone calls.

The overall structure of each intervention group was the same. All participants received 3 scripted phone calls: a scheduling call, a call 2 weeks before their colonoscopy date, and a call 3 days before the procedure. Following the first call, written instructions for the bowel preparation were mailed. During the follow-up calls, patient navigators reminded participants of their appointments, confirmed receipt of mailed information, reviewed bowel preparation instructions, assessed transportation needs, and provided education and support. Peer-patient navigators also discussed their own colonoscopy experience. In the STD group, calls were conducted by the pro-patient navigators. That is, the same pro-patient navigators conducted the navigation for two groups. To minimize contamination, written scripts were used. In addition, throughout the study we listened to 10% of the audio-recorded calls for fidelity purposes to ensure compliance with each condition, and different staff members completed the assessments.

### Assessments

In addition to the three telephone calls, there were two assessments.
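The a priori power figures quoted earlier (0.94 for peer-patient navigation vs. STD, 0.87 for pro-patient navigation vs. STD) can be roughly reproduced with a standard two-proportion normal approximation. This is an illustrative check under the stated completion assumptions, not necessarily the authors' exact calculation:

```python
from math import erf, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_proportion_power(p1, p2, n1, n2, z_alpha=1.959964):
    """Approximate power of a two-sided two-sample test of proportions,
    using the unpooled normal approximation."""
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return norm_cdf(abs(p1 - p2) / se - z_alpha)

# Assumed completion rates and arm sizes from the Methods section.
print(round(two_proportion_power(0.68, 0.40, 181, 46), 2))  # ~0.94 (peer vs. STD)
print(round(two_proportion_power(0.66, 0.40, 123, 46), 2))  # ~0.87 (pro vs. STD)
```

Both values match the reported powers, which supports the 3:2:1 allocation described above.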
Time 1 was completed at the time of consent (baseline), face-to-face as an interview. The time 2 assessment was completed over the phone 2 weeks before the scheduled colonoscopy, immediately following the reminder call. Each assessment took 20 to 30 minutes to complete, and participants were paid $20 for each. There were 3 main categories of variables: (i) demographic characteristics, (ii) medical care and colorectal cancer knowledge, and (iii) intrapersonal factors that have been reported as potential barriers or facilitators for colorectal cancer screening. Table 1 shows the timing for each assessment.

Table 1. Timing and content of assessments

| Measure | α | Time 1 (baseline) | Time 2 (2 weeks before scheduled colonoscopy) |
| --- | --- | --- | --- |
| Demographic characteristics | n/a | | |
| Health behaviors | n/a | | |
| Interpersonal communication with physician | 0.868 | | |
| History of cancer | n/a | | |
| Colorectal cancer knowledge | 0.420 | | |
| Fear of colonoscopy | 0.861 | | |
| Fatalism | 0.829 | | |
| Pros and cons | 0.637 | | |
| Multidimensional Inventory of Black Identity | 0.641 | | |
| Group-based medical mistrust | 0.855 | | |
| Collective self-esteem | 0.559 | | |
| Self-efficacy | 0.843 | | |
| Social influence | 0.895 | | |
| Cancer anxiety | 0.444 | | |
| Cancer worry | 0.745 | | |
| Perceived risk for colorectal cancer | 0.526 | | |

NOTE: X indicates that the measure was included in the corresponding assessment.

### Demographic characteristics

At time 1, participants completed a general sociodemographic questionnaire about age, race/ethnicity, employment status, income, and education.
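The α column in Table 1 reports internal-consistency reliability, conventionally Cronbach's alpha: α = k/(k−1) · (1 − Σ item variances / total-score variance). A generic sketch of the computation on made-up item responses (the data below are illustrative, not the study's):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score lists
    (each inner list = one item's scores across respondents)."""
    k = len(items)
    n = len(items[0])

    def variance(xs):  # population variance, as in the usual formula
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    sum_item_var = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - sum_item_var / variance(totals))

# Hypothetical 5-point Likert responses: 4 items x 6 respondents.
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 4, 2, 4, 3, 5],
    [3, 5, 3, 4, 2, 4],
]
print(round(cronbach_alpha(items), 2))  # 0.9 for this toy data
```

Values around 0.7 or above are usually read as acceptable reliability, which puts measures such as social influence (0.895) on firm ground and flags colorectal cancer knowledge (0.420) as weak.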
### Medical care and colorectal cancer knowledge

Participants answered questions about their health behaviors, knowledge of colorectal cancer, and relationship with health care providers.

#### Health behaviors.

Participants answered questions about their health habits including postponing medical care, not following doctor's advice, and frequency of previous year medical care.

#### Interpersonal communication (with referring MD).

An 8-item measure assessed participants' level of comfort and satisfaction in their communication with the doctor/provider who referred them for the colonoscopy. The measure was adapted from prior literature (38) to be specific to screening colonoscopy. Participants rated how strongly they agreed/disagreed on a 5-point Likert scale (1 = strongly disagree and 5 = strongly agree) with statements about physician communication (e.g., “I can easily talk about personal things with my doctor”).

#### Colorectal cancer knowledge.

Our own measure for assessing colorectal cancer knowledge (39) was used and included ten true–false statements (e.g., “a person could have colorectal cancer without having any symptoms”). Colonoscopy completion was assessed via medical record review.

### Intrapersonal factors

#### Fear of colonoscopy.

Participants' fear of colorectal cancer screening was assessed using a 6-item measure developed by Manne and colleagues (40). On the basis of a 5-point Likert scale (1 = not at all fearful and 5 = extremely fearful), participants were asked to indicate how fearful they felt about the preparation, procedure, and results.

#### Fatalism.

The Powe Fatalism Inventory (41) was adapted to measure colorectal cancer fatalism. The inventory consisted of five yes/no items about the implications of colorectal cancer diagnosis (e.g., “I believe that if someone gets colorectal cancer, his/her time to die is near”).

#### Pros and cons about colonoscopy screening.
A 17-item measure, adapted from prior research (35), asked, on a 5-point Likert scale, how strongly participants agreed/disagreed (1 = strongly disagree and 5 = strongly agree) about the pros or cons of getting a colonoscopy (e.g., “it would be inconvenient to have a colonoscopy at this time”).

#### Ethnic identity.

The 8-item centrality subscale of the Multidimensional Inventory of Black Identity was used to measure participants' ethnic identity, how they feel about it, and how much their behavior is affected by it (42). Participants indicated on a 5-point Likert scale how strongly they agreed/disagreed (1 = strongly disagree and 5 = strongly agree) with statements about their identity and role in the Black community (e.g., “in general, being Black is an important part of my self-image.”).

#### Medical mistrust.

The 6-item suspicion subscale of the group-based medical mistrust scale was used to assess participants' beliefs about the care they and people of their racial and ethnic group receive from the health care system (43). Participants indicated on a 5-point Likert scale how strongly they agreed/disagreed (1 = strongly disagree and 5 = strongly agree) with statements about trust or suspicion of health care staff (e.g., “people of my ethnic group should be suspicious of information from doctors and health care professionals”).

#### Collective self-esteem.

Collective self-esteem was assessed using an 8-item measure drawn from previous literature (44). Participants indicated on a 5-point Likert scale how strongly they agreed/disagreed (1 = strongly disagree and 5 = strongly agree) with statements about the importance of gender and age to their self-image (e.g., “my gender is an important reflection of who I am”).

#### Self-efficacy.

A 10-item measure, adapted from previous literature (45), assessed participants' confidence in their ability to complete a colonoscopy.
Participants indicated on a 5-point Likert scale how strongly they agreed/disagreed (1 = strongly disagree and 5 = strongly agree) with statements about carrying out specific tasks related to getting a screening colonoscopy (e.g., "I can get a colonoscopy even if I don't know what to expect").

#### Social influence.

A 4-item measure (36) evaluated social influence on participants' medical decisions; participants rated how strongly they agreed/disagreed with statements about the influence of their families and close friends (e.g., "my close friends think I should have a colonoscopy") on a 4-point Likert scale (1 = strongly disagree and 4 = strongly agree).

#### Cancer anxiety.

Two questions, adapted from previous research (46), assessed colorectal cancer anxiety (e.g., "Is thinking about colorectal cancer emotionally stressful?") on a 3-point scale (1 = not at all and 3 = very much).

#### Cancer worry.

Vernon and colleagues' (36) 3-item scale assessed colonoscopy worry. Participants indicated on a 4-point Likert scale how strongly they agreed/disagreed (1 = strongly disagree and 4 = strongly agree) with statements about screening consequences (e.g., "I am afraid of having an abnormal colonoscopy result").

#### Perceived risk of colorectal cancer.

Participants were asked three questions adapted from the 2005 Health Information National Trends Survey (47) about their perceived risk of getting colorectal cancer (e.g., "compared with the average (man/woman) your age, would you say you are…?"), with three answer choices rating the relative likelihood of getting colorectal cancer. Responses were averaged to generate mean scores for each medical factor and intrapersonal variable.

### Statistical analyses

All analyses were conducted using SPSS Statistics V19. The univariable analysis described participant characteristics, medical care, colorectal cancer knowledge, and intrapersonal factors. The χ2 test compared equality of proportions for demographic variables.
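As a concrete illustration of this χ2 comparison of proportions, a minimal sketch (using `scipy`, which is an assumption — the paper used SPSS — and the completer/noncompleter counts by employment status later reported in Table 2):

```python
from scipy.stats import chi2_contingency

# Completer/noncompleter counts by employment status (Table 2)
counts = [[98, 20],    # employed:   completers, noncompleters
          [167, 65]]   # unemployed: completers, noncompleters

# correction=False (no Yates continuity correction) reproduces the reported P = 0.022
chi2, p, dof, expected = chi2_contingency(counts, correction=False)
print(f"chi2 = {chi2:.3f}, df = {dof}, P = {p:.3f}")  # P ≈ 0.022
```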
One-way ANOVA tested equality of means. On the basis of the univariable results, a binary logistic regression model was developed to examine the association between screening colonoscopy completion and significant predictor variables, after adjusting for participant characteristics, medical care, colorectal cancer knowledge, and intrapersonal factors. Variables that were significant at the 0.2 level in the bivariable analyses were considered for the multivariable model, and were retained if they were significant at the 0.1 level (to indicate a trend) or if they exhibited a confounding effect. Statistical significance in the final multivariable model was set at 0.05. All statistical tests were two-sided.

Of the 589 patients recruited for this study, there were no significant age or gender differences between those who consented (N = 532) and those who refused to participate (N = 57). There were also no significant differences in age or gender between eligible, randomized participants who were navigated (N = 350) and those who could not be reached for navigation (N = 61).

### Colonoscopy completion rates

There were no significant differences in colonoscopy completion rates among the three study arms [N = 350; peer-patient navigation (74.0%), pro-patient navigation (76.4%), and standard (80.4%)], suggesting that all forms of patient navigation are highly effective. Thus, the focus of this report is on potential predictors of colonoscopy completion, regardless of study arm.

### Sociodemographic characteristics of completers and noncompleters

Comparative analyses of sociodemographic features of colonoscopy completers versus noncompleters are shown in Table 2. Unemployed patients were significantly less likely to complete the screening colonoscopy than employed patients [P = 0.022; OR = 0.524; 95% confidence interval (CI) = 0.300–0.918].
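These unadjusted odds ratios follow directly from the 2×2 counts in Table 2. A minimal sketch using the employment row and the standard normal approximation on the log odds ratio (plain Python, no external libraries):

```python
import math

# Table 2, employment row: completers / noncompleters
unemp_comp, unemp_non = 167, 65
emp_comp, emp_non = 98, 20

# Odds of completion for unemployed vs. employed patients
odds_ratio = (unemp_comp / unemp_non) / (emp_comp / emp_non)

# 95% CI via the normal approximation on the log odds ratio
se = math.sqrt(1/unemp_comp + 1/unemp_non + 1/emp_comp + 1/emp_non)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.3f}, 95% CI = {lo:.3f}-{hi:.3f}")
# OR = 0.524, 95% CI = 0.300-0.918, matching the reported values
```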
Participants with annual income less than $10,000 were significantly less likely to get a colonoscopy than those who earned more than $10,000 annually (P = 0.017; OR = 0.536; 95% CI = 0.319–0.899). Insurance status was also related to colonoscopy completion: patients insured through Medicare or Medicaid were significantly less likely to get their screening than patients with private or self-pay insurance (P = 0.019; OR = 0.466; 95% CI = 0.244–0.892). There were no notable differences in gender, age, marital status, or education level between completers and noncompleters.

Table 2. Sociodemographic and medical factors of completers versus noncompleters of screening colonoscopy

| Factor | Completers, N (%)^a | Noncompleters, N (%)^a | Total (N = 350), N (%)^b | P^c |
| --- | --- | --- | --- | --- |
| **Sociodemographic factors** | | | | |
| Gender: female | 175 (73.5) | 63 (26.5) | 238 (68.0) | 0.165 |
| Gender: male | 90 (80.4) | 22 (19.6) | 112 (32.0) | |
| Age 49–64 y | 199 (74.0) | 70 (26.0) | 269 (76.9) | 0.167 |
| Age 65+ y | 66 (81.5) | 15 (18.5) | 81 (23.1) | |
| Married | 49 (80.3) | 12 (19.7) | 61 (17.5) | 0.348 |
| Not married | 215 (74.7) | 73 (25.3) | 288 (82.5) | |
| Employed | 98 (83.1) | 20 (16.9) | 118 (33.7) | 0.022 |
| Unemployed | 167 (72.0) | 65 (28.0) | 232 (66.3) | |
| Education ≥ grade 13 | 95 (77.9) | 27 (22.1) | 122 (35.0) | 0.478 |
| Education ≤ grade 12 | 169 (74.4) | 58 (25.6) | 227 (65.0) | |
| Income ≤ $10,000 | 90 (68.2) | 42 (31.8) | 132 (42.3) | 0.017 |
| Income > $10,000 | 144 (80.0) | 36 (20.0) | 180 (57.7) | |
| Insurance: Medicare/Medicaid | 191 (72.6) | 72 (27.4) | 263 (75.1) | 0.019 |
| Insurance: private/self-pay | 74 (85.1) | 13 (14.9) | 87 (24.9) | |
| Insurance: Medicare | 76 (78.4) | 21 (21.6) | 97 (27.7) | 0.037 |
| Insurance: Medicaid | 115 (69.3) | 51 (30.7) | 166 (47.4) | |
| Insurance: private | 71 (85.5) | 12 (14.5) | 83 (23.7) | |
| Insurance: self-pay | 3 (75.0) | 1 (25.0) | 4 (1.1) | |
| Study arm: peer | 134 (74.0) | 47 (26.0) | 181 (51.7) | 0.648 |
| Study arm: pro | 94 (76.4) | 29 (23.6) | 123 (35.1) | |
| Study arm: standard | 37 (80.4) | 9 (19.6) | 46 (13.1) | |
| **Medical factors** | | | | |
| Regular doctor: yes | 244 (76.0) | 77 (24.0) | 321 (91.7) | 0.665 |
| Regular doctor: no | 21 (72.4) | 8 (27.6) | 29 (8.3) | |
| Regular doctor since: before 2008 | 88 (75.2) | 29 (24.8) | 117 (40.5) | 0.765 |
| Regular doctor since: 2008+ | 132 (76.7) | 40 (23.3) | 172 (59.5) | |
| First year at clinic: before 2001 | 68 (73.9) | 24 (26.1) | 92 (32.1) | 0.788 |
| First year at clinic: 2001+ | 147 (75.4) | 48 (24.6) | 195 (67.9) | |
| Doctor visits: 0 | 14 (93.3) | 1 (6.7) | 15 (4.3) | 0.104 |
| Doctor visits: 1+ | 251 (74.9) | 84 (25.1) | 335 (95.7) | |
| Put off medical problem: no/not sure | 206 (79.5) | 53 (20.5) | 259 (74.0) | 0.005 |
| Put off medical problem: yes | 59 (64.8) | 32 (35.2) | 91 (26.0) | |
| Did not follow doctor's advice: yes | 58 (67.4) | 28 (32.6) | 86 (24.6) | 0.039 |
| Did not follow doctor's advice: no/not sure | 207 (78.4) | 57 (21.6) | 264 (75.4) | |
| Trust doctor: agree | 252 (76.8) | 76 (23.2) | 328 (95.3) | 0.189 |
| Trust doctor: disagree/not sure | 10 (62.5) | 6 (37.5) | 16 (4.7) | |
| Doctor satisfaction: satisfied | 248 (76.1) | 78 (23.9) | 326 (95.3) | 0.922 |
| Doctor satisfaction: dissatisfied/neither | 12 (75.0) | 4 (25.0) | 16 (4.7) | |

^a Row percent. ^b Column percent. ^c P value obtained from χ2 test.

### Medical history and health behaviors of completers and noncompleters

Table 2 also displays comparative results for medical history and health behaviors of colonoscopy completers versus noncompleters. Participants who indicated that they had put off or had not sought care for a medical problem in the previous 12 months were significantly less likely to get colonoscopy screening than participants who had not postponed treatment or were not sure (P = 0.005; OR = 2.11; 95% CI = 1.25–3.57). Patients who reported incidents of not following doctors' advice in the previous year were also significantly less likely to complete their screening colonoscopy (P = 0.039; OR = 1.75; 95% CI = 1.02–3.00).

### Intrapersonal characteristics

Table 3 shows the comparative results for intrapersonal variables of colonoscopy completers versus noncompleters. Data from the time 1 (baseline) assessment reveal that participants who indicated lower levels of self-efficacy were less likely to complete the screening procedure (P = 0.036). Participants who did not get screened had significantly higher levels of fear about the colonoscopy (P = 0.012) and more cancer worry (P = 0.027). In addition, participants who more strongly identified with their ethnicity were more likely to complete (P = 0.034).
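These between-group comparisons of continuous measures (independent samples t tests, per Table 3's footnote) can be reconstructed from the summary statistics alone. A hedged sketch using `scipy` for the fear-of-colonoscopy item; the per-group split of that item's N = 349 is not reported, so the 264/85 division below is an assumption for illustration:

```python
from scipy.stats import ttest_ind_from_stats

# "Fear of colonoscopy" at time 1: means and SDs from Table 3;
# per-group sample sizes (264 completers, 85 noncompleters) are assumed
t, p = ttest_ind_from_stats(mean1=1.9387, std1=0.96335, nobs1=264,   # completers
                            mean2=2.2482, std2=1.03214, nobs2=85)    # noncompleters
print(f"t = {t:.2f}, P = {p:.3f}")  # P ≈ 0.012, consistent with Table 3
```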
There were no significant differences in any of the intrapersonal factors at the time 2 assessment (2 weeks before the scheduled colonoscopy appointment) between participants who completed their screening and those who did not.

Table 3. Intrapersonal factors of completers versus noncompleters of screening colonoscopy

| Intrapersonal factors – time 1 | Completers, mean (σ) | Noncompleters, mean (σ) | P^a | N |
| --- | --- | --- | --- | --- |
| Fear of colonoscopy | 1.9387 (0.96335) | 2.2482 (1.03214) | 0.012 | 349 |
| Fatalism | 0.1253 (0.24884) | 0.0934 (0.23862) | 0.304 | 345 |
| Pros and cons | 2.5396 (0.43089) | 2.5882 (0.35736) | 0.348 | 350 |
| Multidimensional Inventory of Black Identity | 3.2501 (0.65990) | 3.0669 (0.75519) | 0.034 | 344 |
| Group-based medical mistrust | 1.9417 (0.66328) | 1.9010 (0.62899) | 0.661 | 272 |
| Collective self-esteem | 3.2003 (0.60311) | 3.2229 (0.73137) | 0.822 | 272 |
| Self-efficacy | 4.1952 (0.51065) | 4.0746 (0.43981) | 0.036 | 350 |
| Social influence | 2.8620 (0.75538) | 3.0242 (0.65814) | 0.130 | 260 |
| Cancer anxiety | 1.6154 (0.69585) | 1.7923 (0.73364) | 0.078 | 273 |
| Cancer worry | 2.2268 (0.68199) | 2.4444 (0.72166) | 0.027 | 274 |
| Perceived risk for colorectal cancer | 1.6869 (0.58101) | 1.5882 (0.59904) | 0.178 | 349 |
| **Intrapersonal factors – time 2** | | | | |
| Fear of colonoscopy | 1.9339 (0.86265) | 1.9927 (0.86761) | 0.688 | 272 |
| Pros and cons | 2.6110 (0.46880) | 2.5305 (0.34911) | 0.295 | 270 |
| Self-efficacy | 4.0474 (0.48159) | 4.0798 (0.50918) | 0.694 | 272 |
| Cancer anxiety | 1.6609 (0.72325) | 1.7162 (0.81258) | 0.680 | 211 |
| Cancer worry | 2.3257 (0.67903) | 2.4054 (0.75415) | 0.525 | 211 |
| Perceived risk for colorectal cancer | 1.7879 (0.57545) | 1.7764 (0.66834) | 0.909 | 272 |

σ = SD. ^a P value obtained from independent samples t-test.

### Multivariable regression

A 5-variable model was created to predict colonoscopy completion (Table 4). Income was the strongest unique predictor of colonoscopy completion (OR, 2.835): participants with annual income more than $10,000 were two and a half times more likely to complete than those who made less than $10,000 annually. Higher self-efficacy was the second strongest predictor of colonoscopy completion (P = 0.022; OR, 2.396), whereby higher self-efficacy increased completion. Social influence also predicted screening colonoscopy adherence (OR, 0.514): for each single-unit increase in participants' social influence score, the odds of getting a screening colonoscopy decreased by about 50%. In addition, greater identification with one's ethnic group increased the odds of screening colonoscopy adherence (P = 0.031; OR, 1.656) by more than 60%. Finally, for participants with increased fear of the colonoscopy procedure, the odds of completion decreased by about 30% per unit increase in fear (P = 0.029; OR, 0.699). Table 4.
Logistic regression predicting odds of colonoscopy completion

| Variable | P | OR (95% CI) |
| --- | --- | --- |
| Income ≤ $10,000 | | 1.00 (reference) |
| Income > $10,000 | 0.002 | 2.835 (1.469–5.472) |
| Self-efficacy | 0.022 | 2.396 (1.136–5.057) |
| Social influence | 0.023 | 0.514 (0.289–0.913) |
| Multidimensional Inventory of Black Identity | 0.021 | 1.656 (1.046–2.622) |
| Fear of colonoscopy | 0.029 | 0.699 (0.507–0.964) |

This study of 350 African Americans randomized to one of three patient navigation groups assessed adherence to screening colonoscopy. Although results from studies of patient navigation programs have shown improvement in colorectal cancer screening adherence rates among minorities (8–10, 48–51), more knowledge about different types of patient navigation programs and their respective influence on promoting colonoscopy completion among African Americans can provide significant guidance for future patient navigation protocols. Although no statistically significant differences among the three types of navigation were detected, our findings did distinguish participants who completed a colonoscopy from those who did not. Consistent with prior studies, completers were more likely to have higher socioeconomic status (employment, income > $10,000), private or self-pay insurance (vs. Medicare and/or Medicaid), and medical visits in the recent past (32, 53). Assessment of intrapersonal factors revealed that statistically significant differences between completers and noncompleters existed at baseline (time 1) with respect to fear of colonoscopy, ethnic identity, self-efficacy, and cancer worry. However, the clinical relevance of these differences is not known. By time 2, no significant group differences in intrapersonal factors remained.
We speculate that the lack of differences in intrapersonal factors between the two groups at time 2 may be attributable to the patient navigators effectively addressing participants' questions about colonoscopies and concerns about cancer, thus removing intrapersonal barriers that could have undermined screening colonoscopy adherence, regardless of patient navigation type.

Logistic regression revealed that higher income was a significant predictor of screening adherence. Income is often associated with other markers of socioeconomic status such as employment, education level, and insurance status. In this sample, more than 60% were unemployed and had less than a high school education. Low income could be related to poor adherence to screening through poor health care coverage and access. However, all patients had insurance coverage, and approximately 92% had a regular physician. Therefore, poor health care coverage and access cannot explain the income effect in our study. Our findings show that low income may be independently associated with poorer colorectal cancer screening rates by colonoscopy, at least in this urban sample.

Self-efficacy was the second strongest predictor of colonoscopy completion, suggesting that participants with inherent confidence in their ability to get the procedure were more likely to follow through with screening. This is an important finding for future implementation of patient navigation: if patients' degree of self-efficacy can be identified early in the process, navigation interventions can focus on increasing low levels of self-efficacy, and navigation resources can be appropriately reallocated in cases of inherently high self-efficacy.

Logistic regression unexpectedly revealed that colonoscopy noncompleters were more likely to have had social influence from family or close friends who encouraged colonoscopy.
Although counterintuitive, the finding offers potential insight into reasons for noncompletion. Perhaps those with strong social influence received conflicting information about colonoscopies from close friends and family, even when those friends and family were supportive of colonoscopies. Another hypothesis is a discrepancy between intrinsic and extrinsic support of colonoscopies among the subjects' families and friends: perhaps the subjects' family and friends never adhered to colonoscopies themselves but supported them for others. Further investigation of social influence is merited in future studies.

Stronger identification with one's ethnicity was found to independently predict colonoscopy completion. One aspect of the Multidimensional Inventory of Black Identity assessed participants' regard for other African Americans. Our finding may be the result of participants' positive regard for, and connection to, their navigators, as all navigators were racially concordant with participants, suggesting that matching patient navigators to patients by ethnicity may build trust and aid in increasing screening colonoscopy adherence.

Fear of the colonoscopy procedure was also identified by logistic regression as a unique predictor of screening colonoscopy adherence. This finding presents another opportunity for targeted future patient navigation interventions to address this barrier and help patients overcome fear, thus hopefully increasing screening rates.

Study limitations include the use of only one cultural group from an inner-city population in which all subjects had health care coverage and more than 90% had a regular physician. Therefore, this study's colonoscopy completion rate may be higher than the rate in populations with less optimal health care coverage or in other minority groups. Future studies are encouraged to compare our findings with different cultural groups (e.g., Hispanics) or more diverse populations for greater generalizability.
Additional limitations include our entry criterion of a 5-year interval since previous colonoscopy screening (the practice in our clinical setting) and relatively low α coefficients (Cronbach's α < 0.7) for several assessments of intrapersonal factors. Although a low α coefficient can be caused by heterogeneous dimensionality of a test, a short test can also reduce α values and underestimate reliability (54, 55). Our two lowest α coefficients (0.420 for colorectal cancer knowledge, 0.444 for cancer anxiety) belonged to the tests with the fewest items. Future evaluations of similar intrapersonal constructs should include more items per concept.

In summary, a large RCT was conducted using three different patient navigation arms to assess potentially different colonoscopy completion outcomes, and it revealed no differences among the three types of patient navigation. Because the completion rate was higher than the average rate of endoscopic screening among African Americans (75.7% vs. 53%; ref. 56), integration of patient navigation services into primary care settings may be useful in promoting screening colonoscopy adherence. Our finding is consistent with the results of a systematic review of intervention studies aimed at improving colorectal cancer screening rates: any patient navigation protocol was effective in increasing rates of colorectal cancer screening by 15% (52). The fact that peers can be trained to be effective navigators may have financially beneficial implications for screening programs. As the current study assessed patient navigation protocols among African Americans in an urban community, our findings provide new insight that any type of patient navigation service may be beneficial in facilitating screening colonoscopy adherence in a population overburdened by colorectal cancer mortality.

S.H. Itzkowitz has commercial research support from Exact Sciences Corporation and is a consultant/advisory board member of the same.
No potential conflicts of interest were disclosed by the other authors.

Conception and design: L. Jandorf, H.S. Thompson, W.H. Redd, S.H. Itzkowitz
Development of methodology: L. Jandorf, G. Winkel, H.S. Thompson, S.H. Itzkowitz
Acquisition of data (provided animals, acquired and managed patients, provided facilities, etc.): L. Jandorf, L. Thelemaque, S.H. Itzkowitz
Analysis and interpretation of data (e.g., statistical analysis, biostatistics, computational analysis): L. Jandorf, C. Braschi, E. Ernstoff, L. Thelemaque, G. Winkel, H.S. Thompson, W.H. Redd, S.H. Itzkowitz
Writing, review, and/or revision of the manuscript: L. Jandorf, C. Braschi, E. Ernstoff, C.R. Wong, L. Thelemaque, G. Winkel, H.S. Thompson, W.H. Redd, S.H. Itzkowitz
Administrative, technical, or material support (i.e., reporting or organizing data, constructing databases): L. Jandorf, E. Ernstoff, S.H. Itzkowitz
Study supervision: L. Jandorf, H.S. Thompson, S.H. Itzkowitz

The authors thank the study participants, without whom this research could not have been conducted, as well as the staff of recruiters and peer and professional navigators.

This work was supported by NIH grant CA120658 (to W.H. Redd, PI). The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.

1. Winawer SJ, Fletcher RH, Miller L. Colorectal cancer screening: clinical guidelines and rationale. Gastroenterology 1997;112:594–642.
2. Levin B, Lieberman DA, McFarland B, Smith RA, Brooks D, Andrews KS, et al. Screening and surveillance for the early detection of colorectal cancer and adenomatous polyps, 2008: a joint guideline from the American Cancer Society, the US Multi-Society Task Force on Colorectal Cancer, and the American College of Radiology. CA Cancer J Clin 2008;58:130–60.
3. Doubeni CA, Laiyemo AO, Reed G, Field TS, Fletcher RH.
Socioeconomic and racial patterns of colorectal cancer screening among Medicare enrollees in 2000 to 2005. Cancer Epidemiol Biomarkers Prev 2009;18:2170–5.
4. Freeman HP, Muth B, Kerner J. Expanding access to cancer screening and clinical follow-up among the medically underserved. Cancer Pract 1995;3:19–30.
5. Raich PC, Whitley EM, Thorland W, Valverde P, Fairclough D. Patient navigation improves cancer diagnostic resolution: an individually randomized clinical trial in an underserved population. Cancer Epidemiol Biomarkers Prev 2012;21:1629–38.
6. Dudley DJ, Drake J, Quinlan J, Holden A, Saegert P, et al. Beneficial effects of a combined navigator/promotora approach for Hispanic women diagnosed with breast abnormalities. Cancer Epidemiol Biomarkers Prev 2012;21:1639–44.
7. Dohan D, Schrag D. Using navigators to improve care of underserved patients. Cancer 2005;104:848–55.
8. Lasser KE, Murillo J, Medlin E, Lisboa S, Valley-Shah L, Fletcher RH, et al. A multilevel intervention to promote colorectal cancer screening among community health center patients: results of a pilot study. BMC Fam Pract 2009;10:37.
9. Percac-Lima S, Grant RW, Green AR, Ashburner JM, Gamba G, Oo S, et al. A culturally tailored navigator program for colorectal cancer screening in a community health center: a randomized, controlled trial. J Gen Intern Med 2009;24:211–7.
10. Chen LA, Santos S, Jandorf L, Christie J, Castillo A, Winkel G, et al. A program to enhance completion of screening colonoscopy among urban minorities. Clin Gastroenterol Hepatol 2008;6:443–50.
11. Earp JAL, Vincus AA, Altpeter M, Flax V, Mayne L, et al. Lay health advisors: a strategy for getting the word out about breast cancer. Health Educ Behav 1997;24:432–51.
12. Holmes AP, Hatch J, Robinson GA. A lay educator approach to sickle cell disease education. J Natl Black Nurses Assoc 1992;5:26–36.
13. Quinn MT, McNabb WL. Training lay health educators to conduct a church-based weight-loss program for African American women. Diabetes Educ 2001;27:231–8.
14. Emmons KM, Puleo E, Park E, Gritz ER, Butterfield RM, Weeks JC, et al. Peer-delivered smoking counseling for childhood cancer survivors increases rate of cessation: the Partnership for Health study. J Clin Oncol 2005;23:6516–23.
15. Erwin DO, Spatz TS, Stotts RC, Hollenberg JA, Deloney LA. Increasing mammography and breast self-examination in African American women using the Witness Project model. J Cancer Educ 1996;11:210–5.
16. Erwin DO, Ivory J, Stayton C, Willis M, Jandorf L, Thompson H, et al. Replication and dissemination of a cancer education model for African American women. Cancer Control 2003;10:13–21.
17. Fisher JD. Possible effects of reference group-based social influence on AIDS-risk behavior and AIDS-prevention. Am Psychol 1988;43:914–20.
18. Powe B. Perceptions of cancer fatalism among African Americans: the influence of education, income, and cancer knowledge. J Natl Black Nurses Assoc 1994;7:41–8.
19. Powe BD. Fatalism among elderly African Americans: effects on colorectal cancer screening. Cancer Nurs 1995;18:385–92.
20. Walsh JM, Kaplan CP, Nguyen B, Gildengorin G, McPhee SJ, Pérez-Stable EJ. Barriers to colorectal cancer screening in Latino and Vietnamese Americans. J Gen Intern Med 2004;19:156–66.
21. Katz ML, James AS, Pignone MP, Hudson MA, Jackson E, Oates V, et al. Colorectal cancer screening among African American church members: a qualitative and quantitative study of patient-provider communication. BMC Public Health 2004;4:62.
22. Menon U, Champion VL, Larkin GN, Zollinger TW, Gerde MPM, Vernon SW. Beliefs associated with fecal occult blood test and colonoscopy use at a worksite colon cancer screening program. J Occup Environ Med 2003;45:891–8.
23. O'Malley AS, Forrest CB, Feng S, Mandelblatt J. Disparities despite coverage: gaps in colorectal cancer screening among Medicare beneficiaries. Arch Intern Med 2005;165:2129–35.
24. Bastani R, Gallardo NV, Maxwell AE. Barriers to colorectal cancer screening among ethnically diverse high- and average-risk individuals. J Psychosoc Oncol 2001;19:65–84.
25. Ryan G, Skinner C, Farrell D, Champion V. Examining the boundaries of tailoring: the utility of tailoring versus targeting mammography interventions for two distinct populations. Health Educ Res 2001;16:555–66.
26. Rotter JB. The development and application of social learning theory. New York, NY: Praeger; 1982.
27. Skinner B. 1st ed. New York, NY: Alfred A. Knopf, Inc; 1974.
28. Skinner BF. The behavior of organisms: an experimental analysis. New York, NY: Appleton-Century-Crofts; 1938.
29. Redd WH, Porterfield AL, Andersen BL. Behavior modification: behavioral approaches to human problems. New York, NY: Random House; 1979.
30. Denberg TD, Coombes JM, Beaty BL, Berman K, Byers TE, et al. Predictors of nonadherence to screening colonoscopy. J Gen Intern Med 2005;20:989–95.
31. Greiner KA, Born W, Nollen N, Ahluwalia JS. Knowledge and perceptions of colorectal cancer screening among urban African Americans. J Gen Intern Med 2005;20:977–83.
32. Wee CC, McCarthy EP, Phillips RS. Factors associated with colon cancer screening: the role of patient factors and physician counseling. Prev Med 2005;41:23–9.
33. Shokar NK, Vernon SW, Weller SC. Cancer and colorectal cancer: knowledge, beliefs, and screening preferences of a diverse patient population. Fam Med 2005;37:341–7.
34. Holmes-Rovner M, Williams GA, Hoppough S, Quillan L, Butler R, Given CW. Colorectal cancer screening barriers in persons with low income. Cancer Pract 2002;10:240–7.
35. Rakowski W, Andersen MR, Stoddard AM, Urban N, Rimer BK, Lane DS, et al. Confirmatory analysis of opinions regarding the pros and cons of mammography. Health Psychol 1997;16:433–41.
36. Vernon SW, Meissner H, Klabunde C, Rimer BK, Ahnen DJ, Bastani R, et al. Measures for ascertaining use of colorectal cancer screening in behavioral, health services, and epidemiologic research. Cancer Epidemiol Biomarkers Prev 2004;13:898–905.
37. Shelton RC, Thompson HS, Jandorf L, Varela A, Oliveri B, Villagra C, et al. Training experiences of lay and professional patient navigators for colorectal cancer screening. J Cancer Educ 2011;26:277–84.
38. Flocke SA, Stange KC, Zyzanski SJ. The association of attributes of primary care with the delivery of clinical preventive services. Med Care 1998;36:AS21–30.
39. Jandorf L, Ellison J, Villagra C, Winkel G, Varela A, Quintero-Canetti Z, et al. Understanding the barriers and facilitators of colorectal cancer screening among low income immigrant Hispanics. J Immigr Minor Health 2010;12:462–9.
40. Manne SL, Coups EJ, Markowitz A, Meropol NJ, Haller D, Jacobsen PB, et al. A randomized trial of generic versus tailored interventions to increase colorectal cancer screening among intermediate risk siblings. Ann Behav Med 2009;37:207–17.
41. Powe BD. Cancer fatalism among elderly Caucasians and African Americans. Oncol Nurs Forum 1995;22:1355–9.
42. Sellers RM, Rowley SA, Chavous TM, Shelton JN, Smith MA. Multidimensional Inventory of Black Identity: a preliminary investigation of reliability and construct validity. J Pers Soc Psychol 1997;73:805–15.
43. Thompson HS, Valdimarsdottir HB, Winkel G, Jandorf L, Redd W. The group-based medical mistrust scale: psychometric properties and association with breast cancer screening. Prev Med 2004;38:209–18.
44. Luhtanen R, Crocker J. A collective self-esteem scale: self-evaluation of one's social identity. Pers Soc Psychol Bull 1992;18:302–18.
45. Champion V, Skinner CS, Menon U. Development of a self-efficacy scale for mammography. Res Nurs Health 2005;28:329–36.
46. Lobell M, Bay RC, Keske B. Barriers to cancer screening in Mexican-American women. Mayo Clinic Proceedings; Mayo Foundation; 1998. p. 301–8.
47. Rutten L, Moser R, Beckjord E, Hesse B, Croyle R. Cancer communication: Health Information National Trends Survey. Washington, DC: National Cancer Institute; 2007. NIH Pub. No. 07-6214.
48. Jandorf L, Gutierrez Y, Lopez J, Christie J, Itzkowitz SH. Use of a patient navigator to increase colorectal cancer screening in an urban neighborhood health clinic. J Urban Health 2005;82:216–24.
49. Christie J, Itzkowitz S, Lihau-Nkanza I, Castillo A, Redd W, Jandorf L. A randomized controlled trial using patient navigation to increase colonoscopy screening among low-income minorities. J Natl Med Assoc 2008;100:278–84.
50. Lebwohl B, Neugut AI, Stavsky E, Villegas S, Meli C, Rodriguez O, et al. Effect of a patient navigator program on the volume and quality of colonoscopy. J Clin Gastroenterol 2011;45:e47–53.
51. Nash D, Azeez S, Vlahov D, Schori M. Evaluation of an intervention to increase screening colonoscopy in an urban public hospital setting. J Urban Health 2006;83:231–43.
52. Naylor K, Ward J, Polite BN. Interventions to improve care related to colorectal cancer among racial and ethnic minorities: a systematic review. J Gen Intern Med 2012;27:1033–46.
53. Guessous I, Dash C, Lapin P, Doroshenk M, Smith RA, Klabunde CN. Colorectal cancer screening barriers and facilitators in older persons. Prev Med 2010;50:3.
54. Nunnally J. Psychometric theory. 2nd ed. New York, NY: McGraw-Hill; 1978.
55. Streiner DL. Starting at the beginning: an introduction to coefficient alpha and internal consistency. J Pers Assess 2003;80:99–103.
56. Smith RA, Cokkinides V, Brawley OW. Cancer screening in the United States, 2012. CA Cancer J Clin 2012;62:129–42.
2022-12-07 14:33:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23795771598815918, "perplexity": 10186.451129292374}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711162.52/warc/CC-MAIN-20221207121241-20221207151241-00092.warc.gz"}
https://web2.0calc.com/questions/sig-figs_1
# Sig Figs...

**Question (DerpofTheAbyss, Feb 1, 2019):** How many sig figs are in $$3.4\times {10}^{4}$$?

**#1:** 3.4 x 10^4 = 34,000

**#3 (DerpofTheAbyss):** Thank you. My teacher only taught sig figs with a video, and it didn't cover scientific notation.

**#2:** I believe this is correct ⇒ 34,000 = 2 significant figures

**#4:** Significant figures with scientific notation are very easy. There are 2 digits in 3.4, so there are 2 significant figures. Only the significant part should be written in scientific notation, although if there is more to the question (for example, if you have worked out the number yourself) you may need to be careful: 3.40000 * 10^4 has 6 significant figures.
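The counting rule in answer #4 can be sketched in a few lines of Python (the helper name `sig_figs` is mine, not from the thread): for a mantissa written in scientific notation, every digit counts except leading zeros.

```python
def sig_figs(mantissa: str) -> int:
    """Count significant figures in the mantissa of a number written in
    scientific notation, e.g. '3.4' from 3.4 x 10^4.  Leading zeros never
    count; trailing zeros in the mantissa always do."""
    digits = mantissa.replace(".", "").lstrip("0")
    return len(digits)

print(sig_figs("3.4"))      # 2 -- so 3.4 x 10^4 has 2 significant figures
print(sig_figs("3.40000"))  # 6 -- trailing zeros in sci. notation are significant
```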
2019-02-16 04:33:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9236266613006592, "perplexity": 4501.424938351189}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247479838.37/warc/CC-MAIN-20190216024809-20190216050809-00054.warc.gz"}
https://math.paperswithcode.com/paper/stability-of-properties-of-locales-under
## Stability of properties of locales under groups

28 Sep 2015 · Townsend Christopher

Given a particular collection of categorical axioms, aimed at capturing properties of the category of locales, we show that if $\mathcal{C}$ is a category that satisfies the axioms then so too is the category $[ G, \mathcal{C}]$ of $G$-objects, for any internal group $G$. To achieve this we prove a general categorical result: if an object $S$ is double exponentiable in a category with finite products then so is its associated trivial $G$-object $(S, \pi_2: G \times S \rightarrow S)$. The result holds even if $S$ is not exponentiable. An example is given of a category $\mathcal{C}$ that satisfies the axioms, but for which there is no elementary topos $\mathcal{E}$ such that $\mathcal{C}$ is the category of locales over $\mathcal{E}$. It is shown, in outline, how the results can be extended from groups to groupoids.
2021-08-05 09:11:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6550127267837524, "perplexity": 470.18587329563155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155458.35/warc/CC-MAIN-20210805063730-20210805093730-00549.warc.gz"}
http://www.opensourcelab.salazarserrano.com/product-theorem-for-gaussian-functions/
# Product Theorem for Gaussian Functions

March 27, 2016

It is a well known fact in mathematics that the product of two Gaussian functions is also a Gaussian function. In this post I want to present this result for future reference for me and for anyone who might find it useful.

The product theorem for Gaussian functions states that the product of two overlapping Gaussian functions is also a Gaussian function, and determines the center and width of the resulting function in terms of the parameters of the two original functions.

To illustrate the identity, consider a Gaussian function of the form $G_i(x)=A_i\exp[-(x-\mu_i)^2/2\sigma_i^2]$, where $\mu_i$, $\sigma_i$ and $A_i$ correspond to its center, width and amplitude, respectively. The result of the product of two Gaussian functions $G_1(x)$ and $G_2(x)$ with different amplitudes, widths and central positions, $G_3(x) = G_1(x)G_2(x)$, is equal to

$G_3(x) = A_1 A_2 \exp\left(-\frac{(\mu_1-\mu_2)^2}{2(\sigma_1^2+\sigma_2^2)}\right) \exp\left[-\frac{(x-\tilde{\mu})^2}{2\tilde{\sigma}^2}\right],$

where the centroid is given by

$\tilde{\mu} = \frac{\mu_1\sigma_2^2+\mu_2\sigma_1^2}{\sigma_1^2+\sigma_2^2},$

and the width by

$\tilde{\sigma}^2 = \frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2}.$

The theorem shows that the result of the product of two Gaussian functions is a new Gaussian function of width $\tilde{\sigma}$, centered at the position $\tilde{\mu}$, whose amplitude strongly depends on the factor $\exp\left(-(\mu_1-\mu_2)^2/2(\sigma_1^2+\sigma_2^2)\right)$. In addition, the resulting function is narrower than either of the two original Gaussian functions, and its center lies within the interval $(\mu_1,\mu_2)$.

Notice that the above result can be further simplified if we consider the scenario where the Gaussian functions are centered in different positions but have the same width. If we define $\sigma_1 = \sigma_2 = \sigma$, the centroid simplifies to $\tilde{\mu}= (\mu_1+\mu_2)/2$, whereas the width is given by $\tilde{\sigma}^2 = \sigma^2/2$.

# Examples

The next figures show two representative cases where two different Gaussian functions with different and equal widths are multiplied. In both cases the result is a Gaussian function narrower than the original functions, centered between the two original centroids, $\mu_1$ and $\mu_2$.
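The identity is easy to check numerically. A short Python sketch of the theorem stated above (the parameter values are arbitrary examples, not taken from the post):

```python
import math

def gaussian(x, A, mu, sigma):
    """G(x) = A * exp(-(x - mu)^2 / (2 sigma^2))."""
    return A * math.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

# Two example Gaussians (illustrative values)
A1, mu1, s1 = 1.0, 0.0, 1.0
A2, mu2, s2 = 1.0, 3.0, 2.0

# Parameters of the product predicted by the theorem
mu3 = (mu1 * s2 ** 2 + mu2 * s1 ** 2) / (s1 ** 2 + s2 ** 2)
s3 = math.sqrt(s1 ** 2 * s2 ** 2 / (s1 ** 2 + s2 ** 2))
A3 = A1 * A2 * math.exp(-(mu1 - mu2) ** 2 / (2 * (s1 ** 2 + s2 ** 2)))

# The pointwise product must equal the predicted Gaussian everywhere
for x in [-2.0, 0.0, 0.6, 1.5, 4.0]:
    lhs = gaussian(x, A1, mu1, s1) * gaussian(x, A2, mu2, s2)
    rhs = gaussian(x, A3, mu3, s3)
    assert abs(lhs - rhs) < 1e-12

print("product theorem verified; centre =", mu3, "width =", s3)
```

Note that the resulting width, `s3`, is indeed smaller than either input width, and the centre `mu3` falls between the two input centres.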
2017-09-21 03:02:09
{"extraction_info": {"found_math": true, "script_math_tex": 16, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 19, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8916460275650024, "perplexity": 150.51635563591796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687606.9/warc/CC-MAIN-20170921025857-20170921045857-00077.warc.gz"}
http://cowlet.org/2013/09/15/understanding-data-science-feature-extraction-with-r.html
# Understanding data science: feature extraction with R

15 Sep 2013

Getting stuck in to a data science problem can be intimidating. This article shows one way to start, by using R to examine an open dataset. Keep reading for a walkthrough of how to:

- Generate simple stats and plots for initial visualisation
- Perform a Fast Fourier Transform (FFT) for frequency analysis
- Calculate key features of the data
- Visualise and analyse the feature space

## An open dataset: bearing vibration

The data I’m using is from the Prognostics Data Repository hosted by NASA, and specifically the bearing dataset from University of Cincinnati. This data was gathered from an experiment studying four bearings installed on a loaded rotating shaft, with a constant speed of 2,000rpm. The test setup is shown below (from Qiu et al):

In the course of this experiment, some of the bearings broke down. I decided to look at the data in detail, to see if I could identify early indicators of failure.

## What are bearings?

It’s always useful to understand the subject of any dataset you want to analyse. Without a basic level of knowledge on the systems and processes that the data represents, you can easily go down blind alleys in your analysis.

Bearings are key components of all types of rotating machinery, from simple desk fans to nuclear power station turbines. Anything with a motor or generator rotates a shaft, and the health of the bearings determines how smoothly and efficiently that shaft rotates. Poor lubrication of the bearing will generate extra friction, which reduces efficiency and can also damage the bearing or shaft. In the worst cases, the bearing can scuff or crack, and metal particles break free to hit and damage other parts of the machine.

## Experiment and data

A common method of monitoring bearings is to measure vibration. The more smoothly the system is operating, the lower the level of vibration.
To measure this, the experiment used two accelerometers per bearing (one each for the x and y axes). Data was recorded in 1 second bursts every 5 or 10 minutes, and the sampling rate was 20kHz.

When you download and unzip the dataset, it contains a PDF describing the experiment and data format, and three RAR files of data. Extracting 1st_test.rar gives a directory containing 2,156 files. Each file is named with its timestamp, and contains a 20,480 × 8 matrix of tab-separated values. The 8 columns correspond to the accelerometers (2 each for 4 bearings), and the rows are the 20kHz samples from 1 second of operation.

This raises a question about the data. If the sampling rate is 20kHz and each file contains 1 second of data, there should be 20,000 rows of data. Since there are 20,480, one of these pieces of information about the data is wrong. Is it the sampling rate or the time period? It’s more likely that the sampling rate is correct and that each file contains just over 1 second of data. For high frequency data capture, it’s imperative that the sampling rate is correctly adhered to. If a 20kHz setting is actually sampling at 20.48kHz, it introduces significant error into the experiment. On the other hand, 20,480 samples at a rate of 20kHz gives a time period of 1.024s, which is close enough to be rounded down to “1 second” in the text documentation.

It’s important to check that all the information you have about your data is consistent. Discrepancies can indicate a misunderstanding, which can in turn lead to invalid results from your investigation.

## Importing the data

Being fairly confident that I understand what this data is now, it’s time to read it into R. A pattern I use regularly is to set a variable basedir with the path to the directory I’m working in, then combine this with a specific filename using paste0. (The paste0 function is strangely named, but it just joins strings together with 0 characters between them.)
Since the source data is in text format, with no column headers, and columns separated by tabs, the read-in code is like this:

```r
basedir <- "/Users/vic/Projects/bearings/bearing_IMS/1st_test/"

# Read one snapshot file to start with; any of the timestamp-named files will do
filename <- list.files(basedir)[1]
data <- read.table(paste0(basedir, filename), header=FALSE, sep="\t")
```

Calling head on the data is just to check that the read has done what I expect, and should give this:

```
      V1     V2     V3     V4     V5     V6     V7     V8
1 -0.022 -0.039 -0.183 -0.054 -0.105 -0.134 -0.129 -0.142
2 -0.105 -0.017 -0.164 -0.183 -0.049  0.029 -0.115 -0.122
3 -0.183 -0.098 -0.195 -0.125 -0.005 -0.007 -0.171 -0.071
4 -0.178 -0.161 -0.159 -0.178 -0.100 -0.115 -0.112 -0.078
5 -0.208 -0.129 -0.261 -0.098 -0.151 -0.205 -0.063 -0.066
6 -0.232 -0.061 -0.281 -0.125  0.046 -0.088 -0.078 -0.078
```

These column names are not very intuitive, so I renamed them like this:

```r
colnames(data) <- c("b1.x", "b1.y", "b2.x", "b2.y",
                    "b3.x", "b3.y", "b4.x", "b4.y")
```

## Initial analysis

There is a vast array of possible analysis and modelling techniques I could apply to this data, and 2,155 more files to read in. However, at this stage I don’t have a good feel for what the data looks like or how it behaves, so I can’t yet decide the best way to format, store, or clean the data. Data mining is an iterative process where you look at the data from a certain angle to generate ideas for more interesting angles to consider next. Simplest is always the best way to start:

```r
summary(data$b1.x)
plot(data$b1.x, t="l")  # t="l" means line plot
```

```
    Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
-0.72000 -0.14600 -0.09500 -0.09459 -0.04200  0.38800
```

Looking just at the x axis of bearing 1 to start with, you can see that the mean vibration value is less than 0, and mostly within the band between 0.0 and -0.2. The max and min values (0.388 and -0.720) are outliers, but not orders of magnitude different from the rest of the data, and therefore can’t be discounted as bad data.
The spikes in value seem regularly spaced, and a spike in negative value always seems to correspond with an extreme positive value, which tends to suggest these outliers are true measurements. Analysis of the other columns shows very similar patterns, with a slight tendency for the y axes to have a mean closer to 0, but higher extreme values.

Each plot represents a one second snapshot in the life of that bearing. To perform an analysis of the data, I need to consider all of the snapshots together. Obviously, looking at every plot isn’t going to work, so how do I process the data?

## Feature extraction or big data?

I need to find a way to work with this relatively large data set. There are basically two approaches to this problem.

The first I’ll call the traditional engineering approach. 20,480 is a very high number of data points to represent one measurement, and there are over 2,000 such snapshots. The traditional approach is to try and condense this data down through feature extraction. I could calculate for each file a handful of representative features, such as mean, max, and min values, and the relative size of key harmonic frequencies. With 2,156 files, I would end up with 2,156 rows of (say) 8 columns of features, and can much more easily perform modelling on this condensed dataset.

An alternative could be called the big data approach. 20,480 × 2,156 is “only” 44 million datapoints, and with a good pipeline of tools and some CPU time, I can perform analytics on the full data without needing to condense it. It is only recently that this approach has become feasible, so there is less experience and guidance around about how to do this efficiently.

On the plus side, feature extraction aims to reduce the amount of data you have to process, by drawing signal out of noise. As long as your features are representative of the process you are trying to model, nothing is lost in the condensing process, but the modelling itself becomes much easier.
On the other hand, if your features don’t actually describe the underlying process, you may be losing valuable information from the source data.

To begin with, I’ll take the engineering approach. Time to extract some features!

## Formatting the full dataset

To collect the data into a format useful for further analysis, I need to process the 2,156 time ordered source files into 4 files of bearing-specific data. Each bearing file will contain 2,156 rows (one per source file), containing the timestamp and key features calculated from both the x and y axes.

Conventional wisdom for bearing analysis says that the most important features are the levels of vibration at key frequencies, relating to rotation and spin of the bearing elements. Since I don’t have the engineering specs for the bearing, these frequencies are difficult to calculate. So for now, I’ll take more of a data-driven approach and focus on patterns in the data.

A vibration signal can be decomposed into its frequency components using the Fast Fourier Transform (FFT). Simply calling the R function fft returns data in an unfriendly format, but it’s straightforward to turn it into a more intuitive version:

```r
b1.x.fft <- fft(data$b1.x)

# Ignore the 2nd half, which are complex conjugates of the 1st half,
# and calculate the Mod (magnitude of each complex number)
amplitude <- Mod(b1.x.fft[1:(length(b1.x.fft)/2)])

# Calculate the frequencies
frequency <- seq(0, 10000, length.out=length(b1.x.fft)/2)

# Plot!
plot(amplitude ~ frequency, t="l")
```

Great! You can see that all the good stuff is going on down at the lower frequencies, although there is a ripple of activity just above and below 4kHz, and around 8kHz. For now, let’s focus on the lower frequencies.

```r
plot(amplitude ~ frequency, t="l", xlim=c(0,1000), ylim=c(0,500))
axis(1, at=seq(0,1000,100), labels=FALSE)  # add more ticks
```

Other than the dc term (at 0Hz), the tallest spikes are just below 1kHz. There is also a large spike just below 500Hz, and two around 50Hz.
Tabulating the top 15 frequencies gives:

```r
sorted <- sort.int(amplitude, decreasing=TRUE, index.return=TRUE)
top15 <- sorted$ix[1:15]     # indexes of the largest 15
top15f <- frequency[top15]   # convert indexes to frequencies
```

```
 [1]    0.00000  986.42446  993.26106  493.21223  979.58785
 [6]  994.23772  969.82127  971.77459   57.62281  978.61119
[11]  921.96504   49.80955 4420.35355 3606.79754 4327.57105
```

So those outliers are at 49.8Hz, 57.6Hz, and 493Hz. Interestingly, the second and fourth largest components have a harmonic relationship (493Hz * 2 = 986Hz), which strongly suggests they are linked.

For now, I’ll focus on the frequencies of the largest five components. Since each bearing has an x and y axis, this means there will be 10 features total. I’ll wrap the FFT profiling code in a function for ease of use later:

```r
fft.profile <- function (dataset, n) {
  fft.data <- fft(dataset)
  amplitude <- Mod(fft.data[1:(length(fft.data)/2)])
  frequencies <- seq(0, 10000, length.out=length(fft.data)/2)

  sorted <- sort.int(amplitude, decreasing=TRUE, index.return=TRUE)
  top <- sorted$ix[1:n]      # indexes of the largest n components
  return (frequencies[top])  # convert indexes to frequencies
}
```

I want to keep the time of the burst along with the feature data. The timestamp isn’t part of the data itself, but it is in the name of the file. With a variable called filename, it can be parsed like this:

```r
timestamp <- as.character(strptime(filename, format="%Y.%m.%d.%H.%M.%S"))
```

Then the timestamp and features can be combined into a single row like this:

```r
c(timestamp, fft.profile(data$b1.x, n), fft.profile(data$b1.y, n))
```

For each file, this row needs to be calculated and added to the end of a bearing matrix. Since there are four bearings, I’ll use four matrices. The code to create the matrices, process the files, and write the completed bearing matrices out to file is this:

```r
# How many FFT components should I grab as features?
n <- 5

# Set up storage for bearing-grouped data
b1 <- matrix(nrow=0, ncol=(2*n+1))
b2 <- matrix(nrow=0, ncol=(2*n+1))
b3 <- matrix(nrow=0, ncol=(2*n+1))
b4 <- matrix(nrow=0, ncol=(2*n+1))

for (filename in list.files(basedir)) {
  cat("Processing file ", filename, "\n")

  timestamp <- as.character(strptime(filename, format="%Y.%m.%d.%H.%M.%S"))

  data <- read.table(paste0(basedir, filename), header=FALSE, sep="\t")
  colnames(data) <- c("b1.x", "b1.y", "b2.x", "b2.y",
                      "b3.x", "b3.y", "b4.x", "b4.y")

  # Bind the new rows to the bearing matrices
  b1 <- rbind(b1, c(timestamp, fft.profile(data$b1.x, n), fft.profile(data$b1.y, n)))
  b2 <- rbind(b2, c(timestamp, fft.profile(data$b2.x, n), fft.profile(data$b2.y, n)))
  b3 <- rbind(b3, c(timestamp, fft.profile(data$b3.x, n), fft.profile(data$b3.y, n)))
  b4 <- rbind(b4, c(timestamp, fft.profile(data$b4.x, n), fft.profile(data$b4.y, n)))
}

write.table(b1, file=paste0(basedir, "../b1.csv"), sep=",", row.names=FALSE, col.names=FALSE)
write.table(b2, file=paste0(basedir, "../b2.csv"), sep=",", row.names=FALSE, col.names=FALSE)
write.table(b3, file=paste0(basedir, "../b3.csv"), sep=",", row.names=FALSE, col.names=FALSE)
write.table(b4, file=paste0(basedir, "../b4.csv"), sep=",", row.names=FALSE, col.names=FALSE)
```

As a final step, I should check that these features change over the life of the bearings. The experiment write-up says that bearings 3 and 4 failed at the end of the test, while 1 and 2 remained in service. Therefore it should be expected that all bearings start their life looking somewhat similar, with bearings 3 and 4 diverging away from this norm towards the end of the dataset.

Above are graphs for the strongest component for each bearing over time. In both the x and y axes for all bearings, the strongest FFT component is the dc term for the majority of the experiment. The single exception is the final measurement from bearing 3 in the x axis, for which the strongest component is at 1Hz.
This is not very informative overall, as it would be better to see a trend towards failure rather than the failure itself.

The second strongest components show the pattern I had hoped for. Bearings 1 and 2 show the same frequency throughout. Bearing 4 (green) changes frequency about two thirds of the way through the test, in both axes. A third frequency starts to occur closer to the point of failure. Bearing 2 (orange) displays a very different pattern of behaviour in the y axis from all other bearings, and shows occasional anomalies in the x axis. It is hard to tell purely from the data if this indicates a weak bearing or installation right from the start of the test. Regardless, the y axis starts to show unusual frequencies from just past halfway through the test, which become more constant as failure approaches. The x axis also shows new frequencies right before the failure occurs.

The patterns of the third strongest components are not as clear. Despite remaining healthy, bearings 1 and 2 also show variation in frequency towards the end of the test. Further data analysis is likely to be able to pull more information out of these traces than simply eyeballing the plots. The 4th and 5th strongest components display similar patterns to the 3rd:

## Next steps

Plotting graphs and scanning for patterns is a key part of data science. However, this bearing vibration data set is too large to do this for all of the data. With a few hours of work, I reduced it to a more manageable size using some simple feature extraction techniques: frequency analysis, and extraction of key components.

Looking at plots of these extracted features confirms that they usefully describe the bearing vibration data. The second strongest FFT component of both the x and y axes displays the pattern I was expecting, which is good evidence for being a high-information feature. Most importantly, I reduced the dataset from over 44 million individual datapoints per bearing to 21,560.
This is significantly easier to visualise and process on a desktop computer, and therefore makes the next stages much easier.

To take this further, the next step would be to model deterioration of the bearings, trying to detect or predict bearing faults. There are many different techniques that could be used for this. Technique selection generally requires a bit of trial and error. Once the diagnosis or prediction technique is selected, it’s probable that I’ll have to generate new features from the original data. However, the same general approach to feature extraction applies: visualise the data, calculate a possible feature, and verify that it works.
2014-12-19 03:12:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4408390522003174, "perplexity": 950.2748626814464}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768197.70/warc/CC-MAIN-20141217075248-00006-ip-10-231-17-201.ec2.internal.warc.gz"}
https://en-academic.com/dic.nsf/enwiki/27400
# Photodiode

*[Images: three Si and one Ge (bottom) photodiodes; the schematic symbol for a photodiode.]*

A photodiode is a type of photodetector capable of converting light into either current or voltage, depending upon the mode of operation.[1] The common, traditional solar cell used to generate electric solar power is a large area photodiode.

Photodiodes are similar to regular semiconductor diodes except that they may be either exposed (to detect vacuum UV or X-rays) or packaged with a window or optical fiber connection to allow light to reach the sensitive part of the device. Many diodes designed for use specifically as a photodiode use a PIN junction rather than a p-n junction, to increase the speed of response. A photodiode is designed to operate in reverse bias.[2]

## Principle of operation

A photodiode is a p-n junction or PIN structure. When a photon of sufficient energy strikes the diode, it excites an electron, thereby creating a free electron (and a positively charged electron hole). This mechanism is also known as the inner photoelectric effect. If the absorption occurs in the junction's depletion region, or one diffusion length away from it, these carriers are swept from the junction by the built-in field of the depletion region. Thus holes move toward the anode, and electrons toward the cathode, and a photocurrent is produced. This photocurrent is the sum of both the dark current (without light) and the light current, so the dark current must be minimized to enhance the sensitivity of the device.[3]

### Photovoltaic mode

When used in zero bias, or photovoltaic mode, the flow of photocurrent out of the device is restricted and a voltage builds up. This mode exploits the photovoltaic effect, which is the basis for solar cells – a traditional solar cell is just a large area photodiode.
### Photoconductive mode

In this mode the diode is often reverse biased (with the cathode positive), dramatically reducing the response time at the expense of increased noise. The reverse bias increases the width of the depletion layer, which decreases the junction's capacitance, resulting in faster response times. The reverse bias induces only a small amount of current (known as saturation or back current) along its direction, while the photocurrent remains virtually the same. For a given spectral distribution, the photocurrent is linearly proportional to the illuminance (and to the irradiance).[4]

Although this mode is faster, the photoconductive mode tends to exhibit more electronic noise.[citation needed] The leakage current of a good PIN diode is so low (<1 nA) that the Johnson–Nyquist noise of the load resistance in a typical circuit often dominates.

### Other modes of operation

Avalanche photodiodes have a similar structure to regular photodiodes, but they are operated with much higher reverse bias. This allows each photo-generated carrier to be multiplied by avalanche breakdown, resulting in internal gain within the photodiode, which increases the effective responsivity of the device.

A phototransistor is in essence a bipolar transistor encased in a transparent case so that light can reach the base-collector junction. The electrons that are generated by photons in the base-collector junction are injected into the base, and this photodiode current is amplified by the transistor's current gain β (or hfe). If the emitter is left unconnected, the phototransistor becomes a photodiode. While phototransistors have a higher responsivity for light, they are not able to detect low levels of light any better than photodiodes.[citation needed] Phototransistors also have significantly longer response times.
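The capacitance/response-time trade-off described under photoconductive mode can be made concrete by treating the photodiode and its load as a simple RC low-pass filter, whose -3 dB bandwidth is 1/(2πRC). A rough sketch; the resistance and capacitance values below are illustrative assumptions, not figures from this article:

```python
import math

def bandwidth_hz(r_ohm, c_farad):
    """-3 dB bandwidth of an RC low-pass: f = 1 / (2 * pi * R * C)."""
    return 1.0 / (2 * math.pi * r_ohm * c_farad)

r_load = 50.0         # load resistance, ohms (assumed)
c_zero_bias = 20e-12  # junction capacitance at zero bias, farads (assumed)
c_reverse = 2e-12     # reduced capacitance under reverse bias (assumed)

print(f"zero bias:       ~{bandwidth_hz(r_load, c_zero_bias) / 1e6:.0f} MHz")
print(f"reverse biased:  ~{bandwidth_hz(r_load, c_reverse) / 1e6:.0f} MHz")
```

With these example values, shrinking the junction capacitance by a factor of ten raises the usable bandwidth by the same factor, which is the effect reverse bias exploits.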
## Materials

The material used to make a photodiode is critical to defining its properties, because only photons with sufficient energy to excite electrons across the material's bandgap will produce significant photocurrents. Materials commonly used to produce photodiodes include:[5]

| Material | Electromagnetic spectrum wavelength range (nm) |
| --- | --- |
| Silicon | 190–1100 |
| Germanium | 400–1700 |
| Indium gallium arsenide | 800–2600 |
| Lead(II) sulfide | <1000–3500 |

Because of their greater bandgap, silicon-based photodiodes generate less noise than germanium-based photodiodes.

### Unwanted photodiodes

Any p-n junction, if illuminated, is potentially a photodiode. Semiconductor devices such as transistors and ICs contain p-n junctions, and will not function correctly if they are illuminated by unwanted electromagnetic radiation (light) of wavelength suitable to produce a photocurrent; this is avoided by encapsulating devices in opaque housings. If these housings are not completely opaque to high-energy radiation (ultraviolet, X-rays, gamma rays), transistors and ICs can malfunction due to induced photo-currents. Plastic cases are more vulnerable than metal ones.

## Features

*[Figure: response of a silicon photodiode vs wavelength of the incident light.]*

Critical performance parameters of a photodiode include:

**Responsivity**: The ratio of generated photocurrent to incident light power, typically expressed in A/W when used in photoconductive mode. The responsivity may also be expressed as a quantum efficiency, or the ratio of the number of photogenerated carriers to incident photons, and thus a unitless quantity.

**Dark current**: The current through the photodiode in the absence of light, when it is operated in photoconductive mode. The dark current includes photocurrent generated by background radiation and the saturation current of the semiconductor junction.
Dark current must be accounted for by calibration if a photodiode is used to make an accurate optical power measurement, and it is also a source of noise when a photodiode is used in an optical communication system.

**Noise-equivalent power (NEP)**: The minimum input optical power to generate photocurrent, equal to the rms noise current in a 1 hertz bandwidth. The related characteristic detectivity (D) is the inverse of NEP, 1/NEP; and the specific detectivity ($D^\star$) is the detectivity normalized to the area (A) of the photodetector, $D^\star=D\sqrt{A}$. The NEP is roughly the minimum detectable input power of a photodiode.

When a photodiode is used in an optical communication system, these parameters contribute to the sensitivity of the optical receiver, which is the minimum input power required for the receiver to achieve a specified bit error rate.

## Applications

P-N photodiodes are used in similar applications to other photodetectors, such as photoconductors, charge-coupled devices, and photomultiplier tubes. They may be used to generate an output which is dependent upon the illumination (analog; for measurement and the like), or to change the state of circuitry (digital; either for control and switching, or digital signal processing).

Photodiodes are used in consumer electronics devices such as compact disc players, smoke detectors, and the receivers for infrared remote control devices used to control equipment from televisions to air conditioners. For many applications either photodiodes or photoconductors may be used. Either type of photosensor may be used for light measurement, as in camera light meters, or to respond to light levels, as in switching on street lighting after dark.

Photosensors of all types may be used to respond to incident light, or to a source of light which is part of the same circuit or system.
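The figures of merit defined under Features (responsivity, quantum efficiency, NEP, detectivity) tie together numerically. A quick sketch in Python; only the formulas come from the definitions in the text, while the wavelength, NEP, and area values are illustrative assumptions:

```python
import math

# Illustrative operating point (assumed values, not from the article)
wavelength_m = 900e-9  # incident wavelength
qe = 0.8               # quantum efficiency; ~80% is quoted as typical
nep = 1e-14            # noise-equivalent power, W / sqrt(Hz)
area_cm2 = 0.01        # active area, cm^2

H = 6.62607015e-34     # Planck constant, J*s
C = 2.99792458e8       # speed of light, m/s
Q = 1.602176634e-19    # electron charge, C

# Responsivity (A/W) from quantum efficiency: R = QE * q * lambda / (h * c)
responsivity = qe * Q * wavelength_m / (H * C)

# Detectivity and specific detectivity, as defined in the text
D = 1.0 / nep                       # D = 1 / NEP
D_star = math.sqrt(area_cm2) / nep  # D* = D * sqrt(A), in cm*Hz^0.5/W

print(f"responsivity ~ {responsivity:.2f} A/W")
print(f"D* = {D_star:.2e} cm*Hz^0.5/W")
```

The responsivity formula shows why the A/W figure depends on wavelength even when the quantum efficiency does not: longer-wavelength photons carry less energy, so each watt of light delivers more photons.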
A photodiode is often combined into a single component with an emitter of light, usually a light-emitting diode (LED), either to detect the presence of a mechanical obstruction to the beam (slotted optical switch), or to couple two digital or analog circuits while maintaining extremely high electrical isolation between them, often for safety (optocoupler).

Photodiodes are often used for accurate measurement of light intensity in science and industry. They generally have a more linear response than photoconductors. They are also widely used in various medical applications, such as detectors for computed tomography (coupled with scintillators), instruments to analyze samples (immunoassay), and pulse oximeters.

PIN diodes are much faster and more sensitive than p-n junction diodes, and hence are often used for optical communications and in lighting regulation.

P-N photodiodes are not used to measure extremely low light intensities. Instead, if high sensitivity is needed, avalanche photodiodes, intensified charge-coupled devices or photomultiplier tubes are used for applications such as astronomy, spectroscopy, night vision equipment and laser rangefinding.

### Comparison with photomultipliers

Advantages compared to photomultipliers:

1. Excellent linearity of output current as a function of incident light
2. Spectral response from 190 nm to 1100 nm (silicon), longer wavelengths with other semiconductor materials
3. Low noise
4. Ruggedized to mechanical stress
5. Low cost
6. Compact and light weight
7. Long lifetime
8. High quantum efficiency, typically 80%
9. No high voltage required

Disadvantages compared to photomultipliers:

1. Small area
2. No internal gain (except avalanche photodiodes, but their gain is typically 10²–10³ compared to up to 10⁸ for the photomultiplier)
3. Much lower overall sensitivity
4. Photon counting only possible with specially designed, usually cooled photodiodes, with special electronic circuits
5.
Response time for many designs is slower

## Photodiode array

A one-dimensional array of hundreds or thousands of photodiodes can be used as a position sensor, for example as part of an angle sensor.[6] One advantage of photodiode arrays (PDAs) is that they allow for high-speed parallel read-out, since the driving electronics need not be built in as in a traditional CMOS or CCD sensor.

## References

This article incorporates public domain material from the General Services Administration document "Federal Standard 1037C".

1. ^ IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (2006–) "Photodiode".
2. ^ James F. Cox (26 June 2001). Fundamentals of Linear Electronics: Integrated and Discrete. Cengage Learning. pp. 91–. ISBN 978-0-7668-3018-9. Retrieved 20 August 2011.
3. ^ Filip Tavernier, Michiel Steyaert. High-Speed Optical Receivers with Integrated Photodiode in Nanoscale CMOS. Springer, 2011. ISBN 1441999248. Chapter 3: From Light to Electric Current – The Photodiode.
4. ^
5. ^ Held, G. Introduction to Light Emitting Diode Technology and Applications. CRC Press (Worldwide, 2008). Ch. 5, p. 116. ISBN 1-4200-7662-0.
6. ^ Wei Gao (2010). Precision Nanometrology: Sensors and Measuring Systems for Nanomanufacturing. Springer. pp. 15–16. ISBN 9781849962537.

• Gowar, John, Optical Communication Systems, 2nd ed., Prentice-Hall, Hempstead UK, 1993 (ISBN 0-13-638727-6)

Wikimedia Foundation. 2010.
https://www.physicsforums.com/threads/motion-equations-of-a-disc-rotating-freely-around-its-center-3d.580498/
# Motion equations of a disc rotating freely around its center (3d)

1. Feb 23, 2012

### bluekuma

1. The problem statement, all variables and given/known data

The system is made of a disc whose center is pinned to the origin (so the disc cannot translate), and some weights that can be stuck on the disc to make it tilt (the weights do not translate on the disc) (see images attached). There is no friction whatsoever. The only force is gravitational force, directed opposite to the z-axis.

Let's start with the disc at rest with its axis parallel to the z-axis. Now, if you put a weight on it, the disc starts oscillating just as if it were a pendulum. Then, at time t=t0, you put another weight on it.

If ω⃗ is the rotational speed vector and θ⃗ is the rotation vector of the disc (meaning the direction of the disc's axis is always the z-versor rotated by θ radians around θ⃗), what's the expression of f⃗(t,ω,θ) in:

dω⃗/dt = f⃗(t,ω,θ), dθ⃗/dt = ω⃗

Given the initial values ω⃗(t0) = ω⃗0 ≠ 0, θ⃗(t0) = θ⃗0 ≠ 0 and dω⃗(t0)/dt ≠ 0, that would give me a way to simulate the system's motion through a standard Runge-Kutta integration method.

MIGHT HELP TO KNOW:
- I'm pretty sure there is a way to divide the two vectorial equations into six (three systems of two) linear equations.
- The z-component of the torque M⃗ (where dω⃗/dt = M⃗/I) is always 0 (zero), as M⃗ is the result of a cross product between a vector r = (x, y, z) and the gravitational force (0, 0, -mg); therefore the z-components of ω⃗ and θ⃗ are also 0.

2. Relevant equations

d$\vec{ω}$/dt = $\vec{M}$/I, where $\vec{M}$ is the torque (moment of force) and I the moment of inertia.

3. The attempt at a solution

Ehr... I'm actively looking for a system of coordinates in which the vectorial equations can be separated into three linear equations; that would solve my problem. Obviously I have had no success so far.

2.
Feb 23, 2012

### bluekuma

I tried with the Lagrangian and the Euler-Lagrange equations, but I really don't know where to start writing the kinetic and potential energies in Cartesian or spherical coordinates.

I tried with a coordinate system that moves together with the disc (longitudinal and vertical axes, as is usually done when studying the motion of airplanes). There the weights are at rest, but the force of gravity keeps changing its direction, which only adds to the confusion.

Any idea?
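Not a full answer, but the Runge-Kutta simulation the original post asks about can be sketched as follows. This is a minimal illustration under the poster's own simplifications (dθ⃗/dt = ω⃗ and a scalar moment of inertia I in dω⃗/dt = M⃗/I; a full rigid-body treatment would use Euler's equations with the inertia tensor). The weight's mass, its position on the disc and the value of I here are made-up numbers:

```python
import math

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def rotate(theta, v):
    """Rotate v by the rotation vector theta (Rodrigues' formula)."""
    a = math.sqrt(sum(t * t for t in theta))
    if a < 1e-12:
        return list(v)
    k = [t / a for t in theta]
    kxv = cross(k, v)
    kdv = sum(ki * vi for ki, vi in zip(k, v))
    c, s = math.cos(a), math.sin(a)
    return [v[i] * c + kxv[i] * s + k[i] * kdv * (1 - c) for i in range(3)]

G = [0.0, 0.0, -9.81]  # gravity, opposite to the z-axis

def f(y, weights, inertia):
    """RHS of the poster's system; y = theta (3 components) + omega (3)."""
    theta, omega = y[:3], y[3:]
    M = [0.0, 0.0, 0.0]
    for m, r_body in weights:
        r = rotate(theta, r_body)        # weight position in the world frame
        F = [m * g for g in G]           # gravitational force on the weight
        t = cross(r, F)                  # torque r x F; its z-component is 0
        M = [M[i] + t[i] for i in range(3)]
    return list(omega) + [M[i] / inertia for i in range(3)]

def rk4_step(y, h, weights, inertia):
    """One classical 4th-order Runge-Kutta step."""
    k1 = f(y, weights, inertia)
    k2 = f([y[i] + h/2 * k1[i] for i in range(6)], weights, inertia)
    k3 = f([y[i] + h/2 * k2[i] for i in range(6)], weights, inertia)
    k4 = f([y[i] + h * k3[i] for i in range(6)], weights, inertia)
    return [y[i] + h/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(6)]

# One 0.1 kg weight stuck at disc-frame position (0.2, 0, 0) m; I = 0.01 kg m^2
weights = [(0.1, [0.2, 0.0, 0.0])]
inertia = 0.01
y = [0.0] * 6                            # start level and at rest
for _ in range(1000):                    # integrate 1 s at h = 1 ms
    y = rk4_step(y, 1e-3, weights, inertia)
```

As the poster noted, the z-components of ω⃗ and θ⃗ stay exactly zero throughout, since the torque's z-component vanishes; with a single weight the motion reduces to a plane pendulum about the axis perpendicular to r and z.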
https://git.collabora.com/cgit/user/oggis/dbus-rad.git/tree/ertyo.tex
path: root/ertyo.tex

% vim: set nocin noai nosmartindent:
\documentclass[11pt,a4paper]{article}

\usepackage{ucs}
\usepackage[utf8x]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{txfonts}
\usepackage{url}
\usepackage[american]{babel}
\usepackage[pdftex]{color,graphicx}
\usepackage{scrpage2}
\usepackage[onehalfspacing]{setspace}

% UTU compliant page layout
\addtolength{\voffset}{-1.8cm}
\addtolength{\textheight}{2.8cm}
\addtolength{\footskip}{6mm}
\addtolength{\hoffset}{-2cm}
\setlength{\textwidth}{17cm}
\ifoot[]{}
\cfoot[]{}
\ofoot[\small\pagemark]{\pagemark}
\setlength{\headheight}{1.3\baselineskip}

\begin{document}

\begin{titlepage}
\begingroup
\begin{singlespace}
\begin{center}
\vspace*{7.3cm}
\hrule
\vspace{.7cm}
\Large
\textbf{Master's project}
\vspace{.7cm}
\hrule
\end{center}
\vspace{\stretch{1}}
\begin{flushright}
\begin{minipage}{0.45\textwidth}
\begin{flushright}
\small
UNIVERSITY OF TURKU \\
\mbox{Department of Information Technology} \\
\mbox{Computer Science} \\
\today \\
Olli Salli \\
\end{flushright}
\end{minipage}
\end{flushright}
\vspace{\stretch{0}}
\end{singlespace}
\endgroup
\end{titlepage}

% Table of Contents
\selectlanguage{american}
\clearpage
\tableofcontents

% Text chapters

\section{Introduction}

The topic of my Master's
thesis\footnote{\url{http://urn.fi/URN:NBN:fi-fe201208246345}} is ``Building object-oriented software with the D-Bus messaging system''. D-Bus is a message passing-oriented inter-process communication (IPC) system. The focus of the thesis is on practical requirements on how D-Bus and similar systems can be used in interactive desktop software. Several object-oriented design and implementation techniques that help create software fulfilling these requirements are presented and analyzed.

The ideas presented in the thesis are based on my work on the open-source Telepathy communications framework\footnote{\url{http://telepathy.freedesktop.org}}. Telepathy consists of independent modules such as protocol backends, account storage and logging services, and user interfaces (UIs). The modules are connected together at runtime over the D-Bus messaging bus. Over the years, the various components of Telepathy have been contributed to by dozens of people. This report details those of my own personal contributions that are directly connected with the thesis.

\section{Early backend and interface design work}

My involvement with Telepathy started in 2006 with implementing the \texttt{te\-le\-pa\-thy-id\-le} protocol backend for the Internet Relay Chat (IRC) protocol. At that point, Telepathy was in its infancy, with only one other reasonably complete protocol backend in existence, one for the Jabber/XMPP protocol. Telepathy was being taken into use in a software update for the Nokia 770 Internet Tablet device, which had a slow 200 MHz ARM CPU and only 128 megabytes of RAM.

Section 5.5 of the thesis deals with mechanisms that allow querying the state of multiple objects with one D-Bus method call. When I started working on \texttt{te\-le\-pa\-thy-id\-le}, these mechanisms didn't exist yet in Telepathy. IRC chat rooms can have thousands of members, which used to cause several thousand D-Bus method calls to be made to the protocol backend.
With the limited resources of the Nokia 770, joining an IRC chat room could thus take minutes. In version 0.13 of the Telepathy D-Bus interfaces,\footnote{\url{http://telepathy.freedesktop.org/spec-0.13.html}} we changed all operations that worked with contacts to process an arbitrary number of contacts with a single call. This improved the performance of Telepathy with IRC to a reasonable level.

Initially, \texttt{telepathy-idle} exposed its functionality over D-Bus by implementing low-level stub methods. These stubs were generated from low-level D-Bus introspection data corresponding to the Telepathy D-Bus interfaces. As detailed in Section 6.1 of the thesis, this format lacks sufficient information for meaningful static typing of more complex method arguments. For this reason, a lot of hand-written code was required in \texttt{te\-le\-pa\-thy-id\-le} for the purpose of encoding and decoding argument values. Because it implemented the same interfaces, similar code was also required in the Telepathy XMPP protocol backend. Over time, we refactored these and some other common pieces of the implementation to the \texttt{te\-le\-pa\-thy-glib} library, and in May 2007 I completed reimplementing the IRC backend using the library\footnote{\url{http://lists.freedesktop.org/archives/telepathy/2007-May/000677.html}}.

\section{UI-side work}

After \texttt{telepathy-idle}, I implemented another Telepathy backend for a proprietary protocol using the \texttt{te\-le\-pa\-thy-glib} library. At this point, Telepathy was used in a fairly complex setting in the Nokia N800 and N810 devices, successors to the original 770. As a part of this work, I investigated some issues with the device freezing in certain communication situations. This was caused by certain UI-side Telepathy-related components calling D-Bus methods on each other using the pseudo-blocking algorithm, forming a \emph{wait-reorder cycle}, as explained in Section~3.5.3 of the thesis.
In Summer 2008, I started\footnote{\url{http://cgit.freedesktop.org/telepathy/telepathy-qt/commit/?id=190ecb3d7cd80732d1cd8dc484184a8cfe104707}} the \texttt{te\-le\-pa\-thy-qt4} project to enable implementing Telepathy UIs using the Qt framework more easily. This was motivated by Nokia acquiring Trolltech, the developer of Qt,\footnote{\url{http://press.nokia.com/2008/01/28/nokia-to-acquire-trolltech-to-accelerate-software-strategy/}} and the eventual rebasing of their Linux device portfolio on Qt. The library later became the basis for all messaging-related components in the Harmattan release of their Maemo/MeeGo operating system, used in the Nokia N9 mobile phone, and a part of its SDK.\footnote{\url{http://harmattan-dev.nokia.com/docs/library/html/libtelepathy-qt4-1-doc/main.html}}

\subsection{Code generation}

As the first part of the \texttt{tp-qt4} project, I implemented machinery to generate low-level proxy code for accessing Telepathy service objects. This is at the level described by Section~6.2 of the thesis. At that point, the Telepathy D-Bus interfaces were no longer specified using the bare D-Bus introspection format, but an extended version. However, due to differences between the GLib and Qt type systems, some further modifications were required to generate Qt code.\footnote{\url{http://cgit.freedesktop.org/telepathy/telepathy-spec/log/?id=657342595d19196e7dd00595f754f039796f52f8}} To facilitate these, and to enable sharing some parts of the machinery between \texttt{tp-qt4} and \texttt{tp-glib}, I also partially reimplemented the \texttt{te\-le\-pa\-thy-glib} code generation machinery.\footnote{e.g.~\url{http://lists.freedesktop.org/archives/telepathy-commits/2008-May/000827.html}}

Owing to experience from previous projects, from the start, the generated \texttt{tp-qt4} proxies have only made it possible to call methods asynchronously.
At the lowest level, the calls are represented using \texttt{QD\-Bus\-Pen\-ding\-Call\-Wat\-cher} objects from the Qt framework.

\subsection{Higher-level proxies}

Generated proxies follow the structure of the D-Bus API, for which the prime concern is flexibility and efficiency. This occasionally makes the APIs less convenient to use. For this reason, both \texttt{tp-qt4} and \texttt{tp-glib} wrap the generated proxies with a hand-written higher-level API, as described in the introduction to Chapter 6 of the thesis. I implemented the \texttt{tp-qt4} high-level proxies together with Andre Magalhaes and Simon McVittie, among others.

Up to 2010, we were busy implementing basic state caching proxies (thesis Section~5.2) for all parts of the Telepathy API, and following changes to the other parts of the framework. Because the primary target platform of \texttt{tp-qt4} was a mobile phone, special attention was paid to avoiding redundant battery-eating wakeups. One artifact of this is the conditional activation functionality in the state caching mechanism, described in Section~5.3 of the thesis as the ``Optional Proxy Features'' pattern.

Another notable feature of the proxies in \texttt{tp-qt4} is wrapping some D-Bus operations inside higher level ``job objects''. These are described as the ``Pending Operation Objects'' and ``Composite Pending Operation'' patterns in Section~5.1 of the thesis. One example is the facility for fetching contact information from a protocol backend, using the D-Bus level mechanisms described in Section~5.5 of the thesis. The efficiency orientation of these mechanisms makes them very inconvenient to use directly.
For this reason, they are wrapped as pending operations in the \texttt{Con\-tact\-Ma\-na\-ger} part of the \texttt{tp-qt4} API, which I implemented in early 2009.\footnote{\url{http://cgit.freedesktop.org/telepathy/telepathy-qt/log/?id=d86579e34367641c4b2ca23096791950bf9fa72c}}

% 5.4 factories
% - lots of features, prepare separately, blah blah, hard
% https://bugs.freedesktop.org/show_bug.cgi?id=29451 (general "it's hard")
% https://bugs.freedesktop.org/show_bug.cgi?id=29606 (acc/conn fact.)
% 5.5 multiplexed state sync
% - early 2009: implemented tp-qt4 Contact, ContactManager with Conn.I.Contacts D-Bus API

\end{document}
https://mathematica.stackexchange.com/questions/120321/information-of-function-defined-in-package-return-the-function-with-long-na?noredirect=1
# Information (??) of function defined in Package returns the function with long names of variables [duplicate]

When I call Information on a function defined in a package, the result always shows the function's private variables with their full context names. For example:

BeginPackage["test`"];
testFunction::usage = "testFunction";
testMean::usage = "data";
Begin["`Private`"];
testFunction[n_] := Module[{data},
  data = RandomReal[10, {n, 2}];
  testMean = Mean[data]];
End[];
EndPackage[];

Once I load the package and enter:

?? testFunction

I get a result which clearly shows the private variables with their full context names. What can I do to view just the names of the variables, without their context?

Thank you

## marked as duplicate by Mr.Wizard♦ Jul 9 '16 at 4:00

Needs["GeneralUtilities`"];
https://amwa-tv.github.io/nmos/branches/master/NMOS_Technical_Overview.html
# NMOS Technical Overview

(c) AMWA 2018, CC Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)

## Introduction

Networked Media Open Specifications (NMOS) are a family of specifications that support the professional AV media industry's transition to a "fully-networked" architecture. The NMOS specs are developed by the Advanced Media Workflow Association (AMWA) and are published on GitHub. This page provides a technical overview of NMOS. It's a work in progress, and will be updated with information currently in a legacy document in this repository.

### Background

While much of the broadcast industry has moved to file-based operation, live facilities have long depended on specialist technologies such as the Serial Digital Interface (SDI), SMPTE Timecode and various incompatible control protocols (including some using RS-232, some of which are still in use). However (as of 2017) there is a significant move towards replacing these with more general IT/IP technologies, allowing the industry to benefit from the high speeds and economies of scale that have enabled the success of the Internet and Web.

Standards bodies including SMPTE and AES have created specifications for streaming of uncompressed video and audio over IP. These use RTP and include ST 2022-6 for SDI-based payloads, AES-67 for audio-only payloads and the forthcoming ST 2110 for separate video, audio and ancillary data over IP. However, none of these tackle the control or application planes, leaving significant additional work to be done to achieve useful interoperability in professional networked media environments. So a number of industry bodies came together in 2013 on the Joint Task Force on Networked Media (JT-NM) to coordinate how this might happen. This led to the creation of a "reference architecture" for interoperability (JT-NM RA).
At its most basic this identifies models and best practices for what may be needed at four layers: operation, application, platform and infrastructure.

This is where the Advanced Media Workflow Association (AMWA) comes in. AMWA is an industry group of manufacturers, developers and end users that is trying to advance a software-focussed approach to support future professional media operations. What this means in practice is identifying how to build upon "commodity" infrastructure (red layer) and widely used platform technologies/protocols (green layer), and supplement these where required with helpful specifications that build upon these building blocks. AMWA has done this in the past with "application specifications" for file-based interchange and delivery, and is now doing this for networked media with the NMOS specifications, which are being created by AMWA's Networked Media Incubator group. These provide an open set of APIs to support interoperability for networked media applications.

### General Principles

When creating NMOS specifications we try to follow a number of general principles, which will be familiar to today's developers.

#### Web-friendly protocols

In the past, specialised wire protocols have often been used for the control plane within facilities. However, networked media operations are becoming increasingly distributed across locations, and sometimes across organisational boundaries, including third-party/public cloud providers. So it is desirable to use protocols that are aimed at such environments. HTTP and WebSockets are examples of these, and this is what NMOS currently uses. There is a huge amount of work happening in the wider IT/IP industry on optimising these protocols and their implementation, making previous arguments about the performance of specialised protocols less relevant.
#### Developer-friendly APIs

A decade ago, typical control APIs used an "RPC-style" approach based on SOAP, XML, XSD and WSDL, leading to quite complex code and messages. Modern developers of web APIs typically use a REST (or at least "REST-like"; see below) approach with simpler messages based on JSON and a lightweight approach to schemas using e.g. RAML and JSON Schema.

#### REST

Although "REST" is often used to mean any simple HTTP API, in creating the NMOS specs we have tried to adopt "correct practice" such as statelessness, uniform interface, resource identification in requests, HATEOAS, etc. (The Wikipedia REST page has a good summary of these.) But there are no hard rules on this, and a certain amount of pragmatism has also been applied, especially for more control-oriented activities such as connection management.

#### Technology independence through data modelling

This might seem to conflict with some of the above, but it doesn't have to. In creating the NMOS specifications we have started with (UML) data models, which you will see in the NMOS repositories, and then mapped these to HTTP/WebSockets/RAML/JSON. But should the wider IT/IP world migrate to new technologies, alternative mappings of the data models could feature in updated specifications. You can see this explicitly in the relation between the logical content model and the RTP mapping specification. The IS-04 and IS-05 specifications you see on GitHub are the result of such mappings.

#### Build on widely used and open foundations

The success of HTTP and WebSockets is in part due to their open nature, being made available through IETF RFCs. The same applies to RTP, which is the basis of much industry activity on live IP at present.

#### Openly available specifications

We are using GitHub repositories to publish the specifications. These are made public as soon as is sensible, and of course are available at no cost (AMWA is using a "RAND-Z" model for this work).
We use the Apache 2.0 open source licence for specifications (and the current open-source implementations).

#### Self-documenting specifications

Much of the "normative" part of the NMOS specifications takes the form of RAML and JSON Schema (with text-based supporting information). This allows the specifications to be largely self-documenting.

#### Scalable

The Internet/Web has scaled well so far (shortage of remaining public IPv4 addresses notwithstanding). NMOS APIs are built from Internet/Web technologies, so should also scale. That's the theory – at the time of writing this we are planning some practical work to study/prove this is the case, including documenting best practice.

#### Securable

Huge amounts of resources are spent on ensuring the world can use the Internet/Web securely. NMOS APIs are built from Internet/Web technologies, so should benefit. Again, that's the theory – so far Incubator workshops have used plain HTTP/WebSockets for expediency, but the specifications support HTTPS/WSS. At the time of writing this we are planning some practical work to study/prove this is the case, including documenting best practice (such as what authentication, authorisation and audit technologies are well suited to networked media applications).

#### Suitable for all types of platform

Professional media has to work in many different types of environment, requiring a range of types of equipment. This means that NMOS specifications have been designed to work on many types of platform, such as:

• low-power devices, used on location and connected on a local network
• rack-mounted equipment within a fixed facility in a television centre
• virtualised in an on-premises data centre
• on a shared or public cloud

#### Universal Identity

In NMOS specifications, everything is treated as a resource that can be uniquely identified. This is discussed in depth in the "Identity Framework" section of the JT-NM RA.
In practice it means that every resource has a UUID/GUID that can be generated locally (rather than being assigned by a central authority). This UUID is then used within JSON messages and as part of RESTful URIs.

#### Flexible content

NMOS's content model reflects the richness of use of content in modern productions. Video, audio and data are treated as separate elements with their own identity and timing information. This allows them to be handled as required during production and rendered for consumption as needed for the platform(s).

#### Use rather than invent

NMOS specifications apply techniques used more generally for the professional media industry. Where possible we use protocols, representations, technologies, etc. that have proved successful elsewhere.

#### Benefit from modern tooling

Similarly, the NMOS specifications have been written with the intent that they will be implemented using technologies that are widely known by developers with experience of network and web development.

#### Guided by JT-NM RA

This has already been mentioned, but it underpins how work on future NMOS specifications is likely to develop, as it ensures the work stays relevant across a broad community.

## NMOS Model and Terminology

Before explaining the NMOS specifications themselves it is helpful to present the model we are using in a sequence of pictures. This will also introduce some of the terminology used in NMOS specifications – this is similar to that used in the JT-NM RA. Be warned that in some cases common words (such as "Device") are used to represent "logical" things and so may not mean what you expect. A more complete list of NMOS terminology is provided in the Glossary.

In NMOS specifications a Device represents a logical block of functionality, and a Node is the host for one or more Devices. Devices have logical inputs and outputs called Receivers and Senders (figures omitted). Devices, Senders and Receivers are all Resources.
A Resource is a uniquely identified and addressable part of a networked system. As an example, consider an IP-enabled camera. Associated with it there will probably be a Node, a Device, a video Sender, an audio Sender (if it has microphones), maybe a data Sender (e.g. for position data), and perhaps Receivers for reverse video, intercom and control data.

NMOS uses the term Flow for a sequence of video, audio, or time-related data, which can flow from a Sender to a Receiver or Receivers. A Flow is treated as a resource and has a unique ID. The elements within the Flow are called Grains. An example of a Grain is a video frame. Grains are associated with a position on a timeline. Although Grains are often regularly spaced, they don't have to be, for example in the case of Data Grains representing irregular events. Each Flow is also associated with a Source: this is the logical originator of the Flow.

So in the NMOS model, a camera could have several associated resources:

• Node
• Device
• Video, Audio and Data Sources
• Video, Audio and Data Senders
• Video, Audio and Data Receivers (for tally, viewfinder and comms)
• Video, Audio and Data Flows

So far, NMOS specifications have worked with quite fine-grained Resources (pun unavoidable). Future NMOS specifications will consider functionality and content at a higher level, for example for dealing with "bundles" of Flows.

## The Specifications

This section outlines the publicly available NMOS specifications.

### Discovery and Registration Specification (IS-04)

https://amwa-tv.github.io/nmos-discovery-registration

This Specification enables applications to discover networked resources, which is an important first step towards automation and scalability. It specifies:

• an HTTP Registration API that Nodes use to register their resources with a Registry.
• an HTTP Query API that applications use to find a list of available resources of a particular type (Device, Sender, Receiver…) in the Registry.
• an HTTP Node API that applications use to find further resources on the Node.
• how to announce the APIs using DNS-SD, so the API endpoints don't have to be known by Nodes or Applications.
• how to achieve "peer-to-peer" discovery using DNS-SD and the Node API, where no Registry is available.

It also includes a basic connection management mechanism that was used before the creation of IS-05 (see below). This is deprecated, and will be removed in later versions of IS-04.

### Device Connection Management Specification (IS-05)

https://amwa-tv.github.io/nmos-device-connection-management

This Specification provides an HTTP API for establishing (and removing) Flows between Senders and Receivers. This allows the connection to be made in a way that doesn't require knowledge of the transport protocol that will be used. It can be used for both unicast and multicast connections, and to initiate a connection made by a separate controller application. It allows connections to be prepared and "activated" at a particular time, and allows multiple connections to be made/unmade at the same time (sometimes known as "bulk" or "salvo" operation).

### Network Control Specification (IS-06)

https://amwa-tv.github.io/nmos-network-control

This Specification can be considered a "northbound API" for SDN controllers. It provides an HTTP API to communicate information about the network topology, to reserve bandwidth for low-level network flows, and to support monitoring.

### Event and Tally Specification (IS-07)

https://amwa-tv.github.io/AMWA-TV/nmos-event-tally

This Specification provides a mechanism for conveying time-related state and state-change information, for example tally information from sensors and actuators, using WebSockets or a message queue (MQTT).

### Parameter Registers

https://amwa-tv.github.io/nmos-parameter-registers

The Parameter Registers provide an extensible mechanism for defining values used within NMOS Specifications. Currently these use URNs.
For example, some NMOS resources have a format property, and urn:x-nmos:format:video provides a formal way of using this.

### Natural Grouping (future BCP-002-01)

https://amwa-tv.github.io/nmos-grouping/best-practice-natural-grouping.html

This defines how to tag related resources, such as a group of Senders or a group of Receivers belonging to the same Device or Node.

### Audio Channel Mapping (Work In Progress, future IS-08)

https://amwa-tv.github.io/nmos-audio-channel-mapping/

This will specify channel mapping/selection/shuffling settings for use with NMOS APIs.

### Securing Communications (Work In Progress, future BCP-003-01)

https://amwa-tv.github.io/nmos-api-security/best-practice-secure-comms.html

This documents best practice for securing communications used in NMOS specifications, using TLS and PKI. Further documents will cover role-based authorisation of operations.
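To make the Specifications above a little more concrete, here is a rough sketch of a client of the IS-04 Query API. The base URL is a placeholder (in practice the endpoint would be discovered via DNS-SD), and the code assumes only that `/senders` returns a JSON array of resources each carrying an `id` and an optional `label` – treat it as illustrative, not as a reference client:

```python
import json
import urllib.request

# Placeholder; a real deployment would discover this endpoint via DNS-SD.
QUERY_API = "http://registry.example.com/x-nmos/query/v1.2"

def summarise_senders(body):
    """Reduce a Query API /senders response body to (id, label) pairs."""
    return [(s["id"], s.get("label", "")) for s in json.loads(body)]

# Live usage (commented out, as it needs a reachable Registry):
# with urllib.request.urlopen(QUERY_API + "/senders") as resp:
#     print(summarise_senders(resp.read()))

# Illustrative response body, loosely shaped like IS-04 Sender resources:
sample = '[{"id": "b5d8b8a0-575d-4f04-b2c1-9f4b2d8a0c11", "label": "Camera 1 video"}]'
print(summarise_senders(sample))
```

The same pattern (plain HTTP GET, JSON array of resources) applies to the other resource types exposed by the Query API.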
# Capacitance in an AC Circuit

- **Capacitance in AC Circuit and Capacitive Reactance – Basic Electronics Tutorials:** Capacitance in AC circuits. However, if we apply an alternating current or AC supply, the capacitor will alternately charge and discharge at a rate determined by the frequency of the supply. Then the capacitance in AC circuits varies with frequency as the capacitor is being constantly charged and discharged.
- **AC Capacitance and Capacitive Reactance in AC Circuit:** The opposition to current flow through an AC capacitor is called capacitive reactance, which is itself inversely proportional to the supply frequency. Capacitors store energy on their conductive plates in the form of an electrical charge.
- **AC Circuits – Boston University Physics:** Capacitance in an AC circuit. The larger the capacitance of the capacitor, the more charge has to flow to build up a particular voltage on the plates, and the higher the current will be. The higher the frequency of the voltage, the shorter the time available to change the voltage, so the larger the current has to be.
- **Capacitance in AC Circuits – Electronics Hub:** If an AC supply voltage is applied to the capacitor circuit, the capacitor charges and discharges continuously depending on the frequency of the supply voltage. The capacitance of a capacitor in AC circuits depends on the frequency of the supply voltage applied to it.
- **Capacitor: Capacitance in AC Circuit – dnatechindia:** Capacitive reactance in a purely capacitive circuit is the opposition to current flow in AC circuits only. Like resistance, reactance is also measured in ohms, but is given the symbol X to distinguish it from a purely resistive value.
- **AC Capacitor Circuits | Capacitive Reactance and Impedance:** In an AC circuit containing a resistance and a capacitance in parallel, the voltage of each circuit element will be the same as the source voltage.
Further, there will be no phase difference among the voltages.
- **Capacitance in AC Circuit – electronics blogspot:** Capacitive reactance in a purely capacitive circuit is the opposition to current flow in AC circuits only. Like resistance, reactance is also measured in ohms, but is given the symbol X to distinguish it from a purely resistive value.
- **What is the capacitance in AC circuits? – Quora:** Capacitance in an AC circuit relates to the amount of energy stored in the form of an electric field and is defined as the ratio of the change in an electric charge in a system to the corresponding change in its electric potential. Capacitance may...
- **How Capacitors Behave in AC Circuits – EEWeb Community:** Capacitive AC circuits. A purely capacitive AC circuit is one containing an AC voltage supply and a capacitor such as that shown in Figure 2. The capacitor is connected directly across the AC supply voltage. As the supply voltage increases and decreases, the capacitor charges and discharges with respect to this change.
- **AC Capacitor Circuits | Reactance and Impedance:** A capacitor's reactance. Alternating current in a simple capacitive circuit is equal to the voltage (in volts) divided by the capacitive reactance (in ohms), just as either alternating or direct current in a simple resistive circuit is equal to the voltage (in volts) divided by the resistance (in ohms).
- **What is the Role of a Capacitor in AC and DC Circuits?:** Capacitance in AC circuits depends upon the frequency of the supplied input voltage. Also, if you look at the phasor diagram of an ideal AC capacitor circuit, you can observe that the current leads the voltage by 90°.
- **Electrical Capacitors in AC Circuits – Incident Prevention:** For appliance circuits like motors and high-intensity discharge lighting, capacitors are designated by the farad, a unit of electrical capacitance named after British scientist Michael Faraday.
In power distribution, capacitors are designated in kilovolt-amperes reactive, or kVARs, for simplicity of application.
- **Capacitors and RC Circuits:** When capacitors are arranged in parallel, the equivalent capacitance is $C_{eq} = C_1 + C_2 + C_3 + \ldots$ When capacitors are arranged in series, the equivalent capacitance is ... Quantities in an RC circuit change exponentially, which means quickly at first, then more and more slowly. Values change by the same multiplicative ...
- **AC Circuits Basics, Impedance, Resonant Frequency, RL RC RLC LC Circuits Explained, Physics Problems:** This physics video tutorial explains the basics of AC circuits. It shows you how to calculate the capacitive reactance, inductive reactance, and impedance of an RLC circuit, and how to determine the ...
- **AC through Pure Capacitance – AC Circuits – Basic Electrical Engineering:** First Year Engineering video lecture on AC through pure capacitance, from the chapter AC Circuits Analysis of the subject Basic Electrical Engineering for first-year engineering students. To access the complete course of Basic Electrical ...
- **AC Capacitive Circuits – Electronics Hub:** From the above equation, the capacitive reactance of a capacitor in an AC circuit is a function of frequency and capacitance. The capacitive reactance decreases with increasing frequency, which allows more current to flow through the circuit.
- **Series Resistor-Capacitor Circuits | Reactance and Impedance:** As with the purely capacitive circuit, the current wave leads the voltage wave (of the source), although this time the difference is 79.325° instead of a full 90°. (Figure below) Voltage lags current (current leads voltage) in a series R-C circuit.
- **Capacitor:** A capacitor is a passive two-terminal electronic component that stores electrical energy in an electric field. The effect of a capacitor is known as capacitance. While some capacitance exists between any two electrical conductors in proximity in a circuit, a capacitor is a component designed to add capacitance to a circuit. The capacitor was originally known as a condenser or condensator.
- **Capacitor Circuits: Capacitors in Series, Parallel & AC:** Capacitor in AC circuit. Capacitor in series circuit. In a circuit, when you connect capacitors in series as shown in the above image, the total capacitance is decreased. The current through capacitors in series is equal (i.e. $i_T = i_1 = i_2 = i_3 = i_n$).
- **The basics of capacitance | Electrical Construction:** Capacitive reactance. As we've seen, AC current can flow through a circuit with a capacitance. The apparent resistance of a capacitor in an AC circuit is less than its DC resistance. This apparent AC resistance is called capacitive reactance, and its value decreases as the applied frequency increases.
- **Capacitive Reactance in AC Circuit | Electrical Academia:** Opposition to the flow of an alternating current by the capacitance of the circuit; equal to $\frac{1}{2\pi fC}$ and measured in ohms. The ratio of effective voltage across the capacitor to the effective current is called the capacitive reactance and represents the opposition to current flow.
- **Inductance, capacitance and resistance:** Ohm's law works for AC circuits with inductors, capacitors and resistances. In series circuits, solve for impedance first; in parallel, solve for currents, since the voltage drop is the same across each leg.
- **Electrical reactance:** In electric and electronic systems, reactance is the opposition of a circuit element to a change in current or voltage, due to that element's inductance or capacitance. The notion of reactance is similar to electric resistance, but it differs in several respects.
In phasor analysis, reactance is used to compute amplitude and phase changes of sinusoidal alternating current going through a ...
- **Current without EMF in an AC capacitance circuit – Physics Forums:** In an AC circuit with only capacitance, the current in the circuit is maximum at time t = 0 but the EMF of the source is E = 0 at t = 0; how can this happen? See the circuit diagrams below for an AC source with only capacitance and ideal (zero-resistance) wires.
- **Capacitors and Inductors – EE Power:** The RC circuit. When we connect a resistor and a capacitor in series, we have something called an RC circuit. Figure 1. An RC circuit connected to a battery. This simple network is surprisingly important and appears frequently in professional circuit design. For example, when connected to an AC signal, it becomes a low-pass filter.
- **Why voltage lags current in a capacitive circuit:** Capacitance sounds like the opposite of inductance. Bingo! Whether we are talking about capacitance in AC or DC, it always acts opposite to inductance. Okay, so walk me through it. Alrighty then. In a purely capacitive circuit Kirchhoff's rules still apply. This means that the voltage at the capacitor has to equal the voltage at the source.
- **Reactance, Inductive and Capacitive | Physics:** Although a capacitor is basically an open circuit, there is an RMS current in a circuit with an AC voltage applied to a capacitor. This is because the voltage is continually reversing, charging and discharging the capacitor. If the frequency goes to zero (DC), $X_C$ tends to infinity, and the current is zero once the capacitor is charged. At very ...
- **Capacitance and Capacitive Reactance Flashcards | Quizlet:** Define capacitance. ... Make sure that if you apply it to an AC circuit, the peak voltage of the AC sine wave does not exceed the WVDC rating: for example, 200 WVDC must never have more than 141 V RMS applied to it (200 × 0.707 = 141).
- **Effect of a capacitor on an AC circuit | Physics Forums:** The transfer time (time to switch from AC mode to battery mode) of the UPS is around 4 ms, but in that time the capacitors of the computer's PSU discharge, and as a result it turns off. This doesn't happen when the computer is drawing less power; the PSU capacitors are able to hold for 15–20 ms in that case.
- **Capacitance in AC Circuit Flashcards | Quizlet:** Capacitive reactance: what is a force which resists the flow of an AC circuit? 90 degrees: capacitive reactance causes what degree of phase displacement? Capacitive reactance decreases: as frequency increases, capacitive reactance decreases.
- **Capacitors & Capacitance Calculations, Formulas, Equations:** In the parallel circuit (right), impedance to current flow is infinite with ideal components. Real-world capacitors made of physical components exhibit more than just a pure capacitance when present in an AC circuit. A common circuit-simulator model is shown to the left.
- **Resistor-Capacitor AC Behavior:** You know that the voltage in a capacitive circuit lags the current because current must flow to build up the charge, and the voltage across the capacitor is proportional to the charge built up on the capacitor plates.
- **Basic AC Reactive Components – Wiki myodesie:** In the chapters on inductance and capacitance we have learned that both conditions are reactive and can provide opposition to current flow, but for opposite reasons. Therefore, it is important to find the point where inductance and capacitance cancel one another to achieve efficient operation of AC circuits.
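The formula that recurs throughout these excerpts, $X_C = \frac{1}{2\pi fC}$, is easy to check numerically. The sketch below (component values chosen arbitrarily for illustration) shows reactance falling as frequency rises:

```python
import math

def capacitive_reactance(f_hz, c_farads):
    """X_C = 1 / (2*pi*f*C): the opposition (in ohms) a capacitor presents to AC."""
    return 1.0 / (2 * math.pi * f_hz * c_farads)

# A 10 uF capacitor at 50 Hz mains frequency vs. at 5 kHz:
xc_50 = capacitive_reactance(50, 10e-6)      # roughly 318 ohms
xc_5000 = capacitive_reactance(5000, 10e-6)  # roughly 3.18 ohms
print(round(xc_50, 2), round(xc_5000, 2))

# Reactance falls as frequency rises, so more current flows at high frequency.
```

This matches the qualitative statements above: doubling the frequency halves the reactance, and at DC ($f \to 0$) the reactance tends to infinity.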
# Finding an ON basis of $L_2$

The set $\{f_n : n \in \mathbb{Z}\}$ with $f_n(x) = e^{2\pi inx}$ forms an orthonormal basis of the complex space $L_2([0,1])$. I understand why it's orthonormal, but not why it's a basis.

- Assume that you can find an $f$ in $L^2$ that is orthogonal to all $\sin (nx)$ and $\cos (nx)$. Then show that $f$ has to be zero almost everywhere. –  Rudy the Reindeer Apr 20 '12 at 12:32

It is known that an orthonormal system $\{f_n:n\in\mathbb{Z}\}$ is a basis if $$\operatorname{cl}_{L_2}(\operatorname{span}(\{f_n:n\in\mathbb{Z}\}))=L_2([0,1])$$ where $\operatorname{cl}_{L_2}$ means the closure in the $L_2$ norm. Denote by $C_0([0,1])$ the space of continuous functions on $[0,1]$ which equal $0$ at the points $0$ and $1$. It is known that for each $f\in C_0([0,1])$ the Fejér sums of $f$ converge uniformly to $f$. This means that $$\operatorname{cl}_{C}(\operatorname{span}(\{f_n:n\in\mathbb{Z}\}))\supseteq C_0([0,1])$$ where $\operatorname{cl}_{C}$ means the closure in the uniform norm. Since we always have the inequality $\|f\|_{L_2([0,1])}\leq\|f\|_{C([0,1])}$, it follows that $$\operatorname{cl}_{L_2}(\operatorname{span}(\{f_n:n\in\mathbb{Z}\}))\supseteq C_0([0,1])$$ It remains to say that $C_0([0,1])$ is a dense subspace of $L_2([0,1])$, i.e. $$\operatorname{cl}_{L_2}(C_0([0,1]))=L_2([0,1])$$ so we obtain $$\operatorname{cl}_{L_2}(\operatorname{span}(\{f_n:n\in\mathbb{Z}\}))\supseteq \operatorname{cl}_{L_2}(C_0([0,1]))=L_2([0,1])$$ and hence the closure of the span is all of $L_2([0,1])$.
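As a numerical sanity check (separate from the density argument, and no substitute for it), the orthonormality relations $\langle f_n, f_m\rangle = \delta_{nm}$ can be approximated with a midpoint Riemann sum of $\int_0^1 e^{2\pi i(n-m)x}\,dx$:

```python
import cmath

def inner_product(n, m, num_points=1000):
    """Approximate <f_n, f_m> = integral over [0,1] of e^{2*pi*i*(n-m)*x} dx
    using a midpoint Riemann sum."""
    total = 0.0 + 0.0j
    for j in range(num_points):
        x = (j + 0.5) / num_points
        total += cmath.exp(2j * cmath.pi * (n - m) * x)
    return total / num_points

# Orthonormality: <f_n, f_n> = 1, and <f_n, f_m> = 0 for n != m.
print(abs(inner_product(3, 3)))   # close to 1
print(abs(inner_product(3, -2)))  # close to 0
```

For integer frequency differences the midpoint sum is in fact exact up to floating-point rounding, since it sums a full set of roots of unity.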
# Proving the existence and number of *real* roots for $x^3 - 3x + 2$ I need to find how many real roots this polynomial has and prove their existence. I was wondering if my logic and thought process was correct. Determine the number of real roots and prove it for $x^3 - 3x + 2$ First, note that $f'(x) = 3x^2 - 3$ and so $f'(x) > 0$ for $x \in (-\infty, -1) \cup (1, \infty)$; since $f$ is strictly increasing on those intervals, there can be at most one root in each of them. $f'(x) < 0$ for $x \in (-1,1)$, and since $f$ is strictly decreasing on this interval, it can have at most one root there. Now examine $f(-3) = -16$ and $f(-1) = 4$. By the Intermediate Value Theorem (IVT) $f(c) = 0$ for some $c \in (-3, -1)$ and so $f$ has a root on the interval $(-\infty, -1)$. Again examine $f(-1) = 4$ and $f(1) = 0$. We cannot say anything about $f$ having a root on the interval $(-1, 1)$. Likewise examine $f(1) = 0$ and $f(3) = 20$. Again, we cannot say anything about $f$ having a root on $(1, \infty)$. However, $f(1) = 1 - 3 + 2 = 0$ is clearly a root. And by factorizing the polynomial we get $f(x) = (x+2)(x-1)^2$. Indeed, $1$ is a root with a multiplicity of two. Hence, $f(x)$ has two real roots. Also, do we say two real roots (because of the multiplicity), or three real roots, or do we say two distinct real roots? While I realize factoring the polynomial gives me the answer, I believe the purpose of the question was to do the former analysis, which, when the polynomial isn't easily factorized, can provide a lot of insight. That is why I did it all. • I assume you mean real roots, because it has 3 complex roots. – ÍgjøgnumMeg Apr 4 '17 at 13:51 • "First, note that $f′(x)=3x-3$ and so", you mean $f′(x)=3x^{\color{red}{2}}-3$...? – StackTD Apr 4 '17 at 13:52 • Haha yes! Let me make these adjustments. – student_t Apr 4 '17 at 13:52 However, $f(1) = 1 - 3 + 2 = 0$ is clearly a root. And by factorizing the polynomial we get $f(x) = (x+2)(x-1)^2$.
Indeed, $1$ is a root with a multiplicity of two. All the work you did before this becomes unnecessary; after factoring, the roots (and hence the number of roots) are clear - right? Addition after some comments: when you are asked about the number of roots (real or not), it is usually meant to count the number of distinct (i.e. different) roots. Your equation has two (real) roots, one of which has multiplicity 2, but that doesn't change the fact that there are only two real numbers where the polynomial becomes 0. • Yes, but I think the purpose of the exercise was to do the former analysis. I only did that to show the multiplicity of the root. But yes, in general all my work before would have been a waste haha! – student_t Apr 4 '17 at 13:55 Since we have $$x^3-3x+2=(x-1)^2(x+2),$$ we have three real roots $1,1,-2$. Here we count with multiplicities (which is standard for many results in geometry and other areas). • I would say there are two real roots, one of which has multiplicity two. – gandalf61 Apr 4 '17 at 13:56 • Yes, but I am supposed to do the little analysis before for the question, I believe. Of course factoring would be much faster. – student_t Apr 4 '17 at 13:57 • @danny Yes, this may be the case. But I think it does not matter so much what you are supposed to do or think, but what you yourself think is the best way. – Dietrich Burde Apr 4 '17 at 13:59 • I agree with gandalf61. In the context of the fundamental theorem of algebra, we often say an $n$th-order polynomial "has $n$ complex roots" but this is an abbreviation where we mean to count the multiplicities. That doesn't change the fact that $x^3$ only has one (distinct) root. – StackTD Apr 4 '17 at 13:59 • @danny see comment above; I would say yours has two (distinct) roots, we usually omit 'distinct' and mean different roots when we're talking about the number of roots. – StackTD Apr 4 '17 at 14:00
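As a numerical cross-check of the analysis above (not part of the original thread), a simple bisection on the sign change $f(-3) = -16$, $f(-1) = 4$ locates the root at $x = -2$, while $f(1) = 0$ confirms the double root:

```python
def f(x):
    return x**3 - 3*x + 2

def bisect(lo, hi, tol=1e-12):
    """Find a root of f in [lo, hi] by bisection, assuming f changes sign."""
    assert f(lo) * f(hi) < 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid  # sign change is in the left half
        else:
            lo = mid  # sign change is in the right half
    return (lo + hi) / 2

root = bisect(-3, -1)   # f(-3) = -16 < 0 < 4 = f(-1)
print(round(root, 6))   # -2.0
print(f(1))             # 0, so x = 1 is the (double) root
```

Note that bisection can only find the root at $-2$, where $f$ changes sign; the double root at $1$ touches the axis without crossing it, which is exactly why the IVT argument in the question could not detect it.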
Determine the number of solutions to each quadratic equation: $8{m}^{2}-3m+6=0$ $5{z}^{2}+6z-2=0$ $9{w}^{2}+24w+16=0$ $9{u}^{2}-2u+4=0$ no real solutions 2 1 no real solutions

Determine the number of solutions to each quadratic equation: ${b}^{2}+7b-13=0$ $5{a}^{2}-6a+10=0$ $4{r}^{2}-20r+25=0$ $7{t}^{2}-11t+3=0$ 2 no real solutions 1 2

## Identify the most appropriate method to use to solve a quadratic equation

We have used four methods to solve quadratic equations:

• Factoring
• Square Root Property
• Completing the Square
• Quadratic Formula

You can solve any quadratic equation by using the Quadratic Formula, but that is not always the easiest method to use.

## Identify the most appropriate method to solve a quadratic equation.

1. Try Factoring first. If the quadratic factors easily, this method is very quick.
2. Try the Square Root Property next. If the equation fits the form $a{x}^{2}=k$ or $a{\left(x-h\right)}^{2}=k$, it can easily be solved by using the Square Root Property.
3. Use the Quadratic Formula. Any quadratic equation can be solved by using the Quadratic Formula.

What about the method of completing the square? Most people find that method cumbersome and prefer not to use it. We needed to include it in this chapter because we completed the square in general to derive the Quadratic Formula. You will also use the process of completing the square in other areas of algebra.

Identify the most appropriate method to use to solve each quadratic equation: $5{z}^{2}=17$ $4{x}^{2}-12x+9=0$ $8{u}^{2}+6u=11$

## Solution

$5{z}^{2}=17$ Since the equation is in the form $a{x}^{2}=k$, the most appropriate method is to use the Square Root Property. $4{x}^{2}-12x+9=0$ We recognize that the left side of the equation is a perfect square trinomial, and so Factoring will be the most appropriate method. $8{u}^{2}+6u=11$ Put the equation in standard form.
$8{u}^{2}+6u-11=0$ While our first thought may be to try Factoring, thinking about all the possibilities for trial and error leads us to choose the Quadratic Formula as the most appropriate method.

Identify the most appropriate method to use to solve each quadratic equation: ${x}^{2}+6x+8=0$ ${\left(n-3\right)}^{2}=16$ $5{p}^{2}-6p=9$ factor Square Root Property Quadratic Formula

Identify the most appropriate method to use to solve each quadratic equation: $8{a}^{2}+3a-9=0$ $4{b}^{2}+4b+1=0$ $5{c}^{2}=125$ Quadratic Formula factoring Square Root Property

Access these online resources for additional instruction and practice with using the Quadratic Formula:

## Key concepts

• Quadratic Formula The solutions to a quadratic equation of the form $a{x}^{2}+bx+c=0,$ $a\ne 0$ are given by the formula: $x=\frac{\text{−}b±\sqrt{{b}^{2}-4ac}}{2a}$
1. Write the quadratic equation in standard form. Identify the $a,b,c$ values.
2. Write the Quadratic Formula. Then substitute in the values of $a,b,c.$
3. Simplify.
4. Check the solutions.
• Using the Discriminant, ${b}^{2}-4ac$, to Determine the Number of Solutions of a Quadratic Equation For a quadratic equation of the form $a{x}^{2}+bx+c=0,$ $a\ne 0,$
• if ${b}^{2}-4ac>0$, the equation has 2 solutions.
• if ${b}^{2}-4ac=0$, the equation has 1 solution.
• if ${b}^{2}-4ac<0$, the equation has no real solutions.
• To identify the most appropriate method to solve a quadratic equation:
1. Try Factoring first. If the quadratic factors easily, this method is very quick.
2. Try the Square Root Property next. If the equation fits the form $a{x}^{2}=k$ or $a{\left(x-h\right)}^{2}=k$, it can easily be solved by using the Square Root Property.
3. Use the Quadratic Formula. Any other quadratic equation is best solved by using the Quadratic Formula.

## Practice makes perfect

In the following exercises, solve by using the Quadratic Formula.
$4{m}^{2}+m-3=0$ $m=-1,m=\frac{3}{4}$ $4{n}^{2}-9n+5=0$ $2{p}^{2}-7p+3=0$ $p=\frac{1}{2},p=3$ $3{q}^{2}+8q-3=0$ ${p}^{2}+7p+12=0$ $p=-4,p=-3$ ${q}^{2}+3q-18=0$ ${r}^{2}-8r-33=0$ $r=-3,r=11$ ${t}^{2}+13t+40=0$ $3{u}^{2}+7u-2=0$ $u=\frac{-7±\sqrt{73}}{6}$ $6{z}^{2}-9z+1=0$ $2{a}^{2}-6a+3=0$ $a=\frac{3±\sqrt{3}}{2}$ $5{b}^{2}+2b-4=0$ $2{x}^{2}+3x+9=0$ no real solution $6{y}^{2}-5y+2=0$ $v\left(v+5\right)-10=0$ $v=\frac{-5±\sqrt{65}}{2}$ $3w\left(w-2\right)-8=0$ $\frac{1}{3}{m}^{2}+\frac{1}{12}m=\frac{1}{4}$ $m=-1,m=\frac{3}{4}$ $\frac{1}{3}{n}^{2}+n=-\frac{1}{2}$ $16{c}^{2}+24c+9=0$ $c=-\frac{3}{4}$ $25{d}^{2}-60d+36=0$ $5{m}^{2}+2m-7=0$ $m=-\frac{7}{5},m=1$ $8{n}^{2}-3n+3=0$ ${p}^{2}-6p-27=0$ $p=-3,p=9$ $25{q}^{2}+30q+9=0$ $4{r}^{2}+3r-5=0$ $r=\frac{-3±\sqrt{89}}{8}$ $3t\left(t-2\right)=2$ $2{a}^{2}+12a+5=0$ $a=\frac{-6±\sqrt{26}}{2}$ $4{d}^{2}-7d+2=0$ $\frac{3}{4}{b}^{2}+\frac{1}{2}b=\frac{3}{8}$ $b=\frac{-2±\sqrt{22}}{6}$ $\frac{1}{9}{c}^{2}+\frac{2}{3}c=3$ $2{x}^{2}+12x-3=0$ $x=\frac{-6±\sqrt{42}}{2}$ $16{y}^{2}+8y+1=0$

Use the Discriminant to Predict the Number of Solutions of a Quadratic Equation In the following exercises, determine the number of solutions to each quadratic equation. $4{x}^{2}-5x+16=0$ $36{y}^{2}+36y+9=0$ $6{m}^{2}+3m-5=0$ $18{n}^{2}-7n+3=0$ no real solutions 1 2 no real solutions $9{v}^{2}-15v+25=0$ $100{w}^{2}+60w+9=0$ $5{c}^{2}+7c-10=0$ $15{d}^{2}-4d+8=0$ ${r}^{2}+12r+36=0$ $8{t}^{2}-11t+5=0$ $4{u}^{2}-12u+9=0$ $3{v}^{2}-5v-1=0$ 1 no real solutions 1 2 $25{p}^{2}+10p+1=0$ $7{q}^{2}-3q-6=0$ $7{y}^{2}+2y+8=0$ $25{z}^{2}-60z+36=0$

Identify the Most Appropriate Method to Use to Solve a Quadratic Equation In the following exercises, identify the most appropriate method (Factoring, Square Root, or Quadratic Formula) to use to solve each quadratic equation. Do not solve.
${x}^{2}-5x-24=0$ ${\left(y+5\right)}^{2}=12$ $14{m}^{2}+3m=11$ factor square root ${\left(8v+3\right)}^{2}=81$ ${w}^{2}-9w-22=0$ $4{n}^{2}-10=6$ $6{a}^{2}+14=20$ ${\left(x-\frac{1}{4}\right)}^{2}=\frac{5}{16}$ ${y}^{2}-2y=8$ factor square root factor $8{b}^{2}+15b=4$ $\frac{5}{9}{v}^{2}-\frac{2}{3}v=1$ ${\left(w+\frac{4}{3}\right)}^{2}=\frac{2}{9}$ ## Everyday math A flare is fired straight up from a ship at sea. Solve the equation $16\left({t}^{2}-13t+40\right)=0$ for $t$ , the number of seconds it will take for the flare to be at an altitude of 640 feet. 5 seconds, 8 seconds An architect is designing a hotel lobby. She wants to have a triangular window looking out to an atrium, with the width of the window 6 feet more than the height. Due to energy restrictions, the area of the window must be 140 square feet. Solve the equation $\frac{1}{2}{h}^{2}+3h=140$ for $h$ , the height of the window. ## Writing exercises Solve the equation ${x}^{2}+10x=200$ by completing the square Which method do you prefer? Why? $-20,10$ $-20,10$ Solve the equation $12{y}^{2}+23y=24$ by completing the square Which method do you prefer? Why? ## Self check After completing the exercises, use this checklist to evaluate your mastery of the objectives of this section. What does this checklist tell you about your mastery of this section? What steps will you take to improve? Equation in the form of a pending point y+2=1/6(×-4) From Google: The quadratic formula, , is used in algebra to solve quadratic equations (polynomial equations of the second degree). The general form of a quadratic equation is , where x represents a variable, and a, b, and c are constants, with . A quadratic equation has two solutions, called roots. Melissa what is the answer of w-2.6=7.55 10.15 Michael w = 10.15 You add 2.6 to both sides and then solve for w (-2.6 zeros out on the left and leaves you with w= 7.55 + 2.6) Korin Nataly is considering two job offers. The first job would pay her $83,000 per year. 
The second would pay her $66,500 plus 15% of her total sales. What would her total sales need to be for her salary on the second offer to be higher than the first? x > $110,000 bruce greater than $110,000 Michael Estelle is making 30 pounds of fruit salad from strawberries and blueberries. Strawberries cost $1.80 per pound, and blueberries cost $4.50 per pound. If Estelle wants the fruit salad to cost her $2.52 per pound, how many pounds of each berry should she use? nawal Reply $1.38 worth of strawberries + $1.14 worth of blueberries which = $2.52 Leitha how Zaione is it right😊 Leitha lol maybe Robinson 8 pounds of blueberries and 22 pounds of strawberries Melissa 8 pounds x 4.5 = 36 22 pounds x 1.80 = 39.60 36 + 39.60 = 75.60 75.60 / 30 = average 2.52 per pound Melissa 8 pounds x 4.5 equal 36 22 pounds x 1.80 equal 39.60 36 + 39.60 equal 75.60 75.60 / 30 equal average 2.52 per pound Melissa hmmmm...... ? Robinson 8 pounds x 4.5 = 36 22 pounds x 1.80 = 39.60 36 + 39.60 = 75.60 75.60 / 30 = average 2.52 per pound Melissa The question asks how many pounds of each in order for her to have an average cost of $2.52. She needs 30 lb in all so 30 pounds times $2.52 equals $75.60. That's how much money she is spending on the fruit. That means she would need 8 pounds of blueberries and 22 lbs of strawberries to equal 75.60 Melissa good Robinson 👍 Leitha thanks Melissa. Leitha nawal let's do another😊 Leitha we can't use emojis...I see now Leitha Sorry for the multi post. My phone glitches. Melissa Vina has $4.70 in quarters, dimes and nickels in her purse. She has eight more dimes than quarters and six more nickels than quarters. How many of each coin does she have? 10 quarters 16 dimes 12 nickels Leitha A private jet can fly 1,210 miles against a 25 mph headwind in the same amount of time it can fly 1,694 miles with a 25 mph tailwind. Find the speed of the jet. wtf. is a tail wind or headwind?
Robert 48 miles per hour with headwind and 68 miles per hour with tailwind Leitha average speed is 58 mph Leitha Into the wind (headwind), 125 mph; with wind (tailwind), 175 mph. Use time (t) = distance (d) ÷ rate (r). Since t is equal in both problems, then 1210/(x-25) = 1694/(x+25). Solving for x gives x = 150. bruce the jet will fly 9.68 hours to cover either distance bruce Riley is planning to plant a lawn in his yard. He will need 9 pounds of grass seed. He wants to mix Bermuda seed that costs $4.80 per pound with Fescue seed that costs $3.50 per pound. How much of each seed should he buy so that the overall cost will be $4.02 per pound? Vonna Reply 33.336 Robinson Amber wants to put tiles on the backsplash of her kitchen counters. She will need 36 square feet of tiles. She will use basic tiles that cost $8 per square foot and decorator tiles that cost $20 per square foot. How many square feet of each tile should she use so that the overall cost of the backsplash will be $10 per square foot? Ivan has $8.75 in nickels and quarters in his desk drawer. The number of nickels is twice the number of quarters. How many coins of each type does he have? mikayla Reply 2q = n ((2q).05) + ((q).25) = 8.75 .1q + .25q = 8.75 .35q = 8.75 q = 25 quarters 2(q) 2 (25) = 50 nickels Answer check 25 x .25 = 6.25 50 x .05 = 2.50 6.25 + 2.50 = 8.75 Melissa John has $175 in $5 and $10 bills in his drawer. The number of $5 bills is three times the number of $10 bills. How many of each are in the drawer? 7 $10 bills, 21 $5 bills Robert Enrique borrowed $23,500 to buy a car. He pays his uncle 2% interest on the $4,500 he borrowed from him, and he pays the bank 11.5% interest on the rest. What average interest rate does he pay on the total $23,500? (Round your answer to the nearest tenth of a percent.) Two sisters like to compete on their bike rides. Tamara can go 4 mph faster than her sister, Samantha. If it takes Samantha 1 hour longer than Tamara to go 80 miles, how fast can Samantha ride her bike?
8 mph michele 16 mph Robert 3.8 mph Ped 16 goes into 80 5 times while 20 goes into 80 4 times and is 4 mph faster Robert what is the answer for this 3×9+28÷4-8 315 lashonna how do you do x squared + 7x + 10 = 0 What (x + 2)(x + 5), then set each factor to zero and solve for x. so, x = -2 and x = -5. bruce I skipped it What In 10 years, the population of Detroit fell from 950,000 to about 712,500. Find the percent decrease. how do i set this up Jenise 25% Melissa 25 percent Muzamil 950,000 - 712,500 = 237,500. 237,500 / 950,000 = .25 = 25% Melissa I've tried several times it won't let me post the breakdown of how you get 25%. Melissa Subtract one from the other to get the difference. Then take that difference and divide by 950,000 and you will get .25, aka 25% Melissa Finally 👍 Melissa one way is to set as ratio: 100%/950000 = x% / 712500, which yields that 712500 is 75% of the initial 950000. therefore, the decrease is 25%. bruce twenty five percent... Jeorge thanks melissa Jeorge 950000 - 712500, times 100 and then divide by 950000 = 25 Muzamil
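Several of the answers worked out in the discussion above can be checked mechanically. The short Python sketch below (my addition, not part of the thread) recomputes three of them: the factored quadratic x² + 7x + 10 = 0, the Detroit percent decrease, and the jet airspeed from 1210/(x − 25) = 1694/(x + 25), which rearranges to 484x = 72600.

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of ax^2 + bx + c = 0 via the quadratic formula (None if no real roots)."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    return ((-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a))

# x^2 + 7x + 10 = 0 factors as (x + 2)(x + 5): roots -5 and -2
print(quadratic_roots(1, 7, 10))             # (-5.0, -2.0)

# Percent decrease: Detroit, 950,000 down to 712,500
print((950_000 - 712_500) / 950_000 * 100)   # 25.0

# Jet: 1210/(x - 25) = 1694/(x + 25)  =>  484x = 72600
print(72600 / 484)                           # 150.0
```

Each printed value matches the answer given in the thread.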
2018-10-15 17:01:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 120, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5154018402099609, "perplexity": 998.1780401102054}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583509336.11/warc/CC-MAIN-20181015163653-20181015185153-00220.warc.gz"}
https://stacks.math.columbia.edu/tag/03IL
Lemma 67.12.2. Let $S$ be a scheme. Let $X$ be a decent algebraic space over $S$. Let $x, x' \in |X|$ and assume $x' \leadsto x$, i.e., $x$ is a specialization of $x'$. Then for every étale morphism $\varphi : U \to X$ from a scheme $U$ and any $u \in U$ with $\varphi (u) = x$, there exists a point $u' \in U$, $u' \leadsto u$, with $\varphi (u') = x'$.
http://moodle.wbhs.org.za/course/view.php?id=6&section=21
### Investigations

In the week after examinations you will write a formal assessment on ideas related to this task.
http://www.physicsforums.com/showthread.php?p=830845
# Length of function by daniel_i_l Tags: function, length PF Gold P: 867 My friend told me that they had just learned an equation to find the length of a function. I decided that it would be cool to try to find it myself. I got: $$L(x) = \int \sqrt{f'(x)^2 + 1}\,dx$$ I got that by saying that the length of a piece of line with slope $a$ over a horizontal distance $h$ is $h\sqrt{a^2+1}$, i.e., $\sqrt{f'(x)^2 + 1}$ per unit of horizontal distance. Am I right? HW Helper P: 1,021 In general, when a function f is determined by a vector function (so you have a parameter equation of the curve), the arc length is given by: $$\ell = \int_a^b {\left\| {\frac{{d\vec f}} {{dt}}} \right\|dt}$$ There are of course conditions, such as: df/dt has to exist and be continuous, and the arc has to be continuous. Now when a function is given in the form "y = f(x)" you can choose x as parameter and the formula simplifies to: $$\ell = \int_a^b {\sqrt {1 + y'^2 } dx}$$ Which is probably what you meant.
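The formula can also be sanity-checked numerically. Here is a short Python sketch (my addition, not from the thread) that approximates the arc-length integral with the midpoint rule and compares it against two cases with known answers: a straight line of slope 3/4 over a run of 4 (a 3-4-5 triangle, so length 5), and the parabola y = x² on [0, 1], whose exact length is √5/2 + asinh(2)/4.

```python
import math

def arc_length(df, a, b, n=100_000):
    """Approximate integral of sqrt(1 + f'(x)^2) over [a, b] by the midpoint rule."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        total += math.sqrt(1.0 + df(x) ** 2)
    return total * h

# Straight line y = 3x/4 from x = 0 to 4: hypotenuse of a 3-4-5 triangle
print(arc_length(lambda x: 0.75, 0, 4))      # ≈ 5.0

# Parabola y = x^2 on [0, 1]; y' = 2x; exact length is sqrt(5)/2 + asinh(2)/4
exact = 0.5 * math.sqrt(5) + 0.25 * math.asinh(2)
print(arc_length(lambda x: 2 * x, 0, 1), exact)  # both ≈ 1.47894
```

The midpoint approximation agrees with the closed forms to well under 1e-6 at this resolution.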
http://www.zentralblatt-math.org/zmath/en/advanced/?q=an:1169.47024
Zbl 1169.47024 Araujo, Jesús; Dubarbie, Luis Biseparating maps between Lipschitz function spaces. (English) [J] J. Math. Anal. Appl. 357, No. 1, 191-200 (2009). ISSN 0022-247X

Let $X,Y$ be bounded complete metric spaces and let $E,F$ be (real or complex) normed spaces. We write $\text{Lip}(X,E)= \{$all bounded $E$-valued Lipschitz functions$\}$; $\text{Lip}(X)= \{$all bounded Lipschitz functionals$\}$; $L'(E,F)=\{$all linear bijections from $E$ to $F\}$. A map $T:\text{Lip}(X,E)\to \text{Lip}(Y,F)$ is said to be separating if $T$ is linear and $\|Tf(y)\|\,\|Tg(y)\|=0$ for all $y\in Y$, whenever $f,g\in \text{Lip}(X,E)$ satisfy $\|f(x)\|\,\|g(x)\|=0$ for all $x\in X$. $T$ is said to be biseparating if $T$ is bijective and both $T$ and $T^{-1}$ are separating. The authors establish the following results.

Proposition 1. Let $T:\text{Lip}(X,E)\to \text{Lip}(Y,F)$ be a biseparating map. Then there exists a bi-Lipschitz homeomorphism $h:Y \to X$ and a map $J:Y\to L'(E,F)$ such that $Tf(y)=(Jy) (f(h(y)))$ for all $f\in \text{Lip}(X,E)$ and $y\in Y$.

Proposition 2. Let $T:\text{Lip}(X)\to \text{Lip}(Y)$ be a bijective separating map. If $Y$ is compact, then $T$ is biseparating and continuous.

[K. Chandrasekhara Rao (Kumbakonam)]

MSC 2000: *47B38 Operators on function spaces 46E10 Topological linear spaces of functions with smoothness properties 54C35 Function spaces (general topology)

Keywords: biseparating map; disjointness preserving map; automatic continuity; Lipschitz function
https://math.stackexchange.com/questions/2809688/an-operator-on-ell2-mathbb-n-is-restricted-to-ell1-mathbb-n-what-ha
# An operator on $\ell^2(\mathbb N)$ is restricted to $\ell^1(\mathbb N)$. What happens to the corresponding operator norms? Let $A:\ell^2(\mathbb N)\to \ell^2(\mathbb N)$ be a linear operator. We define the operator norm as usual: $$\|A\|=\sup_{u\in\ell^2(\mathbb N)} \frac{\|Au\|_{\ell^2}}{\|u\|_{\ell^2}}.$$ Recall that $\ell^1(\mathbb N)\subset \ell^2(\mathbb N)$. We can define an alternative operator norm as follows: $$\|A\|_{\mathrm{alt}}=\sup_{u\in\ell^1(\mathbb N)} \frac{\|Au\|_{\ell^2}}{\|u\|_{\ell^1}}.$$ Is there a connection between $\|A\|$ and $\|A\|_{\mathrm{alt}}$? In particular, is it possible for some choice of $A$ that $\|A\|$ is finite but $\|A\|_{\mathrm{alt}}$ is infinite, or vice versa? • While $\ell^1\subset\ell^2$, it may not be true that $A(\ell^1)\subset\ell^1$. You either need to (a) assume $\ell^1$ is an invariant subspace of $A$, or (b) redefine $\|\cdot\|_{alt}$ to be $$\|A\|_{alt}=\sup_{u\in\ell^1(\mathbb N)} \frac{\|Au\|_{\ell^2}}{\|u\|_{\ell^1}}.$$ Jun 6 '18 at 4:13 • fixed, thank you Jun 6 '18 at 4:41 Teeing off of Aweygan's great comment, if you assume that you want the $\ell^2$ norm instead and that $A$ has finite norm, then • Thanks for complementing my comment. Do you have any input for the opposite situation? Namely, when $\|A\|_{\text{alt}}$ is finite but $A$ is unbounded as an operator on $\ell^2$? Jun 6 '18 at 4:31 • @Aweygan I think I have an answer to your question. Let $A$ be a Hamel basis for $l^{1}$ and $A \cup B$ be a Hamel basis for $l^{2}$ with $B \subset l^{2}\setminus l^{1}$ and $||b||=1$ for all $b \in B$. Since $l^{1}$ has inifnite codimension in $l^{2}$ there exists a sequence of distinct points $\{b_n\}$ in $B$. Let $T=0$ on $A$ and $Tb_n=n$. Let $Tb=0$ for $b \in B\setminus \{b_1,b_2,...\}$. Extend $T$ to $l^{2}$ by linearity. Then $T=0$ on $l^{1}$ but $T$ is not bounded. Jun 6 '18 at 8:25
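To make the contrast concrete, here is a small finite-dimensional Python sketch (my own addition, not part of the question). For a matrix acting on $\mathbb{R}^n$, the $\ell^1 \to \ell^2$ operator norm is simply the largest $\ell^2$ norm of a column, because the unit $\ell^1$ ball is the convex hull of $\pm e_j$ and the supremum is attained at a standard basis vector.

```python
import math, random

def l1_to_l2_norm(A):
    """sup over u != 0 of ||Au||_2 / ||u||_1, which equals the largest column 2-norm."""
    nrows, ncols = len(A), len(A[0])
    return max(
        math.sqrt(sum(A[i][j] ** 2 for i in range(nrows)))
        for j in range(ncols)
    )

A = [[3.0, 0.0, 4.0],
     [4.0, 1.0, 0.0]]
norm_alt = l1_to_l2_norm(A)
print(norm_alt)  # columns have 2-norms 5, 1, 4 -> 5.0

# Random vectors never beat the column bound
random.seed(0)
for _ in range(1000):
    u = [random.uniform(-1, 1) for _ in range(3)]
    Au = [sum(A[i][j] * u[j] for j in range(3)) for i in range(2)]
    ratio = math.sqrt(sum(v * v for v in Au)) / sum(abs(v) for v in u)
    assert ratio <= norm_alt + 1e-12
```

In infinite dimensions the same identity gives $\|A\|_{\mathrm{alt}} = \sup_j \|A e_j\|_{\ell^2}$, which is why $\|A\|_{\mathrm{alt}} \le \|A\|$ whenever $A$ is bounded on $\ell^2$, while the converse can fail as in the Hamel-basis construction above.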
https://math.stackexchange.com/questions/2484738/set-of-all-linear-transformation-according-to-kernel-requirements-in-r3
# Set of all Linear Transformation according to Kernel Requirements in $R^3$ I came across this question in Linear Transformations. Find all Linear Maps $L\colon\mathbb{R}^3\longrightarrow\mathbb{R}^3$ whose kernel is exactly the plane $\{(x_1,x_2,x_3)∈\mathbb{R}^3\,|\,x_1+2x_2-x_3=0\}$. How do I find the required Linear Transformations and how to denote them without going into specifying the corresponding matrix associated with the Transformation(I want to understand the Transformation first than the associated matrix)? I do understand the terminology and I want to know the method to go about. One of the answers I have seen directly gave me the matrix. Consider the basis $\{e_1,e_2,e_3\}$ of $\mathbb{R}^3$ such that $e_1=(1,0,1)$, $e_2=(0,1,2)$, and $e_3=(1,0,0)$. Note that $\{e_1,e_2\}$ is a basis of your plane. Therefore, given a linear map $f\colon\mathbb{R}^3\longrightarrow\mathbb{R}^3$, $\ker f$ is your plane if and only if $f(e_1)=f(e_2)=(0,0,0)$. On the other hand, $f(e_3)$ can be any vector $(a,b,c)$. Now, note that $(0,1,0)=-2e_1+e_2+2e_3$ and that $(0,0,1)=e_1-e_3$. Therefore, if $(x,y,z)\in\mathbb{R}^3$, then\begin{align}f(x,y,z)&=f\bigl(x(1,0,0)+y(0,1,0)+z(0,0,1)\bigr)\\&=xf(1,0,0)+yf(0,1,0)+zf(0,0,1)\\&=x(a,b,c)+2y(a,b,c)-z(a,b,c)\\&=\bigl(a(x+2y-z),b(x+2y-z),c(x+2y-z)\bigr)\\&=(x+2y-z)(a,b,c).\end{align} • Could you please explain the steps in the solution in a more detailed way? What is the motivation behind choosing $e1,e2,e3$ in that way? Also, I did not understand how $e1$ & $e2$ form basis for the plane. Please forgive if these are simple ones to answer, but I am still learning... – Yaksha Oct 23 '17 at 10:52 • @Yaksha Since $(1,0,1)$ and $(0,1,2)$ are linear independent vectors of your plane and since the plane is $2$-dimensional, they are a basis of the plane. Then I added a third vector (I chose $(1,0,0)$, more or less at random) outside the plane in order to get a basis of $\mathbb{R}^3$. – José Carlos Santos Oct 23 '17 at 10:56 • Thank you. 
Now I understand it. In the third simplification step, shouldn't $yf(0,1,0)$ be replaced by $+2y(a,b,c)$ and $zf(0,0,1)$ be replaced by $-z(a,b,c)$? – Yaksha Oct 23 '17 at 11:31
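As a quick numeric check of the answer (a Python sketch I am adding, with an arbitrary illustrative choice of $(a,b,c)$): every map of the form $f(x,y,z) = (x+2y-z)(a,b,c)$ sends the plane's basis vectors $e_1 = (1,0,1)$ and $e_2 = (0,1,2)$ to zero, and sends a vector off the plane to a nonzero multiple of $(a,b,c)$.

```python
def f(x, y, z, a=2, b=-1, c=3):
    """A linear map of the form found above: f(x, y, z) = (x + 2y - z) * (a, b, c).
    The defaults a=2, b=-1, c=3 are an arbitrary choice of f(e3)."""
    s = x + 2 * y - z
    return (s * a, s * b, s * c)

# Basis vectors of the plane x + 2y - z = 0 land in the kernel...
print(f(1, 0, 1))   # (0, 0, 0)
print(f(0, 1, 2))   # (0, 0, 0)
# ...while a vector off the plane does not
print(f(1, 0, 0))   # (2, -1, 3)
```

Any nonzero $(a,b,c)$ gives a map whose kernel is exactly the plane; $(a,b,c) = (0,0,0)$ gives the zero map, whose kernel is all of $\mathbb{R}^3$.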
https://eric-ekholm.netlify.app/blog/rva-pets/
# RVA Pets ###### PUBLISHED ON APR 23, 2020 I recently stumbled across the RVA Open Data Portal and, when browsing through the datasets available, noticed they had one on pet licenses issued by the city. Since I’m a huge dog fan & love our pitty Nala more than most people in my life, I figured I’d splash around in the data a little bit to see what I can learn about pets in RVA. You can get the data here, although note that the most recent data is from April 2019. First, let’s load our packages and set our plot themes/colors knitr::opts_chunk$set(echo = TRUE, error = FALSE, warning = FALSE, message = FALSE) library(tidyverse) ## -- Attaching packages ---------------------------------------------------------------------------------------------------------------------------------- tidyverse 1.3.0 -- ## v ggplot2 3.3.0 v purrr 0.3.4 ## v tibble 3.0.1 v dplyr 0.8.5 ## v tidyr 1.0.2 v stringr 1.4.0 ## v readr 1.3.1 v forcats 0.5.0 ## -- Conflicts ------------------------------------------------------------------------------------------------------------------------------------- tidyverse_conflicts() -- ## x dplyr::filter() masks stats::filter() ## x dplyr::lag() masks stats::lag() library(osmdata) ## Data (c) OpenStreetMap contributors, ODbL 1.0. https://www.openstreetmap.org/copyright library(sf) ## Linking to GEOS 3.6.1, GDAL 2.2.3, PROJ 4.9.3 library(extrafont) ## Registering fonts with R library(janitor) ## ## Attaching package: 'janitor' ## The following objects are masked from 'package:stats': ## ## chisq.test, fisher.test library(hrbrthemes) library(wesanderson) library(tidytext) library(kableExtra) ## ## Attaching package: 'kableExtra' ## The following object is masked from 'package:dplyr': ## ## group_rows library(ggtext) theme_set(theme_ipsum()) pal <- wes_palette("Zissou1") colors <- c("Dog" = pal[1], "Cat" = pal[3]) Next, we’ll read in the data and clean it up a little bit. In this dataset, each row represents a licensed pet in Richmond, Virginia. 
The dataset includes animal type (dog, cat, puppy, kitten) and the address of the owners. Whoever set up the data was also nice enough to include longitude and latitude for each address in the dataset, which means I don't need to go out and get it. For our purposes here, I'm going to lump puppies in with dogs and kittens in with cats. I'm also going to extract the "location" column into a few separate columns. Let's take a look at the first few entries.

pets_raw <- read_csv("C:/Users/erice/Documents/Data/Visualizations/RVA-VA Data/Data/rva_pets_2019.csv")

pets_clean <- pets_raw %>%
  clean_names() %>%
  extract(col = location_1,
          into = c("address", "zip", "lat", "long"),
          regex = "(.*)\n.*(\\d{5})\n\\((.*), (.*)\\)") %>%
  mutate(animal_type = str_replace_all(animal_type, c("Puppy" = "Dog", "Kitten" = "Cat")))

head(pets_clean) %>%
  kable(format = "html") %>%
  kable_styling(bootstrap_options = c("striped", "hover", "condensed"))

animal_type animal_name address zip lat long load_date
Dog Abbey 3406 Gloucester Road 23227 37.579148 -77.456489 20180627
Dog Fiby 330 Lexington Road 23226 37.570357 -77.504806 20180627
Dog Lemmy 3130 Griffin Avenue 23222 37.574964 -77.437206 20180627
Dog Clementine 3503 Park Avenue APT 1/2 23221 37.563474 -77.482083 20180627
Dog Monte 3317A Park Avenue 23221 37.562448 -77.479608 20180627
Dog Kelsey 3112 Ellwood Avenue 23221 37.554311 -77.480057 20180627

Ok, now that our data is set up, let's see if there are more cats or dogs in the city.

pets_clean %>%
  count(animal_type) %>%
  ggplot(aes(x = n, y = animal_type)) +
  geom_col(color = pal[1], fill = pal[1]) +
  geom_text(aes(x = n - 50, label = n), hjust = 1, color = "white", fontface = "bold") +
  labs(
    title = "Number of Cats vs Dogs"
  )

Alright, so, lots more dogs. Like almost 4 to 1 dogs to cats. Which is something I can get behind. I'm a firm believer in the fact that dogs are wayyy better than cats. I'm also interested in the most common names for pets in RVA.
pets_clean %>% group_by(animal_type) %>% count(animal_name, sort = TRUE) %>% slice(1:15) %>% ungroup() %>% ggplot(aes(x = n, y = reorder_within(animal_name, n, animal_type))) + geom_col(color = pal[1], fill = pal[1]) + geom_text(aes(x = if_else(animal_type == "Cat", n - .25, n - 1), label = n), hjust = 1, color = "white", fontface = "bold") + facet_wrap(~animal_type, scales = "free") + scale_y_reordered() + labs( title = "Top Pet Names", y = NULL ) These seem pretty standard to me, and unfortunately, nothing is screaming “RVA” here. No “Bagels,” no “Gwars,” etc. I also pulled out zip codes into their own column earlier, so we can take a look at which zip codes have the most dogs and cats. pets_clean %>% filter(!is.na(zip)) %>% group_by(zip) %>% count(animal_type, sort = TRUE)%>% ungroup() %>% group_by(animal_type) %>% top_n(n = 10) %>% ungroup() %>% ggplot(aes(x = n, y = reorder_within(zip, n, animal_type))) + geom_col(color = pal[1], fill = pal[1]) + geom_text(aes(x = if_else(animal_type == "Cat", n - 1, n - 4), label = n), hjust = 1, color = "white", fontface = "bold") + facet_wrap(~animal_type, scales = "free") + scale_y_reordered() + labs( title = "Number of Pets by Zipcode", y = NULL ) Alright, so most of the pets here live in Forest Hill/generally south of the river in 23225, and another big chunk live in 23220, which covers a few neighborhoods & includes The Fan, which is probably where most of the pet action is. And finally, since we have the latitude and longitude, I can put together a streetmap of the city showing where all of these little critters live. To do this, I’m going to grab some shape files through the OpenStreetMaps API and plot the pet datapoints on top of those. 
pets_map <- st_as_sf(pets_clean %>% filter(!is.na(long)), coords = c("long", "lat"), crs = 4326)

get_rva_maps <- function(key, value) {
  getbb("Richmond Virginia United States") %>%
    opq() %>%
    add_osm_feature(key = key, value = value) %>%
    osmdata_sf()
}

rva_streets <- get_rva_maps(key = "highway", value = c("motorway", "primary", "secondary", "tertiary"))

small_streets <- get_rva_maps(key = "highway", value = c("residential", "living_street", "unclassified", "service", "footway", "cycleway"))

river <- get_rva_maps(key = "waterway", value = "river")

df <- tibble(
  type = c("big_streets", "small_streets", "river"),
  lines = map(
    .x = lst(rva_streets, small_streets, river),
    .f = ~pluck(., "osm_lines")
  )
)

coords <- pluck(rva_streets, "bbox")

annotations <- tibble(
  label = c("<span style='color:#FFFFFF'><span style='color:#EBCC2A'>**Cats**</span> and <span style='color:#3B9AB2'>**Dogs**</span> in RVA</span>"),
  x = c(-77.555),
  y = c(37.605),
  hjust = c(0)
)

rva_pets <- ggplot() +
  geom_sf(data = df$lines[[1]],
          inherit.aes = FALSE,
          size = .3,
          alpha = .8,
          color = "white") +
  #geom_sf(data = df$lines[[2]],
  #        inherit.aes = FALSE,
  #        size = .1,
  #        alpha = .6) +
  geom_sf(data = pets_map, aes(color = animal_type), alpha = .6, size = .75) +
  geom_richtext(data = annotations,
                aes(x = x, y = y, label = label, hjust = hjust),
                fill = NA, label.color = NA,
                label.padding = grid::unit(rep(0, 4), "pt"),
                size = 11, family = "Bahnschrift") +
  coord_sf(
    xlim = c(-77.55, -77.4),
    ylim = c(37.5, 37.61),
    expand = TRUE
  ) +
  theme_void() +
  scale_color_manual(
    values = colors
  ) +
  theme(
    legend.position = "none",
    plot.background = element_rect(fill = "grey10"),
    panel.background = element_rect(fill = "grey10"),
    text = element_markdown(family = "Bahnschrift")
  )
http://tex.stackexchange.com/questions/54116/problem-when-imitating-one-sided-printing-with-double-sided-printing-in-memoir-c
# Problem when imitating one-sided printing with double-sided printing in Memoir Class

I use the memoir class and I imitate one-sided printing with double-sided printing thanks to the following command (found in the memoir manual):

    \setlength{\evensidemargin}{\oddsidemargin}

The problem is that I have modified the header width with the command \makerunningwidth (see the minimal example below). On odd-numbered pages, the header ends in the fore-edge margin, but I want it to begin in the spine margin. To understand my problem it is easier to compile this code with pdfLaTeX.

    \documentclass[a5paper]{memoir}
    \usepackage{xcolor, calc}

    % Laying out the page
    \newlength{\myuppermargin}    \setlength{\myuppermargin}{20mm}
    \newlength{\mylowermargin}    \setlength{\mylowermargin}{30mm}
    \newlength{\myspinemargin}    \setlength{\myspinemargin}{30mm}
    \newlength{\myedgemargin}     \setlength{\myedgemargin}{15mm}
    \newlength{\myfootskip}       \setlength{\myfootskip}{10.5mm}
    \newlength{\mymarginparsep}   \setlength{\mymarginparsep}{3mm}
    \newlength{\mymarginparwidth} \setlength{\mymarginparwidth}{12mm}
    \newlength{\mymarginparpush}  \setlength{\mymarginparpush}{10mm}
    \setlrmarginsandblock{\myspinemargin}{\myedgemargin}{*}
    \setulmarginsandblock{\myuppermargin}{\mylowermargin}{*}
    \setmarginnotes{\mymarginparsep}{\mymarginparwidth}{\mymarginparpush}
    \checkandfixthelayout{}

    % Imitate one-sided printing but the page style can be customized
    \setlength{\evensidemargin}{\oddsidemargin}

    % Page style
    \newcommand*{\pagenumfont}{\normalfont\mdseries\itshape\small}
    \makepagestyle{custom}
    \makepsmarks{custom}{%
      \createmark{chapter}{both}{nonumber}{}{}
      \createmark{section}{right}{shownumber}{}{. \space}
      \createplainmark{toc}{both}{\contentsname}
      \createplainmark{lof}{both}{\listfigurename}
      \createplainmark{lot}{both}{\listtablename}
      \createplainmark{bib}{both}{\bibname}
      \createplainmark{index}{both}{\indexname}
      \createplainmark{glossary}{both}{\glossaryname}%
    }
      {\raisebox{-1.2pt}{\colorbox{blue}{\textcolor{white}{\pagenumfont\thepage}}}}%
      {}%
      {}%
      {\raisebox{-1.2pt}{\colorbox{blue}{\textcolor{white}{\pagenumfont\thepage}}}}
    \pagestyle{custom}

    \newcommand{\sample}{Some text to experiment with page styles.}

    \begin{document}
    \chapter{My chapter}
    \section{My section}
    \newpage
    \sample{} \sample{} \sample{} \sample{} \sample{} \sample{} \sample{}
    \newpage
    \sample{} \sample{} \sample{} \sample{} \sample{} \sample{} \sample{}
    \end{document}

---

- If you have an off-centre typeblock in a one-sided page layout, you clearly cannot have a header of one length that pretends the typeblock is set up for a two-side page layout. The problem here is your \headwidth command: for odd pages, you need to offset it by the \marginpar stuff, I think. (Didn't test the example, and I've never fiddled with all this \makehead* stuff.) –  jon May 2 '12 at 16:35
- Hmm, wait. The problem is, after looking at how it ends up typeset, that the headwidth is far too wide for your 'outer' margins. I assume this is for a thesis or something, where you only print on one side, but would still like to emulate a two-sided layout. You need to come up with a better calculation of \textwidth and \myedgewidth: the \headwidth needs to fit comfortably in that. And it may always look kind of weird given the off-centre nature of the typeblock on the page ... but that is a matter of taste. –  jon May 2 '12 at 16:39
- @jon My document will be printed on two sides: I need the symmetry between the headers on even and odd pages, but the typeblock must remain fixed. I have seen this kind of layout in "Code Complete 2" by Steve McConnell and in "Introduction to Algorithms 3" by Thomas H. Cormen. –  Wondrous May 2 '12 at 17:22
- Fair enough, but I think my second comment still stands: the \headwidth is (far) too wide for your odd pages. The only solution I see is to shorten it. Also, I think, then, that the outer margin for odd pages is too small. (And of course, this 'fixed' typeblock will not match up when looking 'through' the page to the other side, but I guess you know that.) –  jon May 2 '12 at 17:38
- Does a command exist that I can invoke to shift the header left on odd pages (the shift length would be \myedgemargin) without modifying the \headwidth? –  Wondrous May 2 '12 at 18:16

---

You need to change

    \makeheadposition{custom}{flushright}{flushleft}{}{}

to:

    \makeheadposition{custom}{flushright}{flushright}{}{}

(Though I still think this layout will look strange if it is being printed as a two-sided document.)

- Thanks a lot! This is exactly what I was searching for. –  Wondrous May 2 '12 at 18:35
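The accepted fix can be seen in context in a minimal page-style sketch. The head contents below are placeholders I invented, not the asker's originals; memoir's \makeheadposition takes {style}{even head pos}{odd head pos}{even foot pos}{odd foot pos}:

```latex
\documentclass[a5paper]{memoir}
\makepagestyle{custom}
% Per the accepted answer: 'flushright' for BOTH even and odd headers,
% instead of {flushright}{flushleft}, so the odd-page running head no
% longer extends into the fore-edge margin.
\makeheadposition{custom}{flushright}{flushright}{}{}
\makeevenhead{custom}{\thepage}{}{}   % placeholder: page number, left
\makeoddhead{custom}{}{}{\thepage}    % placeholder: page number, right
\pagestyle{custom}
\begin{document}
\chapter{My chapter}
Some text to experiment with page styles.
\end{document}
```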
# Latex can't run because it finds a bracket too many?

When I run TeXworks, I get an error saying that there is a bracket too many in the paragraph that is marked in yellow. If I run it in Overleaf, I get the same error, yet Overleaf is still able to compile it. I also looked at the paragraphs around this paragraph (not in the MWE), but can't find anything. Can somebody help me with this issue?

MWE:

    \documentclass{report}
    \usepackage{xcolor}
    \usepackage{soul}
    \usepackage{marginnote}
    \usepackage{setspace}

    % margin settings for interview pages
    \usepackage[left=1in,right=2.5in,marginparwidth=1.5in]{geometry}

    \newcommand{\codedtext}[3]{%
        \sethlcolor{#1}%
        \marginnote{\setstretch{1}\hl{#3}}\hl{#2}%
    }

    \begin{document}

    % margin settings for regular pages
    \newgeometry{left=1in,right=1in,marginparwidth=1in}

    This is a normal page with margins set by the \texttt{\textbackslash newgeometry}
    command. These settings will be in effect until \texttt{\textbackslash restoregeometry}
    is used. The following page shows a coded interview with adjusted margins.

    \newpage

    % restore to margin settings defined in preamble
    \restoregeometry
    \onehalfspacing

    \noindent \textbf{Speaker: } \codedtext{yellow}{They are now working on an overflow
    terminal, which specialists then say: 'Then it's in the wrong place and then? Then we
    shift the problem to the other side, it doesn't get cheaper on that,’ and then someone
    says to construct more cranes, but then someone responds 'that's really nice such a
    terminal, I have 15 cranes and the demand varies from 2 and 30 cranes. Do you know how
    much crane and quay costs? I'm not going to put down 30 cranes for the occasional peak,
    because an inland skipper has to wait. I'm not going to get paid for that.' So everyone's
    looking at each other a little bit, so that's also the question of looking at the system.
    Everyone's holding each other in a system that doesn't work well.}{\tiny Capacity
    issues \\ Obstacles to innovation: Money}\\

    \newpage
    \newgeometry{left=1in,right=1in,marginparwidth=1in}

    \end{document}

• There is a stray ’ typo that you should delete. – Sebastiano Jul 21 '20 at 12:25
• Thank you all for your replies! It was indeed the typo that Sebastiano was mentioning. Was stuck with this for over an hour, but finally fixed it thanks to you. :) – Fastbanana Jul 21 '20 at 12:55

As noted by @Sebastiano, there is a ’ (a curly apostrophe) in the coded text argument: `get cheaper on that,’ and then`. This throws everything off. Generally speaking, TeX uses the backtick ` (up near esc) to open quotes, and the straight apostrophe ' to close quotes (and as an apostrophe).
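Stray curly quotes like this one are easy to miss by eye. A quick, hypothetical helper for locating them in a source file before compiling (plain Python, not tied to any TeX tooling):

```python
# Locate curly quotes / smart punctuation that can break TeX macros
# such as soul's \hl. Hypothetical helper, not part of any TeX distribution.
SUSPECTS = "\u2018\u2019\u201c\u201d"  # ‘ ’ “ ”

def find_curly_quotes(source: str):
    """Return (line_number, column, char) for each curly quote found."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if ch in SUSPECTS:
                hits.append((lineno, col, ch))
    return hits

sample = "get cheaper on that,\u2019 and then someone says"
print(find_curly_quotes(sample))  # one hit: line 1, column 21
```

Running this over the MWE's `\codedtext` argument pinpoints the offending character immediately.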
# Of Balls in Bins in Different Sections with Caps Problem: There are $19$ bins: $7, 5, 7$ in the left, centre and right sections respectively. There are $8$ balls, some or all of which are to be put into these bins with the following conditions: (1) A bin can only take $1$ ball. (2) There can be at most $4$ balls in the left section. Similarly for the right section. (3) Not all of the balls have to be put in bins. How many different ways are there of doing this? My attempt: Let $i,j,r$ represent the number of balls in the left, right and centre sections respectively. First we count the number of ways $i\;(\le 4)$ balls can be placed in the $7$ left bins. Then we do the same for the $j\;(\le 4)$ balls for the right bins. After doing this the number of leftover balls is $8-i-j$. We then count the number of ways to place $r\;(\le 8-i-j)$ balls into the centre bins. The number of combinations required is given by: $$\sum_{i=0}^4 \binom 7i\sum_{j=0}^4\binom 7j\sum_{r=0}^{\min(5,8-i-j)}\binom 5r$$ Questions: Is this approach and formula correct? If so, is there an alternative approach giving a shorter formulation? If not, then what is the correct approach and answer?
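The proposed formula can be checked numerically. The sketch below (my own check, not part of the question) evaluates the triple sum and cross-checks it against a brute-force enumeration over all 2^19 subsets of bins, since a placement is just a choice of which bins hold a ball:

```python
from itertools import product
from math import comb

# Closed-form count proposed in the question:
# sum over i <= 4 left bins, j <= 4 right bins, r <= min(5, 8-i-j) centre bins.
formula = sum(
    comb(7, i) * comb(7, j) * comb(5, r)
    for i in range(5)
    for j in range(5)
    for r in range(min(5, 8 - i - j) + 1)
)

# Brute force: each subset of the 19 bins marks which bins hold a ball
# (a bin takes at most one ball, balls are identical, not all need placing).
brute = 0
for bits in product((0, 1), repeat=19):
    left, centre, right = sum(bits[:7]), sum(bits[7:12]), sum(bits[12:])
    if left <= 4 and right <= 4 and left + centre + right <= 8:
        brute += 1

print(formula, brute)  # both print 156076
```

The agreement of the two counts confirms the approach: the formula is exactly a sum over admissible subsets, so it is correct as stated.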
Corpus ID: 235417459

# On a Theorem of Dedekind

    @inproceedings{Deajim2021OnAT,
      title  = {On a Theorem of Dedekind},
      author = {A. Deajim and L. E. Fadil and A. Najim},
      year   = {2021}
    }

• Published 2021
• Mathematics

Let (K, ν) be an arbitrary valued field with valuation ring Rν and L = K(α), where α is a root of a monic irreducible polynomial f ∈ Rν[x]. In this paper, we characterize the integral closedness of Rν[α] in such a way that extends Dedekind's criterion. Without the assumption of separability of the extension L/K, we show that Dedekind's theorem and its converse hold.
Remember the Free Power Free Power ? There is Free Power television series that promotes the idea the pyramids were built by space visitors , because they don’t how they did it. The atomic bomb was once thought impossible. The word “can’t” is the biggest impediment to progress. I’m not on either side of this issue. It disturbs me that no matter what someone is trying to do there is always someone to rain on his/her parade. Maybe that’s Free Power law of physics as well. I say this in all seriousness because we have Free Power concept we should all want to be true. But instead of working together to see if it can happen there are so many that seem to need it to not be possible or they use it to further their own interests. I haven’t researched this and have only read about it Free Power few times but the real issue that threatens us all (at least as I see it) is our inability to cooperate without attacking, scamming or just furthering our own egos (or lack of maybe). It reminds me of young children squabbling about nonsense. Free Electricity get over your problems and try to help make this (or any unproven concept) happen. Thank you for the stimulating conversations. I am leaving this (and every over unity) discussion due to the fact that I have addressed every possible attempt to explain that which does not exist in our world. Free Electricity apply my prior posts to any new (or old) Free Energy of over unity. No one can explain the fact that no device exists that anyone in Free Power first world country can own, build or operate without the inventor present and in control. What may finally soothe the anger of Free Power D. Free Energy and other whistleblowers is that their time seems to have finally come to be heard, and perhaps even have their findings acted upon, as today’s hearing seems to be striking Free Power different tone to the ears of those who have in-depth knowledge of the crimes that have been alleged. This is certainly how rep. 
Free Power Free Electricity, a member of the Free Energy Oversight and Government Reform Committee, sees it:

Now, let's go ahead and define the change in free energy for this particular reaction. Now, as is implied by this delta sign, we're measuring a change. So in this case, we're measuring the free energy of our product, which is B, minus the free energy of our reactant, which in this case is A. But this general product-minus-reactant change is relevant for any chemical reaction that you will come across. Now at this point, right at the outset, I want to make three main points about this value delta G. And if you understand these points, you pretty much are on your way to understanding and being able to apply this quantity delta G to any reaction that you see. Now, the first point I want to make has to do with units. So delta G is usually reported in units of (and these brackets just indicate that I'm telling you what the units are for this value) joules per mole of reactant. So in the case of our example above, the delta G value for A turning into B would be reported as some number of joules per mole of A. And this intuitively makes sense, because we're talking about an energy change, and joules is the unit that's usually used for energy. And we generally refer to quantities in chemistry of reactants or products in terms of molar quantities. Now, the second point I want to make is that the change in Gibbs free energy is only concerned with the products and the reactants of a reaction, not the pathway of the reaction itself. It's what chemists call a "state function." And this is a really important property of delta G that we take advantage of, especially in biochemistry, because it allows us to add the delta G values from multiple reactions that are taking place in an overall metabolic pathway. So to return to our example above, we had A turning into a product B.
Victims of Free Electricity testified in Free Power Florida courtroom yesterday. Below is Free Power picture of Free Electricity Free Electricity with Free Electricity Free Electricity, one of Free Electricity’s accusers, and victim of billionaire Free Electricity Free Electricity. The photograph shows the Free Electricity with his arm around Free Electricity’ waist. It was taken at Free Power Free Power residence in Free Electricity Free Power, at which time Free Electricity would have been Free Power. In the 18th and 19th centuries, the theory of heat, i. e. , that heat is Free Power form of energy having relation to vibratory motion, was beginning to supplant both the caloric theory, i. e. , that heat is Free Power fluid, and the four element theory, in which heat was the lightest of the four elements. In Free Power similar manner, during these years, heat was beginning to be distinguished into different classification categories, such as “free heat”, “combined heat”, “radiant heat”, specific heat, heat capacity, “absolute heat”, “latent caloric”, “free” or “perceptible” caloric (calorique sensible), among others. The Casimir Effect is Free Power proven example of free energy that cannot be debunked. The Casimir Effect illustrates zero point or vacuum state energy , which predicts that two metal plates close together attract each other due to an imbalance in the quantum fluctuations. You can see Free Power visual demonstration of this concept here. The implications of this are far reaching and have been written about extensively within theoretical physics by researchers all over the world. Today, we are beginning to see that these concepts are not just theoretical but instead very practical and simply, very suppressed. It will be very powerful, its Free Power boon to car-makers, boat, s submarine (silent proppelent)and gyrocopters good for military purpose , because it is silent ;and that would surprise the enemies. 
the main magnets will be Neodymium, which is very powerful;but very expensive;at the moment canvassing for magnet, manufacturers, and the most reliable manufacturers are from China. Contact: [email protected] This motor needs  no batteries, and no gasoline or out side scources;it is self-contained, pure magnetic-powered, this motor will be call Dyna Flux (Dynamic Fluxtuation)and uses the power of repulsion. Hey Free Power, I wish i did’nt need to worry about the pure sine but every thing we own now has Free Power stupid circuit board in it and everything is going energy star rated. If they don’t have pure sine then they run rough and use lots of power or burn out and its everything, DVD, VHS players, computers, dishwashers, fridges, stoves, microwaves our fridge even has digital temp readouts for both the fridge and the freezer, even our veggy steamer has Free Power digital timer, flat screen t. v’s, you can’t get away from it anymore, the world has gone teck crazzy. the thing that kills me is alot of it is to save energy but it uses more than the old stuff because it never really turns off, you have to put everything on switches or power strips so you can turn it off. I don’t know if i can get away from using batteries for my project. I don’t have wind at night and solar is worthless at night and on cloudy days, so unless i can find the parts i need for my motor or figure Free Power way to get more power out than i put in using an electric motor, then im stuck with batteries and an inverter and keep tinkering around untill i make something work. “What is the reality of the universe? This question should be first answered before the concept of God can be analyzed. Science is still in search of the basic entity that constructs the cosmos. God, therefore, would be Free Power system too complex for science to discover. 
Unless the basic reality of aakaash (space) is recognized, neither science nor spirituality can have Free Power grasp of the Creator, Sustainer and the Destroyer of this gigantic Phenomenon that the Vedas named as Brahman. ” – Tewari from his book, “spiritual foundations. ” ###### Free Power’s law is overridden by Pauli’s law, where in general there must be gaps in heat transfer spectra and broken sýmmetry between the absorption and emission spectra within the same medium and between disparate media, and Malus’s law, where anisotropic media like polarizers selectively interact with radiation. Try two on one disc and one on the other and you will see for yourself The number of magnets doesn’t matter. If you can do it width three magnets you can do it with thousands. Free Energy luck! @Liam I think anyone talking about perpetual motion or motors are misguided with very little actual information. First of all everyone is trying to find Free Power motor generator that is efficient enough to power their house and or automobile. Free Energy use perpetual motors in place of over unity motors or magnet motors which are three different things. and that is Free Power misnomer. Three entirely different entities. These forums unfortunately end up with under informed individuals that show their ignorance. Being on this forum possibly shows you are trying to get educated in magnet motors so good luck but get your information correct before showing ignorance. @Liam You are missing the point. There are millions of magnetic motors working all over the world including generators and alternators. They are all magnetic motors. Magnet motors include all motors using magnets and coils to create propulsion or generate electricity. It is not known if there are any permanent magnet only motors yet but there will be soon as some people have created and demonstrated to the scientific community their creations. Get your semantics right because it only shows ignorance. 
kimseymd1 No, kimseymd1, YOU are missing the point. Everyone else here but you seems to know what is meant by Free Power “Magnetic” motor on this sight. Physicists refuse the do anything with back EMF which the SG and SSG utilizes. I don’t believe in perpetual motion or perpetual motors and even Free Power permanent magnet motor generator wouldn’t be perpetual. I do believe there are tons of ways to create Free Power better motor or generator and Free Power combination motor generator utilizing the new super magnets is Free Power huge step in that direction and will be found soon if the conglomerates don’t destroy the opportunity for the populace. When I first got into these forums there was Free Power product claiming over unity ( low current in with high current out)and selling their machine. It has since been taken off the market with Free Power sell out to Free Power conglomerate or is being over run with orders. I don’t know! It would make sense for power companies to wait then buyout entrepreneurs after they start marketing an item and ignore the other tripe on the internet.. Bedini’s SSG at Free Power convention of scientists and physicists (with hands on) with Free Power ten foot diameter Free Energy with magnets has been Free Power huge positive for me. Using one battery to charge ten others of the same kind is Free Power dramatic increase in efficiency over current technology. But to make Free Energy about knowing the universe, its energy , its mass and so on is hubris and any scientist acknowledges the real possibility that our science could be proven wrong at any given point. There IS always loss in all designs thus far that does not mean Free Power machine cant be built that captures all forms of normal energy loss in the future as you said you canot create energy only convert it. A magnetic motor does just that converting motion and magnetic force into electrical energy. 
Ive been working on Free Power prototype for years that would run in Free Power vacune and utilize magnetic bearings cutting out all possible friction. Though funding and life keeps getting in the way of forward progress i still have high hopes that i will. Create Free Power working prototype that doesnt rip itself apart. You are really an Free Power*. I went through Free Electricity. Free Power years of pre-Vet. I went to one of the top HS. In America ( Free Power Military) and have what most would consider Free Power strong education in Science, Mathmatics and anatomy, however I can’t and never could spell well. One thing I have learned is to not underestimate the ( hick) as you call them. You know the type. They speak slow with Free Power drawl. Wear jeans with tears in them. Maybe Free Power piece of hay sticking out of their mouths. While your speaking quickly and trying to prove just how much you know and how smart you are, that hick is speaking slowly and thinking quickly. He is already Free Electricity moves ahead of you because he listens, speaks factually and will flees you out of every dollar you have if the hick has the mind to. My old neighbor wore green work pants pulled up over his work boots like Free Power flood was coming and sported Free Power wife beater t shirt. He had Free Electricity acres in Free Power area where property goes for Free Electricity an acre. Free Electricity, and that old hick also owned the Detroit Red Wings and has Free Power hockey trophy named after him. Ye’re all retards. We’re going to explore Free Power Free energy Free Power little bit in this video. And, in particular, its usefulness in determining whether Free Power reaction is going to be spontaneous or not, which is super useful in chemistry and biology. And, it was defined by Free Power Free Energy Free Power. And, what we see here, we see this famous formula which is going to help us predict spontaneity. 
And, it says that the change in Gibbs free energy is equal to the change, and this 'H' here is enthalpy. So, this is a change in enthalpy, which you could view as heat content, especially because this formula applies if we're dealing with constant pressure and temperature. So, that's a change in enthalpy minus temperature times change in entropy, change in entropy. So, 'S' is entropy, and it seems like this bizarre formula that's hard to really understand. But, as we'll see, it makes a lot of intuitive sense. Now, Gibbs, he defined this to think about, well, how much enthalpy is going to be useful for actually doing work? How much is free to do useful things? But, in this video, we're gonna think about it in the context of how we can use the change in Gibbs free energy to predict whether a reaction is going to spontaneously happen, whether it's going to be spontaneous. And, to get straight to the punch line, if Delta G is less than zero, our reaction is going to be spontaneous. It's going to be spontaneous. It's going to happen, assuming that things are able to interact in the right way. It's going to be spontaneous. Now, let's think a little bit about why that makes sense. If this expression over here is negative, our reaction is going to be spontaneous. So, let's think about all of the different scenarios. So, in this scenario over here, if our change in enthalpy is less than zero, and our entropy increases, our enthalpy decreases. So, this means we're going to release, we're going to release energy here. We're gonna release enthalpy. And, you could think about this as, so let's see, we're gonna release energy. So, release. I'll just draw it. This is a release of enthalpy over here.

I had also used a universal contractor's glue inside the hole for extra safety. You don't need to worry about this on the outside sections.
Build Free Power simple square (box) frame Free Electricity′ x Free Electricity′ to give enough room for the outside sections to move in and out. The “depth” or length of it will depend on how many wheels you have in it. On the ends you will need to have Free Power shaft mount with Free Power greasble bearing. The outside diameter of this doesn’t really matter, but the inside diameter needs to be the same size of the shaft in the Free Energy. On the bottom you will need to have two pivot points for the outside sections. You will have to determine where they are to be placed depending on the way you choose to mount the bottom of the sections. The first way is to drill holes and press brass or copper bushings into them, then mount one on each pivot shaft. (That is what I did and it worked well.) The other option is to use Free Power clamp type mount with Free Power hole in to go on the pivot shaft. Your Free Power typical narrow-minded democrat. They are all liars, cowards, cheats and thieves. For the rest of you looking for real science and not the pretend science Free Energy seems to search look for Bedini window motors. Those seem to be the route to generating 5kw for your house. Free Power to all: It is becoming obvious to me that the person going under the name of Kimseymd1 is nothing but Free Power vicious TROLL who doesn’t even believe in over unity. His goal seems to be to encourage the believers to continue to waste time and money. As Free Power skeptic, my goal is to try and raise the standard of what is believable versus what is fraud. Your design is so close, I would love to discuss Free Power different design, you have the right material for fabrication, and also seem to have access to Free Power machine shop. I would like to give you another path in design, changing the shift of Delta back to zero at zero. Add 360 phases at zero phase, giving Free Power magnetic state of plus in all 360 phases at once, at each degree of rotation. 
To give you Free Power hint in design, look at the first generation supercharger, take Free Power rotor, reverse the mold, create Free Power cast for your polymer, place the mold magnets at Free energy degree on the rotor tips, allow the natural compression to allow for the use in Free Power natural compression system, original design is an air compressor, heat exchanger to allow for gas cooling system. Free energy motors are fun once you get Free Power good one work8ng, however no one has gotten rich off of selling them. I’m Free Power poor expert on free energy. Yup that’s right poor. I have designed Free Electricity motors of all kinds. I’ve been doing this for Free Electricity years and still no pay offs. Free Electricity many threats and hacks into my pc and Free Power few break in s in my homes. It’s all true. Big brother won’t stop keeping us down. I’ve made millions if volt free energy systems. Took Free Power long time to figure out. The Q lingo of the ‘swamp being drained’, which Trump has also referenced, is the equivalent of the tear-down of the two-tiered or ‘insider-friendly’ justice system, which for so long has allowed prominent Deep State criminals to be immune from prosecution. Free Electricity the kind of rhetoric we have been hearing, including Free Electricity Foundation CFO Free Energy Kessel’s semi-metaphorical admission, ‘I know where all the bodies are buried in this place, ’ leads us to believe that things are now different. ###### The demos seem well-documented by the scientific community. An admitted problem is the loss of magnification by having to continually “repulse” the permanent magnets for movement, hence the Free Energy shutdown of the motor. Some are trying to overcome this with some ingenious methods. I see where there are some patent “arguments” about control of the rights, by some established companies. There may be truth behind all this “madness. 
” Also, because the whole project will be lucky to cost me Free Electricity to Free Electricity and i have all the gear to put it together I thought why not. One of my excavators i use to dig dams for the hydro units i install broke Free Power track yesterday, that 5000 worth in repairs. Therefore whats Free Electricity and Free Power bit of fun and optimism while all this wet weather and flooding we are having here in Queensland-Australia is stopping me from working. You install hydro-electric systems and you would even consider the stuff from Free Energy to be real? I am appalled. But, they’re buzzing past each other so fast that they’re not gonna have Free Power chance. Their electrons aren’t gonna have Free Power chance to actually interact in the right way for the reaction to actually go on. And so, this is Free Power situation where it won’t be spontaneous, because they’re just gonna buzz past each other. They’re not gonna have Free Power chance to interact properly. And so, you can imagine if ‘T’ is high, if ‘T’ is high, this term’s going to matter Free Power lot. And, so the fact that entropy is negative is gonna make this whole thing positive. And, this is gonna be more positive than this is going to be negative. So, this is Free Power situation where our Delta G is greater than zero. So, once again, not spontaneous. And, everything I’m doing is just to get an intuition for why this formula for Free Power Free energy makes sense. And, remember, this is true under constant pressure and temperature. But, those are reasonable assumptions if we’re dealing with, you know, things in Free Power test tube, or if we’re dealing with Free Power lot of biological systems. Now, let’s go over here. So, our enthalpy, our change in enthalpy is positive. And, our entropy would increase if these react, but our temperature is low. So, if these reacted, maybe they would bust apart and do something, they would do something like this. 
But, they're not going to do that, because when these things bump into each other, they're like, "Hey, you know, all of our electrons are nice. There are nice little stable configurations here. I don't see any reason to react." Even though, if we did react, we would be able to increase the entropy. Hey, no reason to react here. And, if you look at these different variables: if this is positive, even if this is positive, if 'T' is low, this isn't going to be able to overwhelm that. And so, you have a Delta G that is greater than zero, not spontaneous. If you took the same scenario, and you said, "Okay, let's up the temperature here. Let's up the average kinetic energy." Now, these things are going to be able to slam into each other. And, even though the electrons would essentially require some energy to really form these bonds, this can happen because you have all of this disorder being created. You have these more states. And, it's less likely to go the other way, because, well, what are the odds of these things just getting together in the exact right configuration to get back into this lower number of molecules? And, once again, you look at these variables here. Even if Delta H is greater than zero, even if this is positive, if Delta S is greater than zero and 'T' is high, then, especially with the negative sign here, this is going to overwhelm the change in enthalpy and make the whole expression negative. So, over here, Delta G is going to be less than zero. And, this is going to be spontaneous. Hopefully, this gives you some intuition for the formula for Gibbs free energy. And, once again, you have to caveat it: it assumes constant pressure and temperature. But, it is useful for thinking about whether a reaction is spontaneous. And, as you look at biological or chemical systems, you'll see the Delta G's for the reactions.
And so, you'll say, "Oh, it's a negative Delta G? That's going to be a spontaneous reaction. It's a zero Delta G? That's gonna be an equilibrium."

The Pope's right-hand man, Cardinal Pell, is in court for sexual assault, and a massive pedophile ring has been exposed where hundreds of boys were tortured and sexually abused. Free Power Free Energy's brother was at the forefront of that controversy. You can read more about that here. As far as the military-industrial complex goes, Congresswoman Cynthia McKinney grilled Donald Rumsfeld on DynCorp, a private military contractor with ties to the trafficking of women and children.

This is because, in order for the repulsive force of one magnet to push the rotor or moving part past the repulsive force of the next magnet, the following magnet would have to be weaker than the first. But then the weaker magnet would not have enough force to push the rotor past the second magnet. The energy required to magnetise a permanent magnet is not much at all when compared to the energy that a motor delivers over its lifetime. But that leads people to think that somehow a motor is running off energy stored in the magnets from the magnetising process.
Magnetising does not put energy into a magnet – it merely aligns the many small (misaligned and random) magnetic fields in the magnetic material. Dear friends, I'm very new to the free energy paradigm and debate. Have just started following it. From what I have gathered in a short time, most of the stuff floating on the net is a hoax/scam. Free Electricity is very enthusiastic (like me) to discover something exciting. We need to stop listening to articles that say what we can't have. Life is too powerful and abundant and running without our help. We have the resources and creative thinking to match life with our thoughts. A lot of articles and videos across the Internet sicken me and mislead people. The inventors need to stand out more in the corners of the earth. The intelligent thinking is here and freely given power is here. We are just connecting the dots. One trick to making a magnetic motor work is combining the magnetic force you get when polarities of equal sides are in close proximity to each other with the pull of simple gravity. Heavy magnets rotating around a coil of metal with properly placed magnets above them to provide push; gravity then provides the pull and the excess energy needed to make it function. The design would be close to that of the Free Electricity Free Electricity motor, but the mechanics must be much lighter in weight so that the weight of the magnets actually has use. A lot of people could do well to ignore all the rules of physics sometimes. Rules are there to be broken, and all the rules have done is stunt technology advances. Education keeps people dumbed down in an era where energy is big money and anything seen as free is a threat. Open your eyes to the real possibilities. Free Electricity was a genius in his day, and nearly Free Electricity years later we are going backwards. One thing is for sure, magnets are fantastic objects.
It's not free energy, as eventually even the best will demagnetise, but it's close enough for me.

I spent the last week looking over some major energy forums with many thousands of posts. I can't believe how poorly educated people are when it comes to the fundamentals of science and the concept of proof. It has become cult-like, where belief has overcome reason. Folks with barely a grasp of science are throwing around the latest junk-science words and phrases as if they actually know what they are saying. And this business of naming the cult leaders such as Bedini, Free Electricity Free Electricity, John Searl, Steorn and so forth, as if they actually have produced a free energy device, is amazing. The torque readings will give the same results. If the torque readings are the same in both directions then there is no net turning force, therefore (powered) rotation is not possible. Of course it is fun to build the models and observe and test all of this. Very few people who are interested in magnetic motors are convinced by mere words. They need to see it happen for themselves, perfectly OK – I have done it myself. Even that doesn't convince some people, who still feel the need to post faked videos as a last defiant act against the naysayers. Sorry Free Power, I should have asked this in my last post. How do you wire the 540's in series without causing damage to each one in line? And no, I have not seen the big pma kits.
All I have found is the stuff from the likes of WindGen, Mags4Energy and all the homemade stuff you see on YouTube. I have built three pma's on the order of those, but they don't work very well. Where can I find the big ones? Do you know what the 540's max watts is? Hey Free Power, I learn new things all the time. Hey, are you going to put your WindBlue on this new motor you're building, or a wind turbine? What may finally soothe the anger of Free Power D. Free Energy and other whistleblowers is that their time seems to have finally come to be heard, and perhaps even have their findings acted upon, as today's hearing seems to be striking a different tone to the ears of those who have in-depth knowledge of the crimes that have been alleged. This is certainly how Rep. Free Power Free Electricity, a member of the House Oversight and Government Reform Committee, sees it. But that's what I'm thinking about now, lol. Free Energy: Making a metal magnetic does not put energy into it for later release as energy. That is one of the classic "magnetic motor" myths. Agreed, there will be some heat (energy) transfer due to eddy current losses, but that is marginal and not recoverable. It takes a split second to magnetise material. Free Energy it. Stroke an iron nail with a magnet and it becomes magnetic quite quickly. Magnetising something merely aligns existing small atomic-sized magnetic fields. Building these things is easy when you find the parts to work with. That's the hard part! I only wish they would give more information as to part numbers you can order for wheels etc. instead of scrounging around on the internet. Wire is no issue because you can find it all over the internet. I really have no idea if the "magic motor" as you call it is possible or not. Yet I do know of one device that moves using magnetic properties with no external power source, tap tap tap, a compass.
Now, if the properties that allow a compass to always point north can be manipulated in a circular motion, wouldn't a compass move around and around forever with no external power source? My point here is that with new technology and the possibility of new discovery, anything can be possible. I mean, hasn't it already been proven that different places on this planet have very different concentrations of magnetic energy? Magnetic streams, or very highly concentrated areas of magnetic power, if you will. Where is their external power source? Tap tap tap. My 2 cents. Harvey1: Thanks for caring enough to respond! Let me address each of your points: 1. A compass that can be manipulated in a circular motion to move around and around forever with no external power source would constitute a "Magical Magnetic Motor". Show me a working model that anyone can operate without the inventor around and I'll stop tap-tap-tapping. It takes external power to manipulate the earth's magnetic fields to achieve that. Although the earth's magnetic field varies in strength around the planet, it does not rotate to any useful degree over a short enough time span to be useful. When I first heard of the "Baby It's Cold Outside" controversy, it seemed to resemble the type of results from the common social engineering practices taking place right now, whereby people are led to think incompletely about events and culture in order to create a divide amongst people. This creates enemies where they don't truly exist and makes for a very easy to manipulate and control populace. Ultimately, this leads people to call for greater governance.
Free Power's law is overridden by Pauli's law, where in general there must be gaps in heat-transfer spectra and broken symmetry between the absorption and emission spectra within the same medium and between disparate media, and by Malus's law, where anisotropic media like polarizers selectively interact with radiation. Not a lot to be gained there. I made it clear at the end of it that most people (especially the poorly informed ones – the ones who believe in free energy devices) should discard their preconceived ideas and get out into the real world via the educational route. "It blows my mind to read how so-called educated Free Electricity that a magnet generator/motor/free energy device or conditions are not possible, as they would violate the so-called laws of thermodynamics or the conservation of energy or another model of a formed law of man's perception; what a misinformed statement to make, the magnet is full of energy, all matter is, like atoms!!"

The other thing is: do they put out a pure sine wave like what comes from the power company, or is there another device that needs to be added in to change it to pure sine? I think I will just build what I know best if I have to use batteries, and that will be the 12v system. I don't think I will have the heat and power loss with what I am doing; everything will be close together with large cables. Also, nobody has left a comment on the question I had on the Free Electricity×Free Power/Free Power×Free Power/Free Power N50 magnetized through Free Power/Free Power magnets; do you know of any place that might have those? Hi Free Power, I'll have to look at the smart drives, but another problem I am having is that I am not finding any pma, no matter how big it is, that puts out very much power.
Puthoff, the Free energy physicist mentioned above, is a researcher at the Institute for Advanced Studies at Austin, Texas, and published a paper in the journal Physical Review A (atomic, molecular and optical physics) titled "Gravity as a zero-point-fluctuation force" (source). His paper proposed a suggestive model in which gravity is not a separately existing fundamental force, but is rather an induced effect associated with zero-point fluctuations of the vacuum, as illustrated by the Casimir force. This is the same professor that had close connections with the Department of Defense's initiated research in regards to remote viewing. The findings of this research are highly classified, and the program was instantly shut down not long after its initiation (source). Free Energy: The type of magnet (natural or man-made) is not the issue. Natural magnetic material is a very poor basis for a magnet compared to man-made; that is not the issue either. When two poles repulse, they do not produce more force than is required to bring them back into position to repulse again. Magnetic motor "believers" think there is a "magnetic shield" that will allow this to happen. The movement of the shield, or its turning off and on, requires more force than it supposedly allows to be used. Permanent shields merely deflect the magnetic field, and thus the maximum repulsive force (and attraction forces) remain equal to each other, but at a different level to that without the shield. Magnetic motors are currently a physical impossibility (sorry Mr. Free Electricity for fighting against you so vehemently earlier). I might have to play with it and see. Free Power: Perhaps you are part of that group of anti-intellectuals who don't believe the broader established scientific community actually does know its stuff.
Ever notice that no one has ever had a paper published on a working magnetic motor in a reputable scientific journal? There are a few patented magnetic motors that curiously have never made it to production. The US patent office no longer approves patents for these devices, so scammers, oops, I mean inventors, have to go overseas shopping for some patent office silly enough to grant one. I suggest if anyone is trying to build one, you make one with a decent bearing system. The wobbly system being shown on these recent videos is rubbish. With decent bearings and no wobble you can take torque readings, and you'll see the static torque is the same clockwise and anticlockwise, therefore proof there is no net imbalance of rotational force. I have had many magnets get weak as time went by. I am a mechanic and I use magnets all the time to pick up stuff that I have dropped or to hold tools, and I will have some that get to where they won't pick up any more; refrigerator mags get to where they fall off. DC motors after time get so they don't run as fast as they used to. I replaced the mags in a car blower motor once and it ran like it was new. Now, I do not know about the neo's, but I know that mags do lose their power. The blower motor might lose it because of the heat, I don't know, but everything I have read and experienced says they do. So what's up with that? Hey Free Electricity, OK, I agree with what you are saying. There are a lot of vids on the internet that show a motor with all its mags straight and pointing right at each other, and yes, that will never run; it will do exactly what you say. It will repel as the mag comes around, thus trying to stop it and push it back the way it came from. If power flows from the output shaft, where does it flow in? Magnets don't contain energy (despite what free energy buffs claim). If energy flows out of a device it must either get lighter or colder.
A free energy device by definition must operate in a closed system; therefore it can't draw heat from outside to stop the cooling process. It doesn't get lighter unless there is a nuclear reaction in the magnets, which hasn't been discovered, so common sense says to me magnetic motors are a con and can never work. Science is not wrong. It is not a single entity. Free Electricity or findings can be wrong. Errors or corrections occur at the individual level. Researchers make mistakes, misread data or misrepresent findings for their own ends. Science is about observation, investigation and application of the scientific method, and most importantly peer review. Self-anointed inventors masquerading as scientists Free Electricity free energy is available, but not one of them has ever demonstrated it to be so. Were it so, they would be nominated for the Nobel prize in physics, and all physics books would be heaped upon Free Power Free Electricity and destroyed, as they deserve. But this isn't going to happen. Always try to remember. Why? Because I didn't have the correct angle or distance. It did, however, start to move on its own. I made a comment about that, even pointing out it was going the opposite way, but that didn't matter. This is a video somebody made of a completed unit. You'll notice that he gives a full view all around the unit and that there are no wires or other outside sources to move the core. Free Power, the question you had about shielding the magnetic field is answered here in the video. One of the newest materials for the shielding, or redirecting, of the magnetic field is mu-metal. You can get neodymium magnets via eBay really cheaply. That way you won't feel so bad when it doesn't work. Regarding shielding – all a shield does is reduce the magnetic strength.
Nothing will work as a shield to accomplish the impossible state whereby there is a reduced repulsion as the magnets approach each other. There is a lot of waffle on free energy sites about shielding, and it is all hogwash. Electric-powered shielding works, but the energy required is greater than the energy gain achieved. It is a pointless exercise. Hey, one thing I have not seen in any of these posts is the subject of shielding. The magnets will just attract to each other in between the repel positions and come to a stop. You cannot just drop the magnets into the holes and expect it to run smoothly. Also, I have not been able to find magnets of a large size without paying for them with a few body parts. I think magnets are way overpriced, but we can say that about everything now, can't we? If you can get them at a good price, let me know.

You might also see this reaction written without the subscripts specifying that the thermodynamic values are for the system (not the surroundings or the universe), but it is still understood that the values for $\Delta H$ and $\Delta S$ are for the system of interest. This equation is exciting because it allows us to determine the change in Gibbs free energy using the enthalpy change, $\Delta H$, and the entropy change, $\Delta S$, of the system. We can use the sign of $\Delta G$ to figure out whether a reaction is spontaneous in the forward direction, backward direction, or if the reaction is at equilibrium. Although $\Delta G$ is temperature dependent, it's generally okay to assume that the $\Delta H$ and $\Delta S$ values are independent of temperature as long as the reaction does not involve a phase change. That means that if we know $\Delta H$ and $\Delta S$, we can use those values to calculate $\Delta G$ at any temperature.
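Since the passage stresses both the relation $\Delta G = \Delta H - T\Delta S$ and its temperature dependence, here is a minimal sketch in Python (the reaction values are made-up illustrative numbers, not data from the text):

```python
# Spontaneity check via Delta_G = Delta_H - T * Delta_S.
def delta_g(delta_h_kj, delta_s_j, temp_k):
    """Delta_H in kJ/mol-reaction, Delta_S in J/(mol-reaction*K), T in K.

    Note the factor-of-1000 unit mismatch between kJ and J:
    Delta_S must be converted to kJ before combining the terms.
    """
    return delta_h_kj - temp_k * (delta_s_j / 1000.0)

# Hypothetical endothermic reaction (Delta_H > 0) with increasing
# entropy (Delta_S > 0): non-spontaneous at low T, spontaneous at
# high T, matching the sign analysis in the text.
for T in (100.0, 2000.0):
    dg = delta_g(50.0, 100.0, T)
    print(T, dg, "spontaneous" if dg < 0 else "non-spontaneous")
```

The same function also reproduces the other sign cases: with $\Delta H < 0$ and $\Delta S > 0$ the result is negative at every temperature.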
We won't be talking in detail about how to calculate $\Delta H$ and $\Delta S$ in this article, but there are many methods to calculate those values. Problem-solving tip: it is important to pay extra close attention to units when calculating $\Delta G$ from $\Delta H$ and $\Delta S$! Although $\Delta H$ is usually given in $\frac{\text{kJ}}{\text{mol-reaction}}$, $\Delta S$ is most often reported in $\frac{\text{J}}{\text{mol-reaction}\cdot\text{K}}$. The difference is a factor of 1000! Temperature in this equation is always positive (or zero) because it has units of $\text{K}$. Therefore, the second term in our equation, $T \Delta S_\text{system}$, will always have the same sign as $\Delta S_\text{system}$.

This statement came to be known as the mechanical equivalent of heat and was a precursory form of the first law of thermodynamics. By 1865, the German physicist Rudolf Clausius had shown that this equivalence principle needed amendment. That is, one can use the heat derived from a combustion reaction in a coal furnace to boil water, use this heat to vaporize steam, and then use the enhanced high-pressure energy of the vaporized steam to push a piston. Thus, we might naively reason that one can entirely convert the initial combustion heat of the chemical reaction into the work of pushing the piston. Clausius showed, however, that we must take into account the work that the molecules of the working body, i.e., the water molecules in the cylinder, do on each other as they pass or transform from one step or state of the engine cycle to the next, e.g., from $(P_1, V_1)$ to $(P_2, V_2)$. Clausius originally called this the "transformation content" of the body, and then later changed the name to entropy.
Thus, the heat used to transform the working body of molecules from one state to the next cannot be used to do external work, e.g., to push the piston. Clausius defined this transformation heat as $dQ = T\,dS$. In 1873, Willard Gibbs published *A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces*, in which he introduced the preliminary outline of the principles of his new equation, able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i.e., bodies being in composition part solid, part liquid, and part vapor, and by using a three-dimensional volume–entropy–internal-energy graph, Gibbs was able to determine three states of equilibrium, i.e., "necessarily stable", "neutral", and "unstable", and whether or not changes will ensue. In 1876, Gibbs built on this framework by introducing the concept of chemical potential, so as to take into account chemical reactions and states of bodies that are chemically different from each other.
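The history above ends with Clausius's transformation heat $dQ = T\,dS$ and Gibbs's equation; the standard textbook link between the two, sketched here as a short derivation (not present in the original text), is:

```latex
% Definition of Gibbs free energy:
G \equiv H - TS
% Differentiating:
dG = dH - T\,dS - S\,dT
% At constant temperature (dT = 0) this gives the criterion used earlier:
\Delta G = \Delta H - T\,\Delta S
% with \Delta G < 0 for a spontaneous process and \Delta G = 0 at equilibrium.
```

This makes explicit why the $T\Delta S$ term, the heat "lost" to internal transformation in Clausius's sense, is subtracted from the enthalpy change.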
http://www.physicsforums.com/forumdisplay.php?f=62&sort=lastpost&order=desc&daysprune=-1&page=2
# Quantum Physics - Mathematical description of the motion and interaction of subatomic particles.

Quantum Mechanics & Field Theory forum index; each entry gives the thread snippet, then (last post, last poster, replies, views):

- Announcement: Follow us on social media and spread the word about PF! (Jan16-12)
- Pinned: Quantum Physics Forum Rules: "Before posting anything, please review the Physics Forums Global Guidelines. If you are seeking help with a..." (Feb23-13 08:38 AM, ZapperZ, 1, 36,164)
- "Hello! For some time now I have been absolutely fascinated with quantum mechanics. Unfortunately for me, I am well..." (Aug13-14 10:32 PM, atyy, 3, 244)
- "I have some questions about this paper: http://users.phys.psu.edu/~radu/extra_strings/freedman_sigma_model.pdf In..." (Aug13-14 10:17 PM, Greg Bernhardt, 1, 240)
- "Suppose you have λø4 theory and calculate the bare 4-point function: ..." (Aug13-14 08:45 PM, geoduck, 1, 173)
- "Hi, I have been trying to get the expression for the transition dipole moment of hydrogen but I am not able to get the..." (Aug13-14 09:39 AM, Meir Achuz, 3, 234)
- "According to my teacher, for any two operators A and B, the commutators =df(A)/dA and =df(B)/dB He did not give any..." (Aug12-14 10:46 PM, strangerep, 8, 262)
- "I have read contrasting things about the issue. Can you explain briefly why the assumption of discrete space and..." (Aug12-14 07:51 PM, bhobba, 24, 791)
- "Author: ZapperZ Originally posted on Feb28-14 One of the most spectacular theoretical description that Einstein..." (Aug12-14 05:00 PM, davenn, 2, 207)
- "What is Quantum Entanglement, and what does it do? I attempted at learning via online articles." (Aug12-14 03:29 PM, BiGyElLoWhAt, 17, 446)
- "Hello, The one-electron universe hypothesis, commonly associated with Richard Feynman when he mentioned it in his..." (Aug12-14 03:05 PM, jtbell, 3, 285)
- "Author: ZapperZ Originally posted on Jan17-13 {I have seen this type of question or scenario being presented on PF..." (Aug12-14 02:12 PM, Greg Bernhardt, 0, 175)
- Position eigenstates (multi-page thread): "When a particle's position is measured, does the wavefunction collapse to the eigenstate of the measurement (a delta..." (Aug12-14 12:28 PM, jostpuur, 89, 2,363)
- "Hi everyone, This is my first post. Years ago I read in a science magazine that (at least according to a certain..." (Aug12-14 09:46 AM, mjsd, 7, 417)
- "Hello Forum, When a system is in a particular state, indicated by a |A>, we can use any basis of eigenvectors to..." (Aug12-14 04:31 AM, tom.stoer, 26, 537)
- "One of the problems in QM i frequently encounter in all textbooks is the shifting of the wall problem which goes like..." (Aug12-14 02:08 AM, tom.stoer, 14, 366)
- "Hi. I will give you a question I have looked at and then tell you where I am confused. The wavefunction for a..." (Aug11-14 06:40 PM, bhobba, 7, 305)
- "Dear PF: I'm currently working in a problem that has had me stranded for several weeks now. The problem reads as..." (Aug11-14 01:00 PM, Jilang, 15, 295)
- "Hi, I have read that quantum fluctuations have created our universe through the Big Bang. The issue that I didn't..." (Aug11-14 09:42 AM, bgq, 2, 263)
- "I considered the covariance of 2 spin 1/2 as a non linear operator: $A\otimes B-A|\Psi\rangle\langle\Psi|B$. The..." (Aug11-14 08:48 AM, jk22, 9, 1,173)
- "Hello I am trying to solve the dirac equation. I want to solve the dirac eq say for 2 particle system. therefore i..." (Aug10-14 09:31 PM, nakulphy, 5, 388)
- "I want to discuss the theorem proved in the article 'Quantum states and generalized observables: a simple proof of..." (Aug10-14 01:40 PM, naima, 37, 3,943)
- "Does having a momentum cutoff require space to be discretized? As an analogy, suppose you put your fields in a box..." (Aug10-14 12:08 AM, atyy, 1, 250)
- "So based on String theory, when doing a Double Slit Style Experiment when an observation is made it is a particle and..." (Aug9-14 09:49 PM, bhobba, 2, 188)
- "Hi guys, Sorry if this isn't quite the right place to post this, but I have a few conceptual questions that I'd..." (Aug9-14 08:05 PM, "Don't panic!", 18, 624)
- "Hi, I had a look at Susskinds explanation about QM and this is pretty strange from my point of view. Video:..." (Aug9-14 07:45 PM, Omega0, 2, 213)
- "Hallo everybody. Foreword1: I am an engineer not a physicist Foreword2: I am reading a paper about diffusion MRI who..." (Aug9-14 07:40 PM, bhobba, 1, 235)
- "Is it purely coincidental that the internal symmetry related flavor quantum numbers (like isospin and weak isospin) and..." (Aug9-14 05:29 PM, dextercioby, 1, 254)
- "Suppose we want to get eigenfunctions of a One-Particle Hamiltonian corresponding to one of its eigenvalues, say E, in..." (Aug9-14 02:44 PM, WannabeNewton, 1, 233)
- "The spin of an arbitrarily oriented electron precesses in the presence of a magnetic field (see Feynman lectures 10-7)..." (Aug9-14 08:31 AM, 43arcsec, 11, 451)
- "is it true that 'Spread of Energy in Space' determines 'Spread of Matter in Space', at given point in time? i am..." (Aug9-14 02:03 AM, neomahakala108, 10, 299)
- ""E2" Quadrupole decay rate Problem Statement Obtain the angular dependence of the rate for the emission to a single..." (Aug9-14 01:24 AM, MisterX, 1, 326)
- "In Feynman's famous book QED, he repeatedly reminds us that we must include the possibilities of photons traveling..." (Aug8-14 08:42 PM, WannabeNewton, 3, 315)
- "I recognize the practical aspects of this would be absurd, but I must admit the premise of what it would take to put a..." (Aug8-14 07:50 PM, bhobba, 21, 1,460)
- "Suppose I want an expectation value of a harmonic oscillator wavefunction, then in what way will I write the Hermite..." (Aug8-14 03:13 AM, Mniazi, 4, 260)
- "Hi again, Another, possibly trivial, question. In quantum dynamics we consider maps containing the evolution of a..." (Aug7-14 08:22 PM, bhobba, 3, 180)
- "Hi, In many quantum physics books I see questions such as 'what is the spin-orbit interaction in this case?' Sorry..." (Aug7-14 03:56 AM, DrClaude, 1, 301)
- "Is there a resonant frequency of light? I was just wondering because the higher the frequency of light, the higher the..." (Aug7-14 03:38 AM, Rob Hoff, 5, 359)
- "One theory I've heard and which I find interesting is that entanglement between any pair of two-state systems could be..." (Aug6-14 03:26 PM, Jonathan Scott, 3, 371)
- "I am curious what a photon looks like if we could observe it pass through space. It's also supposed to be oscillation..." (Aug6-14 05:40 AM, Geometry_dude, 39, 1,485)
- "I do not understand why we quantize the field by defining the commutation relation. What's that mean? And what's the..." (Aug6-14 02:59 AM, bhobba, 2, 260)
- "Hi all, Just a quick theory based question regarding the Zeeman Effect. The effect of the applied magnetic field..." (Aug5-14 03:06 PM, WannabeNewton, 1, 244)
http://openstudy.com/updates/5583900ee4b0cb8c62ea9d9e
## anonymous one year ago Consider this reaction: 2SO2(g) + O2(g) → 2SO3(g). What volume, in milliliters, is required to react with 0.640 g of SO2 gas at STP? 1. anonymous We can use the following formula: $m = n M$ where m is the mass, n is the number of moles, and M is the molar mass, so figure out the molar mass of SO2. We also have to use another formula, $n = \frac{ v }{ V_{STP} }$. We can relate this to our first formula by solving for n first: $m = nM \implies n = \frac{ m }{ M }$ Now we can plug this into our second equation: $\frac{ m }{ M } = \frac{ v }{ V_{STP} } \implies v = \frac{ m V_{STP} }{ M }$ Note that $V_{STP} = 22.4 \text{ L/mol}$ 2. anonymous Thank you for taking time to answer
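The algebra above can be checked numerically. A minimal sketch: note that the answer's formula $v = mV_{STP}/M$ gives the volume of the SO2 itself, so if the question is asking for the volume of O2 (an assumption here), one extra stoichiometric step is needed, since 2 mol SO2 consume 1 mol O2:

```python
# Volume of O2 (mL) at STP needed to react with a given mass of SO2.
# Assumes the asked-for gas is O2 and molar volume at STP is 22.4 L/mol,
# as in the answer above; the factor 1/2 comes from 2 SO2 + O2 -> 2 SO3.

M_SO2 = 32.07 + 2 * 16.00   # molar mass of SO2, g/mol (S + 2*O)
V_STP = 22.4                # molar volume of an ideal gas at STP, L/mol

def o2_volume_ml(mass_so2_g):
    n_so2 = mass_so2_g / M_SO2   # moles of SO2
    n_o2 = n_so2 / 2             # stoichiometry: half as many moles of O2
    return n_o2 * V_STP * 1000   # convert L to mL

print(round(o2_volume_ml(0.640), 1))  # → 111.9
```

Dropping the factor of 1/2 gives the volume of the SO2 sample itself instead.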
https://math.stackexchange.com/questions/3361388/proof-of-warings-theorem
# proof of Waring's theorem So the question is: if we have a sequence $$A_1,A_2,...,A_n$$ of events, and we let $$N_k$$ be the event that exactly $$k$$ of the $$A_i$$ occur, what is the probability of the event $$N_k$$? Prove that it is $$P(N_k)=\sum_{i=0}^{n-k}(-1)^i\binom{k+i}{k}S_{k+i}$$ where $$S_j=\sum_{1 \le i_1 < \cdots < i_j \le n} \mathbb P \Big( \bigcap_{r=1}^{j} A_{i_r} \Big)$$ • It depends on how the $A_i$'s depend on one another. If there are $n$ fair coins that are each flipped and $A_i$ is the event that coin $i$ is heads, then the $A_i$'s are independent and $P(N_k) = {n \choose k}2^{-n}$. If there is a single fair coin flipped and each $A_i$ is the event that the coin landed on heads, then $P(N_k) = 0$ for $1 \le k \le n-1$, $P(N_0) = \frac{1}{2}$, and $P(N_n) = \frac{1}{2}$. – mathworker21 Sep 21 at 0:34 • Shouldn't it be $S_j = \sum_{1 \le i_1 < ... < i_j \le n} \mathbb P ( \bigcap_{r=1}^j A_{i_r})$? Instead of that union under $\mathbb P()$ – Dominik Kutek Sep 21 at 1:13 • yes exactly, I'll correct it – user 42493 Sep 21 at 1:17 • @user42493 I can't fathom why you would wait until adding that edit. We're not mind readers. – mathworker21 Sep 21 at 1:52 • :) :) :) indeed! I was just curious how you would solve it without knowing the formula! :):):) – user 42493 Sep 21 at 1:53 So, we have a probability space $$(\Omega, \mathcal F, \mathbb P)$$, and let $$A_1,...,A_n \in \mathcal F$$ be arbitrary events. I'd change notation, especially $$k$$ with $$r$$, because I like to use $$k$$ as an index under the $$\sum$$ sign. So let's fix any $$r \in \{1, ... ,n\}$$ and let $$N_r \in \mathcal F$$ be the event: exactly $$r$$ of $$A_1,...,A_n$$ occur. Let $$S_k(n) = \{ T \subset \{1,...,n\} : |T| = k \}$$ (the $$k$$-element subsets of $$\{1,...,n\}$$). Finally, for $$T \subset \{1,...,n\}$$ let $$A_T = \bigcap_{ j \in T} A_j$$. We'd like to prove (I've translated the index of the sum): $$\mathbb P(N_r) = \sum_{k=r}^n (-1)^{k-r} {k \choose r} \sum_{T \in S_k(n)}\mathbb P(A_T)$$ PROOF: Fix any $$K \in S_r(n)$$.
Let $$B_K$$ be the event: $$A_i$$ occurs if and only if $$i \in K$$ (that is, exactly $$r$$ of $$A_1 ,... ,A_n$$ occurred, and only those with indices from $$K$$). Now, for $$j \notin K$$ let $$C_j = A_K \cap A_j$$. We're interested in $$\mathbb P(B_K)$$. Note that we can now use the inclusion-exclusion formula, because $$\mathbb P(B_K) = \mathbb P( \bigcap_{j \notin K} (A_K \setminus C_j))$$. Recall that the set $$K$$ is fixed, and there are exactly $$n-r$$ indices in $$L = [n]\setminus K$$, where $$[n] = \{1,...,n\}$$. Again, let $$C_T = \bigcap_{j \in T} C_j$$, where $$T \subset [n]$$. Using inclusion-exclusion, we have: $$\mathbb P(B_K) = \sum_{k=0}^{n-r} (-1)^k \sum_{T \in S_k(L)} \mathbb P(C_T) = \sum_{k=0}^{n-r} (-1)^k \sum_{T \in S_k(L)} \mathbb P( A_K \cap A_T) = \sum_{k=0}^{n-r} (-1)^k \sum_{T \in S_k(L)} \mathbb P(A_{T \cup K}) = \sum_{k=r}^n \sum_{T: K \subset T, T \in S_k(n)} (-1)^{k-r} \mathbb P(A_T) = \sum_{T: K \subset T \subset [n]} (-1)^{|T|-r} \mathbb P(A_T)$$ Now what we need to do is sum this over every $$K \in S_r(n)$$ (note that $$B_{K_1}, B_{K_2}$$ are disjoint for any $$K_1 \neq K_2$$). And we have: $$\mathbb P(N_r) = \mathbb P(\bigcup_{K \in S_r(n)} B_K) = \sum_{K \in S_r(n)}\mathbb P(B_K) =\sum_{K \in S_r(n)}\sum_{T: K \subset T \subset [n]} (-1)^{|T|-r} \mathbb P(A_T) = \sum_{T: |T| \ge r} \sum_{R \in S_r(T)} (-1)^{|T|-r} \mathbb P(A_T) = \sum_{T: |T| \ge r} {|T| \choose r} (-1)^{|T| - r} \mathbb P(A_T) = \sum_{k=r}^n \sum_{T \in S_k(n)} {k \choose r} (-1)^{k-r} \mathbb P(A_T) = \sum_{k=r}^n (-1)^{k-r} {k \choose r} \sum_{T \in S_k(n)} \mathbb P(A_T)$$ Which is exactly what we wanted to prove. • The proof isn't my own; I've just recalled and slightly modified a proof I once saw of a combinatorial identity. The most "tricky" part for me is to come up with these disjoint events $B_K$.
– Dominik Kutek Sep 21 at 2:05 • I know in my gut that there is a natural, intuitive, motivated proof, but I'm too lazy to figure it out – mathworker21 Sep 21 at 12:02 • I hope so, too :D – Dominik Kutek Sep 21 at 12:22
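Since the identity is purely combinatorial, it can also be sanity-checked by brute force on a small finite probability space. The sample space and events below are arbitrary illustrative choices, not taken from the question:

```python
from fractions import Fraction
from itertools import combinations
from math import comb

# Uniform probability space on 8 points and three arbitrary events.
omega = range(8)
events = [{0, 1, 2, 3}, {2, 3, 4}, {3, 5}]
n = len(events)
P = lambda E: Fraction(len(E), 8)   # uniform measure

def S(k):
    # S_k = sum over k-element index sets T of P(intersection of A_i, i in T)
    return sum(P(set.intersection(*(events[i] for i in T)))
               for T in combinations(range(n), k))

def waring(r):
    # Right-hand side of Waring's formula.
    return sum((-1) ** (k - r) * comb(k, r) * S(k) for k in range(r, n + 1))

def direct(r):
    # P(exactly r of the events occur), by counting sample points.
    return Fraction(sum(1 for w in omega
                        if sum(w in A for A in events) == r), 8)

for r in range(1, n + 1):
    assert waring(r) == direct(r)
```

Exact `Fraction` arithmetic avoids any floating-point doubt about whether the two sides really agree.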
http://mathoverflow.net/questions/67961/codimension-zero-immersions?sort=votes
# Codimension zero immersions Given an immersion of the (n-1)-sphere into a (closed) n-manifold, when does it extend to an immersion of the n-disk? Remark: If the sphere had dimension k smaller than n-1, then such an immersion would exist if and only if the corresponding map from the k-sphere to the Stiefel manifold is null-homotopic. This is the Hirsch-Smale theorem and in fact an example of an h-principle. However, the case k=n-1 is exactly the exceptional case which does NOT obey an h-principle. Easy examples (Figure 8.1 in the book by Eliashberg-Mishachev) show that there exist immersions of the circle in the plane which have a formal extension but not a genuine extension to the 2-disk. So, is there anything known about sufficient conditions for extendability? - I'm probably ignorant, but can you say why this question is tagged ds.dynamical-systems? – Willie Wong Jun 16 '11 at 16:02 Related: mathoverflow.net/questions/57215/… , mathoverflow.net/questions/43743/… and the work of Koschorke on singularities of bundle morphisms. But I think the question is a subtle one. – Mark Grant Jun 16 '11 at 16:14 @MG: Wasn't Koschorke's work about codimension one immersions, which then reduces to the study of vector bundle monomorphisms? – ThiKu Jun 16 '11 at 16:43 @unknown (google): Koschorke's work is quite general, but I'm not sure it applies to this problem. See Chapter 1.3 of "Vector fields and other vector bundle morphisms—a singularity approach", where a complete obstruction to a map being homotopic to an immersion is constructed. It is an element in a certain normal bordism group. – Mark Grant Jun 17 '11 at 16:02 This is subtle, even for $n=2$. In this case, the problem clearly reduces to $S^2$ or $\mathbb{R}^2$, since every surface has one of these as a universal cover. Samuel Blank found a criterion to determine if a curve in $\mathbb{R}^2$ bounds an immersed disk.
An exposition has been given by Valentin Poenaru, and the criterion has been extended to $S^2$ by Frisch. There is also a bit of discussion in these papers about the higher-dimensional problem. - Smale-Hirsch is not just a theorem about the existence of immersions. It's a theorem about the homotopy type of the space of all immersions. Given an immersion $$S^{n-1} \to \mathbb R^n$$ you get a bundle monomorphism $$TS^{n-1} \to \mathbb R^n$$ There's a cute trick that shows the space of all such bundle monomorphisms has the homotopy type of $Maps(S^{n-1}, SO_n)$. Here's how it goes. Given a bundle monomorphism $f : TS^{n-1} \to \mathbb R^n$, the associated map $G(f) : S^{n-1} \to SO_n$ is defined as follows: given $p \in S^{n-1}$ and $v \in \mathbb R^n$, let $v_{\perp} \in \mathbb R$ and $v_{||} \in T_pS^{n-1}$ be the normal component of $v$ and the orthogonal projection of $v$ onto the tangent space, respectively, and set $G(f)(p)(v) = f(p)(v_{||}) + v_{\perp}f(p)^+$, where $f(p)^+$ is the unit vector normal to $f(p)(T_pS^{n-1})$ chosen so that $G(f) \in SO_n$, i.e. so that it is not orientation-reversing. You can reverse this construction as well, to go from maps $S^{n-1} \to SO_n$ to bundle monomorphisms $TS^{n-1} \to \mathbb R^n$. It's basically by design that a homotopy of $G(f)$ can be re-interpreted as a $1$-parameter family of immersions $S^{n-1} \to \mathbb R^n$ equipped with a normal vector field. Perhaps you can't extend this 1-parameter family to an immersion $S^{n-1} \times [0,1] \to \mathbb R^n$. Is that the key issue? - I'm not so sure whether this works. The problem might be that a homotopy of immersions of S does not necessarily yield an immersion of Sx[0,1]. One needs that the derivative in the [0,1]-direction is linearly independent of the derivatives in the S-direction. – ThiKu Jun 16 '11 at 23:46 There's a key difference. A map $X \to V_{n,j}$ may not lift to a map $X \to V_{n,j+1}$. But a map $X \to V_{n,n-1} \equiv SO_n$ always lifts to a map $X \to V_{n,n} \equiv O_n$.
Let me edit my answer a bit to make the key step less "insiderish". –  Ryan Budney Jun 17 '11 at 0:23 I don't think constructing the 1-parameter family of immersions with a normal vector field is the problem. But I changed my answer. It's only a partial response to your question, not really everything you were looking for. What do you mean by "formal extension" -- is that the 1-parameter family of immersions with normal vector field? –  Ryan Budney Jun 17 '11 at 1:13 Smale-Hirsch describes the homotopy types of these spaces of immersions, as Ryan says, so it gives a test for whether a given immersion of $S^{n-1}$ in an $n$-manifold is homotopic through immersions to one that extends to an immersion of $D^n$. But it does not answer the question of whether a given immersion can be so extended. You might think that the restriction map from the space of immersions of $D^n$ to the space of immersions of $S^{n-1}$ is a fibration, but it's not. It is if the disk has positive codimension, and this is a key step in proving Smale-Hirsch. But it's false in codim $0$. –  Tom Goodwillie Jun 17 '11 at 2:07 @ RB : "formal immersion" means a vector bundle monomorphism TM--->TN which does not necessarily come from an immersion M--->N. If dim(M)<dim(N), then every formal immersion is homotopic to an immersion, but for dim(M)=dim(N) this is not always true. –  ThiKu Jun 18 '11 at 11:26 Christian Pappas gave a Morse-theoretic method for constructing all extensions of a codimension 1 immersion $f:\partial N\to W$ to an immersion $F:N\to W$ with $F|_{\partial N}=f$. -
https://www.hydrol-earth-syst-sci.net/22/5001/2018/
Hydrology and Earth System Sciences, an interactive open-access journal of the European Geosciences Union

Hydrol. Earth Syst. Sci., 22, 5001-5019, 2018, https://doi.org/10.5194/hess-22-5001-2018

Research article | 27 Sep 2018

# Improvement of the SWAT model for event-based flood simulation on a sub-daily timescale

Dan Yu1, Ping Xie1,2, Xiaohua Dong3,4, Xiaonong Hu5, Ji Liu3,4, Yinghai Li3,4, Tao Peng3,4, Haibo Ma3,4, Kai Wang6, and Shijin Xu6

• 1State Key Laboratory of Water Resources and Hydropower Engineering Science, Wuhan University, Wuhan, 430072, China
• 2Collaborative Innovation Center for Territorial Sovereignty and Maritime Rights, Wuhan, 430072, China
• 3College of Hydraulic & Environmental Engineering, China Three Gorges University, Yichang, 443002, China
• 4Hubei Provincial Collaborative Innovation Center for Water Security, Wuhan, 430070, China
• 5Institute of Groundwater and Earth Sciences, Jinan University, Guangzhou, 510632, China
• 6Hydrologic Bureau of Huaihe River Commission, Bengbu, 233001, China

Abstract. Flooding represents one of the most severe natural disasters threatening the development of human society. A model that is capable of predicting the hydrological responses in a watershed with management practices during a flood period would be a crucial tool for the pre-assessment of flood reduction measures. The Soil and Water Assessment Tool (SWAT) is a semi-distributed hydrological model that is well capable of runoff and water quality modeling under changed scenarios. The original SWAT model is a long-term yield model. However, a daily simulation time step and a continuous time marching limit the application of the SWAT model for detailed, event-based flood simulation.
In addition, SWAT uses a basin level parameter that is fixed for the whole catchment to parameterize the unit hydrograph (UH), thereby ignoring the spatial heterogeneity among the sub-basins when adjusting the shape of the UHs. This paper developed a method to perform event-based flood simulation on a sub-daily timescale based on SWAT2005 and simultaneously improved the UH method used in the original SWAT model. First, model programs for surface runoff and water routing were modified to a sub-daily timescale. Subsequently, the entire loop structure was broken into discrete flood events in order to obtain a SWAT-EVENT model in which antecedent soil moisture and antecedent reach storage could be obtained from daily simulations of the original SWAT model. Finally, the original lumped UH parameter was refined into a set of distributed ones to reflect the spatial variability of the studied area. The modified SWAT-EVENT model was used in the Wangjiaba catchment located in the upper reaches of the Huaihe River in China. Daily calibration and validation procedures were first performed for the SWAT model with long-term flow data from 1990 to 2010, after which sub-daily (Δt=2 h) calibration and validation in the SWAT-EVENT model were conducted with 24 flood events originating primarily during the flood seasons within the same time span. Daily simulation results demonstrated that the SWAT model could yield very good performances in reproducing streamflow for both the whole year and the flood period. Event-based flood simulation results from the sub-daily SWAT-EVENT model indicated reliable performances, with ENS (Nash-Sutcliffe efficiency) values varying from 0.67 to 0.95. The SWAT-EVENT model, compared to the SWAT model, particularly improved the simulation accuracies of the flood peaks.
Furthermore, the SWAT-EVENT model results of the two UH parameterization methods indicated that the use of the distributed parameters resulted in a more reasonable UH characterization and better model fit compared to the lumped UH parameter.

1 Introduction

A flood represents one of the most severe natural disasters in the world. It has been reported that nearly 40 % of losses originating from natural catastrophes are caused by floods (Adams III and Pagano, 2016). Floods have caused enormous losses to economies, societies, and ecological environments around the world (Doocy et al., 2013; Werritty et al., 2007; Guan et al., 2015). China is a flood-prone country, which suffers from severe flooding almost every year (Zhang et al., 2002). In this situation, protection against flooding has always been an urgent, primary task of the government. A series of structural and non-structural flood mitigation measures have been implemented to control and manage the floods (Guo et al., 2018). However, accurate flood simulations would be particularly important for such design- or management-related issues. Numerous hydrological models have been developed since their first appearance. According to the spatial discretization method, these existing hydrological models can be divided into two categories: lumped models and distributed (semi-distributed) models (Maidment, 1994). Although lumped models are generally accepted for flood forecasting and simulation due to their structural simplicity, computational efficiency and lower data requirements, they are not applicable to complex catchments since they do not account for the heterogeneity of the catchments (Yao et al., 1998; Hapuarachchi et al., 2011). Meanwhile, distributed (semi-distributed) models subdivide the entire catchment into a number of smaller heterogeneous sub-units with dissimilar attributes.
An advantage of distributed (semi-distributed) models is that they incorporate the spatial characteristics of a catchment such as land cover, soil properties, topography and meteorology (Yang et al., 2001, 2004). A large number of distributed or semi-distributed hydrological models have been applied in flood simulation. Beven et al. (1984) first tested the applicability of the TOPMODEL in flood simulation for three UK catchments and suggested that the model could be a useful approach for ungauged catchments. The Variable Infiltration Capacity (VIC) model is also playing an increasing role in flood simulation (Wu et al., 2014; Yigzaw and Hossain, 2012). Applications of the HBV model for flood simulation can be found in many studies (Haggstrom et al., 1990; Grillakis et al., 2010; Kobold and Brilly, 2006). The HEC-HMS model was able to provide reasonable flood simulation results in the San Antonio River basin (Ramly and Tahir, 2016). Among the many distributed (semi-distributed) models, one that is capable of predicting the hydrological responses in watersheds with management practices would provide a scientific reference for preventing floods and mitigating their adverse effects. The Soil and Water Assessment Tool (SWAT) model (Arnold et al., 1998) is a typical semi-distributed hydrological model that delineates a catchment into a number of sub-basins, which are subsequently divided into hydrologic response units (HRUs) representing the unique combination of land cover, soil type, and slope class within a sub-basin. The SWAT model integrates well with Geographic Information Systems (GIS), having great potential in dealing with spatial flood control measures. In addition, the SWAT model is widely applied for runoff and water quality modeling under changed scenarios (Glavan et al., 2015; Yu et al., 2018; Qiu et al., 2017; Baker and Miller, 2013; Yan et al., 2013).
SWAT is a continuous (i.e., long-term) model with limited applicability to simulating instantaneous hydrologic responses. Therefore, Jeong et al. (2010) extended the capability of SWAT to simulate operational sub-daily or even sub-hourly hydrological processes, the modifications of which primarily focused on the model algorithms to enable the SWAT model to operate at a finer timescale with a continuous modeling loop. Constrained by data availability in China (MWR, 2008), rainfall and discharge observations at a sub-daily timescale are usually collected during flood periods, while daily data are measured otherwise. In this respect, hydrological models are usually applied at different timescales (i.e., a daily timescale for continuous simulations and a sub-daily timescale for event-based flood simulation) according to the availability of observed rainfall and discharge data (Yao et al., 2014a). Hence, a major constraint for the application of the SWAT model as modified by Jeong et al. (2010) is the conflict between a continuous simulation loop and the discontinuous observed sub-daily data in China. To capture the sophisticated characteristics of flood events at a sub-daily timescale, a refinement of the spatial representation within the SWAT model is necessary. A dimensionless unit hydrograph (UH), which was distributed as a triangular shape and embedded within a sub-daily overland flow routing process in the SWAT model, was applied to relate hydrologic responses to specific catchment characteristics, such as the dimensions of the main stream and basin area, through applications of GIS or remote sensing (RS) software (Jena and Tiwari, 2006). Due to the spatial discretization in the SWAT model, the model parameters are grouped into three levels: (1) basin level parameters are fixed for the whole catchment; (2) sub-basin level parameters vary across sub-basins; (3) HRU level parameters are distributed in different HRUs.
By default, the UH-specific parameter in the SWAT model is programmed on the basin level, which means that spatial variation within a catchment is disregarded when adjusting the shape of the UH in each sub-basin. Given the spatial heterogeneity of the catchment, the application of this basin level adjustment parameter seems to be rather unconvincing. Moreover, because a great deal of research has primarily focused on daily, monthly or yearly simulations using the SWAT model, little effort has actually been devoted to demonstrating the usage of the UH method in the SWAT model.

Table 1. SWAT model input data and sources for the Wangjiaba (WJB) catchment.

This study developed a method to perform event-based flood simulation on a sub-daily timescale based on the SWAT model and simultaneously improved the UH method used in the original SWAT model in the upper reaches of the Huaihe River in China. SWAT is an open-source model, which makes it possible to produce such a modification. The source code of SWAT2005 has an internal auto-calibration module, and such an integrated design of model simulation and auto-calibration is easy to manage and modify since there is no need to couple external optimization algorithms. The accessible SWAT2009 (rev. 528) and SWAT2012 (rev. 664) have removed the auto-calibration routines; however, an independent program, SWAT-CUP (Abbaspour et al., 2007), is provided instead. Admittedly, many improvements have been made from SWAT2005 to the latest SWAT2012. According to the SWAT model updates in Seo et al. (2014), the major enhancements focused on the water quality modeling components, whereas the runoff modeling components in new SWAT versions were not so far different from those in SWAT2005. This study was specific to the model modifications in runoff simulation; thus, SWAT2005 was considered to be appropriate. There are some other model modification studies (Dechmi et al., 2012; Jeong et al., 2010) based on the SWAT2005 version.
Figure 1. The Wangjiaba (WJB) catchment.

2 Study area and data

## 2.1 Study area

The Huaihe River basin (30°55′–36°36′ N, 111°55′–121°25′ E) is situated in the eastern part of China. The Wangjiaba (WJB) catchment is situated within the upper reaches of the Huaihe River basin and was chosen as the study area for this paper (see Fig. 1). The WJB catchment has a drainage area of 30 630 km2, with a long channel reaching from the source region to the WJB outlet. The southwestern upstream catchment is characterized as a mountain range with a maximum elevation of 1110 m above sea level. The central and eastern downstream regions are dominated by plains. The study catchment lies in a subtropical zone with an annual average temperature of 15 °C. The long-term average annual rainfall varies from 800 mm in the north to 1200 mm in the south. Since the catchment is dominated by a monsoon climate, approximately 60 % of the annual rainfall is received during the flood season ranging from mid-May to mid-October. Severe rainfall events within the study area typically transpire during the summer, frequently resulting in severe floods (Zhao et al., 2011).

## 2.2 Model dataset

To construct and execute the SWAT model, a digital elevation model (DEM), together with land use and soil type data, is required. Climate data, including rainfall, temperature, wind speed, etc., are also used. Table 1 lists the model data used in this study. The DEM data in this study were downloaded from the website of the US Geological Survey (USGS) with a spatial resolution of 90 m. The study catchment was divided into 136 sub-basins according to the catchment delineation, as shown in Fig. 1. A land use map was produced from the Global Land Cover 2000 (GLC2000) data product with a grid size of 1 km (Bartholomé and Belward, 2005).
Six categories of land use were identified for this catchment: agricultural land (80.51 %), forest-deciduous (6.76 %), forest-evergreen (2.26 %), range-brush (1.09 %), range-grasses (8.09 %), and water (1.29 %). Soil data were obtained from the Harmonized World Soil Database (HWSD) with a spatial resolution of 30 arc-seconds. The HWSD also provides an attribute database that contains the physico-chemical characteristics of soils worldwide (FAO et al., 2012). Since the built-in soil database within the SWAT model does not cover the study area, additional soil parameters were calculated using the method proposed by Jiang et al. (2014). Soil reclassification in the study area was in accordance with the FAO-90 soil system. Consequently, Eutric Planosols and Cumulic Anthrosols are the two main soil types, with area percentages of 24.71 % and 19.95 %, respectively. The SWAT model includes a weather generator (WXGEN) to fill in missing climate data using monthly statistics. Relative humidity, wind speed, solar radiation and the minimum and maximum air temperatures were obtained from the Climate Forecast System Reanalysis (CFSR), which was designed based on the forecast system of the National Centers for Environmental Prediction (NCEP) to provide estimates of a set of climate variables from 1979 to the present day. There were 30 weather stations included in the study catchment. A dense rain gauge network consisting of 138 gauges is distributed throughout the study area as illustrated in Fig. 1. By default, the SWAT structure allows only one rainfall input for each delineated sub-basin. Thus, sub-basins without an available rain gauge are automatically assigned the nearest one. For sub-basins with multiple rain gauges, the Thiessen polygon method (Thiessen, 1911) was utilized to derive the rainfall input.
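As a minimal illustration of the Thiessen polygon weighting mentioned above: each point of the sub-basin is assigned to its nearest gauge, so on a fine grid the polygon areas can be approximated by nearest-gauge counts. The gauge coordinates and rainfall totals below are made-up values, not data from the WJB network:

```python
import math

# Hypothetical gauge locations (x, y) in km and one storm's totals in mm.
gauges = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]
rain = [12.0, 30.0, 21.0]

def thiessen_weights(gauges, nx=101, ny=81, xmax=10.0, ymax=8.0):
    """Approximate Thiessen weights: fraction of a rectangular sub-basin
    lying in each gauge's polygon, via nearest-gauge grid assignment."""
    counts = [0] * len(gauges)
    for i in range(nx):
        for j in range(ny):
            x, y = xmax * i / (nx - 1), ymax * j / (ny - 1)
            k = min(range(len(gauges)),
                    key=lambda g: math.hypot(x - gauges[g][0],
                                             y - gauges[g][1]))
            counts[k] += 1
    return [c / (nx * ny) for c in counts]

weights = thiessen_weights(gauges)
mean_rain = sum(w * r for w, r in zip(weights, rain))  # areal rainfall, mm
```

A real application would clip the grid to the delineated sub-basin boundary instead of a rectangle; the weighting logic is unchanged.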
Rainfall is the main driving force for hydrological models, and therefore accurate representation of spatially distributed rainfall is essential in hydrological modeling. Cho et al. (2009) compared three different methods to incorporate spatially variable rainfall into the SWAT model and recommended the Thiessen polygon approach in catchments with high spatial variability of rainfall due to its robustness to catchment delineation. Daily observed rainfall data were retrieved from 1991 to 2010 with coverage during the entire year, while sub-daily (Δt=2 h) rainfall data are only available for several flood events from May to September within the same time span.

3 Methodologies

## 3.1 Development of a sub-daily event-based SWAT model

The original SWAT model was designed for continuous simulations using a daily time step. The SWAT model operates most effectively for the prediction of long-term hydrological responses to land cover changes or soil management practices with a daily time step (Jeong et al., 2011). When faced with flood simulation issues, a finer timescale is required to realistically capture the instantaneous changes representative of flood processes.

Figure 2. SWAT-EVENT model for the simulation of event-based flood data based on the initial conditions extracted from daily simulation results produced by the original SWAT model.

Therefore, the original daily simulation-based SWAT model first needs to be modified in order to perform sub-daily simulations. In a previous study, the sub-daily and even the sub-hourly modeling capacities of the SWAT model were developed to allow flow simulations with any time step less than a day (Jeong et al., 2010). In the original SWAT model, the surface runoff lag was estimated by a first-order lag equation, which was represented by a function of the concentration time and the lag parameter. However, this lag equation was implicitly fixed with a daily time interval. Jeong et al.
(2010) then introduced the simulation time interval into the lag equation to lag a fraction of the surface runoff at the end of each time step. In addition, channel and impoundment routings were also estimated at the operational time interval, while other processes such as base flow and evapotranspiration were calculated by equally dividing the daily results over the time steps. In this study, the modifications from daily modeling to sub-daily modeling followed the methods proposed by Jeong et al. (2010). Second, the modified sub-daily SWAT model must be applied in such a manner as to simulate individual flood events rather than to simulate in a continuous way, as performed in the original SWAT model. Event-based sub-daily flood modeling is necessary for these reasons: (1) to give modelers detailed information on upcoming floods and (2) to potentially conduct flood simulation within a watershed without possessing continuously recorded hydrologic data at a short time step. To enable the SWAT model to simulate individual flood events, the original source codes were modified and compiled into a new version known as SWAT-EVENT. In the source code of SWAT2005, the "simulate" subroutine contains the loops governing the hydrological processes following the temporal marching during the entire simulation period. Here, the continuous yearly loop was recast over a set of flood events; meanwhile, the continuous daily loop was broken into flood events according to their specific starting and ending dates. However, the event-based modeling requires a separate method to derive the antecedent conditions of model states. The combination of daily continuous modeling and sub-daily event-based modeling was used in this study (Fig. 2). A continuous daily rainfall sequence was imported into the original SWAT model to independently perform long-term daily simulations.
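The first-order surface runoff lag mentioned above can be sketched as follows. The release fraction 1 − exp(−surlag/t_conc) follows the form given in the SWAT theoretical documentation for the daily time step; the function, pulse values, and parameter choices here are illustrative assumptions, not code from the SWAT or SWAT-EVENT source (the sub-daily variant of Jeong et al., 2010, rescales the exponent by the simulation time interval):

```python
import math

def lag_surface_runoff(generated, storage, surlag, t_conc):
    """First-order surface runoff lag (daily SWAT form, as a sketch).

    generated : runoff generated this step (mm)
    storage   : runoff still held back from previous steps (mm)
    surlag    : surface runoff lag coefficient (calibration parameter)
    t_conc    : time of concentration (h)
    Returns (released runoff, updated storage).
    """
    frac = 1.0 - math.exp(-surlag / t_conc)  # fraction released this step
    available = generated + storage
    released = available * frac
    return released, available - released

# Route a single 10 mm runoff pulse; later steps drain the stored remainder.
storage, hydrograph = 0.0, []
for gen in [10.0, 0.0, 0.0, 0.0]:
    q, storage = lag_surface_runoff(gen, storage, surlag=4.0, t_conc=6.0)
    hydrograph.append(q)
```

Mass is conserved by construction: the released ordinates plus the remaining storage always sum to the generated pulse, which is why the lag only reshapes the hydrograph.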
In the SWAT model, two further subroutines, "varinit" and "rchinit", initialize the daily simulation variables for the land phase of the hydrologic cycle and for channel routing, respectively. In the SWAT-EVENT model, condition judgments were added to these two initialization subroutines: when the simulation is at the beginning of a given flood event, the antecedent soil moisture and antecedent reach storage are set equal to the respective values extracted from the long-term daily simulations of the original SWAT model; otherwise, they are updated from the SWAT-EVENT simulation states of the previous day.

Figure 3. Shape of the dimensionless triangular UH.

## 3.2 Application of unit hydrographs with distributed parameters

The dimensionless UH method employed in the SWAT model exhibits a triangular shape (SCS, 1972), as shown in Fig. 3, wherein the time t (h) is plotted on the x axis and the ratio of the discharge to the peak discharge on the y axis. This UH is defined as

$$
q_{\mathrm{uh}} =
\begin{cases}
\dfrac{t}{t_{\mathrm{p}}}, & t \le t_{\mathrm{p}},\\[6pt]
\dfrac{t_{\mathrm{b}} - t}{t_{\mathrm{b}} - t_{\mathrm{p}}}, & t > t_{\mathrm{p}},
\end{cases}
\qquad (1)
$$

where quh is the unit discharge at time t, tp is the time to peak (h), and tb is the time base (h). The dimensionless UH is then obtained by dividing by the area enclosed by the triangle (Jeong et al., 2010).
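As a minimal sketch (our illustration, not SWAT source code), the UH ordinate of Eq. (1) can be evaluated as:

```python
def triangular_uh(t, t_p, t_b):
    """Ordinate of the dimensionless triangular UH (Eq. 1): a linear
    rise up to the time to peak t_p, then a linear fall down to the
    time base t_b (all times in hours)."""
    if t <= t_p:
        return t / t_p
    if t <= t_b:
        return (t_b - t) / (t_b - t_p)
    return 0.0

# A unit-area UH follows by dividing the ordinates by the triangle
# area t_b / 2, as described in the text. Sample values are illustrative.
ordinates = [triangular_uh(t, 6.0, 16.0) for t in range(0, 17, 2)]
```

The ordinate peaks at 1 when t = tp and returns to 0 at t = tb, reproducing the shape in Fig. 3.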
There are two time factors determining the shape of the triangular UH, defined by the following equations:

$$t_{\mathrm{b}} = 0.5 + 0.6 \cdot t_{\mathrm{c}} + t_{\mathrm{adj}}, \qquad (2)$$
$$t_{\mathrm{p}} = 0.375 \cdot t_{\mathrm{b}}, \qquad (3)$$

where tc is the concentration time for the sub-basin (h), and tadj is a shape adjustment factor for the UH (h) (Neitsch et al., 2011).

Table 2. Geographic features of sub-basins for the Wangjiaba (WJB) catchment.

The time of concentration tc can be calculated from the geographic characteristics of the sub-basin considered, with tc given by the sum of the overland flow time tov (h) and the channel flow time tch (h):

$$t_{\mathrm{c}} = t_{\mathrm{ov}} + t_{\mathrm{ch}}, \qquad (4)$$
$$t_{\mathrm{ov}} = \frac{L_{\mathrm{slp}}^{0.6} \cdot n^{0.6}}{18 \cdot S_{\mathrm{sub}}^{0.3}}, \qquad (5)$$
$$t_{\mathrm{ch}} = \frac{0.62 \cdot L \cdot n^{0.75}}{A^{0.125} \cdot S_{\mathrm{ch}}^{0.375}}, \qquad (6)$$

where Lslp is the average slope length for the sub-basin under consideration (m); n is the Manning coefficient for the sub-basin; Ssub is the average slope steepness of the sub-basin (m m−1); L is the longest tributary length in the sub-basin (km); A is the area of the sub-basin (km2); and Sch is the average slope of the tributary channels within the sub-basin (m m−1).

Table 3. Parameters and parameter ranges used in the sensitivity analysis, and the final ranks of the sensitivity analysis results. a These parameters are varied by multiplying by a ratio (%) within the range. b These parameters are varied by adding or subtracting a value within the range.

Figure 4. Effect of a basin level UH parameter tadj on the CV of the UH time base tb.
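Equations (2)–(6), together with the homogenizing effect of a lumped tadj on the spread of tb (cf. Fig. 4), can be sketched as follows; the sample concentration times are illustrative values, not catchment data:

```python
def concentration_time(L_slp, n, S_sub, L, A, S_ch):
    """Concentration time t_c = t_ov + t_ch (Eqs. 4-6)."""
    t_ov = (L_slp ** 0.6 * n ** 0.6) / (18.0 * S_sub ** 0.3)      # Eq. (5)
    t_ch = (0.62 * L * n ** 0.75) / (A ** 0.125 * S_ch ** 0.375)  # Eq. (6)
    return t_ov + t_ch

def uh_time_factors(t_c, t_adj=0.0):
    """UH time base t_b (Eq. 2) and time to peak t_p (Eq. 3)."""
    t_b = 0.5 + 0.6 * t_c + t_adj
    return t_b, 0.375 * t_b

def cv(values):
    """Coefficient of variation: standard deviation over mean."""
    m = sum(values) / len(values)
    return (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5 / m

# Hypothetical sub-basin concentration times (h) with a strong spread:
# adding the same lumped t_adj to every t_b shrinks the relative spread.
t_c = [1.0, 3.0, 6.0, 12.0, 24.0]
cv_lumped_0 = cv([uh_time_factors(t, 0.0)[0] for t in t_c])
cv_lumped_30 = cv([uh_time_factors(t, 30.0)[0] for t in t_c])
assert cv_lumped_30 < cv_lumped_0
```

Because a lumped tadj shifts every tb by the same constant, the CV of tb necessarily drops as tadj grows, which is the homogenization effect plotted in Fig. 4.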
According to the catchment discretization, Table 2 reveals obvious spatial differences in the geographical attributes among sub-basins. For instance, the values of the sub-basin area A vary from 0.09 to 879.16 km2, with a coefficient of variation (CV) of 0.74. The average slope of the sub-basin Ssub and the average slope of the tributary channels Sch are topography-related parameters, showing much higher values in source sub-basins than in downstream sub-basins; spatially, the CV values of Ssub and Sch in Table 2 are 1.28 and 1.18. As a result, the overland flow time tov and the channel flow time tch, which are affected by all these geographical attributes, are non-homogeneous in their spatial distribution, especially tch, with a CV value of 0.91. Since the channel flow time tch dominates the concentration time tc, the CV of tc is 0.81 in Table 2.

According to Eq. (2), the time base of the UH (tb) is determined by both the concentration time for the sub-basin (tc) and the shape adjustment factor (tadj) concurrently. However, the UH parameter tadj in Eq. (2) is a basin level parameter possessing a single lumped value for all sub-basins, meaning that the spatial heterogeneity of tb may be homogenized. Hypothetically, the CV value of tb would decrease from 0.72 to 0.09 as the UH parameter tadj increases from 0 to 30 h (Fig. 4). Generally, the time base of the triangular UH (tb) should be reduced to produce an increased peak flow for steep and small sub-basins, or increased to produce a decreased peak flow for flat and large sub-basins. Thus, the shape adjustment parameter tadj was moved from the basin level to the sub-basin level and renamed tsubadj, which allowed the UHs to be adjusted independently with distributed values.

## 3.3 Model calibration and validation

### 3.3.1 Sensitivity analysis

Sensitivity analysis is a process employed to identify the parameters that significantly influence model performance (Holvoet et al., 2005).
Generally, sensitivity analysis takes priority over the calibration process to reduce the complexity of the latter (Sudheer et al., 2011). Here, a combined Latin hypercube and one-factor-at-a-time (LH-OAT) sampling method embedded within the SWAT model (Griensven et al., 2006) was used to conduct the sensitivity analysis. The LH-OAT method first subdivides each parameter range into N strata with a probability of 1/N. Sampling points are randomly generated so that each parameter is sampled only once in each stratum. The local sensitivity of a parameter at one sampling point is then calculated as

$$
S_{ij} = 200 \cdot \left| \frac{y(\theta_1, \ldots, \theta_i + \Delta_i, \ldots, \theta_P) - y(\theta_1, \ldots, \theta_P)}{\left[ y(\theta_1, \ldots, \theta_i + \Delta_i, \ldots, \theta_P) + y(\theta_1, \ldots, \theta_P) \right] \cdot \Delta_i} \right|, \qquad (7)
$$

where Sij is the partial effect of parameter θi at the LH sampling point j; y is the model output (or objective function); Δi is the perturbation of parameter θi; and P is the number of parameters. The final sensitivity index Si for the parameter θi is derived by averaging these partial effects over all loops at the LH points (i.e., N loops). The greater the Si, the more sensitive the model response is to that particular parameter.

Figure 5. Comparisons between the observed and simulated daily discharges for the calibration (a) and validation (b) periods at WJB.

Figure 6. Comparisons between the observed and simulated sub-daily flood events for the calibration period at WJB.
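The partial-effect calculation of Eq. (7) can be sketched with a toy objective function; the model below is purely hypothetical, chosen so that the more heavily weighted parameter comes out as more sensitive:

```python
def partial_effect(y, theta, i, delta=0.05):
    """Local OAT partial effect S_ij of parameter i at one LH point
    (Eq. 7), using a relative perturbation delta of theta[i]."""
    perturbed = list(theta)
    perturbed[i] = theta[i] * (1.0 + delta)
    y0, y1 = y(theta), y(perturbed)
    return 200.0 * abs((y1 - y0) / ((y1 + y0) * delta))

# Toy "objective function": a weighted sum, so the heavily weighted
# first parameter should register as the more sensitive one.
model = lambda p: 3.0 * p[0] + 1.0 * p[1]
s0 = partial_effect(model, [1.0, 1.0], 0)
s1 = partial_effect(model, [1.0, 1.0], 1)
assert s0 > s1
```

In LH-OAT proper, these partial effects would be averaged over all Latin hypercube loops to obtain the final index Si.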
Figure 7. Comparisons between the observed and simulated sub-daily flood events for the validation period at WJB.

It is highly recommended to identify the model parameters that can represent the hydrological characteristics of the specific catchment before blindly applying sensitivity analysis. Based on reviews of SWAT model applications (Griensven et al., 2006; Cibin et al., 2010; Roth and Lemann, 2016) and an analysis of the SWAT model parameters, a total of 16 parameters related to streamflow simulation in the study area were included in the sensitivity analysis (see Table 3) for daily simulation with the SWAT model. For the event-based sub-daily flood simulation with the SWAT-EVENT model, the additional distributed UH parameter tsubadj (i.e., a total of 17 model parameters) was also considered. For both models, the objective function y in Eq. (7) was the residual sum of squares between the simulated and measured streamflow. Specifically, the sensitivity analysis of the SWAT model was conducted not only for the long-term period but also for the same flood periods as the SWAT-EVENT simulation. According to the sensitivity ranks of Si, the upper-middle ranking parameters were used in the calibration procedure, while the other parameters were kept at their default values.

### 3.3.2 Daily calibration and validation with the SWAT model

Before a hydrological model can be applied effectively, a calibration process that estimates the model parameters minimizing the errors between the observed and simulated results is usually necessary. The Shuffled Complex Evolution (SCE-UA) algorithm (Duan et al., 1992) is a global optimization technique that is incorporated as a module in the SWAT model.
The SCE-UA algorithm has been applied to multiple physically based hydrological models (Sorooshian et al., 1993; Luce and Cundy, 1994; Gan and Biftu, 1996) and has exhibited good performance comparable to other global search procedures (Cooper et al., 1997; Thyer et al., 1999; Kuczera, 1997; Jeon et al., 2014). Daily simulations were performed from 1990 to 2010 using daily observed data at the outlet of WJB. In this phase, the SWAT model was run in two ways: calibrating over the long-term period and calibrating over the flood periods. For the long-term case, one year (1990) was used as the model warm-up period, the period from 1991 to 2000 was used for model calibration, and the remaining data from 2001 to 2010 were employed for validation. For the flood period calibration, the difference was that the objective function covered only several flood events, consistent with the SWAT-EVENT application.

Table 4. Calibrated parameter values for the SWAT model and the SWAT-EVENT model. * The final values of these parameters are derived by applying the percentage change (%) to their default values. For example, parameter CN2 with a calibrated value of 15.98 means that the default values are multiplied by (1 + 15.98 %) to obtain the optimal results.

Table 5. SWAT model performance statistics for long-term period calibration and flood period calibration.

Multiple statistics, including the Nash–Sutcliffe efficiency coefficient (ENS) (Nash and Sutcliffe, 1970), the ratio of the root mean square error to the standard deviation of the measured data (RSR) (Singh et al., 2005), and the percent bias (PBIAS) (Gupta et al., 1999), were selected in this study to evaluate the daily model performance, as shown in Eqs. (8), (9), and (10).
The ENS provides a normalized statistic indicating how closely the observed and simulated data match each other, with a value of 1 implying an optimal model performance in which the simulated flow perfectly matches the observed flow. The RSR index standardizes the root mean square error by the standard deviation of the observations, varying from 0 to a positive value; the optimal value of RSR is 0, indicating a perfect simulation. The PBIAS measures the degree to which the simulated data deviate from the observed data.

$$E_{\mathrm{NS}} = 1 - \frac{\sum_{i=1}^{n} \left( Q_{\mathrm{obs}}(i) - Q_{\mathrm{sim}}(i) \right)^2}{\sum_{i=1}^{n} \left( Q_{\mathrm{obs}}(i) - \overline{Q_{\mathrm{obs}}} \right)^2}, \qquad (8)$$
$$R_{\mathrm{SR}} = \frac{\sqrt{\sum_{i=1}^{n} \left( Q_{\mathrm{obs}}(i) - Q_{\mathrm{sim}}(i) \right)^2}}{\sqrt{\sum_{i=1}^{n} \left( Q_{\mathrm{obs}}(i) - \overline{Q_{\mathrm{obs}}} \right)^2}}, \qquad (9)$$
$$P_{\mathrm{BIAS}} = \frac{\sum_{i=1}^{n} \left( Q_{\mathrm{obs}}(i) - Q_{\mathrm{sim}}(i) \right) \cdot 100}{\sum_{i=1}^{n} Q_{\mathrm{obs}}(i)}, \qquad (10)$$

where Qobs(i) is the ith observed streamflow (m3 s−1); Qsim(i) is the ith simulated streamflow (m3 s−1); and n is the length of the time series.

Table 6. Performance evaluations for the daily SWAT model calibrated only for flood periods, and the sub-daily SWAT-EVENT model performances with sub-basin level UH parameters and basin level UH parameters.

### 3.3.3 Event-based sub-daily calibration and validation with the SWAT-EVENT model

In this study, the SWAT-EVENT model employed the same built-in automatic calibration subroutine as the SWAT model.
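The three daily evaluation statistics defined in Eqs. (8)–(10) can be sketched as follows (our illustration, with toy flow series rather than WJB data):

```python
def evaluation_stats(obs, sim):
    """Nash-Sutcliffe efficiency E_NS (Eq. 8), RSR (Eq. 9) and percent
    bias P_BIAS (Eq. 10) for paired observed/simulated flow series."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))  # error sum of squares
    ssd = sum((o - mean_obs) ** 2 for o in obs)        # spread of observations
    e_ns = 1.0 - sse / ssd
    rsr = (sse / ssd) ** 0.5
    p_bias = 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)
    return e_ns, rsr, p_bias

# A perfect simulation gives the optimal values E_NS = 1, RSR = 0, P_BIAS = 0.
e_ns, rsr, p_bias = evaluation_stats([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
```

Note that a uniform overestimation yields a negative PBIAS under the sign convention of Eq. (10), matching the usage in Table 5.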
Sub-daily simulations with the SWAT-EVENT model were conducted over the same time span as the daily simulation, focusing on the flood season with a series of 24 flood events, two-thirds of which were used for calibration while the rest were used for validation. Daily calibration was carried out first, and the antecedent conditions were extracted from it.

Figure 8. Comparisons of the daily simulations conducted using the SWAT model and the aggregated sub-daily simulations conducted using the SWAT-EVENT model.

Figure 9. Comparisons between sub-basin level and basin level UH parameter cases for relative peak discharge error (a) and relative peak time error (b).

ENS, the relative peak discharge error (ERP), the relative peak time error (ERPT), and the relative runoff volume error (ERR) were selected as performance evaluation statistics for the flood event simulations, in compliance with the Accuracy Standard for Hydrological Forecasting in China (MWR, 2008). ERP, ERPT, and ERR are specific indicators of whether the simulation accuracy reaches the national standard (MWR, 2008); a simulation is considered qualified when their absolute values are less than 20 %, 20 %, and 30 %, respectively.

4 Results

## 4.1 Sensitivity analysis results

The sensitivity results for daily simulation with the SWAT model are listed in Table 3. The sensitivity rank of each parameter shows only small differences between the two analysis periods for the SWAT simulation, with changes in all parameter ranks of less than three. According to a previous study (Cibin et al., 2010), the sensitivity of SWAT parameters varies among low, medium, and high streamflow regimes. The long-term period analysis in Table 3 covers different flow regimes but presents almost the same sensitivity ranks as the flood period case, indicating that high streamflow dominates the sensitivity results in the long-term period analysis.
Unexpectedly, compared with the long-term analysis, the initial SCS runoff curve number (CN2) shows less effect on streamflow output during flood periods, whereas the groundwater parameter ALPHA_BF becomes more sensitive in the high streamflow regime. As reported by Bondelid et al. (2010), the effect of CN2 variation on surface runoff yield decreases as rainfall increases, especially for larger storm events. Bondelid et al. (2010) further explained that the proportion of rainfall going into initial abstraction and infiltration decreases with increasing rainfall, so the proportional change in surface runoff associated with a unit change in CN2 also decreases. Furthermore, in a previous sensitivity study with the SWAT model (Cibin et al., 2010), the parameter CN2 was found to be less important in wet year simulations than in the entire simulation, with the greatest sensitivity index of CN2 found at low flow. There is thus reason to believe that the sensitivity ranking of CN2 is reduced in the flood period analysis in Table 3. Instead, in this case, the model output changes resulting from the perturbation of parameter ALPHA_BF become more prominent, as more water recharges the shallow aquifer, and the parameter ALPHA_BF strongly influences the groundwater response to changes in recharge (Sangrey, 1984). Considering that the shallow aquifer in the Huaihe River basin has good drainage conditions (Zuo et al., 2006), a relatively high value of ALPHA_BF is expected in this study. Generally, the seven identified sensitive parameters of the daily SWAT model cover the main hydrological processes, i.e., channel routing (CH_N2 and CH_K2), runoff (SURLAG and CN2), groundwater (ALPHA_BF), evaporation (ESCO), and soil water (SOL_AWC), for both the long-term period and the flood period.
According to Table 3, both the year-round streamflow and the high streamflow are most sensitive to CH_N2, which has the top sensitivity rank. Table 3 also presents the sensitivity results for event-based flood simulation with the SWAT-EVENT model at a sub-daily timescale. The sensitivity of some parameters differs widely from their behavior in the flood period analysis with the daily SWAT model: the sensitivity ranks of BLAI, CH_K2, ESCO, SOL_K, and SURLAG changed by more than five positions, which could be caused by the differences in hydrological simulation between the SWAT model and the SWAT-EVENT model. It is noteworthy that the UH parameter tsubadj, peculiar to the SWAT-EVENT model, significantly influences the event-based flood simulation at sub-daily timescales, with a sensitivity ranking of three in Table 3. Despite the differences between the daily SWAT model and the sub-daily SWAT-EVENT model, the parameter CH_N2 is recognized as the most important parameter for both models. In general, the top eight sensitive parameters (ALPHA_BF, CH_N2, CN2, GWQMN, SOL_AWC, SOL_K, SOL_Z, and tsubadj) are considered to influence the event-based sub-daily flood simulation significantly.

## 4.2 Daily simulation results

The final calibrated parameters for daily simulation with the SWAT model are presented in Table 4, and the model performances for daily streamflow simulation at the WJB outlet are summarized in Table 5. For the long-term calibration, the ENS value is 0.76 for the calibration period and 0.80 for the validation period; both exceed 0.75, which is considered "very good" according to the performance ratings for evaluation statistics recommended by Moriasi et al. (2007).
The daily RSR values are 0.49 and 0.44 for calibration and validation, respectively, indicating that the root mean square error is less than half the standard deviation of the measured data, i.e., a "very good" model performance as suggested by Moriasi et al. (2007). The SWAT model overestimates the streamflow by 5.72 % for calibration while underestimating it by 8.38 % for validation; the calculated PBIAS results in Table 5 also attain the "very good" rating. Visual comparisons between the observed and simulated streamflows for both the calibration and validation periods are shown in Fig. 5, from which it can be seen that the SWAT model simulates the temporal variation of long-term streamflow well at a daily timescale. In general, the daily simulation results obtained with the SWAT model at WJB demonstrate good applicability and consequently provide a sound basis for further flood event simulation. For the event period calibration and validation, all statistical criteria in Table 5 indicate a high accuracy of the daily SWAT model for flood period simulation.

## 4.3 Event-based simulation results

Table 4 shows the optimum parameter values used in the SWAT-EVENT model simulation. The sub-daily simulation results for the 24 flood events, shown in Table 6, exhibit reliable performance of the SWAT-EVENT model, with ENS values varying from 0.67 to 0.95; the qualified ratios of ERP, ERPT, and ERR are 75 %, 95.8 %, and 91.6 %, respectively. The observed and simulated sub-daily flood hydrographs are displayed in Figs. 6 and 7. It is clear that the SWAT-EVENT model can accurately simulate the sub-daily flood events, except for event 20020722. Moreover, for specific floods (i.e., 19960628, 19980725, 20050707, and 20070701), the SWAT-EVENT model performs remarkably well in simulating flood events with multiple peaks.
Table 6 also displays the performance of the daily simulation results using the SWAT model for the flood periods. All daily ENS values are lower than the sub-daily ones, indicating that the flood hydrographs simulated by the sub-daily SWAT-EVENT model are much more reliable than those simulated by the daily SWAT model. In addition, the peak flows simulated by the SWAT-EVENT model at a sub-daily timescale are much closer to the observed flows than the predictions obtained from the SWAT model at a daily timescale, especially for the flood events with high peak flows in Table 6. Eight flood events (19910610, 19910629, 19960628, 20020622, 20030622, 20050707, 20050822, and 20070701) exhibit peak flows greater than 5000 m3 s−1. The sub-daily simulation results of these eight floods were aggregated into daily averages and compared with the daily simulations, as illustrated in Fig. 8; it can be concluded that the daily simulations are likely to miss the high flood peaks. The better performance of the SWAT-EVENT model can be attributed to rainfall data with a higher temporal resolution and model calculation at more detailed time steps, which capture the rapid changes characteristic of flood processes. All statistical indicators suggest that the SWAT-EVENT model can accurately reproduce the dynamics of observed flood events based on antecedent conditions extracted from SWAT daily simulations.

## 4.4 Effects of the UH parameter level on SWAT-EVENT model performances

To analyze the effects of the UH parameter level on SWAT-EVENT model simulations, the default lumped UH parameter tadj was calibrated while the other parameters remained unchanged, exactly as in the calibration of the sub-basin level case in Table 4.
The optimized basin level UH parameter (tadj) has a uniform value of 15.75 h for all sub-basins, while the sub-basin level UH parameters (tsubadj) are distributed among sub-basins, ranging from 4.81 to 120.33 h. As a consequence, the optimized tsubadj values place the base time (tb) and the peak time (tp) of the UHs within the ranges of 6.13–141.34 and 2.30–53.00 h, respectively, whereas for the basin level UH parameter case, the values of tb and tp fall in a relatively narrow range, i.e., 17.07–36.76 h for tb and 6.40–13.78 h for tp. Of greater concern, according to Fig. 4, the CV value of tb or tp is reduced to less than 0.2, meaning that the spatial heterogeneity of the UH time factors is homogenized by the constraints between sub-basins when adjusting the basin level UH parameter. As expected, the application of sub-basin level UH parameters keeps the CV value of tb or tp at 0.79, which corresponds closely to the CV value of tc in Table 2. Thus, the spatial inhomogeneity of the geographical features is better represented by the use of sub-basin level UH parameters.

The SWAT-EVENT simulation results using the basin level UH parameter are also presented in Table 4. Compared with the sub-basin level case, the basin level case induces a significant decrease in the qualified ratio of ERPT, from 95.8 % to 79.1 %. Intuitive comparisons of the relative peak discharge error (ERP) and the relative peak time error (ERPT) under both UH parameter levels can be found in Fig. 9. Moving from the sub-basin level UH case to the basin level UH case, more than half of the 24 flood events show increased peak discharge error, and nearly all of them show increased peak time error. Thus, it can be concluded that changing the spatial level of the UH parameter affects the flood peak simulations significantly, especially the peak time error.
In this procedure, however, the model parameters other than the UH parameter remain fixed, so it is not surprising that there is little change in the specific values of the relative runoff volume error (ERR) between the two cases in Table 4. All these findings indicate that the application of sub-basin level UH parameters in the SWAT-EVENT model can improve the simulation accuracy of flood peaks.

Figure 10. Box plots of ENS values for the SWAT-EVENT model results for sub-basin level UH parameters and basin level UH parameters.

The overall distributions of the ENS statistics for the flood events under the two UH methods (i.e., the basin level UH parameter vs. the sub-basin level UH parameters) are plotted in Fig. 10. The box plots exhibit rectangle heights equal to the interquartile range (IQR), the upper and lower ends of which are marked with the upper and lower quartile values, respectively; the median is represented by a line transecting each rectangle, and the extended whiskers denote the range of the data (Massart et al., 2005; Cox, 2009). According to Table 4 and Fig. 10, the SWAT-EVENT model using sub-basin level UH parameters demonstrates improvements for event-based flood simulation. For the sub-basin level case in Fig. 10, half of the ENS values range from 0.83 (lower quartile) to 0.91 (upper quartile), with a median of 0.87, which can potentially reach the second flood forecasting accuracy standard (i.e., B) according to the MWR (MWR, 2008). However, the basin level case performs comparatively poorly in reproducing the flood hydrographs, with the majority of ENS values varying between 0.78 and 0.88. In comparison, the application of spatially distributed UH parameters allows the SWAT-EVENT model to simulate the flood events more accurately.

5 Discussion

## 5.1 Sub-daily simulation vs. daily simulation

Floods are usually triggered by intense rainfall events of short duration.
To adequately capture and analyze the rapid response of flood events, a simulation time step at sub-daily resolution is preferred. Normally, an appropriate simulation time step is chosen depending on the catchment response time to a rainfall event. According to the catchment delineation and the geographical features of the sub-basins in Table 2, the average concentration time of the sub-basins is generally less than 24 h. Moreover, considering the time interval of the observed data acquisition (i.e., 2 to 6 h), the 2 h simulation step chosen in this study was sufficient for flood simulation. The remarkable performance of the sub-daily SWAT-EVENT model for peak flow simulations (as shown in Table 6 and Fig. 8) confirms the advantage of using a sub-daily time step in simulating flood hydrographs.

In this study, daily surface runoff was calculated using the SCS curve number method in the SWAT model, whereas sub-daily surface runoff was calculated using the Green–Ampt infiltration method in the SWAT-EVENT model. Comparing these two methods, King et al. (1999) argued that the advantage of the Green–Ampt method is its consideration of sub-daily rainfall intensity and duration; a rainstorm may not be fully represented by the total daily rainfall used in the SCS method because of its high temporal variability. Beyond that, as stated by Jeong et al. (2010), simulating the physically based hydrological processes at a short timescale contributes to the model's simulation accuracy.

## 5.2 Event-based simulation vs. continuous simulation

Pathiraja et al. (2012) argued that continuous simulation for design flood estimation is becoming increasingly important. Nevertheless, from an operational flood simulation and prediction perspective, many end users and practitioners still favor event-based models (Coustau et al., 2012; Berthet et al., 2009).
The emphasis on event-based modeling in this study was due to the unavailability of long continuous hydrological data at a sub-daily timescale. Such data scarcity has also promoted the application of event-based models in some developing countries (Hughes, 2011; Tramblay et al., 2012). More broadly, the event-based approach is preferred when a hydrological model is used to investigate the effect of heavy rainfall on environmental problems such as soil erosion and contaminant transport (Maneta et al., 2007).

Several studies have shown that a catchment's antecedent moisture conditions prior to a flood event can strongly influence the flood response, including the flood volume, the peak flow, and its duration (Rodríguez-Blanco et al., 2012; Tramblay et al., 2012; Coustau et al., 2012). However, the major drawback of event-based models lies in their initialization: external information is needed to set the antecedent conditions of a catchment (Berthet et al., 2009; Tramblay et al., 2012). To address this initialization issue, several approaches have been used to set up the initial conditions of event-based models, such as in situ soil moisture measurements, soil moisture retrieved from remote sensing products, and continuous soil moisture modeling. Among these methods, continuous soil moisture modeling using daily data series to estimate sub-daily initial conditions is the traditional solution, as suggested by Nalbantis (1995). Tramblay et al. (2012) also tested different estimations of the antecedent moisture conditions of a catchment for an event-based hydrological model and concluded that the continuous daily soil moisture accounting method performed best.

However, there may be some deficiencies in the continuous simulation of the SWAT model in this study. On the one hand, the continuous soil moisture modeling requires long data series and takes a long time to implement.
On the other hand, the continuous SWAT model was calibrated using the sum of squares of the residuals as the objective function, which is more sensitive to high flows than to low flows. As a consequence, the SWAT model ensured simulation accuracy at the expense of low flow performance, which inevitably introduces errors into the estimation of antecedent moisture conditions. As Coustau et al. (2012) stated, event-based models are very convenient for operational purposes if the initial wetness state of the catchment is known with good accuracy. Although the continuous modeling approach used in this study was not a perfect solution for determining the catchment antecedent conditions, it was still an effective preliminary step for the SWAT-EVENT simulations, given the good goodness-of-fit in Figs. 6 and 7. Since the goal of this research was to ascertain the applicability of the newly developed SWAT-EVENT model for event-based flood simulation, a lower performance in calculating the antecedent conditions was considered acceptable. Active microwave remote sensing has proved the feasibility of obtaining temporally and spatially distributed soil moisture data, suggesting a potential interest in using remote sensing data to estimate the initial conditions (Tramblay et al., 2012).

## 5.3 Distributed UH parameters vs. lumped UH parameters

The UH method is used to spread the net rainfall over time and space and represents the most widely practiced technique for determining flood hydrographs. The main difference between the two applications of the UH parameter is, in essence, the method of surface runoff routing within the sub-basins: the sub-basin level application allowed a distributed parameter value for each sub-basin, while the basin level application consistently applied one lumped value to all sub-basins.
All settings of the distributed UH case except the derived UH shape were identical to those of the lumped UH case. Therefore, the difference between the simulations of the two UH parameter cases resulted from the surface runoff routing method. As seen from the aforementioned model performance assessment in Table 6 and Fig. 9, the capability of the SWAT-EVENT model with the basin-level UH parameter for event-based flood simulation was downgraded relative to the sub-basin-level case. Sherman (1932) first proposed the UH concept. However, because the UH proposed by Sherman is based on observed rainfall–runoff data at gauging sites for hydrograph derivation, it is only applicable to gauged basins (Jena and Tiwari, 2006). The frequent lack of observed data motivated the development of the synthetic unit hydrograph (SUH), which extended the application of the UH technique to ungauged catchments. The triangular dimensionless UH used in this study represents the traditional derivation of SUHs, which relates hydrologic responses to the catchment's geographic characteristics according to Eqs. (2)–(6). Therefore, it can be inferred that the shape of the UH should be region-dependent. A lumped UH parameter used for the whole catchment would either sharpen the peak flows in large sub-basins or flatten the peak flows in small sub-basins. On the whole, hydrological behaviors among sub-basins would tend to be homogenized. As indicated in Table 6 and Figs. 9 and 10, the application of the distributed UH parameters had a positive effect on flood simulation. In addition to the triangular dimensionless UH used in this study, there are many other available methods for deriving the SUH. Bhunya et al. (2007) compared four probability distribution functions (pdfs) for developing SUHs and concluded that such statistical distribution methods performed better than the traditional synthetic methods.
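To make concrete how a triangular SUH ties hydrograph shape to catchment characteristics, the sketch below uses the standard SCS triangular-UH relations; since Eqs. (2)–(6) are not reproduced here, the coefficients are textbook SCS values and may differ in detail from the SWAT-EVENT formulation:

```python
# Sketch of the standard SCS triangular unit hydrograph (textbook
# coefficients; the paper's Eqs. 2-6 may differ in detail).
def triangular_uh(area_km2, t_lag_h, dt_h):
    """Return (peak rate in m3/s per mm of runoff, time to peak, base time in h)."""
    t_peak = dt_h / 2.0 + t_lag_h       # time to peak (h)
    t_base = 2.67 * t_peak              # SCS: recession time = 1.67 * t_peak
    q_peak = 0.208 * area_km2 / t_peak  # metric SCS peak-rate coefficient
    return q_peak, t_peak, t_base

# A larger lag time (e.g. a bigger sub-basin) delays and flattens the peak,
# which is why one lumped UH shape cannot fit sub-basins of different sizes:
print(triangular_uh(area_km2=100.0, t_lag_h=3.0, dt_h=2.0)[1])  # 4.0 (h to peak)
print(triangular_uh(area_km2=100.0, t_lag_h=7.0, dt_h=2.0)[1])  # 8.0 (h to peak)
```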
Furthermore, the instantaneous unit hydrograph (IUH) is more capable of mathematically expressing the relationship between the effective rainfall hyetograph and the direct runoff hydrograph in a catchment (Jeng and Coon, 2003). Yao et al. (2014b) improved the flood prediction performance of the Xinanjiang model by coupling it with the geomorphologic instantaneous unit hydrograph (GIUH). Khaleghi et al. (2011) compared the accuracy and reliability of different UH methods and confirmed the high efficiency of the GIUH for flood simulation. There might therefore be room for further improving the current UH method used in the SWAT-EVENT model.

## 6 Conclusions

The original SWAT model was not suited to flood simulation because it was initially designed for long-term simulations with daily time steps. This paper focused mainly on modifying the structure of the original SWAT model to perform event-based flood simulation, which is applicable to areas without continuous long-term observations. The newly developed SWAT-EVENT model was applied in the upper reaches of the Huaihe River. Model calibration and validation were carried out using historical flood events and showed good simulation accuracy. To improve the spatial representation of the SWAT-EVENT model, the lumped UH parameters were then replaced with distributed ones. Calibration and validation results revealed an improvement in event-based simulation performance, especially for flood peak simulation. This study expands the application of the original SWAT model to event-based flood simulation. Event-based runoff quantity and quality modeling has become an important challenge, since the impact of hydrological extremes on water quality is particularly strong. The improvement of the SWAT model for event-based flood simulation in this study lays the foundation for dealing with event-based water quality issues.
The optimal parameters of the SWAT-EVENT model were obtained by the automatic parameter calibration module integrating the SCE-UA algorithm. However, several factors, such as interactions among model parameters, complexities of spatio-temporal scales, and statistical features of model residuals, may lead to parameter non-uniqueness, which is the source of the uncertainty in the estimated parameters. Uncertainty in model parameters ultimately propagates into the model results, leading to certain risks in flood simulation. In the future, emphasis will be placed on the quantification of parameter uncertainty to provide better support for flood operations.

Code and data availability. The DEM data were downloaded from http://srtm.csi.cgiar.org/ (Consortium for Spatial Information, 2017). The land use data (GLC2000) were downloaded from http://www.landcover.org/ (University of Maryland, 2017). The soil data (HWSD) were downloaded from http://webarchive.iiasa.ac.at/Research/LUC/External-World-soil-database/HTML/ (FAO, 2017). The global weather data were downloaded from https://globalweather.tamu.edu/ (National Centers for Environmental Prediction, 2017). The rainfall observations at 138 stations and the discharge observations at the outlet (WJB) were provided by the Hydrologic Bureau of the Huaihe River Commission. The source codes of the SWAT model are available at http://swat.tamu.edu/ (USDA Agricultural Research Service and Texas A&M AgriLife Research, 2016).

Author contributions. XD and PX contributed to the conception of this study. XD and DY contributed significantly to analysis and manuscript preparation. DY performed the data analyses and wrote the manuscript. XH, JL, YL, TP, and HM helped perform the analysis with constructive discussions. All authors read and approved the manuscript.
KW and SX assisted in data acquisition and collation.

Competing interests. The authors declare that they have no conflict of interest.

Acknowledgements. This research has been supported by the Non-profit Industry Financial Program of the Ministry of Water Resources of China (no. 201301066), the National Key Research and Development Program (2016YFC0402700), the National Natural Science Foundation of China (nos. 91547205, 51579181, 51409152, 41101511, and 40701024), and the Hubei Provincial Collaborative Innovation Center for Water Security.

Edited by: Markus Weiler. Reviewed by: two anonymous referees.

References

Abbaspour, K. C., Vejdani, M., and Haghighat, S.: SWAT-CUP calibration and uncertainty programs for SWAT, in: Modsim 2007 International Congress on Modelling and Simulation Land Water and Environmental Management Integrated Systems for Sustainability, Christchurch, New Zealand, 10–13 December 2007, 1603–1609, 2007. Adams III, T. E. and Pagano, T. C.: Flood Forecasting: A Global Perspective, in: Flood Forecasting, Academic Press, Boston, USA, xxiii–xlix, 2016. Arnold, J. G., Srinivasan, R., Muttiah, R. S., and Williams, J. R.: Large area hydrologic modeling and assessment part I: model development, JAWRA, 34, 91–101, 1998. Baker, T. J. and Miller, S. N.: Using the Soil and Water Assessment Tool (SWAT) to assess land use impact on water resources in an East African watershed, J. Hydrol., 486, 100–111, 2013. Bartholomé, E. and Belward, A. S.: GLC2000: a new approach to global land cover mapping from Earth observation data, Int. J. Remote Sens., 26, 1959–1977, 2005. Berthet, L., Andréassian, V., Perrin, C., and Javelle, P.: How crucial is it to account for the antecedent moisture conditions in flood forecasting? Comparison of event-based and continuous approaches on 178 catchments, Hydrol. Earth Syst. Sci., 13, 819–831, https://doi.org/10.5194/hess-13-819-2009, 2009. Beven, K. J., Kirkby, M. J., Schofield, N., and Tagg, A.
F.: Testing a physically-based flood forecasting model (TOPMODEL) for three U.K. catchments, J. Hydrol., 69, 119–143, 1984. Bhunya, P. K., Berndtsson, R., Ojha, C. S. P., and Mishra, S. K.: Suitability of Gamma, Chi-square, Weibull, and Beta distributions as synthetic unit hydrographs, J. Hydrol., 334, 28–38, 2007. Bondelid, T. R., Mccuen, R. H., and Jackson, T. J.: Sensitivity of SCS Models to Curve Number Variation, JAWRA, 18, 111–116, 2010. Cho, J., Bosch, D., Lowrance, R., Strickland, T., and Vellidis, G.: Effect of spatial distribution of rainfall on temporal and spatial uncertainty of SWAT output, T. ASABE, 52, 277–281, 2009. Cibin, R., Sudheer, K., and Chaubey, I.: Sensitivity and identifiability of stream flow generation parameters of the SWAT model, Hydrol. Process., 24, 1133–1148, 2010. Consortium for Spatial Information: DEM data, available at: http://srtm.csi.cgiar.org/, last access: 5 January 2017. Cooper, V. A., Nguyen, V. T. V., and Nicell, J. A.: Evaluation of global optimization methods for conceptual rainfall-runoff model calibration, Water Sci. Technol., 36, 53–60, 1997. Coustau, M., Bouvier, C., Borrell-Estupina, V., and Jourde, H.: Flood modelling with a distributed event-based parsimonious rainfall-runoff model: case of the karstic Lez river catchment, Nat. Hazards Earth Syst. Sci., 12, 1119–1133, https://doi.org/10.5194/nhess-12-1119-2012, 2012. Cox, N. J.: Speaking Stata: Creating and varying box plots, Stata J., 9, 478–496, 2009. Dechmi, F., Burguete, J., and Skhiri, A.: SWAT application in intensive irrigation systems: Model modification, calibration and validation, J. Hydrol., 470–471, 227–238, 2012. Doocy, S., Daniels, A., Murray, S., and Kirsch, T. D.: The Human Impact of Floods: a Historical Review of Events 1980–2009 and Systematic Literature Review, Plos Curr., 5, 1808–1815, 2013. Duan, Q., Sorooshian, S., and Gupta, V.: Effective and efficient global optimization for conceptual rainfall-runoff models, Water Resour.
Res., 28, 1015–1031, 1992. FAO, IIASA, ISRIC, and ISSCAS: Harmonized World Soil Database Version 1.2, Food & Agriculture Organization of the UN, Rome, Italy, and International Institute for Applied Systems Analysis, Laxenburg, Austria, 2012. Food and Agriculture Organization (FAO): HWSD soil data, available at: http://www.fao.org/soils-portal/soil-survey/soil-maps-and-databases/harmonized-world-soil-database-v12/en/, last access: 15 January 2017. Gan, T. Y. and Biftu, G. F.: Automatic Calibration of Conceptual Rainfall-Runoff Models: Optimization Algorithms, Catchment Conditions, and Model Structure, Water Resour. Res., 32, 3513–3524, 1996. Glavan, M., Ceglar, A., and Pintar, M.: Assessing the impacts of climate change on water quantity and quality modelling in small Slovenian Mediterranean catchment – lesson for policy and decision makers, Hydrol. Process., 29, 3124–3144, 2015. Griensven, A. V., Meixner, T., Grunwald, S., Bishop, T., Diluzio, M., and Srinivasan, R.: A global sensitivity analysis tool for the parameters of multi-variable catchment models, J. Hydrol., 324, 10–23, 2006. Grillakis, M. G., Tsanis, I. K., and Koutroulis, A. G.: Application of the HBV hydrological model in a flash flood case in Slovenia, Nat. Hazards Earth Syst. Sci., 10, 2713–2725, https://doi.org/10.5194/nhess-10-2713-2010, 2010. Guan, M., Wright, N. G., and Andrew Sleigh, P.: Multiple effects of sediment transport and geomorphic processes within flood events: Modelling and understanding, Int. J. Sediment Res., 30, 371–381, https://doi.org/10.1016/j.ijsrc.2014.12.001, 2015. Guo, L., He, B., Ma, M., Chang, Q., Li, Q., Zhang, K., and Hong, Y.: A comprehensive flash flood defense system in China: overview, achievements, and outlook, Nat. Hazards, 1–14, 2018. Gupta, H. V., Sorooshian, S., and Yapo, P. O.: Status of Automatic Calibration for Hydrologic Models: Comparison With Multilevel Expert Calibration, J. Hydrol. Eng., 4, 135–143, 1999. 
Haggstrom, M., Lindstrom, G., Cobos, C., Martínez, J. R., Merlos, L., Dimas Alonso, R., Castillo, G., Sirias, C., Miranda, D., and Granados, J.: Application of the HBV model for flood forecasting in six Central American Rivers, Smhi Hydrol., 27, 1–13, 1990. Hapuarachchi, H. A. P., Wang, Q. J., and Pagano, T. C.: A review of advances in flash flood forecasting, Hydrol. Process., 25, 2771–2784, 2011. Holvoet, K., Griensven, A. V., Seuntjens, P., and Vanrolleghem, P. A.: Sensitivity analysis for hydrology and pesticide supply towards the river in SWAT, Phys. Chem. Earth, 30, 518–526, 2005. Hughes, D. A.: Regionalization of models for operational purposes in developing countries: an introduction, Hydrol. Res., 42, 331–337, 2011. Jena, S. K. and Tiwari, K. N.: Modeling synthetic unit hydrograph parameters with geomorphologic parameters of watersheds, J. Hydrol., 319, 1–14, 2006. Jeng, R. I. and Coon, G. C.: True Form of Instantaneous Unit Hydrograph of Linear Reservoirs, J. Irrig. Drain. Eng., 129, 11–17, 2003. Jeong, J., Kannan, N., Arnold, J., Glick, R., Gosselink, L., and Srinivasan, R.: Development and Integration of Sub-hourly Rainfall-Runoff Modeling Capability Within a Watershed Model, Water Resour. Manage., 24, 4505–4527, 2010. Jeong, J., Kannan, N., Arnold, J. G., Glick, R., Gosselink, L., Srinivasan, R., and Harmel, R. D.: Development of sub-daily erosion and sediment transport algorithms for SWAT, T. ASABE, 54, 1685–1691, 2011. Jeon, J. H., Park, C. G., and Engel, B. A.: Comparison of Performance between Genetic Algorithm and SCE-UA for Calibration of SCS-CN Surface Runoff Simulation, Water, 6, 3433–3456, 2014. Jiang, X. F., Wang, L., Fang, M. A., Hai-Qiang, L. I., Zhang, S. J., and Liang, X. W.: Localization Method for SWAT Model Soil Database Based on HWSD, China Water & Wastewater, 30, 135–138, 2014. Khaleghi, M.
R., Gholami, V., Ghodusi, J., and Hosseini, H.: Efficiency of the geomorphologic instantaneous unit hydrograph method in flood hydrograph simulation, Catena, 87, 163–171, 2011. King, K., Arnold, J., and Bingner, R.: Comparison of Green-Ampt and curve number methods on Goodwin creek watershed using SWAT, T. ASAE, 42, 919–926, 1999. Kobold, M. and Brilly, M.: The use of HBV model for flash flood forecasting, Nat. Hazards Earth Syst. Sci., 6, 407–417, https://doi.org/10.5194/nhess-6-407-2006, 2006. Kuczera, G.: Efficient subspace probabilistic parameter optimization for catchment models, Water Resour. Res., 33, 177–185, 1997. Luce, C. H. and Cundy, T. W.: Parameter Identification for a Runoff Model for Forest Roads, Water Resour. Res., 30, 1057–1070, 1994. Maidment, D. R.: Handbook of hydrology, Earth-Sci. Rev., 24, 227–229, 1994. Maneta, M. P., Pasternack, G. B., Wallender, W. W., Jetten, V., and Schnabel, S.: Temporal instability of parameters in an event-based distributed hydrologic model applied to a small semiarid catchment, J. Hydrol., 341, 207–221, 2007. Massart, D. L., Smeyers-Verbeke, J., Capron, X., and Schlesier, K.: Visual presentation of data by means of box plots, Lc Gc Europe, 18, 215–218, 2005. Moriasi, D. N., Arnold, J. G., Van Liew, M. W., Bingner, R. L., Harmel, R. D., and Veith, T. L.: Model evaluation guidelines for systematic quantification of accuracy in watershed simulations, T. ASABE, 50, 885–900, 2007. MWR: Standard for Hydrological Information and Hydrological Forecasting (GB/T 22482-2008), Ministry of Water Resources of the People's Republic of China, Standards Press of China, Beijing, 2008 (in Chinese). Nalbantis, I.: Use of multiple-time-step information in rainfall-runoff modelling, J. Hydrol., 165, 135–159, https://doi.org/10.1016/0022-1694(94)02567-U, 1995. Nash, J. E. and Sutcliffe, J. V.: River flow forecasting through conceptual models part I – A discussion of principles, J. Hydrol., 10, 282–290, 1970.
National Centers for Environmental Prediction: Global weather data, available at: https://globalweather.tamu.edu/, last access: 15 January 2017. Neitsch, S. L., Arnold, J. G., Kiniry, J. R., Srinivasan, R., and Williams, J. R.: Soil and Water Assessment Tool Input/output File Documentation: Version 2009, Texas Water Resources Institute Technical Report 365, Texas Water Resources Institute, Texas, USA, 2011. Pathiraja, S., Westra, S., and Sharma, A.: Why continuous simulation? The role of antecedent moisture in design flood estimation, Water Resour. Res., 48, 6534, https://doi.org/10.1029/2011WR010997, 2012. Qiu, L., Wu, Y., Wang, L., Lei, X., Liao, W., Hui, Y., and Meng, X.: Spatiotemporal response of the water cycle to land use conversions in a typical hilly–gully basin on the Loess Plateau, China, Hydrol. Earth Syst. Sci., 21, 6485–6499, https://doi.org/10.5194/hess-21-6485-2017, 2017. Ramly, S. and Tahir, W.: Application of HEC-GeoHMS and HEC-HMS as Rainfall–Runoff Model for Flood Simulation, ISFRAM 2015, Singapore, 181–192, 2016. Rodríguez-Blanco, M. L., Taboada-Castro, M. M., and Taboada-Castro, M. T.: Rainfall–runoff response and event-based runoff coefficients in a humid area (northwest Spain), Int. Assoc. Sci. Hydrol. Bull., 57, 445–459, 2012. Roth, V. and Lemann, T.: Comparing CFSR and conventional weather data for discharge and soil loss modelling with SWAT in small catchments in the Ethiopian Highlands, Hydrol. Earth Syst. Sci., 20, 921–934, https://doi.org/10.5194/hess-20-921-2016, 2016. Sangrey, D. A.: Predicting ground-water response to precipitation, J. Geotech. Eng., 110, 957–975, 1984. SCS: National engineering handbook, section 4, hydrology, US Department of Agriculture, SCS, Washington, DC, USA, 640 pp., 1972. Seo, M., Yen, H., Kim, M. K., and Jeong, J.: Transferability of SWAT Models between SWAT2009 and SWAT2012, J. Environ. Qual., 43, 869–880, 2014. Sherman, L.: Stream Flow from Rainfall by the Unit-Graph Method, Eng.
News-Rec., 108, 501–505, 1932. Singh, J., Knapp, H. V., Arnold, J. G., and Demissie, M.: Hydrological modeling of the Iroquois River watershed using HSPF and SWAT, JAWRA, 41, 343–360, 2005. Sorooshian, S., Duan, Q., and Gupta, V. K.: Calibration of rainfall-runoff models: Application of global optimization to the Sacramento Soil Moisture Accounting Model, Water Resour. Res., 29, 1185–1194, 1993. Sudheer, K. P., Lakshmi, G., and Chaubey, I.: Application of a pseudo simulator to evaluate the sensitivity of parameters in complex watershed models, Environ. Model. Softw., 26, 135–143, https://doi.org/10.1016/j.envsoft.2010.07.007, 2011. Thiessen, A. H.: Precipitation averages for large areas, Mon. Weather Rev., 39, 1082–1084, 1911. Thyer, M., Kuczera, G., and Bates, B. C.: Probabilistic optimization for conceptual rainfall-runoff models: A comparison of the shuffled complex evolution and simulated annealing algorithms, Water Resour. Res., 35, 767–773, 1999. Tramblay, Y., Bouaicha, R., Brocca, L., Dorigo, W., Bouvier, C., Camici, S., and Servat, E.: Estimation of antecedent wetness conditions for flood modelling in northern Morocco, Hydrol. Earth Syst. Sci., 16, 4375–4386, https://doi.org/10.5194/hess-16-4375-2012, 2012. University of Maryland: Land use data (GLC2000), available at: http://www.landcover.org/, last access: 7 January 2017. USDA Agricultural Research Service and Texas A&M AgriLife Research: SWAT code, available at: http://swat.tamu.edu/, last access: 16 March 2016. Werritty, A., Houston, D., Ball, T., Tavendale, A., and Black, A.: Exploring the Social Impacts of Flood Risk and Flooding in Scotland, Report to the Scottish Executive, School of Social Sciences-Geography, University of Dundee, Dundee, UK, 2007. Wu, H., Adler, R. F., Tian, Y., Huffman, G. J., Li, H., and Wang, J. J.: Real-time global flood estimation using satellite-based precipitation and a coupled land surface and routing model, Water Resour. Res., 50, 2693–2717, 2014. Yan, B., Fang, N. 
F., Zhang, P. C., and Shi, Z. H.: Impacts of land use change on watershed streamflow and sediment yield: An assessment using hydrologic modelling and partial least squares regression, J. Hydrol., 484, 26–37, https://doi.org/10.1016/j.jhydrol.2013.01.008, 2013. Yang, D., Herath, S., and Musiake, K.: Spatial resolution sensitivity of catchment geomorphologic properties and the effect on hydrological simulation, Hydrol. Process., 15, 2085–2099, 2001. Yang, D., Koike, T., and Tanizawa, H.: Application of a distributed hydrological model and weather radar observations for flood management in the upper Tone River of Japan, Hydrol. Process., 18, 3119–3132, 2004. Yao, C., Zhang, K., Yu, Z., Li, Z., and Li, Q.: Improving the flood prediction capability of the Xinanjiang model in ungauged nested catchments by coupling it with the geomorphologic instantaneous unit hydrograph, J. Hydrol., 517, 1035–1048, https://doi.org/10.1016/j.jhydrol.2014.06.037, 2014a. Yao, C., Zhang, K., Yu, Z., Li, Z., and Li, Q.: Improving the flood prediction capability of the Xinanjiang model in ungauged nested catchments by coupling it with the geomorphologic instantaneous unit hydrograph, J. Hydrol., 517, 1035–1048, 2014b. Yao, H., Hashino, M., Terakawa, A., and Suzuki, T.: Comparison of distributed and lumped hydrological models, Doboku Gakkai Ronbunshuu B, 42, 163–168, 1998. Yigzaw, W. Y. and Hossain, F.: Impact of Artificial Reservoir Size and Land Use Land Cover on Probable Maximum Flood: The case of Folsom Dam on American River, J. Hydrol. Eng., 18, 1180–1190, 2012. Yu, D., Xie, P., Dong, X., Su, B., Hu, X., Wang, K., and Xu, S.: The development of land use planning scenarios based on land suitability and its influences on eco-hydrological responses in the upstream of the Huaihe River basin, Ecol. Model., 373, 53–67, 2018. Zhang, J., Zhou, C., Xu, K., and Watanabe, M.: Flood disaster monitoring and evaluation in China, Global Environ. Chang., 4, 33–43, 2002. Zhao, L. N., Tian, F. 
Y., Wu, H., Qi, D., Di, J. Y., and Wang, Z.: Verification and comparison of probabilistic precipitation forecasts using the TIGGE data in the upriver of Huaihe Basin, Adv. Geosci., 29, 95–102, 2011. Zuo, Z., Wang, X., Luo, W., Wang, F., and Guo, S.: Characteristics on Aquifer of the Quaternary system in Huai River Basin (Henan Section), Ground Water, 28, 25–27, 2006.
https://www.ncatlab.org/toddtrimble/published/Notes+on+group+objects
Todd Trimble

Notes on group objects

Let $\mathbf{C}$ be a complete cartesian closed category. A running example will be the category of cocommutative coalgebras over a field $k$ (which is cartesian closed and locally finitely presentable, hence complete; see the next section). We will be studying group objects in such categories. For example, group objects in the category of cocommutative coalgebras over $k$ are precisely cocommutative Hopf algebras.

Notation: if $f: X \to Y$ and $g: X \to Z$ are morphisms in a category with products, then $\langle f, g \rangle$ denotes the unique map $h: X \to Y \times Z$ such that $\pi_1 \circ h = f$ and $\pi_2 \circ h = g$; a similar notation extends to more general products (not just binary products).

Properties of the category of cocommutative coalgebras

Proposition. Let $\mathbf{V}$ be a symmetric monoidal category with tensor product $\otimes$ and monoidal unit $I$. For any two cocommutative comonoids $A, B$ with counits $\epsilon_A: A \to I$, $\epsilon_B: B \to I$, the maps $A \otimes B \stackrel{1_A \otimes \epsilon_B}{\to} A \otimes I \cong A, \qquad A \otimes B \stackrel{\epsilon_A \otimes 1_B}{\to} I \otimes B \cong B$ provide projection maps that exhibit $A \otimes B$ as the cartesian product of $A$ and $B$ in the category $CoCom(\mathbf{V})$ of cocommutative comonoids in $\mathbf{V}$. Moreover, for any cartesian monoidal category $\mathbf{M}$ there is an equivalence between the category of symmetric monoidal functors $\mathbf{M} \to \mathbf{V}$ and product-preserving functors $\mathbf{M} \to CoCom(\mathbf{V})$.

In the case of $\mathbf{V} = Vect_k$, cocommutative comonoids are the same as cocommutative $k$-coalgebras. The forgetful functor $CoCom(Vect_k) \to Vect_k$ creates colimits in $CoCom(Vect_k)$.
Since $A \otimes - : Vect_k \to Vect_k$ preserves colimits for any vector space $A$, it follows that $C \otimes - : CoCom(Vect_k) \to CoCom(Vect_k)$ preserves colimits, i.e., cartesian products in $CoCom(Vect_k)$ distribute over colimits.

Theorem. The category of cocommutative $k$-coalgebras is the $Ind$-completion of the category of finite-dimensional cocommutative $k$-coalgebras, and is cocomplete, hence locally finitely presentable. The finitely presentable objects are precisely the finite-dimensional objects.

According to Gabriel-Ulmer duality, this means there is an equivalence $CoCom(Vect_k) \simeq Lex(CoCom_{fd}^{op}, Set)$ and moreover, by taking linear duals, there is an equivalence $CoCom_{fd}^{op} \simeq CAlg_{fd},$ where $CAlg_{fd}$ is the category of finite-dimensional commutative $k$-algebras.

Corollary. The category $CoCom(Vect_k)$ is complete, cocomplete, and cartesian closed.

Proof. Locally presentable categories $\mathbf{C}, \mathbf{D}$ are complete, and enjoy a strong form of an adjoint functor theorem, where a functor $F: \mathbf{C} \to \mathbf{D}$ has a right adjoint iff it is cocontinuous. Since $CoCom(Vect_k)$ is locally presentable, the product functor $C \otimes -$ has a right adjoint for any cocommutative coalgebra $C$; therefore $CoCom(Vect_k)$ is cartesian closed.

Exponentials $C^D$ in $CoCom(Vect_k)$ are often called measuring coalgebras.

Exponentials of group objects

Now let $\mathbf{C}$ be a complete cartesian closed category. If $X$ is an arbitrary object of $\mathbf{C}$, the right adjoint $(-)^X$ (with left adjoint $- \times X$) preserves arbitrary limits and in particular finite products. This means that the canonical map $c_{Y_1, \ldots, Y_n} = \langle \pi_1^X, \ldots, \pi_n^X \rangle : (Y_1 \times \ldots \times Y_n)^X \to Y_1^X \times \ldots \times Y_n^X$ is invertible. It follows that if $G$ is a group object with multiplication $m: G \times G \to G$, identity $e: 1 \to G$, and inverse $i: G \to G$, then $G^X$ is a group object.
The multiplication is defined to be the composite $G^X \times G^X \stackrel{(c_{G, G})^{-1}}{\to} (G \times G)^X \stackrel{m^X}{\to} G^X$ and the identity and inverse are defined similarly. Indeed, for any model $M$ of a Lawvere theory $\mathbf{T}$ in $\mathbf{C}$, the same principle shows that $M^X$ carries a $\mathbf{T}$-model structure canonically induced from the structure on $M$. (Proof: a $\mathbf{T}$-model is given precisely by a product-preserving functor $M: \mathbf{T} \to \mathbf{C}$, and the composite $\mathbf{T} \stackrel{M}{\to} \mathbf{C} \stackrel{(-)^X}{\to} \mathbf{C}$ is also product-preserving.)

Automorphism groups

If $N, G$ are ordinary groups, a homomorphism $N \to G$ may be defined as a function $f$ that preserves multiplication (it may be shown that such functions also preserve the identity and inverse): $f(n \cdot n') = f(n) \cdot f(n')$ for all $n, n' \in N$. The left side represents the function $N \times N \stackrel{m_N}{\to} N \stackrel{f}{\to} G$, or the result of applying the map $G^N \stackrel{G^{m_N}}{\to} G^{N \times N}$ to $f$. The right side represents the function $N \times N \stackrel{f \times f}{\to} G \times G \stackrel{m_G}{\to} G$, or the result of applying the map $G^N \stackrel{sq}{\to} (G \times G)^{N \times N} \stackrel{(m_G)^{N \times N}}{\to} G^{N \times N}$ to $f$. Hence the set $GrHom(N, G)$ of homomorphisms $N \to G$ may be constructed as the equalizer of the two legs of the following triangle $\array{ GrHom(N, G) & \stackrel{i}{\to} & G^N & \stackrel{sq}{\to} & (G \times G)^{N \times N} \\ & & & _{\mathllap{G^m}} \; \searrow & \downarrow \; _{\mathrlap{m^{N \times N}}} \\ & & & & G^{N \times N} }$

The same construction applies more generally in a cartesian closed category.
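In $\mathbf{C} = Set$, the two legs of this triangle agree on precisely the multiplication-preserving functions, so the equalizer carves out the expected set:

```latex
% Set-level reading of the equalizer (the case C = Set):
GrHom(N, G) \;=\; \{\, f \in G^N \;\mid\; f \circ m_N = m_G \circ (f \times f) \,\}
```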
In particular, the "squaring map" $sq$ is defined to be $\langle G^{\pi_1}, G^{\pi_2} \rangle : G^N \to G^{N \times N} \times G^{N \times N} \cong (G \times G)^{N \times N}.$ Thus, in a finitely complete cartesian closed category, we may construct the object $GrHom(N, G)$ of group object homomorphisms as the equalizer displayed above. Taking $G = N$, the exponential $N^N$ naturally forms a monoid, and the subobject $GrHom(N, N)$ becomes a submonoid.

Similarly, one may internalize automorphism objects. An automorphism of $X$ can be construed as a pair of morphisms $f, g: X \to X$ obeying the equations $f \circ g = 1_X = g \circ f$, and thus we may construct a group object $Aut(X)$ as an equalizer of two legs of a triangle $\array{ Aut(X) & \hookrightarrow & X^X \times X^X & \stackrel{\langle 1, \sigma\rangle}{\to} & (X^X \times X^X) \times (X^X \times X^X) \\ & & _{\mathllap{!}} \; \downarrow & & \downarrow \; _{\mathrlap{comp \times comp}} \\ & & 1 & \stackrel{\langle e, e\rangle}{\to} & X^X \times X^X }$ where $comp: X^X \times X^X \to X^X$ is internal composition and $e: 1 \to X^X$ names the identity $1_X: X \to X$.

Let $j: Aut(X) \to X^X$ be the composite $Aut(X) \hookrightarrow X^X \times X^X \stackrel{\pi_1}{\to} X^X;$ this map $j$ is a monomorphism and the subobject $j: Aut(X) \to X^X$ is closed under composition, i.e., the composite $Aut(X) \times Aut(X) \stackrel{j \times j}{\to} X^X \times X^X \stackrel{comp}{\to} X^X$ factors through $j: Aut(X) \to X^X$. Thus $Aut(X)$ is a submonoid of $X^X$ and in fact forms a group (object). Further, for a group $N$ the intersection or pullback of subobjects forms a subgroup $GrAut(N)$ of $Aut(N)$: $\array{ GrAut(N) & \to & GrHom(N, N) \\ \downarrow & & \downarrow \; _{\mathrlap{i}} \\ Aut(N) & \underset{j}{\to} & N^N }$

Semidirect products

Suppose $G, N$ are groups in $\mathbf{C}$, and $\phi: G \to GrAut(N)$ is a homomorphism. Then we can form the semidirect product $N \ltimes_\phi G$ by a purely categorical construction.
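For orientation, in $\mathbf{C} = Set$ the multiplication about to be constructed is the familiar semidirect product formula (writing $\phi_g(n')$ for $\alpha(g, n')$, with $\alpha$ the adjoint transpose of $G \to GrAut(N) \hookrightarrow N^N$):

```latex
% Element-level formula recovered from the categorical composite,
% in the case C = Set, with \phi_g(n') := \alpha(g, n'):
(n, g) \cdot (n', g') \;=\; \bigl(\, n \cdot \phi_g(n'),\; g\,g' \,\bigr)
```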
The composite $G \to GrAut(N) \hookrightarrow N^N$ corresponds (under the $\times$-$\hom$ adjunction) to a map $\alpha: G \times N \to N$, and we form the composite $N \times G \times N \times G \stackrel{1_N \times \delta_G \times 1_{N \times G}}{\to} N \times G \times G \times N \times G \stackrel{1_{N \times G} \times \sigma \times 1_G}{\to} N \times G \times N \times G \times G \stackrel{1_N \times \alpha \times 1_{G \times G}}{\to} N \times N \times G \times G \stackrel{m_N \times m_G}{\to} N \times G$ which gives the multiplication for the semidirect product.

Revised on March 12, 2014 at 05:54:27 by Todd Trimble
https://moodle.org/mod/forum/discuss.php?d=154988&parent=1493590
## General plugins

### New Virtual Programming Lab (VPL) module

Re: New Virtual Programming Lab (VPL) module

Hi - we are really enjoying using VPL and are looking into developing the use of GUIs with our students. I've worked out how to get tkinter with Python working but am having problems working out how to incorporate images. I've put the image into the execution files - it's a gif (I know tkinter doesn't like many image file types!) but I keep getting an error that the image cannot be found. Any help would be appreciated. Kind regards Estelle

Average of ratings: -

Re: New Virtual Programming Lab (VPL) module

Here's what I found out about customizing the jail server. When it gets a valid request from moodle, it:

• forks a new process (still running as root)
• uses "chroot" on the configured JAILPATH (usually /jail)
• creates a new user & their HOME in the chrooted environment
• uses "su" to switch execution mode to that user
• creates the files it got in the request (submission and things like vpl_evaluate.sh) inside the new user's HOME
• runs by default either "vpl_run.sh" or "vpl_evaluate.sh"
• then runs "vpl_execution" that the previous steps should create
• cleanup, etc.

From this flow, the important step for you is the "chroot" & "su" one. If you want to make extra files visible to the execution environment, you need to put them in either:

• JAILPATH, made world readable(/executable); that way, each execution will see the file "$JAILPATH/myfile" inside the execution env. as "/myfile" due to chroot, and will have read permissions
• or "/bin" or "/sbin", as these are mounted as read/execute-only inside JAILPATH, so things in them will naturally be visible (and also be in PATH)
• the submission files themselves, but this is annoying and unnecessary

In either case, it would help to customize the vpl_run.sh for your specific type of task, so it symlinks or copies the "/myfile" to "$HOME/myfile" (will be cleaned up with the whole HOME) for the time of the execution, if it is necessary to have it in the same directory or to be able to write it as well. You can do that by putting your own "${your_language_with_dashes}_run.sh" (eg. "python-with-tkinter_run.sh") file in this folder on moodle: "$MOODLEPATH/mod/vpl/jail/default_scripts" (repo examples). Then you will get your own flavor of python from the "Run script" dropdown in "Execution options" in a moodle VPL activity.

Average of ratings: -

Re: New Virtual Programming Lab (VPL) module

Hi, Gildredge

Notice that for security reasons all "execution files" are removed after the compilation phase and before the execution phase. If you want to change this behavior go to "Files to keep when running" and select the image files. Best regards.

Average of ratings: -

Re: New Virtual Programming Lab (VPL) module

Hi Juan

I have done that - I have also adapted the run script to make it run as an X-windows application; everything works fine with the tkinter code until I try to include an image, and then I get the error that the image cannot be found. I have tested the code in IDLE and it works fine so it isn't an issue with the code itself. I really appreciate the help. Estelle

Average of ratings: -
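The symlinking step a custom run script would perform (linking resources from the jail root into the submission HOME, so a script can open them by bare filename) can be sketched in Python. This is a hedged stand-in, not VPL's own code; the file names and the idea of calling it from a hypothetical "python-with-tkinter_run.sh" are illustrative only:

```python
import os

# Sketch: after chroot, a file placed at $JAILPATH/logo.gif is visible as
# "/logo.gif"; linking it into HOME lets e.g. PhotoImage(file="logo.gif")
# find it by bare name. Returns the list of links actually created/kept.
def link_resources(home, resources):
    linked = []
    for res in resources:
        if os.path.isfile(res):
            dest = os.path.join(home, os.path.basename(res))
            if not os.path.lexists(dest):
                os.symlink(res, dest)
            linked.append(dest)
    return linked
```

A run script would call the equivalent of `link_resources(os.environ["HOME"], ["/logo.gif"])` before handing control to `vpl_execution`.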
2018-10-21 11:43:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17975573241710663, "perplexity": 3908.3243769834494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583513844.48/warc/CC-MAIN-20181021094247-20181021115747-00375.warc.gz"}
http://math.stackexchange.com/questions/318692/property-of-convex-functions
# Property of convex functions

I am trying to show that if a function $f$ defined on $\mathbb R^n$ is differentiable and convex then $f(y)-f(x)\ge \nabla f(x)(y-x)$ for each $x,y\in\mathbb R^n$. Using differentiability of $f$ I have got $f(y) = f(x+(y-x)) = f(x)+\nabla f(x)(y-x) + o(y-x)$. How to continue? - It should be $\nabla f(x)$. It remains one. – 1015 Mar 2 '13 at 15:10 Let $f:\mathbb{R}\longrightarrow \mathbb{R}$ be differentiable and convex first. For every $x> y$ and every $t\in(0,1)$, convexity yields $$f(tx+(1-t)y)\leq tf(x)+(1-t)f(y)\quad\Leftrightarrow\quad f(y+t(x-y))-f(y)\leq t(f(x)-f(y)).$$ So $$\frac{f(y+t(x-y))-f(y)}{t(x-y)}\leq \frac{f(x)-f(y)}{x-y}$$ for all $t\in(0,1)$. Letting $t$ tend to $0^+$, this entails $$f'(y)\leq \frac{f(x)-f(y)}{x-y}\quad\Leftrightarrow\quad f(x)-f(y)\geq f'(y)(x-y)$$ for all $x>y$. In the case $x<y$, one follows the same steps, reversing the inequality twice. Now in the general case, fix $x\neq y$ and consider the function $g:\mathbb{R}\longrightarrow\mathbb{R}$ $$g:t\longmapsto f(y+t(x-y)).$$ Then $g$ is convex (check) and differentiable so in particular $$g(1)-g(0)\geq g'(0)(1-0)=g'(0).$$ Now by the chain rule $$g'(t)=\nabla f(y+t(x-y))(x-y)\quad\Rightarrow \quad g'(0)=\nabla f(y)(x-y).$$ And $g(1)=f(x)$, $g(0)=f(y)$, so $$f(x)-f(y)\geq \nabla f(y)(x-y).$$ The graph of a convex function is above any tangent plane, and $$L(y) = f(x_0) + \nabla f(x_0)(y-x_0)$$ is the tangent plane at the point $x_0$... I think that the question is precisely about that: prove that the graph of a convex function is above any tangent plane. Given that the most common definition of convex (real-valued) is $f(tx+(1-t)y)\leq tf(x)+(1-t)f(y)$. – 1015 Mar 2 '13 at 16:06
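A numerical sanity check of the supporting-hyperplane inequality $f(y)-f(x)\ge \nabla f(x)(y-x)$ — not a proof, just a sketch using the convex function $f(v)=\|v\|^2$ (my own choice of example), for which the gap is exactly $\|y-x\|^2 \ge 0$:

```python
import random

# f(v) = |v|^2 is convex with gradient 2v; check the inequality
# f(y) - f(x) >= grad f(x) . (y - x) on random points (a check, not a proof).
def f(v):
    return sum(c * c for c in v)

def grad_f(v):
    return [2.0 * c for c in v]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(0)
for _ in range(1000):
    x = [random.gauss(0, 1) for _ in range(3)]
    y = [random.gauss(0, 1) for _ in range(3)]
    assert f(y) - f(x) >= dot(grad_f(x), [b - a for a, b in zip(x, y)]) - 1e-9
print("inequality held on 1000 random pairs")
```

For this particular $f$ the gap $f(y)-f(x)-\nabla f(x)(y-x)$ equals $\|y-x\|^2$, which makes the inequality exact rather than approximate.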
2016-02-14 17:51:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9677788615226746, "perplexity": 99.92546634361348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701999715.75/warc/CC-MAIN-20160205195319-00278-ip-10-236-182-209.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/conservation-of-energy.837746/
# Conservation of Energy

1. Oct 14, 2015 ### Devilwhy

1. The problem statement, all variables and given/known data
2. Relevant equations
3. The attempt at a solution

4a at x=-2 F=0N Potential energy=integration of F(-2)=Integration of 0=0 so it will reach minimum at x=-2?
b. integration of F(-2)-integration of F(-2-h)=m(v-0)^2/2 am i right?
5 Centripetal force=mv^2/r=mgsinθ-Fn(normal force) will the block leave the surface when Fn=0? so i can represent the velocity as v=(mgrsinθ)^1/2=((m)(9.8)(15)sinθ)^1/2?

2. Oct 15, 2015 ### Simon Bridge

Force F is the (negative) gradient of the potential energy U. A function has an extremum when its gradient is zero. Therefore U has a minimum at some position where F=0. Integrate F(x) to get the PE function U(x), then put in x=-2 to find U(-2) $\neq$ 0. Also... what is U(x0 - h)? How do changes in kinetic energy relate to changes in potential energy?

3. Oct 19, 2015 ### CrazyNinja

Your attempt at the first question [4(a)] looks correct. As F=0 at x= (-2), it is the co-ordinate where U is minimum. There are complications in 4(b) though. The work done by the new force cannot be determined from given data. It renders the Law of Conservation of Energy useless. How then do we proceed? For 5, use the Law of Conservation of Energy. That should give you the required answer.

4. Oct 20, 2015 ### Simon Bridge

I can do it from the data supplied, using conservation of energy. Remember the relationship between kinetic and potential energy.

5. Oct 20, 2015 ### CrazyNinja

The new force is not given to be conservative. Hence the work it does requires the path too, of which nothing is mentioned. Nor is the equation governing the magnitude.

6. Oct 20, 2015 ### Staff: Mentor

The new force is not acting when the object is released. The new force only served to move the block to its new location and plays no role in the work done by the conservative force when it is released from rest from that location.

7.
Oct 21, 2015 ### CrazyNinja

What you have said is indeed correct. But in order to determine the initial "total" energy of the object, we require the work done by the new force. I am talking about the process in which the new force moves the particle from x0 to x0-h. KE is zero in both the cases. Initial PE is U(x0). Final PE is U(x0-h). In addition to this, there is additional work done by the new force. Thus, U(x0-h) = U(x0) + W. If this work done is known, then we know the total energy of the particle at x = x0-h. Only then can we proceed with its release and use this to calculate max KE which, interestingly, will be equal to the work done by the new force.

8. Oct 21, 2015 ### Staff: Mentor

No. The history of the object is totally irrelevant. Additional work would show up as KE, but the object is at rest at the new location. Whatever work was done to move the object left no evidence of the process. In a conservative field PE is determined by location alone. No, no, no. The only incarnations of energy in this system are KE expressed as motion and PE as a result of location in the field. Objects don't otherwise "remember" their history. Once an object is brought to rest there's no inherent evidence that the object experienced any particular process or path.

9. Oct 21, 2015 ### CrazyNinja

And what you have said is again true, though in this case there are a few additional points. PE is determined by location alone in a conservative field, but external work is not. The Law of Conservation of Energy states "E(final) = E(initial) + W(external)", where I have included PE in E() and W(external) implies work done by non-conservative forces, which I have excluded from my system. In this context, the object remains at rest because the external work done (by the new force) is equal in magnitude to the change in PE and opposite in sign, which is what U(x0-h) = U(x0) + W means. I disagree with you.
The work done by the new force will also manifest as a form of energy and will play a role in the "energy" equation.

10. Oct 21, 2015 ### CrazyNinja

Okay. I just realised something. I mentioned the answer in my own post and was arguing about it. This basically implies that the change in PE is the max KE. I'm sorry for the inconvenience caused. This brings a question to my head: was I right all this while, or were you guys right all the time, or were both of us right and arguing for the same thing (which I dunno why happens a lot with physics -_- )??

11. Oct 21, 2015 ### Staff: Mentor

Does this energy show up as KE? Does it show up as PE? Something else? Will two objects brought to rest at the same location by different means or routes have different energies? What test could be performed on such objects to tell them apart?

12. Oct 21, 2015 ### CrazyNinja

I guess your post answers this the best. Which means I was wrong. I still do not understand where I went wrong though. The equations I wrote look consistent. In addition to that, they tell me that the new force is one with similar characteristics to the conservative force whose field we are working in.

13. Oct 21, 2015 ### Staff: Mentor

No worries. What's important is that you're thinking about the physics and working your way to understanding. One aha! is worth a week of memorization. What you wrote said that the total work done to move an object from one location to another in a conservative field is equal to the change in PE between those locations. That's fine! In fact it's the definition of PE for a conservative field. Where things went awry was in deducing or perhaps implying that the method or route taken would leave an "imprint" on the result. It is a common misconception that crops up when conservative forces and fields are first introduced.

14. Oct 21, 2015 ### Simon Bridge

The trick is to think about what sort of energy other than PE or KE could be involved ...
how would that energy, under the condition of free motion in the PE field, affect the resulting kinetic energy? The extra W in your equation - by what mechanism would it turn into KE? $U(x_0-h) + W = K_{max} + Q$ ... by what mechanism would $W \neq Q$ that is consistent with the description and context? (The object is held at rest and then released.)

In short - it is not so much that you were wrong exactly, but that you were over-thinking the problem. Implicit in any physics problem is that the problem should be solvable, in a sensible way, by the average student having completed and understood the coursework ... physics problems never have all the information included explicitly or they would be very long-winded. Part of doing the problem is making a value judgement about what information is sensible to be included. Part of learning physics is learning to make that judgement. In this case, unless the student has other information, we can consider that everything we need to know about the effect of the additional non-conservative force has been provided: that its result is to place the object, initially stationary, other than at the PE=0 point. Other possible effects depend on information not supplied and not relevant to the lesson - so: it's a red herring.
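The thread's resolution can be made concrete with a small sketch. The force below is my own illustrative assumption (chosen so $F = 0$ at $x = -2$, echoing problem 4); the point is that the history of how the block reached the release point drops out, and the maximum kinetic energy is just the potential-energy drop to the minimum:

```python
# Assumed (illustrative) force F(x) = -(x + 2)  =>  U(x) = (x + 2)**2 / 2,
# minimum at x = -2 with U(-2) = 0. Released from rest at x_release,
# conservation of energy gives K_max = U(x_release) - U_min, regardless of
# what non-conservative force originally placed the block there.
def U(x):
    return 0.5 * (x + 2.0) ** 2

def k_max(x_release):
    return U(x_release) - U(-2.0)

print(k_max(1.0))  # 4.5 (in consistent units), independent of the history
```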
2017-12-13 17:04:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4246273934841156, "perplexity": 758.9688596615096}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948529738.38/warc/CC-MAIN-20171213162804-20171213182804-00162.warc.gz"}
https://kyushu-u.pure.elsevier.com/ja/publications/identification-of-high-transverse-momentum-top-quarks-in-pp-colli
Identification of high transverse momentum top quarks in pp collisions at √s = 8 TeV with the ATLAS detector

The ATLAS collaboration

29 citations (Scopus)

Abstract

This paper presents studies of the performance of several jet-substructure techniques, which are used to identify hadronically decaying top quarks with high transverse momentum contained in large-radius jets. The efficiency of identifying top quarks is measured using a sample of top-quark pairs and the rate of wrongly identifying jets from other quarks or gluons as top quarks is measured using multijet events collected with the ATLAS experiment in 20.3 fb−1 of 8 TeV proton-proton collisions at the Large Hadron Collider. Predictions from Monte Carlo simulations are found to provide an accurate description of the performance. The techniques are compared in terms of signal efficiency and background rejection using simulations, covering a larger range in jet transverse momenta than accessible in the dataset. Additionally, a novel technique is developed that is optimized to reconstruct top quarks in events with many jets. [Figure not available: see fulltext.]

Original language: English
Article number: 93
Journal: Journal of High Energy Physics
Volume: 2016
Issue: 6
DOI: https://doi.org/10.1007/JHEP06(2016)093
Publication status: Published - 1 June 2016

All Science Journal Classification (ASJC) codes
• Nuclear and High Energy Physics
2022-01-29 13:48:22
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.878359317779541, "perplexity": 1798.055296920016}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320306181.43/warc/CC-MAIN-20220129122405-20220129152405-00191.warc.gz"}
http://www.jstor.org/stable/25053815
# Soil-Atmosphere Methane Exchange in Undisturbed and Burned Mediterranean Shrubland of Southern Italy

Simona Castaldi and Angelo Fierro

Ecosystems, Vol. 8, No. 2 (Mar., 2005), pp. 182-190
Stable URL: http://www.jstor.org/stable/25053815
Page Count: 9

## Abstract

Soils represent the primary biotic sink for atmospheric methane $({\rm CH}_{4})$. Uncertainty is associated, however, with global soil ${\rm CH}_{4}$ consumption because of the few data available from many areas and, in particular, from Mediterranean-type ecosystems. In this study, soil-atmosphere ${\rm CH}_{4}$ exchange was measured for one year in a coastal Italian shrubland (maquis), from both undisturbed areas and areas treated with experimental fire. Although fire represents one of the most frequent disturbance factors in seasonally dry environments, very few studies in these ecosystems have focused on its effect on soil ${\rm CH}_{4}$ fluxes. Significant differences in soil ammonium content, water content, and temperature were measured between burned and unburned plots; however, no statistical differences were observed for ${\rm CH}_{4}$ fluxes. ${\rm CH}_{4}$ fluxes varied between -0.39 and -16.1 mg ${\rm CH}_{4}\ {\rm m}^{-2}\ \text{day}^{-1}$ and temporal variations were mainly driven by variations in soil water content and temperature. The highest ${\rm CH}_{4}$ oxidation rates were measured during the driest and warmest period. Low gravimetric soil water content in the top 10 cm, as well as high ${\rm NH}_{4}{}^{+}$ concentration, did not seem to reduce methanotrophic activity, suggesting that maximal ${\rm CH}_{4}$ oxidation activity might take place deeper in the soil profile, at least during part of the year.
2016-12-09 16:42:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4591088891029358, "perplexity": 8051.607348914905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542712.49/warc/CC-MAIN-20161202170902-00146-ip-10-31-129-80.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/2315-brain-fart-print.html
# Brain Fart :/

• March 23rd 2006, 10:09 AM
tnkfub

a stone is tossed down with a speed of 8 m/s from the edge of a cliff 63 m high. how long will it take to hit the foot of the cliff?
a = -9.8 m/s^2
vi = -8 m/s
s = 63 m
63 m = -8 m/s (t) + 1/2 (9.8 m/s^2)(t)^2
does this look right? and if so how do i solve for t? Just confused can someone help me :/ :confused:

• March 23rd 2006, 11:39 AM
earboth

Quote: Originally Posted by tnkfub [the problem above]

Hello, the stone starts at a height of 63 m. Then it is losing height by being tossed and by falling, until it hits the ground; that means the height is zero (0). $0= 63- 8\frac{m}{s} \cdot t- \frac{1}{2} \cdot 9.81 \frac{m}{s^2} \cdot t^2$ This is a quadratic equation in t. I suppose that you know how to solve a quadratic equation. You'll get $x_1 \approx 2.86 s\ \vee \ x_2=-4.49 s$ The negative solution isn't very realistic. In comparison: if the stone had simply been dropped, it would have taken 3.58 s to come down. So the tossing gave the extra kick. Greetings EB

• March 23rd 2006, 03:13 PM
topsquark

Quote: Originally Posted by tnkfub [the problem above]

Earboth didn't leave much to comment on, but I want to make a suggestion, considering the formula you came up with. It might seem like over-kill, but it's good practice to always draw a diagram and select a positive direction.
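Earboth's quadratic can be checked numerically (a sketch; units are meters and seconds, with g = 9.81 m/s² as in his post):

```python
import math

# Solve 0 = 63 - 8 t - (1/2)(9.81) t^2, i.e. a t^2 + b t + c = 0 with
# a = -4.905, b = -8, c = 63, via the quadratic formula.
a, b, c = -0.5 * 9.81, -8.0, 63.0
disc = math.sqrt(b * b - 4 * a * c)
roots = sorted(((-b + disc) / (2 * a), (-b - disc) / (2 * a)))
t_hit = roots[1]  # the positive, physical root
print(round(t_hit, 2))  # 2.86

# Compare: dropped from rest, t = sqrt(2 * 63 / 9.81)
print(round(math.sqrt(2 * 63 / 9.81), 2))  # 3.58
```

Both printed values match the thread: about 2.86 s with the initial toss, versus about 3.58 s for a simple drop.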
2014-12-19 00:55:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7028596997261047, "perplexity": 1341.1615897133036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768089.153/warc/CC-MAIN-20141217075248-00089-ip-10-231-17-201.ec2.internal.warc.gz"}
https://axitom.readthedocs.io/en/latest/quickstart.html
# Quick start

Let's now go through the necessary steps for doing reconstruction of a tomogram based on a single image. First, we need to import the tools:

import axitom as tom
from scipy.ndimage.filters import median_filter

The example data can be downloaded from the AXITOM/tests/example_data/ folder. The dataset was collected during tensile testing of a polymer specimen. Assuming that the example data from the repo is located in the root folder, we can make a config object from the .xtekct file:

config = tom.config_from_xtekct("radiogram.xtekct")

We now import the projection:

projection = tom.read_image(r"radiogram.tif", flat_corrected=True)

As we will use a single projection only in this reconstruction, we will reduce the noise content of the projection by employing a median filter. This works fine since the density gradients within the specimen are relatively small. You may choose any filter of your liking here:

projection = median_filter(projection, size=21)

Now, the axis of rotation has to be determined. This is done by binarizing the image into object and background and determining the center of gravity of the object:

_, center_offset = tom.object_center_of_rotation(projection, background_internsity=0.9)

The config object has to be updated with the correct values:

config = config.with_param(center_of_rot=center_offset)

We are now ready to initiate the reconstruction:

tomo = tom.fdk(projection, config)

The results can then be visualized, e.g. by showing a slice of the reconstructed volume (a sketch completing the truncated example; it assumes tomo is a 3-D array):

import matplotlib.pyplot as plt
plt.imshow(tomo[:, :, tomo.shape[-1] // 2])
plt.show()
2021-07-24 01:56:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1801464855670929, "perplexity": 1846.047142114331}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150067.87/warc/CC-MAIN-20210724001211-20210724031211-00495.warc.gz"}
https://aptitude.gateoverflow.in/6596/nielit-2019-feb-scientist-c-section-b-30
In the following question, part of the sentence is italicised. Four alternative meanings of the italicised part of the sentence are given below the sentence. Mark as your answer that alternative meaning which you think is correct.

I cannot $\textit{put up with}$ that nasty fellow:

1. Praise
2. Forgive
3. Endure
4. Control

Option C is the right answer. To put up with someone means to endure or tolerate their unpleasant behaviour.
2020-07-14 00:25:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.77476966381073, "perplexity": 4727.560068759367}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657147031.78/warc/CC-MAIN-20200713225620-20200714015620-00218.warc.gz"}
https://stats.stackexchange.com/questions/211730/receptive-field-and-convnets
# Receptive Field and ConvNets

So I was reading this paper: https://arxiv.org/pdf/1409.1556.pdf

VERY DEEP CONVOLUTIONAL NETWORKS FOR LARGE-SCALE IMAGE RECOGNITION
Karen Simonyan∗ & Andrew Zisserman+
Visual Geometry Group, Department of Engineering Science, University of Oxford

and at a point it mentions:

"It is easy to see that a stack of two 3 × 3 conv. layers (without spatial pooling in between) has an effective receptive field of 5 × 5; three such layers have a 7 × 7 effective receptive field."

I don't understand how these effective receptive fields are calculated in relation to the convolutions / convolution units.

• Please add a complete citation for the paper. – gung - Reinstate Monica May 10 '16 at 1:10
• @gung I kinda added it now, not sure if you meant that though. – Pf Spf May 10 '16 at 1:13

A $k \times k$ filter applied with stride 1 increases the receptive field by $k - 1$ pixels, so stacked layers accumulate: starting from a single pixel, two 3 × 3 layers give $1 + 2 + 2 = 5$ (a 5 × 5 effective field) and three give $1 + 2 + 2 + 2 = 7$ (a 7 × 7 field).
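The arithmetic behind the paper's claim can be sketched as a tiny helper (stride 1 and no pooling assumed, matching the quoted passage):

```python
# Effective receptive field of a stack of conv layers with stride 1 and no
# pooling: start at 1 pixel and each k x k layer adds (k - 1).
def receptive_field(kernel_sizes):
    rf = 1
    for k in kernel_sizes:
        rf += k - 1
    return rf

print(receptive_field([3, 3]))     # 5  -> two 3x3 layers cover 5x5
print(receptive_field([3, 3, 3]))  # 7  -> three 3x3 layers cover 7x7
```

With strides larger than 1 or pooling in between, the per-layer contribution grows by the product of the preceding strides, but that case is beyond the quoted passage.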
2021-03-06 11:04:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5247118473052979, "perplexity": 1306.8172243553984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178374686.69/warc/CC-MAIN-20210306100836-20210306130836-00508.warc.gz"}
http://lists.w3.org/Archives/Public/public-mathonwebpages/2018Jan/0009.html
# Re: [MathOnWeb] call for comments -- directions for 2018

From: Arno Gourdol <arno@arno.org>
Date: Mon, 15 Jan 2018 13:44:33 +0000
Message-ID: <CAGRYSkPTvcwWu9=LxW6enjgLxs37OitLPoHGLfYV0Uxbp7PWPw@mail.gmail.com>
To: Peter Krautzberger <peter@krautzource.com>
Cc: mathonweb <public-mathonwebpages@w3.org>

On Jan 12, 2018 13:47, "Peter Krautzberger" <peter@krautzource.com> wrote:

1. what topics would you like the group to focus on in 2018?

I'd like the group to have a workstream focused on proposing and advocating solutions that would plug the holes in the current web standards, and in their implementation by browser vendors, to make rendering and editing math easier with the web platform. Personally, I don't think that MathML is the solution. I would rather see CSS and ARIA improved. This would be a less significant effort from a standards and implementation point of view, while providing a more flexible solution. Specifically, I would like to see support in CSS for stretchable fences and notations, features which are currently difficult/impossible to implement well. A standard for math notation, for the purpose of computation, alternate renderings, etc... would be a useful topic as well. In this context, I'm not considering something that would necessarily need to be implemented by browsers, but something that could be used to foster interchange between software. This could be MathML, Latex, Wolfram, or some form of ASCIIMath (or maybe more than one of those).

2. what directions do you want/hope/wish/expect to see take shape in 2018?

The browser vendors moving away from MathML and instead providing the necessary support in CSS, ARIA or other broader standards. Any improvements made in MathML support would not help my goal to provide editable math in the browser, unless browsers implement not only rendering, but editing, which I think is unlikely.
On the other hand, improvements in CSS and ARIA could benefit any number of existing renderers and editors, including MathJax, MathLive, KaTeX and more...

3. what organizational changes would you like to see?

I'm not sure we're all on the same page regarding our goals, and it's difficult to reach that alignment over the weekly conf call. I would suggest we try to organize a F2F that would take place over a couple of days, so we can make more progress on getting aligned and agreeing on a strategy that we could then implement and discuss over the rest of the year via the regular conf call.

Best,
Arno.

Received on Monday, 15 January 2018 13:45:02 UTC
This archive was generated by hypermail 2.3.1 : Monday, 15 January 2018 13:45:03 UTC
https://gitlab.vci.rwth-aachen.de:9000/OpenMesh/OpenMesh/-/blame/45538e75ec05105b840582b1cc4355254ea3dc5c/Doc/changelog.docu
changelog.docu 41.4 KB Jan Möbius committed Oct 11, 2011 1 /** \page om_changelog Changelog Jan Möbius committed Feb 06, 2009 2 3 4 5 6 \htmlonly Jan Möbius committed Jun 04, 2009 7 Jan Möbius committed Sep 02, 2009 8 9 10 11 3.0 (?/?/?,Rev.882) Jan Möbius committed Aug 11, 2013 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 Core • Rewrote all circulators (STL compliant, removed redundant code) • Added stream operator for FVIter. • Added mesh cast for meshes with different but identical traits. IO • close stl files with endsolid Utils • PropertyManager: Added ability to get property name • Improved C++11-branch of PropertyManager and at the same time fixed compile error with gcc 4.7 Documentation • Use short names in Doxygen to prevent windows build failure due to excessive name length Build system • Build unittests with variadic max of 10 on VS 2012 43 2.4 (08/06/2013,Rev.882) Jan Möbius committed Jul 22, 2013 46 47 48 Significant interface changes Jan Möbius committed Aug 06, 2013 49 50 51 • The functions n_vertices(),n_edges().n_faces() return size_t now. • reserve and resize of the property vectors now take size_t • Fixed various other size_t conversions • Jan Möbius committed Jul 22, 2013 52 Jan Möbius committed Apr 02, 2013 53 54 55 56 57 58 Vector Type • vector_type min(const vector_type& _rhs) and vector_type max(const vector_type& _rhs) are declared const now. (Thanks to Vladimir Chalupecky for the hint) • minimize and maximize return vector_type& (reference) instead of vector_type (value) to allow chaining p.minimize(p1).minimize(p2). (Thanks to Vladimir Chalupecky for the hint) Jan Möbius committed Jan 08, 2013 59 Jan Möbius committed Jul 22, 2013 60 61 62 63 64 65 66 Core • Allow PolyConnectivity::delete_edge to mark an edge as deleted, if there are no faces incident. 
• Don't use c headers in c++ files anymore Jan Möbius committed Jan 08, 2013 67 68 IO Jan Möbius committed Jun 05, 2013 69 70 71 • Try to get rid of memory leak in IOManager(Changes the pointer used for IOManager to a static IOManager in the getter function) • Fixed writing face indices in different configurations regarding vertex texture coordinates and vertex normals (Thanks to Robert Luo for the patch) • Fixed a bug with OBJReader that prevented the material color to be loaded correctlyi(Thanks to Karthik Nathan for the patch) • Jan Möbius committed Jan 08, 2013 72 • Made STL Reader work, with the clear = false flag and Redundant lookup removed. ( Thanks to Peter Newman for the patch) • Jan Möbius committed Jan 30, 2013 73 • Missing include, preventing build on VS2012 (Thanks to Mageri Filali Maltouf for the patch) • Jan Möbius committed Mar 05, 2013 74 75 76 77 78 79 80 81 • Fixed various warnings reported by cppcheck • read_mesh now throws a compile error if an Options::Flag enum is passed as an argument instead of an Options object • Make OpenMesh PLY writer work as expected (Thanks to Chin-chia Wu for the patch) • Added colori and colorAi functions to BaseExporter which return Vec3i and Vec4i respectively • Adjusted the PLYWriter to use colori and colorAi for writing ascii files • Added a ColorFloat flag to Options, which can be set to write and read RGBA values as float instead of unsigned char • PLY writer and reader can now also handle color floats • OFF support for floats. Note that for reading binary OFF files with color floats, the user has to set the flag, that floats are expected • Jan Möbius committed Jan 30, 2013 82 83 84 Jan Möbius committed Mar 05, 2013 85 86 87 Utils • Jan Möbius committed Jun 05, 2013 88 • PropertyManager: Enabled initialization of invalid PropertyManager. 
• Jan Möbius committed Mar 05, 2013 89 90 91 92 93 • Added color_cast from Vec3f and Vec4f to Vec3i and Vec3ui • Added color_cast from Vec4f to Vec4i and Vec4ui Jan Möbius committed Jan 30, 2013 94 95 96 Decimater • Make Hausdorff module thread safe. Removed static point vector. (Thanks to Falko Löffler for the fix) • Jan Möbius committed Jan 08, 2013 97 98 Jan Möbius committed Jun 05, 2013 99 100 101 102 103 104 105 Tools • Command Line Decimater: Added an explanation on how to use multiple modules to the commandlineDecimater help output • Command Line Decimater: The normal deviation module now also is a priority module in the commandlineDecimater tool • Gui Decimater: Added decimater related help output to DecimaterGui when 'h' is pressed as the application is running Jan Möbius committed Jan 08, 2013 106 107 Unittests Jan Möbius committed Jul 22, 2013 108 • Added unittest for skipping iterators • Jan Möbius committed Jun 05, 2013 109 • Added unittest for collapse and is_collapse_ok • Jan Möbius committed Jan 08, 2013 110 • Jan Möbius committed Jan 30, 2013 111 • Jan Möbius committed Mar 05, 2013 112 113 114 • Added test for writing and reading vertex colors to and from an OFF file. The default color type Vec3uc from DefaultTraits in Traits.hh is used. 
• Added a ply file written with MeshLab and a corresponding unittest • Added a unittest that writes and reads a binary PLY file with vertex colors • Jan Möbius committed Jan 08, 2013 115 Jan Möbius committed Dec 20, 2012 116 Jan Möbius committed Jun 05, 2013 117 118 119 120 121 Documentation • Adjusted the documation for the decimation tutorial so that the priority module is correctly initialized Jan Möbius committed Dec 20, 2012 122 123 2.3.1 (2012/12/20,Rev.778) Jan Möbius committed Nov 12, 2012 125 Jan Möbius committed Nov 27, 2012 126 127 128 Core • Return vertex handles of newly added vertices in split and split_copy for faces when passing points instead of handles • Jan Möbius committed Dec 12, 2012 129 130 131 132 133 134 135 136 • Fixed copy and paste typo in split_copy for face handle • Replaced fabs by the std methods to fix errors when using norms with double vectors IO • Fixed missing cast in importer which lead to problems when using different vector type. (Thanks to Mario Deuss for the fix) • Fixed bug in OBJ reader, where some faces could be missing(Thanks to Ian Kerr for the Fix) • Jan Möbius committed Nov 27, 2012 137 Jan Möbius committed Nov 22, 2012 138 Jan Möbius committed Nov 27, 2012 139 Documentation Jan Möbius committed Nov 22, 2012 140 141 • Fixed Decimater example • Jan Möbius committed Nov 27, 2012 142 • Improved Docs for is_boundary() functions • Jan Möbius committed Nov 22, 2012 143 144 145 146 147 Unittests • Added unittest for vertexOHalfedge Iterator • Jan Möbius committed Nov 27, 2012 148 • Added unittests for boundary vertices and faces • Jan Möbius committed Dec 12, 2012 149 • Added test for VectorT abs function • Jan Möbius committed Nov 22, 2012 150 Jan Möbius committed Nov 12, 2012 151 152 153 154 155 2.3 (2012/11/12,Rev.758) Jan Möbius committed Jul 03, 2012 157 158 159 Core Jan Möbius committed Sep 10, 2012 160 161 • New garbage collection function with possibility to update existing handles • Fixed crash in garbage 
collection, if certain status flags are not available (warns in debug mode!) • Jan Möbius committed Jul 03, 2012 162 • Fixed some gcc-4.7 incompatibilities • Jan Möbius committed Jul 23, 2012 163 • TriMesh::split now returns the handle of the new vertex • Jan Möbius committed Sep 17, 2012 164 165 166 167 • Fixed delete_face function, not mariking halfedges as deleted, if the edge gets deleted.(Thanks to Maxime Quiblier for the bug report) • Added range based for loops compatible ranges to PolyConnectivity. • Added a function to copy single properties between entities of same type. (Thanks to Duncan Paterson for the patch) • Added functions to copy all properties between entities. (Thanks to Duncan Paterson for the patch) • Jan Möbius committed Sep 22, 2012 168 • Added split copy operations, which copy properties of splitted elements to the newly created ones. ( Thanks to Duncan Paterson for the patch ) • Jan Möbius committed Oct 02, 2012 169 170 171 • Added flag to the property copy functions, if the standard properties should be copied along with the user defined ones • Added function to remove all primitives from the mesh, but leaving properties attached (mesh.clean() ) • Avoid double next_halfedge_handle call in collapse_ok • Jan Möbius committed Oct 18, 2012 172 173 • Fixed the usage of vector traits such that the traits are used and not the vector types value_type. (Thanks to Mario Deuss for the patch) • Fixed bug in halfedge normal computation, where a boundary halfedge was not correctly handled and caused a segfault. 
• Jan Möbius committed Nov 05, 2012 174 • Fixed missing this pointer in PolyMeshT.hh at calc_dihedral_angle • Jan Möbius committed Jul 03, 2012 175 Jan Möbius committed Jun 18, 2012 176 Jan Möbius committed Sep 10, 2012 177 178 Decimater Jan Möbius committed Nov 13, 2012 179 • Changed template parameters of the modules from Decimater type to Mesh type • Jan Möbius committed Sep 10, 2012 180 181 182 183 184 • Added multiple choice decimater (~4 times faster than the heap one, but no guarantee on accuracy) • Added mixed decimater, switching between mc decimater and standard decimater • Decimater modules don't need a decimater type as template argument anymore • Module parameters renamed • Jan Möbius committed Oct 02, 2012 185 186 187 • Added the set_error_tolerance_factor function to ModBaseT and implemented it in inherited classes as necessary • Removed redundant tests in is_collapse_legal that where already performed in is_collapse_ok • ModHausdorff: Removed unused parameter • Jan Möbius committed Nov 05, 2012 188 189 190 191 • Added set_error_tolerance_factor to the modules, which can be used to scale the tolerance by a factor, allowing multiple decimation stages with increasing error tollerance Jan Möbius committed Nov 12, 2012 192 Subdivider Jan Möbius committed Nov 05, 2012 193 194 195 • Fixed typedef problems causing compiler errors • Removed a wrong assertion in the refine method for vector handles • Jan Möbius committed Sep 10, 2012 196 197 Jan Möbius committed Jun 18, 2012 198 199 IO Jan Möbius committed Oct 18, 2012 200 • Added precision option to openmesh writers • Jan Möbius committed Jun 18, 2012 201 • Fixed OBJ Reader not correctly setting per halfedge normals. (Thanks to Patrick Rauber for the report) • Jan Möbius committed Oct 18, 2012 202 203 • OM Reader: Reader used different types on 32/64-bit systems. 
(Thanks to Martin Bayer for the patch) • OM Reader: Also checks user options • Jan Möbius committed Oct 02, 2012 204 205 • Jan Möbius committed Oct 18, 2012 206 207 • OBJ Reader: Follow of user requests (Warning! Old default behaviour was wrong, because the reader read everything, without checking for the user options!) • PLY Reader: Reader now checks the options set by the user and will skip components that are not requested Jan Möbius committed Jun 18, 2012 208 209 Jan Möbius committed Oct 02, 2012 210 Utils Jan Möbius committed Aug 01, 2012 211 212 • Jan Möbius committed Oct 02, 2012 213 • Core/Utils: Added a Random Number generator with larger resolution (Windows supports only ~32k which is extended by this generator • Jan Möbius committed Aug 01, 2012 214 215 Jan Möbius committed Oct 18, 2012 216 217 218 219 Apps • QtViewer App tries to load textures for PLY (and other formats) too, if possible Jan Möbius committed Aug 01, 2012 220 Jan Möbius committed Jun 18, 2012 221 222 223 Unittests • Added unittest for OBJ texture coordinates. • Jan Möbius committed Jun 21, 2012 224 • Jan Möbius committed Jun 28, 2012 225 • Added unittest for getting handles and faces from the iterator. • Jan Möbius committed Aug 01, 2012 226 227 • Added unittest for creating a cube with 6 quads in a poly mesh. • Added unittest adding a cube with 12 faces triangulated to a trimesh. • Jan Möbius committed Sep 17, 2012 228 • Jan Möbius committed Oct 18, 2012 229 230 • Added unittest for obj crash when colors are requested but not available • Added unittests (trimesh and polymesh) for split_copy • Jan Möbius committed Nov 05, 2012 231 232 233 • Added unittest for vector cross product • Added unittest for dihedral angle function • Jan Möbius committed Oct 18, 2012 234 • Added some more unittests for the PLY loader with different user options • Jan Möbius committed Aug 01, 2012 235 • Fixed gcc-4.7 warnings. 
• Jan Möbius committed Jun 18, 2012 236 Jan Möbius committed Jun 14, 2012 237 Jan Möbius committed Jun 28, 2012 238 239 240 241 242 243 Tools • Added catmull clark subdivider. Thanks to Leon Kos for the code. Jan Möbius committed Jun 14, 2012 244 245 Documentation Jan Möbius committed Jun 21, 2012 246 • More documentation for the add_face functions (and some code cleanup) • Jan Möbius committed Jun 14, 2012 247 • Updated doxyfile.config.in version • Jan Möbius committed Jul 12, 2012 248 • Updated documentation for garbage collection • Jan Möbius committed Sep 22, 2012 249 • Updated documentation for the vertex_split operation • Jan Möbius committed Jun 14, 2012 250 251 • Fixed typo on main page Jan Möbius committed Jun 14, 2012 252 Jan Möbius committed Jun 21, 2012 253 254 255 256 Build system • Updated the Compiler flags construction to remove some unnecessary warnings with clang • Jan Möbius committed Jul 23, 2012 257 • Readded missing DOXY_IGNORE_THIS definition to doxygen file • Jan Möbius committed Aug 01, 2012 258 259 • Output OpenMesh Build type in cmake header printout • Windows: Extended min max warning to allow undefs • Jan Möbius committed Oct 02, 2012 260 • Windows: Support DLL build of OpenMesh • Jan Möbius committed Jun 21, 2012 261 262 263 264 265 Jan Möbius committed Jun 14, 2012 266 2.2 (2012/06/14,Rev.587) Jan Möbius committed Mar 05, 2012 269 270 271 272 273 Core • Simplified iterators and made them integrate better with the STL. Specifically, value_type has changed from {Vertex,Edge,...} to {Vertex,Edge,...}Handle so that dereferenced iterators can actually be put to use, now. • Consolidated iterator code. Functionally equivalent but way cleaner than before. 
• Jan Möbius committed Apr 10, 2012 274 • Improved block in update_normals(), if there are no face normals (could cause a crash) • Jan Möbius committed May 16, 2012 275 276 • Fixed usage of operator | instead of dot • Added a check to is_collapse_ok in TriConnectivity if the edge is already deleted or not (Could cause crashes and non-manifold configs before). • Jan Möbius committed Mar 05, 2012 277 278 Jan Möbius committed Apr 05, 2012 279 280 281 IO • Jan Möbius committed May 02, 2012 282 • Fixed debug build crash on mac, reading from stringstream into emtpy string crashed when compiling with clang • Jan Möbius committed May 16, 2012 283 • Fixed stl reader by porting it to std string. It had serious problems in utf8 environments • Jan Möbius committed Apr 05, 2012 284 285 Jan Möbius committed Mar 20, 2012 286 287 288 289 290 Geometry • Added normalized function to VectorT which returns a normalized vector whithout modifying the current one. Jan Möbius committed Jun 14, 2012 291 Utilities Jan Möbius committed Jun 14, 2012 292 293 294 295 • Fixed multiple connections of the omlog streams. (Thanks to Steffen Sauer for the patch) Jan Möbius committed Apr 10, 2012 296 297 298 Unittests • Added unittest for calling the normal computations. • Jan Möbius committed May 16, 2012 299 • Added unittests for ascii and binary stl files. 
• Jan Möbius committed Apr 10, 2012 300 301 302 Jan Möbius committed Apr 05, 2012 303 304 305 306 307 Documentation • Fixed doxygen warnings • Updated collapse function documentation • Updated triangulation function documentation • Jan Möbius committed Apr 10, 2012 308 • Updated normal computation documentations • Jan Möbius committed Apr 05, 2012 309 310 311 Jan Möbius committed Mar 05, 2012 312 313 314 General • Xcode 4.3 compatibility (Fixed issues that caused build errors with XCode 4.3) • Jan Möbius committed Mar 20, 2012 315 • Fixed some size_t uint conversion warnings • Jan Möbius committed Mar 05, 2012 316 Jan Möbius committed Mar 01, 2012 317 318 319 320 2.1.1 (2012/03/01,Rev.544) Tools Jan Möbius committed Mar 05, 2012 325 • Fixed wrong INCLUDE_TEMPLATE include definition headers for NormalCone, and some decimater modules. • Jan Möbius committed Mar 01, 2012 326 327 Jan Möbius committed Feb 23, 2012 328 329 Unittests Jan Möbius committed Mar 01, 2012 330 331 332 333 334 335 336 • Only build the unit tests if google test has been found • Added flag to enable/disable building of unit tests Build system • Drop Template only cc files (they produce no code and trigger some warnings) • Jan Möbius committed Feb 23, 2012 337 Jan Möbius committed Jan 24, 2012 338 339 2.1 (2012/01/24,Rev.531) Jan Möbius committed Jul 01, 2011 342 343 344 345 346 Core • Implemented is_collapse_ok for polymeshes • Implemented split_edge ( split(edgehandle,vertexhandle) ) for poly meshes • Jan Möbius committed Oct 07, 2011 347 • Bugfix for #248 (broken end definition for vertexFaceIter). Thanks to Patrik Rauber for reporting this bug. 
• Jan Möbius committed Jul 01, 2011 348 349 • Fixed compiler error because of extra ',' • Fixed some compiler warnings • Jan Möbius committed Sep 01, 2011 350 • Jan Möbius committed Oct 07, 2011 351 • Avoid some compiler warnings • Jan Möbius committed Oct 24, 2011 352 • Added color caster from vec3f to vec4f setting alpha to 1.0 as default • Jan Möbius committed Nov 04, 2011 353 • Added color caster from vec4i to vec4f converting alpha from 0..255 range to 0..1.0 • Jan Möbius committed Jan 09, 2012 354 355 • Replaced (v0|v1) by dot(v0,v1) in calc_sector_angle as it fails to build otherwise (Thanks to Zhang Juyong for the fix) • Fixed some cppcheck warnings • Jan Möbius committed Jan 20, 2012 356 • Added support for halfedge normals(allows per vertex per face normals) • Jan Möbius committed Jul 01, 2011 357 358 Jan Möbius committed Nov 25, 2011 359 360 361 362 363 Geometry Jan Möbius committed Jul 01, 2011 364 365 366 IO • OFF Reader: Fixed crash on some files containing empty lines(Thanks to R.Schneider for the fix)). • Jan Möbius committed Sep 01, 2011 367 • STL Reader: Add empty mesh when reading empty stl file (don't fail as this is still a valid file) • Jan Möbius committed Dec 01, 2011 368 369 • PLY Reader: Support vertex normals (Thanks to Bruno Dutailly) • PlY Writer: vertex normal support (Thanks to Bruno Dutailly) • Jan Möbius committed Jan 24, 2012 370 • PLY Writer: Fixed output of colors • Jan Möbius committed Dec 01, 2011 371 372 • OBJ Reader: support for vertex colors after vertices or Vertex colors as separate lines. 
(Thanks to Bruno Dutailly) • OBJ Reader: Handle objs without faces(Thanks to Bruno Dutailly) • Jan Möbius committed Nov 28, 2011 373 • Jan Möbius committed Sep 01, 2011 374 375 Jan Möbius committed Nov 04, 2011 376 Decimater Jan Möbius committed Jan 24, 2012 377 Jan Möbius committed Nov 04, 2011 378 379 • Added decimate_to_faces function (Decimating to a target face count) • Jan Möbius committed Nov 07, 2011 380 • Jan Möbius committed Nov 25, 2011 381 382 • Jan Möbius committed Nov 04, 2011 383 384 Jan Möbius committed Jan 24, 2012 385 386 387 388 389 390 391 Subdivider • Modified base class to support fixed positions on already existing vertices • Added LongestEdge subdivider (Always split the currently longest edge, until a maximal edge length on the mesh is reached) • Updated Loop subdivider for the fixed vertex positions Jan Möbius committed Oct 07, 2011 392 393 Unittests Jan Möbius committed Oct 11, 2011 394 • Enabled unittests for windows • Jan Möbius committed Oct 07, 2011 395 396 397 • Added test for VertexFaceiter (with and without holes) • Jan Möbius committed Nov 04, 2011 398 • Jan Möbius committed Oct 07, 2011 399 • Added test for FaceFaceiter (with and without holes) • Jan Möbius committed Nov 25, 2011 400 401 • Added test for collapse and is_collapse_ok operations • Jan Möbius committed Dec 01, 2011 402 403 • Added tests for vertex colors in obj files • Added tests for ply reader with and without normals, ascii mode • Jan Möbius committed Oct 07, 2011 404 405 Jan Möbius committed Oct 10, 2011 406 407 408 409 Doc • Document that if OpenMesh is linked statically OM_STATIC_BUILD has to be defined on the executable to make readers work correctly • Improved MeshIO Documentation • Jan Möbius committed Oct 11, 2011 410 411 • Document behaviour of circulators on deleted elements • Jan Möbius committed Oct 10, 2011 412 • Get rid of most doxygen warnings • Jan Möbius committed Nov 25, 2011 413 • Improved documentation of the decimater and its modules • Jan 
Möbius committed Oct 10, 2011 414 Jan Möbius committed Sep 01, 2011 415 416 417 418 419 420 Build System • Append a 'd' to the lib name if in debug mode and not in release mode • Changed build directory contents on Mac (Build all binaries in Build dir only) • Disable Fixbundle on Mac (not required at the moment and hangs forever) • Jan Möbius committed Oct 07, 2011 421 • Added unittest directory and Build system (build explicitly with make unittests) • Jan Möbius committed Oct 11, 2011 422 423 424 425 • Skip fixbundle when building without apps on windows • On windows: If release and debug libs are build in same directory, install them both • On windows: Make sure that all dlls are copied • On windows: create start menu shortcut to Documentation • Jan Möbius committed Nov 28, 2011 426 • On windows: MinGW support • Jan Möbius committed Jul 01, 2011 427 Jan Möbius committed May 20, 2011 428 429 430 431 2.0.1 (2011/05/20,Rev.389) Jan Möbius committed Apr 12, 2011 432 433 434 435 436 437 438 439 440 Apps • Get rid of glew dependencies • Remove a lot of unused qt libraries which were linked before • Do not link libXi and Xmu as we don't need it • Added two new subdivision schemes (Interpolating Sqrt3 Labsik-Greiner and Modified Butterfly) to subdivider applications Jan Möbius committed Jan 05, 2011 441 442 443 444 445 Core • Work with gcc 4.6: ptrdiff_t not correctly included from std, Thanks to Ville Heiskanen for the patch) Jan Möbius committed Jan 26, 2011 446 447 448 449 450 Tools • Fixed bug in decimater where boundary check was using the wrong halfege(Thanks to Michal Nociar for the patch) Jan Möbius committed Jan 05, 2011 451 452 453 Build System • Updated debian dir (thanks to Jean Pierre Charalambos) • Jan Möbius committed Mar 09, 2011 454 455 456 • Removed glew depedency • Only one fixbundle on mac and windows • Run fixbundle only in standalone mode • Jan Möbius committed Apr 12, 2011 457 • Run fixbundle only once • Jan Möbius committed Mar 09, 2011 458 • 
Change debian control to reduce dependencies (glew,some qt libs) • Jan Möbius committed Apr 12, 2011 459 • Fixed BUILD_APPS macro • Jan Möbius committed Jan 05, 2011 460 461 Jan Möbius committed Feb 10, 2011 462 463 464 465 466 467 Documentation • Fixed error in image about edge collapses • Fixed wrong strip path in doxygen settings • Fixed compilation instructions for mac • Switched to white background with black text • Jan Möbius committed Apr 12, 2011 468 • Removed glew from docs • Jan Möbius committed Feb 10, 2011 469 470 Jan Möbius committed Jan 05, 2011 471 472 2.0 (2010/12/21,Rev.356) Jan Möbius committed Mar 09, 2010 474 475 476 477 Core • Improve computation of normals for poly meshes ( now the average normal is taken not the normal of one triangle) • Jan Möbius committed Sep 29, 2010 478 • Avoid % Operator in normal calculation (triggers compiler error on vectors of size other than 3) • Jan Möbius committed Apr 28, 2010 479 • Added status flag indicating that mesh reader duplicated primitives to avoid non-manifold configurations • Jan Möbius committed Sep 29, 2010 480 481 • Setting associated handles of iterator types invalid if reference mesh contains none of the respective entities. • Jan Möbius committed Apr 28, 2010 482 483 Jan Möbius committed Dec 01, 2010 484 485 486 487 488 489 IO • PLY Reader: Avoid failure of file writing if face colors or face normals are requested for PLY files. 
Jan Möbius committed Sep 29, 2010 490 491 492 Math • Added missing include of string.h to VectorT.hh (Thanks to Justin Bronder for reporting this) • Jan Möbius committed Dec 01, 2010 493 • Added some vector norm functions for L1 norm, and absolute mean,max,min(Thanks to Michal Nociar) • Jan Möbius committed Sep 29, 2010 494 495 496 Jan Möbius committed Apr 28, 2010 497 498 499 Tools • OpenMesh mesh dual generator added (Thanks to Clement Courbet for providing the code) • Jan Möbius committed Dec 01, 2010 500 • Added Sqrt3InterpolatingSubdividerLabsikGreinerT and ModifiedButterFlyT (Thanks to Clément Courbet for providing the code) • Jan Möbius committed Apr 28, 2010 501 502 503 504 505 Apps • OpenMesh mesh dual generator application added (Thanks to Clement Courbet for providing the code) • Jan Möbius committed Mar 09, 2010 506 Jan Möbius committed Mar 08, 2010 507 Jan Möbius committed Apr 29, 2010 508 509 510 Documentation • Jan Möbius committed Jun 08, 2010 511 • Added treeview on the left • Jan Möbius committed Apr 29, 2010 512 • Generate subpage structure to make treeview more organized • Jan Möbius committed Jun 08, 2010 513 • Enabled Doxygen stl support • Jan Möbius committed Sep 29, 2010 514 515 • Fixed documentation for add_face and some other typos (Thanks to Yamauchi Hitoshi) • Added preprocessor directives such that doxigen parses vectorT correctly • Jan Möbius committed Apr 29, 2010 516 517 518 519 Build System Jan Möbius committed Dec 21, 2010 520 521 • Copy Doc directories to installers • Copy shared Qt Libs to build dir on windows • Jan Möbius committed Apr 29, 2010 522 • Updated glew and glut finders • Jan Möbius committed Jun 08, 2010 523 • Respect seperate settings for build types (release,debug,relwithdebinfo) • Jan Möbius committed Sep 29, 2010 524 • Extend macros acg_append_files_recursive acg_append_files to not include files starting with a dot • Jan Möbius committed Apr 29, 2010 525 526 Jan Möbius committed Mar 08, 2010 527 528 2.0-RC5 
(2010/03/08,Rev.305) Jan Möbius committed Nov 26, 2009 530 531 532 533 Core • Fixed build error in function calc_dihedral_angle_fast • Jan Möbius committed Dec 08, 2009 534 • Jan Möbius committed Dec 17, 2009 535 • Provide begin/end functions for circulators • Jan Möbius committed Dec 22, 2009 536 • mostream crash fixed (Thanks to Adrian Secord for providing the patch) • Jan Möbius committed Jan 21, 2010 537 • added colors to status flags for edges ( request_edge_color ... ) • Jan Möbius committed Feb 25, 2010 538 • Fixed issue with wrong normal scalar type when using integer points and float normals ( Thanks to Clement Courbet for reporting this bug) • Jan Möbius committed Nov 26, 2009 539 540 541 542 543 • Fixed build error in STL writer • Jan Möbius committed Dec 22, 2009 544 • Fixed and enhanced PLY reader to improve handling of unknown properties (Thanks to Michal Nociar for the patch) • Jan Möbius committed Jan 04, 2010 545 • Fixed crash in Offreader with DOS line endings. (Thanks to Adrian Secord for the patch) • Jan Möbius committed Feb 25, 2010 546 • Fixed obj readers for some files containing tabs • Jan Möbius committed Nov 26, 2009 547 548 Jan Möbius committed Mar 02, 2010 549 550 551 552 553 Apps • OpenMesh progressive mesh generator readded • OpenMesh progressive mesh viewer readded • OpenMesh progressive mesh analyzer readded • Jan Möbius committed Mar 02, 2010 554 • OpenMesh progressive mesh synthesizer readded • Jan Möbius committed Mar 02, 2010 555 556 Jan Möbius committed Nov 26, 2009 557 558 Documentation Jan Möbius committed Mar 01, 2010 559 • Updated Documentation front page • Jan Möbius committed Nov 26, 2009 560 561 • Jan Möbius committed Jan 21, 2010 562 • Updated tutorial and docu for mesh circulators • Jan Möbius committed Feb 25, 2010 563 • Updated tutorial on deleting geometry • Jan Möbius committed Dec 08, 2009 564 • Examples for flipping and collapsing edges • Jan Möbius committed Nov 26, 2009 565 • Fixed a lot of doxygen warnings • 
• Fixed some spellings

Build System
• Fixed rpath issue when building and installing on MacOS
• Fixed install target for MacOS (headers were not copied due to a bug in cmake)

2.0-RC4 (2009/11/18, Rev. 227)

Core
• Fixed clear functions to swap vectors. This frees OpenMesh memory when clear is invoked.
• Fixed bug in the handle() function when getting a handle from a given Halfedge (reported by Rob Patro)
• Fixed memory leak in the assignment operator (reported by Meng Luan; thanks to Ilya A. Kriveshko for the patch)

Readers/Writers
• Fixed reading of ply files with unknown properties
• Added support for texture coordinates in ply files
• OMFormat: fixed empty template parameter issue under msvc
• OBJWriter: fixed writing of normals (missing / when skipping texture coordinates)

Build system
• Build shared and static versions under linux (cmake)
• Added -DBUILD_APPS=OFF cmake flag to skip building of apps (cmake)
• Generate sonames under linux (cmake)
• Debian build dir for building Debian packages (thanks to Jean Pierre Charalambos)
• Package generator for windows: builds a setup file containing precompiled static libs and includes for windows
• Throw a warning if a min or max macro is defined under windows and suggest NOMINMAX (thanks to Ingo Esser)

Documentation
• Updated documentation mainpage
• Updated properties tutorial to include all request_... functions
• Added tutorial on deleting geometry
• Fixed Traits example
• Other minor fixes
• Added tutorials as compilable source code

Misc
• Updated debian dir to build debs (thanks to Jean Pierre Charalambos)

• PLY writer fix (thanks to Marc Hugi)
• PLY reader fix (wrong parsing of uchar binary data)
• PLY reader warnings fix (thanks to Ilya A. Kriveshko)

Tools
• Smoother now respects feature primitives
• Decimater improvements and fixes (thanks to Ilya A. Kriveshko)

Build system
• Updated directory structure
• Changed library names to libOpenMesh and libOpenMeshTools
• cmake support
• Bugfixes to the qmake build system
• Keep some basic ACGMake files around (acgmake is deprecated!! We will not provide support for it! Please use cmake or qmake instead.)

Other
• Fixed some warnings with latest gcc
• Per-halfedge texture coordinates added
• Extended functions to get available properties

2.0-RC2 (2009/02/17)
• Fix for the OBJ reader not reading texture coordinates correctly (thanks to Kamalneet Singh)
• Fixed included Visual Studio files

2.0-RC1 (2009/01/28)
• Reader/writer have been updated
• Some general bugfixes
• The usage of acgmake has become deprecated since the last release; it has been replaced by qmake
• Improved documentation
• Dropped support for acgmake, which has been entirely replaced by qmake
• Credits to Simon Floery, Canjiang Ren, Johannes Totz, Leon Kos, Jean Pierre Charalambos, Mathieu Gauthier
1.9.7 (2008/10/13)
• Ported applications to qt4
• Bugfixes in the Decimater
• Improved documentation
• Dropped support for gcc 3.x compilers (this does not mean that it does not work anymore)
• Dropped support for versions of Visual Studio older than 2008

1.1.0 (2007/02/26)
• Fixed a VS 2005 compilation issue regarding the Sqrt3 subdivision class.
• Fixed GCC 4.1 compilation problems.
• The STL writer routine now correctly closes the "solid" block with "endsolid".
• The API of the vector class has been changed slightly due to problems with some versions of GCC: the cast operator to the scalar type has been removed and replaced by the function data(). Hence, existing code like

      Vec3f vertex;
      ...
      glVertex3fv( vertex );

  has to be changed to

      Vec3f vertex;
      ...
      glVertex3fv( vertex.data() );

1.0.0 (2005/09/20)
• Mainly fixed the bugs collected in beta4.
• Slightly changed module handling in the Decimater.
• Removed some parts to keep the project maintainable.
• Fixed MacOS compilation problems.
• Compatibility with the latest gcc4 compilers.

1.0.0-beta4 (2004/01/20)
• Bugs fixed: 1.0.0-beta3:001
• Documentation of module Core completed.
• Documentation of modules Tools::Decimater and Tools::Subdivider completed.
• Revised class structure for uniform subdivision.
• Revised rule handling for composite adaptive subdivision.

1.0.0-beta3 (2003/12/04)
The beta3 fixes only the known bugs in beta2.
• Bugs fixed: 1.0.0-beta2:{001, 002, 003, 004}
• Known Bugs: If a previously read .off file had normals/texcoords, a second read of another file without normals or texcoords will return with the option bits for normals/texcoords enabled.
1.0.0-beta2 (2003/11/05)
• Change of directory structure:

      +- OpenMesh/
         +- Core/    # previously OpenMesh
         +- Tools/   # previously OpenMeshTools
         +- Apps/    # previously OpenMeshApps
         +- Win/     # contains all solution files and projects for MS VC++
         +- Doc/     # contains all documentation

  Note! The supplied script OpenMesh/migrate.sh can be used to adjust include paths and ACGMakefiles. (It's not guaranteed that the script handles every case, but so far it did not miss a file to adjust.)
• Porting issues: due to a number of major changes in the structure of OpenMesh, a few incompatibilities have been introduced. Have a look in OpenMesh/porting.txt for hints on how to treat your source when updating from 0.11.x to 1.0.0. Hint! The supplied script OpenMesh/migrate.sh does a few of the necessary modifications.
• The list kernel has been removed
• Improved IO support:
  • Read/write ascii and binary STL
  • Read/write ascii and binary OFF
  • Support for vertex normals and vertex texcoords in OFF and OBJ
  • Support for importing diffuse material into the face color property from OBJ files
  • Proprietary binary format OM, supporting read/write of custom properties
• Improved coordinate class OpenMesh::VectorT:
  • VectorT::vectorize(Scalar) is no longer static; now it changes the vector. Use it e.g. to clear vector values.
  • Casts between two vector classes of the same dimension are now explicit. This avoids unwanted and expensive casts.
  • Optimized performance by manual loop-unrolling. These optimizations are partial specializations for vectors of dimension 2, 3 and 4. Since Microsoft's VC++ still does not support partial specialization and also provides rather poor loop-unrolling, users of this compiler are stuck with lower performance.
• OpenSG support:
  • New kernel type class TriMesh_OSGArrayKernelT<>.
  • Uses OpenSG geometry types!
  • PolyMesh not supported, yet!
  • Use OpenMesh::Kernel_OSG::bind<> to link a mesh object with an osg::Geometry and vice versa.
    Please note that both objects share the same data!
  • Binding a mesh to an osg::Geometry changes the content of the osg::Geometry!
  • Triangulates non-triangular faces!
  • Multi-indexed geometry not supported!
  • Transfer of vertex normals
  • Limited capability to transfer colors: so far, only osg::Color3f <-> OpenMesh::Vec3ub
• Microsoft VC++ 7.0 project files
• Tutorial solution file
• New tools/applications:
  • Tools/VDPM (View Dependent Progressive Mesh library)
  • Apps/VDProgMesh/mkbalancedpm: create a balanced progressive mesh
  • Apps/VDProgMesh/Analyzer: create a view-dependent progressive mesh file
  • Apps/VDProgMesh/Synthesizer: a viewer for the VDPM file
  • Apps/mconvert
  • Apps/IvViewer: added support for Coin
• Known Bugs (the following bugs are related to TriMesh_OSGArrayKernelT<>):
  • 001: Cannot request/release the default attribute halfedge_status.
  • 002: Cannot release the default attribute vertex_texcoords.
  • 003: Assignment operator =() does not work properly.
  • 004: No copy-constructor available!

0.11.1 (2002/12/02)
• Bugs fixed: 0.11.0:{001, 002, 003, 004, 006}
  (006: use acgmake version 1.1.1.)
• Preprocessor warnings of gcc >= 3 fixed.
• Added some more dynamic ways to append properties to items: OpenMesh::Any and OpenMesh::PropertyT.
• VectorT: standard operator less added; Vec4f is now 16-byte aligned when using the gcc compiler (for later SIMD usage).
• Use OM_STATIC_BUILD=1 when creating a static library. The static version of the library needs to include IOInstances.hh, which is done in that case. When compiling with MS VC7, the define is set automatically, as the DLL version is not supported yet. acgmake (version >= 1.1) sets the define automatically when using the flag for static compiling.
• The read_mesh() methods now clear the mesh before reading a new one.

0.11.0 (2002/09/07)
• Bugs fixed: 0.10.2:{001, 002, 003, 004, 005}
• Added MS VC++ 7.0 project files for the tutorial programs. (Have a look in /Win32/MSVC/.)
• New input/output management, see \ref mesh_io. The new interface is as backwards-compatible as possible. Only the read_[off,obj] and write_[off,obj] methods no longer exist. You should now include OpenMesh/IO/MeshIO.hh instead of MeshReader.hh and MeshWriter.hh. The old include files may be removed in a future release.
• Added: generic algorithms may now define their own traits; these traits can be merged with other user-defined traits by the OM_Merge_Traits macro. See tutorial \ref tutorial_06.
• Added generic handle <-> item conversions, see
  • OpenMesh::Concepts::KernelT::handle() and
  • OpenMesh::PolyMeshT::deref().
  The kernel methods vertex_handle(), halfedge_handle(), edge_handle(), face_handle() should no longer be used, but still exist for compatibility reasons. You can hide them by uncommenting the define OM_HIDE_DEPRECATED in OpenMesh/System/config.h.
• Internal methods, like Vertex::halfedge_handle() or Vertex::point(), are now hidden, since the respective kernel methods (like MeshKernel::halfedge_handle(VertexHandle) or MeshKernel::point(VertexHandle)) should be used instead.
• Added convenience methods for the mesh kernels:
  • OpenMesh::Concepts::KernelT::n_halfedges()
  • OpenMesh::Concepts::KernelT::halfedges_empty()
  • OpenMesh::Concepts::KernelT::halfedge_handle(unsigned int _i)
• Known Bugs:
  • 001: Ambiguous auto_ptr<> cast in ExporterT.hh.
  • 002: BaseImporter and BaseExporter have no virtual destructor.
  • 003: Reader does not work correctly when reading from an istream.
  • 004: cross(VectorT, VectorT) returns Scalar instead of VectorT.
https://hackage.haskell.org/package/LogicGrowsOnTrees
# LogicGrowsOnTrees: a parallel implementation of logic programming using distributed tree exploration

NOTE: In addition to the following package description, see

You can think of this package in two equivalent ways. First, you can think of it as an implementation of logic programming that is designed to be parallelized using workers that have no memory shared between them (hence, "distributed"). Second, you can think of this package as providing infrastructure for exploring a tree in parallel. The connection between these two perspectives is that logic programming involves making nondeterministic choices, and each such choice is equivalent to a branch point in a tree representing the search space of the logic program.

In the rest of the reference documentation we will focus on the tree perspective, simply because a lot of the functionality makes the most sense from the perspective of working with trees, but one is always free to ignore this and simply write a logic program using the standard approach of using MonadPlus to indicate choice and failure; the Tree implementation of this typeclass will take care of the details of turning your logic program into a tree. (If you are not familiar with this approach, then see TUTORIAL.md at http://github.com/gcross/LogicGrowsOnTrees/blob/master/TUTORIAL.md.)

To use this package, you first write a function that builds a tree (say, by using logic programming); the LogicGrowsOnTrees module provides functionality to assist in this. You may have your function either return a generic MonadPlus or MonadExplorable (where the latter lets you cache expensive intermediate calculations so that they do not have to be performed again if this path is re-explored later), or you may have it return a Tree (or one of its impure friends) directly. You can then test your tree using the visiting functions in the LogicGrowsOnTrees module.
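Because a logic program written against the generic MonadPlus interface is agnostic to the monad that runs it, you can prototype with the ordinary list monad from base before ever instantiating it at Tree. The following minimal sketch (my own illustration, not code from the package; it uses only base and no LogicGrowsOnTrees imports) shows the choice-plus-guard style:

```haskell
import Control.Monad (MonadPlus, guard, msum)

-- A tiny logic program written against the generic MonadPlus interface:
-- find all pairs (x, y) drawn from 1..6 with x < y whose sum is a given total.
pairsSummingTo :: MonadPlus m => Int -> m (Int, Int)
pairsSummingTo total = do
  x <- msum (map return [1 .. 6])  -- nondeterministic choice of x
  y <- msum (map return [1 .. 6])  -- nondeterministic choice of y
  guard (x < y)                    -- failure prunes this branch
  guard (x + y == total)
  return (x, y)

main :: IO ()
main = print (pairsSummingTo 7 :: [(Int, Int)])
```

Instantiated at the list monad (as in main above) this enumerates all solutions; the same function could instead be instantiated at Tree and handed to the package's exploration machinery unchanged.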
WARNING: If you need something like state in your tree, then you should stack the state monad (or whatever else you want) on top of Tree rather than below it. The reason for this is that if you stack the monad below TreeT, then your monad will be affected by the order in which the tree is explored, which is almost never what you want, in part because if you are not careful you will break the assumption made by the checkpointing and parallelization infrastructure that it does not matter in what order the tree is explored, or even whether some parts are explored twice or not at all in a given run. If side-effects that are not undone by backtracking are indeed what you want, then you need to make sure that your side-effects do not break this assumption; for example, a monad which memoizes a pure function is perfectly fine. By contrast, if you are working within the IO monad and writing results to a database rather than returning them (and assuming that duplicate results would cause problems), then you need to check that you aren't writing the same result twice, such as by using the LogicGrowsOnTrees.Location functionality to identify where you are in the tree so you can query whether your current location is already listed in the database.

If you want to see examples of generating a tree to solve a problem, then see the LogicGrowsOnTrees.Examples.MapColoring or LogicGrowsOnTrees.Examples.Queens modules, which have some basic examples of using logic programming to find and/or count the number of solutions to a given map coloring problem and a given n-queens problem. The LogicGrowsOnTrees.Examples.Queens.Advanced module has my own solution to the n-queens problem, where I use symmetry breaking to prune the search tree, cutting the runtime by about a factor of three.

Once your tree has been debugged, you can start taking advantage of the major features of this package.
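The symmetry-breaking idea mentioned above can be illustrated in miniature with the list monad (a stand-in for any MonadPlus); this is an illustrative sketch of the general technique, not the code from the Advanced module. To avoid generating the same set once per permutation, make only ascending choices, so each equivalence class is produced exactly once:

```haskell
import Control.Monad (guard)

-- Factor out permutation symmetry: choose each successive element strictly
-- greater than the previous one, so every 3-element subset of 1..4 is
-- generated exactly once instead of 3! = 6 times.
ascendingTriples :: [[Int]]
ascendingTriples = do
  a <- [1 .. 4]
  b <- [1 .. 4]
  guard (b > a)  -- only choose later parts greater than earlier parts
  c <- [1 .. 4]
  guard (c > b)
  return [a, b, c]

main :: IO ()
main = print ascendingTriples
```

The pruned tree has 4 leaves (one per subset) rather than 24 (one per permutation), and branches that cannot lead to an ascending triple are cut off as early as possible.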
If you are interested in checkpointing, but not parallelization, then you can use the step functions in the LogicGrowsOnTrees.Checkpoint module to sequentially explore a tree one node at a time, saving the current checkpoint as often as you desire; at any time the exploration can be aborted and resumed later. Most likely, though, you will be interested in using the parallelization infrastructure rather than just the checkpointing infrastructure. The parallelization infrastructure uses a supervisor/worker model, and is designed such that the logic used to keep track of the workers and the current progress is abstracted away into the LogicGrowsOnTrees.Parallel.Common.Supervisor module; one then uses one of the provided adapters (or possibly your own) to connect the abstract model to a particular means of running multiple computations in parallel, such as multiple threads, multiple processes on the same machine, multiple processes on a network, and MPI; the first option is included in this package and the others are provided in separate packages. Parallelization is obtained by stealing workloads from workers; specifically, a selected worker will look back at the (non-frozen) choices it has made so far, pick the first one, freeze it (so that it won't backtrack and try the other branch), and then hand the other branch to the supervisor which will then give it to a waiting worker. To use the parallelization infrastructure, you have two choices. First, you can opt to use the adapter directly; the exploration functions provided by the adapter are relatively simple (compared to the alternative to be discussed in a moment) and furthermore, they give you maximum control over the adapter, but the downside is that you will have to re-implement features such as regular checkpointing and forwarding information from the command line to the workers yourself. 
Second, you can use the infrastructure in LogicGrowsOnTrees.Parallel.Main, which automates most of the process for you, including parsing the command line, sending information to the workers, determining how many workers (if applicable) to start up, offering the user a command-line option to specify whether, where, and how often to checkpoint, etc.; this infrastructure is also completely adapter independent, which means that when switching from one adapter to another all you have to do is change one of the arguments in your call to the main function you are using in LogicGrowsOnTrees.Parallel.Main. The downside is that the call to use this functionality is a bit more complex than the call to use a particular adapter, precisely because of its generality. If you want to see examples of using the LogicGrowsOnTrees.Parallel.Main module, check out the example executables in the examples/ subdirectory of the source distribution.

If you are interested in writing a new adapter, then you have a couple of options. First, if your adapter can spawn and destroy workers on demand, then you should look at the LogicGrowsOnTrees.Parallel.Common.Workgroup module, as it has infrastructure designed for this case; look at LogicGrowsOnTrees.Parallel.Adapter.Threads for an example of using it. Second, if your adapter does not meet this criterion, then you should look at the LogicGrowsOnTrees.Parallel.Common.Supervisor module; your adapter will need to run within the SupervisorMonad, with its own state contained in its own monad below the SupervisorMonad in the stack; for an example, look at the LogicGrowsOnTrees-network package. NOTE: This package uses the hslogger package for logging; if you set the log level to INFO or DEBUG (either by calling the functions in hslogger yourself or by using the -l command line option if you are using Main), then many status messages will be printed to the screen (or wherever else the log has been configured to be written).
The modules are organized as follows:

• LogicGrowsOnTrees: basic infrastructure for building and exploring trees
• LogicGrowsOnTrees.Checkpoint: infrastructure for creating and stepping through checkpoints
• LogicGrowsOnTrees.Examples.MapColoring: simple examples of computing all possible colorings of a map
• LogicGrowsOnTrees.Examples.Queens: simple examples of solving the n-queens problem
• LogicGrowsOnTrees.Examples.Queens.Advanced: a very complicated example of solving the n-queens problem using symmetry breaking
• LogicGrowsOnTrees.Location: infrastructure for when you want to have knowledge of your current location within a tree
• LogicGrowsOnTrees.Parallel.Common.Message: common infrastructure for exchanging messages between worker and supervisor
• LogicGrowsOnTrees.Parallel.Common.Process: common infrastructure for the case where a worker has specific communication channels for sending and receiving messages; it might seem like this should always be the case, but it is not true for threads, as the supervisor has direct access to the worker thread, nor for MPI, which has its own idiosyncratic communication model
• LogicGrowsOnTrees.Parallel.Common.RequestQueue: infrastructure for sending requests to the SupervisorMonad from another thread
• LogicGrowsOnTrees.Parallel.Common.Supervisor: common infrastructure for keeping track of the state of workers and of the system as a whole, including determining when the run is over
• LogicGrowsOnTrees.Parallel.Common.Worker: contains the workhorse of the parallel infrastructure: a thread that steps through a given workload while continuously polling for requests
• LogicGrowsOnTrees.Parallel.Common.Workgroup: common infrastructure for the case where workers can be added to and removed from the system on demand
• LogicGrowsOnTrees.Parallel.ExplorationMode: specifies the various modes in which the exploration can be done
• LogicGrowsOnTrees.Parallel.Main: a unified interface to the various adapters that automates much of the process, such as processing the command line, forwarding the needed information to the workers, and performing regular checkpointing if requested via a command line argument
• LogicGrowsOnTrees.Parallel.Purity: specifies the purity of the tree being explored
• LogicGrowsOnTrees.Path: infrastructure for working with paths through the search tree
• LogicGrowsOnTrees.Utils.Handle: a couple of utility functions for exchanging serializable data over handles
• LogicGrowsOnTrees.Utils.IntSum: a monoid that contains an Int to be summed over
• LogicGrowsOnTrees.Utils.PerfectTree: provides algorithms for generating various simple trees
• LogicGrowsOnTrees.Utils.WordSum: a monoid that contains a Word to be summed over
• LogicGrowsOnTrees.Utils.Word_: a newtype wrapper that provides an ArgVal instance for Word
• LogicGrowsOnTrees.Workload: infrastructure for working with Workloads

Of the above modules, the ones you will be using most often are LogicGrowsOnTrees (for building trees), one of the adapter modules (such as LogicGrowsOnTrees.Parallel.Adapter.Threads), and possibly LogicGrowsOnTrees.Parallel.Main. If you are counting the number of solutions, then you will also want to look at LogicGrowsOnTrees.Utils.WordSum. Finally, if your program takes a Word as a command line argument or option, then you might find the LogicGrowsOnTrees.Utils.Word_ module to be useful. The other modules provide lower-level functionality; in particular, the LogicGrowsOnTrees.Parallel.Common.* modules are primarily geared towards people writing their own adapter.
Versions: 1.0.0, 1.0.0.0.1, 1.1, 1.1.0.1, 1.1.0.2 (see CHANGELOG.md)

Dependencies: AbortT-mtl (==1.0.*), AbortT-transformers (==1.0.*), base (>4 && <5), bytestring (>=0.9 && <0.11), cereal (>=0.3 && <0.5), cmdtheline (==0.2.*), composition (>=0.2 && <1.1), containers (>=0.4 && <0.6), data-ivar (==0.30.*), derive (>=2.5.11 && <2.6), directory (>=1.1 && <1.3), hslogger (==1.2.*), hslogger-template (==2.0.*), lens (>=3.8 && <4.1), LogicGrowsOnTrees, MonadCatchIO-transformers (==0.3.*), monoid-statistics (==0.3.*), mtl (==2.1.*), multiset (==0.2.*), old-locale (==1.0.*), operational (==0.2.*), prefix-units (==0.1.*), pretty (==1.1.*), PSQueue (==1.1.*), sequential-index (==0.2.*), split (==0.2.*), stm (>=2.3 && <2.5), time (==1.4.*), transformers (>=0.2 && <0.4), void (==0.6.*), yjtools (>=0.9.7 && <0.10)

License: BSD-3-Clause
Author/Maintainer: Gregory Crosswhite
Categories: Control, Distributed Computing, Logic, Parallelism
Bug tracker: https://github.com/gcross/LogicGrowsOnTrees/issues
Source repository: git clone git://github.com/gcross/LogicGrowsOnTrees.git (this version: tag 1.1.0.2)
Uploaded by GregoryCrosswhite at 2014-03-09T04:25:10Z
Distributions: NixOS:1.1.0.2
Executables: tutorial-1 through tutorial-13, count-all-trivial-tree-leaves, print-some-nqueens-solutions-using-push, print-some-nqueens-solutions-using-pull, print-an-nqueens-solution, print-all-nqueens-solutions, count-all-nqueens-solutions, readme-full, readme-simple

Flags (all disabled by default; type Automatic):
• warnings: enables most warnings
• pattern-warnings: enables only pattern match warnings
• examples: enable building the examples
• tutorial: enable building the tutorial examples
Use -f <flag> to enable a flag, or -f -<flag> to disable it.

# What is LogicGrowsOnTrees?

LogicGrowsOnTrees is a library that lets you use a standard Haskell domain specific language (MonadPlus and friends) to write logic programs (by which we mean programs that make non-deterministic choices and have guards to enforce constraints) that you can run in a distributed setting.

# Could you say that again in Haskellese?

LogicGrowsOnTrees provides a logic programming monad designed for distributed computing; specifically, it takes a logic program (written using MonadPlus), represents it as a (lazily generated) tree, and then explores the tree in parallel.

# What do you mean by "distributed"?

By "distributed" I mean parallelization that does not require shared memory but only some form of communication. In particular, there is a package that is a sibling to this one that provides an adapter for MPI, which gives you immediate access to large numbers of nodes on most supercomputers. In fact, the following is the result of an experiment to see how well the time needed to solve the N-Queens problem scales with the number of workers for N=17, N=18, and N=19 on a local cluster:

The above was obtained by running a job, which counts the number of solutions, three times for each number of workers and problem size, and then taking the shortest time of each set of three*; the maximum number of workers for this experiment (256) was limited by the size of the cluster. From the above plot we see that scaling is generally good, with the exception of the N=18 case for 128 workers and above, which is not necessarily a big deal since the total running time is under 10 seconds.
* All of the data points for each value of N were usually within a small percentage of one another, save for (oddly) the left-most data point (i.e., the one with the fewest workers) for each problem size, which varied from 150%-200% of the best time; the full data set is available in the scaling/ directory.

# When would I want to use this package?

This package is useful when you have a large space that can be defined efficiently using a logic program, and that you want to explore to satisfy some goal, such as finding all elements, counting the number of elements, finding just one or a few elements, etc. LogicGrowsOnTrees is particularly useful when your solution space has a lot of structure, as it gives you full control over the non-deterministic choices that are made; this lets you entirely avoid making choices that you know will end in failure, as well as letting you factor out symmetries so that only one solution is generated out of each equivalence class. For example, if permutations result in equivalent solutions, then you can factor out this symmetry by only choosing later parts of a potential solution that are greater than earlier parts of the solution.

# What does a program written using this package look like?

The following is an example of a program (also given in examples/readme-simple.hs) that counts the number of solutions to the n-queens problem for a board size of 10. NOTE: I have optimized this code to be (hopefully) easy to follow, rather than to be fast.

```haskell
import Control.Monad
import qualified Data.IntSet as IntSet

import LogicGrowsOnTrees
import LogicGrowsOnTrees.Parallel.Adapter.Threads
import LogicGrowsOnTrees.Parallel.Main
import LogicGrowsOnTrees.Utils.Word_
import LogicGrowsOnTrees.Utils.WordSum

-- Code that counts all the solutions for a given input board size.
nqueensCount 0 = error "board size must be positive"
nqueensCount n =
    go n -- ...n queens left...
       0 -- ... at row zero...
       -- ... with all columns available ...
       (IntSet.fromDistinctAscList [0..fromIntegral n-1])
       IntSet.empty -- ... with no occupied negative diagonals...
       IntSet.empty -- ... with no occupied positive diagonals.
  where
    -- We have placed the last queen, so this is a solution!
    go 0 _ _ _ _ = return (WordSum 1)

    -- We are still placing queens.
    go n row available_columns occupied_negative_diagonals occupied_positive_diagonals = do
        -- Pick one of the available columns.
        column <- allFrom $ IntSet.toList available_columns

        -- See if this spot conflicts with another queen on the negative diagonal.
        let negative_diagonal = row + column
        guard $ IntSet.notMember negative_diagonal occupied_negative_diagonals

        -- See if this spot conflicts with another queen on the positive diagonal.
        let positive_diagonal = row - column
        guard $ IntSet.notMember positive_diagonal occupied_positive_diagonals

        -- This spot is good!  Place a queen here and move on to the next row.
        go (n-1)
           (row+1)
           (IntSet.delete column available_columns)
           (IntSet.insert negative_diagonal occupied_negative_diagonals)
           (IntSet.insert positive_diagonal occupied_positive_diagonals)

main =
    -- Explore the tree generated (implicitly) by nqueensCount in parallel.
    simpleMainForExploreTree
        -- Use threads for parallelism.
        driver
        -- Function that processes the result of the run.
        (\(RunOutcome _ termination_reason) -> do
            case termination_reason of
                Aborted _ -> error "search aborted"
                Completed (WordSum count) -> putStrLn $
                    "found " ++ show count ++ " solutions"
                Failure _ message -> error $ "error: " ++ message
        )
        -- The logic program that generates the tree to explore.
        (nqueensCount 10)
```

This program requires that the number of threads be specified via -n # on the command line, where # is the number of threads. You can use -c to have the program create a checkpoint file on a regular basis and -i to set how often the checkpoint is made (it defaults to once per minute); if the program starts up and sees the checkpoint file, then it automatically resumes from it.
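Because nqueensCount is written against a generic MonadPlus, its backtracking shape can be reproduced with nothing but base. The following sketch of mine (hypothetical name nqueensListCount, list monad instead of Tree, simple lists instead of IntSet, and of course no checkpointing or parallelism) counts solutions the same way:

```haskell
import Control.Monad (guard)
import Data.List ((\\))

-- A base-only sketch of the same backtracking shape: each recursive call
-- picks a free column and prunes any placement that shares a diagonal
-- with an earlier queen.
nqueensListCount :: Int -> Int
nqueensListCount n = length (go n 0 [0 .. n - 1] [] [])
  where
    go 0 _ _ _ _ = [()]  -- all queens placed: one solution
    go k row cols negs poss = do
      c <- cols                    -- non-deterministic column choice
      let nd = row + c             -- negative diagonal index
          pd = row - c             -- positive diagonal index
      guard (nd `notElem` negs)    -- prune diagonal conflicts
      guard (pd `notElem` poss)
      go (k - 1) (row + 1) (cols \\ [c]) (nd : negs) (pd : poss)

main :: IO ()
main = print (nqueensListCount 8)  -- prints 92, the classic 8-queens count
```

Swapping the list monad for Tree (and lists for IntSet) recovers the program above; the logic itself does not change.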
To find out more about the available options, use --help, which provides an automatically generated help screen.

The above uses threads for parallelism, which means that you have to compile it using the -threaded option. If you want to use processes instead of threads (which could be more efficient, as this does not require the additional overhead incurred by the threaded runtime), then install LogicGrowsOnTrees-processes and replace Threads with Processes in the import at the 8th line. If you want workers to run on different machines, then install LogicGrowsOnTrees-network and replace Threads with Network. If you have access to a cluster with a large number of nodes, you will want to install LogicGrowsOnTrees-MPI and replace Threads with MPI.

If you would prefer that the problem size be specified at run-time via a command-line argument rather than hard-coded at compile time, then you can use the more general mechanism illustrated as follows (a complete listing is given in examples/readme-full.hs):

```haskell
import Control.Applicative
import System.Console.CmdTheLine

...

main =
    -- Explore the tree generated (implicitly) by nqueensCount in parallel.
    mainForExploreTree
        -- Use threads for parallelism.
        driver
        -- Use a single positional required command-line argument to get the board size.
        (getWord <$>
            (required
             $ pos 0
                   Nothing
                   posInfo
                     { posName = "BOARD_SIZE"
                     , posDoc = "board size"
                     }
            )
        )
        -- Information about the program (for the help screen).
        (defTI { termDoc = "count the number of n-queens solutions for a given board size" })
        -- Function that processes the result of the run.
        (\n (RunOutcome _ termination_reason) -> do
            case termination_reason of
                Aborted _ -> error "search aborted"
                Completed (WordSum count) -> putStrLn $
                    "for a size " ++ show n ++ " board, found " ++ show count ++ " solutions"
                Failure _ message -> error $ "error: " ++ message
        )
        -- The logic program that generates the tree to explore.
        nqueensCount
```

Read TUTORIAL.md for a tutorial of how to write and run logic programs using this package, USERS_GUIDE.md for a more detailed explanation of how things work, and the haddock documentation available at http://hackage.haskell.org/package/LogicGrowsOnTrees.

# What platforms does it support?

The following three packages have been tested on Linux, OSX, and Windows using the latest Haskell Platform (2013.2.0.0):

• LogicGrowsOnTrees (+ Threads adapter)
• LogicGrowsOnTrees-processes
• LogicGrowsOnTrees-network

LogicGrowsOnTrees-MPI has been tested as working on Linux and OSX using OpenMPI, and since it only uses very basic functionality (just sending, probing, and receiving messages), it should work on any MPI implementation. (I wasn't able to try Microsoft's MPI implementation because it only let me install the 64-bit version (as my test machine was 64-bit), but Haskell on Windows is only 32-bit.)

This package is higher level than Cloud Haskell in that it takes care of all the work of parallelizing your logic program for you. In fact, if one wished, one could potentially write an adapter for LogicGrowsOnTrees that lets one use Cloud Haskell as the communication layer.

# Why would I use this instead of MapReduce?

MapReduce and LogicGrowsOnTrees can both be viewed (in a very rough sense) as mapping a function over a large data set and then performing a reduction on it. The primary difference between them is that MapReduce is optimized for the case where you have a huge data set that already exists (which means in particular that optimizing I/O operations is a big deal), whereas LogicGrowsOnTrees is optimized for the case where your data set needs to be generated on the fly using a (possibly quite expensive) operation that involves making many non-deterministic choices, some of which lead to dead-ends (that produce no results).
Having said that, LogicGrowsOnTrees can also be used like MapReduce by having your function generate data by reading it from files or possibly from a database.

# Why would I use this instead of a SAT/SMT/CLP/etc. solver?

First, it should be mentioned that one could use LogicGrowsOnTrees to implement these solvers. That is, a solver could be written that uses the mplus function whenever it needs to make a non-deterministic choice (e.g., when guessing whether a boolean variable should be true or false) and mzero to indicate failure (e.g., when it has become clear that a particular set of choices cannot result in a valid solution), and then the solver gets to use the parallelization framework of this package for free! (For an example of such a solver, see the incremental-sat-solver package (which was not written by me).)

Having said that, if your problem can most easily and efficiently be expressed as an input to a specialized solver, then this package might not be as useful to you. However, even in this case you might still want to consider using this package if there are constraints that you cannot express easily or efficiently using one of the specialized solvers, because this package gives you complete control over how choices are made. This means that you can, for example, enforce a constraint by only making choices that are guaranteed to satisfy it, rather than generating choices that may or may not satisfy it and then having to perform an additional step to filter out all the ones that don't satisfy the constraint.

# What is the overhead of using LogicGrowsOnTrees?

It costs roughly up to twice as much time to use LogicGrowsOnTrees with a single worker thread as it does to use the List monad. Fortunately, it is possible to eliminate most of this overhead if you can switch to using the List monad near the bottom of the tree. For example, my optimized n-queens solver switches to a loop in C when fewer than eleven queens remain to be placed.
This is not "cheating" for two reasons: first, because the hard part is the symmetry-breaking code, which would have been difficult to implement and test in C due to its complexity, and second, because one can't rewrite all the code in C, as then one would lose access to the automatic checkpointing and parallelization features.

# Why Haskell?

1. Laziness

   Haskell has lazy* evaluation, which means that it does not evaluate anything until the value is required to make progress; this capability means that ordinary functions can act as control structures. In particular, when you use `mplus a b` to signal a non-deterministic choice, neither `a` nor `b` will be evaluated unless one chooses to explore the left and/or right branch of the corresponding decision tree, respectively. This is very powerful because it allows us to explore the decision tree of a logic program as much or as little as we want and only have to pay for the parts that we choose to explore.

   \* Technically, Haskell is "non-strict" rather than "lazy", which means there might be times in practice when it evaluates something more than is strictly needed.
2. Purity

   Haskell is a pure language, which means that functions have no (observable) side-effects other than returning a value*; in particular, this implies that all operations on data must be immutable, which means that they result in a new value (that may reference parts or even all of the old value) rather than modifying the old value. This is an incredible boon because it means that when we backtrack up to explore another branch of the decision tree, we do not have to perform an undo operation to restore the old values from the new values, because the old values were never lost! All we have to do is "forget" about the new values and we are done. Furthermore, most data structures in Haskell are designed to have efficient immutable operations which try to re-use as much of an old value as possible in order to minimize the amount of copying needed to construct the new value.

   (Having said all of this, although it is strongly recommended that your logic program be pure, by making it have type Tree, as this will cause the type system to enforce purity, you can add various kinds of side-effects by using type TreeT instead; a time when it might make sense to do this is if there is a data set that will be constant over the run and is large enough that you want to read it in from various files or a database as you need it. In general, if you use side-effects then they need to be non-observable, which means that they are not affected by the order in which the tree is explored or by whether particular parts of the tree are explored more than once.)

   \* Side-effects are implemented by, roughly speaking, having some types represent actions that cause side-effects when executed.

3. Powerful static type system

   When writing a very complicated program, you want as much help as possible in making it correct, and Haskell's powerful type system helps you a lot here by harnessing the power of static analysis to ensure that all of the parts fit together correctly and to enforce invariants that you have encoded in the type system.
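The laziness and mplus/mzero points above can be sketched using the standard list instance of MonadPlus; nothing below is specific to this package (the names chooseBool, xorSolutions, and firstResult are invented for the sketch, and in a LogicGrowsOnTrees program the same code would run in the Tree monad instead):

```haskell
import Control.Monad (MonadPlus, guard, mplus)

-- Non-deterministically choose a boolean using mplus (a two-way branch).
chooseBool :: MonadPlus m => m Bool
chooseBool = return True `mplus` return False

-- A toy "solver": find all assignments where exactly one of x and y is
-- true; guard calls mzero to kill branches violating the constraint.
xorSolutions :: MonadPlus m => m (Bool, Bool)
xorSolutions = do
    x <- chooseBool
    y <- chooseBool
    guard (x /= y)
    return (x, y)

-- Laziness: the right branch is undefined, but taking only the first
-- result explores only the left branch and never forces the right one.
firstResult :: Int
firstResult = head (return 1 `mplus` return undefined)
```

At the list type, `xorSolutions` evaluates to `[(True,False),(False,True)]`, and `firstResult` evaluates to 1 without ever touching the undefined branch.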
https://physics.com.hk/2008/11/09/united-republics-of-china/
# United Republics of China

In 2004, Lin Chong-pin (林中斌), former deputy Minister of Defense of the ROC, said that one of the think tanks in Beijing had put forward a proposal for a "United Republics of China" (中華聯合共和國). No details of this proposal were made known. But in the same years, officials and think tanks of the PRC often took an interest in the history of how mainland Tanganyika and the Zanzibar archipelago formed the United Republic of Tanzania. As Zanzibar has its own president, government, parliament, autonomy, etc., and the president of Zanzibar is the vice-president of Tanzania, it seems to be an example of Deng Xiaoping's "One country, two systems" in Africa. — Wikipedia

. . .

2008.11.09 Sunday
https://physics.stackexchange.com/questions/354897/does-a-harmonic-become-a-fundamental-of-its-own-harmonic-series
# Does a harmonic become a fundamental of its own harmonic series?

Simple question, hopefully there's a simple answer. I'm about half a piano tuner, not a physicist.

A musical tone has a fundamental frequency, say $220\,\text{Hz}$. Its second harmonic is $440\,\text{Hz}$, its third harmonic is $660\,\text{Hz}$, etc. My question is: does a harmonic have its own harmonic series with itself as the fundamental? For example, does a $220\,\text{Hz}$ vibration, a 2nd harmonic which exists only because someone banged on a piano string that sounded a $110\,\text{Hz}$ fundamental, have its own second harmonic at $440\,\text{Hz}$, a third harmonic, etc.? If not, why not?

It seems to me that if harmonics are real, which I know they are because I learned to hear them, they must have their own harmonic series too. If so, do these other harmonics have a name? I couldn't find them on Google or Wikipedia. I would think that if they exist, they must have a relatively low amplitude.

If I denote by $f$ the fundamental frequency, then the $n$-th harmonic has a frequency $n\times f$. So the $m$-th harmonic of that $n$-th harmonic would have a frequency $m\times n\times f$: this is the $(m\times n)$-th harmonic of the fundamental, which does exist. So the nomenclature you devised is consistent. We don't use it in physics, afaik.

A periodic waveform has a fundamental period $T$ (the length in time of the repeating pattern). By Fourier's theorem, such a periodic waveform can be decomposed into a fundamental component, with (fundamental) frequency $f_1 \equiv \frac{1}{T}$, plus components with frequencies that are integer multiples of $f_1$, which are called harmonics. Note that it's not the fundamental component that 'has' harmonics (it doesn't, it's a pure tone); it's the waveform itself that has (contains) harmonics. So I don't really grok the notion of a "harmonic [having] its own harmonic series with itself as a fundamental".
It is the (periodic) waveform itself that has a harmonic series, not the components (which are pure tones) of the waveform.

• I would disagree slightly with your 2nd paragraph. A harmonic series is a mathematical construction. The waveform itself contains a fundamental (the actual longest wavelength standing wave, and lowest resonant frequency) and overtones. For many vibrating systems, the overtones may correspond to frequencies of a harmonic series built on the fundamental. In other cases, such as tympani and flat bars, the overtones don't match the harmonics. – Bill N Sep 1 '17 at 14:37

• I'm curious as to how actual musical overtones which are not integer multiples of the musical fundamental would be reflected in a Fourier spectrum which is strictly integer multiples of the musical fundamental. I believe that operational FFT analyzers base their spectra on 1 or 2 Hz false fundamental bin widths. – Bill N Sep 1 '17 at 14:48

• @BillN, you may disagree if you wish, but note that I began my answer with "A periodic waveform" (emphasis added) and Fourier's theorem is, well, a theorem. But the waveform produced by, e.g., a tympani is not periodic. – Alfred Centauri Sep 1 '17 at 15:10

• There are no "periodic waveforms" (in the strict mathematical sense of the term) that exist in the real world, because they would have to start at an infinitely distant time in the past and continue for ever. And a piano tuner should know something about "inharmonicity," which means that even for a piano, the harmonic frequencies are not exactly in the ratio 220, 440, 660, 880, etc. The higher harmonics are progressively more sharp. Piano notes also decay just like a timp or a flat bar, except they decay slower. There are a lot of half-truths stated and taught about Fourier analysis!
– alephzero Sep 1 '17 at 22:58

• @alephzero, it's true that there are no physical periodic waveforms (as I state in quite a bit more detail in the 2nd part of my answer here), but there are physical waveforms that are, in some sense, good approximations of periodic waveforms and those that aren't. But I don't really see that this fact is relevant to the essential point that I make in my answer above, which is this: waveforms have Fourier components; Fourier components don't have Fourier components. Do you disagree? – Alfred Centauri Sep 1 '17 at 23:31

If you play the A below middle C, you get 220 Hz and it has harmonics at 440, 660, 880, etc. 440 should match the A above middle C. 660 will be the just-tempered E above that, which is nearly but not quite the well-tempered E. 880 will be the next A, etc.

If you play the A above middle C, then you get 440 Hz and it will have harmonics at 880, 1320, etc. These will be a subset of the first set of harmonics; the nth harmonic of this series will be the 2nth of the previous set. The odd harmonics of the lower note, e.g. 660 and 1100, will not appear in this set.

Similarly, if you play the E near the top of the treble clef, then you will get nearly 660 Hz (not exact with well temperament) and its harmonics will be nearly every third harmonic of the original A at 220.

I am not aware of any special name for this relationship. I guess that no one has ever felt the need.
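For what it's worth, the subset relations described above are easy to check mechanically. Here is a small sketch in Haskell (the names are made up, and it uses exact integer multiples, ignoring temperament and inharmonicity):

```haskell
-- Ideal harmonic series of a fundamental f: f, 2f, 3f, ...
-- (exact multiples; real piano strings are slightly inharmonic)
harmonics :: Double -> [Double]
harmonics f = map (* f) [1 ..]

-- The nth harmonic of the 440 series is the 2nth harmonic of the
-- 220 series, so the former is a subset of the latter.
fourFortyIsSubset :: Bool
fourFortyIsSubset = all (`elem` take 10 (harmonics 220)) (take 5 (harmonics 440))
```

Here `take 4 (harmonics 220)` gives `[220.0,440.0,660.0,880.0]`, and `fourFortyIsSubset` evaluates to True.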
https://www.nature.com/articles/s41467-020-15327-4
# Anisotropic ESCRT-III architecture governs helical membrane tube formation

## Abstract

ESCRT-III proteins assemble into ubiquitous membrane-remodeling polymers during many cellular processes. Here we describe the structure of helical membrane tubes that are scaffolded by bundled ESCRT-III filaments. Cryo-ET reveals how the shape of the helical membrane tube arises from the assembly of two distinct bundles of helical filaments that have the same helical path but bind the membrane with different interfaces. Higher-resolution cryo-EM of filaments bound to helical bicelles confirms that ESCRT-III filaments can interact with the membrane through a previously undescribed interface. Mathematical modeling demonstrates that the interface described above is key to the mechanical stability of helical membrane tubes and helps infer the rigidity of the described protein filaments. Altogether, our results suggest that the interactions between ESCRT-III filaments and the membrane could proceed through multiple interfaces, to provide assembly on membranes with various shapes, or adapt the orientation of the filaments towards the membrane during membrane remodeling.

## Introduction

The Endosomal Sorting Complexes Required for Transport (ESCRT)-III proteins are an evolutionarily ancient family of proteins that execute membrane scission in different cellular contexts (reviewed in ref. 1). ESCRT-III can polymerize into rings and spirals in solution2,3,4 or on membrane substrates5,6.
When one or several ESCRT-III proteins are incubated with model membranes in vitro or over-expressed in cells, they deform membranes into straight and conical tubes6,7, demonstrated in most detail by the formation of tubules by CHMP1B alone and in complex with IST1/CHMP8 (ref. 6). Similar but inverted conical structures are also observed in vivo by overexpression of CHMP4A/B (ref. 7) and at the neck of budding Gag envelopes8. Dynamics of ESCRT-III assembly also suggest that single assemblies in MVB biogenesis in yeast are compatible with spirals or cones9. Mechanistically, we have previously shown that flat spirals formed on lipid membranes from the ESCRT-III protein Snf7 can accumulate elastic energy, and that this energy could be channeled to shape a flat membrane into a tube through a buckling transition10. However, Snf7 spirals fail to deform artificial membranes in vitro5. This could be due to the high flexibility of Snf7 polymers2,5, which do not provide enough force to deform the membrane. In this case, rigidification of the filament through the binding of additional subunits could trigger buckling. Importantly, Snf7 forms flat spirals on membranes3 and in solution2, which indicates that Snf7 filaments only present spontaneous curvature and no torsion. In such circumstances, binding of additional subunits, which induce a twist in the co-filaments and lead to the formation of a helical structure, could induce buckling. Indeed, recruitment of Vps24/Vps2 to flat Snf7 spirals, via electrostatic interaction between Snf7 helix α4 and Vps24 (ref. 11), leads to the formation of helical structures without membrane inside3. We have previously shown that the addition of Vps24/Vps2 to a membrane-bound Snf7 filament leads to the formation of a second, parallel strand next to the Snf7 filament12, and the formation of such a composite polymer may trigger buckling.
In this report, we show that the addition of Vps24/Vps2 to membrane-bound Snf7 in vitro does indeed induce a membrane shape transition from flat to tubes. Surprisingly, however, this transition does not result in straight, cylindrical tubes scaffolded by a helical polymer, but in membrane tubes that are shaped like hollow corkscrews, hereafter referred to as helical tubes. We show using cryogenic electron tomography (cryo-ET) and subtomogram averaging (STA) that this unusual structure is supported by an unexpected protein-membrane binding scheme, involving two different membrane-binding interfaces. We further demonstrate through physical modeling that the stability of this architecture implies that the two corresponding binding energies are significantly different. In addition, we obtain a higher-resolution structure of the helical polymers, bound to a helical bicelle ribbon, confirming that the Snf7/Vps24/Vps2 copolymer has a twist and binds the membrane in orientations different from those previously published. The dimensions of these non-constrained helical polymers differ slightly from those observed on the helical tubes. These differences suggest that helical protein polymers are under elastic stress in the helical tubes. Finally, by comparing the morphology of the helical tubes to those of the bicelle-bound copolymers, we infer the binding energy difference, as well as the stiffness of the ESCRT-III copolymers. Our results are consistent with the notion that dynamic changes in polymer-membrane interactions coupled with high bending and torsional rigidities in the copolymer are essential to trigger a buckling transition. 
## Results

### Helical tubulation of liposomes by ESCRT-III heteropolymers

To test whether binding of Vps24 and Vps2 to Snf7 spirals could induce a membrane shape transition, we incubated liposomes with recombinant Snf7 until they were decorated by flat Snf7 spirals5, then added recombinant Vps24 and Vps2 and incubated the mixture for several hours. Using negative stain electron microscopy (EM), we observed vesicles decorated with flat spirals (Fig. 1a)5,12 and helical tubes that were decorated with filamentous protein polymers (Fig. 1b, c). Cryogenic electron microscopy (cryo-EM) of helical tubes confirmed that they consisted of protein filaments bound to an open helical membrane tube (Fig. 1d–f), and that their regularity made them amenable to higher-resolution imaging. Further investigation of these helical tubes revealed that they only form in the presence of all three proteins (Supplementary Fig. 1a–c). They had an average diameter of 23.9 ± 3.7 nm and were coiled into a helix with an outer diameter of 82.3 ± 6.1 nm and a pitch of 53.1 ± 7.6 nm (all values average ± SD; Supplementary Table 1) (Supplementary Fig. 1d–g). Their prevalence increased with protein concentrations and incubation time, indicating thermodynamic stability. Helical tubes are an unusual membrane shape, as their high curvature makes them a priori energetically unfavorable compared to other shapes, yet assemblies of different human ESCRT-III proteins on liposomes can generate similar deformations13. To understand the origin of their stability, we aimed to characterize their structural determinants in more detail. To visualize the ESCRT-III filament organization around the helical tubes, we performed cryo-ET on vitrified helical membrane tubes and used image filtering and manual segmentation on reconstructed tomographic volumes. All tubes appeared as left-handed helices, although we cannot confirm that this is the correct handedness without a chiral internal standard.
On the surface of the tubes, we observed six to eight filaments parallel to the tube axis forming multi-stranded bundles (Fig. 1g–i, Supplementary Fig. 1h–j, Movies 1 and 2). The filaments were almost always excluded from the inside of the tube helix and had the same thickness as negatively stained, double-stranded Snf7/Vps24/Vps2 heteropolymers (4.9 ± 0.5 nm; average ± SD)12. From this, we concluded that the peculiar organization of the filaments around the tube must minimize the energy of the helical membrane shape. Helical membrane cylinders have been reported before: cylindrical stacks of lipid membranes remodel into helical tubes in the presence of specific membrane-binding polymers, and it was suggested that the shape could emerge from gradients of spontaneous curvature across the membrane14. Helical membrane tubes have also been predicted in the presence of curved polymers whose membrane-binding interface is not located within the polymer’s groove (like in BAR domains), but on the orthogonal side15. We hypothesize that a similar mechanism determines the emergence of helical tubes in our experiments. Indeed, if Snf7/Vps24/Vps2 helical filaments preferred binding the membrane along their spontaneous direction of curvature, we would expect them to shape straight membrane tubes, as it happens with BAR domain-containing proteins or dynamin-coated membrane tubes16. Since we did not observe straight tubes in our experiments, we hypothesized that Snf7/Vps24/Vps2 helical filaments force the tube to follow an equilibrium helical path because they prefer to bind the membrane perpendicular to their spontaneous direction of curvature. We further develop this argument through mathematical modeling of the helical tubes.

### Two distinct ESCRT-III filament bundles on helical tubes

To obtain a more detailed understanding of how the filaments are organized, we performed STA on slices along the membrane tube axis.
The variability in tube dimensions in the dataset made it impossible to resolve the entire tube. We, therefore, focused on the filaments on the outer tube surface and obtained a ~32 Å reconstruction (Fig. 2). This map revealed that the filaments cluster in three separate regions with two clearly defined grooves between them (Fig. 2a, Supplementary Fig. 2a). The central cluster, containing two filaments, covered a 13 nm wide region around the equator of the tube (equatorial filaments, blue). Two additional filament clusters, each containing 2–3 filaments, were shifted up and down from the equator, respectively, (polar filaments, red) and appeared wider (16–20 nm) (Fig. 2b–d). The resolution of the shifted, polar filaments was limited as their positions varied more with tube diameter compared to equatorial filaments (Supplementary Table 1). With further STA focused on the equatorial cluster, we reconstructed a focused map of this area (~32 Å resolution), revealing that the two equatorial filaments contained two strands each (Fig. 2e–g, Supplementary Fig. 2b). The filaments bundled in a plane parallel to the tube’s helical axis and their membrane binding area was on the bundle’s inside, also parallel to the helical axis, as observed in previously described ESCRT-III heteropolymers6. Yet, in our case, both strands appeared to be interacting with the membrane. The filaments in the polar clusters, based on their width, could be double-stranded as well, though our reconstructions were unable to resolve the substructure directly. In contrast to the equatorial filaments, however, the bundling plane of the polar filament strands was perpendicular to the helical axis, as was its membrane-binding interface (Fig. 2h). This orientation fits the double-stranded spirals formed by Snf7/Vps24/Vps2 on flat bilayers12. 
Overall, the architectures of equatorial and polar filaments appeared to be similar: both were composed of at least two double-stranded filaments, bundled together as a helical ribbon along the surface of the tube. However, the geometry of the helical tube makes it impossible that all filaments have the same path and bind the membrane with the same interface (Fig. 2h). For the same reasons, interactions between filaments within a bundle cannot be the same within polar filaments and equatorial filaments. Given that helical tubes did not form in the absence of any of the three ESCRT-III subunits (Snf7, Vps24, and Vps2), we conclude that both kinds of filaments were formed from all three proteins. At this resolution, however, we cannot determine whether polar and equatorial filaments contain different subunit compositions or stoichiometries. Different examples of ESCRT-III copolymers made with different subunits, like CHMP1B and IST1/CHMP8, have very different spontaneous curvatures and shapes6,17. Our equatorial and polar filaments did have similar helical paths, bundling properties and dimensions, though, leading us to favor the hypothesis that the polar and equatorial filaments comprised the same subunits at similar stoichiometry. While the possibility that ESCRT-III molecules bind their target membranes with two different orientations seems a priori unexpected, existing structural studies have reported different membrane binding interfaces for Snf7 versus CHMP1B (refs. 6, 17, 18).

### Organization of tube-less ESCRT-III filaments

To clarify the interplay between the elasticity of the ESCRT-III filaments and that of the membrane in determining the shape of the helical tube, we sought to analyze the spontaneous shape of ESCRT-III filaments without a helical membrane tube for higher-resolution imaging.
When incubating Snf7/Vps24/Vps2 with detergent-solubilized lipids, different helical ribbons formed without complete membrane tubes during detergent removal by dilution (Supplementary Fig. 3a–c). We suppose that the detergent removal generates a great number of small membrane structures that nucleate ESCRT-III filaments that self-assemble along a bicelle ribbon. Most of these tube-less, helical ribbons assembled into sharp zigzag shapes (Fig. 3a, red arrows in Supplementary Fig. 3a–c), a smaller population appeared sinusoidal (Fig. 3b, blue arrows in Supplementary Fig. 3a–c), and a third population displayed significantly larger ribbons with varying strand numbers and diameters (Fig. 3c, yellow arrows in Supplementary Fig. 3a–c). We did not observe any of these assemblies if any of the three ESCRT-III subunits was omitted (Supplementary Fig. 3d–f). We used single-particle 2D and 3D averaging approaches to analyze these tube-less helical protein filament ribbons and determined 2D class averages (Fig. 3d–f). The overall appearance of the sinusoidal ribbons suggested that they comprise multi-stranded filaments oriented along a helical path similar to that of the equatorial filaments we observed bound to the helical membrane tubes (pitch 55.7 ± 8.5 nm; diameter = 34.1 ± 5.0 nm, width 13.6 ± 2.1 nm average ± SD; Supplementary Table 1) (Fig. 3b–e). Analysis of the more ordered zigzag structure (Fig. 3a–d) led to a 3D reconstruction at ~15 Å resolution. This structure revealed a helical ramp formed around a bicelle, a tension-less lipid bilayer stabilized by detergents, with the bicelle plane oriented perpendicular to the helix axis. Given that such helical bicelles cannot form from vesicles, this explains why we only observed ESCRT-III ribbons with initially detergent-solubilized lipids. On both sides of the bicelle, we observed filamentous polymers with subunit dimensions consistent with other double-stranded ESCRT-III structures6,11,12. 
The observed pitch (39.8 ± 6.9 nm; diameter = 46.2 ± 4.9 nm; average ± SD; Supplementary Table 1) of the filament indicated a significantly elevated torsion and/or torsional rigidity compared to other helical ESCRT-III polymers4,6. Considering the apparent subunit tilt on both sides of the bicelle, the filaments appeared to be anti-parallel to each other (Fig. 3d–g). We confirmed the anti-parallel orientation of the two polymers by a 3D reconstruction at a higher resolution (~11 Å) that was computed by using masks to focus on one side of the bicelle only (Fig. 3h). The subunits appeared to polymerize in the same way as previously described ESCRT-III heteropolymers6, and were oriented along a similar helical path. Surprisingly, both strands seemed to interact with the membrane, and their membrane-binding interface was oriented perpendicular to the main helical axis (Fig. 3g, h). The interface was therefore perpendicular to that postulated for CHMP1B, which was parallel to the helix axis6. Molecular docking allowed fitting both filaments with crystal structures of subunits in the open (D. melanogaster CHMP4B homolog Shrub, PDB 5J45 (ref. 19); yeast Snf7, PDB 5FD9 (ref. 18)) and closed conformation (Human CHMP3; PDB 3FRT (ref. 20)), respectively (Supplementary Fig. 3d), with inter-subunit connectivity consistent with known ESCRT-III heteropolymer structures6. The resolution of the map, however, did not allow us to discern the identities or unambiguous conformations for the subunits of either strand. Nevertheless, the zigzag tube-less ribbon’s dimensions and architecture are compatible with the polar filaments on helical tubes, and confirmed that the polar filaments of the helical tube are also double-stranded. These results demonstrate that ESCRT-III filaments can bind the membrane with a previously undescribed orientation perpendicular to that of their curvature.
### Mathematical model of helical tubes’ mechanical equilibrium

To understand the roles of ESCRT-III filament properties in shaping the membrane into helical tubes, we developed mathematical models that describe the competition between filament and membrane rigidities, membrane tension and filament-membrane binding energy. Here we summarize our conclusions, and refer the reader to the Supplementary Information for detailed derivations. In a first approach, we show that the membrane-binding interface observed in polar filaments (Fig. 3) is not only compatible with the existence of helical membrane tubes, but is actually required for their stability. To understand this requirement, we consider that the helical tube is not the only membrane structure compatible with the helical structure of Snf7/Vps24/Vps2 helical filaments: such filaments could, hypothetically, also enclose a straight membrane tube, implying a much smaller membrane bending energy cost. However, this alternative structure would imply that all filaments bind in their equatorial mode, as opposed to the mixed equatorial and polar binding observed on helical tubes. We thus interpret the formation of helical tubes as opposed to straight tubes as evidence that the polar filaments’ binding mode is energetically more favorable than that of the equatorial filaments, and that it more than compensates for the higher membrane curvature energy of helical tubes. To turn this reasoning into a quantitative estimate of the minimal binding energy difference between polar and equatorial filaments, we developed a mathematical model to compute the deformation energy of a flexible membrane of tension σ and stiffness κ enclosed by a non-deformable helical scaffold of radius R and pitch 2πP. This choice of a fixed radius and pitch is consistent with the modest filament deformation induced by the presence of membrane tubes, compared to their tube-less shape.
We compare the energies of a helical tube and a straight tube under the assumption that the two filament binding modes differ by an energy μ per unit filament length, where μ > 0 promotes polar filaments over equatorial ones and thus favors helical tubes. The relative stability of either configuration depends on two dimensionless parameters, which we use as coordinates for the phase diagram (Fig. 4a): the rescaled membrane tension σR2 and the rescaled differential binding energy per filament length μR/κ. We find that helical tubes are always favored at high rescaled membrane tension, and that lowering σR2 leads to an increase of the membrane tube radius r, with different outcomes depending on the value of the rescaled differential binding energy per filament length. For high values of μR/κ, helical tubes remain stable at all σR2. For lower values of μR/κ, r increases significantly before reaching a μR/κ-dependent critical value rc at which the system transitions to a straight tube (Fig. 4b). While the surface tension σ of the membrane is not directly experimentally accessible in our setup, this reasoning demonstrates that, given a certain value of μR/κ, helical tubes with radii larger than rc(μR/κ) cannot occur. Consequently, our observation of relatively thick tubes with average radius rexp = 12.1 nm implies that $$r_c\left( {\frac{{\mu R}}{\kappa }} \right) > r_{{\mathrm{exp}}}$$, which, according to our calculations, shows that the membrane-binding energy difference between polar and equatorial Snf7/Vps24/Vps2 filaments is larger than or equal to 2 kBT per monomer. This value is compatible with the previously estimated membrane-binding energy per monomer of Snf7 polymers alone (about 4 kBT)5, suggesting that Vps24 and Vps2 may be significant contributors to the binding of ESCRT-III filaments to lipid membranes. In a second approach, we look more closely at the deformation of the helical filaments.
We thus relax the assumption of a non-deformable helical scaffold and endow the Snf7/Vps24/Vps2 filaments with bending and torsional rigidities, characterized by the filaments’ bending and torsional persistence lengths, ℓp and ℓt, respectively. We furthermore define the helical parameters (radius and pitch) of zigzag-shaped (Fig. 3a–d) and sinusoidal (Fig. 3b–e) tube-less filaments as the resting conformations of polar and equatorial filaments (Fig. 2a), respectively, on helical tubes (Supplementary Table 1). As a result of their deformability, enclosing a helical membrane tube inside our model filaments results in a variation of their radius and pitch. By matching these predicted variations to the observed differences in filament radius and pitch between the tube-less configurations (Fig. 3) and the tube-enclosing configurations (Figs. 1, 2), we establish a lower bound $$\ell_{\mathrm{p}} \ge \ell_p^{\mathrm{min}} = 114\,\mathrm{nm}$$ for the filaments’ bending persistence length, and establish that the membrane-binding energy difference μ per monomer must be greater than 5 kBT. This is slightly larger than the lower bound on the binding energy difference inferred with the first mathematical model, implying that the tubes observed in our experiments are well within the helical tube region of the stability diagram (Fig. 4a). By adding the further assumption that Snf7/Vps24/Vps2 filaments have a bending rigidity close to that of Snf7 homopolymers, i.e., by setting $$\ell_p = \ell_p^{\mathrm{Snf7}} = 250\,\mathrm{nm}$$ (ref. 5), we were moreover able to infer their torsional persistence length $$\ell_t = 45\,\mathrm{nm}$$, comparable to that of DNA at low tension21, as well as a binding energy difference μ of 15 kBT per monomer, suggesting that Vps24 and Vps2 could play an even more important role in the binding of ESCRT-III filaments to lipid membranes.
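The inference in this paragraph rests on standard wormlike-rod energetics, in which the elastic energy density of a filament is set by its persistence lengths: E/L = (kBT/2)(ℓp Δc² + ℓt Δτ²) for deviations Δc and Δτ from the resting curvature and torsion. The sketch below plugs in the persistence lengths quoted above; the imposed curvature and torsion changes are illustrative assumptions, not measured values from the paper.

```python
KBT = 1.0  # work in units of kBT

def rod_energy_per_nm(lp_nm, lt_nm, d_curv, d_twist):
    """Wormlike-rod energy density (kBT per nm of filament) for deviations
    d_curv (1/nm) and d_twist (rad/nm) from the resting curvature/torsion."""
    return 0.5 * KBT * (lp_nm * d_curv**2 + lt_nm * d_twist**2)

# lp lower bound (114 nm) and inferred lt (45 nm) are quoted in the text;
# the deformations below are illustrative placeholders.
e = rod_energy_per_nm(lp_nm=114.0, lt_nm=45.0, d_curv=1 / 23.1, d_twist=0.05)
print(f"deformation energy ~ {e:.3f} kBT per nm of filament")
```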
## Discussion

Our findings support the hypothesis that the assembly of multiple strands of ESCRT-III triggers a buckling transition by increasing the filament’s torsion angle and/or torsional rigidity, in addition to its bending rigidity5,10. A previous theoretical model predicts that flat ESCRT-III spirals without torsional rigidity can tubulate membranes by growing out of plane, provided their bending rigidity is high enough10. In the presence of torsional rigidity, the filament in the flat spiral would be pre-constrained (no torsion), and the increase of its torsion angle and/or torsional rigidity when it pairs with additional strands would allow the new composite filament to adopt a conformation closer to its preferred torsion (helical). Hence, under these circumstances, a buckling transition is possible with a lower number of ESCRT-III subunits and with compositional heterogeneity, explaining why our previous model5 required more subunits than are found at sites of intraluminal vesicle formation9. However, we have described a membrane deformation that buckles in the direction opposite to that expected in physiological contexts, such as multi-vesicular body formation1. We note that the same filaments could also stabilize the inverse direction, yet we cannot observe this on large liposomes because their surface-to-volume ratio will always favor outward deformation. Considering the helical path of ESCRT-III assemblies, structural studies have identified several membrane-interacting surfaces on the inside6 and the outside18 of the helix. We identify here a third surface, perpendicular to those, which is required for the mechanical stability of the helical membrane tubes observed here. This may reveal a more complex picture of the filament shape transition involved in membrane deformation.
If ESCRT-III subunits change their membrane-binding interface during membrane deformation, this could allow a filament to roll on the membrane and generate torque along the filament axis as another source of membrane strain. This provides a microscopic argument in support of recent coarse-grained simulations, which suggest that torque generation from a polymer rolling on the membrane can lead to both neck formation and scission22. Shape buckling and torque may originate from subunits being exchanged for different subunits that bind the membrane with a different preferred orientation. We have previously shown that both subunit turnover and incorporation of different subunits are necessary for ESCRT-III-mediated membrane remodeling12,23. In addition, or alternatively, the formation of a secondary membrane-binding filament parallel to the leading strand12 could change the orientation of the membrane-binding interface, forcing the membrane to adopt a tubular shape. Our data did not allow us to establish whether the polar and equatorial binding modes reflect different heteropolymer stoichiometries, different conformations of the same proteins forming the heteropolymer, or both. We favor the notion that both filament types contain all three subunits (Snf7, Vps24, and Vps2), since they do not form in the absence of any one of them, and different ESCRT-III polymers and copolymers display considerable flexibility2,3,4,5,12,24. In this study, we show that the different architectures and mechanical properties of ESCRT-III copolymers allow them to stabilize complex membrane shapes. This versatility could well explain the ubiquitous requirement of ESCRT-III as a modular membrane-remodeling complex.

## Methods

### Protein expression and purification

Proteins were expressed from plasmids encoding budding yeast Snf7 (Addgene no. 21492), Vps2 (Addgene no. 21494) and Vps24 (gift from James Hurley), and were purified as previously described12.
### Liposome preparation

1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC) and 1,2-dioleoyl-sn-glycero-3-phospho-L-serine (sodium salt) (DOPS) were purchased in solution from Avanti Polar Lipids and mixed at the desired molar ratio in chloroform. The lipid mix was dried first under a nitrogen stream and then under vacuum at 30 °C for 1 h before hydration with 100 mM NaCl, 20 mM Hepes pH 7.5. We made large unilamellar vesicles (LUVs) by extrusion of the hydrated lipid films using a Mini Extruder (Avanti Polar Lipids) and polycarbonate filters of pore size 0.2 µm (Whatman).

### Formation of helical membrane tubes

At 4 °C, in 100 mM NaCl, 20 mM Hepes pH 7.5, extruded LUVs made from DOPC/DOPS (60/40 mol/mol) (10 mM final) were incubated with 10 µM Snf7 for 1 h, then Vps2 and Vps24 (5 µM each) were added and incubated overnight. For cryo-EM, 4 µL of the sample were deposited on glow-discharged Quantifoil R2/2 200 mesh copper grids and plunge frozen in liquid ethane after a two-sided blot using a FEI Vitrobot. For cryo-ET, we added 10 nm BSA-nanogold (Aurion) to the reaction prior to vitrification. For negative stain EM, the sample was diluted 1/10 in 100 mM NaCl, 20 mM Hepes pH 7.5 before staining for 30 s with 2% uranyl acetate.

### Formation of protein polymers on bicelles

We prepared micelles by solubilizing a dried lipid film made from DOPC/DOPS (60/40 mol/mol) at 25 °C in 100 mM NaCl, 20 mM Hepes pH 7.5, 20 mM CHAPS (3-[(3-cholamidopropyl)-dimethyl-ammonio]-1-propanesulfonate hydrate, Sigma-Aldrich) at a total lipid concentration of 12 mM. The following protocol was adapted from ref. 25. In brief, micelles were homogenized by bath sonication and stirring at 25 °C for 1 h before addition of 4 µM Snf7, 2 µM Vps24 and 2 µM Vps2, making sure that the detergent concentration remained above its critical micellar concentration after addition of all proteins. The sample was then gradually diluted four-fold over 30 min under agitation at 25 °C and further incubated for 5 h.
For cryo-EM, 4 µL of the sample were deposited on glow-discharged Quantifoil R1.2/1.3 300 mesh copper grids and plunge frozen in liquid ethane after a two-sided blot using a FEI Vitrobot.

### Low-resolution EM data collection

Transmission electron micrographs of negatively stained liposomes and helical tubes were acquired on a FEI Tecnai G2 Sphera LaB6 at 200 kV using a 4k × 4k FEI Eagle camera. Vitrified liposomes and helical tubes were imaged in low-dose mode on the same instrument.

### High-resolution cryogenic EM data collection

A total of 962 transmission electron cryo-micrographs of helical filaments on bicelles were collected on a FEI Titan Krios XFEG microscope at the University of California San Francisco, USA, equipped with a GIF K2 Quantum system (Gatan) and operated at 300 kV. Micrographs were collected at a nominal magnification of ×105,000 in super-resolution mode, corresponding to a super-resolution pixel size of 0.69 Å. Images were collected as dose-fractionated stacks of 80 frames (0.2 s/frame) with a total dose of 67.2 electrons/Å2. Coma-free beam alignments were performed prior to data collection, and automated data collection was conducted with SerialEM26.

### Cryogenic EM data processing

Each dose-fractionated image stack was processed using MotionCor2 and binned by two to yield motion-corrected, dose-weighted images with a pixel size of 1.38 Å/pixel27. For the double-filament structures, helical filaments were picked manually using RELION 3.0, segmented into ~90% overlapping segments and additionally binned by two during extraction, with a final pixel size of 2.76 Å/pixel (bin4). After rejecting unalignable segments during 2D classification, 35,087 single-particle images were processed by 3D classification and 3D auto-refine with helical priors, but without imposing helical symmetry28,29.
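The acquisition and binning bookkeeping quoted in this section can be checked in a few lines; all input values below are taken directly from the text.

```python
# Acquisition parameters quoted above
super_res_px = 0.69            # Å/pixel, super-resolution mode at x105,000
frames, frame_time = 80, 0.2   # dose-fractionated stack: 80 frames of 0.2 s
total_dose = 67.2              # e-/Å^2 accumulated over the whole stack

physical_px = super_res_px * 2   # after 2x binning during motion correction
bin4_px = super_res_px * 4       # after an additional 2x binning at extraction
dose_per_frame = total_dose / frames
exposure_s = frames * frame_time

print(f"motion-corrected pixel size: {physical_px:.2f} Å/px")
print(f"extraction (bin4) pixel size: {bin4_px:.2f} Å/px")
print(f"dose: {dose_per_frame:.2f} e-/Å^2 per frame over {exposure_s:.0f} s")
```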
For the single-filament structure, a soft mask was employed to remove one filament from the model, and all of the images were re-processed by 3D classification and 3D auto-refine with helical priors and helical symmetry, using RELION 3.0 (helical twist = 6.7°, rise along the axis = 10.6 Å). Independent half maps were post-processed using automated procedures. Reported resolutions are based on the FSC = 0.143 criterion30.

### Electron cryo-tomography data collection

A total of 73 tilt series were collected on a FEI Titan Krios XFEG microscope at the European Molecular Biology Laboratory, Heidelberg, Germany, equipped with a GIF K2 Quantum system (Gatan) and operated by SerialEM software at 300 kV. The tilt series were collected using a dose-symmetric scheme31 ranging ±61° with 2° increments and defoci between −2.5 and −3.5 µm. The nominal magnification was ×65,000 with a calibrated pixel size of 2.14 Å. Images were recorded in counting mode with five frames per tilt angle and a total dose of 120 e Å−2 (2 e Å−2 s−1 per tilt angle).

### Tomogram reconstruction and subtomogram averaging

Tilt series of combined frames were aligned using the gold fiducial markers in IMOD32. Tomograms were then reconstructed from these aligned tilt series using weighted back-projection in IMOD. Tomograms were binned four times, and a 3D Gaussian filter of radius 2 was applied to increase contrast. Tomograms were then filtered using Hide Dust in UCSF Chimera33, or manually segmented and analyzed using 3dmod from the IMOD suite32. We used the Dynamo software for particle extraction and subtomogram averaging34. We selected 8150 particle positions along the central helical axis of membrane tubes in 17 bin2 tomograms. The table containing these positions was used to extract 160³-voxel particles from the bin2 tomograms (bin2 particles).
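The helical symmetry used for the single-filament reconstruction above (twist 6.7°, rise 10.6 Å) fully determines the subunit lattice up to the filament radius. The sketch below unrolls that symmetry operator into subunit coordinates; the radius is an illustrative placeholder, since it is not quoted in the text.

```python
import math

twist_deg, rise_A = 6.7, 10.6   # refined helical symmetry quoted above
r_A = 115.0                     # ILLUSTRATIVE filament radius in Å (assumed)

subunits_per_turn = 360.0 / twist_deg
pitch_A = rise_A * subunits_per_turn   # axial rise over one full 360° turn

def subunit(i):
    """Coordinates (Å) of the i-th subunit generated by the helical operator."""
    theta = math.radians(i * twist_deg)
    return (r_A * math.cos(theta), r_A * math.sin(theta), i * rise_A)

print(f"~{subunits_per_turn:.1f} subunits per turn, pitch ~{pitch_A / 10:.1f} nm")
print("first subunits:", [tuple(round(c, 1) for c in subunit(i)) for i in range(3)])
```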
Bin2 particles were used to compute a first reference-free subtomogram average, using a single, manually selected, blurred particle as alignment template, yielding a map at 31.9 Å resolution. This first subtomogram average was then used as a template for the alignment of 2037 bin2 particles with a soft elliptical alignment and classification mask focusing on the equatorial filament cluster, yielding a map with a resolution of 32.4 Å. Reported resolutions are based on the FSC = 0.143 criterion30.

### Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

## Data availability

Data supporting the findings of this manuscript are available from the corresponding authors upon reasonable request. A reporting summary for this Article is available as a Supplementary Information file. The source data underlying Figs. 1h, 2d–g, 3a, b and 4a, b, Supplementary Figs. 2c, 3h and Supplementary Table 1 are provided as a Source Data file. Structural data are available from the Electron Microscopy Data Bank; accession numbers for electron density maps are EMD-10136, EMD-10137, EMD-10138, and EMD-10139.

## References

1. Schoneberg, J., Lee, I. H., Iwasa, J. H. & Hurley, J. H. Reverse-topology membrane scission by the ESCRT proteins. Nat. Rev. Mol. Cell Biol. 18, 5–17 (2017).
2. Shen, Q. T. et al. Structural analysis and modeling reveals new mechanisms governing ESCRT-III spiral filament assembly. J. Cell Biol. 206, 763–777 (2014).
3. Henne, W. M., Buchkovich, N. J., Zhao, Y. & Emr, S. D. The endosomal sorting complex ESCRT-II mediates the assembly and architecture of ESCRT-III helices. Cell 151, 356–371 (2012).
4. Lata, S. et al. Helical structures of ESCRT-III are disassembled by VPS4. Science 321, 1354–1357 (2008).
5. Chiaruttini, N. et al. Relaxation of loaded ESCRT-III spiral springs drives membrane deformation. Cell 163, 866–879 (2015).
6. McCullough, J. et al.
Structure and membrane remodeling activity of ESCRT-III helical polymers. Science 350, 1548–1551 (2015).
7. Hanson, P. I., Roth, R., Lin, Y. & Heuser, J. E. Plasma membrane deformation by circular arrays of ESCRT-III protein filaments. J. Cell Biol. 180, 389–402 (2008).
8. Cashikar, A. G. et al. Structure of cellular ESCRT-III spirals and their relationship to HIV budding. Elife 3, https://doi.org/10.7554/eLife.02184 (2014).
9. Adell, M. A. Y. et al. Recruitment dynamics of ESCRT-III and Vps4 to endosomes and implications for reverse membrane budding. Elife 6, https://doi.org/10.7554/eLife.31652 (2017).
10. Lenz, M., Crow, D. J. & Joanny, J. F. Membrane buckling induced by curved filaments. Phys. Rev. Lett. 103, 038101 (2009).
11. Banjade, S., Tang, S., Shah, Y. H. & Emr, S. D. Electrostatic lateral interactions drive ESCRT-III heteropolymer assembly. Elife 8, https://doi.org/10.7554/eLife.46207 (2019).
12. Mierzwa, B. E. et al. Dynamic subunit turnover in ESCRT-III assemblies is regulated by Vps4 to mediate membrane remodelling during cytokinesis. Nat. Cell Biol. 19, 787–798 (2017).
13. Bertin, A. et al. Human ESCRT-III polymers assemble on positively curved membranes and induce helical membrane tube formation. Nat. Commun. 11, https://doi.org/10.1038/s41467-020-16368-5 (2020).
14. Tsafrir, I., Guedeau-Boudeville, M. A., Kandel, D. & Stavans, J. Coiling instability of multilamellar membrane tubes with anchored polymers. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 63, 031603 (2001).
15. Fierling, J., Johner, A., Kulic, I. M., Mohrbach, H. & Muller, M. M. How bio-filaments twist membranes. Soft Matter 12, 5747–5757 (2016).
16. Kozlov, M. M., McMahon, H. T. & Chernomordik, L. V. Protein-driven membrane stresses in fusion and fission. Trends Biochem. Sci. 35, 699–706 (2010).
17. Nguyen, H. C. et al. Membrane constriction and thinning by sequential ESCRT-III polymerization.
bioRxiv 798181, https://doi.org/10.1101/798181 (2019).
18. Tang, S. et al. Structural basis for activation, assembly and membrane binding of ESCRT-III Snf7 filaments. Elife 4, https://doi.org/10.7554/eLife.12548 (2015).
19. McMillan, B. J. et al. Electrostatic interactions between elongated monomers drive filamentation of Drosophila shrub, a metazoan ESCRT-III protein. Cell Rep. 16, 1211–1217 (2016).
20. Bajorek, M. et al. Structural basis for ESCRT-III protein autoinhibition. Nat. Struct. Mol. Biol. 16, 754–762 (2009).
21. Kriegel, F. et al. Probing the salt dependence of the torsional stiffness of DNA by multiplexed magnetic torque tweezers. Nucleic Acids Res. 45, 5920–5929 (2017).
22. Harker-Kirschneck, L., Baum, B. & Saric, A. Changes in ESCRT-III filament geometry drive membrane remodelling and fission in silico. BMC Biol. 17, 82 (2019).
23. Pfitzner, A.-K., Mercier, V. & Roux, A. Vps4 triggers sequential subunit exchange in ESCRT-III polymers that drives membrane constriction and fission. bioRxiv 718080, https://doi.org/10.1101/718080 (2019).
24. Effantin, G. et al. ESCRT-III CHMP2A and CHMP3 form variable helical polymers in vitro and act synergistically during HIV-1 budding. Cell. Microbiol. 15, 213–226 (2013).
25. Szwedziak, P., Wang, Q., Bharat, T. A., Tsim, M. & Lowe, J. Architecture of the ring formed by the tubulin homologue FtsZ in bacterial cell division. Elife 3, e04601 (2014).
26. Mastronarde, D. N. Automated electron microscope tomography using robust prediction of specimen movements. J. Struct. Biol. 152, 36–51 (2005).
27. Zheng, S. Q. et al. MotionCor2: anisotropic correction of beam-induced motion for improved cryo-electron microscopy. Nat. Methods 14, 331–332 (2017).
28. Zivanov, J. et al. New tools for automated high-resolution cryo-EM structure determination in RELION-3. Elife 7, https://doi.org/10.7554/eLife.42166 (2018).
29. He, S. & Scheres, S. H. W. Helical reconstruction in RELION. J.
Struct. Biol. 198, 163–176 (2017).
30. Rosenthal, P. B. & Henderson, R. Optimal determination of particle orientation, absolute hand, and contrast loss in single-particle electron cryomicroscopy. J. Mol. Biol. 333, 721–745 (2003).
31. Hagen, W. J. H., Wan, W. & Briggs, J. A. G. Implementation of a cryo-electron tomography tilt-scheme optimized for high resolution subtomogram averaging. J. Struct. Biol. 197, 191–198 (2017).
32. Mastronarde, D. N. Dual-axis tomography: an approach with alignment methods that preserve resolution. J. Struct. Biol. 120, 343–352 (1997).
33. Pettersen, E. F. et al. UCSF Chimera — a visualization system for exploratory research and analysis. J. Comput. Chem. 25, 1605–1612 (2004).
34. Castano-Diez, D., Kudryashev, M., Arheit, M. & Stahlberg, H. Dynamo: a flexible, user-friendly development tool for subtomogram averaging of cryo-EM data in high-performance computing environments. J. Struct. Biol. 178, 139–151 (2012).

## Acknowledgements

The authors would like to thank Alexander Myasnikov, Arthur Melo and Wim Hagen for help with electron microscopy data collection and processing. The tomography data collection was funded through iNEXT EM HEDC (PID: 6073). J.M.F. acknowledges funding through an EMBO Long-Term Fellowship (ALTF 1065-2015), the European Commission FP7 (Marie Curie Actions, LTFCOFUND2013, GA-2013-609409) and a Transitional Postdoc fellowship (2015/345) from the Swiss SystemsX.ch initiative, evaluated by the Swiss National Science Foundation. A.R. acknowledges funding from the Swiss National Fund for Research Grants N°31003A_130520, N°31003A_149975 and N°31003A_173087, and the European Research Council Consolidator Grant N° 311536. A.R. thanks the NCCR Chemical Biology for constant support during this project. L.B. is supported by the “IDI 2016” project funded by the IDEX Paris-Saclay, ANR-11-IDEX-0003-02. M.L. acknowledges support by ANR grant ANR-15-CE13-0004-03 and ERC Starting Grant 677532.
M.L.’s group belongs to the CNRS consortium CellTiss. The UCSF Center for Advanced CryoEM is supported by NIH grants S10OD020054 and 1S10OD021741 and the Howard Hughes Medical Institute (HHMI). I.J. was funded by a graduate research fellowship from the National Science Foundation (1000232072) and a Moritz-Heyman Discovery Fellowship. A.F. is supported by an HHMI Faculty Scholar grant, the American Asthma Foundation, the Chan Zuckerberg Biohub, NIH/NIAID grant P50 AI150464-13 and NIH/NIGMS grant 1R01GM127673-01.

## Author information

### Contributions

Conception and design: J.M.v.F. and A.R.; data acquisition, analysis and interpretation: J.M.v.F., L.B., N.T., I.E.J., A.F., M.L., and A.R.; theoretical model: L.B. and M.L.; writing (original draft): J.M.v.F., L.B., M.L., and A.R.; writing (review and editing): J.M.v.F., L.B., N.T., I.E.J., A.F., M.L., and A.R.

### Corresponding authors

Correspondence to Luca Barberi or Aurélien Roux.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

Peer review information: Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Moser von Filseck, J., Barberi, L., Talledge, N. et al. Anisotropic ESCRT-III architecture governs helical membrane tube formation. Nat Commun 11, 1516 (2020).
https://doi.org/10.1038/s41467-020-15327-4
https://proofwiki.org/wiki/Particular_Point_Space_is_Non-Meager/Proof_1
# Particular Point Space is Non-Meager/Proof 1

## Theorem

Let $T = \left({S, \tau_p}\right)$ be a particular point space. Then $T$ is non-meager.

## Proof

Aiming for a contradiction, suppose $T$ were meager. Then $S$ would be a countable union of subsets which are nowhere dense in $T$. Let $H \subseteq S$ be any subset containing the particular point $p$. Every non-empty open set of $T$ contains $p$, and so intersects $H$. Hence, as in Closure of Open Set of Particular Point Space, the closure of $H$ is $S$. From the definition of interior, the interior of $S$ is $S$, which is not empty. So the interior of the closure of $H$ is not empty, and no subset of $S$ containing $p$ is nowhere dense in $T$. A countable union of subsets which are nowhere dense in $T$ therefore omits $p$, and so cannot equal $S$. This contradiction shows that $T$ is not meager, and so by definition it must be non-meager. $\blacksquare$