This was too long to be a comment, my apologies: Further to Tim Gowers' comment and the following discussion, Chapter 6 of Bollobás's lovely white book "Combinatorics: Set Systems, Hypergraphs, Families of Vectors, and Combinatorial Probability" studies exactly this kind of question. In fact Tim's comment corresponds to Theorem 6 in that chapter of the book, I believe. The language of that theorem uses symmetric differences of subsets of $\{1,2,\ldots,n\}$, which can be converted to inner products of normalized $n$-dimensional vectors with entries $\pm 1/\sqrt{n}$. Let $\epsilon>0$ be small. The second case of Theorem 6 essentially states that if the normalized inner product is allowed to be in $[-1,\epsilon]$, equivalently, if the pairwise symmetric differences of the sets in the family are all slightly less than $n/2$ in relative terms, then the number of sets in the family, and hence the number of vectors with entries $\pm 1/\sqrt{n}$, can be as large as $2^{\epsilon n}.$ The proof of this part of the theorem is given as an exercise with a nice hint in the book: focus on subsets of size $k=\lceil n/2 \rceil$ and do sphere packing in the Hamming space. The difference from the OP's question is that there the allowed range for the normalized inner product is $[-\epsilon, +\epsilon]$. From Jelani Nelson's answer, it seems that the penalty for restricting the inner product to a band is perhaps not as severe as one might expect: we go from $\epsilon n$ down to $\epsilon^2 \log(1/\epsilon)n$ in the exponent.
Hi, I'm aware of a typical example of an injective immersion that is not a topological embedding: the figure 8, ##\beta: (-\pi, \pi) \to \mathbb R^2##, with ##\beta(t)=(\sin 2t,\sin t)##. As explained here (an-injective-immersion-that-is-not-a-topological-embedding) the image of ##\beta## is compact in...

I have a hypothetical universe where the distance between two points in spacetime is defined as: $$ds^2 =−(\phi^2 t^2)dt^2+dx^2+dy^2+dz^2$$ where ##\phi## has units of ##km\,s^{-2}##. The space in this universe grows quadratically with time (and, as I understand it, probably isn't Minkowski...

I am a newbie to topology and am trying to understand covering maps and quotient maps. At first sight it seems the two are closely related. For example SO(3) is double covered by SU(2) and is also the quotient SU(2)/ℤ2, so the two maps appear to be equivalent. Likewise for ℝ and S1. However, I...

<Moderator's note: Moved from General Math to Differential Geometry.> Let p: E → B be a covering space with a group of deck transformations Δ(p). Let b0 ∈ B be a base point. Suppose that the action of Δ(p) on p-1(b0) is transitive. Show that for all b ∈ B the action of Δ(p) on p-1(b) is also...

1. Homework Statement
We define ##X=\mathbb{N}^2\cup\{(0,0)\}## and ##\tau## (the family of open sets) like this: ##U\in\tau\iff(0,0)\notin U\lor \exists N : n\in\mathbb{N},n>N\implies(\{n\}\times\mathbb{N})\backslash U\text{ is finite}##
##a)## Show that ##\tau## satisfies the axioms for...

https://www.ma.utexas.edu/users/dafr/OldTQFTLectures.pdf
I'm reading the paper linked above (page 10) and have a simple question about notation and another that's more of a sanity check. Given a space ##Y## and a spacetime ##X## the author talks about the associated quantum Hilbert spaces...

1. Homework Statement
Hello All, I am experiencing Adventures in Topology. So far, so good, but I have an issue here. In the topological space (Real #s, U), show that 1 is not an element of Cl((2,3]).

2. Homework Equations
The closed subsets of our topological space are the complements of...
LaTeX typesetting is done using special tags or commands that provide a handful of ways to format your document. Sometimes standard commands are not enough to fulfil some specific needs; in such cases new commands can be defined, and this article explains how. Most LaTeX commands are simple words preceded by a special character. In a document there are different types of \textbf{commands} that define the way the elements are displayed. These commands may insert special elements: $\alpha \beta \Gamma$. In the previous example there are different types of commands. For instance, \textbf will make boldface the text passed as a parameter to the command. In mathematical mode there are special commands to display Greek characters. Commands are special words that determine LaTeX's behaviour. Usually these words are preceded by a backslash and may take some parameters. The command \begin{itemize} starts an environment; see the article about environments for a better description. Below the environment declaration is the command \item, which tells LaTeX that this is an item in a list and thus has to be formatted accordingly, in this case by adding a special mark (a small black dot called a bullet) and indenting it. Some commands need one or more parameters to work. The example in the introduction includes a command to which a parameter has to be passed, \textbf; this parameter is written inside braces and is necessary for the command to do something. There are also optional parameters that can be passed to a command to change its behaviour; these optional parameters have to be put inside brackets. In the example above, the command \item[\S] does the same as \item, except that inside the brackets is \S, which replaces the black dot before the line with a special character. LaTeX is shipped with a huge number of commands for a large number of tasks; nevertheless it is sometimes necessary to define special commands to simplify repetitive and/or complex formatting.
New commands are defined by the \newcommand statement; let's see an example of the simplest usage. \newcommand{\R}{\mathbb{R}} The set of real numbers is usually represented by a blackboard bold capital R: \( \R \). The statement \newcommand{\R}{\mathbb{R}} has two parameters that define the new command: the name of the command, \R, and what it prints, \mathbb{R} (the command \mathbb needs the package amssymb). After the command definition you can see how the command is used in the text. Even though in this example the new command is defined right before the paragraph where it's used, good practice is to put all your user-defined commands in the preamble of your document. It is also possible to create new commands that accept some parameters. \newcommand{\bb}[1]{\mathbb{#1}} Other numerical systems have similar notations. The complex numbers \( \bb{C} \), the rational numbers \( \bb{Q} \) and the integer numbers \( \bb{Z} \). The line \newcommand{\bb}[1]{\mathbb{#1}} defines a new command that takes one parameter: the name is \bb, [1] is the number of parameters, and the definition is \mathbb{#1}, where #1 is replaced by the first argument at each use. User-defined commands are even more flexible than the examples shown above. You can define commands that take optional parameters: \newcommand{\plusbinomial}[3][2]{(#2 + #3)^#1} One way to save time when writing many expressions with exponents is to define a new command to make things simpler: \[ \plusbinomial{x}{y} \] And even the exponent can be changed: \[ \plusbinomial[4]{y}{y} \] Let's analyse the syntax of the line \newcommand{\plusbinomial}[3][2]{(#2 + #3)^#1}: the name is \plusbinomial, [3] is the number of parameters, [2] is the default value of the optional first parameter, and (#2 + #3)^#1 is the definition. If you define a command that has the same name as an already existing LaTeX command you will see an error message during compilation of your document and the command you defined will not work. If you really want to override an existing command this can be accomplished with \renewcommand: \renewcommand{\S}{\mathbb{S}} The Riemann sphere (the complex numbers plus $\infty$) is sometimes represented by \( \S \). In this example the command \S (see the example in the commands section) is overwritten to print a blackboard bold S.
\renewcommand uses the same syntax as \newcommand.
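Putting the examples from this article together, a minimal compilable document might look like this (assuming the amssymb package for \mathbb):

```latex
\documentclass{article}
\usepackage{amssymb}

% Definitions belong in the preamble.
\newcommand{\R}{\mathbb{R}}                        % no parameters
\newcommand{\bb}[1]{\mathbb{#1}}                   % one mandatory parameter
\newcommand{\plusbinomial}[3][2]{(#2 + #3)^{#1}}   % optional first parameter, default 2
\renewcommand{\S}{\mathbb{S}}                      % override an existing command

\begin{document}
The reals \( \R \), the rationals \( \bb{Q} \), a square \( \plusbinomial{x}{y} \),
a fourth power \( \plusbinomial[4]{x}{y} \), and the Riemann sphere \( \S \).
\end{document}
```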
Abbreviation: CReg

A completely regular Hausdorff space is a Hausdorff space $\mathbf{X}=\langle X,\Omega\rangle$ that is completely regular.

Remark: This is a template. If you know something about this class, click on the 'Edit text of this page' link at the bottom and fill out some information on this page. It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes.

Let $\mathbf{A}$ and $\mathbf{B}$ be … . A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x ... y)=h(x) ... h(y)$

Example 1: Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described.

$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$

[[...]] subvariety [[...]] expansion [[...]] supervariety [[...]] subreduct
In Chapter 4 of Seven Sketches, Fong and Spivak introduce 'enriched profunctors' as a way to study collaborative design, where different teams are working together to make a product. An enriched profunctor can be used to answer two questions: given some resources, which requirements can they fulfill? And given some requirements, which resources are needed to fulfill them? You'll notice that these questions are two sides of the same coin! There will be an enriched profunctor going from resources to the requirements they fulfill, and we can 'flip' it to get an enriched profunctor from requirements to the resources needed to fulfill them. As the name suggests, an 'enriched profunctor' is a bit like a functor between enriched categories... for pros. That is, for professionals. Indeed, most category theorists consider enriched profunctors rather sophisticated. But Fong and Spivak bring them down to earth by their clever trick of focusing on preorders rather than more general categories. Remember that in Chapter 1 they introduced preorders. A preorder is a set \(X\) equipped with a relation \(\le\) obeying $$ x \le x$$ and $$ x \le y \text{ and } y \le z \; \implies \; x \le z .$$ Later we saw that a preorder is secretly a category with at most one morphism from any object \(x\) to any object \(y\): if one exists we write \(x \le y\). But because there's at most one, we never have to worry about equations between morphisms. Everything simplifies enormously! This is the key to Fong and Spivak's expository strategy. In Chapter 2 they introduced monoidal preorders. These are a special case of 'monoidal categories', which we haven't discussed yet - but they're much simpler!
A monoidal preorder is a preorder \( (X,\le) \) with an operation \(\otimes : X \times X \to X\) and element \(I \in X\) obeying $$ (x \otimes y) \otimes z = x \otimes (y \otimes z) $$ $$ I \otimes x = x = x \otimes I $$ and $$ x \le x' \textrm{ and } y \le y' \textrm{ imply } x \otimes y \le x' \otimes y' .$$We used preorders to study resources: we said \( x \le y \) if \(x \) is cheaper than \(y\), or you can get \(x\) if you have \(y\). Then we used \(\otimes\) to combine resources, and used \(I\) for a 'nothing' resource: \(x\) combined with nothing is just \(x\). (Actually Fong and Spivak use the opposite convention, writing \(x \le y\) to mean you can get \(y\) if you have \(x\). This seems weird if you think of resources as being like money, but natural if you think of your preorder as a category, and remember \(x \le y\) means there's a morphism \(f : x \to y\). I should probably use this convention.) Later in Chapter 2 they generalized preorders a bit, and introduced categories enriched in a monoidal preorder. Remember the idea: first we choose a monoidal preorder to enrich in, and call it \(\mathcal{V}\). Then a \(\mathcal{V}\)-enriched category, say \(\mathcal{X}\), consists of a set of objects \(\text{Ob}(\mathcal{X})\), and for every two objects \(x,y\), an element \(\mathcal{X}(x,y)\) of \(\mathcal{V}\), such that a) \( I\leq\mathcal{X}(x,x) \) for every object \(x\in\text{Ob}(\mathcal{X})\), and b) \( \mathcal{X}(x,y)\otimes\mathcal{X}(y,z)\leq\mathcal{X}(x,z) \) for all objects \(x,y,z\in\mathrm{Ob}(\mathcal{X})\). We saw that if \(\mathcal{V} = \mathbf{Bool}\), a \(\mathcal{V}\)-enriched category is just a preorder: the truth value \(\mathcal{X}(x,y)\) tells you if you can get from \(x\) to \(y\). But if \(\mathcal{V} \) is something fancier, like \(\mathbf{Cost}\), \(\mathcal{X}(x,y)\) tells you more, like how much it costs to get from \(x\) to \(y\). Where do enriched profunctors fit into this game? 
Here: given \(\mathcal{V}\)-enriched categories \(\mathcal{X}\) and \(\mathcal{Y}\), a \(\mathcal{V}\)-enriched profunctor is a clever kind of thing going from \(\mathcal{X}\) to \(\mathcal{Y}\). If we take objects of \(\mathcal{X}\) to be requirements and objects of \(\mathcal{Y}\) to be resources, we can use a \(\mathcal{V}\)-enriched profunctor from \(\mathcal{X}\) to \(\mathcal{Y}\) to describe, for each choice of requirements, which resources will fulfill it... or how much it will cost to make them fulfill it... or various other things like that, depending on \(\mathcal{V}\). On the other hand, we can use a \(\mathcal{V}\)-enriched profunctor going back from \(\mathcal{Y}\) to \(\mathcal{X}\) to describe, for each choice of resources, which requirements they will fulfill... or how much it will cost to make it fulfill them... or various other things like that, depending on \(\mathcal{V}\). It's all very beautiful and fun. Dive in! Read Sections 4.1 and 4.2.1, and maybe 4.2.2 if you're feeling energetic. Happy Fourth of July! You don't need fireworks for excitement if you've got profunctors.
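In the \(\mathbf{Bool}\)-enriched case this is very concrete: a profunctor is just a feasibility relation satisfying a monotonicity condition. Here is a small sketch in Python; the toy preorders and the relation Phi are made-up illustrations, not examples from the book.

```python
# Toy Bool-enriched profunctor check. Preorders are given as sets of pairs
# (x, x') meaning x <= x' (reflexive and transitive).
X_le = {("widget", "widget"), ("gadget", "gadget"), ("widget", "gadget")}
Y_le = {("$", "$"), ("$$", "$$"), ("$", "$$")}

# Phi(x, y) = "requirement x can be met with resources y".
Phi = {("widget", "$"), ("widget", "$$"), ("gadget", "$$")}

def is_profunctor(Phi, X_le, Y_le):
    # Monotonicity: x' <= x, Phi(x, y) and y <= y'  together imply  Phi(x', y').
    return all(
        (xp, yp) in Phi
        for (x, y) in Phi
        for (xp, x1) in X_le if x1 == x
        for (y1, yp) in Y_le if y1 == y
    )
```

Dropping a required pair breaks the condition: if a cheaper requirement can be met with fewer resources, it must also be met with more.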
No one has shown the second quotient: a) $f(x) = x^2 - 2x$ So \begin{align}\frac{f(x+h) - f(x)}{h} &= \frac{(x +h)^2 - 2(x+h) - x^2 + 2x}{h} \\&= \frac{x^2 + 2hx + h^2 - 2x - 2h - x^2 + 2x}h \\&= \frac{2hx + h^2 - 2h}h \\&= 2x +h - 2\end{align} And for the other \begin{align}\frac{f(x) -f(a)}{x-a} &= \frac{x^2 -2x -a^2 +2a}{x-a} \\&= \frac{(x^2 -a^2) - (2x -2a)}{x-a} \\&= \frac{(x+a)(x-a) - 2(x-a)}{x-a} \\&= \frac{((x+a) -2)(x-a)}{x-a} \\&= x +a -2\end{align} Notice that if you replace $a$ with $x + h$, you get the same thing. ==== (This is all hinting toward learning about derivatives in calculus, by the way.) $$\lim_{h\rightarrow 0} \frac{f(x + h) - f(x)}h = \lim_{h\rightarrow 0} (2x +h -2) = 2x -2$$ and $$\lim_{a\rightarrow x}\frac{f(x) - f(a)}{x-a} = \lim_{a\rightarrow x} (x +a - 2) = x + x -2= 2x -2$$ ===== If you think of what these equations are doing graphically$\ldots$ They are taking two points and dividing the vertical change in value by the horizontal change in value. In other words, they are taking the slope of the line between these points. What happens if the points get very close together? If $h$ becomes tiny, or $x -a$ becomes tiny? Then this is the slope of two points on the graph very close together. As $h$ approaches zero, or $a$ approaches $x$, this becomes very close to the slope of the tangent line at $x$ of the function. That's why these quotients are a big deal.
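A quick numeric check of the algebra (a sketch; the sample point $x=3$ is just an illustration):

```python
# Both quotients for f(x) = x^2 - 2x simplify to 2x + h - 2 (with a = x + h)
# and tend to the derivative 2x - 2, which is 4 at x = 3.
def f(x):
    return x**2 - 2*x

def diff_quotient(x, h):
    return (f(x + h) - f(x)) / h

def secant_slope(x, a):
    return (f(x) - f(a)) / (x - a)

print(diff_quotient(3.0, 0.001))  # ≈ 4.001
print(secant_slope(3.0, 3.001))   # ≈ 4.001, the same value
```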
Makkai and Paré introduced the following binary relation on regular cardinals: given $\kappa$ and $\lambda$, $\kappa \vartriangleleft \lambda$ (read, $\kappa$ is sharply less than $\lambda$) when $\kappa < \lambda$ and, for every set $X$ of cardinality $< \lambda$, the set $P_\kappa (X)$ of all subsets of $X$ of cardinality $< \kappa$ has a cofinal subset of cardinality $< \lambda$. It is not hard to see that $\kappa \vartriangleleft \kappa^+$ for all regular cardinals $\kappa$. On the other hand, $\aleph_1$ is not sharply less than $\aleph_{\omega + 1}$, so $\vartriangleleft$ is not the same as $<$. Nonetheless, it is true that $\aleph_0 \vartriangleleft \lambda$ for every uncountable regular cardinal $\lambda$, simply because $P_{\aleph_0} (X)$ has the same cardinality as $X$ when $X$ is infinite. More generally, if for all (not necessarily regular) cardinals $\kappa' < \kappa$ and all cardinals $\lambda' < \lambda$, we have ${\lambda'}^{\kappa'} < \lambda$, then $\kappa \vartriangleleft \lambda$. In particular if $\lambda$ is an inaccessible cardinal then $\kappa \vartriangleleft \lambda$ for all regular cardinals $\kappa < \lambda$. Question. Do there exist uncountable regular cardinals $\kappa$ such that $\kappa \vartriangleleft \lambda$ if and only if $\kappa < \lambda$? Is there a proper class of them?
If $k \leq (1-\epsilon) N$, where $N$ is the total number of subtrees, then the following approach would work: Start with the empty list $L$. Repeat $k$ times: Pick a random subtree $T'$. If $T' \notin L$, add it to $L$; otherwise, go back to the previous step. Since $k \leq (1-\epsilon) N$, the expected number of times it takes to find $T' \notin L$ is at most $1/\epsilon$ throughout the process; it will be much smaller in the beginning. In particular, if $k \ll \sqrt{N}$, then it is highly likely that you will never generate the same subtree twice. In more detail, the average expected number of repetitions is$$\frac{1}{k} \sum_{\ell=0}^{k-1} \frac{N}{N-\ell} \approx \frac{N}{k} \int_0^k \frac{dx}{N-x} = \frac{N}{k} \ln \frac{N}{N-k} \approx \frac{N}{N-k} = 1 + \frac{k}{N-k}.$$ You can implement the check $T' \notin L$ quickly using a hashtable. In this way, we have reduced your problem to that of generating a uniformly random subtree, which you can do as follows, essentially by reducing the problem of uniform generation to that of counting. In order to pick the root of the subtree, first compute $R(T_x)$ for every vertex $x \in T$ (you can do this in $O(n)$ for all vertices together if you're careful, where $n$ is the number of vertices). The root is $x$ with probability $R(T_x)/N$ (you can choose $x$ quickly using binary search, for example). If $x$ is a leaf, then we're done. Otherwise, suppose that $x$ has children $x_1,\ldots,x_\ell$. Your random subtree skips $x_i$ with probability $1/(1+R(T_{x_i}))$ (independently). If it doesn't skip $x_i$, then you generate a random subtree of $T_{x_i}$ recursively. Here are two other related approaches. The first is to generate all subtrees, permute them randomly in $O(N)$, and then output the prefix of length $k$. A variant of the first approach uses unranking. By modifying the approach above, you can take an integer in the range $0,\ldots,N-1$ and convert it to a subtree. This goes as follows. 
Let $x_1,\ldots,x_n$ be an enumeration of the vertices of $T$. The first $R(T_{x_1})$ integers correspond to subtrees rooted at $x_1$. The following $R(T_{x_2})$ integers correspond to subtrees rooted at $x_2$. And so on. Now suppose we're given an integer $i$ in the range $0,\ldots,R(T_x)-1$, and need to convert it to a subtree rooted at $x$. If $x$ is a leaf, then there is nothing to do. Otherwise, let $x_1,\ldots,x_\ell$ be the children of $x$. We first convert $i$ into $\ell$ numbers $i_1,\ldots,i_\ell$, where $i_j$ is in the range $0,\ldots,R(T_{x_j})$. If $i_j = R(T_{x_j})$, then the subtree won't contain $x_j$. Otherwise, we use $i_j$ to generate the subtree of $T_{x_j}$ recursively. Given the unranking procedure, you can generate a random permutation of $0,\ldots,N-1$, take the prefix of length $k$, and convert it to a list of $k$ subtrees. If you have any other way of generating a random sequence of $k$ elements of $0,\ldots,N-1$ without repetition, then you can use it in the same way.
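The counting-and-sampling part above can be sketched as follows (the tree encoding and function names are our own; a dict maps each vertex to its list of children):

```python
import random

def subtree_counts(children, root):
    # R[x] = number of subtrees rooted at x = product over children c of (1 + R[c]).
    R = {}
    def go(x):
        r = 1
        for c in children.get(x, []):
            r *= 1 + go(c)
        R[x] = r
        return r
    go(root)
    return R

def sample_rooted(children, R, x):
    # Uniform subtree rooted at x: skip child c with probability 1/(1 + R[c]).
    out = {x}
    for c in children.get(x, []):
        if random.random() >= 1 / (1 + R[c]):
            out |= sample_rooted(children, R, c)
    return out

def sample_subtree(children, root):
    R = subtree_counts(children, root)
    verts = list(R)
    # Root of the sampled subtree is x with probability R[x]/N, N = sum of R[x].
    x = random.choices(verts, weights=[R[v] for v in verts])[0]
    return sample_rooted(children, R, x)
```

For the two-leaf tree a→{b, c} this gives $R(T_a)=4$, $N=6$, and each of the six subtrees is drawn with probability $1/6$.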
The answer is $m/n$. The reason is that $$f(n,x) = \sum_{j=0}^{\infty} \frac{x^{j n}}{(j n)!} = \frac1{n} \sum_{k=0}^{n-1} \exp{\left ( e^{i 2 \pi k/n} x\right )} $$ The sum is dominated by the $k=0$ term as $x \to \infty$. The ratio of such terms is thus $m/n$. ADDENDUM Proof of the above assertion is straightforward. The Taylor expansion of the RHS is $$\frac1{n} \sum_{k=0}^{n-1} \sum_{j=0}^{\infty} \frac{e^{i 2 \pi j k/n} x^j}{j!} $$ Reversing the order of summation (justified because each individual sum converges absolutely): $$\frac1{n} \sum_{j=0}^{\infty} \sum_{k=0}^{n-1} \frac{e^{i 2 \pi j k/n} x^j}{j!} = \frac1{n} \sum_{j=0}^{\infty}\frac{ x^j}{j!} \sum_{k=0}^{n-1} e^{i 2 \pi j k/n}$$ The inner sum is a geometric series, so the Taylor expansion is now $$ \frac1{n} \sum_{j=0}^{\infty}\frac{ x^j}{j!} \frac{e^{i 2 \pi j} - 1}{e^{i 2 \pi j/n} - 1} $$ It should be clear that the latter factor is equal to zero unless $j$ is a multiple of $n$, where it is equal to $n$. QED.
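The identity is easy to sanity-check numerically (a sketch; the values of $n$ and $x$ are illustrative):

```python
import cmath
import math

# Compare the truncated series with the roots-of-unity closed form above.
def f_series(n, x, terms=40):
    return sum(x**(j * n) / math.factorial(j * n) for j in range(terms))

def f_roots_of_unity(n, x):
    return sum(cmath.exp(cmath.exp(2j * math.pi * k / n) * x)
               for k in range(n)).real / n

print(f_series(3, 2.0))          # ≈ 2.4236
print(f_roots_of_unity(3, 2.0))  # same value
```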
How many subgroups $K \le \mathbb{Z}_n$ are there with $K =\ker(\phi)$ for some homomorphism $\phi\colon\mathbb{Z}_n \rightarrow \mathbb{Z}_m$? Stuck and in need of a hint. So far I have that there are $\gcd(n,m)$ such homomorphisms, and that all such maps are of the form $\phi_a(k)= ak$ for $a \in \mathbb{Z}_m$ satisfying $na\equiv 0 \pmod m$. Specifically $a = 0, \dfrac{m}{d}, \dfrac{2m}{d},\ldots, (d-1)\dfrac{m}{d}$, where $d = \gcd(n,m)$. Thus we can describe $\ker(\phi_i)$ as the set of all $k$ with $ki\dfrac{m}{d}\equiv 0 \pmod m$ for $i \in \mathbb{Z}_d$. Further, I showed that if $i,j \in U_d$ then $\ker(\phi_i)=\ker(\phi_j)$. However I am stuck figuring out how many kernels there are for the remaining $d-|U_d|$ homomorphisms (aside from the trivial map).
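Not an answer, but a brute-force sketch for exploring small cases (the code and the sample values $n=12$, $m=8$ are just an illustration of the setup described above):

```python
from math import gcd

def homomorphism_kernels(n, m):
    # Enumerate a in Z_m with n*a ≡ 0 (mod m); each gives phi_a(k) = a*k mod m.
    count = 0
    kernels = set()
    for a in range(m):
        if (n * a) % m == 0:
            count += 1
            kernels.add(frozenset(k for k in range(n) if (a * k) % m == 0))
    return count, kernels

count, kernels = homomorphism_kernels(12, 8)
print(count)         # gcd(12, 8) = 4 homomorphisms
print(len(kernels))  # but only 3 distinct kernels
```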
For a simple elementary reaction on an electrode \begin{equation} O+e^-\rightleftharpoons R \end{equation} we can derive the Butler-Volmer equation. But the formula found in John Newman's Electrochemical Systems seems to differ from the one found in Allen Bard's Electrochemical Methods, both of which are considered classics. Below is my understanding, which may be incorrect. In the most general case, Newman writes \begin{equation} i = i_0\biggl[\exp\biggl(\frac{(1-\beta)nF}{RT}\eta_s\biggr)-\exp\biggl(-\frac{\beta nF}{RT}\eta_s\biggr) \biggr] \end{equation} where $\eta_s=V-U$. Here $U$ is the equilibrium potential, which depends on the surface concentration, and so does $i_0$, the exchange current. Bard, on the other hand, writes, in the most general case, the current-overpotential equation \begin{equation}i_n = i_0\biggl[\frac{C_O}{C_O^*}\exp\biggl(-\frac{\alpha nF}{RT}\eta_s\biggr)-\frac{C_R}{C_R^*}\exp\biggl(\frac{(1-\alpha) nF}{RT}\eta_s\biggr) \biggr]\end{equation} which only reduces to the Butler-Volmer equation if mass transfer is not a concern. Here $\eta_s=E-E_{eq}$ (in his notation). $E_{eq}$ is the equilibrium potential established at bulk concentration, a constant fixed by the initial condition, and $i_0$ depends on the bulk concentration. Aside from the difference in anodic/cathodic sign conventions, what Bard has written depends explicitly on the surface concentrations, which suggests that even if the overpotential is negative (which, in Bard's convention, leads to positive cathodic current), an anodic reaction can still be established if $C_R$ is sufficiently high. The more widely known equation (also found in Bockris's Modern Electrochemistry vol. 2) is the one Newman wrote. But that equation will only give (in Newman's convention) positive anodic current if the overpotential $\eta_s$ is positive, even if $C_O$ dominates (which leads to cathodic current in Bard's formula).
It seems that the derivations differ in the reference potential: Newman takes the equilibrium potential $U$ to depend on the surface concentration, while Bard references the open-circuit potential at bulk concentration. Which is right and which is wrong?
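For what it's worth, a small numeric sketch (all constants and values are illustrative): when the surface concentrations equal the bulk ones and $\beta = \alpha$, the two expressions agree up to the opposite sign convention, so the disagreement only shows up away from those conditions.

```python
import math

# Illustrative constants, not taken from either book.
F, R, T = 96485.0, 8.314, 298.15
n, alpha, i0 = 1, 0.5, 1e-3
f = n * F / (R * T)

def newman(eta, beta=0.5):
    # Newman's Butler-Volmer form, anodic current positive.
    return i0 * (math.exp((1 - beta) * f * eta) - math.exp(-beta * f * eta))

def bard(eta, cO=1.0, cOb=1.0, cR=1.0, cRb=1.0):
    # Bard's current-overpotential equation, cathodic current positive.
    return i0 * ((cO / cOb) * math.exp(-alpha * f * eta)
                 - (cR / cRb) * math.exp((1 - alpha) * f * eta))

eta = 0.05
print(newman(eta), bard(eta))  # same magnitude, opposite sign
```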
On general super-target spaces the $\kappa$-symmetry of the Green-Schwarz action functional is indeed a bit, say, inelegant. But a miracle happens as soon as the target space has the structure of a super-group (notably if it is just super-Minkowski spacetime with its canonical structure of the super-translation group over itself): in that case the Green-Schwarz action functional is just a supergeometric analog of the Wess-Zumino-Witten functional, with a certain exceptional super-Lie algebra cocycle on spacetime playing the role of the B-field in the familiar WZW functional. It turns out that this statement implies and subsumes $\kappa$-symmetry in these cases. Moreover, this nicely explains the brane scan of superstring theory: a Green-Schwarz action functional for super-$p$-branes on super-spacetime exists precisely for each exceptional super-Lie algebra cocycle on spacetime. Classifying these yields all the super-$p$-branes... or almost all of them. It turns out that some are missing in the "old brane scan". For instance the M2-brane is there (it is given by a $\kappa$-symmetric Green-Schwarz action functional) but the M5-brane is missing. Physically the reason is of course that the M5-brane is not just a $\sigma$-model, but also carries a higher gauge field on its worldvolume: it has a "tensor multiplet" of fields instead of just its embedding fields. But it turns out that mathematically this also has a neat explanation that corrects the "old brane scan" of $\kappa$-symmetric Green-Schwarz action functionals in its super-Lie-theoretic/WZW interpretation: namely the M5-brane and all the D-branes etc. do appear as generalized WZW models as soon as one passes from super Lie algebras to super Lie $n$-algebras. Using this one can build higher-order WZW models from exceptional cocycles on super-$L_\infty$-algebra extensions of super-spacetime.
The classification of these is richer than the "old brane scan" and looks like a "bouquet": it is a "brane bouquet", and it contains precisely all the super-$p$-branes of string/M-theory. This is described in a bit more detail in these notes: the brane bouquet diagram itself appears for instance on p. 5 there. Notice that this picture looks pretty much like the standard "star cartoon" that everyone draws of M-theory. But this brane bouquet is a mathematical theorem in super $L_\infty$-algebra extension theory. Each point of it corresponds to precisely one $\kappa$-symmetric Green-Schwarz action functional, generalized to tensor multiplet fields.
I have a CO2 sensor that outputs signal values of 30-50 mV. I need to translate these voltages to 0-5 V for my microcontroller with the highest resolution. I understand that I can amplify the voltage using a non-inverting op-amp circuit, as shown, to a range of 3-5 V, but is it possible to expand that range to 0-5 V in order to get better resolution of sensor values?

You can use a differential amplifier to subtract the 30 mV offset. When R1 = R2 and R3 = R4 the transfer function is \$ V_{OUT} = \dfrac{R3}{R1}(V_2 - V_1) \$ So set V1 to 30 mV and choose R3 = 250 \$\times \$ R1. A problem with differential amplifiers is that R1 will load the resistor divider used to get the 30 mV offset, so that you have to recalculate the resistors, and also V2 will see an input impedance which may distort the measurement. An instrumentation amplifier is the solution. Most instrumentation amplifiers are differential amplifiers with a buffering input stage. The input stage sets the gain, while the differential stage is usually a \$\times\$1 amplifier. The amplification is then \$ V_{OUT} = \dfrac{2 R2}{R1}\cdot \dfrac{R4}{R3} (V_2 - V_1) \$ The Microchip MCP6N11 is a suitable device.

An instrumentation amplifier is what you need here (though an op-amp could be used with some attention to detail). Depending on your supply (single, dual) you need to be careful though. If using a single supply (e.g. 0-5 V) you must make sure the InAmp can handle common-mode inputs at the level of your input signals, which will be 30-50 mV relative to ground (so the input range must include ground). Also, since your output includes ground (and the power rail if using a 5 V supply) you must make sure the output can swing fully to both rails. Many InAmps don't do either of these things. The LTC2053 is one rail-to-rail in/out option, as is the MCP6N11 Steven mentions. EDIT - the LTC2053 will not be suitable as its input impedance is not high enough.
The MG811 datasheet specifies the need for an op-amp/InAmp with an input impedance of >100 GΩ, so something like the MCP6N11 Steven recommends is needed. This has an input resistance of \$ 10^{13}\Omega \$, which is \$ 10\ T\Omega \$. I have left the rest of the answer to demonstrate a typical setup, since the principle is the same regardless of the InAmp used. Anyway, as long as you take care with the above, the setup is pretty simple. Apply 30 mV to the inverting input, the signal to the non-inverting input, and set the gain to (5 V - 0 V) / (50 mV - 30 mV) = 250. Here is a dual-rail (±5 V) example circuit with the LT1789 InAmp: Simulation: Single-supply LTC2053 circuit (simulation not shown as it's the same as above):

Use an instrumentation amplifier like this one. Since you want to amplify 30-50 mV to 0-5 V, 5 V/(50 mV - 30 mV) = gain of 250. Use the datasheet to select a gain resistor. For my example, G = 1 + (100k/Rg), so Rg = 100k/(G-1), giving 402 Ω. These values need to be pretty exact, and when in doubt make Rg a little bigger and sacrifice a little span. Since you want 0-5 V, you'll want to set the reference voltage to 2.5 V since that is the middle of the span. Use a reference diode for that.

protected by Dave Tweed♦ Jul 18 '14 at 11:18
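The gain arithmetic in the answers above can be sketched as follows (the 100k figure is the example in-amp's internal feedback value; all component values are illustrative):

```python
# Map 30-50 mV onto 0-5 V with the highest resolution.
v_in_lo, v_in_hi = 0.030, 0.050
v_out_lo, v_out_hi = 0.0, 5.0

gain = (v_out_hi - v_out_lo) / (v_in_hi - v_in_lo)
print(gain)  # ≈ 250

# For an in-amp with G = 1 + 100k/Rg  =>  Rg = 100k/(G - 1)
rg = 100_000 / (gain - 1)
print(round(rg))  # 402 (ohms, close to a standard 1% value)
```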
I've been given two equations in spherical form, one of a sphere $(S_1)$ and the other of a cylinder $(S_2)$: $$S_1 : \rho = 4\cos \phi \text{ and } S_2 : \rho \sin \phi = 1$$ I need to find the volume inside the sphere and inside the cylinder using both spherical and cylindrical coordinates. So far I was able to convert the surfaces to Cartesian form: $$S_1 : x^2 + y^2 + (z-2)^2 = 4 \text{ and } S_2: x^2 + y^2 = 1$$ But now when I try calculating the volume, I get values that don't make sense. I'm expecting a volume a bit less than $4\pi$ $units^3$ $(4 \cdot \pi \cdot 1^2)$, but I can't seem to come near that. I'm getting values around $5$ and $6$ (depending on whether I calculated it in cylindrical or Cartesian coordinates). Please help. Thanks
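Not a full solution, but a quick numerical cross-check of the target value, using the Cartesian forms above (a sketch; the midpoint-rule integration is ours): inside the cylinder $r \le 1$ the sphere bounds $z$ between $2 \pm \sqrt{4-r^2}$, so $V = \int_0^1 2\pi r \cdot 2\sqrt{4-r^2}\, dr$.

```python
import math

def volume(steps=100_000):
    # Midpoint-rule approximation of V = ∫_0^1 2*pi*r * 2*sqrt(4 - r^2) dr.
    dr = 1.0 / steps
    total = 0.0
    for i in range(steps):
        r = (i + 0.5) * dr
        total += 2 * math.pi * r * 2 * math.sqrt(4 - r * r) * dr
    return total

print(volume())  # ≈ 11.74, indeed a bit less than 4*pi ≈ 12.57
```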
I'm struggling with this problem; I'm still only on part (a). I tried $x = r\cos\theta$, $y = r\sin\theta$, but I don't think I'm doing it right. Curve C has polar equation $r=\sin{\theta}+\cos{\theta}$. (a) Write parametric equations for the curve C. $\left\{\begin{matrix} x= \\ y= \end{matrix}\right.$ (b) Find the slope of the tangent line to C at the point where ${\theta} = \frac{\pi}{2}$. (c) Calculate the length of the arc for $0 \leq {\theta} \leq {\pi}$ of that same curve C with polar equation $r=\sin{\theta}+\cos{\theta}$.
Abbreviation: DCPO

A directed complete partial order is a poset $\mathbf{P}=\langle P,\leq \rangle$ such that every directed subset of $P$ has a least upper bound: $\forall D\subseteq P\ (D\ne\emptyset\ \mbox{ and }\ \forall x,y\in D\ \exists z\in D\ (x,y\le z)\Longrightarrow \exists z\in P\ (z=\bigvee D))$.

Let $\mathbf{P}$ and $\mathbf{Q}$ be directed complete partial orders. A morphism from $\mathbf{P}$ to $\mathbf{Q}$ is a function $f:P\rightarrow Q$ that is Scott-continuous, which means that $f$ preserves all directed joins: $z=\bigvee D\Longrightarrow f(z)= \bigvee f[D]$

Example 1: $\langle \mathbb{R}\cup\{\infty\},\leq \rangle$, the extended real numbers with the standard order.

Example 2: $\langle P(S),\subseteq \rangle$, the collection of subsets of a set $S$, ordered by inclusion.

Classtype: second-order. Amalgamation property. Strong amalgamation property. Epimorphisms are surjective.

$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ f(6)= &\\ \end{array}$
Hi, can someone provide me some self-study reading material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks

@skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and it was essentially a more mathematically detailed version of the first :)

2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus

Same thought as you, however I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein. However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look to machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown

Since the GR equations are ultimately a system of 10 nonlinear PDEs, it is possible the solution strategy has some relation to the class of spacetime under consideration, and that might help heavily reduce the parameters needed to simulate them

I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components.
The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system. @ooolb Even if that is really possible (I can always talk about things from a non-joking perspective), the issue is that 1) unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have reduced probability of appearing in dreams), and 2) for 6 years, my dreams have yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams @0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after, say, training it on 1000 different PDEs Actually that makes me wonder, is the space of all coordinate choices larger than the space of all possible moves of Go? enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others Btw, since gravity is nonlinear, do we expect that a region where spacetime is frame-dragged in the clockwise direction, superimposed on a spacetime that is frame-dragged in the anticlockwise direction, will result in a spacetime with no frame dragging? (one possible physical scenario where this can occur may be when two massive rotating objects with opposite angular velocities are on course to merge) Well, I'm a beginner in the study of General Relativity, ok?
My knowledge of the subject is based on books like Schutz, Hartle, Carroll and introductory papers. About quantum mechanics I still have poor knowledge. So, what I meant by "gravitational double slit experiment" is: is there a gravitational analogue of the double slit experiment for gravitational waves? @JackClerk the double slit experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources. But if we could figure out a way to do it then yes, GWs would interfere just like light waves. Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: imagine a huge double slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, we will see the pattern? So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference space-time would have a flat geometry, and then if we put a spherical object in this region the metric will become Schwarzschild-like. Pardon, I just spent some naive-philosophy time here with these discussions. The situation was even more dire for Calculus and I managed! This is a neat strategy I have found - revision becomes more bearable when I have The h Bar open on the side. In all honesty, I actually prefer exam season! At all other times - as I have observed in this semester, at least - there is nothing exciting to do. This system of tortuous panic, followed by a reward, is obviously very satisfying.
My opinion is that I need you Kaumudi to decrease the probability of the h bar having software-infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago (Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers) that's true. though back in high school, regardless of the language, our teacher taught us to always indent your code to allow easy reading and troubleshooting. We were also taught the 4-space indentation convention @JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy I can do all tasks related to my work without leaving the text editor (of course, that text editor is emacs). The only inconvenience is that some websites don't render in an optimal way (but most of the work-related ones do) Hi to all. Does anyone know where I could write MATLAB code online (for free)? Apparently another one of my institution's great inspirations is to have a MATLAB-oriented computational physics course without having MATLAB on the university's PCs. Thanks. @Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in the $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction.
Use this to form a triangle and you'll get the answer with simple trig :) @Kaumudi.H Ah, it was okayish. It was mostly memory-based. Each small question was worth 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the GPA. @Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but by connecting to the university's server - which means remotely running another environment - I found an older version of MATLAB). But thanks again. @user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem-solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject; and it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it, probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding. If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
Let $X_{1},X_{2},\ldots,X_{n}$ be a random sample whose distribution is given by $\mathcal{N}(\mu,\sigma^{2})$, where both parameters are unknown. (a) Prove that the normal probability density function satisfies the hypotheses of the Cramér-Rao theorem. (b) Prove that $T(\textbf{X}) = \hat{\sigma}^{2}$ attains the Cramér-Rao bound. The exercise proposes using the following result. RESULT: Suppose that $X_{1},X_{2},\ldots,X_{n}$ are IID with joint probability density function $f(\textbf{x}|\theta)$. If $T(\textbf{X})$ is any unbiased estimator of $\psi(\theta) = \textbf{E}_{\theta}(T(\textbf{X}))$, then $T(\textbf{X})$ attains the Cramér-Rao bound if and only if \begin{align*} \frac{\partial\ln f(\textbf{x}|\theta)}{\partial\theta} = h(\theta)[T(\textbf{X}) - \psi(\theta)] \end{align*} for some function $h(\theta)$. I am really having trouble proceeding with the exercise. Any help is appreciated. Thanks in advance!
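For orientation, here is a sketch of the factorization the RESULT asks for, in the simplified case where $\mu$ is treated as known (my own addition; with both parameters unknown one has to be careful about which estimator $\hat{\sigma}^{2}$ denotes):

```latex
\ln f(\mathbf{x}\mid\sigma^2)
  = -\frac{n}{2}\ln(2\pi\sigma^2)
    - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2,
\qquad
\frac{\partial \ln f}{\partial(\sigma^2)}
  = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^{n}(x_i-\mu)^2
  = \frac{n}{2\sigma^4}\Big[\underbrace{\tfrac{1}{n}\textstyle\sum_{i=1}^{n}(x_i-\mu)^2}_{T(\mathbf{X})}
      - \sigma^2\Big],
```

which is exactly of the form $h(\theta)[T(\mathbf{X})-\psi(\theta)]$ with $\theta=\sigma^2$, $h(\sigma^2)=n/(2\sigma^4)$ and $\psi(\sigma^2)=\sigma^2$.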
Search for new resonances in $W\gamma$ and $Z\gamma$ Final States in $pp$ Collisions at $\sqrt{s}=8\,\mathrm{TeV}$ with the ATLAS Detector (Elsevier, 2014-11-10) This letter presents a search for new resonances decaying to final states with a vector boson produced in association with a high transverse momentum photon, $V\gamma$, with $V= W(\rightarrow \ell \nu)$ or $Z(\rightarrow ... Fiducial and differential cross sections of Higgs boson production measured in the four-lepton decay channel in $\boldsymbol{pp}$ collisions at $\boldsymbol{\sqrt{s}}$ = 8 TeV with the ATLAS detector (Elsevier, 2014-11-10) Measurements of fiducial and differential cross sections of Higgs boson production in the ${H \rightarrow ZZ ^{*}\rightarrow 4\ell}$ decay channel are presented. The cross sections are determined within a fiducial phase ...
Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2016-09) The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ...
A good way to think about these equations is to imagine them as sets over valid words. Say I have an alphabet $\Sigma = \{a, b\}$, then the rule$$A \leftarrow A a \mid b$$ says that there's some set $A$ that is isomorphic to the set $(A * \{a\}) \cup \{b\}$. Here the $*$ operation is the product/concatenation operation lifted over sets, that is$$\{a, b, c\} * \{d, e, f\} = \{ad, ae, af, bd, be, bf, cd, ce, cf\}$$ Going back to our grammar rule, it is basically declaring the existence of a set $A$ of words that satisfies the equation$$A = (A * \{a\}) \cup \{b\}.$$ Now, there's a huge class of sets of words that satisfy this equation. For example, the set $A_\bot = \{b, ba, baa, baaa, baaaa, \dots\}$ is a valid solution, since it contains $b$, and appending an $a$ to any element of $A_\bot$ results in a word that is already in $A_\bot$. However, you should check that$$A_{bb} = \{b, bb, ba, bba, baa, bbaa, baaa, bbaaa, \dots\}$$is also a valid solution to this equation. In particular, $b$ is already in $A_{bb}$, and for any element $ba^k$ or $bba^k$ in $A_{bb}$, its concatenation with $a$ results in $ba^{k+1}$ or $bba^{k+1}$, which are also already in $A_{bb}$. Therefore, $A_{bb}$ is also a valid fixed point. In fact, for any given ground set of words $S = \{w_1, w_2, w_3, \dots\}$, we can construct a closure $A_S$ which contains every word in $S$ and similarly satisfies the above equation. In particular, the closure operator iterates the same isomorphism, starting from $A_S^0 = S$: \begin{align*}A_S^{k + 1} &= (A_S^k * \{a\}) \cup \{b\} \\A_S &= \bigcup_k^\infty A_S^k\end{align*} So now we get to the real question. If there is an infinite supply of solutions to this equation, then what good is it as a characterization of some grammar? Well, intuitively, we hope to characterize just those words that are generated from this rule "from scratch". In effect, we wish to treat these rules as free objects. That is, we care about the case where the underlying generator is $S = \{\}$.
Let$$A(S) = (S * \{a\}) \cup \{b\}$$then we want$$A_* = \bigcup_k A^{(k)}(\{\})$$to be the set generated by our grammar rule, where $A^{(k)} = A(A(\stackrel{k}{\dots} A(\cdot)))$. This then gives some intuition for what we mean by the "least solution." $A(S)$ is monotone in the sense that $S \subseteq S' \implies A(S) \subseteq A(S')$; in fact, this holds for all grammar rules. Since the least element of the class of possible sets that $S$ can take (ordered by set inclusion) is the empty set, the "freely generated" language that we desire also turns out to be the smallest (ordered by set inclusion) fixed point, that is, $A_* \subseteq S'$ for any fixed point $S'$ of the equation. In general, you'll see a lot of these "smallest solution" clauses all around the landscape of computer science. For example, any recursive program is the smallest solution to some program equation; any constructible inductive datatype is the smallest solution to some inductive datatype equation; any valid program semantics is the smallest solution to some logical equation; etc. It turns out that these classes of topological closures correspond elegantly to a notion of finiteness. For example, the set of all finitary words $\Sigma^*$ is itself a least fixed point. Since computations themselves can be seen as some enumeration process of finite objects, the analogy holds quite naturally; least fixed points typically give some assurance of some form of computability. In conclusion, if you see the phrase "least solution" or "the least fixed point," it often means that the problem concerns finite objects that are freely generated, in the sense that only things that can be derived "from scratch" are considered.
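The iteration $A_* = \bigcup_k A^{(k)}(\{\})$ can be made concrete with a short sketch (my own illustration; I truncate words at a maximum length so the loop terminates):

```python
# Kleene-style iteration of A(S) = (S * {a}) ∪ {b} starting from the
# empty set, truncated to words of length <= MAX_LEN so it terminates.
MAX_LEN = 5

def step(S):
    """One application of the rule A <- A a | b, lifted over sets."""
    return {w + "a" for w in S if len(w) < MAX_LEN} | {"b"}

S = set()
while True:
    nxt = step(S)
    if nxt == S:          # reached the least fixed point (up to MAX_LEN)
        break
    S = nxt

print(sorted(S))   # -> ['b', 'ba', 'baa', 'baaa', 'baaaa']
```

Note that the larger fixed point $A_{bb}$ from the text is never reached: starting from $\{\}$, only the freely generated words $ba^k$ ever appear.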
A remark on an eigenvalue condition for the global injectivity of differentiable maps of $R^2$ 1. Instituto de Ciências Matemáticas e de Computação - USP, Cx. Postal 668, CEP 13560-970, São Carlos, SP, Brazil 2. Institute of Mathematics, P.O. Box 1078, Hanoi, Vietnam There does not exist a sequence $R^2 \ni x_i\rightarrow \infty$ such that $X(x_i)\rightarrow a\in \mathbb R^2$ and $DX(x_i)$ has a real eigenvalue $\lambda_i\rightarrow 0$. When the graph of $X$ is an algebraic set, this condition becomes a necessary and sufficient condition for $X$ to be a global diffeomorphism. Mathematics Subject Classification: Primary: 14R15; Secondary: 14E07, 14E09, 14E4. Citation: Carlos Gutierrez, Nguyen Van Chau. A remark on an eigenvalue condition for the global injectivity of differentiable maps of $R^2$. Discrete & Continuous Dynamical Systems - A, 2007, 17 (2): 397-402. doi: 10.3934/dcds.2007.17.397
The motivation and previous discussion of some related ideas can be seen here. My question: given a sequence $u_n\in u_0+H^1_0(\Omega)$, where $u_0\in H^1(\Omega)$ and $\Omega\subset \mathbb R^N$ is open and bounded with Lipschitz boundary, assume $u_n\to u\in u_0+H^1_0(\Omega)$ weakly in $H^1$. I want to obtain a sequence of functions $u_{n,\epsilon}$ with double index $n$, $\epsilon$ such that $u_{n,\epsilon}\in C^\infty(\bar \Omega)$, $u_{n,\epsilon}\to u_{\epsilon}$ weakly in $H^1$, and $$ \lim_{\epsilon\to 0}\sup_{n\in\mathbb N}\left| \int_\Omega |\nabla u_{n,\epsilon}|^2dx- \int_\Omega |\nabla u_{n}|^2dx\right|=0, $$ i.e., the mollification is uniform in $n$. Or this condition can be weakened as follows: for any $\delta>0$, there exists a sequence of functions $u_{n,\epsilon}$ with double index $n$, $\epsilon$ such that $u_{n,\epsilon}\in C^\infty(\bar \Omega)$, $u_{n,\epsilon}\to u_{\epsilon}$ weakly in $H^1$, and $$ \lim_{\epsilon\to 0}\sup_{n\in\mathbb N}\left| \int_\Omega |\nabla u_{n,\epsilon}|^2dx- \int_\Omega |\nabla u_{n}|^2dx\right|<\delta. $$ If we knew the Hessians $\nabla^2 u_n$ were uniformly bounded in $L^2$, then this would be easy using the standard mollification operator. But I am wondering whether we can still obtain the same without information on the Hessian? Maybe some other mollification construction?
Let $X$ be a topological space. Definitions: $X$ is countably compact if every countable open cover of $X$ has a finite subcover, or equivalently, every sequence in $X$ has a cluster point. $X$ is sequentially compact if every sequence in $X$ has a convergent subsequence. $X$ is sequential if every sequentially closed set is closed. It is known that if $X$ is countably compact + sequential + $T_2$ then $X$ is sequentially compact (see e.g. Engelking). The proof goes like this: Let $x_n$ be a sequence in $X$. Since $X$ is countably compact, $x_n$ has a cluster point $x \in X$. If $\{ n \mid x_n = x \}$ is infinite then we have a constant subsequence of $x_n$, thus convergent. So assume that $\{ n \mid x_n = x \}$ is finite, so that there is some $n_0$ with $x_n \neq x$ for all $n \geq n_0$. Consider the set $A := \{ x_n \mid n \geq n_0 \} \setminus \{ x \}$. Then $A$ is not closed and, since $X$ is sequential, $A$ is not sequentially closed. Thus, there is a sequence $y_k \in A$ and $y \in X \setminus A$ such that $y_k \to y$. Since $X$ is $T_2$, it follows that $y_k$ is not eventually constant, since otherwise $y_k \to y_N \in A$ for some $N \in \mathbb{N}$, and $y_k \to y \in X \setminus A$ would imply $y_N = y$, which is a contradiction. Thus, we have infinitely many distinct $y_k$ in $A$, which can finally be used to construct a convergent subsequence of $x_n$. There are also other properties $\varphi$ such that countable compactness + $\varphi$ imply sequential compactness. As an example, $\varphi$ can be taken to be first-countable or even Fréchet-Urysohn (cluster points of injective sequences $x_n$ are accumulation points of the corresponding sets $x(\mathbb{N})$, thus lying in the closure and thus approximable by a sequence in $x(\mathbb{N})$, which can be used to generate a convergent subsequence of $x_n$). There is no need for an additional separation property.
In my eyes, the Fréchet-Urysohn property is not "too far" away from the sequential property, and thus it is a little bit "strange" that sequentiality needs an additional separation property. By "too far" I mean that typical spaces that are sequential but not Fréchet-Urysohn are a little bit pathological (e.g. the Arens-Fort space). Questions: Is there some deeper insight into why we need a separation property for sequentiality but not for the Fréchet-Urysohn property? Is the separation property really needed, i.e. is there some sequential space which is countably compact but not sequentially compact? Remark: In fact, for the uniqueness of the sequential limit we can weaken the $T_2$ separation property to the $US$ separation property (i.e. $X$ is sequentially Hausdorff), which lies strictly between $T_1$ and $T_2$. This hints that $T_1$ should not be enough.
I am studying differential geometry and I am trying to prove the expression below. Given that, for a map $\phi : M \to M$, the pull-back $\phi^{*}\omega \in T^{\ast}_{p}M$ of a 1-form $\omega \in T^{\ast}_{\phi(p)}M$ is defined by $(\phi^{*}\omega)(v) = \omega(\phi_{*}v)$, where $v \in T_{p}M$: how would we prove that in the coordinate basis $dx^{\mu}_{p}$, $\phi^{*}\omega$ has components $(\phi^{*}\omega)_{\nu} = \frac{\partial x^{'\mu}}{\partial x^{\nu}}\omega_{\mu}$, where $\omega = \omega_{\mu}dx^{\mu}_{\phi(p)}$ and $x^{'\mu} = x^{\mu} \circ \phi$? And also: if $\phi$ is a diffeomorphism, then the push-forward $\phi_{*}\omega \in T^{\ast}_{\phi(p)}M$ of a 1-form $\omega \in T^{\ast}_{p}M$ is defined by $(\phi_{*}\omega)(v) = \omega(\phi^{*}v)$ for any $v \in T_{\phi(p)}M$. Prove that in the coordinate basis $dx^{\mu}_{\phi(p)}$, $\phi_{*}\omega$ has components $(\phi_{*}\omega)_{\nu} = \frac{\partial x^{\mu}}{\partial x^{'\nu}}\omega_{\mu}$. To clarify things please find the extract of the notes I am reading: [extract]
Interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function. When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1... Consider a random binary string where each bit is set to 1 with probability $p$. Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ such that the $x$-th bit is set to 1, $y$ bits are set to 1 including the $x$-th bit, and there are no runs of $k$ consecutive zer... The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$. Why, in the definition of algebraic closure, do we need '$\overline F$ is algebraic over $F$'? That is, if we remove the '$\overline F$ is algebraic over $F$' condition from the definition of algebraic closure, do we get a different result? Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$). Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa... @AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works. Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme in the past 2 months. Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length.
Is that often part of the definition, or obtained from the definition (I don't see how it could be the latter)? Well, that then becomes a chicken-and-egg question. Did we have the reals first and simplify from them to more abstract concepts, or did we have the abstract concepts first and build them up to the idea of the reals? I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ... I was watching this lecture, and in reference to the above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side. On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them, or where their definition appears in Hatcher's book? Suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$; then for $|z| < |z_0|$, $|a_n z^n| = |a_n z_0^n| \left(\left|\dfrac{z}{z_0}\right|\right)^n < \dfrac12 \left(\left|\dfrac{z}{z_0}\right|\right)^n$, so $a_n z^n$ is absolutely summable, so $a_n z^n$ is summable. Let $g : [0,\frac{1}{2}] \to \mathbb R$ be a continuous function. Define $g_n : [0,\frac{1}{2}] \to \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s)\, ds$, for all $n \geq 1$. Show that $\lim_{n\to\infty} n!\,g_n(t) = 0$, for all $t \in [0,\frac{1}{2}]$. Can you give some hint? My attempt: $t\in [0,1/2]$. Consider the sequence $a_n(t)=n!\,g_n(t)$. If $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}<1$, then it converges to zero.
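For the last question, a quick numerical illustration (my own, for the special case $g \equiv 1$, where the iterated integrals are $g_n(t) = t^{n-1}/(n-1)!$ and hence $n!\,g_n(t) = n\,t^{n-1}$):

```python
# For g ≡ 1 the iterated integrals are g_n(t) = t**(n-1)/(n-1)!,
# so n! * g_n(t) = n * t**(n-1), which tends to 0 for t in [0, 1/2].
def n_fact_g_n(n, t):
    return n * t ** (n - 1)

t = 0.5
vals = [n_fact_g_n(n, t) for n in range(1, 60)]
assert vals[-1] < 1e-15                                           # tends to zero
assert all(vals[i + 1] <= vals[i] for i in range(2, len(vals) - 1))  # eventually decreasing
```

This only treats one special case, of course; the general claim needs the bound $|g_n(t)| \le \|g\|_\infty\, t^{n-1}/(n-1)!$, which the same computation then controls.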
I have a bilinear functional that is bounded from below. I try to approximate the minimum by an ansatz function that is a linear combination of some independent functions of the proper function space. I then obtain an expression that is bilinear in the coefficients. Using the stationarity condition (all derivatives of the functional w.r.t. the coefficients = 0) I get a set of $n$ equations in the $n$ coefficients: a set of $n$ linear homogeneous equations in the $n$ coefficients. Now, instead of directly attempting to solve the equations for the coefficients, I rather look at the secular determinant, which should be zero, since otherwise no nontrivial solution exists. This "characteristic polynomial" directly yields all permissible approximation values of the functional from my linear ansatz, avoiding the necessity of solving for the coefficients. I have problems now formulating the question, but it strikes me that a direct solution of the equations can be circumvented, and instead the values of the functional are obtained directly by using the condition that the determinant is zero. I wonder if there is something deeper in the background, or, so to say, a more general principle. If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome and digitsum(z)=digitsum(x). > Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel. (Translation: It is well known that Paul du Bois-Reymond was the first to demonstrate the existence of an everywhere continuous function whose Fourier series diverges at a point. Afterwards, Hermann Amandus Schwarz gave a simpler example.)
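The secular-determinant procedure described earlier in this section (stationarity of a quadratic form leading to $\det(A-\lambda I)=0$) can be sketched numerically. This is my own minimal illustration with an arbitrary symmetric $2\times 2$ matrix, not the asker's actual functional:

```python
import numpy as np

# For a quadratic functional E(c) = c^T A c with symmetric A, the
# stationarity conditions dE/dc_i = 0 (under normalization) lead to the
# secular equation det(A - lam*I) = 0, whose roots are exactly the
# permissible values of the functional -- no need to solve for the
# coefficient vectors first.
A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

lams = np.linalg.eigvalsh(A)          # roots of the secular determinant
for lam in lams:
    # each root makes the secular determinant vanish:
    assert abs(np.linalg.det(A - lam * np.eye(2))) < 1e-9

print(lams)                           # -> [1. 3.]
```

The "deeper principle" being gestured at is that the permissible stationary values are eigenvalues of the bilinear form, which is why the determinant condition delivers them without ever producing the coefficients.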
It's discussed very carefully (though no formula is explicitly given) in my favorite introductory book on Fourier analysis, Körner's Fourier Analysis; see pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!!
The amsmath package provides a handful of options for displaying equations. You can choose the layout that better suits your document, even if the equations are really long, or if you have to include several equations in the same line. Contents The standard LaTeX tools for equations may lack some flexibility, causing overlapping or even trimming part of the equation when it's too long. We can overcome these difficulties with amsmath. Let's check an example: \begin{equation} \label{eq1} \begin{split} A & = \frac{\pi r^2}{2} \\ & = \frac{1}{2} \pi r^2 \end{split} \end{equation} You have to wrap your equation in the equation environment if you want it to be numbered; use equation* (with an asterisk) otherwise. Inside the equation environment, use the split environment to split the equation into smaller pieces, and these smaller pieces will be aligned accordingly. The double backslash works as a newline character. Use the ampersand character & to set the points where the equations are vertically aligned. This is a simple step; if you use LaTeX frequently surely you already know this. In the preamble of the document include the code: \usepackage{amsmath} To display a single equation, as mentioned in the introduction, you have to use the equation* or equation environment, depending on whether you want the equation to be numbered or not. Additionally, you might add a label for future reference within the document. \begin{equation} \label{eu_eqn} e^{\pi i} + 1 = 0 \end{equation} The beautiful equation \ref{eu_eqn} is known as Euler's identity. For equations longer than a line use the multline environment. Insert a double backslash to set a point for the equation to be broken. The first part will be aligned to the left and the second part will be displayed in the next line and aligned to the right. Again, the use of an asterisk * in the environment name determines whether the equation is numbered or not.
\begin{multline*} p(x) = 3x^6 + 14x^5y + 590x^4y^2 + 19x^3y^3\\ - 12x^2y^4 - 12xy^5 + 2y^6 - a^3b^3 \end{multline*} split is very similar to multline. Use the split environment to break an equation and to align it in columns, just as if the parts of the equation were in a table. This environment must be used inside an equation environment. For an example check the introduction of this document. If there are several equations that you need to align vertically, the align environment will do it: usually the relation symbols (>, < and =) are the ones aligned for a nice-looking document. As mentioned before, the ampersand character & determines where the equations align. Let's check a more complex example: \begin{align*} x&=y & w &=z & a&=b+c\\ 2x&=-y & 3w&=\frac{1}{2}z & a&=b\\ -4 + 5x&=2+y & w+2&=-1+w & ab&=cb \end{align*} Here we arrange the equations in three columns. LaTeX assumes that each equation consists of two parts separated by a &, and that each equation is separated from the one before by an &. Again, use * to toggle the equation numbering. When numbering is allowed, you can label each row individually. If you just need to display a set of consecutive equations, centered and with no alignment whatsoever, use the gather environment. The asterisk trick to set/unset the numbering of equations also works here. For more information see
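Since the text describes gather without showing it, here is a minimal sketch of what such a block could look like (my own example, in the same spirit as the ones above):

```latex
\begin{gather*}
 2x - 5y = 8 \\
 3x^2 + 9y = 3a + c
\end{gather*}
```

Each equation is centered on its own row; no & alignment points are needed, and dropping the asterisk numbers each row.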
Abbreviation: PoGrp A partially ordered group is a structure $\mathbf{G}=\langle G,\cdot,^{-1},1,\le\rangle$ such that $\langle G,\cdot,^{-1},1\rangle$ is a group, $\langle G,\le\rangle$ is a partially ordered set, and $\cdot$ is order-preserving: $x\le y\Longrightarrow wxz\le wyz$. Let $\mathbf{A}$ and $\mathbf{B}$ be partially ordered groups. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is an order-preserving homomorphism: $h(x \cdot y)=h(x) \cdot h(y)$, $x\le y\Longrightarrow h(x)\le h(y)$. Example 1: The integers, the rationals and the reals with the usual order. Any group is a partially ordered group with equality as the partial order. Any finite partially ordered group has only the equality relation as partial order. $\begin{array}{lr} f(1)= &1\\ f(2)= &1\\ f(3)= &1\\ f(4)= &2\\ f(5)= &1\\ \end{array}$ $\begin{array}{lr} f(6)= &2\\ f(7)= &1\\ f(8)= &5\\ f(9)= &2\\ f(10)= &2\\ \end{array}$ Partially ordered monoids reduced type
Abbreviation: Sfld A semifield is a semiring with identity $\mathbf{S}=\langle S,+,\cdot,1\rangle$ such that $\langle S^*,\cdot,1\rangle$ is a group, where $S^*=S-\{0\}$ if $S$ has an absorptive $0$, and $S^*=S$ otherwise. Let $\mathbf{S}$ and $\mathbf{T}$ be semifields. A morphism from $\mathbf{S}$ to $\mathbf{T}$ is a function $h:S\to T$ that is a homomorphism: $h(x+y)=h(x)+h(y)$, $h(x\cdot y)=h(x)\cdot h(y)$. Example 1: The only finite semifield that is not a field is the 2-element Boolean semifield: https://arxiv.org/pdf/1709.06923.pdf $\begin{array}{lr} f(1)= &1\\ f(2)= &2\\ f(3)= &1\\ f(4)= &1\\ f(5)= &1\\ f(6)= &0\\ \end{array}$
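As a quick illustration (mine, not part of the page), the 2-element Boolean semifield $\{0,1\}$ with $a+b=\max(a,b)$ and the usual multiplication can be checked by brute force:

```python
from itertools import product

# Boolean semifield B = {0,1}: a + b = max(a,b) (so 1 + 1 = 1), a * b = ab.
add = lambda a, b: max(a, b)
mul = lambda a, b: a * b
B = [0, 1]

# semiring axioms: both operations associative, multiplication
# distributes over addition, 0 is absorptive, 1 is a multiplicative identity
for a, b, c in product(B, repeat=3):
    assert add(add(a, b), c) == add(a, add(b, c))
    assert mul(mul(a, b), c) == mul(a, mul(b, c))
    assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
for a in B:
    assert mul(a, 1) == a and mul(a, 0) == 0 and add(a, 0) == a

# S* = {1} is (trivially) a group under multiplication, and B is not a
# field: 1 has no additive inverse, since 1 + x = 0 has no solution.
assert all(add(1, x) != 0 for x in B)
```

The failure of additive inverses in the last assertion is exactly what separates this semifield from the 2-element field $\mathbb{F}_2$.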
Can you help me evaluate the following indefinite integral? $$ \int \sqrt{x^{2} + 3}\; dx $$ Please, don't give a full solution, just some hint on which method to use... ** UPDATE ** Thank you very much to everybody for the useful comments and suggestions. I'm sorry for the delay with my reply; unfortunately I do mathematics as a hobby and often don't have time to work at it. I tried to take Lucian's suggestion on board and use trigonometric substitution as follows: $$ x = \sqrt{3}\,\tan\theta $$ and $$ \int \sqrt{x^{2} + 3}\; dx = \int \sqrt{3\tan^{2}\theta + 3}\; \sqrt{3}\,\sec^{2}\theta\; d\theta = \int \sqrt{3\sec^{2}\theta}\; \sqrt{3}\,\sec^{2}\theta\; d\theta = \int \sqrt{3}\,\sec\theta\; \sqrt{3}\,\sec^{2}\theta\; d\theta = 3 \int \sec^{3}\theta\; d\theta $$ which (according to common integral tables) is equal to $$ 3 \left[ \frac{1}{2} \sec\theta\,\tan\theta + \frac{1}{2} \ln\left| \sec\theta + \tan\theta \right| + C \right] $$ The problem arises when I try to substitute the variable back from $\theta$ to $x$, because I know $\tan\theta = \frac{x}{\sqrt{3}}$ but I don't know how to substitute back $\sec\theta$, so basically I stopped here: $$ \frac{3}{2}\sec\theta\,\frac{x}{\sqrt{3}} + \frac{3}{2}\ln\left| \sec\theta + \frac{x}{\sqrt{3}} \right| + 3C $$ It looks close to the final answer but still not there... any suggestions?
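Not a full solution, but a numerical sanity check (my own) of the identity $\sec^2\theta = 1+\tan^2\theta$, which resolves the back-substitution: with $\tan\theta = x/\sqrt{3}$ and $\theta\in(-\pi/2,\pi/2)$ one gets $\sec\theta = \sqrt{1+x^2/3}$.

```python
import math

# With x = sqrt(3)*tan(theta), theta in (-pi/2, pi/2), we have
# sec(theta) = sqrt(1 + tan(theta)**2) = sqrt(1 + x**2/3).
def sec_from_x(x):
    theta = math.atan(x / math.sqrt(3))
    return 1.0 / math.cos(theta)

for x in (0.5, 1.0, 2.0, 10.0):
    assert abs(sec_from_x(x) - math.sqrt(1 + x**2 / 3)) < 1e-12
```

Substituting this expression for $\sec\theta$ into the stopped-at formula finishes the back-substitution.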
I will first state Fatou's lemma and provide a proof, then I will present the corollary I am trying to prove. I am a little lost on the proof of the corollary, so to assist the reader I will also provide what Folland suggests doing to prove it.

Fatou's lemma: If $\{f_n\}$ is any sequence in $L^{+}$, then $$\int\left(\liminf_{n\rightarrow\infty} f_n\right)\leq \liminf_{n\rightarrow\infty}\int f_n.$$

Proof: $$\int \liminf_{n\rightarrow \infty} f_n = \int \lim_{j\rightarrow \infty} \inf_{k\geq j}f_k = \lim_{j\rightarrow \infty}\int \inf_{k\geq j}f_k$$ by the Monotone Convergence Theorem, since the functions $\inf_{k\geq j}f_k$ increase with $j$. Observe that $\inf_{k\geq j}f_k \leq f_m$ for all $m\geq j$. So, $$\int \inf_{k\geq j}f_k \leq \int f_m \ \ \ \forall m\geq j \quad\implies\quad \int \inf_{k\geq j}f_k \leq \inf_{m\geq j}\int f_m \leq \liminf_{n\rightarrow \infty}\int f_n.$$

Corollary - If $\{f_n\}\subset L^{+}$, $f\in L^{+}$, and $f_n\rightarrow f$ a.e., then $\int f\leq \liminf \int f_n$.

Proof (Folland) - If $f_n\rightarrow f$ everywhere, the result is immediate from Fatou's lemma, and this can be achieved by modifying $f_n$ and $f$ on a null set without affecting the integrals, by Proposition 2.16, which states: If $f\in L^{+}$, then $\int f = 0$ if and only if $f = 0$ a.e.

I am lost on how to prove the corollary; I just need an initial start and I think I can go from there.
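Here is a hedged sketch of the initial step the hint points at (my reading of it, not Folland's exact wording): cut away the null set where convergence fails.

```latex
E := \{x : f_n(x) \not\to f(x)\},\quad \mu(E)=0,\qquad
g_n := f_n\,\chi_{E^c},\quad g := f\,\chi_{E^c}.
```

Then $g_n\to g$ everywhere, and since $g_n=f_n$ and $g=f$ a.e., their integrals agree (a standard consequence of Proposition 2.16). Fatou's lemma applied to $\{g_n\}$ then gives $\int f=\int g\le\liminf\int g_n=\liminf\int f_n$.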
Let $L/K$ be a Galois extension of number fields with Galois group $G$. Let $O_K$ and $O_L$ be the rings of algebraic integers of $K$ and $L$ respectively. Let $P\subseteq O_K$ be a prime. Let $Q\subseteq O_L$ be a prime lying over $P$. The decomposition group is defined as $$D(Q|P)=\lbrace \sigma\in G\text{ }|\text{ }\sigma(Q)=Q\rbrace$$ The $n$-th ramification group is defined as $$E_n(Q|P)=\lbrace \sigma\in G:\sigma(a)\equiv a\text{ mod } Q^{n+1}\text{ for all } a\in O_L\rbrace$$ I want to find some worked-out examples of these definitions. Where can I find them?
On multi-trial Forney-Kovalev decoding of concatenated codes

1. DCS, Ruhr-Universität Bochum, Universitätsstraße 150, 44780 Bochum, Germany
2. TAIT, Ulm University, Albert-Einstein-Allee 43, 89081, Ulm
3. ECE, University of Toronto, 10 King's College Road, Toronto M5S 3G4, ON, Canada

Our approach extends results of Forney and Kovalev (obtained for $\lambda=2$) to the whole given range of $\lambda$. For the fixed erasing strategy the error-correcting radius approaches $\rho_F\approx\frac{d^i d^o}{2}(1-\frac{l^{-m}}{2})$ for large $d^o$. For the adaptive erasing strategy, the error-correcting radius $\rho_A\approx\frac{d^i d^o}{2}(1-l^{-2m})$ quickly approaches $d^i d^o/2$ if $l$ or $m$ grows. The minimum number of trials required to reach an error-correcting radius of $d^i d^o/2$ is $m_A=\frac{1}{2}\left(\log_l d+1\right)$. This means that $2$ or $3$ trials are sufficient in many practical cases if $l>1$.

Keywords: Multi-trial decoding, fixed erasing, adaptive erasing, concatenated codes, GMD decoding, error-correcting radius.

Mathematics Subject Classification: 94B3.

Citation: Anas Chaaban, Vladimir Sidorenko, Christian Senger. On multi-trial Forney-Kovalev decoding of concatenated codes. Advances in Mathematics of Communications, 2014, 8 (1) : 1-20. doi: 10.3934/amc.2014.8.1
Suppose we have a signal $y(t)$. For sampling we use a Dirac $\delta$ function. The sampled function is $\widetilde{y}(t)$. So $$\widetilde{y}(t) = y(t)\cdot\sum_{n=-\infty}^{\infty}\delta(t-nT).$$ The Fourier transformation is $$\widetilde{Y}(\omega)=\mathcal{F\{\widetilde{y}(t)\}} = \sum_{n=-\infty}^{\infty}y(nT)e^{-i2\pi\omega nT}$$ where $i$ is the imaginary unit and $\omega$ the angular frequency. $y(nT)$ are the sampled points, but I am having problems understanding what the factor $e^{-i2\pi\omega nT}$ in $\widetilde{Y}(\omega)$ means. I could write it down with $\cos$ and $\sin$ but this does not make it clearer. I would appreciate it if someone could explain it to me.

First, note that $i$ is the imaginary unit, not the "imaginary part". The expression tells you that the signal, under suitable conditions, admits a discrete-time Fourier transform. It is a linear representation of the discrete signal. It provides you with a spectral interpretation, i.e. a view of the content of the signal as a linear combination of waves, pure frequencies indexed by a continuous index $\omega$. Pure frequencies are cisoids or complex exponentials/sinusoids, by definition $$e^{-i2\pi\omega nT} = \cos(-2\pi\omega nT)+i\sin(-2\pi\omega nT)\,.$$ The relative importance (weight) and "timing" of these waves are given by $\tilde{Y}(\omega)$, by its magnitude and phase, respectively. Another interpretation of the sum of products is as a scalar product between the signal and the different waves. If the scalar product is high, the signal looks a lot like the pure wave, and the corresponding frequency is very "present" in the signal. If it is low, this component is not really present in the signal. The whole spectrum, made of all the $\tilde{Y}(\omega)$, tells you about the content at "each" frequency in your data. The two links above lead you to the excellent site of J. O. Smith on the mathematics of the discrete Fourier transform, a perfect starting point for understanding these concepts.
The given expression for the Fourier transform of the sampled signal $y(t)$ is useful in the sense that it shows that the spectrum of the sampled signal equals the discrete-time Fourier transform (DTFT) of the discrete-time signal $$y_d[n]=y(nT)$$ $$Y_d(\omega)=\sum_{n=-\infty}^{\infty}y_d[n]e^{-jn\omega}=\sum_{n=-\infty}^{\infty}y(nT)e^{-j\Omega nT}=\widetilde{Y}(\Omega)\tag{1}$$ where $\omega$ is defined as $\omega=\Omega T$, with the angular frequency $\Omega$ of the continuous-time signal, and the sampling interval $T$. Equation $(1)$ shows that $Y_d(\omega)=\widetilde{Y}(\Omega)$ is a periodic function with Fourier series coefficients $y(nT)$. However, the expression $(1)$ does not give much insight into the relation between $\widetilde{Y}(\Omega)$ (or $Y_d(\omega)$) and the spectrum $Y(\Omega)$ of the original continuous-time signal $y(t)$. (And that's the way I interpret your question). This relation becomes explicit when you derive $\widetilde{Y}(\Omega)$ as follows: $$\begin{align}\widetilde{Y}(\Omega)&=\mathcal{F}\left\{y(t)\cdot\sum_n\delta(t-nT)\right\}\\&=\frac{1}{2\pi}\mathcal{F}\{y(t)\}\star\mathcal{F}\left\{\sum_n\delta(t-nT)\right\}\\&=\frac{1}{2\pi}Y(\Omega)\star\frac{2\pi}{T}\sum_n\delta\left(\Omega-\frac{2\pi n}{T}\right)\\&=\frac{1}{T}\sum_nY\left(\Omega-\frac{2\pi n}{T}\right)\tag{2}\end{align}$$ where $\star$ denotes convolution. Note that $(1)$ and $(2)$ are equivalent, even if that's not immediately obvious. The expression $(2)$ very clearly shows that sampling results in periodization of the spectrum, and this will result in aliasing if the spectrum $Y(\Omega)$ is not properly band-limited.
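A small numerical sketch (my addition, not from the answers) of the periodicity implied by equation $(1)$: evaluating a truncated version of the sum $\sum_n y(nT)e^{-j\Omega nT}$ at a set of frequencies, and again at those frequencies shifted by one period $2\pi/T$, gives identical values.

```python
import numpy as np

# Evaluate the sampled-signal spectrum of equation (1) with a truncated sum:
#   Y~(Omega) = sum_n y(nT) exp(-j Omega n T)
def sampled_spectrum(y, T, omegas, n_max=500):
    n = np.arange(-n_max, n_max + 1)
    samples = y(n * T)
    # one row of phase factors per requested frequency
    return np.exp(-1j * np.outer(omegas, n * T)) @ samples

T = 0.1                          # sampling interval (illustrative value)
y = lambda t: np.exp(-t**2)      # smooth test signal, negligible outside the window
omegas = np.linspace(-20.0, 20.0, 7)

Y1 = sampled_spectrum(y, T, omegas)
Y2 = sampled_spectrum(y, T, omegas + 2 * np.pi / T)   # shift by one period
print(np.allclose(Y1, Y2))      # True: the spectrum repeats with period 2*pi/T
```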
Consider the space of all bounded continuous real-valued functions on $\mathbb{R}$. I am having trouble understanding how to find the closed subalgebra generated by sine and cosine. Denote by $\mathcal A$ the subalgebra generated by $x\mapsto \cos x$ and $x\mapsto \sin x$. $\mathcal A$ contains all the maps $\cos(nx)$ and $\sin(nx)$ (this can be shown by induction using the formulas for $\cos(a+b)$ and $\sin(a+b)$) and also the constants (because $\cos^2x+\sin^2x=1$). If we take a continuous $2\pi$-periodic function, then by the Stone-Weierstrass theorem it is in the closure of $\operatorname{span}\{1,\cos(kx),\sin(kx),k\in\mathbb N\}$, so the closure of $\mathcal A$ contains all continuous $2\pi$-periodic functions. Conversely, every function in $\mathcal A$ is continuous and $2\pi$-periodic, so every function in the closure of $\mathcal A$ is a uniform limit of such functions and is therefore itself continuous and $2\pi$-periodic. We conclude that the closure of $\mathcal A$ is the space of all continuous $2\pi$-periodic functions.
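The induction step mentioned above, spelled out (my addition): if $\cos(nx)$ and $\sin(nx)$ lie in $\mathcal A$, then so do

```latex
\cos\big((n+1)x\big) = \cos(nx)\cos x - \sin(nx)\sin x,\qquad
\sin\big((n+1)x\big) = \sin(nx)\cos x + \cos(nx)\sin x,
```

since $\mathcal A$ is closed under sums and products.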
Evaluate $$\int\sqrt{x^3+x^2}\;dx$$ What I have tried: using substitution (which I believe was applied incorrectly) I get $$\frac{(x+2)\sqrt{4x+4}}{4x+4}$$ How can this integral be evaluated?

For this integral, suppose that $x\gt 0$. Let $u^2=1+x$. Then $$\sqrt{x^3+x^2}=x\sqrt{1+x}=(u^2-1)(u).$$ Since $2u\,du=dx$, we end up with the integral $$\int (2u)(u^2-1)(u) \,du,$$ which is easy. $$ \int\sqrt{\vphantom{\Large A}x^{3} + x^{2}\,}\;dx = \int\left[% \left(x + 1\right)^{3/2} - \left(x + 1\right)^{1/2} \right]dx = {2 \over 5}\,\left(x + 1\right)^{5/2} - {2 \over 3}\,\left(x + 1\right)^{3/2} + \mbox{constant} $$
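Carrying the first answer's substitution through (my completion, under the same assumption $x>0$) recovers the second answer's antiderivative:

```latex
\int (2u)(u^2-1)(u)\,du
= \int \left(2u^4 - 2u^2\right)du
= \frac{2}{5}u^5 - \frac{2}{3}u^3 + C
= \frac{2}{5}(x+1)^{5/2} - \frac{2}{3}(x+1)^{3/2} + C .
```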
Definition:Disjoint Union (Set Theory)

Definition

Let $\family {S_i}_{i \mathop \in I}$ be an $I$-indexed family of sets. The disjoint union of $\family {S_i}_{i \mathop \in I}$ is defined as the set: $\displaystyle \bigsqcup_{i \mathop \in I} S_i = \bigcup_{i \mathop \in I} \set {\tuple {x, i}: x \in S_i}$ where $\bigcup$ denotes union. That is, each set is first replaced by a labelled copy: $S_i^* = \set {\tuple {x, i}: x \in S_i}$ In particular, when every $S_i$ is the same set $S$: $\displaystyle \bigsqcup_{i \mathop \in I} S = S \times I$ Where $A \cap B = \O$, we can define: $A \sqcup B := A \cup B$ where $A \cup B$ is the union of $A$ and $B$.

Also known as This is also called a discriminated union. In Georg Cantor's original words: We denote the uniting of many aggregates $M, N, P, \ldots$, which have no common elements, into a single aggregate by $\tuple {M, N, P, \ldots}$. The elements in this aggregate are, therefore, the elements of $M$, of $N$, of $P$, $\ldots$, taken together. Occasionally the notations: $\displaystyle \sum_{i \mathop \in I} S_i$ and $\displaystyle \coprod_{i \mathop \in I} S_i$ can be seen for the disjoint union of a family of sets. When two sets are under consideration, the notation: $A \sqcup B$ is usually used. Some sources use: $A \vee B$ The notation: $A + B$ is also encountered sometimes.

Also see Results about disjoint unions can be found here.

Sources 1915: Georg Cantor: Contributions to the Founding of the Theory of Transfinite Numbers... (previous) ... (next): First Article: $\S 1$: The Conception of Power or Cardinal Number: $(2)$ 1970: Avner Friedman: Foundations of Modern Analysis... (previous) ... (next): $\S 1.1$: Rings and Algebras 1970: Lynn Arthur Steen and J. Arthur Seebach, Jr.: Counterexamples in Topology... (previous) ... (next): $\text{I}: \ \S 1$: Functions 1975: T.S. Blyth: Set Theory and Abstract Algebra... (previous) ... (next): $\S 5$. Induced mappings; composition; injections; surjections; bijections: Exercise $18$ 1996: H. Jerome Keisler and Joel Robbin: Mathematical Logic and Computability... (previous) ...
(next): Appendix $\text{A}.2$: Boolean Operations
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11) Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ... Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions (Elsevier, 2017-11) Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ...
65th SSC CGL level Question Set, topic Trigonometry 6

This is the 65th question set of 10 practice problems for the SSC CGL exam, and the 6th on the topic of Trigonometry. You will find the answers to the questions, along with a list of related readings including the detailed conceptual solutions, after the questions.

Before taking the test it is recommended that you refer to the tutorials. You may also refer to the related guideline 7 steps for sure success in SSC CGL tier 1 and tier 2 competitive tests to access all the valuable student resources that we have created, specifically for SSC CGL but useful generally for any hard MCQ test.

If you like, you may subscribe to get the latest content on competitive exams in your mail as soon as we publish it.

Now set the stopwatch alarm and start taking this test. It is not difficult.

65th question set - 10 problems for SSC CGL exam: 6th on Trigonometry - testing time 12 mins

Problem 1. If $2-cos^2 \theta=3sin \theta{cos \theta}$, where $sin \theta \neq cos \theta$, the value of $tan \theta$ is,
$0$
$\displaystyle\frac{1}{2}$
$\displaystyle\frac{2}{3}$
$\displaystyle\frac{1}{3}$

Problem 2. If $sin \theta + cos \theta =\sqrt{2}cos(90^\circ- \theta)$, then $cot \theta$ is,
$\sqrt{2}-1$
$\sqrt{2}+1$
$0$
$\sqrt{2}$

Problem 3. If $(a^2-b^2)sin \theta + 2abcos \theta=a^2+b^2$ then the value of $tan \theta$ is,
$\displaystyle\frac{1}{2ab}(a^2+b^2)$
$\displaystyle\frac{1}{2}(a^2-b^2)$
$\displaystyle\frac{1}{2}(a^2+b^2)$
$\displaystyle\frac{1}{2ab}(a^2-b^2)$

Problem 4. The value of $sec \theta\left(\displaystyle\frac{1+sin \theta}{cos \theta}+\displaystyle\frac{cos \theta}{1+sin \theta}\right) - 2tan^2 \theta$ is,
4
0
2
1

Problem 5. If $cot \theta + cosec \theta =3$, and $\theta$ is an acute angle, the value of $cos \theta$ is,
$1$
$\displaystyle\frac{1}{2}$
$\displaystyle\frac{4}{5}$
$\displaystyle\frac{3}{4}$

Problem 6. If $xcos \theta - sin \theta=1$, then the value of $x^2-(1+x^2)sin \theta$ is,
$1$
$0$
$2$
$-1$

Problem 7.
If $\theta=60^\circ$, then the value of $\displaystyle\frac{1}{2}\sqrt{1+ sin \theta} + \displaystyle\frac{1}{2}\sqrt{1- sin \theta}$ is,
$cot \displaystyle\frac{\theta}{2}$
$cos \displaystyle\frac{\theta}{2}$
$sec \displaystyle\frac{\theta}{2}$
$sin \displaystyle\frac{\theta}{2}$

Problem 8. If $3sin \theta + 5cos \theta =5$, ($0\lt \theta \lt 90^\circ$), then the value of $5sin \theta-3cos \theta$ will be,
1
2
5
3

Problem 9. If $tan \theta = \displaystyle\frac{1}{\sqrt{11}}$, and $0\lt \theta \lt 90^\circ$, then the value of $\displaystyle\frac{cosec^2 \theta - sec^2 \theta}{cosec^2 \theta + sec^2 \theta}$ is,
$\displaystyle\frac{5}{6}$
$\displaystyle\frac{3}{4}$
$\displaystyle\frac{4}{5}$
$\displaystyle\frac{6}{7}$

Problem 10. If $tan^2 \theta=1-e^2$, then the value of $sec \theta + tan^3 \theta{cosec \theta}$ is equal to,
$(2+e^2)^{\frac{3}{2}}$
$(2+e^2)^{\frac{1}{2}}$
$(2-e^2)^{\frac{3}{2}}$
$(2-e^2)^{\frac{1}{2}}$

The answers to the questions are given below, but you will find the detailed conceptual solutions to these questions in SSC CGL level Solution Set 65 on Trigonometry 6.

If you like, you may also watch the video solutions in the two-part video below. Part 1: Q1 to Q5. Part 2: Q6 to Q10.

Answers to the questions

Problem 1. Answer: b: $\displaystyle\frac{1}{2}$.
Problem 2. Answer: a: $\sqrt{2}-1$.
Problem 3. Answer: d: $\displaystyle\frac{1}{2ab}(a^2-b^2)$.
Problem 4. Answer: c: 2.
Problem 5. Answer: c: $\displaystyle\frac{4}{5}$.
Problem 6. Answer: a: $1$.
Problem 7. Answer: b: $cos \displaystyle\frac{\theta}{2}$.
Problem 8. Answer: d: 3.
Problem 9. Answer: a: $\displaystyle\frac{5}{6}$.
Problem 10. Answer: c: $(2-e^2)^{\frac{3}{2}}$.

Resources on Trigonometry and related topics

You may refer to our useful resources on Trigonometry and other related topics, especially algebra.
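As a sample of the kind of reasoning involved (my own sketch; the official detailed solutions are in the linked solution set), Problem 1 can be worked as follows:

```latex
2-\cos^2\theta = 1+\sin^2\theta = 3\sin\theta\cos\theta
\quad\Rightarrow\quad
\sec^2\theta+\tan^2\theta = 3\tan\theta
\quad\Rightarrow\quad
2\tan^2\theta-3\tan\theta+1=0
```

(the middle step divides through by $\cos^2\theta$), so $\tan\theta\in\{1,\tfrac{1}{2}\}$; the condition $\sin\theta\neq\cos\theta$ rules out $\tan\theta=1$, leaving $\tan\theta=\tfrac{1}{2}$ (option b).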
Tutorials on Trigonometry
General guidelines for success in SSC CGL
Efficient problem solving in Trigonometry

A note on usability: The efficient math problem solving sessions on school maths are equally usable for SSC CGL aspirants because, firstly, the "Prove the identity" problems can easily be converted to MCQ type questions, and secondly, the same set of problem solving reasoning and techniques is used for any efficient Trigonometry problem solving.

SSC CGL Tier II level question and solution sets on Trigonometry
SSC CGL level question and solution sets in Trigonometry
SSC CGL level Question Set 65 on Trigonometry 6
Algebraic concepts

If you like, you may subscribe to get the latest content on competitive exams in your mail as soon as we publish it.
Abbreviation: TA

A tense algebra is a structure $\mathbf{A}=\langle A,\vee,0,\wedge,1,\neg,\diamond_f, \diamond_p\rangle$ such that

both $\langle A,\vee,0,\wedge,1,\neg,\diamond_f\rangle$ and $\langle A,\vee,0,\wedge,1,\neg,\diamond_p\rangle$ are modal algebras

$\diamond_p$ and $\diamond_f$ are conjugates: $x\wedge\diamond_py = 0$ iff $\diamond_fx\wedge y = 0$

Remark: Tense algebras provide algebraic models for the logic of tenses. The two possibility operators $\diamond_p$ and $\diamond_f$ are intuitively interpreted as "at some past instance" and "at some future instance".

Let $\mathbf{A}$ and $\mathbf{B}$ be tense algebras. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\to B$ that is a Boolean homomorphism and preserves $\diamond_p$ and $\diamond_f$: $h(\diamond_p x)=\diamond_p h(x)$ and $h(\diamond_f x)=\diamond_f h(x)$.

Example 1:

Classtype variety
Equational theory decidable
Quasiequational theory decidable
First-order theory undecidable
Locally finite no
Residual size unbounded
Congruence distributive yes
Congruence modular yes
Congruence n-permutable yes, $n=2$
Congruence regular yes
Congruence uniform yes
Congruence extension property yes
Definable principal congruences no
Equationally def. pr. cong. no
Discriminator variety no
Amalgamation property yes
Strong amalgamation property yes
Epimorphisms are surjective yes

$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ f(6)= &\\ \end{array}$
Working with the bosonic string in a background space-time with one compact dimension, i.e.: $$ \mathbb{R}^{1,24}\times S^1 $$ I have been able to calculate the mass-squared: $$ M^2 = \frac{n^2}{R^2} + \frac{m^2R^2}{\alpha^{'2}} + \frac{2}{\alpha'}\left( N + \bar{N} - 2\right) $$ Here $n$ and $m$ are integers related to the quantisation of the string momentum and winding respectively. I would now like to calculate the Hamiltonian of the closed string in question. My first thought was to sum this with the momentum-squared, but I can't seem to get it in the proper form. I also thought that perhaps I could start from the Lagrangian density in the Polyakov action: $$ S = \frac{-1}{4\pi\alpha^{'}}\int d\tau d\sigma \sqrt{-h}~h^{\alpha\beta}\partial_\alpha X^\mu\partial_\beta X^\nu\eta_{\mu\nu} $$ Could someone please give me a nudge in the correct direction? I feel like I'm overcomplicating this. Thanks, String Theory Newbie Edit: I've figured it out now, I'll type my workings up as an answer tomorrow morning. Hopefully they will aid any weary travellers that reach this final bastion of hope.
I would like to derive the Fourier transform of $f(x)=\ln(x^2+a^2)$, where $a\in \mathbb{R}^+$, by making use of the properties: \begin{equation} \mathcal{F}[f'(x)]=(ik)\hat{f}(k)\\ \mathcal{F}[-ixf(x)]=\hat{f}'(k) \end{equation} For the Fourier transform I use the definition given by: \begin{equation} \hat{f}(k)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(x)e^{-ikx}dx, \quad k \in \mathbb{R} \end{equation} So far I have found that by taking the derivative of $f$ and finding the Fourier transform of $f'$, I can then use the relation $\mathcal{F}[f'(x)]=(ik)\hat{f}(k)$ and find $\hat{f}$. The derivative of $f$ would be: \begin{equation} f'(x)=\frac{2x}{x^2+a^2} \end{equation} and by considering $g(x)=1/(x^2+a^2)$, I then have: \begin{equation} f'(x)=2xg(x) \end{equation} Now I know that the Fourier transform of $g$ is given by: \begin{equation} \hat{g}(k)=\frac{1}{a}\sqrt{\frac{\pi}{2}}e^{-a|k|}, \quad a \in \mathbb{R}^+, \; k\in \mathbb{R} \end{equation} Now I must find the Fourier transform of $xg(x)$, which would be given by the derivative of $\hat{g}$, right? But how can this be possible, since $\hat{g}$ is not differentiable (at $k=0$)? I think I am really close now but I need that extra tip. Thank you!
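As a quick sanity check of the quoted transform of $g$ (my own numerical sketch, not part of the question, using the same $1/\sqrt{2\pi}$ convention as above), one can compare a truncated Riemann sum of the transform integral with the closed form:

```python
import numpy as np

# Numerical check of the pair
#   g(x) = 1/(x^2 + a^2)  <-->  g^(k) = (1/a) sqrt(pi/2) exp(-a|k|)
# under the convention g^(k) = (1/sqrt(2 pi)) * integral of g(x) e^{-ikx} dx.
a = 1.0
x = np.linspace(-1000.0, 1000.0, 2_000_001)   # wide window, fine grid
dx = x[1] - x[0]
g = 1.0 / (x**2 + a**2)

errors = []
for k in (0.5, 1.0, 2.0):
    numeric = np.sum(g * np.exp(-1j * k * x)) * dx / np.sqrt(2.0 * np.pi)
    exact = (1.0 / a) * np.sqrt(np.pi / 2.0) * np.exp(-a * abs(k))
    errors.append(abs(numeric - exact))
print(max(errors))   # tiny: only truncation and discretization error remain
```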
6. Series 32. Year

Post deadline: 29th April 2019. Upload deadline: 30th April 2019 11:59:59 PM CET.

(3 points) 1. self-enlightenment We illuminate a mirror at an angle of $\alpha = 15^\circ$ with respect to the normal. We want the light to travel directly back to the source. To do so, we can use a glass prism with an index of refraction $n = 1.8$. Find the angle $\eta$ as a function of $\alpha$ and $n$ (see the figure). The prism is placed in air with an index of refraction $n_0$. Hint:\[\begin{align*} \sin \left(x + y\right) &= \sin x \cos y + \cos x \sin y , \\ \cos \left(x + y\right) &= \cos x \cos y - \sin x \sin y , \\ \sin x + \sin y &= 2\sin \left(\frac {x + y}{2}\right)\cos \left(\frac {x - y}{2}\right) , \\ \cos x + \cos y &= 2\cos \left(\frac {x + y}{2}\right)\cos \left(\frac {x - y}{2}\right) . \end {align*}\] Karel saw Danka's task.

(3 points) 2. bookworm Vítek has been spending some time in the library. Because of his clumsiness, a book fell down from a shelf and he managed to press it with a swift move against the wall. He pushes the book with a force $F$ applied at an angle $\alpha$ (see figure). The book's mass equals $M$ and the coefficient of friction between the wall and the book is $\mu$. Find the condition under which the force keeps the book from falling down (and at rest), and determine the critical value $\alpha _0$ below which no force can keep the book up. Vítek was in a mobile library.

(6 points) 3. range A container is filled with sulfuric acid to the height $h$. We drill a very small hole perpendicular to the side of the container. What is the maximal distance (from the container) that the acid can reach, over all possible positions of the hole? Assume the container is placed horizontally on the ground. Do not leave drills where Jáchym may take them!

(7 points) 4. rope A rope is hanging over the football goal crossbar (a horizontal cylindrical pole).
When one of the rope ends is at least three times longer than the other one (the rope hanging freely, not touching the ground), the rope spontaneously starts to slip off the crossbar. Now, we wrap the rope once around the crossbar (i.e. the rope wraps through an angle of $540^\circ$). How many times longer can one end of the rope be than the other so that the rope does not slip? Matej was pulling down a climbing rope.

(9 points) 5. elastic cord swing Matěj was bored by the common swings found at playgrounds, because you can only swing forwards and backwards on them. Therefore, he has invented his own amusement ride, which will move vertically. It will consist of an elastic cord of length $l$ attached to two points separated by a distance $l$ at the same height. If he sits in the middle of the attached cord, it will stretch so that the middle displaces by a vertical distance $h$. Then, he pushes himself up and starts to swing. Find the frequency of small oscillations. Matěj wonders how to hurt little children at playgrounds.

(10 points) P. problem of highway safety How many cars passing per unit of time are needed to keep the road dry in case of rain? How many cars passing per unit of time are needed to keep the road dry (i.e. there is neither snow nor ice on the road) in case of snow? The temperature of the snow is comparable to the surroundings (e.g. several degrees below zero). Assume a constant, normal rate of precipitation. Karel drove on the highway.

(12 points) E. slippery Find two flat surfaces made from the same material and measure the coefficient of friction between them. Then find out how this coefficient of friction changes when you put some free-flowing or liquid substance between them. You can use anything - water, oil, honey, melted chocolate, flour, sand, etc. Make measurements for at least 4 different substances. Discuss the results in detail and focus mainly on the properties of the substances which had the greatest effect.
Mikulas wants to go sliding.
I realize that the Maclaurin Series is a special form of the Taylor Series where the series is centered at $x=0$, but I have to wonder what's special about it such that it deserves its own special designation? On that point, how would you know (or care) which point to choose as the center of a Taylor Series? Expanding on the comment above, the idea is that we really like the expression $$ \sum_{k=0}^\infty a_k z^k, $$ simply because it is easy to manipulate and involves less writing than a series with powers of $(z-a)$. So a lot of the time we like to shift our function so that the "point of interest" is simply $0$ (mathematicians try to be efficient, I suppose). Typically we expand in a Taylor series (or more generally, a Laurent Series) about the point $z=a$ to investigate the behavior of $f$ near $a$. Is $f$ well behaved, or does it blow up? Can it be approximated using polynomials? If so, how good is this approximation and how far away from $a$ will it hold? This third question is the basis of many classical numerical analysis algorithms, including numerical differentiation and integration, as well as solution methods for ODEs. The analysis of these methods relies heavily on Taylor series - for example, say we're at $x=a$ and want to approximate the value of the function $f$ at $a+h$, a little distance away. The Taylor series about $x=a$ reads: $$ f(x)=f(a)+f^\prime(a)(x-a)+\frac{f^{\prime\prime}(a)}{2}(x-a)^2+O((x-a)^3) $$ where the "big-O-$(x-a)^3$" means a quantity that grows as a constant multiple of $(x-a)^3$. If we evaluate this Taylor approximation at $x=a+h$, we arrive at the nice, simple expression $$f(a+h)=f(a)+hf^\prime(a)+\frac{h^2}{2}f^{\prime\prime}(a)+O(h^3)$$ This says that if we know the value of the function and its first and second derivatives at $x=a$, we can approximate the value of $f$ at $a+h$ to an accuracy of $h$-cubed. So, for instance, if $h=0.1$, our approximation will only be off by a constant multiple of $0.001$. 
(This constant, incidentally, will depend on how bad the third derivative is near $a$). Of course, I'm only using this "numerical" idea as an example of why we might expand the Taylor series at a location other than 0 - the idea has plenty of other uses. I think that Taylor series expansions around zero are not so special as to deserve their own name, and in fact when I teach this material I do not use the term "Maclaurin series" (except to warn students in passing that others may use the term). In this part of calculus students already have plenty of things to memorize, like the many hard-to-keep-distinct convergence tests. From a hard-nosed perspective there cannot be any truly distinguished expansion point for a Taylor series. The fact that in many of the simplest standard examples of elementary functions $0$ is an especially nice expansion point is an artifact of the fact that the coordinate system has been chosen so as to make $0$ a distinguished point: think e.g. about $\sin x, \cos x, e^x$. As soon as we start changing from one coordinate system to another we will certainly have to expand around nonzero points. This comes up for instance in the theory of analytic continuation in the complex-variable case. In practice, you want to expand around a point $c$ such that you are interested in the behavior of the function near $c$. The Taylor series is not guaranteed to converge at any point other than $x = c$; if it does converge, it is not guaranteed to be equal to the function. The way you show that the Taylor series satisfies $T(x) = T_{f,c}(x) = f(x)$ is to consider the remainder and apply various estimates on the derivatives of $f$. These derivatives typically grow quickly as you move too far away from the expansion point.
For example, if you are trying to compute $(26.5)^{\frac{1}{3}}$ using Taylor series, then $c= 27$ is a good expansion point: $x =26.5$ is close enough to $c$ that the convergence of the series is rapid, and since $27$ is a perfect cube, the Taylor series coefficients take an especially simple form. As another example, you might think about the Taylor series of $f(x) = \frac{1}{1+x^2}$ at various central points $c$. The radius of convergence of the Taylor series at $c$ is $\sqrt{c^2+1}$, for reasons that can only really be understood by thinking about the complex-variable case. (Thus in a precise quantitative sense, $c = 0$ is the worst expansion point in this case!) In a later course, the choice of expansion point is related to the various Laurent series expansions of a meromorphic function: one certainly cannot get away with always expanding around $0$!
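The $(26.5)^{1/3}$ example above can be checked numerically (my own sketch, with the second-order polynomial; the answer itself doesn't fix an order):

```python
# Approximate 26.5**(1/3) with the 2nd-order Taylor polynomial of
# f(x) = x**(1/3) centered at the perfect cube c = 27, where the
# coefficients are simple.
c = 27.0
h = 26.5 - c                                          # h = -0.5, small relative to c

f0 = c ** (1.0 / 3.0)                                 # f(27)   = 3
f1 = (1.0 / 3.0) * c ** (-2.0 / 3.0)                  # f'(27)  = 1/27
f2 = (1.0 / 3.0) * (-2.0 / 3.0) * c ** (-5.0 / 3.0)   # f''(27) = -2/2187

approx = f0 + h * f1 + (h ** 2 / 2.0) * f2
exact = 26.5 ** (1.0 / 3.0)
print(approx, exact, abs(approx - exact))             # error is O(h^3), about 1e-6 here
```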
Usually a simple P or PI loop is better than a badly tuned PID loop. You can use 2 PI controllers for speed, one PI controller per wheel. Then you just control the speed of each wheel separately. Here is a small pseudo-algorithm of a PID controller with the trapezoidal integration method:\$ u(k) = K_p \bigg[ e(k)-e(k-1)+\dfrac{T_s}{2T_i}\big[ e(k)+e(k-1)\big] +\\ +\dfrac{T_d}{T_s}\big[ e(k)-2e(k-1)+e(k-2)\big] \bigg] + u(k-1) \$ EDIT 1: This is a block diagram of a PMDC motor; notice that at the output there is an integrator \$\dfrac{1}{s}\$ that integrates the speed \$\Omega\$ into the position \$\Theta\$. It matters whether your controller uses position or velocity feedback (or both). Since the velocity feedback is obtained by differentiating the position (that's how it is usually done), you can use PI; otherwise a P controller is good enough, but it will give you an I-like response of the entire loop: P->integrating->I. Using PD will give you a PI-like response: PD->integrating->PI (IP). You should decide whether you will use velocity or position as the setpoint. Professional equipment uses combined period and frequency measurement for estimating the velocity feedback from position feedback, so simply differentiating the position will give you bad results at low speed. IMO you should start by making a setpoint velocity generator with a ramp, then integrate it into a position setpoint and use position as the setpoint: \$ v_{set}(k)= v_{set}(k-1) + \Delta v_{max}\$ \$ p_{set}(k)= p_{set}(k-1) + v_{set}(k)\cdot T_{sample} \$ (integrated position) This integrated position should be exactly the same width as your counter, for example a 32-bit integer, so you have to store it in integer form.
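The incremental difference equation above can be sketched in a few lines of Python for experimenting before committing it to a microcontroller. The class name and the test gains below are my own choices, not part of the original answer:

```python
class IncrementalPID:
    """Incremental PID with trapezoidal integration, implementing
    u(k) = u(k-1) + Kp*[ (e(k)-e(k-1)) + Ts/(2*Ti)*(e(k)+e(k-1))
                         + Td/Ts*(e(k)-2*e(k-1)+e(k-2)) ]."""

    def __init__(self, Kp, Ti, Td, Ts):
        self.Kp, self.Ti, self.Td, self.Ts = Kp, Ti, Td, Ts
        self.e1 = 0.0  # e(k-1)
        self.e2 = 0.0  # e(k-2)
        self.u1 = 0.0  # u(k-1)

    def step(self, setpoint, measured):
        e = setpoint - measured
        du = self.Kp * (
            (e - self.e1)
            + self.Ts / (2.0 * self.Ti) * (e + self.e1)
            + self.Td / self.Ts * (e - 2.0 * self.e1 + self.e2)
        )
        u = self.u1 + du
        self.e2, self.e1, self.u1 = self.e1, e, u
        return u

# Quick sanity check with assumed gains; Td = 0 reduces it to a PI loop.
pid = IncrementalPID(Kp=1.0, Ti=1.0, Td=0.0, Ts=0.1)
u1 = pid.step(1.0, 0.0)  # first sample: proportional step plus half a trapezoid
u2 = pid.step(1.0, 0.0)  # constant error: only the integral part accumulates
print(u1, u2)
```

With a constant unit error, the first output is the proportional jump plus half a trapezoidal integration step, and subsequent steps add only the integral increment, which is the expected PI behavior.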
When it rolls over, this won't affect the calculation of the error: \$\varepsilon = p_{measured} - p_{set} \$

EDIT 2:

v_max ... maximum speed [pulses/s]
Acc ..... maximum acceleration [pulses/s^2], or Acc = v_max/T_ramp, where T_ramp is the ramp time to max velocity
T_s ..... sampling time [s]

if (FWD==1) {
    v_set = v_set + Acc*T_s;              // pulses per second
    if (v_set > v_max) v_set = v_max;
} else if (BKW==1) {
    v_set = v_set - Acc*T_s;
    if (v_set < -v_max) v_set = -v_max;
} else {                                  // stop moving
    if (v_set > 0.0) {
        v_set = v_set - Acc*T_s;
        if (v_set < 0.0) v_set = 0.0;
    }
    if (v_set < 0.0) {
        v_set = v_set + Acc*T_s;
        if (v_set > 0.0) v_set = 0.0;
    }
}

p_float = p_float + v_set*T_s;            // pulses
if (p_float > 32767.0) p_float = p_float - 65536.0;        // wrap for a 16-bit position counter
else if (p_float < -32768.0) p_float = p_float + 65536.0;

p_int   = int16(p_float);
epsilon = p_int - p_measured;             // all numbers int16, int32, ...
y_out   = Kp*float(epsilon) + v_set;      // pulses per second, velocity is fed forward
y_pwm   = y_out * k_pulse;                // rpm,   k_pulse [rpm/(pulses/s)]
y_pwm   = y_pwm * kv;                     // volts, kv [V/rpm] from the motor spec
y_pwm   = y_pwm * k_pwm;                  // %,     k_pwm [%/V] - 100% equals Vcc volts, approx.
if (y_pwm > 100.0) y_pwm = 100.0;         // max positive limit of the PWM duty cycle
else if (y_pwm < -100.0) y_pwm = -100.0;  // max negative limit of the PWM duty cycle

EDIT 3: Industrial PID controller (zn3fd) source code. The algorithm is incremental: \$u(k)=u(k-1)+\Delta u(k), \;\Delta u(k)= \Delta P+ \Delta I+\Delta D\$ Transfer function: \$ G(s) = K_p(1+\dfrac{1}{sT_i}+ \dfrac{sT_d}{1+sT_d/\alpha}) \$ Note that \$\alpha\$ is a filtering factor of the D-component; it is recommended to be from 4 to 20, with a default value of 10. A bigger value means less filtering, a lower value means more filtering; it makes no sense to use very low values. http://www.mathworks.com/matlabcentral/fx_files/8381/1/content/help/html/STCSL.htm // industrial PID controller, author: Bobal et al.
// parameters: Kp, Ti, Td, alpha, u_min, u_max
// inputs:     Setpoint, ProcesVar
// output:     u
double ek, ek1, ek2, uk1, uk2, u, Setpoint, ProcesVar;
double Ts, alpha, gamma, u_min, u_max;
double Kpu, Tu, Kp, Ti, Td, Tf, cf, ci, cd;
double q0, q1, q2, p1, p2;

// initialization, calculate coefficients
ek1 = ek2 = uk1 = uk2 = u = 0.0;
Tf = Td/alpha;
cf = Tf/Ts;
ci = Ts/Ti;
cd = Td/Ts;
p1 = -4*cf/(1 + 2*cf);
p2 = (2*cf - 1)/(1 + 2*cf);
q0 = Kp * (1 + 2*(cf + cd) + (ci/2)*(1 + 2*cf))/(1 + 2*cf);
q1 = Kp * (ci/2 - 4*(cf + cd))/(1 + 2*cf);
q2 = Kp * (cf*(2 - ci) + 2*cd + ci/2 - 1)/(1 + 2*cf);

// ISR PID algorithm
ek2 = ek1;
ek1 = ek;
ek  = Setpoint - ProcesVar;
uk2 = uk1;
uk1 = u;
u = q0*ek + q1*ek1 + q2*ek2 - p1*uk1 - p2*uk2;
// limit the output
if (u > u_max) u = u_max;
else if (u < u_min) u = u_min;
Let $X$ be a projective variety over an algebraically closed field of characteristic zero. Let $\eta$ be a generic point of $X$ and $x$ be a closed point. By http://stacks.math.columbia.edu/tag/054F there exists a discrete valuation ring $R$ and a morphism $\mbox{Spec}(R) \to X$ such that the fraction field of $R$ maps to $\eta$ and the residue field maps to $x$. My question is: Can we make sure that such a morphism $\mbox{Spec}(R) \to X$ is flat? I am just writing my comment as an answer. Since you refer to $X$ as a variety, I assume that $X$ is an integral scheme. I will also assume that $x$ does not equal $\eta$, i.e., I will assume that $X$ is not a singleton. In that case, there exists a flat morphism $\text{Spec}(R)\to X$ from the spectrum of a DVR to $X$ having image $\{\eta,x\}$ if and only if $x$ is a codimension $1$ point at which $X$ is regular. In one direction, if $x$ is a codimension $1$ point at which $X$ is regular, then the local ring $\mathcal{O}_{X,x}$ is already a DVR. For every point $x$ of $X$, the natural morphism $\text{Spec}(\mathcal{O}_{X,x})\to X$ is flat. Therefore, when $x$ is a codimension $1$ point at which $X$ is regular, the natural morphism $\text{Spec}(\mathcal{O}_{X,x}) \to X$ is a flat morphism from the spectrum of a DVR having image $\{\eta,x\}$. Conversely, if $x$ has codimension $\geq 2$ in $X$ or if $x$ has codimension $1$ yet $X$ is not regular at $x$, then $\mathfrak{m}/\mathfrak{m}^2$ has dimension $d\geq 2$ as a vector space over $\kappa(x) = \mathcal{O}_{X,x}/\mathfrak{m}$.
Now consider the short exact sequence of $\mathcal{O}_{X,x}$-modules, $$0 \to \mathfrak{m}/\mathfrak{m}^2 \to \mathcal{O}_{X,x}/\mathfrak{m}^2 \to \mathcal{O}_{X,x}/\mathfrak{m} \to 0.$$ For a local ring homomorphism, $\mathcal{O}_{X,x}\to R$, the base change complex is $$ \mathfrak{m}/\mathfrak{m}^2\otimes_{\kappa(x)} (R/\mathfrak{m}R) \to R/\mathfrak{m}^2R \xrightarrow{q} R/\mathfrak{m}R.$$ The kernel of the surjection $q$ is $\mathfrak{m}R/\mathfrak{m}^2R$. Thus, the complex is exact if and only if the following surjective homomorphism is an isomorphism, $$p:\mathfrak{m}/\mathfrak{m}^2 \otimes_{\kappa(x)} (R/\mathfrak{m}R) \to \mathfrak{m}R/\mathfrak{m}^2R.$$ Since $R$ is a DVR, $\mathfrak{m}R$ equals $\pi^eR$ where $\pi$ is a uniformizing element and where $e\geq 1$ is an integer. Thus, the module $\mathfrak{m}R/\mathfrak{m}^2R$ is generated by a single element $\overline{\pi}^e$ as a $R/\mathfrak{m}R$-module. On the other hand, since $\mathfrak{m}/\mathfrak{m}^2$ is a free $\kappa(x)$-module of rank $d$, also the base change $R/\mathfrak{m}R$-module, $\mathfrak{m}/\mathfrak{m}^2\otimes_{\kappa(x)} (R/\mathfrak{m}R)$, is free of rank $d\geq 2$. Thus, $p$ is not an isomorphism of $R/\mathfrak{m}R$-modules. Since flatness of $R$ over $\mathcal{O}_{X,x}$ would force the base change complex to be exact (the relevant $\text{Tor}_1$ would vanish), and hence would force $p$ to be an isomorphism, the morphism $\text{Spec}(R) \to X$ is not flat.
Let's say we've got a fluid of heavy and light particles inside a cubical flask, which is initially shaken up so that the density of heavy particles is uniform everywhere. Let's also say that these molecules interact identically whether in contact with like or opposite particles (i.e. they don't naturally separate, like water and oil, and they don't naturally mix, like soap and dirt). Over time, the heavier particles will settle at the bottom of the flask. This seems like a decrease in overall entropy to me. However, since potential energy was unleashed when sinking in the flask, it still isn't obvious that this violates any fundamental principle. So, my next thought is whether or not there is a way to re-capture the heat dispersed while the particles decrease their potential energy through settling, and then use that heat to rotate the cube 90 degrees so that the particles will be level with where they started, which would use up the exact same amount of energy, since their center of mass would be at the same height: (Aside: the particles would only remain on the left edge of the container for a moment - then they would again start settling back down to the bottom.) Since we want to re-capture as much energy as possible, let's use a Carnot engine. We'll situate our cube in a somewhat large, cold container. A thermally isolating barrier will be erected between the cube and the container, and only a Carnot engine will connect the two. We wait for the particles in the cube to settle down, hence giving off heat, which is initially trapped in the cube. We then turn on the engine, and extract a fraction $ \eta $ of the heat dispersed by the settling particles. 
The formula for $ \eta $ in a general Carnot engine is: $ \eta = \frac{T_{H}^{*} - T_{C}}{T_{H}^{*}} = 1 - \frac{T_{C}}{T_{H}^{*}} $ where $ T_{H}^{*} $ is the temperature of the cube after settling and $ T_{C} $ is the temperature of the outside fluid, which is assumed to be the same as the temperature of the cube at the start of the experiment. How good can $ \eta $ get? While it would be tricky/impossible to get $ T_{H}^{*} $ arbitrarily large, it wouldn't be too hard to get $ T_{C} $ arbitrarily small. If we take the limit as $ T_{C} $ goes to absolute zero, we'll get infinitesimally close to $ 100\% $ of our energy back! It doesn't matter that $ \eta $ isn't exactly $ 100\% $ - an infinitesimal loss of energy won't explain the finite drop in entropy. We can have, say, a very tiny spring handy to push the cube the last little bit. A diagram of how this machine would work might look like the figure below, where:

- The circular material is the friction-less insulator that separates the cube from the surrounding fluid.
- The square is our cubical flask.
- The dots are the heavier particles (I haven't drawn the lighter particles they displace).
- The four circles connecting the cube with the outside fluid are four Carnot engines. They are initially off. There are four so that the whole rotating chamber (everything inside the circular insulator) is equally balanced, except for the particles.
- Red illustrates where (all but infinitesimal leaks of) the energy goes.
- $ \epsilon $ is the temperature of the cube and the surrounding liquid after the Carnot engines extract the heat from the inner cube. $ \eta $ isn't exactly $ 100 \% $, and hence some small amount of energy will remain untapped.
- $ 4 \delta $ is the amount of energy not extracted.
- $ m^{*} $ is the mass of one of the heavier particles minus that of one of the lighter particles.
- $ g $ is the acceleration due to gravity.
- $ h $ is the average of the distances the particles fall - that is, half the side length of the cube.
(Thus, between these last three points, $ m^{*} g h $ is the amount of energy unleashed as the particles fall) The mechanism used to convert $ m^{*} g h $ energy into the rotation of the chamber isn't shown, but should be pretty basic. To me, it seems that this machine finitely decreases entropy at an infinitesimal loss of energy. What's gone wrong with my reasoning? Any clarification will be greatly appreciated!
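The efficiency limit invoked in the question is easy to tabulate numerically. The sketch below (the temperatures and the heat $Q$ are arbitrary assumed values, not from the question) just shows that the untapped fraction $(1-\eta)Q$ shrinks linearly as $T_C \to 0$:

```python
def carnot_efficiency(t_hot, t_cold):
    """Fraction of heat convertible to work by an ideal Carnot engine."""
    return 1.0 - t_cold / t_hot

Q = 1.0        # heat released by the settling particles (arbitrary units)
t_hot = 300.0  # cube temperature after settling (assumed)

for t_cold in (100.0, 10.0, 1.0, 0.01):
    eta = carnot_efficiency(t_hot, t_cold)
    # the untapped energy (1 - eta) * Q shrinks proportionally to t_cold
    print(t_cold, eta, (1.0 - eta) * Q)
```

This is exactly the limit the question relies on: $\eta \to 1$ as $T_C \to 0$, so the unrecovered energy can be made as small as desired (which is why the apparent entropy paradox has to be resolved elsewhere, e.g. in the entropy cost of the cold reservoir itself).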
Answer $x^2+12x+36 \Rightarrow (x+6)^2 $ Work Step by Step The third term of a perfect square trinomial is equal to the square of half the coefficient of the middle term. Hence, to complete the square of the given expression $ x^2+12x ,$ the third term must be \begin{array}{l}\require{cancel}\left( \dfrac{12}{2} \right)^2\\\\=\left( 6 \right)^2\\\\= 36 .\end{array} Hence, $ x^2+12x+36 \Rightarrow (x+6)^2 .$
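The rule quoted in this answer - add $(b/2)^2$ to $x^2+bx$ - can be checked numerically; the helper name below is my own:

```python
def complete_the_square_term(b):
    """Third term that turns x^2 + b*x into a perfect square: (b/2)^2."""
    return (b / 2.0) ** 2

c = complete_the_square_term(12)
# verify x^2 + 12x + 36 == (x + 6)^2 at a few sample points
for x in (-6.0, -1.0, 0.0, 2.5, 10.0):
    assert abs((x * x + 12 * x + c) - (x + 6.0) ** 2) < 1e-9
print(c)  # 36.0
```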
Answer Two vectors are orthogonal if the dot product of the two vectors is 0. Work Step by Step If $\mathbf{u}$ and $\mathbf{w}$ are non-zero vectors and $\theta $ is the smallest non-negative angle between $\mathbf{u}$ and $\mathbf{w}$, then the angle between the two vectors can be calculated as: $\begin{align} & \cos \theta =\frac{\mathbf{u}\centerdot \mathbf{w}}{\left\| \mathbf{u} \right\|\left\| \mathbf{w} \right\|} \\ & \mathbf{u}\centerdot \mathbf{w}=\left\| \mathbf{u} \right\|\left\| \mathbf{w} \right\|\cos \theta \\ \end{align}$ Two vectors are said to be orthogonal if the angle between them is ${{90}^{\circ }}$ If $\mathbf{u}$ and $\mathbf{w}$ are orthogonal then, $\begin{align} & \mathbf{u}\centerdot \mathbf{w}=\left\| \mathbf{u} \right\|\left\| \mathbf{w} \right\|\cos 90{}^\circ \\ & =\left\| \mathbf{u} \right\|\left\| \mathbf{w} \right\|\left( 0 \right) \\ & =0 \end{align}$ $\mathbf{u}\centerdot \mathbf{w}=0$
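The orthogonality test described here amounts to a single dot product; a minimal Python sketch (the function names are mine):

```python
def dot(u, w):
    """Dot product of two vectors given as sequences of numbers."""
    return sum(a * b for a, b in zip(u, w))

def are_orthogonal(u, w, tol=1e-12):
    """Non-zero vectors are orthogonal iff their dot product is 0."""
    return abs(dot(u, w)) < tol

print(are_orthogonal([1, 0], [0, 5]))   # True: the angle is 90 degrees
print(are_orthogonal([1, 2], [2, 1]))   # False: the dot product is 4
```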
I have a question that reads: A full-wave controlled bridge rectifier is considered, shown below. Vs = 230V. Load is represented by \$i_o = 10\$ A. Frequency is 50 Hz and the delay angle is 45\$^{\circ}\$. (a) Sketch \$V_{o(avg)}\$, \$V_{SCR1}\$ and \$i_s\$ (b) Derive the formula for calculating the average value and RMS value of the output voltage Do I assume that this FWB rectifier has an RL load or an R load? I am assuming that there is a purely resistive load: So, they look roughly like this (a) (apologies for paint) (b) $$V_{o(avg)} = \frac{1}{\pi} \int_\alpha^\pi V_m \sin{\omega t} \text{ d} \omega t = \frac{V_m}{\pi}\left[-\cos{\omega t}\right]_\alpha ^\pi = \boxed{(1+\cos{\alpha}) \frac{V_m}{\pi}}$$ and RMS: $$V_{rms} = \sqrt{\frac{1}{\pi}\int_\alpha^\pi V_m^2 \sin^2{\omega t}\text{ d}\omega t }$$ Then, I think I just use \$\sin^2{\omega t}=\frac{1-\cos{2\omega t}}{2}\$ to continue the integration. Is this right, or am I making things too complicated?
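One way to sanity-check the average-value integral for the resistive-load case is numerical integration. The sketch below (midpoint rule; function names are mine) compares $\frac{1}{\pi}\int_\alpha^\pi V_m\sin\omega t\,\mathrm{d}\omega t$ against the closed form $\frac{V_m}{\pi}(1+\cos\alpha)$, since $[-\cos\omega t]_\alpha^\pi = 1+\cos\alpha$:

```python
import math

def v_avg_numeric(Vm, alpha, n=100000):
    """Average of Vm*sin(wt) over [alpha, pi], normalised by pi (full-wave)."""
    h = (math.pi - alpha) / n
    s = 0.0
    for i in range(n):
        wt = alpha + (i + 0.5) * h  # midpoint rule
        s += Vm * math.sin(wt)
    return s * h / math.pi

Vm = 230.0 * math.sqrt(2.0)   # peak of a 230 V RMS supply
alpha = math.pi / 4.0         # 45 degree delay angle
closed_form = Vm * (1.0 + math.cos(alpha)) / math.pi
print(v_avg_numeric(Vm, alpha), closed_form)
```

The two numbers agree to many digits, which confirms the $(1+\cos\alpha)$ form of the result; the same approach works for checking the RMS integral.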
This paper is one in a series of papers in which du Bois-Reymond studied functions on the positive real line ordered by the "order of infinity" (order of growth at infinity), what he later called the infinitary pantachy. This was motivated by attempts to find an "ideal boundary" between converging and diverging series in terms of the growth of their terms, by analogy to filling in Dedekind cuts to get real numbers. Du Bois-Reymond more or less established that such a completion would not work. Later Hausdorff developed the modern theory in Cantorian terms, which explains why: the "gaps" in the orders of growth are more severe, beyond what countable limits can fill in. These are now called Hausdorff gaps, but du Bois-Reymond did not work in cardinality terms. Fisher gives a detailed account in Infinite and Infinitesimal Quantities of du Bois-Reymond and their Reception; here is the reference with direct link: Ueber asymptotische Werthe, infinitäre Approximationen und infinitäre Auflösung von Gleichungen, Mathematische Annalen 8 (1875) 363-414 (Nachträge zur Abhandlung: Ueber asymptotische Werthe etc., 574-576). Du Bois-Reymond defines one order of infinity to be larger than another if the limit of the quotient of their representing functions is infinite. He is not very specific about the class of functions considered, and apparently overlooks the possibility of incomparable orders, an issue later investigated by Borel and Hausdorff. He then analogizes, and contrasts, this continuum of orders to the linear continuum of real numbers: " Between the two domains there are many analogies... Instead of numbers as fixed signs in the domain of numbers, one has in the infinitary domain of quantities an unlimited number of simple functions: the exponential functions, the powers, the logarithmic functions, that likewise form fixed points of comparisons, and between whose arbitrarily close infinities a limitless number of infinities different from each other can still be inserted.
These functions serving as numbers stretch, in accordance with the present state of analysis, from exponential functions stacked arbitrarily high... down to logarithms repeated arbitrarily often, to infinity... The question which arises here, whether one actually includes in this interval of infinity the entire domain of infinity, in such a way that one can enclose any given infinity between two infinities of functions of that interval, as is possible for any number in the number series: This question I have already answered in substance and in fact negatively..." He refers to an 1873 paper where only a particular case was considered. This time he gives a more general version, which Borel highly praised and termed du Bois-Reymond's theorem. A consequence of it, and the motivation, is the non-existence of the "ideal boundary" that can be specified by a sequence of converging/diverging series, as Bertrand earlier proposed (he was thinking of reciprocals to products of powers and powers of iterated logarithms as terms). Du Bois-Reymond's theorem would be the "diagonal argument": " ...if an unlimited family of more and more slowly increasing functions $\lambda_1(x), \lambda_2(x), \lambda_3(x)...$ is given which for each $r$ satisfies the condition $\lim\lambda_r(x)/\lambda_{r+1}(x) = \infty$, one can always specify a function $\psi(x)$ which becomes infinite with $x$, but more slowly than any function of that family". The construction of $\psi(x)$ is in a footnote on p. 365, and does show some "diagonality" if one looks hard. There is however no cardinality involved in du Bois-Reymond's setting (Hausdorff would relate gaps to cardinalities only later), so making it into the diagonal argument takes some reading in. In his 1882 book Die allgemeine Functionentheorie (The General Theory of Functions) du Bois-Reymond touches base with Cantorian set theory, and mentions that Cantor showed the "continuum of the idealists" to be uncountable.
He does not however point out any affinity between the diagonal argument by which it was shown and his earlier construction, let alone lay any claim to it. So whatever the relation between the two it was not apparent to du Bois-Reymond.
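Du Bois-Reymond's theorem is easy to illustrate numerically with a concrete family of my own choosing, $\lambda_r(x)=x^{1/r}$ (which satisfies $\lim_{x\to\infty} \lambda_r(x)/\lambda_{r+1}(x)=\infty$ for each $r$): the function $\psi(x)=e^{\sqrt{\ln x}}$ tends to infinity yet is eventually smaller than every member of the family.

```python
import math

def lam(r, x):
    """An ever more slowly growing family: lambda_r(x) = x**(1/r)."""
    return x ** (1.0 / r)

def psi(x):
    """Tends to infinity, but eventually more slowly than every lambda_r:
    psi(x)/lam(r, x) = exp(sqrt(ln x) - (ln x)/r) -> 0 as x -> infinity."""
    return math.exp(math.sqrt(math.log(x)))

x = 1e60
for r in (1, 2, 3, 5):
    print(r, psi(x) / lam(r, x))  # ratios shrink toward 0 as x grows
```

For each fixed $r$ the ratio eventually goes to $0$; the point of the theorem (and of the "diagonality") is that a single $\psi$ works against the whole countable family at once.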
Solve the following system of equations: $\left\{\begin{matrix} x^3(1-x)+y^3(1-y)=12xy+18\\ \left | 3x-2y+10 \right |+\left | 2x-3y \right |=10 \end{matrix}\right.$ closed as too localized by Gerry Myerson, Amzoti, Lord_Farin, Micah, Asaf Karagila♦ May 28 '13 at 13:20 Perhaps asking Mathematica (WolframAlpha gives the answer as well) to solve it: Solve[{x^3 (1 - x) + y^3 (1 - y) == 12 x y + 18, Abs[3 x - 2 y + 10] + Abs[2 x - 3 y] == 10}, {x, y}, Reals] immediately gives: $$\left\{\left\{x\to -\sqrt{3},y\to \sqrt{3}\right\}\right\}$$ And there is this nice plot of the two curves: P.S. I will probably (if I find a reason to) add an analytic answer later.
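The Mathematica solution can be verified independently with a few lines of Python, substituting $x=-\sqrt3$, $y=\sqrt3$ into both equations:

```python
import math

x, y = -math.sqrt(3.0), math.sqrt(3.0)

# residuals of both equations; each should be ~0 for a solution
eq1 = x ** 3 * (1 - x) + y ** 3 * (1 - y) - (12 * x * y + 18)
eq2 = abs(3 * x - 2 * y + 10) + abs(2 * x - 3 * y) - 10

print(eq1, eq2)
```

Both residuals vanish up to floating-point rounding: the left side of the first equation equals $-18$ and so does $12xy+18$, while $|10-5\sqrt3|+|-5\sqrt3| = 10$ exactly.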
If we have a two-dimensional measurement basis, then we have two possible outcomes of the measurement. Now I figured: considering the law of conservation of energy, if one particle goes in, one and only one can come out. So the outcome "both results simultaneously" cannot happen, for that would... Summary: If a measurement outcome depends on the measurement setup, is the measured thing not real, or the measurement? If the factual outcome of an electron-spin measurement depends on the orientation of the SG magnet, for instance up or down in one orientation and left or right in the other, does... Suppose we have a quantum object in superposition with respect to some measurement basis, given by: ##\frac{\sqrt{2}}{\sqrt{3}}|a \rangle + \frac{1}{\sqrt{3}}|b \rangle##. (1) Suppose the measurement is made, and the system evolves, according to MWI, into ##\frac{\sqrt{2}}{\sqrt{3}}|a \rangle|W_a \rangle +... A physicist prepares a box and tells us that in the box there is a cat that is in a superposition of being alive and being dead. How can we be sure whether they're telling the truth? Is the state a superposition or a mixture? If we open the box and measure only whether the cat is alive, using... I was reading the free will theorem and it basically says that subatomic particles and observers have to have free will because there's nothing prior to measurement that predetermines the outcome. Here's more: The free will theorem states: Given the axioms, if the two experimenters in question... Hi. Double slit experiments are being performed successfully with increasingly large molecules. Some physicists (e.g. Anton Zeilinger) believe it might work with viruses as well. Assuming it works with a system that qualifies as a measurement device (be it a virus or something else complex... Given an ideal "box" as used by Schrodinger; - have a quantum event occur inside it, e.g.
sudden cat death with 50% probability. - have a machine in it that sends out a qubit, fully entangled with the box's internal state, at regular intervals. - the qubit is a polarised photon - outside, use a... Let's say I have a system whose time evolution looks something like this: This equation tells me that if I measure energy on it, I will get either energy reading ## E_0 ## or energy reading ## E_1 ##; when I do that, the system will "collapse" into one of the energy eigenstates, ## \psi_0 ##... Hey all, I am using a Wheatstone bridge with 4 strain gauges as resistors. I have a formula for the output voltage (Vout). My question is how do I make it so instead of voltage I measure force? Do I simply apply a set max. force (let's say 140N), see what voltage I get (Vout,max) and then use... Hi. I'm a retired software engineer who has always been fascinated in science. I was a math major in college, before the time that computer science majors were a thing. My key areas of interest these days are particle physics and quantum mechanics. I'll be asking some questions that have... I am still confused about the difference between measurement and interaction. I mean when electrons are travelling from source to the screen through the slits, there are air molecules in their way. And even if the electron double slit experiment is carried out in total vacuum in a completely... Suppose we have a photon in superposition of reaching detector A or detector B. Then, in Everett-worlds, both outcomes (detection at A/detection at B) are true, but in different worlds (##|U_x\rangle|x\rangle##). But if we observe the law of conservation of energy and the quantisation of the... Hi everyone, I'm kind of new in the QM world and I'm having difficulties understanding the superposition and the measurement principles together with the wave function collapse. This is how I understand these principles: Superposition: While not measuring, the particle is in a superposition of...
I was wondering how the rules work for observation in a quantum system. Particularly, about what happens if two separate entities try measuring at the same time. And also, what kinds of interactions are happening all the time that are considered measurements, for example in quantum... This thread is a split-off of this post: https://www.physicsforums.com/threads/do-macro-objects-get-entangled.946927/page-2#post-5997089 So my issue is this: if, for convenience, we use a Copenhagen interpretation, and we measure an observable WF ##\alpha |A \rangle + \beta |B \rangle##, then... Hi, I have an air-wound 0.736 mH coil in series with a 3.5pF capacitor being driven with a function generator. Ideally the series resonant frequency should be around 3.13 MHz. The internal impedance of the function generator is 50 ohms or so. At resonance the voltage across the cap should be... Hi, I would like to know how the amplitude of probability is estimated/determined in practice, for a given experiment. In this example 1.3.2 Analysis of Experiment 2 it is assumed that the two possible states are equiprobable. Then from the experimental results... 1. Homework Statement 2. Homework Equations (Above given. I think it's a hint.) 3. The Attempt at a Solution How is delta T = 0.25?? Where is the 10kΩ coming from? We were given a solution for this homework, since it's not collected. It just seems out of order. 1. Homework Statement For a lab, we explored Kirchhoff's Laws. I made a procedural mistake while measuring my voltage values across my different elements. I know that all of my calculated voltage sums are correct, so I was wondering what I might have done to have loops ACBA and CDBC have almost...
The Fundamental Theorem of Quantum Measurement is stated as follows: Every set of operators ##\{ A_n \}## ##n =1,...,N## that satisfies ##\sum_n A_n^{\dagger}A_n = I## describes a possible measurement on a quantum system, where the measurement has ##n## possible outcomes labeled by ##n##. If... The Fundamental Theorem of Quantum Measurements (see page 25 of these PDF notes) is given as follows: Every set of operators ##\{A_n \}_n## where ##n=1,...,N## that satisfies ##\sum_{n}A^{\dagger}_{n}A_{n} = I##, describes a possible measurement on a quantum system, where the measurement has... I am interested in defining Kraus operators which allow you to define quantum measurements peaked at some basis state. To this end I am considering the Normal Distribution. Consider a finite set of basis states ##\{ |x \rangle\}_x## and a set of quantum measurement operators of the form $$A_C =... In scientific experiment, we often have a physical property that can change but have no detectable impact on the measurement. For example, suppose I have a mass of (say) 30 grams attached to a string passing over a pulley. I can add up to another 2 grams and the system doesn't budge. In our... My question is simple, and I'm only asking it because most places talk about more advanced problems than this one: I've measured the radius of a sphere (a very regular one) with a micrometer of 0.01 mm resolution. I took 3 measures (rotating it between each measure), and all of them were... 1. Homework Statement 2. Homework Equations in addition to those provided in the questions, I used the following: Tr(B) = sigma<x_j|B|x_j> purity = Tr(rho^2) 3. The Attempt at a Solution I find calculating trace and purity very confusing. Am I on the right track with question 1? With...
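The completeness condition $\sum_n A_n^{\dagger}A_n = I$ in the theorem quoted above is straightforward to check numerically. Below is a dependency-free Python sketch (all helper names are mine) verified on the projective measurement $\{|0\rangle\langle 0|, |1\rangle\langle 1|\}$:

```python
def dagger(A):
    """Conjugate transpose of a matrix given as a list of lists."""
    return [[A[j][i].conjugate() for j in range(len(A))]
            for i in range(len(A[0]))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

def completeness_defect(ops):
    """Max |(sum_n A_n^dagger A_n - I)_{ij}| -- zero for a valid measurement."""
    d = len(ops[0])
    total = [[0j] * d for _ in range(d)]
    for A in ops:
        total = matadd(total, matmul(dagger(A), A))
    return max(abs(total[i][j] - (1 if i == j else 0))
               for i in range(d) for j in range(d))

# Projective measurement in the computational basis: A_0 = |0><0|, A_1 = |1><1|
P0 = [[1 + 0j, 0j], [0j, 0j]]
P1 = [[0j, 0j], [0j, 1 + 0j]]
print(completeness_defect([P0, P1]))  # 0.0
```

Dropping one of the operators leaves a defect of 1, i.e. the remaining set no longer describes a complete measurement.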
In the almost general case, the space-time metric looks like: \begin{equation}ds^2 = g_{00}(dx^0)^2 + 2g_{0i}dx^0dx^i + g_{ik}dx^idx^k,\end{equation} where ##i,k = 1 \ldots 3## are spatial indices. The spatial distance between points (as determined, for example, by the stationary observer)... This thread is to serve as: - a collection of theories that have been falsified by and/or have had new constraints placed on them by the ongoing gravitational wave measurements; - a place to discuss the further constraining/falsifying of still existing models using GW data. I'll start by posting... I'm trying to understand the impact of past measurements, and when measurements occur. As I understand it, in the simplest case, you've got a particle emitter in the center of a circle, and a measuring plate around the circle. Here in the ideal case the particle is emitted and has equal...
One disadvantage of the fact that you have posted 5 identical answers (1, 2, 3, 4, 5) is that if other users have some comments about the website you created, they will post them in all these places. If you have some place online where you would like to receive feedback, you should probably also add a link to that. — Martin Sleziak 1 min ago BTW your program looks very interesting, in particular the way to enter mathematics. One thing that seems to be missing is documentation (at least I did not find it). This means that it is not explained anywhere: 1) How a search query is entered. 2) What the search engine actually looks for. For example, upon entering $\frac xy$ will it also find $\frac{\alpha}{\beta}$? Or even $\alpha/\beta$? What about $\frac{x_1}{x_2}$? ******* Is it possible to save a link to a particular search query? For example in Google I am able to use a link such as: google.com/search?q=approach0+xyz A feature like that would be useful for posting bug reports. When I try to click on "raw query", I get curl -v https://approach0.xyz/search/search-relay.php?q='%24%5Cfrac%7Bx%7D%7By%7D%24' But pasting the link into the browser does not do what I expected it to. ******* If I copy-paste a search query into your search engine, it does not work. For example, if I copy $\frac xy$ and paste it, I do not get what I would expect. Which means I have to type every query. The possibility to paste would be useful for long formulas. Here is what I get after pasting this particular string: I was not able to enter integrals with bounds, such as $\int_0^1$. This is what I get instead: One thing which we should keep in mind is that duplicates might be useful. They improve the chance that another user will find the question, since with each duplicate another copy with a somewhat different phrasing of the title is added. So if you spent a reasonable time searching and did not find...
In comments and other answers it was mentioned that there are some other search engines which could be better when searching for mathematical expressions. But I think that nowadays several pages use LaTeX syntax (Wikipedia, this site, to mention just two important examples). Additionally, som... @MartinSleziak Thank you so much for your comments and suggestions here. I have taken a brief look at your feedback, I really love your feedback and will seriously look into those points and improve approach0. Give me just some minutes, I will answer/reply to your feedback in our chat. — Wei Zhong 1 min ago I still think that it would be useful if you added to your post where you want to receive feedback from math.SE users. (I suppose I was not the only person to try it.) Especially since you wrote: "I am hoping someone interested can join and form a community to push this project forward, " BTW those animations with examples of searching look really cool. @MartinSleziak Thanks to your advice, I have appended more information to my posted answers. Will reply to you shortly in chat. — Wei Zhong 29 secs ago We are an open-source project hosted on GitHub: http://github.com/approach0 You are welcome to send any feedback on our GitHub issue page! @MartinSleziak Currently it has only documentation for developers (approach0.xyz/docs); hopefully this project will accelerate its release process when people get involved. But I will list this as an important TODO before publishing approach0.xyz. At that time I hope there will be a helpful guide page for new users. @MartinSleziak Yes, $x+y$ will find $a+b$ too; IMHO this is the very basic requirement for a math-aware search engine. Actually, approach0 will look into expression structure and symbolic alpha-equivalence too. But for now, $x_1$ will not match $x$ because approach0 considers them not structurally identical, but you can use a wildcard to match $x_1$ just by entering a question mark "?" or \qvar{x} in a math formula.
As for your example, entering $\frac{\qvar{x}}{\qvar{y}}$ is enough to match it. @MartinSleziak As for the query link, it needs more explanation. Technically, the way you mentioned that Google is using is an HTTP GET method, but for mathematics a GET request may not be appropriate since a math query has structure; usually a developer would alternatively use an HTTP POST request with a JSON-encoded body. This makes development much easier because JSON is richly structured and makes it easy to separate math keywords. @MartinSleziak Right now there are two solutions for the "query link" problem you addressed. First is to use the browser back/forward buttons to navigate through the query history. @MartinSleziak Second is to use the command-line tool 'curl' to get search results from a particular query link (you can actually see that in the browser, but it is in the developer tools, such as the network inspection tab of Chrome). I agree it is helpful to add a GET query link for users to refer to a query; I will write this point into the project TODO and improve this later. (It just needs some extra effort though.) @MartinSleziak Yes, if you search \alpha, you will get all \alpha documents ranked top, with different symbols such as "a", "b" ranked after the exact matches. @MartinSleziak Approach0 plans to add a "Symbol Pad" just like what www.symbolab.com and searchonmath.com are using. This will help users input Greek symbols even if they do not remember how to spell them. @MartinSleziak Yes, you will get them; Greek letters are tokenized to the same thing as normal alphabetic letters. @MartinSleziak As for integral upper bounds, I think it is a problem in a JavaScript plugin approach0 is using; I also observe this issue. The only thing you can do is to use the arrow keys to move the cursor to the rightmost position and hit '^' so it goes to the upper-bound edit. @MartinSleziak Yes, it has a threshold now, but this is easy to adjust in the source code. Most importantly, I have ONLY 1000 pages indexed, which means only 30,000 posts on math stackexchange.
This is a very small number, but I will index more posts/pages once search-engine efficiency and relevance are tuned. @MartinSleziak As I mentioned, the index is too small currently. You will probably get what you want when this project develops to the next stage, which is to enlarge the index and publish. @MartinSleziak Thank you for all your suggestions. Currently I just hope more developers get to know this project; indeed, this is my side project, and development progress can be very slow due to my time constraints. But I believe in its usefulness and will spend my spare time developing it until it is published. So, we would not have polls like: "What is your favorite calculus textbook?" — GEdgar 2 hours ago @GEdgar I'd say this goes under "tools." But perhaps it could be made explicit. — quid 1 hour ago @quid I think that the type of question mentioned in GEdgar's comment is closer to book-recommendations, which are valid questions on the main site. (Although not formulated like that.) I also think that his comment was tongue-in-cheek. (Although it is a bit more difficult for me to detect sarcasm, as I am not a native speaker.) — Martin Sleziak 57 mins ago "What is your favorite calculus textbook?" is opinion based and/or too broad for main. If at all, it is a "poll." On tex.se they have polls "favorite editor/distro/fonts etc" while actual questions on these are still on-topic on main. Beyond that, it is not clear why a question about which software one uses should be a valid poll while the question about which book one uses is not. — quid 7 mins ago @quid I will reply here, since I do not want to digress in the comments too much from the topic of that question. Certainly I agree that "What is your favorite calculus textbook?" would not be suitable for the main site. Which is why I wrote in my comment: "Although not formulated like that". Book recommendations are certainly accepted on the main site, if they are formulated in the proper way.
If there is a community poll and somebody suggests the question from GEdgar's comment, I will be perfectly OK with it. But I thought that his comment was simply a playful remark pointing out that there are plenty of "polls" of this type on the main site (although there should not be). I guess some examples can be found here or here. Perhaps it is better to link search results directly on MSE here and here, since in the Google search results it is not immediately visible that many of those questions are closed. Of course, I might be wrong - it is possible that GEdgar's comment was meant seriously. I saw such a poll for the first time on TeX.SE. The poll there concentrated on the TeXnical side of things. If you look at the questions there, they are asking about TeX distributions, packages, tools used for graphs and diagrams, etc. Academia.SE has some questions which could be classified as "demographic" (including gender). @quid From what I heard, it stands for Kašpar, Melichar and Baltazár, as the answer there says. In Slovakia you would see G+M+B, where G stands for Gašpar. But that is only anecdotal. And if I am to believe Slovak Wikipedia, it should be Christus mansionem benedicat. From the Wikipedia article: "Nad dvere kňaz píše C+M+B (Christus mansionem benedicat - Kristus nech žehná tento dom). Toto sa však často chybne vysvetľuje ako 20-G+M+B-16 podľa začiatočných písmen údajných mien troch kráľov." My attempt at an English translation: The priest writes C+M+B on the door (Christus mansionem benedicat - May Christ bless this house). However, this is often mistakenly explained as 20-G+M+B-16, after the initial letters of the alleged names of the three kings. As you can see there, Christus mansionem benedicat is translated into Slovak as "Kristus nech žehná tento dom". In Czech it would be "Kristus ať žehná tomuto domu" (I believe). So K+M+B cannot come from the initial letters of the translation. It seems that they have also other interpretations in Poland.
"A tradition in Poland and German-speaking Catholic areas is the writing of the three kings' initials (C+M+B or C M B, or K+M+B in those areas where Caspar is spelled Kaspar) above the main door of Catholic homes in chalk. This is a new year's blessing for the occupants and the initials also are believed to also stand for "Christus mansionem benedicat" ("May/Let Christ Bless This House"). Depending on the city or town, this will be happen sometime between Christmas and the Epiphany, with most municipalities celebrating closer to the Epiphany." BTW in the village where I come from the priest writes those letters on houses every year during Christmas. I do not remember seeing them on a church, as in Najib's question. In Germany, the Czech Republic and Austria the Epiphany singing is performed at or close to Epiphany (January 6) and has developed into a nationwide custom, where the children of both sexes call on every door and are given sweets and money for charity projects of Caritas, Kindermissionswerk or Dreikönigsaktion[2] - mostly in aid of poorer children in other countries.[3] A tradition in most of Central Europe involves writing a blessing above the main door of the home. For instance if the year is 2014, it would be "20 * C + M + B + 14". The initials refer to the Latin phrase "Christus mansionem benedicat" (= May Christ bless this house); folkloristically they are often interpreted as the names of the Three Wise Men (Caspar, Melchior, Balthasar). In Catholic parts of Germany and in Austria, this is done by the Sternsinger (literally "Star singers"). After having sung their songs, recited a poem, and collected donations for children in poorer parts of the world, they will chalk the blessing on the top of the door frame or place a sticker with the blessing. On Slovakia specifically it says there: The biggest carol singing campaign in Slovakia is Dobrá Novina (English: "Good News"). 
It is also one of the biggest charity campaigns by young people in the country. Dobrá Novina is organized by the youth organization eRko.
An example of methylation analysis with simulated datasets

Part 2: Potential DMPs from the methylation signal

Methylation analysis with Methyl-IT is illustrated on simulated datasets of methylated and unmethylated read counts with relatively high average methylation levels: 0.15 and 0.286 for the control and treatment groups, respectively. In this part, potential differentially methylated positions (DMPs) are estimated following different approaches.

1. Background

Only a signal detection approach can detect real DMPs with high probability. Any statistical test not based on signal detection (e.g., Fisher's exact test) requires further analysis to distinguish the DMPs that occur naturally in the control group from those induced by a treatment. The analysis here is a continuation of Part 1.

2. Potential DMPs from the methylation signal using the empirical distribution

As suggested by the empirical density graphics (above), the critical values $H_{\alpha=0.05}$ and $TV_{d_{\alpha=0.05}}$ can be used as cutpoints to select potential DMPs. After setting dist.name = "ECDF" and tv.cut = 0.926 in the Methyl-IT function getPotentialDIMP, potential DMPs are estimated using the empirical cumulative distribution function (ECDF) and the critical value $TV_{d_{\alpha=0.05}}=0.926$.

DMP.ecdf <- getPotentialDIMP(LR = divs, div.col = 9L, tv.cut = 0.926, tv.col = 7,
                             alpha = 0.05, dist.name = "ECDF")

3. Potential DMPs detected with Fisher's exact test

In Methyl-IT, Fisher's exact test (FT) is implemented in the function FisherTest. In the current case, a pairwise group application of FT to each cytosine site is performed. The differences between the group means of read counts of methylated and unmethylated cytosines at each site are used for testing (pooling.stat = "mean"). Notice that only cytosine sites with critical values $TV_d > 0.926$ are tested (tv.cut = 0.926).
ft = FisherTest(LR = divs, tv.cut = 0.926, pAdjustMethod = "BH",
                pooling.stat = "mean", pvalCutOff = 0.05,
                num.cores = 4L, verbose = FALSE, saveAll = FALSE)

ft.tv <- getPotentialDIMP(LR = ft, div.col = 9L, dist.name = "None",
                          tv.cut = 0.926, tv.col = 7, alpha = 0.05)

There is not a one-to-one mapping between $TV$ and $HD$. However, at each cytosine site $i$, these information divergences satisfy the inequality $TV(p^{tt}_i,p^{ct}_i)\leq \sqrt{2}\,H_d(p^{tt}_i,p^{ct}_i)$ [1], where $H_d(p^{tt}_i,p^{ct}_i) = \sqrt{\frac{H(p^{tt}_i,p^{ct}_i)}{w}}$ is the Hellinger distance and $H(p^{tt}_i,p^{ct}_i)$ is given by Eq. 1 in Part 1. So, potential DMPs detected with FT can additionally be constrained by the critical value $H^{TT}_{\alpha=0.05}=114.5$, i.e., by requiring $H\geq114.5$.

4. Potential DMPs detected with the Weibull 2-parameter model

Potential DMPs can be estimated using the critical values derived from the fitted Weibull 2-parameter models, which are obtained after the non-linear fit of the theoretical model to the genome-wide $HD$ values for each individual sample using the Methyl-IT function nonlinearFitDist [2]. As before, only cytosine sites with critical values $TV > 0.926$ are considered DMPs. Notice that it is always possible to use other values of $HD$ and $TV$ as critical values, but whatever the choice, it will affect the final accuracy of the classification of DMPs into two groups, DMPs from control and DMPs from treatment (see below). So, it is important to make a good choice of the critical values.

nlms.wb <- nonlinearFitDist(divs, column = 9L, verbose = FALSE, num.cores = 6L)
# Potential DMPs from 'Weibull2P' model
DMPs.wb <- getPotentialDIMP(LR = divs, nlms = nlms.wb, div.col = 9L,
                            tv.cut = 0.926, tv.col = 7, alpha = 0.05,
                            dist.name = "Weibull2P")
nlms.wb$T1

##       Estimate   Std. Error   t value  Pr(>|t|)) Adj.R.Square
## shape  0.5413711 0.0003964435 1365.570 0         0.991666592250838
## scale 19.4097502 0.0155797315 1245.833 0
##       rho               R.Cross.val       DEV
## shape 0.991666258901194 0.996595712743823 34.7217494754823
## scale
##       AIC               BIC               COV.shape    COV.scale
## shape -221720.747067975 -221694.287733122 1.571674e-07 -1.165129e-06
## scale -1.165129e-06     2.427280e-04
##       COV.mu n
## shape NA     50000
## scale NA     50000

5. Potential DMPs detected with the Gamma 2-parameter model

As in the case of the Weibull 2-parameter model, potential DMPs can be estimated using the critical values derived from the fitted Gamma 2-parameter models, and only cytosine sites with critical values $TV_d > 0.926$ are considered DMPs.

nlms.g2p <- nonlinearFitDist(divs, column = 9L, verbose = FALSE, num.cores = 6L,
                             dist.name = "Gamma2P")
# Potential DMPs from 'Gamma2P' model
DMPs.g2p <- getPotentialDIMP(LR = divs, nlms = nlms.g2p, div.col = 9L,
                             tv.cut = 0.926, tv.col = 7, alpha = 0.05,
                             dist.name = "Gamma2P")
nlms.g2p$T1

##       Estimate   Std. Error   t value  Pr(>|t|)) Adj.R.Square
## shape  0.3866249 0.0001480347 2611.717 0         0.999998194156282
## scale 76.1580083 0.0642929555 1184.547 0
##       rho               R.Cross.val       DEV
## shape 0.999998194084045 0.998331895911125 0.00752417919133131
## scale
##       AIC              BIC               COV.alpha    COV.scale
## shape -265404.29138371 -265369.012270572 2.191429e-08 -8.581717e-06
## scale -8.581717e-06    4.133584e-03
##       COV.mu df
## shape NA     49998
## scale NA     49998

Summary table:

data.frame(ft = unlist(lapply(ft, length)), ft.hd = unlist(lapply(ft.hd, length)),
           ecdf = unlist(lapply(DMPs.hd, length)), Weibull = unlist(lapply(DMPs.wb, length)),
           Gamma = unlist(lapply(DMPs.g2p, length)))

##      ft ft.hd ecdf Weibull Gamma
## C1 1253   773   63     756   935
## C2 1221   776   62     755   925
## C3 1280   786   64     768   947
## T1 2504  1554  126     924  1346
## T2 2464  1532  124     942  1379
## T3 2408  1477  121     979  1354

6.
Density graphic with a new critical value

The graphics for the empirical (in black) and Gamma (in blue) density distributions of the Hellinger divergence of methylation levels for sample T1 are shown below. The 2-parameter gamma model is built using the parameters estimated in the non-linear fit of the $H$ values from sample T1. The critical value estimated from the 2-parameter gamma distribution, $H^{\Gamma}_{\alpha=0.05}=124$, is more 'conservative' than the critical value based on the empirical distribution, $H^{Emp}_{\alpha=0.05}=114.5$. That is, according to the empirical distribution, for a methylation change to be considered a signal its $H$ value must satisfy $H\geq114.5$, while according to the 2-parameter gamma model any cytosine carrying a signal must satisfy $H\geq124$.

suppressMessages(library(ggplot2))

# Some information for graphic
dt <- data[data$sample == "T1", ]
coef <- nlms.g2p$T1$Estimate # Coefficients from the non-linear fit
dgamma2p <- function(x) dgamma(x, shape = coef[1], scale = coef[2])
qgamma2p <- function(x) qgamma(x, shape = coef[1], scale = coef[2])

# 95% quantiles
q95 <- qgamma2p(0.95) # Gamma model based quantile
emp.q95 = quantile(divs$T1$hdiv, 0.95) # Empirical quantile

# Density plot with ggplot
ggplot(dt, aes(x = HD)) +
  geom_density(alpha = 0.05, bw = 0.2, position = "identity", na.rm = TRUE, size = 0.4) +
  xlim(c(0, 150)) +
  stat_function(fun = dgamma2p, colour = "blue") +
  xlab(expression(bolditalic("Hellinger divergence (HD)"))) +
  ylab(expression(bolditalic("Density"))) +
  ggtitle("Empirical and Gamma densities distributions of Hellinger divergence (T1)") +
  geom_vline(xintercept = emp.q95, color = "black", linetype = "dashed", size = 0.4) +
  annotate(geom = "text", x = emp.q95 - 20, y = 0.16, size = 5,
           label = 'bolditalic(HD[alpha == 0.05]^Emp==114.5)',
           family = "serif", color = "black", parse = TRUE) +
  geom_vline(xintercept = q95, color = "blue", linetype = "dashed", size = 0.4) +
  annotate(geom = "text", x = q95 + 9, y = 0.14, size = 5,
           label = 'bolditalic(HD[alpha == 0.05]^Gamma==124)',
           family = "serif", color = "blue", parse = TRUE) +
  theme(axis.text.x = element_text(face = "bold", size = 12, color = "black",
                                   margin = margin(1, 0, 1, 0, unit = "pt")),
        axis.text.y = element_text(face = "bold", size = 12, color = "black",
                                   margin = margin(0, 0.1, 0, 0, unit = "mm")),
        axis.title.x = element_text(face = "bold", size = 13, color = "black", vjust = 0),
        axis.title.y = element_text(face = "bold", size = 13, color = "black", vjust = 0),
        legend.title = element_blank(),
        legend.margin = margin(c(0.3, 0.3, 0.3, 0.3), unit = 'mm'),
        legend.box.spacing = unit(0.5, "lines"),
        legend.text = element_text(face = "bold", size = 12, family = "serif"))

References

1. Steerneman, Ton, K. Behnen, G. Neuhaus, Julius R. Blum, Pramod K. Pathak, Wassily Hoeffding, J. Wolfowitz, et al. 1983. "On the total variation and Hellinger distance between signed measures; an application to product measures." Proceedings of the American Mathematical Society 88 (4). Springer-Verlag, Berlin-New York: 684–84. doi:10.1090/S0002-9939-1983-0702299-0.

2. Sanchez, Robersy, and Sally A. Mackenzie. 2016. "Information Thermodynamics of Cytosine DNA Methylation." Edited by Barbara Bardoni. PLOS ONE 11 (3). Public Library of Science: e0150427. doi:10.1371/journal.pone.0150427.
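The Gamma-based critical value can also be sanity-checked outside R. The sketch below is my own (standard library only, not part of Methyl-IT): it plugs in the shape/scale estimates reported for sample T1 above, inverts the regularized lower incomplete gamma function by bisection to get the model's 95% quantile, and compares it with the empirical 95% quantile of a sample simulated from the same model. It should land near the $H^{\Gamma}_{\alpha=0.05}\approx124$ shown in the density graphic; in R the same quantile is simply qgamma(0.95, shape = 0.3866249, scale = 76.1580083).

```python
import math
import random

def reg_lower_gamma(a, x, max_terms=1000):
    """Regularized lower incomplete gamma P(a, x) via its power series:
    P(a, x) = x^a e^{-x} / Gamma(a) * sum_{n>=0} x^n / (a (a+1) ... (a+n))."""
    if x <= 0:
        return 0.0
    total, term = 0.0, 1.0 / a  # term for n = 0
    for n in range(1, max_terms):
        total += term
        term *= x / (a + n)
        if term < 1e-16 * total:
            break
    return total * math.exp(a * math.log(x) - x - math.lgamma(a))

def gamma_quantile(p, shape, scale):
    """Invert the Gamma CDF by bisection (the CDF is monotone in x)."""
    lo, hi = 0.0, 1e4
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if reg_lower_gamma(shape, mid / scale) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# shape/scale as reported by nlms.g2p$T1 above
q95 = gamma_quantile(0.95, shape=0.3866249, scale=76.1580083)

# empirical 95% quantile (nearest rank) of a sample simulated from the same model
random.seed(7)
sample = sorted(random.gammavariate(0.3866249, 76.1580083) for _ in range(50_000))
emp_q95 = sample[int(0.95 * len(sample)) - 1]
```

With 50,000 simulated sites the empirical quantile stays within a few units of the model quantile, which is the same closeness seen between $H^{Emp}_{\alpha=0.05}$ and $H^{\Gamma}_{\alpha=0.05}$ in the graphic above.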
I'm having trouble with this problem in Ahlfors' Complex Analysis (page 238): If a vertex of the polygon is allowed to be at $\infty$, what modification does the formula undergo? If in this context $\beta_k = 1$, what is the polygon like? The formula he is referring to is (probably) $$F(w) = C \int_0^w \prod_{k=1}^n ( w - w_k)^{-\beta_k}dw + C'$$ which maps the unit disk conformally onto a polygon $\Omega$ with outer angles $\{\beta_k \pi\}$. Here $\{w_k\}$ are points on the unit circle, and $C,C'$ are complex constants. Since $F(w)$ has no explicit dependence on the vertices of $\Omega$, I can't see what modification he is talking about. Shouldn't the exact same formula hold in all cases? Also, I believe that letting a vertex $z_k$ tend to $\infty$ should always force the corresponding outer angle $\beta_k \pi$ to $\pi$ (that is, $\beta_k \to 1$). Thus, I can't see how a polygon with an infinite vertex $z_k$ could have the corresponding $\beta_k$ other than $1$. Are my conclusions correct? If not, please help me figure this out. Thanks! P.S. I do understand that having some $\beta_k=1$ means that the polygon admits two infinite parallel line segments (?).
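For intuition about the finite-vertex case, the formula can be evaluated numerically. This sketch is my own (not from Ahlfors): it takes the simplest symmetric configuration, prevertices $w_k = i^k$ at the fourth roots of unity with $\beta_k = 1/2$ (so $\sum_k \beta_k = 2$, as required for a bounded polygon), for which $\prod_k(w-w_k)^{-1/2}$ equals $(1-w^4)^{-1/2}$ up to the constant $C$, and checks that the images of four symmetric points near the prevertices form a square.

```python
def F(w, n=20000):
    # midpoint-rule approximation of the integral of (1 - z^4)^(-1/2)
    # along the straight segment from 0 to w; for |w| <= 0.96 we have
    # Re(1 - z^4) > 0, so the principal square root is single-valued here
    h = w / n
    return sum((1 - ((j + 0.5) * h) ** 4) ** -0.5 * h for j in range(n))

# images of four symmetric points approaching the prevertices i^k
z = [F(0.96 * 1j ** k) for k in range(4)]
sides = [abs(z[(k + 1) % 4] - z[k]) for k in range(4)]
diag_over_side = abs(z[0] - z[2]) / sides[0]  # sqrt(2) for a square
```

By symmetry the four sides come out equal and the diagonal/side ratio is $\sqrt 2$, i.e. the image really is (approximately, since we stop at $|w|=0.96$) a square. The degenerate situation the exercise asks about is what happens when one of these images is pushed to $\infty$.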
1. Measurement of the ZZ production cross section in proton-proton collisions at √s = 8 TeV using the ZZ → ℓ−ℓ+ℓ′−ℓ′+ and ZZ → ℓ−ℓ+νν̄ decay channels with the ATLAS detector
Journal of High Energy Physics, ISSN 1126-6708, 2017, Volume 2017, Issue 1, pp. 1 - 53
A measurement of the ZZ production cross section in the ℓ−ℓ+ℓ′−ℓ′+ and ℓ−ℓ+νν̄ channels (ℓ = e, μ) in proton-proton collisions at √s = 8 TeV at the Large Hadron...
Hadron-Hadron scattering (experiments) | Fysik | Physical Sciences | Subatomic Physics | Naturvetenskap | Subatomär fysik | Natural Sciences
Journal Article

2. Search for heavy ZZ resonances in the ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄ final states using proton–proton collisions at √s = 13 TeV with the ATLAS detector
European Physical Journal C, ISSN 1434-6044, 04/2018, Volume 78, Issue 4
Journal Article

3. Search for new resonances decaying to a W or Z boson and a Higgs boson in the ℓ+ℓ−bb̄, ℓνbb̄, and νν̄bb̄ channels with pp collisions at √s = 13 TeV with the ATLAS detector
Physics Letters B, ISSN 0370-2693, 02/2017, Volume 765, Issue C, pp. 32 - 52
Journal Article

4. Search for heavy ZZ resonances in the ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄ final states using proton–proton collisions at √s = 13 TeV with the ATLAS detector
The European Physical Journal. C, Particles and Fields, ISSN 1434-6044, 04/2018, Volume 78, Issue 4, pp. 1 - 34
Journal Article

5. Measurement of exclusive γγ → ℓ+ℓ− production in proton–proton collisions at √s = 7 TeV with the ATLAS detector
Physics Letters B, ISSN 0370-2693, 10/2015, Volume 749, Issue C, pp. 242 - 261
Journal Article

6.
ZZ -> l(+)l(-)l'(+)l'(-) cross-section measurements and search for anomalous triple gauge couplings in 13 TeV pp collisions with the ATLAS detector
PHYSICAL REVIEW D, ISSN 2470-0010, 02/2018, Volume 97, Issue 3
Measurements of ZZ production in the l(+)l(-)l'(+)l'(-) channel in proton-proton collisions at 13 TeV center-of-mass energy at the Large Hadron Collider are...
PARTON DISTRIBUTIONS | EVENTS | ASTRONOMY & ASTROPHYSICS | PHYSICS, PARTICLES & FIELDS | Particle data analysis | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences
Journal Article

7. Search for heavy ZZ resonances in the ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄ final states using proton–proton collisions at √s = 13 TeV with the ATLAS detector
The European Physical Journal C, ISSN 1434-6044, 4/2018, Volume 78, Issue 4, pp. 1 - 34
A search for heavy resonances decaying into a pair of Z bosons leading to ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology
Journal Article

8.
Measurement of the ZZ production cross section in proton-proton collisions at √s = 8 TeV using the ZZ → ℓ−ℓ+ℓ′−ℓ′+ and ZZ → ℓ−ℓ+νν̄ decay channels with the ATLAS detector
Journal of High Energy Physics, ISSN 1029-8479, 1/2017, Volume 2017, Issue 1, pp. 1 - 53
A measurement of the ZZ production cross section in the ℓ−ℓ+ℓ′−ℓ′+ and ℓ−ℓ+νν̄ channels (ℓ = e, μ) in...
Quantum Physics | Quantum Field Theories, String Theory | Hadron-Hadron scattering (experiments) | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory | Nuclear Experiment
Journal Article

9. Measurement of the ZZ production cross section and Z → ℓ+ℓ−ℓ′+ℓ′− branching fraction in pp collisions at √s = 13 TeV
Physics Letters B, ISSN 0370-2693, 12/2016, Volume 763, Issue C, pp. 280 - 303
Journal Article

10. Measurement of event-shape observables in Z → ℓ+ℓ− events in pp collisions at √s = 7 TeV with the ATLAS detector at the LHC
European Physical Journal C, ISSN 1434-6044, 2016, Volume 76, Issue 7, pp. 1 - 40
Journal Article
As @DavidRicherby already points out, the confusion arises because different measures of complexity are getting mixed up.But let me elaborate a bit. Usually, when studying algorithms for polynomial multiplication over arbitrary rings, one is interested in the number of arithmetic operations in the ring that an algorithm uses. In particular, given some (commutative, unitary) ring $R$, and two polynomials $f,g \in R[X]$ of degree less than $n$, the Schönhage-Strassen algorithm needs $O(n \log{n} \log{\log{n}})$ multiplications and additions in $R$ in order to compute $fg \in R[X]$ by, roughly, adjoining $n$-th primitive roots of unity to $R$ to get some larger ring $D \supset R$ and then, using the Fast Fourier Transform over $D$, computing the product in $D$. If your ring contains an $n$-th root of unity, then this can be sped up to $O(n \log n)$ operations in $R$ by using the Fast Fourier Transform directly over $R$. More specifically, over $\mathbb{Z} \subset \mathbb{C}$, you can do this using $O(n \log n)$ ring operations (ignoring the fact that this would require exact arithmetic over the complex numbers). The other measure that can be taken into account is the bit complexity of an operation. And this is what we are interested in when multiplying two integers of bit length $n$.
Here, the primitive operations are multiplying and adding two digits (with carry). So, when multiplying two polynomials over $\mathbb{Z}$, you actually need to take into account the fact that the numbers that arise during computation cannot be multiplied using a constant number of primitive operations. This and the fact that $\mathbb{Z}$ doesn't have an $n$-th primitive root of unity for $n > 2$ prevents you from applying the $O(n \log n)$ algorithm. You overcome this by considering $f,g$ with coefficients from the ring $\mathbb{Z}/\langle 2^n + 1 \rangle$, since the coefficients of the product polynomial will not exceed this bound. There (when $n$ is a power of two), you have (the congruence class of) $2$ as an $n$-th root of unity, and by recursively calling the algorithm for coefficient multiplications, you can achieve a total of $O(n \log n \log \log n)$ primitive (i.e., bit) operations. This then carries over to integer multiplication. For an example that nicely highlights the importance of the difference between ring operations and primitive operations, consider two methods for evaluating polynomials: Horner's method and Estrin's method. Horner's method evaluates a polynomial $f = \sum_{i=0}^n f_i X^i$ at some $x \in \mathbb{Z}$ by exploiting the identity$$f(x) = (\ldots (f_n x + f_{n-1})x + \ldots)x + f_0$$while Estrin's method splits up $f$ into two parts $$H = \sum_{i=1}^{n/2} f_{n/2+i} X^i$$ and $$L = \sum_{i=0}^{n/2} f_{i} X^i$$i.e., $H$ contains the terms of degree $>n/2$ and $L$ the terms of degree $\leq n/2$ (assume $n$ is a power of two, for simplicity). Then, we can calculate $f(x)$ using$$f(x) = H(x)x^{n/2} + L(x)$$and applying the algorithm recursively. The former, using $n$ additions and multiplications, is proven to be optimal w.r.t. the number of additions and multiplications (that is, ring operations), the latter needs more (at least $n + \log n$).
But, on the level of bit operations, one can (quite easily) show that in the worst case, Horner's method performs $n/2$ multiplications of numbers of size at least $n/2$, leading to $\Omega(n^2)$ many bit operations (this holds even if we assume that two $n$-bit numbers can be multiplied in time $O(n)$), whereas Estrin's scheme uses $O(n \log^c n) = \tilde{O}(n)$ operations for some $c > 0$, which is, by far, asymptotically faster.
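To make the comparison concrete, here is a small sketch of both evaluation schemes (my own, not from the original answer; the split convention differs slightly from the indices in the text, and a real Estrin implementation would share the powers $x^{2^j}$ via repeated squaring rather than recompute them):

```python
def horner(coeffs, x):
    """Evaluate sum(coeffs[i] * x**i) with n multiplications and n additions."""
    acc = 0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def estrin(coeffs, x):
    """Split f into low/high halves, f(x) = H(x) * x**(n/2) + L(x), and recurse.
    Assumes len(coeffs) is a power of two."""
    n = len(coeffs)
    if n == 1:
        return coeffs[0]
    half = n // 2
    return estrin(coeffs[half:], x) * x ** half + estrin(coeffs[:half], x)

# Both schemes compute the same value; the difference is how the
# multiplications are balanced: Horner's accumulator grows with every step,
# while Estrin multiplies operands of comparable (halved) sizes at each level.
f = [3, 0, 2, 5]  # 3 + 2x^2 + 5x^3
assert horner(f, 10 ** 6) == estrin(f, 10 ** 6)
```

This is exactly the point of the answer: with big integers the per-multiplication cost depends on operand size, so the scheme with more ring operations can still win on bit operations.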
Here are a few trivial lemmas. I won't use anything about the rolling motion, just that the distance is defined by gluing pentagons edge-to-edge: The $dd$-circle of radius $k$, which I'll call $C_k$, is a closed polygonal curve. Let $D_k$ be the closed $dd$-disk of radius $k$; note that $D_k$ may contain holes and $C_k$ is not in general simple! (This happens even at $k=2$.) The orientations of the line segments forming $C_k$ (measured relative to the $+x$ ray) when $k$ is even (odd) take the form $\pi m/5$ with $m$ an odd (even) integer; this is because pentagons are glued to each other in only two orientations (up-pointing or down-pointing) and furthermore, we can only glue down-pointing pentagons to up-pointing ones and vice versa. (Note also that no line segments of $C_k$ lie on $C_{k+1}$.) It's not hard to see that $C_k$ is strictly contained within $D^\circ_{k+2}$, the interior of $D_{k+2}$ (the open $dd$-disk). Suppose that $v$ is a simple vertex of $C_k$ so that the interior angle $\alpha$ is defined and $\alpha > 4\pi/5$. Then $v$ is surrounded by the pentagons added to its adjacent segments and therefore lies inside $D^\circ_{k+1}$. Otherwise, if the interior angle $\alpha\leq 4\pi/5$ then $v$ is also a vertex on $C_{k+1}$. However, the interior angle at $v$ on $C_{k+1}$ is now $\alpha+6\pi/5>4\pi/5$, so it gets "eaten up" in the next layer. Here are pictures of $D_k$ for $k=1,\dots,8$ placed side by side with $D_{k}\setminus D_{k-1}$ (made by "hand" from edited versions of this file from Wikipedia): $D_1$: $D_2$ and $D_2\setminus D_1$: $D_3$ and $D_3\setminus D_2$: $D_4$ and $D_4\setminus D_3$: $D_5$ and $D_5\setminus D_4$: $D_6$ and $D_6\setminus D_5$: $D_7$ and $D_7\setminus D_6$: $D_8$ and $D_8\setminus D_7$: From these images, the following patterns are apparent: the "outer frontier" of $C_k$ is determined by a closed chain of $5k$ pentagons joined vertex to vertex.
When $k$ is odd, these pentagons are all "down-pointing" and when $k$ is even they are all "up-pointing". From now on assume $k\geq2$. Then the pentagon chain is formed from 10 "segments of pentagons", which I'll call pentasegments, joining 10 corner pentagons where the pentasegments change direction. There are two types of pentasegments: those where the bases of the constituent pentagons point outwards relative to the interior of $D_k$ (type I): and those where the vertices of the pentagons point outwards (type II): The top pentasegment of $C_k$ is type I if $k$ is odd and type II if $k$ is even, and the 10 pentasegments alternate between type I and type II so that there are 5 of each type. When $k$ is even, there are $k/2-1$ pentagons on each pentasegment lying strictly between the corners (the number of pentagons on each pentasegment including the 2 corners is $k/2+1$). When $k$ is odd, there are $(k-1)/2$ pentagons on each type I pentasegment between the corners and $(k-3)/2$ pentagons on each type II pentasegment between the corners. There are holes between each of the pentagons lying on the type II pentasegments, thus there are $5\lfloor \frac{k}{2}\rfloor$ holes in total. I think the above gives a full characterization of $C_k$ and also shouldn't be too hard to prove, though I'm finding it awkward to turn the pictures into words. First, the corners propagate along "zig-zag" paths of pentagons; for example, a formula for the coordinates in $\mathbb{C}$ of the centroid of one of the corners of $C_k$ is: $$2r_{in}\sum_{j=1}^ke^{i\pi m_j/5},$$ where $r_{in}=\frac{1}{2}\sqrt{1+\frac{2}{\sqrt{5}}}$ is the inradius of the regular pentagon (formula from mathworld), $m_{2l-1}=2$, and $m_{2l}=1$. Up to the overall factor $2r_{in}$, the sum simplifies to: $$\left\lceil\frac{k}{2}\right\rceil e^{2\pi i/5}+\left\lfloor\frac{k}{2}\right\rfloor e^{\pi i/5},$$ and there are 9 other similar formulas for the other corners.
Another easy step is seeing that type I pentasegments turn into type II pentasegments (and vice versa) after attaching the next layer of pentagons, and seeing how the number of pentagons along each type of pentasegment changes is also straightforward. Maybe there's a more elegant description which might then lead to a simple formula / algorithm to compute $dd(p)$. Here's some geometry which may help in writing an algorithm; as you can see, the details are straightforward but rather messy. If I haven't screwed up the law of cosines, the centroids of the corner pentagons of $C_k$ lie on the circle of radius $R_k$ centered at the origin, where: $$\left(\frac{R_k}{2r_{in}}\right)^2=\left\lceil\frac{k}{2}\right\rceil^2+\left\lfloor\frac{k}{2}\right\rfloor^2+2\left\lceil\frac{k}{2}\right\rceil\left\lfloor\frac{k}{2}\right\rfloor\cos\frac{\pi}{5}.$$ The 10-gon $P_k$ formed by these centroids is inscribed in this circle and is determined once we calculate the polar angles of two neighboring centroids, $\alpha_k,\beta_k$. This can be done with the law of sines but I won't write out expressions explicitly here. Let $S_k$ be the polygonal annulus lying strictly between $P_k$ and $P_{k+1}$. Given a point $p$, it will lie in some $S_k$ or on some $P_k$, and this tells us that $|dd(p)-k|\leq 1$. I don't know what the best algorithm for determining this $k$ is (something using the polar coordinates of $p$?), but I suspect Joseph O'Rourke will know. Once we get $k$, then it's possible to pin down $dd(p)$ exactly by analyzing in more detail where $p$ sits relative to the edges of $C_k$, though again I'm not sure what the most efficient algorithm would be.
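The corner-centroid formula and its closed form are easy to check numerically. The following sketch is my own (standard library only): it compares the zig-zag sum with the closed form, and checks the law-of-cosines expression for its squared modulus.

```python
import cmath
import math

def zigzag_sum(k):
    # sum_{j=1}^{k} e^{i*pi*m_j/5} with m_j = 2 for odd j and m_j = 1 for even j
    return sum(cmath.exp(1j * math.pi * (2 if j % 2 else 1) / 5)
               for j in range(1, k + 1))

def closed_form(k):
    # ceil(k/2) * e^{2*pi*i/5} + floor(k/2) * e^{pi*i/5}
    return (math.ceil(k / 2) * cmath.exp(2j * math.pi / 5)
            + (k // 2) * cmath.exp(1j * math.pi / 5))

for k in range(1, 13):
    s = zigzag_sum(k)
    a, b = math.ceil(k / 2), k // 2
    assert abs(s - closed_form(k)) < 1e-12
    # law of cosines: the two summands meet at angle pi/5
    assert abs(abs(s) ** 2 - (a * a + b * b + 2 * a * b * math.cos(math.pi / 5))) < 1e-9
```

The actual centroid carries the extra factor $2r_{in}$ (the distance between centroids of two edge-glued pentagons), so the same check applies to $R_k$ after rescaling.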
Radiation will kill you on the surface

Several other posts hit the same point, but to summarize briefly, Europa's surface receives 540 rem per day, which is probably a fatal dose. However, the 1080 rem you get in two days is definitely a fatal dose. You need shielding; water (and ice) are great shielding, there you go.

Water pressure is too high in the ocean

Let me make a few super-general assumptions to simplify the math. First let's assume an atmospheric pressure of zero, let's assume that the ice crust is pure water ice (density: 0.9167 g/cm$^3$), and let's assume the equation for hydrostatic pressure ($P=\rho gh$) can be naively applied under an ice sheet, an assumption that I argue is good enough. Other numbers we'll use are the surface gravity on Europa (1.315 m/s$^2$) and a 20 km depth of surface ice on Europa (estimates range between 10-30 km). $$P=\rho gh,$$ $$P = \left(\frac{0.9167\, g}{cm^3}\right)\left( \frac{1\, kg}{1000\, g}\right)\left(\frac{1000000\, cm^3}{1\, m^3}\right)\left(\frac{1.315\, m}{s^2}\right)\left(20000\, m\right)$$ $$P=24.1\, MPa = 238\, atm$$ That is equivalent to about 2500 m below the ocean on Earth. Building habitats with the strength of a submarine wouldn't be hard, but they are rated to about 500 m tops. Building a habitat to handle that pressure would be hard.

Inside the ice is just right

On the other hand, you can find a happy medium in the middle. The 1 MeV gamma tenth-thickness of water is about 0.6 m. The tenth-thickness is the distance of material needed to attenuate radiation by a factor of 10. I couldn't find the tenth-thickness of ice, so I will assume it is the same (possibly a horrible assumption). Therefore, under 85 m of ice, the radiation from Jupiter is about $$540\, rem \cdot 10^{-\frac{85\,m}{0.6\,m}} \approx 0. $$ At this depth the pressure is $$P = \left(\frac{0.9167\, g}{cm^3}\right)\left(1.315\, \frac{m}{s^2}\right)\left(85\, m\right) = 102.5\, kPa = 1.01\, atm.$$ No radiation and atmospheric pressure. Sounds about right to me!
Of course, this is not to say that the air of a habitat is at atmospheric pressure just because we are 85 m below the surface of the ice; since ice is solid, it doesn't work like that. But structures built at this depth won't have any pressure-related problems that they don't already have on Earth... unless the ice is moving. What building materials are available 85 m below the surface of Europa? Well... ice. Everything else you are going to have to bring yourself. The rocky interior is below another 20 km of ice and maybe 100 km of ocean, so getting anything from there would be pretty technically challenging. However, Jupiter is surrounded by moonlets and rings and what have you. If you want metal, just mine it out of loose material in the Jovian system and bring it down to the surface.
I'm a little late to this party, but here is an illustration of the method so many seem to suggest: $$\cos(\theta) -\sin(\theta) = 1 \\(\cos(\theta)-\sin(\theta))^2 = 1^2 \\\cos^2(\theta) - 2\sin(\theta)\cos(\theta) + \sin^2(\theta) = 1 \\-2\sin(\theta)\cos(\theta) + [\cos^2(\theta) + \sin^2(\theta)] = 1 \\-2\sin(\theta)\cos(\theta) + 1 = 1 \\-2\sin(\theta)\cos(\theta) = 0 \\2\sin(\theta)\cos(\theta) = 0.$$ Now we use the double-angle formula $\sin(2\theta) = 2\sin(\theta)\cos(\theta)$ to get $$2\sin(\theta)\cos(\theta)=0 \\\sin(2\theta) = 0 \\2\theta = k\pi \qquad \text{where $k$ is an integer} \\\theta = \frac{k\pi}{2}.$$ Now to check for extraneous solutions (squaring can introduce them). Since the least common multiple of the periods of the trig functions in the original equation is $2\pi$, we only need to check the values of $k$ for which $\theta$ lies between $0$ and $2\pi$: $k = 0 \implies \theta = 0$. Then we check: $\cos(0)-\sin(0) = 1-0 =1$. It works! $k=1\implies \theta = \pi/2$. Then we check: $\cos(\pi/2)-\sin(\pi/2) = 0-1 \neq 1$. Doesn't work! $k=2\implies \theta = \pi$. Then we check: $\cos(\pi)-\sin(\pi) = -1-0 \neq 1$. Doesn't work! $k = 3 \implies \theta = 3\pi/2$. Then we check: $\cos(3\pi/2)-\sin(3\pi/2) = 0-(-1) =1$. It works! When $k=4$ we get $\theta = 2\pi$, which is effectively the signal to stop, since further values of $k$ just repeat the same angles. So altogether we see that $k=0$ and $k=3$ both work. Since the solutions repeat with period $4$ in $k$, the valid values are $k=0,\pm4,\pm8,\pm12,\ldots$ together with $k = \ldots,-5,-1,3,7,11,\ldots$ (that is, $k\equiv 0$ or $3 \pmod 4$) in $\theta = \frac{k\pi}{2}$. So altogether, and more elegantly, as mentioned in other posts and comments, we have $\theta \in \{\, 2\pi k,\ \tfrac{3\pi}{2} + 2\pi k \mid k\in \mathbb{Z} \,\}$.
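The check for extraneous solutions is easy to automate; a quick numeric sweep over one period (a sketch, not part of the original argument) confirms which candidates survive:

```python
import math

def lhs(theta):
    return math.cos(theta) - math.sin(theta)

# Candidates theta = k*pi/2 over one full period (k = 0, 1, 2, 3):
good = [k for k in range(4) if abs(lhs(k * math.pi / 2) - 1) < 1e-12]
print(good)  # only k = 0 and k = 3 satisfy the original equation
```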
The title of your question asks for techniques that are impossible to break, to which the One Time Pad (OTP) is the correct answer, as pointed out in the other answers. The OTP is information-theoretically secure, which means that an adversary's computational abilities are irrelevant when it comes to finding the message. However, despite being perfectly ... I suppose there is a type of encryption that is not crackable using quantum computers: a one-time pad, i.e. a Vigenère cipher whose key is at least as long as the encoded string and is used only once. This cipher is impossible to crack even with a quantum computer. I will explain why: Let's assume our plaintext is ... The function $f$ is simply an arbitrary boolean function of a bit string: $f\colon \{0,1\}^n \to \{0,1\}$. For applications to breaking cryptography, such as [1], [2], or [3], this is not actually a 'database lookup', which would necessitate storing the entire database as a quantum circuit somehow, but rather a function such as \begin{equation*}x \... There is a good explanation by Craig Gidney here (he also has other great content, including a circuit simulator, on his blog). Essentially, Grover's algorithm applies when you have a function which returns True for one of its possible inputs, and False for all the others. The job of the algorithm is to find the one that returns True. To do this we express ... Yes, there are a lot of proposals for post-quantum cryptographic algorithms that provide the cryptographic primitives that we are used to (including asymmetric encryption with private and public keys). Giving an estimate for a generic quantum chip is impossible, as there is no standard implementation for the moment. Nevertheless, it is possible to estimate this number for a specific quantum chip with the information provided online. I found information on the IBM Q chips, so here is the answer for the IBM Q 5 Tenerife chip. In the link you will find ...
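To make the OTP point concrete, here is a minimal XOR one-time pad in Python (a sketch; the names are mine). Given only the ciphertext, every plaintext of the same length is exactly as likely as every other, which is why an adversary's computing power, quantum or otherwise, is irrelevant:

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    # Encryption and decryption are the same operation. The key must be
    # truly random, as long as the message, and never reused.
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))
ct = otp_xor(msg, key)
print(otp_xor(ct, key) == msg)  # XOR with the same key inverts itself
```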
On computational helpfulness in general: Without perhaps realising it, you are asking a version of one of the most difficult questions you can possibly ask about theoretical computer science. You can ask the same question about classical computers, only instead of asking whether adding 'quantumness' is helpful, you can ask: Is there a concise statement ... "Postselection" refers to the process of conditioning on the outcome of a measurement on some other qubit. (This is something that you can think of for classical probability distributions and statistical analysis as well: it is not a concept special to quantum computation.) Postselection has featured quite often (up to this point) in quantum mechanics ... From a pseudo-foundational standpoint, the reason why BQP is a differently powerful (to coin a phrase) class than NP is that quantum computers can be considered as making use of destructive interference. Many different complexity classes can be described in terms of (more or less complicated properties of) the number of accepting branches of an NTM. Given ... There are plenty of different variants, particularly with regard to the conditions on the Hamiltonian. It's a bit of a game, for example, to try and find the simplest possible class of Hamiltonians for which simulation is still BQP-complete. The statement will roughly be along the lines of: let $|\psi\rangle$ be a (normalised) product state, $H$ be a ... This is not a very enlightening concept, because most interesting quantum algorithms, such as Shor's algorithm, involve some classical computations as well. While you can always shoehorn a classical computation into a quantum computer, it would be at unnecessarily exorbitant cost. We don't yet know, of course, exactly what problems will be hard to solve ...
Not sure if this is strictly what you're looking for, and I don't know that I'd qualify this as "exponential" (I'm also not a computer scientist, so my ability to do algorithm analysis is more or less nonexistent...), but a recent result by Bravyi et al. presented a class of '2D Hidden Linear Function problems' that provably use fewer resources on a quantum ... TL;DR: No, we do not have any precise "general" statement about exactly which type of problems quantum computers can solve, in complexity theory terms. However, we do have a rough idea. According to Wikipedia's sub-article on Relation to computational complexity theory: The class of problems that can be efficiently solved by quantum computers is ... There is an important difference between physical operations and logical operations. Physical operations will be slightly imperfect, and are performed on qubits that are also imperfect. The rate at which these can be performed depends on what physical system is being used to realize the qubits. For example, superconducting qubits can perform two-qubit gates (... BQP is defined considering circuit size, which is to say the total number of gates. This means that it incorporates: Number of qubits — because we can ignore any qubits which are not acted on by a gate. This will be polynomially bounded relative to the input size, and often a modest polynomial (e.g. Shor's algorithm only involves a number of ... Suppose a function $f\colon {\mathbb F_2}^n \to {\mathbb F_2}^n$ has the following curious property: there exists $s \in \{0,1\}^n$ such that $f(x) = f(y)$ if and only if $x + y = s$. If $s = 0$ is the only solution, this means $f$ is 1-to-1; otherwise there is a nonzero $s$ such that $f(x) = f(x + s)$ for all $x$, which, because $2 = 0$, means $f$ is 2-to-... I don't think there are clear reasons for a 'yes' or a 'no' answer.
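The 'curious property' in the last paragraph (the promise in Simon's problem) is easy to illustrate classically; here is a toy $n=3$ example (the particular $f$ is my own choice) satisfying $f(x)=f(y)$ iff $x \oplus y \in \{0, s\}$:

```python
n, s = 3, 0b101  # hidden shift s

def f(x):
    # Collapse each pair {x, x XOR s} to one representative, so f is
    # constant exactly on those pairs: f(x) == f(y) iff x ^ y in {0, s}.
    return min(x, x ^ s)

for x in range(2 ** n):
    for y in range(2 ** n):
        assert (f(x) == f(y)) == (x ^ y in (0, s))
print("promise holds for all pairs")
```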
However, I can provide a reason why PP was much more likely to admit such a characterisation than NP was, and give some intuitions for why NP might never have a simple characterisation in terms of a modification of the quantum computational model. Counting complexity: The classes NP and PP ... For comparison-based sorting (and search), the bounds seem to match those of classical computers: $\Omega(N\log N)$ for sorting and $\Omega(\log N)$ for search, as shown by Hoyer et al. A couple of quantum sorting algorithms are listed in the 'Related work' section of "Quantum sort algorithm based on entanglement qubits {00, 11}". DISCLAIMER: the quantum bogosort is a joke algorithm. Let me just state the algorithm in brief: Step 1: Using a quantum randomization algorithm, randomize the list/array, such that there is no way of knowing what order the list is in until it is observed. This will divide the universe into $O(N!)$ universes; however, the division has no cost, as it happens ... There is always a difference between a quantum system and a classical metaphor. If a system is a qubit in a pure state, then there always exists a measurement basis (or alternatively a proper unitary gate for the standard measurement basis) such that the measurement outcome is 100% predictable, and a measurement basis with a 50%-50% measurement outcome. You ... The essential feature of this problem is that while both the quantum and classical algorithms can make use of the efficient classical function for calculating $a^k\text{ mod }N$, the issue is how many times each has to evaluate the function. For the classical algorithm you're suggesting, you'd calculate $a\text{ mod }N$, and $a^2\text{ mod }N$, and $a^... There is evidently a classical polynomial-time algorithm for finding a four-coloring of a given planar graph, so the answer to the question is "yes" for the trivial reason that every polynomial-time classical algorithm can be implemented as a polynomial-time quantum algorithm.
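That classical evaluate-until-you-hit-1 approach can be sketched in a few lines (function name mine). The point is that it needs one modular multiplication per candidate exponent, up to order-of-$N$ of them in the worst case, whereas Shor's algorithm extracts the period with polynomially many operations:

```python
def multiplicative_order(a, N):
    # Brute force: compute a, a^2, a^3, ... mod N until we return to 1.
    # One modular multiplication per exponent tried; worst case O(N) steps.
    # Assumes gcd(a, N) == 1, otherwise the loop never terminates.
    x, k = a % N, 1
    while x != 1:
        x = (x * a) % N
        k += 1
    return k

print(multiplicative_order(2, 15))  # 4, since 2^4 = 16 = 1 (mod 15)
```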
(Also, polynomial time implies polynomial space, for both quantum and classical ... There is a newer result from Robert Beals, Stephen Brierley, Oliver Gray, Aram Harrow, Samuel Kutin, Noah Linden, Dan Shepherd, and Mark Stather. They present, in Table 2 of Efficient Distributed Quantum Computing, results for bubble sort and insertion sort; it is mainly for "network sorting", but they give more references about sorting. A quick and very ... There is no such general statement, and it is unlikely there will be one soon. I will explain why this is the case. For a partial answer to your question, looking at the problems in the two complexity classes BQP and PostBQP might help. The complexity classes that come closest to the problems that can be solved efficiently by quantum computers of the quantum ... I think John Watrous' survey is a great place to start (Professor Watrous recommended it to me a long, long time ago and I have been hooked ever since!): J. Watrous. Quantum computational complexity. Encyclopedia of Complexity and System Science, Springer, 2009. arXiv:0804.3401 [quant-ph]. To the best of my knowledge, it has the highest complexity classes to ... Summary: There is a theory of complexity of search problems (also known as relation problems). This theory includes classes called FP, FNP, and FBQP, which are effectively about solving search problems with different sorts of resources. From search problems, you can also define decision problems, which allows you to relate search problems to the usual classes ... Sometimes you might know the eigenvector, and the computational question that you want to answer is what the eigenvalue is. For example, any function evaluation $f(x)$ defined by the action of a $U$ $$U:|x\rangle|y\rangle\mapsto|x\rangle|y\oplus f(x)\rangle$$ for $x\in\{0,1\}^n$, $y\in\{0,1\}$ has well defined eigenvectors, $$|x\rangle(|0\rangle\pm|1\...
If you don't supply a $|u\rangle$ as an input, there are two possible things you might want to get out: 1. the $\varphi$ for a randomly chosen (but unknown) eigenstate $|u\rangle$; 2. both $\varphi$ and $|u\rangle$ for one or more eigenstates. Let's first look at 1. Since the eigenstates form a complete basis, any input state you use can be interpreted as a ... Two quick comments before explaining this: The notes don't actually contain a proof of the claim made about the simulation; the intention was only to give a basic idea of how the simulation works. It is therefore not at all surprising that the mathematical justification is not clear, because the notes didn't even try to explain it. (It was the last lecture ... Clearly $\mathrm{QMA \subseteq P^{QMA}}$, as we can construct a $\mathrm{P^{QMA}}$ algorithm to solve any problem in $\mathrm{QMA}$ by using an oracle call. The question is whether the reverse containment is known to hold. And the answer is that the reverse containment is not known to hold (and I think is not expected to hold). Of course, computational ...
This is very much an open question, but yes, there is a considerable amount of work being done on this front. Some clarifications: It is, first of all, to be noted that there are two major ways to merge machine learning (and deep learning in particular) with quantum mechanics/quantum computing: 1) ML $\to$ QM: apply classical machine learning ... I will only answer the part of the question regarding how quantum mechanics can be useful for the analysis of classical data via machine learning. There are also works related to "quantum AI", but that is a much more speculative (and less defined) kind of thing, which I do not want to go into. So, can quantum computers be used to speed up data analysis via ... Here's a list of other resources to learn about quantum machine learning: An introduction to quantum machine learning; The quest for a Quantum Neural Network; Quantum Machine Learning: What Quantum Computing Means to Data Mining; Quantum Machine Learning 1.0. Yes, all classical algorithms can be run on quantum computers; moreover, any classical algorithm involving searching can get a $\sqrt{\text{original time}}$ boost by the use of Grover's algorithm. An example that comes to mind is treating the fine-tuning of neural network parameters as a "search for coefficients" problem. For the fact there are clear ... You are not swapping the first register (one qubit) with the entire second register ($k$ qubits), but just with the first qubit of the second register. What you need to know is what is meant by $\langle x | y \rangle$ when $x$ is one qubit and $y$ is $k$ qubits. The resulting state is the $k-1$ qubit state you get when you project one qubit (generally the ...
Much of the research on quantum algorithms that may have applications to AI is centered on quantum machine learning (QML). While I'd argue there are quite a few hypothetical reasons that QML could be used in machine learning some time in the future, QML research is in its infancy relative to classical machine learning research, and its practical benefits ... There are arguments that our brains are quantum mechanical, and arguments against, so that's a hotly debated topic. Fisher at UCSB has some speculative thinking about how brains might still use quantum effects even though they aren't quantum mechanical in nature. While there's no direct experimental evidence, there are two references you might want to read: ... I've not looked at those papers specifically, but there are several different models for quantum computation (see here), including the gate model and the adiabatic model, which are polynomial-time equivalent. That means if one has an exponential speedup, so does the other, and the discussion should be interchangeable. The title, if not the question body, also ... In general, the efficiency of quantum machine learning techniques will be calibrated and measured more in terms of energy efficiency, the ability to handle complex computational problems and NP-hard problems, and the ability to ensemble different domain algorithms, than in terms of speed and learning rate. However, there could be exceptionally faster quantum algorithms ... Much of the work done so far with quantum computers has been focused on solving combinatorial optimization problems. Both D-Wave style quantum annealers and the more recent gate-model machines from Rigetti, IBM, and Google have been solving combinatorial optimization problems. One promising approach to connecting machine learning and quantum computing ... Gaussian processes are a key component of the model-building procedure at the core of Bayesian optimization.
Therefore, speeding up the training of Gaussian processes directly enhances Bayesian optimization. The recent paper by Zhao et al. on Quantum algorithms for training Gaussian processes does exactly this. First, they reduce the images from $28\times 28$ to $4\times 4$ (by downsampling), then convert the pixels into binary values by comparing each to a threshold. Then they encode the data in a quantum uniform superposition (with each computational basis state representing a bitstring data image with its label). There are different review overviews of Quantum Machine Learning (see the question referenced in the comments to find a few), but it is an evolving field, so you will have to keep updated. There is also an edX online course on the subject by Wittek, recently released, if you would like a little more hands-on format. I would advise starting with the basics ... The paper you refer to is incomplete and not quite right on this part. First, a minus sign should be present in: $$ |\phi\rangle = \frac{1}{\sqrt{Z}} (|a||0\rangle - |b||1\rangle) $$ Secondly, if you look at the original reference for this procedure (stated for a special case of the algorithm, but it can be generalized), what you swap is actually the ancilla qubit of $ |\psi\... One can recommend PennyLane by Xanadu.AI. You can find complete examples of quantum machine learning algorithms (e.g. Iris classification) using hybrid quantum-classical computations. Additionally, they offer built-in plugins for IBM Qiskit, PyQuil, etc., to enable running PennyLane QML code on IBM and Rigetti quantum hardware. In this particular one (by quickly looking over it), they refer mostly to the logic-gate approach. But nothing prevents them from talking about both; it depends on the algorithm and on which original model it was thought/designed for. Generally, if it is linear-algebra based, it will be the logic-gate approach. If they refer to optimization of a QUBO, they will ...
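The downsample-and-binarize preprocessing described above can be sketched with NumPy (the block-averaging and the 0.5 threshold are my guesses at the details, not taken from the paper):

```python
import numpy as np

def preprocess(img28, threshold=0.5):
    # 28x28 -> 4x4 by averaging 7x7 blocks, then binarize each pixel.
    blocks = img28.astype(float).reshape(4, 7, 4, 7).mean(axis=(1, 3))
    return (blocks > threshold).astype(int)

img = np.zeros((28, 28))
img[:14, :] = 1.0          # toy "image": bright top half
print(preprocess(img))     # top two rows of the 4x4 grid come out as 1
```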
I was not able to find references specifically in quantum biology. I found, however, a review called Quantum Assisted Biomolecular Modelling. You may find it interesting, but this is from 2010; the field has evolved since, though I guess the ideas remain similar. The authors focus more on the idea of the ability of a quantum computer to try every classical path ... All of the answers here seem to be ignoring a fundamental practical limitation: deep learning specifically works best with big data. MNIST is 60,000 images; ImageNet is 14 million images. Meanwhile, the largest quantum computers right now have 50-72 qubits. Even in the most optimistic scenarios, quantum computers that can handle the volumes of data that ... Here is a recent development from Xanadu, a photonic quantum circuit which mimics a neural network. This is an example of a neural network running on a quantum computer. This photonic circuit contains interferometers and squeezing gates which mimic the weighting functions of a NN, a displacement gate acting as a bias, and a non-linear transformation similar to ... I will assume you are asking about D-Wave's quantum annealer. If there is a part of the learning process that can fit the QUBO (Quadratic Unconstrained Binary Optimization) formulation, then yes. The problem, however, is what to consider as the binary variables of your problem. In CNNs, we have in general real-valued parameters that we tweak for training (using ... Quantum simulation can be used to test models that could describe certain biological processes. For example, a 2018 paper by Potočnik et al. examined light-harvesting models using superconducting quantum circuits (see figure below). Currently, it's an open question whether quantum mechanics plays an important functional role in biological processes. Some ...
The problem is that you applied a Swap gate when you should have applied a CSWAP, and so you never entangled the readout qubit with your query states (as a result the readout qubit will always return a "0", which makes sense because the net effect of $HH|0\rangle$ is $I|0\rangle$).Continuing your derivation starting from just after the first Hadamard, we ...
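A tiny statevector simulation (NumPy; all names mine) shows the difference: with the controlled swap, the swap test gives $P(\text{ancilla}=0) = \tfrac{1}{2}(1 + |\langle x|y\rangle|^2)$, whereas an uncontrolled SWAP of the two query qubits never entangles the ancilla, whose $H\cdot H = I$ leaves it reading 0 every time:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Three qubits: ancilla (q0), |x> (q1), |y> (q2); basis index = 4a + 2x + y.
H0 = np.kron(H, np.eye(4))       # Hadamard on the ancilla only
CSWAP = np.eye(8)
CSWAP[[5, 6]] = CSWAP[[6, 5]]    # swap q1,q2 only when the ancilla is 1
SWAP = np.eye(8)
SWAP[[1, 2]] = SWAP[[2, 1]]      # uncontrolled swap of q1,q2 ...
SWAP[[5, 6]] = SWAP[[6, 5]]      # ... in both ancilla sectors

def p_ancilla_0(gate, x, y):
    psi = H0 @ gate @ H0 @ np.kron([1.0, 0.0], np.kron(x, y))
    return float(np.sum(np.abs(psi[:4]) ** 2))  # P(ancilla reads 0)

zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(p_ancilla_0(CSWAP, zero, one))  # 0.5: proper swap test, orthogonal states
print(p_ancilla_0(SWAP, zero, one))   # 1.0: ancilla never entangled
```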
Abbreviation: FL$_{ew}$

A FL$_{ew}$-algebra is a FL$_{e}$-algebra $\mathbf{A}=\langle A, \vee, \wedge, \cdot, 1, \backslash, /, 0\rangle$ that is integral (i.e. satisfies the weakening rules): $0\le x\le 1$

Remark: This is a template. If you know something about this class, click on the 'Edit text of this page' link at the bottom and fill out this page. It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes.

Let $\mathbf{A}$ and $\mathbf{B}$ be … . A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x ... y)=h(x) ... h(y)$

An … is a structure $\mathbf{A}=\langle A,...\rangle$ of type $\langle...\rangle$ such that … $...$ is …: $axiom$ $...$ is …: $axiom$

Example 1:

Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described.

$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$

[[...]] subvariety [[...]] expansion [[...]] supervariety [[...]] subreduct
Abbreviation: Grp

A group is a structure $\mathbf{G}=\langle G,\cdot,^{-1},e\rangle$, where $\cdot$ is an infix binary operation, called the group product, $^{-1}$ is a postfix unary operation, called the group inverse, and $e$ is a constant (nullary operation), called the identity element, such that

$\cdot$ is associative: $(xy)z=x(yz)$
$e$ is a left identity for $\cdot$: $ex=x$
$^{-1}$ gives a left inverse: $x^{-1}x=e$.

Remark: It follows that $e$ is a right identity and that $^{-1}$ gives a right inverse: $xe=x$, $xx^{-1}=e$.

Let $\mathbf{G}$ and $\mathbf{H}$ be groups. A morphism from $\mathbf{G}$ to $\mathbf{H}$ is a function $h:G\rightarrow H$ that is a homomorphism: $h(xy)=h(x)h(y)$, $h(x^{-1})=h(x)^{-1}$, $h(e)=e$.

Example 1: $\langle S_{X},\circ ,^{-1},id_{X}\rangle$, the collection of permutations of a set $X$, with composition, inverse, and identity map.

Example 2: The general linear group $\langle GL_{n}(V),\cdot ,^{-1},I_{n}\rangle$, the collection of invertible linear transformations of an $n$-dimensional vector space $V$, with composition, inverse, and the identity transformation.

Classtype: variety
Equational theory: decidable in polynomial time
Quasiequational theory: undecidable
First-order theory: undecidable
Congruence distributive: no ($\mathbb{Z}_{2}\times \mathbb{Z}_{2}$)
Congruence modular: yes
Congruence n-permutable: yes, $n=2$; $p(x,y,z)=xy^{-1}z$ is a Mal'cev term
Congruence regular: yes
Congruence uniform: yes
Congruence types: 1=permutational
Congruence extension property: no; consider a non-simple subgroup of a simple group
Definable principal congruences:
Equationally def. pr. cong.: no
Amalgamation property: yes
Strong amalgamation property: yes
Epimorphisms are surjective: yes
Locally finite: no
Residual size: unbounded

$\begin{array}{lr} f(1)= &1\\ f(2)= &1\\ f(3)= &1\\ f(4)= &2\\ f(5)= &1\\ f(6)= &2\\ f(7)= &1\\ f(8)= &5\\ f(9)= &2\\ f(10)= &2\\ f(11)= &1\\ f(12)= &5\\ f(13)= &1\\ f(14)= &2\\ f(15)= &1\\ f(16)= &14\\ f(17)= &1\\ f(18)= &5\\ \end{array}$

Information about small groups up to size 2000: http://www.tu-bs.de/~hubesche/small.html
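As a quick concrete check of the Mal'cev term in the table, take the cyclic group $\mathbb{Z}_6$ written additively (example mine), where $p(x,y,z)=xy^{-1}z$ becomes $x-y+z \pmod 6$; the defining Mal'cev identities $p(x,y,y)=x$ and $p(x,x,y)=y$ hold for every pair:

```python
n = 6  # Z_6 under addition mod 6

def p(x, y, z):
    # Mal'cev term x * y^{-1} * z, written additively in Z_n.
    return (x - y + z) % n

assert all(p(x, y, y) == x and p(x, x, y) == y
           for x in range(n) for y in range(n))
print("Mal'cev identities hold in Z_6")
```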
Interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function. When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1... Consider a random binary string where each bit can be set to 1 with probability $p$. Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ whose $x$-th bit is set to 1. Moreover, $y$ bits are set to 1, including the $x$-th bit, and there are no runs of $k$ consecutive zer... The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$. Why, in the definition of algebraic closure, do we need $\overline F$ to be algebraic over $F$? That is, if we remove the '$\overline F$ is algebraic over $F$' condition from the definition of algebraic closure, do we get a different result? Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$). Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa... @AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works. Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme in the past 2 months. Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length.
Is that often part of the definition, or derived from it (I don't see how it could be the latter)? Well, that then becomes a chicken-and-egg question. Did we have the reals first and simplify from them to more abstract concepts, or did we have the abstract concepts first and build them up to the idea of the reals? I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ... I was watching this lecture, and in reference to the above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side. On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them or where their definition appears in Hatcher's book? Suppose $\sum a_n z_0^n = L$ and $|z| < |z_0|$. Then $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$, so $|a_n z^n| = |a_n z_0^n| \left|\dfrac{z}{z_0}\right|^n < \dfrac12 \left|\dfrac{z}{z_0}\right|^n$, so $a_n z^n$ is absolutely summable, so $a_n z^n$ is summable. Let $g : [0,\frac{1}{2}] \to \mathbb R$ be a continuous function. Define $g_n : [0,\frac{1}{2}] \to \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s)\, ds,$ for all $n \geq 1.$ Show that $\lim_{n\to\infty} n!\,g_n(t) = 0,$ for all $t \in [0,\frac{1}{2}]$. Can you give some hint? My attempt: $t\in [0,1/2]$. Consider the sequence $a_n(t)=n!\,g_n(t)$. If $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}<1$, then it converges to zero.
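For the iterated-integral problem above, a numeric experiment (my own sketch, assuming Python with NumPy; the choice $g=\cos$ is arbitrary) is consistent with $n!\,g_n(t)\to 0$ on $[0,\frac12]$:

```python
import math
import numpy as np

# Grid on [0, 1/2]; g_{n+1}(t) = integral_0^t g_n(s) ds via cumulative trapezoid.
t = np.linspace(0.0, 0.5, 2001)
dt = t[1] - t[0]
g = np.cos(t)  # g_1 = g; any continuous function works

def integrate_cumulative(y, dt):
    # Cumulative trapezoidal rule, with value 0 at t = 0.
    return np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) * dt / 2)))

vals = []
for n in range(1, 16):
    vals.append(math.factorial(n) * abs(g[-1]))  # n! * |g_n(1/2)|
    g = integrate_cumulative(g, dt)

# Since |g_n(t)| <= M t^{n-1}/(n-1)! with M = max|g|, we get
# n! |g_n(t)| <= M n t^{n-1} -> 0 for t <= 1/2.
print(vals[-1])  # small (the bound gives 15*(1/2)**14 ~ 9e-4)
```

A more robust route than the ratio test (which is delicate when $g_n$ vanishes somewhere) is the sup bound in the comment: $|g_n(t)|\le M\,t^{n-1}/(n-1)!$ with $M=\max|g|$, so $n!\,|g_n(t)|\le M\,n\,t^{n-1}\to 0$ for $t\le\frac12$.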
I have a bilinear functional that is bounded from below. I try to approximate the minimum by an ansatz function that is a linear combination of $n$ independent functions of the proper function space. I now obtain an expression that is bilinear in the coefficients. Using the stationarity condition (all derivatives of the functional w.r.t. the coefficients = 0), I get a set of $n$ linear homogeneous equations in the $n$ coefficients. Now, instead of directly attempting to solve the equations for the coefficients, I look at the secular determinant, which must be zero, since otherwise no non-trivial solution exists. This "characteristic polynomial" directly yields all permissible approximation values of the functional from my linear ansatz, avoiding the necessity of solving for the coefficients. I have trouble formulating the question precisely, but it strikes me that a direct solution of the equations can be circumvented, and the values of the functional obtained directly from the condition that the determinant is zero. I wonder if there is something deeper in the background, so to speak a more general principle. If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome with digitsum($z$) = digitsum($x$). > Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel. (Translation: It is well known that Paul du Bois-Reymond was the first to demonstrate the existence of an everywhere-continuous function whose Fourier series diverges at a point. Afterwards, Hermann Amandus Schwarz gave a simpler example.)
It's discussed very carefully (but no formula explicitly given) in my favorite introductory book on Fourier analysis, Körner's Fourier Analysis; see pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!!
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology. For continuous functions $f$ and $g$, the cross-correlation is defined as $(f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\, g(\tau+t)\,d\tau$, whe... That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the $Y_{t+\tau}$? And how on earth do I load all that data at once without it taking forever? And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time? Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"? Alternatively, where could I go in order to have such a question answered? @tpg2114 To reduce the number of data points for calculating the time correlation, you can run two copies of exactly the same simulation in parallel, separated by the time lag dt. Then there is no need to store all snapshots and spatial points. @DavidZ I wasn't trying to justify its existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably a historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series. Each one is equally spaced, 1e-9 seconds apart. The black line is $\frac{d T}{d t}$ and doesn't have an axis -- I don't care what the values are. The solid blue line is the abs(shear strain) and is valued on the right axis. The dashed blue line is the result from scipy.signal.correlate and is valued on the left axis. So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how, because I don't know how the result is indexed in time. Related: Why don't we just ban homework altogether? Banning homework: vote and documentation. We're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th... So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month (2) do we reword the homework policy, and how (3) do we get rid of the tag. I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating.
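Regarding the three numbered questions above, here is how I'd extract the lead/lag on a synthetic pair of signals (a sketch, assuming Python with NumPy/SciPy; the signal names and the 7-sample delay are made up). Subtracting the mean first addresses the sign puzzle, and for two length-200 inputs mode='full' returns 2·200−1 = 399 samples, so a 400-long result suggests the inputs weren't both length 200:

```python
import numpy as np
from scipy.signal import correlate

rng = np.random.default_rng(0)
n = 200
lag_true = 7  # synthetic example: strain leads dT/dt by 7 samples

strain = rng.standard_normal(n)
dTdt = np.concatenate((np.zeros(lag_true), strain[:-lag_true]))  # delayed copy

# Subtract the means first; a DC offset can make every correlation
# value negative even for visually "positive" signals.
a = dTdt - dTdt.mean()
b = strain - strain.mean()

c = correlate(a, b, mode="full")   # length 2*n - 1 = 399
lags = np.arange(-(n - 1), n)      # lag axis for mode="full"
lag_est = lags[np.argmax(c)]       # argmax gives the lag; positive = a trails b
print(lag_est)  # 7
```

Recent SciPy versions also provide scipy.signal.correlation_lags, which builds the same lag axis for you.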
Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question It'd be better if we could do it quick enough that no answers get posted until the question is clarified to satisfy the current HW policy For the SHO, our teacher told us to scale$$p\rightarrow \sqrt{m\omega\hbar} ~p$$$$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$And then define the following$$K_1=\frac 14 (p^2-q^2)$$$$K_2=\frac 14 (pq+qp)$$$$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$The first part is to show that$$Q \... Okay. I guess we'll have to see what people say but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay @jinawee oh, that I don't think will happen. In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have. So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual." Ie. 
is it conceptual to be stuck when deriving the distribution of microstates cause somebody doesn't know what Stirling's Approximation is Some have argued that is on topic even though there's nothing really physical about it just because it's 'graduate level' Others would argue it's not on topic because it's not conceptual How can one prove that$$ \operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$for a sufficiently well-behaved operator $\cal{A}?$How (mathematically) rigorous is the expression?I'm looking at the $d=2$ Euclidean case, as discuss... I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed. And what about selfies in the mirror? (I didn't try yet.) @KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote"-- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean. Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods. Or maybe that can be a second step. If we can reduce visibility of HW, then the tag becomes less of a bone of contention @jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as Homework @Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter. @Dilaton also, have a look at the topvoted answers on both. Afternoon folks. 
I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no, because it's really a math question, but I figured I'd ask anyway) @DavidZ Ya, I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at Math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc.). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on. hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least. Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes. MO is for research-level mathematics, not "how do I compute X" user54412 @KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of Math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube @ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (cf. Hawking's recent paper)
Let us consider the set of complex numbers and the binary operation $\circ$ defined by $z_a\circ z_b=|z_a|e^{i\Theta(z_b)}$, where $\Theta(z_b)$ is the argument of the complex number $z_b$. Explain whether the set of complex numbers together with the binary operation $\circ$ forms a monoid. I know I need to prove whether it satisfies closure, associativity and identity. I'm not sure how to start showing this; I've tried to put it into polar form, which gives $z_a\circ z_b=|z_a|(\cos \Theta(z_b)+i\sin \Theta(z_b))$, but I have no idea what to do next. Any help will be appreciated, thanks.
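Not a solution, but a numeric sanity check worth running first (a sketch, assuming Python; reading the operation with the $i$ in the exponent, as the polar form suggests). It points at which axioms to attack: associativity looks fine on samples, while a single identity element cannot serve elements with different arguments:

```python
import cmath

# op(a, b) = |a| * e^{i*arg(b)}; the i in the exponent is my reading
# of the polar form given in the question.
def op(a, b):
    return abs(a) * cmath.exp(1j * cmath.phase(b))

samples = [1 + 1j, -2j, 3, -1 + 0.5j, 0.25 - 4j]

# Associativity: both groupings reduce to |a| * e^{i*arg(c)}.
for a in samples:
    for b in samples:
        for c in samples:
            assert abs(op(op(a, b), c) - op(a, op(b, c))) < 1e-12

# Identity hunt: z∘e = |z| e^{i*arg(e)} equals z only when arg(e) = arg(z),
# so one fixed e cannot work for elements with different arguments.
z1, z2 = 1 + 1j, -2j
e_candidate = 1 + 1j   # arg = pi/4: acts as an identity on z1 ...
assert abs(op(z1, e_candidate) - z1) < 1e-12
assert abs(op(z2, e_candidate) - z2) > 0.1   # ... but not on z2
```

The experiment suggests proving associativity symbolically (both sides are $|z_a|e^{i\Theta(z_c)}$) and then arguing that no identity exists.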
Below is an exercise that is really giving me a hard time. I believe that there is a simple way around it, but I cannot find it: Assume the correct regression model is Y = X$\beta$ + $\epsilon$ with E($\epsilon$) = 0 and var($\epsilon$) = $\sigma^2$I. Assume the matrix X of dimensions n × m with m < n has full rank. Denote by $\hat\beta$ the ordinary least squares estimator of $\beta$. Assume as known that the upper left corner of the inverse of $$\begin{bmatrix}\Sigma_{11}&\Sigma_{12}\\\Sigma_{21}&\Sigma_{22} \end{bmatrix}$$ is $\Sigma^{11} = \Sigma_{11}^{-1}+\Sigma_{11}^{-1}\Sigma_{12}(\Sigma_{22}-\Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12})^{-1}\Sigma_{21}\Sigma_{11}^{-1}$ and the lower right corner is $\Sigma^{22} = (\Sigma_{22}-\Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12})^{-1}$. a. Assume that we forget some independent variables and fit the regression model $Y = X_1\beta_{1\ast}+\epsilon_{\ast}$, where X = [X1; X2], E($\epsilon_{\ast}$) = 0 and var($\epsilon_{\ast}$) = $\sigma^2$I. Write $\beta$=$\begin{bmatrix}\beta_{1}\\\beta_{2}\end{bmatrix}$. Assuming the "wrong" model, we estimate $\beta_{1}$ by $\hat\beta_{1\ast}$=$(X_1^TX_1)^{-1}X_1^TY$. Let $\hat\beta_1$ be the best unbiased linear estimator of $\beta_1$ in the correct model. Show that var($\hat\beta_1$) − var($\hat\beta_{1\ast}$) = $AB^{-1}A^{T}$, where A = $(X_1^TX_1)^{-1}X_1^TX_2$ and B=$X_2^TX_2-X_2^TX_1A$. Now, my first idea was to express $\hat\beta_1$ from the partitioned model; using Takeshi Amemiya's textbook, $\hat\beta_1$=$(X_1^TM_2X_1)^{-1}X_1^TM_2Y$, where $M_2=I-X_2(X_2^TX_2)^{-1}X_2^T$ is idempotent and $M_2X_2=0$. Therefore I finally get $\hat\beta_1$=$\beta_1$+$(X_1^TM_2X_1)^{-1}X_1^TM_2\epsilon$. The expected value of $\hat\beta_1$ is $\beta_1$ because the expected value of $\epsilon$ is 0, and the term that multiplies $\epsilon$ is nonstochastic.
To calculate the variance of $\hat\beta_1$ I take the formula E$[(\hat\beta_1-\mathrm{E}(\hat\beta_1))(\hat\beta_1-\mathrm{E}(\hat\beta_1))^T]$, which gets me to $\sigma^2(X_1^TM_2X_1)^{-1}$. For $\hat\beta_{1\ast}$ I plug in the 'wrong' model and get $\beta_{1\ast}$+$(X_1^TX_1)^{-1}X_1^T\epsilon_{\ast}$. I get the expected value of $\hat\beta_{1\ast}$ to be $\beta_{1\ast}$ because the expected value of $\epsilon_{\ast}$ is zero, so the variance I get, using the same approach as above, is $\sigma^2(X_1^TX_1)^{-1}$. Now, when I subtract the variances as requested in the exercise, I don't get nearly the same thing they got in the text: I still have $\sigma^2$, and I can't get rid of the bracket, because the inverse of the entire product is defined but the inverses of the individual matrices might not be. Additionally, I have no idea what to do with the clue I was given about the upper left and the lower right corner of the inverse. Would there be a soul kind enough to give me a hint if I am even going in the proper direction? Thank you very very much.
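A quick numeric check of the identity to be shown (my own sketch, assuming Python with NumPy, taking $\sigma^2=1$). It also hints at the intended route: apply the block-inverse clue with $\Sigma_{11}=X_1^TX_1$, $\Sigma_{12}=X_1^TX_2$, $\Sigma_{22}=X_2^TX_2$, since then $(X_1^TM_2X_1)^{-1}$ is exactly the upper-left corner of the inverse:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m1, m2 = 50, 3, 2
X1 = rng.standard_normal((n, m1))
X2 = rng.standard_normal((n, m2))

# var(beta1_hat)  = s2 * (X1' M2 X1)^{-1}  in the correct model,
# var(beta1_star) = s2 * (X1' X1)^{-1}     in the short model (take s2 = 1).
M2 = np.eye(n) - X2 @ np.linalg.solve(X2.T @ X2, X2.T)
V_full = np.linalg.inv(X1.T @ M2 @ X1)
V_short = np.linalg.inv(X1.T @ X1)

A = np.linalg.solve(X1.T @ X1, X1.T @ X2)   # (X1'X1)^{-1} X1'X2
B = X2.T @ X2 - X2.T @ X1 @ A
diff = V_full - V_short

# The claimed identity: var difference = A B^{-1} A'
assert np.allclose(diff, A @ np.linalg.inv(B) @ A.T)
```

Note the $\sigma^2$ you are worried about simply multiplies both variances, so the stated result implicitly sets $\sigma^2=1$ (or carries $\sigma^2 AB^{-1}A^T$).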
Basically 2 strings, $a>b$, which go into the first box and do division to output $q,r$ such that $a = bq + r$ and $r<b$; then you check for $r=0$, which returns $b$ if we are done, and otherwise inputs $b,r$ into the division box. There was a guy at my university who was convinced he had proven the Collatz Conjecture even though several lecturers had told him otherwise, and he sent his paper (written in Microsoft Word) to some journal, citing the names of various lecturers at the university. Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact group $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$. What exactly does it mean for $\rho$ to split into finite-dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where each $\rho_i$ is a finite-dimensional unitary representation? Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach. Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the line? Let $M$ and $N$ be $\mathbb{Z}$-modules and let $H$ be a subset of $N$. Is it possible for $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$, provided $M\otimes_\mathbb{Z} H$ is an additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
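The division-box loop described at the top of this exchange is just the Euclidean algorithm; a minimal sketch (assuming Python):

```python
def gcd(a, b):
    # The "division box": a = b*q + r with 0 <= r < b; if r == 0 we are
    # done and return b, otherwise feed (b, r) back into the box.
    while True:
        q, r = divmod(a, b)
        if r == 0:
            return b
        a, b = b, r

print(gcd(252, 105))  # 21
```

The loop also works when $a<b$: the first division then gives $q=0$, $r=a$, which simply swaps the pair.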
Well, assuming that the paper is all correct (or at least correct to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real-world application' affect whether people would be interested in the contents of the paper?" @Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider. Although not the only route, can you tell me something contrary to what I expect? It's a formula. There's no question of well-definedness. I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer. It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time. Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated. You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of the coordinate system. @A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at the endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago. @Eric: If you go eastward, we'll never cook! :( I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous. @TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up.
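The contour-integral definition of the residue quoted above is easy to check numerically; a sketch (assuming Python; the test function $e^z/z$, whose $a_{-1}=1$, is my own choice):

```python
import cmath

def residue_at_zero(f, radius=0.5, n=20000):
    # (1/2*pi*i) * contour integral of f(z) dz over the circle
    # z = r e^{i*theta}, where dz = i*z dtheta.
    total = 0.0
    for k in range(n):
        theta = 2 * cmath.pi * k / n
        z = radius * cmath.exp(1j * theta)
        total += f(z) * 1j * z * (2 * cmath.pi / n)
    return total / (2j * cmath.pi)

# f(z) = e^z / z = 1/z + 1 + z/2 + ...  has a_{-1} = 1 at z = 0.
r = residue_at_zero(lambda z: cmath.exp(z) / z)
print(abs(r - 1))  # tiny
```

The equal-spaced rule on a circle converges extremely fast for analytic integrands, which is why a plain Riemann sum suffices here; changing the radius leaves the answer unchanged, mirroring the coordinate-independence remark.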
I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$) @TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite. @TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator
Leaving it on would use more energy, absolutely. Sometimes, people try to convince themselves that turning a light on and off uses more energy because there is some high inrush current, or some such thing. Firstly, incandescent lights hardly have any inrush current, because they don't have any capacitors to charge, and they need not strike an arc in the ... Okay, let's set up a simple simulation: According to the Wiki page on incandescent bulbs, for a 100W, 120V bulb, the cold resistance is ~9.5Ω, and the hot resistance ~144Ω. It takes around 100ms for the bulb to reach the hot resistance after turning on. So armed with this info, we can simulate and prove the initial surge would be absolutely ... According to a MythBusters episode summary on Wikipedia: "The MythBusters calculated that the power surge from turning on a light would only consume as much power as leaving it on for a fraction of a second (except for fluorescent tube lights; the startup consumed about 23 seconds' worth of power)". So in fact it is possible that on/off would consume more ... This is a common phenomenon: neon pilot lights have a limited lifetime, and after many years of use, they begin to flicker, then they finally go dark. They no longer can operate at line voltage, but instead require a higher voltage for stable operation. Also, neon pilot lights can act as photosensors. Try this with a flickering neon bulb: shine a red ... It depends on the type of lightbulb! Halogen, incandescent, fluorescent, and vapor lights all use tungsten filaments that heat up and emit electrons via thermionic emission. In that sense, they are similar. However, the method to "turn on" the lights varies. Incandescent bulbs are simply turned on once and left on. The inrush current is on the order of 12 ... I'll presume your bulbs are ordinary resistors first.
The 60 W bulb is R1, the other one R2. Resistance can be calculated as \$ R = \dfrac{V^2}{W} \$. For R1 and R2 that's \$ R1 = \dfrac{(110 V)^2}{60 W} = 202 \Omega \$ and \$ R2 = \dfrac{(110 V)^2}{110 W} = 110 \Omega \$. Then Rp || R1 = R2, or \$\dfrac{Rp \cdot R1}{Rp + R1} = R2 \$. Filling in ... The constantly-on setting would consume more energy powering the bulb. A possible counter-argument would be that the turn-on/turn-off cycling would shorten the bulb life, and thus the energy cost of manufacturing, transporting, and disposing of it would be amortized over fewer service hours. But without digging up actual numbers, my gut feeling is that ... Nothing was said about the lamp types, or that they need to be identical. So, assuming that L1 and L2 are both 120V incandescents, and L1 is a 1-watt bulb and L2 is a 100-watt bulb: simulate this circuit – Schematic created using CircuitLab. When the switch is open, L1 will be lit, and the current through L2 will be so small as to produce ... To quote from the great wiki - https://en.wikipedia.org/wiki/Neon_lamp "When the current through the lamp is lower than the current for the highest-current discharge path, the glow discharge may become unstable and not cover the entire surface of the electrodes.[6] This may be a sign of aging of the indicator bulb, and is exploited in the decorative "... There are two things to consider in trying to match the light output of multiple LEDs: controlling the current thru each LED, and compensating for light output variances between LEDs even when driven with the same current. The first is not that hard to guarantee electrically. The brute force way to ensure the same current thru all LEDs is to put them in ... Look at the listed amp rating of your bulb. You need to look at your bulb's documentation/data sheet for its listed draw in amps. If it provides a VA rating, you can compute amps from VA/volts. If it provides actual watts and power factor, you can compute Watts/Volts/PF.
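The truncated "Filling in ..." step in the first answer above can be completed by solving \$ Rp \parallel R1 = R2 \$ for Rp; a sketch of the arithmetic (assuming Python, using that answer's 110 V figures):

```python
# Treating the bulbs as plain resistors: R = V^2 / P.
V = 110.0
R1 = V**2 / 60    # 60 W bulb  -> ~202 ohm
R2 = V**2 / 110   # 110 W bulb -> 110 ohm

# Solve Rp*R1/(Rp + R1) = R2 for the parallel resistor:
#   Rp = R1*R2 / (R1 - R2)
Rp = R1 * R2 / (R1 - R2)
print(round(Rp, 1))  # 242.0
```

Algebraically Rp = V²/(110 W − 60 W) = 12100/50 = 242 Ω exactly, a handy cross-check.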
LED bulbs are often marketed as "Same brightness as a 60 watt incandescent bulb", ... No. It seems very unlikely there is any need for a fuse. The LIFX gadgets are presumed to be rated for direct connection to a mains branch circuit without any additional fusing. The fuse was probably there to protect the switch, as incandescent lamps do not require fusing; they act as their own fuse. The "dimming" feature where it connected two bulbs in ... I see that you're referring to the light fixture itself. In this case, the rating is the maximum that the fixture construction can withstand. Remember that the fixture is really only wires, switches, and connectors. It only serves to pass the current from the wall to the light bulb. In your case, the fixture can handle any voltage up to 250V. You can use a ... As Steven states, this is only true when the bulbs act like ordinary resistors. The solution is easy. The voltage across the 'divider' will be evenly distributed when the power at the top half and the power at the bottom half are equal. Power at the top half is 60W. Power at the lower half is 110W. To have equal power both at the top and at the bottom halves, you have to ... You'd need to provide the brand and model of projector and bulb, plus web links. What you need to know, at least, is the ANSI lumen rating of the projector and the present technology and rating. All that said, this is getting more viable but is not something attempted lightly. The fact (as you recount it) that available solutions are from unknown makers and that they don'... All of the energy that goes into an incandescent bulb will get converted into heat, which must then be dissipated somehow. Some of that heat will then be radiated off in the form of light, but the energy must start out as heat. Therefore, the only way an incandescent bulb can use more power is for it to dissipate more heat. A bulb which is cold consumes ...
For full brightness they can be driven either with (about) 12 VDC or 12 VAC RMS. Use of half-wave rectified AC will operate them at somewhat less than half power, but they may flicker and will have a lower color temperature. I've never seen them operated with half-wave AC. If this works for some application and meets some purpose then that's fine - but I ... According to the U.S. Dept. of Energy: It is best to turn off incandescent and halogen bulbs whenever they are not needed, due to their high consumption of electricity. For a compact fluorescent bulb, a rule of thumb is to leave it on if you leave a room for 15 minutes or less (depending on several factors). For LED lighting the operating life is unaffected ... You are overloading your current supply; it is shutting down to prevent damage. 2 x 10W = 20W, which is more than 15W. It seems the only options are buying 5 of these "converters" (one for each lamp) or buying another model which can handle more power (more lamps per converter). The (now obsolescent) conventional tungsten-filament consumer incandescent bulbs have a trade-off between life and efficiency. The longer you make the bulb last (by running the filament cooler), the less efficient the bulb is, so the higher the lifetime cost of the electricity required to illuminate it. The light also becomes redder the cooler the ... It's more than that -- polarized plugs make cheap single-pole switches much safer. For example, look at the extension strip: With a polarized plug, it is perfectly safe to have a single-pole switch there -- one just makes sure it interrupts the "live" contact. With a non-polarized plug, one either has to put in a more expensive double-pole switch, or accept ... TL;DR The electric arc may conduct much higher current than the filament; this is not a normal mode of bulb operation, so you can see a flash and maybe even have the bulb explode. Typical incandescent bulb failure develops while current is flowing through the filament.
The filament is a piece of wire about half a meter long, coiled such that it forms a filament ... See http://hackaday.com/2011/11/21/simple-touch-sensors-with-the-arduino-capsense-library/ Ever thought of using touch sensors in your projects but didn't because it would be too much work? [Paul Stoffregen] proves that it can be pretty easy if you use the CapSense library for Arduino. Here he's created three touch sensors, connecting them to the Teensy ... How safe is this? Relatively safe; this is how relays are used. Is the only thing separating wandering hands from live wire just the screw terminal? Yes. Of course, touching only one side of a wire, with only one hand, is not terribly dangerous, compared to completing a circuit through your body. Of course, the only thing separating a live outlet and ... Responding to an old post, but it comes up at the top of a Google search: Neon lights can't work in complete darkness. This is the "neon lamp dark effect". I know it sounds fake, but that's why neon lamp bulb makers sometimes add a little ionizing radiation inside the bulbs. Dark Effect: All ILT Neon lamps are subject to a condition called dark effect. Dark ... Just rotate the plastic from the back (CCW to "unscrew"; it's sort of a bayonet-base-like action). I have the same general kind of lamp in the instrument cluster of an E28 chassis automobile. The metal contacts work against pads on the single-sided PCB. It's a 24V 1.2W bulb (photo from linked site). As @Jack mentions, some such bulbs have removable bulbs ... A couple of comments about the optical portion of your project: Both the electrical and optical portions of the project are important, but you won't be able to achieve a homogenized optical output even with perfect drive electronics unless you worry about the optical portion of the design, which in turn may constrain how you plan to build/drive the device....
No, there is no such thing as standard insulation for a particular wire size. The wire gauge only tells you the physical size of the wire. From that, and knowing the material (usually copper), you can determine its resistance per length. That in turn tells you how much voltage it will drop for a given current, and how much power it will dissipate. These ... Fluoro lamps are actually more efficient at, say, 30 kHz than at 50 Hz. The efficiency curve doesn't keep increasing; it kind of levels out. This is why you tend not to see stuff in the MHz range despite it being technically feasible. The electronic ballast makes the tube itself more efficient. Even if you use rubbish for the electronics, your overall efficiency ... You've drawn an NPN transistor as a high-side switch. This won't work since you need to drive the base at least 0.7V higher than the emitter, and your RPi GPIO outputs only push 3.3 volts. You should move the transistor into the ground path to operate as a low-side switch, and add a current-limiting resistor to the base. Even then, you will probably run ...
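The low-side-switch advice above implies a quick base-resistor calculation. Here is a minimal sketch; the load current, minimum gain, and overdrive factor are illustrative assumptions, not values from the original answer:

```python
# Sketch: sizing the base resistor for an NPN low-side switch driven by a
# 3.3 V Raspberry Pi GPIO. Component values are illustrative assumptions.

V_GPIO = 3.3    # GPIO high level, volts
V_BE = 0.7      # base-emitter drop when the transistor is on, volts

I_load = 0.100  # collector (load) current, amps -- assumed
hfe_min = 100   # worst-case current gain -- assumed
overdrive = 5   # drive the base ~5x harder than hFE requires, to saturate

I_base = overdrive * I_load / hfe_min   # required base current
R_base = (V_GPIO - V_BE) / I_base       # ohms

print(f"base current: {I_base * 1000:.1f} mA")  # 5.0 mA
print(f"base resistor: {R_base:.0f} ohms")      # 520 ohms
```

With these assumed numbers the 5 mA of base drive is well within what a Pi GPIO pin can source, and the nearest common standard value below 520 Ω (such as 470 Ω) simply drives the base slightly harder.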
ISSN: 1930-8337 eISSN: 1930-8345 Inverse Problems & Imaging, August 2011, Volume 5, Issue 3. Abstract: A new reconstruction algorithm is presented for EIT in dimension two, based on the constructive uniqueness proof given by Astala and Päivärinta in [Ann. of Math. 163 (2006)]. The method is non-iterative, provides a noise-robust solution of the full nonlinear EIT problem, and applies to more general conductivities than previous approaches. In particular, the new algorithm applies to piecewise smooth conductivities. Reconstructions from noisy and non-noisy simulated data from conductivity distributions representing a cross-section of a chest and a layered medium such as stratified flow in a pipeline are presented. The results suggest that the new method can recover useful and reasonably accurate EIT images from data corrupted by realistic amounts of measurement noise. In particular, the dynamic range in medium-contrast conductivities is reconstructed remarkably well. Abstract: Registration methods can be roughly divided into two groups: area-based methods and feature-based methods. In the literature, the Monge-Kantorovich (MK) mass transport problem has been applied to image registration as an area-based method. In this paper, we propose to use the MK mass transport model as a feature-based method. This novel image matching model is a coupling of the MK problem with the well-known alpha divergence from probability theory. The optimal matching scheme is the one which minimizes the weighted alpha divergence between two images. A primal-dual approach is employed to analyze the existence and uniqueness/non-uniqueness of the optimal matching scheme. A block coordinate method, analogous to the Sinkhorn matrix balancing method, can be used to compute the optimal matching scheme. We also derive a distance function for image morphing.
Similar to elastic distances proposed by Younes, the geodesic under this distance function has an explicit expression. Abstract: We consider variations of the Rudin-Osher-Fatemi functional which are particularly well-suited to denoising and deblurring of 2D bar codes. These functionals consist of an anisotropic total variation favoring rectangles and a fidelity term which measures the $L^1$ distance to the signal, both with and without the presence of a deconvolution operator. Based upon the existence of a certain associated vector field, we find necessary and sufficient conditions for a function to be a minimizer. We apply these results to 2D bar codes to find explicit regimes -- in terms of the fidelity parameter and smallest length scale of the bar codes -- for which the perfect bar code is attained via minimization of the functionals. Via a discretization reformulated as a linear program, we perform numerical experiments for all functionals demonstrating their denoising and deblurring capabilities. Abstract: Based on the variable Hilbert scale interpolation inequality, bounds for the error of regularisation methods are derived under range inclusions. In this context, new formulae for the modulus of continuity of the inverse of bounded operators with non-closed range are given. Even if one can show the equivalence of this approach to the version used previously in the literature, the new formulae and corresponding conditions are simpler than the former ones. Several examples from image processing and spectral enhancement illustrate how the new error bounds can be applied. Abstract: This paper presents a novel variational model for ultrasound image segmentation that uses a maximum likelihood estimator based on the Fisher-Tippett distribution of the intensities of ultrasound images. A convex relaxation method is applied to get a convex model of the subproblem with fixed distribution parameters.
The relaxed subproblem, which is convex, can be solved quickly using a primal-dual hybrid gradient algorithm. The experimental results on simulated and real ultrasound images indicate the effectiveness of the method presented. Abstract: In this article, we analyze the microlocal properties of the linearized forward scattering operator $F$ and the reconstruction operator $F^{*}F$ appearing in bistatic synthetic aperture radar imaging. In our model, the radar source and detector travel along a line a fixed distance apart. We show that $F$ is a Fourier integral operator, and we give the mapping properties of the projections from the canonical relation of $F$, showing that the right projection is a blow-down and the left projection is a fold. We then show that $F^{*}F$ is a singular FIO belonging to the class $I^{3,0}$. Abstract: We study inverse problems for non-linear penetrable media in the context of scattering theory and impedance tomography. Using a general description of the range of the non-linear far-field operator we show an explicit characterization of the support of a weakly non-linear inhomogeneous scattering object. Application of the same technique to the impedance tomography problem for a monotonic non-linear inclusion yields a characterization of the inclusion's support from the non-linear Neumann-to-Dirichlet operator. Abstract: Let $H$ be a real separable Hilbert space and $A:\mathcal{D}(A) \to H$ be a positive and self-adjoint (unbounded) operator, and denote by $A^\sigma$ its power of exponent $\sigma \in [-1,1)$.
We consider the identification problem consisting in searching for a function $u:[0,T] \to H$ and a real constant $\mu$ that fulfill the initial-value problem $$ u' + Au = \mu \, A^\sigma u, \quad t \in (0,T), \quad u(0) = u_0, $$ and the additional condition $$ \alpha \|u(T)\|^{2} + \beta \int_{0}^{T}\|A^{1/2}u(\tau)\|^{2}d\tau = \rho, $$ where $u_{0} \in H$, $u_{0} \neq 0$ and $\alpha, \beta \geq 0$, $\alpha+\beta > 0$ and $\rho >0$ are given. By means of a finite-dimensional approximation scheme, we construct a unique solution $(u,\mu)$ of suitable regularity on the whole interval $[0,T]$, and exhibit an explicit continuous dependence estimate of Lipschitz-type with respect to the data $u_{0}$ and $\rho $. Also, we provide specific applications to second and fourth-order parabolic initial-boundary value problems. Abstract: We consider an inverse boundary value problem for a discrete Schrödinger operator $-\Delta + \hat{q} $ on a bounded domain in the square lattice. We define an analogue of the Dirichlet-to-Neumann map, and give a reconstruction procedure of the potential $\hat{q} $ from the D-to-N map for all energies. Abstract: We consider the inverse problem for the wave equation on a compact Riemannian manifold or on a bounded domain of $\mathbb{R}^n$, and generalize the concept of domain of influence. We present an efficient minimization algorithm to compute the volume of a domain of influence using boundary measurements and time-reversed boundary measurements. Moreover, we show that if the manifold is simple, then the volumes of the domains of influence determine the manifold. For a continuous real valued function $\tau$ on the boundary of the manifold, the domain of influence is the set of those points on the manifold from which the travel time to some boundary point $y$ is less than $\tau(y)$.
How to Couple a Full-Wave Simulation to a Ray Tracing Simulation Welcome back to our discussion on multiscale modeling in high-frequency electromagnetics. Multiscale modeling is a simulation challenge that arises when there are vastly different scales in a single simulation, such as the size of an antenna compared to the distance between the antenna and its target. Today, in Part 4 of the series, we will examine how we can construct a multiscale model by coupling a Full-Wave antenna simulation with a geometrical optics simulation using the Ray Optics Module. Using the Ray Optics Module for Multiscale Modeling In Part 2 of the blog series, we used the Electromagnetic Waves, Frequency Domain interface, which we call a Full-Wave simulation, and a Far-Field Domain node to determine the electric field in the far field. We then coupled a Full-Wave simulation to the Electromagnetic Waves, Beam Envelopes interface (or a Beam-Envelopes simulation) in order to precisely calculate fields in any region, regardless of the distance from the source. The Far-Field Domain and Beam-Envelopes solutions that we looked at in the previous blog post are effective, but they share one noteworthy restriction. In each case, we assumed that a homogeneous domain surrounded the antenna in all directions. For many situations, this information is sufficient. In other simulations, you may not have a homogeneous domain surrounding your antenna and you need to account for issues like atmospheric refraction or reflection off of nearby buildings. These simulations require a different approach. A model of several hotels in Las Vegas. A directional antenna emits rays toward the ARIA® Resort & Casino. The Geometrical Optics interface in the Ray Optics Module, an add-on product to the COMSOL Multiphysics® software, regards EM waves as rays. This interface can account for spatially varying refractive indices, reflection and refraction from complicated geometries, and long propagation distances. 
However, these features come with a tradeoff. Since waves are treated as rays, this approach neglects diffraction. In other words, we are assuming that the wavelength of light is much smaller than any geometric features in our environment. You can read a more thorough description of ray optics in a previous blog post. Modeling Coupled Antennas: Preparing for Ray Optics As you may recall, we introduced an approach to coupling a radiating and receiving antenna in Part 3 of this series. When incorporating ray optics into our multiscale modeling, we are required to use a similar but more generalized approach. Before we show you how to set up a geometrical optics simulation in COMSOL Multiphysics, let’s first review this alternate method. Mapping the Radiated Fields As a quick refresher, we are interested in calculating the fields at the location of the receiving antenna using the following equation: We previously used an integration operator on a single point to calculate this along the line directly between the two antennas. We now wish to retain the angular dependence, so we need to recalculate this equation for each point in the receiving antenna’s domain. Since it is impractical to add numerous points and integration operators, we need to establish a more general technique. To do so, we replace the integration operator with a General Extrusion operator. As before, we create a variable for the magnitude of r. We then use the General Extrusion operator to evaluate the scattering amplitude at a point in the geometry that shares the same angular coordinates, (\theta,\phi), as the point in which we are actually interested. To demonstrate this concept, we use a figure that is slightly more involved than that from the previous post. Note that the subscripts 1, 2, and r in \vec{r}_j=\left(x_j,y_j,z_j\right) represent a vector in component 1, a vector in component 2, and the offset between the antennas, respectively. 
Image showing where the scattering amplitude should be calculated and how the coordinates of that point can be determined. As we previously outlined, the primary complication is determining where to calculate the scattering amplitude. We want the fields at the point \vec{r}_r + \vec{r_2}, which requires calculating the scattering amplitude at \vec{r}_1. The complication, of course, is that each point in the domain around the receiving antenna (each vector \vec{r}_2) will have its own evaluation location \vec{r}_1. We evaluate this by again rescaling the Cartesian coordinates, but instead of doing it for a single point, we define it inside of the general operator so that it can be called from any location. From the above figure, we know that this point is x_1 = \left(x+x_r\right) \frac{|\vec{r}_1|}{|\vec{r}_2+\vec{r_r}|}, with corresponding equations for y and z. The operator is defined in component 1, so the source will be defined in that component. It will be called from component 2, so the x, y, z in the following expressions refer to x_2, y_2, z_2 in the above figure. The General Extrusion operator used for the scattering amplitude calculation. Note that this is defined in component 1. Storing the Radiated Fields in a Dummy Variable As a bookkeeping step, we store the calculated fields in a "dummy" variable. By a dummy variable, we mean that we add in an extra dependent variable that takes the value of a calculation determined elsewhere. We do this for two reasons. The first reason is that most variables in COMSOL Multiphysics are calculated on demand from the dependent variables. In an RF simulation, for example, the dependent variables are the three Cartesian components of the electric field: Ex, Ey, and Ez. These are determined when computing the solution. In postprocessing, every other value (electric current, magnetic field, etc.) is calculated from the electric field when required. In most cases, this is a fast and seamless process.
In our case, each field evaluation point requires a general extrusion of a scattering amplitude, and each scattering amplitude point requires a surface integration as defined in the Far-Field Domain node. This can take a while and we want to ensure that we perform this calculation only once. The second reason why we do this has to do with the element order. The Scattered Field formulation requires a background electric field. COMSOL Multiphysics then calculates the magnetic field using the differential form of Faraday’s law (also known as the Maxwell-Faraday equation). This requires taking spatial derivatives of the electric field. There are no issues when taking the spatial derivatives of an analytical function like a plane wave or Gaussian beam, but it can cause a discretization issue when applied to a solved-for variable. This is a rather advanced topic, which you can find out more about in an archived webinar on equation-based modeling. By using a cubic dummy variable to store the electric field, we can take a spatial derivative of the electric field and still obtain a well-resolved magnetic field for use in the Scattered Field formulation. Without the increased order of the dummy variable, the magnetic field used would be underresolved. Below, you can see what it looks like to put the General Extrusion operator together with the dummy variable setup. The variable r is identical to the one used in Part 3 of this blog series and is defined in component 2. The dummy variable implementation. Notice that the dummy variable components are called Ebx, Eby, and Ebz. The only remaining step is to use the dummy variables — Ebx, Eby, and Ebz — in a background field simulation of the half-wavelength dipole discussed in Part 1 and Part 3. This technique isn’t actually very good for this particular problem. There may be situations where it is useful, but the technique from Part 3 is preferred in the vast majority of cases. 
The received power from the two simulations is extremely close, but this method takes much longer to calculate and the file size increases drastically. In the demo examples for this post, this method took several times longer than the previous simulation method. While you may conclude that this is not a terribly useful step overall, it is useful when we incorporate ray optics into our multiscale modeling, as discussed in the next section. Setting Up a Geometrical Optics Simulation in the COMSOL® Software A geometrical optics simulation implicitly assumes that every ray is already in the far field. Earlier in the blog series, we saw that the Far-Field Domain feature correctly calculates the electric field at arbitrary points in the far field. Here, we use that information as the input for rays in a geometrical optics simulation. The simulation geometry, symmetry, and electric dipole point source used are the same as in Part 2. The domain assignments for the simulation. The Full-Wave simulation is performed over the entire domain, with the outer region set as a perfectly matched layer (PML). The geometrical optics simulation is only performed in this outer region. Note that this image is not to scale. With the domains assigned, we select the Geometrical Optics interface, change the Intensity computation to Compute intensity, and select the Compute phase check box. These steps are required to properly compute the amplitude and phase of the electric field along the ray trajectory. Settings for the Geometrical Optics interface. The Intensity computation is set to Compute intensity and the Compute phase check box is selected. We also apply an Inlet boundary condition to the boundary between the Full-Wave simulation domain and Geometrical Optics domain. The inlet settings can be seen in the image below, but let’s walk through them one at a time. First, the Ray Direction Vector section is configured. 
This will launch the rays normal to the curved surface we've selected for the inlet — in other words, radially outwards. The variable Etheta is calculated from the scattering amplitude, with a similar assignment for Ephi; the defining equation comes from our previous blog post about using the Far-Field Domain node to calculate the fields at an arbitrary location. These variables are used to specify the initial phase and polarization of the rays. The GOP\_I0 variable specifies the correct spatial intensity distribution for the rays (as antennas generally do not emit uniformly) and is calculated according to GOP\_I0 = (|Etheta|^2 + |Ephi|^2)/Z/2, where Z is the impedance of the medium. The initial radius of curvature has two factors. The parameter dipole\_sim\_r is the radius of the spherical boundary that we are launching the rays from and will correctly initialize the curvature of the ray wavefront. Finally, we use the Cartesian components of our spherical unit vector \hat{\theta} to specify the initial principal curvature direction. This ensures that the correct polarization orientation is imparted to the rays. The wavefront shape here must be set to Ellipsoid — even though the surface is technically a sphere — because we need to be able to specify a preferred direction for polarization. If we choose Spherical, then each orientation is degenerate and we cannot make that specification. Beyond setting the correct frequency, the only other setting here is the placement of a Freeze Wall condition on the exterior boundary to stop the rays. Let's take a look at the results vs. theory. As before, we express the full solution for a point dipole as a sum of two contributions, which we have labeled near field (NF) and far field (FF).
\begin{align}
\overrightarrow{E} & = \overrightarrow{E}_{FF} + \overrightarrow{E}_{NF} \\
\overrightarrow{E}_{NF} & = \frac{1}{4\pi\epsilon_0}\left[3\hat{r}(\hat{r}\cdot\vec{p})-\vec{p}\right]\left(\frac{1}{r^3}+\frac{jk}{r^2}\right)e^{-jkr}\\
\overrightarrow{E}_{FF} & = \frac{1}{4\pi\epsilon_0}k^2(\hat{r}\times\vec{p})\times\hat{r}\,\frac{e^{-jkr}}{r}
\end{align}

The electric fields from a geometrical optics simulation compared against theory. Geometrical optics is always in the far field, so we see excellent agreement as the distance from the source increases. For reference, the far-field domain results from the previous post would overlap exactly with the ray optics and FF theory lines. As mentioned before, the Geometrical Optics interface is necessarily in the far field, so we do not expect to be able to correctly capture the near-field information as we did in the Beam-Envelopes solution in Part 2. This can also be seen because we seeded the ray tracing simulation with data from the Far-Field Domain node calculation. It is therefore unsurprising that there is disagreement near the source, but we can clearly see that the results match with theory as the distance from the source increases. Summary of Multiscale Modeling Techniques From looking solely at the above plot, we have to ask ourselves: "What have we actually gained here?" This is a fair question, because the plot shown above could have been constructed directly from any of the techniques covered in the series so far. To make this clear, let's review each of them.

Multiscale Technique | Regime of Validity | Modules Used | Notes
Far-Field Domain node | Far field | RF or Wave Optics | Requires the antenna to be completely surrounded by a homogeneous domain.
Beam-Envelopes | Any field | Wave Optics | Requires specification of the phase function or wave vector.
Geometrical Optics | Far field | Ray Optics | Can account for a spatially varying index as well as reflection and refraction from complex geometries. Diffraction is neglected.
A summary of the multiscale modeling techniques we have covered in this blog series. Note that any of these techniques will require a Full-Wave simulation of the radiation source. This generally requires the RF Module, although there is a subset of radiation sources that can be modeled using the Wave Optics Module instead. The Far-Field Domain node is available in both the RF and Wave Optics modules. We originally motivated this discussion by talking about signal transmission from one antenna to another, and solved that simulation using the Far-Field Domain node in the last post. In the next blog post in this series, we'll redo that simulation using the Geometrical Optics interface introduced here. Access the model discussed in this blog post and any of the model examples highlighted throughout this blog series by clicking on the button above. ARIA is a registered trademark of CityCenter Land, LLC.
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology. For continuous functions f and g, the cross-correlation is defined as: (f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau, whe... That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the Y_{t+\tau}? And how on earth do I load all that data at once without it taking forever? And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"? Alternatively, where could I go in order to have such a question answered? @tpg2114 For reducing the number of data points when calculating a time correlation, you can run two copies of exactly the same simulation in parallel, separated by the time lag dt. Then there is no need to store all snapshots and spatial points. @DavidZ I wasn't trying to justify its existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably an historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series. Each one is equally spaced, 1e-9 seconds apart. The black line is \frac{d T}{d t} and doesn't have an axis -- I don't care what the values are. The solid blue line is the abs(shear strain) and is valued on the right axis. The dashed blue line is the result from scipy.signal.correlate and is valued on the left axis. So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how. Because I don't know how the result is indexed in time Related: Why don't we just ban homework altogether? Banning homework: vote and documentation We're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th... So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month (2) do we reword the homework policy, and how (3) do we get rid of the tag I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating.
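The scipy.signal.correlate indexing questions above have a mechanical answer; a minimal sketch with made-up signals (not the strain/temperature data from the plot) showing the output length and how to read off the lag:

```python
import numpy as np
from scipy.signal import correlate

# Two made-up signals of length N = 200, where b lags a by exactly 7 samples.
rng = np.random.default_rng(0)
N = 200
a = rng.standard_normal(N)
b = np.concatenate([np.zeros(7), a[:-7]])  # b[t] = a[t - 7]

# 'full' mode slides one signal completely across the other, so the output
# has length N + N - 1 = 399 -- the "400 time steps" seen in the question.
c = correlate(b, a, mode='full')

# Output index k corresponds to lag k - (N - 1), so the lag axis runs from
# -(N - 1) to N - 1, and the argmax of c gives the lead/lag directly.
lags = np.arange(-(N - 1), N)
lag = lags[np.argmax(c)]
print(len(c), lag)  # 399 7  (b lags a by 7 samples)
```

Recent SciPy versions (assumed 1.6 or later) also provide scipy.signal.correlation_lags to build this lag axis. Note also that correlate is a raw sliding dot product: it subtracts no means and applies no normalization, so its sign and scale need not match Pearson-correlation intuition; detrend or standardize the inputs first if that is the reading you want.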
Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question It'd be better if we could do it quick enough that no answers get posted until the question is clarified to satisfy the current HW policy For the SHO, our teacher told us to scale$$p\rightarrow \sqrt{m\omega\hbar} ~p$$$$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$And then define the following$$K_1=\frac 14 (p^2-q^2)$$$$K_2=\frac 14 (pq+qp)$$$$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$The first part is to show that$$Q \... Okay. I guess we'll have to see what people say but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay @jinawee oh, that I don't think will happen. In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have. So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual." Ie. 
is it conceptual to be stuck when deriving the distribution of microstates cause somebody doesn't know what Stirling's Approximation is Some have argued that is on topic even though there's nothing really physical about it just because it's 'graduate level' Others would argue it's not on topic because it's not conceptual How can one prove that$$ \operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$for a sufficiently well-behaved operator $\cal{A}?$How (mathematically) rigorous is the expression?I'm looking at the $d=2$ Euclidean case, as discuss... I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed. And what about selfies in the mirror? (I didn't try yet.) @KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote"-- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean. Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods. Or maybe that can be a second step. If we can reduce visibility of HW, then the tag becomes less of a bone of contention @jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as Homework @Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter. @Dilaton also, have a look at the topvoted answers on both. Afternoon folks. 
I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no because its really a math question, but I figured I'd ask anyway) @DavidZ Ya I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on. hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least. Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes. MO is for research-level mathematics, not "how do I compute X" user54412 @KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube @ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (c.f., Hawking's recent paper)
$$\text{maximize } f(\mathbf{x}) \quad\text{subject to } \mathbf{Ax} = \mathbf{b}$$ where $$f(\mathbf{x}) = \sum_{i=1}^N\sqrt{1+\frac{x_i^4}{\left(\sum_{j=1}^{N}x_j^2\right)^2}},$$ $\mathbf{x} = [x_1,x_2,...,x_N]^T \in \mathbb{R}^{N\times 1}$ and $\mathbf{A} \in \mathbb{R}^{M\times N}$. We can see that $f$ is convex, each summand being of the form $\sqrt{1+y^2}$. It can also be shown that $f$ is bounded in $[\sqrt{2}, 2]$. I know that a convex maximization problem is NP-hard in general. However, using the specific nature of the problem, is it possible to solve it using any standard convex optimization software/package?
I was working through this z-test of proportions example I found online. The online example's solution says that the difference between the groups is statistically significant, whereas I concluded it was not. Problem: Suppose the Acme Drug Company develops a new drug, designed to prevent colds. The company states that the drug is equally effective for men and women. To test this claim, they choose a simple random sample of 100 women and 200 men from a population of 100,000 volunteers. At the end of the study, 38% of the women caught a cold; and 51% of the men caught a cold. Based on these findings, can we reject the company's claim that the drug is equally effective for men and women? Use a 0.05 level of significance. My approach to the problem: H$_0$ $\rightarrow$ difference in groups = 0 H$_1$ $\rightarrow$ difference in groups $\neq$ 0 Goal: Get a 95% confidence interval around the difference-in-groups metric. See if 0 is in the interval. If not, we can reject the null hypothesis. Difference in groups = $\hat d$ = .51 - .38 = .13 95% C.I. formula = .13 $\pm$ (Z-value for two-tailed test with .05 significance) * (Standard Error) Standard Error for Z-test of proportions = SE = $\sqrt {p_{pooled}(1-p_{pooled})(\frac{1}{100}+\frac{1}{200})}$ $p_{pooled}$ = (102+38)/(300) = .46667 SE = $\sqrt {.46667(.53333)(.015)}$ = .06107 Z-value for two-tailed test with .05 significance = 1.96 95% C.I. formula = .13 $\pm$ (1.96) * (.06107) = (-0.0103, .2495) Since 0 is in the 95% C.I., we cannot reject the null that H$_0$ is equal to zero. However, the source uses a different methodology and arrives at a differing conclusion: Solution: The solution to this problem takes four steps: (1) state the hypotheses, (2) formulate an analysis plan, (3) analyze sample data, and (4) interpret results. We work through those steps below: State the hypotheses. The first step is to state the null hypothesis and an alternative hypothesis.
Null hypothesis: P1 = P2 Alternative hypothesis: P1 ≠ P2 Note that these hypotheses constitute a two-tailed test. The null hypothesis will be rejected if the proportion from population 1 is too big or if it is too small. Formulate an analysis plan. For this analysis, the significance level is 0.05. The test method is a two-proportion z-test. Analyze sample data. Using sample data, we calculate the pooled sample proportion (p) and the standard error (SE). Using those measures, we compute the z-score test statistic (z). p = (p1 * n1 + p2 * n2) / (n1 + n2) p = [(0.38 * 100) + (0.51 * 200)] / (100 + 200) p = 140/300 = 0.467 SE = sqrt{ p * ( 1 - p ) * [ (1/n1) + (1/n2) ] } SE = sqrt [ 0.467 * 0.533 * ( 1/100 + 1/200 ) ] SE = sqrt [0.003733] = 0.061 z = (p1 - p2) / SE = (0.38 - 0.51)/0.061 = -2.13 where p1 is the sample proportion in sample 1, where p2 is the sample proportion in sample 2, n1 is the size of sample 1, and n2 is the size of sample 2. Since we have a two-tailed test, the P-value is the probability that the z-score is less than -2.13 or greater than 2.13. We use the Normal Distribution Calculator to find P(z < -2.13) = 0.017, and P(z > 2.13) = 0.017. Thus, the P-value = 0.017 + 0.017 = 0.034. Interpret results. Since the P-value (0.034) is less than the significance level (0.05), we cannot accept the null hypothesis. Where did I go wrong in my approach?
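The arithmetic in both approaches can be cross-checked numerically. A small sketch, using only the standard library and the numbers from the problem:

```python
from math import sqrt, erf

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

p1, n1 = 0.38, 100   # women
p2, n2 = 0.51, 200   # men

p = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled proportion
se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))   # pooled standard error
z = (p1 - p2) / se
p_value = 2 * norm_cdf(-abs(z))              # two-tailed

print(f"pooled p = {p:.3f}, SE = {se:.3f}, z = {z:.2f}, p-value = {p_value:.3f}")
```

This reproduces the solution's z ≈ −2.13 and two-sided p ≈ 0.033–0.034. The same SE also shows that 0.13 − 1.96 × 0.0611 ≈ +0.010, a positive number, so the 95% confidence interval in the asker's approach actually excludes zero as well; the two methods agree once the interval's lower bound is computed without the sign slip.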
Homework Statement Two identical uniform triangular metal plates held together by light rods. Calculate the x coordinate of the centre of mass of the two-plate object, given that the mass per unit area of each plate is 1.4 g/cm² and the total mass is 25.2 g. Homework Equations - Not sure where I went wrong here; can anyone help me out? Thanks. EDIT: Reformatted my request. Diagram: So as far as I know, to calculate the centre of mass for x, I have to use the following equation: COM(x): ##\frac{1}{M}\int x \, dm## And I also figured that to find the centre of mass, I will have to sum the mass of the 2 plates by 'cutting' them into strips, giving me the following formula: ##dm = \mu \, y \, dx## where ##\mu## is the mass per unit area. So subbing the above equation into the first, I get: ##\frac{1}{M}\int x (\mu \, y \, dx) = \frac{\mu}{M}\int xy \, dx## Since the 2 triangles are identical, I can assume the triangle on the left has equation ##y = \frac{1}{4}x + 4##. This is the part where I'm not sure. Do I calculate each triangle's centre of mass, sum them and divide by 2? Or am I supposed to use another method? Regardless, supposing I am correct: COM for right triangle: ##\frac{\mu}{M}\int_{4}^{16}x(\frac{1}{4}x+4) \, dx## = 8 (expected) COM for left triangle: ##\frac{\mu}{M}\int_{-11}^{1}x(-\frac{1}{4}x+4) \, dx## = 5.63... Total COM = ##(8+5.63)/2##, which is wrong :( Thanks
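On the conceptual question ("sum them and divide by 2?"): centres of mass of composite bodies combine as a mass-weighted average, which reduces to a plain average only when the masses are equal (as they are here, since the plates are identical). A minimal sketch of the combination rule; the centroid positions used below are hypothetical placeholders, not values read off the problem's diagram:

```python
def combined_com(masses, coms):
    """Mass-weighted average of individual centre-of-mass coordinates."""
    total = sum(masses)
    return sum(m * x for m, x in zip(masses, coms)) / total

# Two identical plates: equal masses, so the combined COM is the midpoint
# of the individual COMs. Each plate is half of the stated 25.2 g total.
m = 12.6  # g, hypothetical split; centroids 8.0 and 2.0 are placeholders
print(combined_com([m, m], [8.0, 2.0]))   # -> 5.0
```

For unequal masses the weighting matters: `combined_com([1, 3], [0.0, 4.0])` gives 3.0, not the midpoint 2.0.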
On the DNA Computer Binary Code In any finite set a partial order can be defined in different ways. But here, a partial order is defined on the set of four DNA bases in such a manner that a Boolean lattice structure is obtained. A Boolean lattice is an algebraic structure that captures essential properties of both set operations and logic operations. This partial order is defined based on the physico-chemical properties of the DNA bases: the number of hydrogen bonds and the chemical type, purine {A, G} or pyrimidine {U, C}. This physico-mathematical description permits the study of the genetic information carried by the DNA molecules as a computer binary code of zeros (0) and ones (1). 1. Boolean lattice of the four DNA bases In any four-element Boolean lattice every element is comparable to every other, except two of them that are, nevertheless, complementary. Consequently, to build a four-base Boolean lattice it is necessary for the bases with the same number of hydrogen bonds in the DNA molecule and of different chemical types to be complementary elements in the lattice. In other words, the complementary bases in the DNA molecule (G≡C and A=T, or A=U in mRNA) should be complementary elements in the Boolean lattice. Thus, there are four possible lattices, each one with a different base as the maximum element. 2. Boolean (logic) operations in the set of DNA bases The Boolean algebra on the set of elements X will be denoted by $(B(X), \vee, \wedge)$. Here the operators $\vee$ and $\wedge$ represent the classical "OR" and "AND", term by term. From the Boolean algebra definition it follows that this structure is (among other things) a lattice, in which any two elements $\alpha$ and $\beta$ have upper and lower bounds. In particular, the greatest lower bound of the elements $\alpha$ and $\beta$ is the element $\alpha\wedge\beta$ and the least upper bound is the element $\alpha\vee\beta$.
This equivalent partially ordered set is called a Boolean lattice. In every Boolean algebra (denoted by $(B(X), \vee, \wedge)$), for any two elements $\alpha,\beta \in X$ we have $\alpha \le \beta$ if and only if $\neg\alpha\vee\beta=1$, where the symbol "$\neg$" stands for logic negation. If the last equality holds, then it is said that $\beta$ is deduced from $\alpha$. Furthermore, if $\alpha \le \beta$ or $\alpha \ge \beta$, the elements $\alpha$ and $\beta$ are said to be comparable. Otherwise, they are said to be not comparable. In the set of four DNA bases, we can build twenty-four isomorphic Boolean lattices [1]. Herein, we focus our attention on the one described in reference [2], where the DNA bases G and C are taken as the minimum and maximum elements, respectively, in the Boolean lattice. The logic operations in this DNA computer code are given in the following tables:

OR (∨):
∨  G A U C
G  G A U C
A  A A C C
U  U C U C
C  C C C C

AND (∧):
∧  G A U C
G  G G G G
A  G A G A
U  G G U U
C  G A U C

It is well known that all Boolean algebras with the same number of elements are isomorphic. Therefore, our algebra $(B(X), \vee, \wedge)$ is isomorphic to the Boolean algebra $(\mathbb{Z}_2^2(X), \vee, \wedge)$, where $\mathbb{Z}_2 = \{0,1\}$. Then, we can represent this DNA Boolean algebra by means of the correspondence: $G \leftrightarrow 00$; $A \leftrightarrow 01$; $U \leftrightarrow 10$; $C \leftrightarrow 11$. So, in accordance with the operation tables: $A \vee U = C \leftrightarrow 01 \vee 10 = 11$ $U \wedge G = G \leftrightarrow 10 \wedge 00 = 00$ $G \vee C = C \leftrightarrow 00 \vee 11 = 11$ A Boolean lattice corresponds to a directed graph called a Hasse diagram, where two nodes (elements) $\alpha$ and $\beta$ are connected with a directed edge from $\alpha$ to $\beta$ (or from $\beta$ to $\alpha$) if, and only if, $\alpha \le \beta$ ($\alpha \ge \beta$, respectively) and there is no other element between $\alpha$ and $\beta$.
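Because of the two-bit correspondence, the whole base algebra (and the codon algebra of the next section, which acts base by base) can be checked mechanically. A small sketch, assuming only the G=00, A=01, U=10, C=11 encoding used in the text:

```python
# DNA-base Boolean algebra via the 2-bit encoding G=00, A=01, U=10, C=11;
# codon operations apply the base operations position by position.
ENC = {"G": 0b00, "A": 0b01, "U": 0b10, "C": 0b11}
DEC = {v: k for k, v in ENC.items()}

def b_or(x, y):   # join: least upper bound
    return DEC[ENC[x] | ENC[y]]

def b_and(x, y):  # meet: greatest lower bound
    return DEC[ENC[x] & ENC[y]]

def b_not(x):     # complement
    return DEC[ENC[x] ^ 0b11]

def codon_or(x, y):
    return "".join(b_or(a, b) for a, b in zip(x, y))

def codon_and(x, y):
    return "".join(b_and(a, b) for a, b in zip(x, y))

def codon_not(x):
    return "".join(b_not(a) for a in x)

# Base examples from the text:
print(b_or("A", "U"), b_and("U", "G"), b_or("G", "C"))   # -> C G C
# Codon examples from the next section of the text:
print(codon_or("CAG", "AUC"), codon_and("ACG", "UGA"), codon_not("CAU"))
# -> CCC GGG GUA
```

The three codon identities come out as the direct-product construction predicts, since bitwise OR/AND/NOT on six bits is exactly the componentwise base operation.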
The Genetic code Boolean Algebras Boolean algebras of codons are derived explicitly as the direct product $C(X) = B(X) \times B(X) \times B(X)$. These algebras are isomorphic to the dual Boolean algebras $(\mathbb{Z}_2^6, \vee, \wedge)$ and $(\mathbb{Z}_2^6, \wedge, \vee)$ induced by the isomorphism $B(X) \cong \mathbb{Z}_2^2$, where $X$ runs over the twenty-four possible ordered sets of four DNA bases [1]. For example: CAG $\vee$ AUC = CCC $\leftrightarrow$ 110100 $\vee$ 011011 = 111111 ACG $\wedge$ UGA = GGG $\leftrightarrow$ 011100 $\wedge$ 100001 = 000000 $\neg$ (CAU) = GUA $\leftrightarrow$ $\neg$ (110110) = 001001 The Hasse diagram for the corresponding Boolean algebra derived from the direct product of the Boolean algebra of four DNA bases given in the above operation table is: In the Hasse diagram, chains and anti-chains can be identified. A Boolean lattice subset is called a chain if any two of its elements are comparable; if, on the contrary, no two of its elements are comparable, the subset is called an anti-chain. In the Hasse diagram of codons shown in the figure, all chains of maximal length have the same minimum element GGG and maximum element CCC. Two codons are in the same chain of maximal length if and only if they are comparable, for example the chain: GGG $\leftrightarrow$ GAG $\leftrightarrow$ AAG $\leftrightarrow$ AAA $\leftrightarrow$ AAC $\leftrightarrow$ CAC $\leftrightarrow$ CCC The Hasse diagram symmetry reflects the role of hydrophobicity in the distribution of codons assigned to each amino acid. In general, codons that code for amino acids with extreme hydrophobic differences are in different chains of maximal length. In particular, codons with U as a second base will appear in chains of maximal length whereas codons with A as a second base will not.
For that reason, it will be impossible to obtain hydrophobic amino acids with codons having U in the second position through deductions from hydrophilic amino acids with codons having A in the second position. There are twenty-four Hasse diagrams of codons, corresponding to the twenty-four genetic-code Boolean algebras. These algebras form a group isomorphic to the symmetric group of degree four, $S_4$ [1]. In summary, the DNA binary code is not arbitrary, but subject to logic operations with underlying biophysical meaning. References Sánchez R. Symmetric Group of the Genetic-Code Cubes. Effect of the Genetic-Code Architecture on the Evolutionary Process. MATCH Commun Math Comput Chem, 2018, 79:527–60. Sánchez R, Morgado E, Grau R. A genetic code Boolean structure. I. The meaning of Boolean deductions. Bull Math Biol, 2005, 67:1–14.
This answer focuses more on minimizing the code, rather than finding the source of the problem, as the top-voted answer does. It is intended to be concise and hands-on, but digestible rather than exhaustive. Suggestions for improvement welcome! Here are some strategies for reducing your code, which will help you get better and faster answers, since it will be clearer what your problem is and the other users will see that you put some effort into producing a concise Minimal Working Example. Thanks for that! Most likely, not all of these things will apply to your question, so just pick what does apply. However, it is advised that you provide the community with something that will reproduce the problem in the easiest way possible. Typically this requires code that starts with \documentclass and ends with \end{document} (if using LaTeX). It will allow readers to copy-and-paste-and-compile your code and see exactly what problems you might be experiencing. What follows below are snippets of code; those marked Bad should typically not be used, as they may not be part of the problem, while those marked Good are suggested replacements. Note that these snippets should still form part of a larger \documentclass ... \end{document} structure as mentioned above. Document Class - Bad: \documentclass{MyUniversitysThesisClass} - Bad: \documentclass[..]{standalone} ...unless your problem relates to the standalone document class. standalone is usually meant for cropping stand-alone images to be included in a main document. If this doesn't pertain to you, don't use it. + Good: \documentclass{article} Using a non-standard document class? Does your problem still show up with article? Then use article. Document Class Options - Bad: \documentclass[12pt, a5paper, final, oneside, onecolumn]{article} + Good: \documentclass{article} Using any options for your document class? Does your problem still show up without them? Then get rid of them.
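To make the advice above concrete, a generic skeleton that most MWEs can start from looks like this (a sketch, not tied to any particular problem):

```latex
\documentclass{article}
% add ONLY the packages needed to reproduce the problem here
\begin{document}
Foo bar baz. % minimal content demonstrating the issue
\end{document}
```

Start from this and add one element at a time until the problem reappears; the last addition is usually the culprit, and the resulting file is your MWE.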
Comments - Bad:
\usepackage{booktabs} % nicer tables, better spacing
\usepackage{colortbl} % colored table cells
\usepackage{multirow} % multi-row cells in tables
\usepackage{subfloat} % sub-float environments
+ Good:
\usepackage{booktabs}
\usepackage{colortbl}
\usepackage{multirow}
\usepackage{subfloat}
You put comments in your code to remember what packages are there for? Great habit, but usually not necessary in a MWE – get rid of them. Loading Packages - Bad:
\usepackage{a4wide}
\usepackage{amsmath, amsthm, amssymb}
\usepackage{url}
\usepackage[algoruled,vlined]{algorithm2e}
\usepackage{graphicx}
\usepackage[ngerman, american]{babel}
\usepackage{booktabs}
\usepackage{units}
\usepackage{makeidx}
\makeindex
\usepackage[usenames,dvipsnames]{color}
\usepackage{colortbl}
\usepackage{epstopdf}
\usepackage{rotating}
+ Good:
% Assuming your problem is related e.g. to the rotation of a figure, you might need:
\usepackage{rotating}
You've developed an awesome template with lots of helpful packages? Does your problem still show up if you remove some or even most of them? Then get rid of those that aren't necessary for reproducing the problem. (If you should later find out that another package is complicating the situation, you can always ask another question or edit the existing question.) In most cases, even packages like inputenc or fontenc are not necessary in MWEs, even though they are essential for many non-English "real" documents. Images - Bad:
\includegraphics{graphs/dataset17b.pdf}
+ Good:
\usepackage[demo]{graphicx}
....
\includegraphics{graphs/dataset17b.pdf}
+ Good:
\usepackage{graphicx}
....
\includegraphics{example-image}% Image from the mwe package
Your problem includes an image? Does your problem show up with any image? Then use the demo option of the graphicx package – this way, other users who don't have your image file won't get an error message because of that.
If you prefer an actual image that you can rotate, stretch, etc., use the mwe package, which provides a number of dummy images, named e.g. example-image. If your problem is specific to the size of the included image, still use mwe's example-image, but also specify the width and height so it more readily replicates your custom image's dimensions. Again, this way the problem is reproducible without using your image. Text - Bad: In \cite{cite:0}, it is shown that $\Delta \subset {U_{\mathcal{{D}}}}$. Hence Y. Q. Qian's characterization of conditionally uncountable elements was a milestone in constructive algebra. Now it has long been known that there exists an almost everywhere Clifford right-canonically pseudo-integrable, Clairaut subset \cite{cite:0}. The groundbreaking work of J. Davis on isomorphisms was a major advance. In future work, we plan to address questions of uniqueness as well as degeneracy. Thus in \cite{cite:0}, the main result was the classification of meromorphic, completely left-invariant systems. + Good:
\usepackage{lipsum} % just for dummy text
...
\lipsum[1-3]
+ Good: Foo bar baz. Need a few paragraphs of text to demonstrate your problem? Use a package that produces dummy text. Popular choices are lipsum (plain paragraphs) and blindtext (can produce entire documents with section titles, lists, and formulae). Need just a tiny amount of text? Then keep it maximally simple; avoid formulae, italics, tables – anything that's not essential to the problem. Popular choices for dummy words are foo, bar, and baz. Bibliography Files + Good:
\usepackage{filecontents}
\begin{filecontents*}{\jobname.bib}
@book{Knu86,
  author = {Knuth, Donald E.},
  year   = {1986},
  title  = {The \TeX book},
}
\end{filecontents*}
\bibliography{\jobname}        % if you're using BibTeX
\addbibresource{\jobname.bib}  % if you're using biblatex
Need a .bib file to reproduce your problem? Use a maximally simple entry embedded in a filecontents* environment in the preamble.
During the compilation, this will create a .bib file in the same directory as the .tex file, so users compiling your code only need to save one file themselves. Another option for biblatex would be to use the file biblatex-examples.bib, which should be installed with biblatex by default. You can find it in bibtex/bib/biblatex/. Data - Bad: Including your data as an image. - Bad:
Number of points Values
10 100
20 400
30 1200
40 2345
etc...
+ Good:
\usepackage{filecontents}
\begin{filecontents*}{data.txt}
Number of points, Values
10, 100
20, 400
30, 1200
40, 2345
\end{filecontents*}
Including the data as part of the MWE makes the example portable as well. Of course, the input may differ depending on what package you use to manage the data (some require CSV, some don't). Index + Good:
\usepackage{filecontents}
\begin{filecontents*}{\jobname.ist}
delim_0 "\\dotfill "
\end{filecontents*}
The index style can be included in the filecontents* environment in the preamble. The contents (and file extension) will differ according to the required indexing application (makeindex or xindy). Sometimes a problem can only be demonstrated with an index that spans several pages. The testidx package is like lipsum etc. but the dummy text is interspersed with \index commands to make it easier to test index styles. It has over 400 top-level terms (along with some sub-items and sub-sub-items) that include every basic Latin letter group (A–Z) as well as some extended Latin characters and a few digraphs. - Bad:
\begin{document}
aa\index{aa} ab\index{ab} ... zy\index{zy} zz\index{zz}
\printindex
\end{document}
+ Good:
\begin{document}
\testidx
\printindex
\end{document}
If page breaking is the source of your problem (for example, after a letter group heading or between an item and sub-item), there's a high probability of an awkward break occurring given the large number of test items, but you can alter the page dimensions or font size to ensure one occurs in your MWE.
Glossaries The glossaries package comes with some files containing dummy entries, which can be used in MWEs. - Bad:
\newglossaryentry{sample1}{name={sample1},description={description 1}}
...
\newglossaryentry{sample100}{name={sample100},description={description 100}}
\newacronym{ac1}{ac1}{acronym 1}
...
\newacronym{ac100}{ac100}{acronym 100}
+ Good:
\loadglsentries{example-glossaries-brief}
\loadglsentries[\acronymtype]{example-glossaries-acronym}
See Dummy Entries for Testing for a complete list of dummy entry files provided by glossaries. There's an additional file example-glossaries-xr.tex provided by glossaries-extra. Formatting your code Formatting of code is done using Markdown. See the relevant FAQ How do I mark code blocks?. There is also some syntax highlighting, a discussion of which can be found at What is syntax highlighting and how does it work?. With the above in mind, don't post your code in comments, since comments only support a limited amount of Markdown. Posting a Picture of Your Output It's often helpful to see what your current, faulty output looks like. If you're not sure how to do that, have a look at How does one add a LaTeX output to a question/answer? and how can i upload an image to be included in a question or answer?. Selection of packages inspired by Inconsistent rotations with \sidewaysfigure. Math ramble generated by Mathgen. Bibliography sample from lockstep's question biblatex: Putting thin spaces between initials.
I want to implement the EM algorithm manually and then compare it to the results of the normalmixEM function of the mixtools package. Of course, I would be happy if they both lead to the same results. The main reference is Geoffrey McLachlan (2000), Finite Mixture Models. I have a mixture density of two Gaussians; in general form, the complete-data log-likelihood is given by (McLachlan page 48): $$\log L_c(\Psi) = \sum_{i=1}^g \sum_{j=1}^n z_{ij}\{\log \pi_i + \log f_i(y_j;\theta_i)\}.$$The $z_{ij}$ are $1$ if the $j$th observation is from the $i$th component density, otherwise $0$. The $f_i$ is the density of the normal distribution. The $\pi_i$ are the mixture proportions, so $\pi_1$ is the probability that an observation is from the first Gaussian distribution and $\pi_2$ is the probability that an observation is from the second Gaussian distribution. The E step is now the calculation of the conditional expectation: $$ Q(\Psi;\Psi^{(0)}) = E_{\Psi^{(0)}}\{\log L_c(\Psi)\mid y\}, $$ which leads, after a few derivations, to the result (page 49): \begin{align} \tau_i(y_j;\Psi^{(k)}) &= \frac{\pi_i^{(k)}f_i(y_j;\theta_i^{(k)})}{f(y_j;\Psi^{(k)})} \\[8pt] &= \frac{\pi_i^{(k)}f_i(y_j;\theta_i^{(k)})}{\sum_{h=1}^g \pi_h^{(k)}f_h(y_j;\theta_h^{(k)})} \end{align} In the case of two Gaussians (page 82): $$\tau_i(y_j;\Psi) = \frac{\pi_i \phi(y_j;\mu_i,\Sigma_i)}{\sum_{h=1}^g \pi_h\phi(y_j; \mu_h,\Sigma_h)}$$The M step is now the maximization of Q (page 49): $$ Q(\Psi;\Psi^{(k)}) = \sum_{i=1}^g\sum_{j=1}^n\tau_i(y_j;\Psi^{(k)})\{\log \pi_i + \log f_i(y_j;\theta_i)\}. $$ This leads to (in the case of two Gaussians) (page 82): \begin{align} \mu_i^{(k+1)} &= \frac{\sum_{j=1}^n \tau_{ij}^{(k)}y_j}{\sum_{j=1}^n \tau_{ij}^{(k)}} \\[8pt] \Sigma_i^{(k+1)} &= \frac{\sum_{j=1}^n \tau_{ij}^{(k)}(y_j - \mu_i^{(k+1)})(y_j - \mu_i^{(k+1)})^T}{\sum_{j=1}^n \tau_{ij}^{(k)}} \end{align} and we know that (p. 50) $$ \pi_i^{(k+1)} = \frac{\sum_{j=1}^n \tau_i(y_j;\Psi^{(k)})}{n}\qquad (i = 1, \ldots, g).
$$ We repeat the E and M steps until $L(\Psi^{(k+1)})-L(\Psi^{(k)})$ is small. I tried to write R code (data can be found here).

# EM algorithm manually
# dat is the data
# initial values
pi1 <- 0.5
pi2 <- 0.5
mu1 <- -0.01
mu2 <- 0.01
sigma1 <- 0.01
sigma2 <- 0.02
loglik[1] <- 0
loglik[2] <- sum(pi1*(log(pi1) + log(dnorm(dat,mu1,sigma1)))) + sum(pi2*(log(pi2) + log(dnorm(dat,mu2,sigma2))))
tau1 <- 0
tau2 <- 0
k <- 1
# loop
while(abs(loglik[k+1]-loglik[k]) >= 0.00001) {
  # E step
  tau1 <- pi1*dnorm(dat,mean=mu1,sd=sigma1)/(pi1*dnorm(x,mean=mu1,sd=sigma1) + pi2*dnorm(dat,mean=mu2,sd=sigma2))
  tau2 <- pi2*dnorm(dat,mean=mu2,sd=sigma2)/(pi1*dnorm(x,mean=mu1,sd=sigma1) + pi2*dnorm(dat,mean=mu2,sd=sigma2))
  # M step
  pi1 <- sum(tau1)/length(dat)
  pi2 <- sum(tau2)/length(dat)
  mu1 <- sum(tau1*x)/sum(tau1)
  mu2 <- sum(tau2*x)/sum(tau2)
  sigma1 <- sum(tau1*(x-mu1)^2)/sum(tau1)
  sigma2 <- sum(tau2*(x-mu2)^2)/sum(tau2)
  loglik[k] <- sum(tau1*(log(pi1) + log(dnorm(x,mu1,sigma1)))) + sum(tau2*(log(pi2) + log(dnorm(x,mu2,sigma2))))
  k <- k+1
}
# compare
library(mixtools)
gm <- normalmixEM(x, k=2, lambda=c(0.5,0.5), mu=c(-0.01,0.01), sigma=c(0.01,0.02))
gm$lambda
gm$mu
gm$sigma
gm$loglik

The algorithm is not working, since some observations have a likelihood of zero and the log of this is -Inf. Where is my mistake?
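For comparison, here is a compact reference implementation of the same two-component EM in Python rather than R (a sketch under the usual univariate assumptions). One general point it makes explicit: the normal density must be evaluated with a standard deviation or a variance consistently — R's dnorm takes a standard deviation, so a variance computed in the M step must be square-rooted before being passed in, a common stumbling block.

```python
import numpy as np

def norm_pdf(y, mu, var):
    # Normal density parametrized by VARIANCE, used consistently throughout.
    return np.exp(-(y - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def em_two_gaussians(y, pi1=0.5, mu=(-1.0, 1.0), var=(1.0, 1.0),
                     tol=1e-6, max_iter=500):
    mu1, mu2 = mu
    v1, v2 = var
    prev = -np.inf
    for _ in range(max_iter):
        # E step: posterior responsibilities tau_ij
        d1 = pi1 * norm_pdf(y, mu1, v1)
        d2 = (1 - pi1) * norm_pdf(y, mu2, v2)
        tau1 = d1 / (d1 + d2)
        tau2 = 1 - tau1
        # M step: weighted proportion, means, and variances
        pi1 = tau1.mean()
        mu1 = np.sum(tau1 * y) / np.sum(tau1)
        mu2 = np.sum(tau2 * y) / np.sum(tau2)
        v1 = np.sum(tau1 * (y - mu1) ** 2) / np.sum(tau1)
        v2 = np.sum(tau2 * (y - mu2) ** 2) / np.sum(tau2)
        # observed-data log-likelihood, used for the stopping rule
        ll = np.sum(np.log(pi1 * norm_pdf(y, mu1, v1)
                           + (1 - pi1) * norm_pdf(y, mu2, v2)))
        if abs(ll - prev) < tol:
            break
        prev = ll
    return pi1, (mu1, mu2), (v1, v2), ll

# Synthetic check on data with known components (seeded for reproducibility).
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(2, 0.5, 200)])
pi1, mus, variances, ll = em_two_gaussians(y)
print(pi1, mus, variances)
```

On this well-separated synthetic data the estimates come out near the true values (mixing proportion 0.6, means ±2). Monitoring the observed-data log-likelihood, rather than a quantity recomputed after the parameters change, also gives a monotone convergence criterion.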
For the sake of completeness, here are more general statements: 1) Every separable type I factor is singly generated. You take the weighted shift $S=\sum_j 2^{-j}E_{j,j+1}$. Then $W^*(S)$ contains $(SS^*)^{1/2}=\sum_j2^{-j}E_{jj}$ and now by using the characteristic function of $\{2^{-j}\}$ we obtain $E_{jj}\in W^*(S)$ for all $j$. Now $E_{k,k+1}=2^kE_{kk}S\in W^*(S)$ and we can get the rest of the matrix units as in $E_{k,k+2}=E_{k,k+1}E_{k+1,k+2}$, etc. Then $W^*(S)$ contains all matrix units and is then equal to $B(H)$. 2) A countable direct sum of singly generated algebras is singly generated. If $\mathcal M=\bigoplus_{j\in J}\mathcal M_j$, $J\subset\mathbb N$, we take generators $M_j\in\mathcal M_j$ with $\sigma(M_j)\subset[3/4,1]$ (note that $M$ is a generator if and only if $\alpha M+\beta I$ is a generator for any $\alpha,\beta\in\mathbb C$ with $\alpha\neq0$). Now $M=\bigoplus_j2^{-j}M_j$ is a generator, as we can isolate each summand via functional calculus as in 1). So 1) and 2) address Sébastien's question. To go further, a generalization of 1) shows that any separable infinite factor (i.e. II$_\infty$ or III) is singly generated: for type II$_\infty$, you take a dense countable subset of the positive unit ball, normalize all the elements, and form the "weighted shift" as in 1). For type III, you use that $M=M\otimes B(H)$ and you can also form the weighted shift. 3) Separable abelian von Neumann algebras are also singly generated (by a selfadjoint), by a theorem of von Neumann himself. A nice proof can be found in II.2.8 of Davidson's "C$^*$-algebras by Example". 4) The tensor product of a separable abelian algebra and a singly generated algebra is singly generated. This is a wonderful old trick. If $\mathcal A$ is separable abelian, it has a selfadjoint generator $a$. If $\mathcal M$ is singly generated, it has a generator $b+ic$ with $b,c$ selfadjoint.
Then $W^*(a)\otimes W^*(b)$ is separable and abelian, and so it has a selfadjoint generator $x$; similarly $W^*(a)\otimes W^*(c)$ has a selfadjoint generator $y$. Then $x+iy$ is a generator for $\mathcal A\otimes\mathcal M$ (since $W^*(x+iy)$ contains $a\otimes I$, and $I\otimes(b+ic)$). 5) A separable von Neumann algebra with no type II$_1$ summand is singly generated. Such an algebra is of the form $\bigoplus_j\mathcal A_j\otimes\mathcal M_j$, with each $\mathcal A_j$ abelian and separable, and each $\mathcal M_j$ a non II$_1$-factor (and so, singly generated). So the "generator problem" for von Neumann algebras is reduced to the case of II$_1$-factors. 6) The question of whether all separable II$_1$-factors are singly generated is still open.
Digital PLL's -- Part 2 In Part 1, we found the time response of a 2nd-order PLL with a proportional + integral (lead-lag) loop filter. Now let's look at this PLL in the Z-domain [1, 2]. We will find that the response is characterized by a loop natural frequency ω_n and damping coefficient ζ. Having a Z-domain model of the DPLL will allow us to do three things: Compute the values of loop filter proportional gain K_L and integrator gain K_I that give the desired loop natural frequency and damping. Deriving these formulae is somewhat involved, but the good news is we only have to do it once. Compute the linear-system time response to a step in the reference phase. Compute the frequency response. Figure 1. DPLL Time Domain Model Block Diagram Figure 1 is the time-domain DPLL model we derived in Part 1. To convert this to a useful Z-domain model, we replace the accumulators in the Loop Filter and NCO with the transfer function z^-1/(1 − z^-1), whose numerator and denominator we can multiply by z to get 1/(z − 1). This gives us the model in Figure 2, where we have also indicated the phase detector gain K_p. Figure 2. DPLL in the Z-domain The open-loop response of this system is the product of the transfer functions of the three blocks: $$ G_1(z) = \frac{K_pK_LK_{nco}}{z-1} + \frac{K_pK_IK_{nco}}{(z-1)^2} \tag{1} $$ 2nd Order System in s and z A 2nd-order continuous-time system with a lead-lag filter is shown in Figure 3 [3, 4]. See Appendix B for a derivation of the closed-loop response. If we convert this to the Z-domain, we'll see the response has the same form as that of our DPLL. This will allow us to relate K_L and K_I of the DPLL to ω_n and ζ. Figure 3.
2nd-order system in s with a zero in the closed-loop response To convert this system to the z-domain equivalent, make the following replacement: $$ s \rightarrow \frac{z-1}{T_s} $$ This approximation is valid as long as the loop natural frequency is much less than the sample frequency (see Appendix C). The Z-domain block diagram is shown in Figure 4, where we have allowed for the possibility that the loop filter could have a sample rate T_s_filt different from the NCO sample rate T_s_nco. Figure 4. 2nd-order system in the z-domain The open-loop response G_2(z) is: $$ G_2(z) = \frac{2\zeta\omega_nT_{s\_nco}}{z-1} + \frac{T_{s\_filt}T_{s\_nco}\omega_n^2}{(z-1)^2} \tag{2} $$ By equating the open-loop response G_1 of our DPLL to G_2, we can find K_L and K_I in terms of ω_n and ζ. Equating G_1 and G_2: $$ \frac{K_pK_LK_{nco}}{z-1} + \frac{K_pK_IK_{nco}}{(z-1)^2} = \frac{2\zeta\omega_nT_{s\_nco}}{z-1} + \frac{T_{s\_filt}T_{s\_nco}\omega_n^2}{(z-1)^2} $$ Solve for K_L and K_I: $ K_pK_LK_{nco} = 2\zeta\omega_nT_{s\_nco} $ → $ K_L = 2\zeta\omega_n/K_p\; * \; T_{s\_nco}/K_{nco} $ (3) $ K_pK_IK_{nco} = T_{s\_filt}T_{s\_nco}\omega_n^2 $ → $ K_I = T_{s\_filt}\omega_n^2/K_p \; * \; T_{s\_nco}/K_{nco} $ (4) A given DPLL design has defined values for T_s_filt, T_s_nco, K_p, and K_nco. Given those values, K_L and K_I are uniquely determined by the choice of ω_n and ζ. Note that the units of K_p are cycle^-1. See Appendix D for an alternate form of the equations for K_L and K_I.
Computing the Closed-loop response The closed-loop phase response of the DPLL in Figure 2 is given by: $$ CL(z) = \frac{Z[u]}{Z[ref\_phase]} = \frac{G_1}{1 + G_1} $$ Thus from equation 1, $$ CL(z) = \frac{\frac{K_pK_LK_{nco}}{z-1} + \frac{K_pK_IK_{nco}}{(z-1)^2}}{1 + \frac{K_pK_LK_{nco}}{z-1} + \frac{K_pK_IK_{nco}}{(z-1)^2}} $$ $$ CL(z) = \frac {b_0 + b_1z^{-1}} {1 + a_1z^{-1} + a_2z^{-2}} \qquad (5) $$ where $b_0 = K_pK_LK_{nco}$, $b_1 = K_pK_IK_{nco} - K_pK_LK_{nco}$, $a_1 = K_pK_LK_{nco} - 2$, and $a_2 = 1 + K_pK_IK_{nco} - K_pK_LK_{nco}$. Equation 5 is in the form of an IIR filter transfer function, which allows for straightforward computation of the time and frequency responses in Matlab. Example This example uses the same parameters as Example 2 in Part 1. We will compute K_L and K_I, then we will find the time and frequency response using CL(z). For this example, f_s_nco = f_s_filt = f_s. The Matlab script is listed in Appendix A. The DPLL parameters are as follows (not all of the parameters from the time-domain model apply to the Z-domain model):

f_s = 25 MHz
Reference frequency: NA
Initial reference phase: NA
NCO initial frequency error: NA
K_nco = 1/4096
K_p = 2 cycle^-1
f_n = 400 Hz (loop natural frequency, f_n = ω_n/(2π))
ζ = 1.0 (loop damping coefficient)

1. Find K_L and K_I

KL= 2*zeta*wn*Ts/(KP*Knco) % loop filter proportional gain
KI= wn^2*Ts^2/(KP*Knco)    % loop filter integral gain

KL = 0.4118
KI = 2.0698e-005

2. Compute the time response to a step in the reference phase. Since CL(z) is in the form of a digital filter transfer function, we can find the time response using the Matlab "filter" function.

b0= KP*KL*Knco;
b1= KP*Knco*(KI - KL);
a1= KP*KL*Knco - 2;
a2= 1 + KP*Knco*(KI - KL);
b= [b0 b1];      % numerator coeffs
a= [1 a1 a2];    % denominator coeffs
x= ones(1,N);    % step function
y= filter(b,a,x);   % step response
pe= y-1;         % phase error response

The phase error response is shown in Figure 5.
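The same two computations can be reproduced outside Matlab. Here is a sketch in Python with the example's parameters, applying the IIR difference equation implied by equation 5 directly instead of calling Matlab's filter:

```python
import math

# Example parameters from the text (f_s_nco = f_s_filt = f_s)
fs = 25e6                 # Hz, sample rate
Ts = 1 / fs
Knco = 1 / 4096           # NCO gain
Kp = 2.0                  # phase detector gain, cycle^-1
fn = 400.0                # Hz, loop natural frequency
wn = 2 * math.pi * fn
zeta = 1.0                # damping coefficient

# Loop filter gains (equations 3 and 4 with Ts_filt = Ts_nco = Ts)
KL = 2 * zeta * wn * Ts / (Kp * Knco)
KI = wn**2 * Ts**2 / (Kp * Knco)

# Closed-loop IIR coefficients (equation 5)
b0 = Kp * KL * Knco
b1 = Kp * Knco * (KI - KL)
a1 = Kp * KL * Knco - 2
a2 = 1 + Kp * Knco * (KI - KL)

# Step response via the difference equation
# y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1] - a2*y[n-2]
N = 100_000
y = [0.0, 0.0]            # zero initial conditions
for n in range(N):
    x_n = 1.0
    x_nm1 = 1.0 if n >= 1 else 0.0
    y.append(b0 * x_n + b1 * x_nm1 - a1 * y[-1] - a2 * y[-2])
y = y[2:]

print(f"KL = {KL:.4f}, KI = {KI:.4e}, final phase error = {y[-1] - 1:.2e}")
```

Running this reproduces KL ≈ 0.4118 and KI ≈ 2.0698e-5 from step 1, and the phase error y − 1 decays toward zero over a few milliseconds, consistent with the settling behavior shown in Figure 5.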
Because this model is linear, the non-linear acquisition behavior we saw in the time-domain model of Part 1 (Figure 3.4) is missing. Thus we see that the Z-domain model is less capable than the time-domain model for computing the time response. Finally, one detail worth mentioning: the response has some overshoot. This is caused by the zero in CL(z). (An all-pole system would not have overshoot for ζ = 1). 3. Compute the frequency response CL(z). u = 0:.1:.9; f= 10* 10 .^u; % log-scale frequencies f = [f 10*f 100*f 1000*f]; z = exp(j*2*pi*f/fs); % complex frequency z CL= (b0 + b1*z.^-1)./(1 + a1*z.^-1 + a2*z.^-2); % closed-loop response CL_dB= 20*log10(abs(CL)); semilogx(f,CL_dB),grid The closed-loop frequency response is shown in Figure 6. Comparing this response to that of the equivalent continuous-time system in Figure B.2, we see that they match. Note the peak in the response occurs near the loop natural frequency of 400 Hz. The slope of the response in the stopband is -20 dB/decade. Figure 5. Phase Error for unit-step change in reference phase. f n = 400 Hz, ζ = 1.0. Figure 6. Closed-Loop Frequency Response. f n = 400 Hz, ζ = 1.0. 4. Plot the step response and the frequency response for different values of damping. Figure 7 shows the step response and Figure 8 shows the frequency response. Figure 7. Step Response. f n = 400 Hz; ζ = 0.5 (blue), 1.0 (green), 2.0 (red) Figure 8. Closed-Loop Frequency Response. f n = 400 Hz; ζ = 0.5 (blue), 1.0 (green), 2.0 (red) Appendix A. 
Z-Domain model of DPLL with fn = 400 Hz

%pll_response_z2.m nr 5/24/16
% Digital 2nd order type 2 PLL
% step response and closed loop frequency response
Knco= 1/4096; % NCO gain
KP= 2; % 1/cycles phase detector gain
wn = 2*pi*400; % rad/s loop natural frequency
fs = 25e6; % Hz sample rate
zeta = 1; % damping factor
Ts= 1/fs; % s sample time
KL= 2*zeta*wn*Ts/(KP*Knco) % loop filter proportional gain
KI= wn^2*Ts^2/(KP*Knco) % loop filter integral gain
% Find coeffs of closed-loop transfer function of u/ref_phase
% CL(z) = (b0 + b1z^-1)/(a2z^-2 + a1z^-1 + 1)
b0= KP*KL*Knco;
b1= KP*Knco*(KI - KL);
a1= KP*KL*Knco - 2;
a2= 1 + KP*Knco*(KI - KL);
b= [b0 b1]; % numerator coeffs
a= [1 a1 a2]; % denominator coeffs
% step response
N= 100000;
n= 1:N;
t= n*Ts;
x= ones(1,N); % step function
y= filter(b,a,x); % step response
pe= y - 1; % phase error response
plot(t*1e3,pe),grid
xlabel('ms'),ylabel('Phase Error = u/ref-phase -1'),figure %plot phase error
% Closed-loop frequency response
u = 0:.1:.9;
f= 10* 10 .^u; % log-scale frequencies
f = [f 10*f 100*f 1000*f];
z = exp(j*2*pi*f/fs); % complex frequency z
CL= (b0 + b1*z.^-1)./(1 + a1*z.^-1 + a2*z.^-2); % closed-loop response
CL_dB= 20*log10(abs(CL));
semilogx(f,CL_dB),grid
xlabel('Hz'),ylabel('CL(z) dB')

Appendix B. 2nd order continuous-time system closed loop response in s

Figure B.1. 2nd order system in s with a zero in the closed-loop response

$ A(s) = 2\zeta\omega_n + \frac {\omega_n^2} {s} $ lead-lag filter

$ G(s) = \frac {1} {s} A(s) = \frac {2\zeta\omega_n} {s} + \frac {\omega_n^2} {s^2}$

$ CL(s) = G/(1+G) $

$ CL(s) = \frac {2\zeta\omega_n s + \omega_n^2} {s^2 + 2\zeta\omega_n s + \omega_n^2} $

where ωn = 2πfn is the loop natural frequency and ζ is the damping factor.

Figure B.2. Closed-Loop Frequency Response. fn = 400 Hz, ζ = 1.0.

Appendix C. Converting H(s) to H(z)

This is a way to approximate H(s) when a system's passband frequency range is much less than the sample frequency.
We choose this method because it results in a block diagram and transfer function that have the same form as those of our DPLL in Figure 2. The definition of z is z = exp(sT_s), where s is complex frequency and T_s is the sample time. If we approximate z by the first two terms in the Taylor series for e^x, we have

z ≈ 1 + sT_s

Here we are assuming 2πfT_s << 1, or f/f_s << 1/(2π). For our examples, we have been using fn = 5 kHz or less and fs = 25 MHz, so fn/fs = .0002 << 1/(2π). Rearranging, we get s ≈ (z − 1)/T_s. To convert H(s) to H(z), we replace each occurrence of the variable s by (z − 1)/T_s.

Appendix D. Alternative formulae for loop filter coefficients

The gain block in front of the NCO has gain Knco. The output frequency of the NCO due just to Vtune is f = Vtune·Knco·fs_nco. If we define Kv = Knco·fs_nco Hz, then Knco can be replaced by Kv·T_s_nco in the formulae for KL and KI. So equations 3 and 4 become:

$ K_L = 2\zeta\omega_n/(K_pK_v) $

$ K_I = \omega_n^2T_{s\_filt}/(K_pK_v) $

Here, the units of Kp are cycle⁻¹, which is consistent with Kv in Hz. Alternative units for Kp and Kv are radian⁻¹ and rad/s, respectively.

References

1. Gardner, Floyd M., Phaselock Techniques, 3rd Ed., Wiley-Interscience, 2005, Chapter 4.
2. Rice, Michael, Digital Communications, a Discrete-Time Approach, Pearson Prentice Hall, 2009, Appendix C.1.3.

6/9/2016 Neil Robertson
Title: Picard and Chazy solutions to the Painlevé VI equation
Publication Type: Journal Article
Year of Publication: 2001
Authors: Mazzocco, M
Journal: Math. Ann. 321 (2001) 157-195
Abstract: I study the solutions of a particular family of Painlevé VI equations with the parameters $\beta=\gamma=0, \delta=1/2$ and $2\alpha=(2\mu-1)^2$, for $2\mu\in\mathbb{Z}$. I show that the case of half-integer $\mu$ is integrable and that the solutions are of two types: the so-called Picard solutions and the so-called Chazy solutions. I give explicit formulae for them and completely determine their asymptotic behaviour near the singular points $0,1,\infty$ and their nonlinear monodromy. I study the structure of analytic continuation of the solutions to the PVI$\mu$ equation for any $\mu$ such that $2\mu\in\mathbb{Z}$. As an application, I classify all the algebraic solutions. For $\mu$ half-integer, I show that they are in one to one correspondence with regular polygons or star-polygons in the plane. For $\mu$ integer, I show that all algebraic solutions belong to a one-parameter family of rational solutions.
URL: http://hdl.handle.net/1963/3118
DOI: 10.1007/PL00004500
Consider elastic net regression with glmnet-like parametrization of the loss function$$\mathcal L = \frac{1}{2n}\big\lVert y - \beta_0-X\beta\big\rVert^2 + \lambda\big(\alpha\lVert \beta\rVert_1 + (1-\alpha) \lVert \beta\rVert^2_2/2\big).$$ I have a data set with $n\ll p$ (44 and 3000 respectively) and I am using repeated 11-fold cross-validation to select the optimal regularization parameters $\alpha$ and $\lambda$. Normally I would use squared error as the performance metric on the test set, e.g. this R-squared-like metric: $$L_\text{test} = 1-\frac{\lVert y_\text{test} - \hat\beta_0 - X_\text{test}\hat\beta\rVert^2}{\lVert y_\text{test} - \hat\beta_0\rVert^2},$$ but this time I also tried using correlation metric (note that for the un-regularized OLS regression minimizing the squared error loss is equivalent to maximizing the correlation): $$L_\text{test}=\operatorname{corr}(y_\text{test}, X_\text{test}\hat\beta).$$ It's clear that these two performance metrics are not exactly equivalent, but weirdly, they disagree rather strongly: Note in particular what happens at small alphas, e.g. $\alpha=.2$ (green line): maximum test-set correlation is achieved when test-set $R^2$ drops quite substantially compared to its maximum. In general, for any given $\alpha$, correlation seems to be maximized at larger $\lambda$ than squared error. Why does it happen and how to deal with it? Which criterion should be preferred? Has anybody encountered this effect?
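One mechanism that can produce this kind of disagreement: the correlation metric is invariant to scaling (and shifting) of the predictions, while the squared-error metric is not, and stronger shrinkage (larger $\lambda$) scales the fitted coefficients, and hence the predictions, toward zero. A toy sketch illustrating the scale-(in)sensitivity of the two metrics — synthetic data and a simplified R²-like metric, not glmnet or the actual 44×3000 data set:

```python
import numpy as np

# Two sets of predictions for the same targets: one well-scaled, one
# heavily shrunk toward zero (mimicking strong regularization).
rng = np.random.default_rng(0)
y = rng.normal(size=200)
yhat = 0.8 * y + 0.3 * rng.normal(size=200)   # well-scaled predictions
yhat_shrunk = 0.1 * yhat                       # same shape, shrunk 10x

def r2_like(y, yhat):
    # simplified squared-error metric (1 - SSE/SST)
    return 1 - np.sum((y - yhat)**2) / np.sum((y - y.mean())**2)

def corr(y, yhat):
    return np.corrcoef(y, yhat)[0, 1]

# Correlation is unchanged by the shrinkage; the R^2-like metric collapses.
print(corr(y, yhat), corr(y, yhat_shrunk))
print(r2_like(y, yhat), r2_like(y, yhat_shrunk))
```

So a heavily regularized model can still rank well under the correlation criterion while scoring poorly on squared error, which is consistent with correlation being maximized at larger $\lambda$.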
Let $BG$ be the classifying space of a topological group $G$.

1. If $G$ is any compact group and $H$ is a closed subgroup of $G$, is \begin{equation*} G/H\rightarrow BH\rightarrow BG \end{equation*} induced by the inclusion map $i:H\rightarrow G$ a fiber bundle?

2. If $G$ is any compact group and $H$ is a closed subgroup of $G$, is \begin{equation*} G/H\rightarrow BH\rightarrow BG \end{equation*} induced by the inclusion map $i:H\rightarrow G$ a fibration?

3. If $G$ is any compact group and $N$ is a closed normal subgroup of $G$, is \begin{equation*} BN\rightarrow BG\rightarrow B\left( G/N\right) \end{equation*} induced by the quotient map $\pi :G\rightarrow G/N$ a fiber bundle?

4. If $G$ is any compact group and $N$ is a closed normal subgroup of $G$, is \begin{equation*} BN\rightarrow BG\rightarrow B\left( G/N\right) \end{equation*} induced by the quotient map $\pi :G\rightarrow G/N$ a fibration?
Context: On the 5th page of the paper Quantum circuit design for solving linear systems of equations (Cao et al, 2012) there's this circuit: Schematic: A brief schematic of what's actually happening in the circuit is: Question: Cao et al.'s circuit (in Figure 4) is specifically made for the matrix: $$A = \frac{1}{4} \left(\begin{matrix} 15 & 9 & 5 & -3 \\ 9 & 15 & 3 & -5 \\ 5 & 3 & 15 & -9 \\ -3 & -5 & -9 & 15 \end{matrix}\right)$$ whose eigenvalues are $\lambda_1 = 2^{1-1}=1,\lambda_2 = 2^{2-1}=2,\lambda_3 = 2^{3-1}=4$ and $\lambda_4 = 2^{4-1} = 8$ and corresponding eigenvectors are $|u_i\rangle = \frac{1}{2}\sum_{j=1}^{4}(-1)^{\delta_{ij}}|j\rangle^C$. In this case since there are $4$ qubits in the clock register, the $4$ eigenvalues can be represented as states of the clock register itself (no approximation involved) i.e. as $|0001\rangle$ (binary representation of $1$), $|0010\rangle$ (binary representation of $2$), $|0100\rangle$ (binary representation of $4$) and $|1000\rangle$ (binary representation of $8$). After the first Quantum phase estimation step, the circuit's (in Fig. 4) state is $$|0\rangle_{\text{ancilla}} \otimes \sum_{j=1}^{j=4} \beta_j |\lambda_j\rangle |u_j\rangle$$ Everything is fine till here. However, after this, to produce the state $$\sum_{j=1}^{j=4} \beta_j |u_j\rangle_I |\lambda_j\rangle^C ((1-C^2/\lambda_j^2)^{1/2}|0\rangle + C/\lambda_j|1\rangle)$$ it seems necessary to get to the state $$\sum_{j=1}^{j=4} \beta_j |u_j\rangle_I |\frac{2^{3}}{\lambda_j}\rangle^C\otimes |0\rangle_{\text{ancilla}}$$ That is, the following mappings seem necessary in the $R(\lambda^{-1})$ rotation step: $$|0001\rangle \mapsto |1000\rangle, |0010\rangle \mapsto |0100\rangle, |0100\rangle \mapsto |0010\rangle \ \& \ |1000\rangle \mapsto |0001\rangle$$ which implies that the middle two qubits in the clock register need to be swapped as well as the two end qubits. 
But, in the circuit diagram they seem to be swapping the first and third qubit in the clock register! That doesn't seem reasonable to me. In the paper (Cao et al.) claim that the transformation they're doing using their SWAP gate is $$\sum_{j=1}^{j=4} \beta_j |u_j\rangle_I |\lambda_j\rangle^C\otimes |0\rangle_{\text{ancilla}} \mapsto \sum_{j=1}^{j=4} \beta_j |u_j\rangle_I |\frac{2^{4}}{\lambda_j}\rangle^C\otimes |0\rangle_{\text{ancilla}}$$ According to their scheme, $|1000\rangle \to |0010\rangle$ (see the third page). This scheme doesn't make sense to me because the state $|0001\rangle$ (representing the eigenvalue $1$) would have to be transformed to $|2^4/1\rangle$. But the decimal representation of $16$ would be $|10000\rangle$ which is a 5-qubit state! However, our clock register has only $4$ qubits in total. So, basically I think that their SWAP gates are wrong. The SWAPs should actually have been applied between the middle qubits and the two end qubits. Could someone verify? Supplementary question: This is not a compulsory part of the question. Answers addressing only the previous question are also welcome. @Nelimee wrote a program ( 4x4_system.py in HHL) in QISKit to simulate the circuit (Figure 4) in Cao et al's paper. Strangely, his program works only if the SWAP gate is applied between the middle two qubits but not in between the two end qubits. The output of his program is as follows: <class 'numpy.ndarray'>Exact solution: [-1 7 11 13]Experimental solution: [-0.84245754+0.j 6.96035067+0.j 10.99804383+0.j 13.03406367+0.j]Error in found solution: 0.16599956439346453 That is, in his program only the mapping $|0100\rangle \mapsto |0010\rangle$ takes place in the clock register in the $R(\lambda^{-1})$ step. There's no mapping $|1000\rangle \mapsto |0001\rangle$ taking place. Just in case anyone figures out why this is happening please let me know in the comments (or in an answer).
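For what it's worth, the bookkeeping in the question can be checked mechanically; a small sketch (assuming 4-bit big-endian binary states, as used in the question):

```python
# For eigenvalues 1, 2, 4, 8 stored in a 4-bit register, full bit reversal
# (swapping the two end qubits AND the two middle qubits) maps each basis
# state |lambda> to |2^3/lambda>, as argued in the question.
def bits(n, width=4):
    return format(n, '0{}b'.format(width))

for lam in (1, 2, 4, 8):
    assert int(bits(lam)[::-1], 2) == 8 // lam   # 2^3 / lambda

# The claimed |2^4/lambda> mapping cannot hold for lambda = 1, since the
# binary representation of 16 needs a fifth qubit:
assert len(bits(16)) == 5
```

That is, reversing the register realizes the $|\lambda_j\rangle \mapsto |2^3/\lambda_j\rangle$ mapping on all four eigenvalue states, while $2^4/\lambda_j$ overflows the register for $\lambda_j = 1$.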
RF Signal Transformation Between the Time and Frequency Domains

When we analyze high-frequency electromagnetics problems using the finite element method (FEM), we often compute S-parameters in the frequency domain without reviewing the results in the complementary domain; that is, the time domain. The time domain is where we can find other useful information, such as time-domain reflectometry (TDR). In this blog post, we will demonstrate data conversion between the two domains in order to efficiently obtain results in the desired computation domain through a fast Fourier transform (FFT) process.

Very Wide Frequency Range S-Parameter Calculation

Say you are simulating a device and want to get a very wideband frequency response with a small frequency step in the frequency domain, or the time-domain reflectometry over a long time period. Either would take a long time. However, in both cases, the computation performance over a wide range of frequencies and times can be boosted by running the simulation in the complementary domain first and then performing an FFT to generate the results in the preferred domain. For example, you can:

- Simulate a transient analysis, and then run a time-to-frequency FFT for a wideband frequency response
- Perform a frequency sweep, and then a frequency-to-time FFT for a time-domain bandpass impulse response

Logarithmic surface plot of the electric field norm and arrow plot of time-averaged power flow in a coaxial low-pass filter at 10 GHz.

Performing a wideband frequency sweep with a small frequency step size can be a time-consuming and cumbersome task. A sharp resolution of the frequency response of a device can instead be obtained from the time-to-frequency FFT, where the ending time of the transient input to the FFT process defines the frequency resolution of the final results. Consider a modulated Gaussian pulse used as an excitation source to drive a time-domain model for a wideband response in the frequency domain.
The excited energy generally decays as time passes, and it eventually vanishes. The longer the time-domain simulation used as input to the FFT, the finer the frequency step size in the FFT output. When the amount of energy in the simulation domain is negligible after a certain time period, there is no need to continue the simulation. Instead, we can stop the transient simulation when the energy is less than a certain threshold and fill the solutions with zeros in the remaining time before executing the FFT. We call this process zero-padding.

The time-domain voltage at the excitation (source) lumped port. Left: The voltage is converging to zero and S-parameters are in the frequency domain. Right: Reflection (S11) and insertion loss (S21) are plotted in a 60-GHz bandwidth.

Far-Field Radiation Pattern of Wideband and Multiband Antennas

A wideband antenna study, such as an S-parameter and/or far-field radiation pattern analysis, can be performed via a transient simulation followed by a time-to-frequency FFT. We can run a time-dependent study first and then transform the dependent variable (magnetic vector potential A) to convert a voltage signal at a lumped port from the time domain to the frequency domain. S-parameters and far-field radiation results are computed from the converted frequency-domain data. The following dual-band printed antenna shows two resonances, where the computed S11 is below -10 dB in the S-parameter plot for the given frequency range.

Left: Surface plot of the electric field norm and far-field radiation pattern of a dual-band printed strip antenna at 2.265 GHz. Right: The S-parameter plot shows two resonance regions where the computed S11 is below -10 dB.

Two-Step Process with Time-to-Frequency Fourier Transform

In the Lumped Port Settings window, clicking the Calculate S-parameter check box on the excitation port sets the voltage excitation type to the modulated Gaussian.
The center frequency (f0) of the modulating sinusoidal function can also be specified.

Lumped port settings in the Electromagnetic Waves, Transient physics interface.

The modulated Gaussian excitation voltage is defined in terms of \sigma, the standard deviation 1/(2f_0); f_0, the center frequency; and \eta_f, the modulating frequency shift ratio. A small value of \eta_f (for example, 3%) may enhance the frequency response around the highest frequency. The center frequency here has to match the center frequency of the S-parameter calculation used in the time-to-frequency FFT study step in the Model Builder tree.

Left: Time-dependent study step settings. Center: Time-to-frequency FFT study step settings. Right: Default solver sequence in the Model Builder tree.

The end time of the time-dependent study step is set to 100 times the period of the modulating sinusoidal function, which should be long enough for a simple passive device to ensure that the input energy has fully decayed. This works for typical passive circuits, except for closed-cavity-type devices, where the energy decay time can be much longer. The stop condition is automatically added under the time-dependent solver (the Calculate S-parameter check box activates this stop condition in the solver settings). When the sum of the total electric and magnetic energy in the modeling domain falls 70 dB below the input energy, the time-dependent study is terminated by the stop condition and all time-domain data is passed to the FFT step. To generate the frequency-domain data without significant distortion in the frequency range between 0 and 2f_0, the time step, satisfying the Nyquist criterion, is set to 1/(4f_0) = 1/(2B), where B is the bandwidth 2f_0. To provide a fine frequency resolution, the end time of the FFT study step is much longer than that of the time-dependent study. Zero-padding is automatically applied to the time-dependent study data before the FFT study step.
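To make the excitation and zero-padding steps concrete, here is a small numerical sketch. The pulse form is an assumption (a Gaussian envelope of standard deviation σ = 1/(2f0) modulating a sinusoid at (1 + η_f)f0); the exact COMSOL definition may differ:

```python
import numpy as np

# Assumed modulated-Gaussian excitation: Gaussian envelope of standard
# deviation sigma = 1/(2*f0) modulating a sinusoid at (1 + eta_f)*f0,
# sampled at dt = 1/(4*f0) as stated for the Nyquist criterion.
f0 = 1e9                         # Hz, center frequency (illustrative)
eta_f = 0.03                     # modulating frequency shift ratio
sigma = 1 / (2 * f0)
dt = 1 / (4 * f0)                # time step for bandwidth B = 2*f0
N = 80                           # record length: N*dt = 40*sigma
t = np.arange(N) * dt
v = np.exp(-(t - 5 * sigma)**2 / (2 * sigma**2)) \
    * np.sin(2 * np.pi * (1 + eta_f) * f0 * t)

# Zero-padding before the FFT refines the frequency grid: the frequency
# step is 1/(record length), so padding to 8x the length gives an
# 8x finer step without changing the underlying spectrum.
df_raw = 1 / (N * dt)
V = np.fft.rfft(v, n=8 * N)      # numpy zero-pads to length 8*N
df_padded = 1 / (8 * N * dt)
freqs = np.fft.rfftfreq(8 * N, dt)
peak = freqs[np.argmax(np.abs(V))]   # spectral peak lands near f0
```

This mirrors the two statements above: the FFT frequency step is set by the (zero-padded) end time, and the pulse energy is concentrated around the center frequency.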
Time Domain Bandpass Impulse Response of a Transmission Line

While transient analyses are useful for time-domain reflectometry (TDR) to handle signal integrity (SI) problems, many RF and microwave examples are addressed using frequency-domain simulations generating S-parameters. However, from the frequency-domain data alone, it is difficult to identify the sources of signal degradation. By simulating a circuit in the frequency domain and performing a frequency-to-time FFT, a voltage signal computed in the frequency domain can be investigated in the time domain. The computed results can help identify the physical discontinuities and impedance mismatches on the transmission line by analyzing the signal fluctuation in the time domain.

Time-domain lumped port voltage. The overshoot and undershoot of the signal indicate the discontinuities of the microstrip line.

In the above figure, the time-domain results of the voltage bandpass impulse response at lumped port 1 are plotted for a microstrip line that has a couple of line discontinuities. The voltage fluctuation times correspond to the propagation times for the incident pulse to be reflected from the two line discontinuities: the defective parts of the 50-ohm microstrip line. The roundtrip travel time from lumped port 1 to each discontinuity agrees with the voltage fluctuation location.

Two-Step Process with Frequency-to-Time Fourier Transform

The time-domain results may vary with the input arguments in each study step. The impacts of the study step input arguments are described below:

Frequency Domain study step
- Start frequency: low-frequency envelope noise
- Stop frequency: resolution and high-frequency ripple noise
- Frequency step: alias period

Frequency-to-time FFT study step
- Stop time: alias visibility

Frequency domain study step settings.
The frequency step, \Delta f (that is, df in the frequency domain study step settings above), is set to make the period of the alias in the time-domain response greater than the roundtrip travel time from the excitation (lumped port 1) to the line termination (lumped port 2): 1/\Delta f = 1 ns > 2d\sqrt{\epsilon_r}/c_const, where d is the circuit board length, \epsilon_r is the permittivity, and c_const is the constant for the speed of light in vacuum predefined in the COMSOL Multiphysics® software.

Frequency-to-time FFT study step settings.

While performing the FFT, a Gaussian window function is used. This helps to suppress the noise coming from the limited range of the frequency sweep. Each study step uses the Store fields in output option, which defines the selections where the computed results are stored. By choosing only the lumped port boundaries in the Store fields in output settings, it is possible to greatly reduce the size of the model file.

Managing Computed Results

Since the FFT only transforms the dependent variable from the first domain, it is only possible to use postprocessing variables directly related to the dependent variable in the second domain. The first-domain results are still accessible, typically through the Solution Store 1 data set. The frequency-to-time FFT study step transforms the solution of the dependent variables in the frequency domain to the time domain with a very small time step: ten samples per period of the highest frequency in the model. Only the postprocessing variables that can be expressed with the dependent variables are valid for results analysis. Since the transformed solutions typically contain many time steps, it is recommended to use the Store fields in output option to reduce the size of the model.

Try These Application Examples of RF Signal Transformation

The simulation methods using fast Fourier transform presented in this blog post make RF and microwave device modeling more efficient.
Try performing a time-to-frequency FFT for a wideband evaluation of S-parameters in this tutorial model of a coaxial cable. Note that to download the MPH-file, you must be logged into your COMSOL Access account and have a valid software license.

Browse additional examples in the Application Library in the COMSOL® software:

RF Module > Filters > coaxial_low_pass_filter_transient
RF Module > Antennas > dual_band_antenna_transient
RF Module > EMI_EMC_Applications > microstrip_line_discontinuity (frequency-to-time FFT for a quick investigation of TDR)
I'm trying to solve this problem that was given for my homework assignment, but I cannot figure out how to actually finish the problem in a way that makes sense to me. I've gotten as far as finding $\text{curl }\textbf{F} = \langle1, 1, 1\rangle$ from the original integral $\oint_C z dx + x dy + y dz $. (I believe that F is $\langle z, x, y \rangle$) However, the question says to evaluate that integral over $C$, which is the trace of the cylinder $x^2 + y^2 = 25$ on the plane $y + z = 10$. My textbook says to rearrange the plane equation so that it's $z = 10 - y$. I then use the formula from the book (such that $g(x, y) = z = 10 - y$, and $\textbf{F} = P\vec{i} + Q\vec{j} + R\vec{k}$.) That formula is $\iint_D (-P \frac{\partial g}{\partial x} - Q \frac{\partial g}{\partial y} + R) dA = \iint_S \textbf{F} \cdot dS$ Does this mean that $P = z$ (Obtained from $\textbf{F} = \langle z, x, y \rangle$) or does it mean that $P = 1$ (Obtained from $\text{curl }\textbf{F}$)? I'm leaning towards that it is $P = z$, but I end up with $0$ after setting up and solving. My work uses the $P = z$ version so my integral looks like $\iint_D (x + y) dA$ I then perform a change-of-coordinate-system on it resulting in $\int_0^{2\pi} \int_0^5 r^2 (\sin \theta + \cos \theta) dr d\theta = 0$ Is my answer correct or have I made an error in solving it? If so, how would I go about solving this properly?
Goodness-of-fit statistic. Suppose you want to test whether a die is fair by rolling it 600 times. Then you would expect, on average, to see each face $E = 100$ times. If the observed counts for faces $i = 1, \dots, 6$ are $X_i,$ then the chi-squared statistic is $$Q = \sum_{i=1}^6 \frac{(X_i - E)^2}{E} \stackrel{aprx}{\sim}\mathsf{Chisq}(\nu = 6-1=5),$$ the chi-squared distribution with 5 degrees of freedom.

Test at the 5% level. Then we would reject the null hypothesis that the die is fair at the 5% level of significance if $Q \ge q_c = 11.07,$ where the critical value $q_c$ cuts 5% of the probability from the upper tail of $\mathsf{Chisq}(5).$

qchisq(.95, 5)
[1] 11.0705

Experience has shown that the approximation is reasonably good in such circumstances provided that $E > 5,$ which is true in our case.

Illustration by simulation. A simulation in R of this situation with a fair die is as below. Because we are simulating rolls of a fair die, we expect to reject in about 5% of the 100,000 iterations. The simulated rejection rate is indeed very nearly 5%.

set.seed(709) # for reproducibility
m = 10^5 # iterations of the 600-roll experiment
q = replicate( m, sum((tabulate(sample(1:6, 600, rep=T))-100)^2/100) )
mean(q > 11.0705)
[1] 0.04974

A histogram of the simulated distribution of $Q$ is a reasonably good fit to the density function of $\mathsf{Chisq}(5).$

hist(q, prob=T, br=40, col="skyblue2")
curve(dchisq(x, 5), add=T, n=1001, col="red", lwd=2)

The statistic $Q$ is discrete because values change by small increments as the counts change at random. However, the continuous chi-squared distribution turns out to be a very good approximation to the distribution of $Q$ in the circumstances illustrated.

Power of the test for a biased die. By contrast, if we simulate using a die that is somewhat biased against showing $1$'s (in favor of $6$'s), then we see that the goodness-of-fit test is very likely to reject the null hypothesis that the die is fair. The power of the test is about 97%.
set.seed(1776) # for reproducibility
m = 10^5 # iterations of the 600-roll experiment
p = c(2,3,3,3,3,4)/18 # probabilities for biased die
q = replicate( m, sum((tabulate(sample(1:6, 600, rep=T, prob=p))-100)^2/100) )
mean(q > 11.0705)
[1] 0.97434

Note: Under the alternative hypothesis that the die is biased with probabilities $p = (2,3,3,3,3,4)/18,$ the statistic $Q$ has the noncentral chi-squared distribution with $\nu = 5$ degrees of freedom and 'noncentrality parameter' $\lambda = n\sum_i (p_i - 1/6)^2/(1/6) = 22.22,$ so that the power of the goodness-of-fit test can be computed in R (without simulation) as $0.971.$

1-pchisq(11.0705, 5, 22.22)
[1] 0.9709646
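For readers outside R, the closed-form power calculation translates directly; a sketch using scipy.stats, where ncx2 plays the role of R's pchisq with an ncp argument:

```python
import numpy as np
from scipy.stats import chi2, ncx2

# Noncentral chi-squared power calculation for the biased die above.
n = 600
p = np.array([2, 3, 3, 3, 3, 4]) / 18     # biased-die probabilities
lam = n * np.sum((p - 1/6)**2 / (1/6))    # noncentrality, ~22.22
crit = chi2.ppf(0.95, 5)                   # critical value, ~11.0705
power = ncx2.sf(crit, 5, lam)              # survival function, ~0.971
print(lam, crit, power)
```

This reproduces both the critical value from qchisq and the ~0.971 power from 1-pchisq with the noncentrality parameter.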
There are many examples of gauge theories with disconnected gauge groups, but as far as I know, the non-trivial topological sectors of these theories are beyond the current experimental capabilities. In order for large gauge transformations to act nontrivially on the states, they must be symmetries of the Hamiltonian. For example, the Maxwell Hamiltonian depends only on the electric and magnetic fields. In flat space, the electromagnetic $U(1)$ gauge group is connected, thus no large gauge transformations exist. However, if appropriate boundary conditions are provided, for example periodic boundary conditions in one direction, then large gauge transformations exist and are symmetries of the Hamiltonian. In contrast, the Hamiltonian of a charged particle moving under the influence of a background electromagnetic field is only quasi-invariant: the spectrum is conserved only when the gauge transformation is also performed on the wave functions. In this case, large gauge transformations must be considered as gauge redundancies rather than symmetries. (However, a gas of such particles has an invariant second-quantized Hamiltonian, but I don't know how to exploit this fact.) This is the reason why the Aharonov-Bohm system of a particle moving on a circle around a magnetic flux does not possess large gauge symmetries, in spite of the fact that the electromagnetic gauge group is disconnected because $\pi_1(U(1)) = \mathbb{Z}$. What I am going to describe to you here is a (quite ingenious) experiment suggested by S.-R. Eric Yang. He proposed a modification of the Aharonov-Bohm setting to introduce degenerate energy eigenfunctions related by a large gauge transformation. This experiment seems feasible, but from a Google Scholar search, it does not seem that this experiment has yet been actually performed.
The trick is to use a spin-half particle and add an electric field in the radial direction in order to generate a spin-orbit interaction (proportional to $\vec{\sigma} \cdot (\vec{p} - ie \vec{A})$). The spin-orbit term breaks the time reversal invariance, but Yang noticed that when the Aharonov-Bohm potential is equal to a half flux quantum, $A_{\phi} = \frac{1}{2}$, both the kinetic and the spin-orbit interaction terms become invariant under the large gauge transformation $e^{i\phi}$ (which shifts the gauge potential by one quantum) followed by a time reversal transformation. Thus, this combined transformation is a symmetry of the Hamiltonian. As a consequence, there are two degenerate states of opposite spin which are related by the above transformation. These two states should be distinguishable by means of their Berry phases.

Update

I am using the term gauge group for the group of all gauge transformations (including small and large gauge transformations). In our case it is not the one-dimensional group $U(1)$ of global gauge transformations, but the infinite-dimensional group $\mathrm{Map}(S^1, U(1))$ of local gauge transformations. This group is disconnected. Its disconnected part modulo the connected component (which forms the group of large gauge transformations) is $\pi_1(S^1)= \mathbb{Z}$, realized by means of transformations of the type $e^{i n \phi}$ for integer $n$. This is essentially the same gauge group addressed by Landsman and Wren in my answer to the attached question in the main text (in their case it is $\mathrm{Map}(S^1, U(n))$; but since for $G$ semisimple and centerless $\pi_1(G)= 0$, only the $U(1)$ part of $U(n)$ contributes to the large gauge transformations). As I emphasized in my answer to Friedrich's comment in the attached question, the group of elements disconnected from the unit component (modulo the connected component) is the basic definition of the large gauge transformations.
It is true that when you describe spheres as one-point compactifications of flat spaces, the large gauge transformations become those which do not approach the unit at infinity. You can do this exercise for our case by expressing $S^1$ as the one-point compactification of $\mathbb{R}$. The Hamiltonian of the Aharonov-Bohm system is not invariant under large gauge transformations because the theory is only quasi-invariant and not fully invariant. This can be easily checked by direct substitution. A symmetry of the theory is a transformation commuting with the Hamiltonian without acting on the wave functions. I only emphasized that to make the point that the A-B system is not invariant under large gauge transformations. Your last comment correctly summarises the distinction between symmetry and redundancy.
The Annals of Probability Ann. Probab. Volume 46, Number 1 (2018), 456-490. Random walks on the random graph Abstract We study random walks on the giant component of the Erdős–Rényi random graph $\mathcal{G}(n,p)$ where $p=\lambda/n$ for $\lambda>1$ fixed. The mixing time from a worst starting point was shown by Fountoulakis and Reed, and independently by Benjamini, Kozma and Wormald, to have order $\log^{2}n$. We prove that starting from a uniform vertex (equivalently, from a fixed vertex conditioned to belong to the giant) both accelerates mixing to $O(\log n)$ and concentrates it (the cutoff phenomenon occurs): the typical mixing is at $(\nu\mathbf{d})^{-1}\log n\pm(\log n)^{1/2+o(1)}$, where $\nu$ and $\mathbf{d}$ are the speed of random walk and dimension of harmonic measure on a $\operatorname{Poisson}(\lambda)$-Galton–Watson tree. Analogous results are given for graphs with prescribed degree sequences, where cutoff is shown both for the simple and for the nonbacktracking random walk. Article information Source Ann. Probab., Volume 46, Number 1 (2018), 456-490. Dates Received: May 2015 Revised: October 2016 First available in Project Euclid: 5 February 2018 Permanent link to this document https://projecteuclid.org/euclid.aop/1517821227 Digital Object Identifier doi:10.1214/17-AOP1189 Mathematical Reviews number (MathSciNet) MR3758735 Zentralblatt MATH identifier 06865127 Subjects Primary: 60B10: Convergence of probability measures 60J10: Markov chains (discrete-time Markov processes on discrete state spaces) 60G50: Sums of independent random variables; random walks 05C80: Random graphs [See also 60B20] Citation Berestycki, Nathanaël; Lubetzky, Eyal; Peres, Yuval; Sly, Allan. Random walks on the random graph. Ann. Probab. 46 (2018), no. 1, 456--490. doi:10.1214/17-AOP1189. https://projecteuclid.org/euclid.aop/1517821227
One thing that always bothered me in chemistry is how the equilibrium constant is written. It never made sense to me. If I take a simple bimolecular reaction approaching equilibrium: [tex] aA + bB \mathop{\rightleftharpoons}^{k_1}_{k_{-1}} cC + dD [/tex] From ART, [tex] r_1 = k_1 [A]^{\alpha} [B]^{\beta} [/tex] [tex] r_{-1} = k_{-1} [C]^{\gamma} [D]^{\delta} [/tex] Then if we consider the rates to be equal at equilibrium and compare with the expression of the equilibrium constant, [tex] K = \frac{k_1}{k_{-1}} = \frac{[C]^{\gamma} [D]^{\delta}}{[A]^{\alpha} [B]^{\beta}} \neq \frac{[C]^c [D]^d}{[A]^a [B]^b} [/tex] The only way for both expressions to be equal is for the reaction to be elementary, which doesn't hold for most chemical reactions. So what does that mean? There may be approximations here; molecularity is not equal to stoichiometry. Am I missing something here, or are all the calculations I made in equilibrium chemistry just wrong?
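For the elementary case, at least, the kinetic and thermodynamic expressions do agree, and that is easy to check numerically. The sketch below is my own illustration (the rate constants and initial concentrations are arbitrary choices): it integrates the mass-action rate equations for an elementary reaction A + B ⇌ C to long times and compares the concentration quotient with $k_1/k_{-1}$.

```python
# Numerical check: for an *elementary* reaction A + B <=> C, the long-time
# mass-action quotient [C]/([A][B]) equals k1/k_{-1}.
k1, km1 = 2.0, 0.5           # forward and reverse rate constants (arbitrary)

def equilibrate(A=1.0, B=1.5, C=0.0, dt=1e-3, steps=200_000):
    """Forward-Euler integration of d[A]/dt = -k1[A][B] + km1[C], etc."""
    for _ in range(steps):
        r = k1 * A * B - km1 * C     # net forward rate
        A -= r * dt
        B -= r * dt
        C += r * dt
    return C / (A * B)               # mass-action quotient at long times

print(equilibrate())   # approaches k1/km1 = 4.0
```

For a non-elementary overall reaction the same experiment run on the *observed* rate laws (with empirical orders α, β, γ, δ) gives the kinetic ratio with those exponents, which is exactly the mismatch the question is about.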
You are correct. Estimating the instantaneous phase of a noisy sinusoid is NOT easy. I suggest you design a narrow bandpass filter such that your sinusoid-of-interest is in the filter's passband. (The better the filter the more noise that will be eliminated.) Pass your two signals through the bandpass filter to generate filtered signals $x_1[n]$ and $x_2[n]$. Next, pass your $x_1[n]$ and $x_2[n]$ signals through a Hilbert transformer to generate $\hat{x}_1[n]$ and $\hat{x}_2[n]$. Create two analytic (complex) signals as: $$z_1[n] = x_1[n] + j \, \hat{x}_1[n],$$ and $$z_2[n] = x_2[n] + j \, \hat{x}_2[n]$$ where $$ \begin{align}\hat{x}[n] & = \mathcal{H}\{ x[n] \} \\ & = \sum\limits_{m=-\infty}^{+\infty} \frac{1 - (-1)^{m}}{\pi \, m} x[n-m] \\\end{align} $$ Next, compute two instantaneous phase sequences: $$\phi_1[n] = \arg\{z_1[n]\}$$ and $$\phi_2[n] = \arg\{z_2[n]\}.$$ Finally, compare the instantaneous phase difference between the $\phi_1[n]$ and $\phi_2[n]$ sequences.
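A minimal numerical sketch of this recipe, assuming NumPy and SciPy are available; the filter order, band edges, sample rate, and test signals here are my own arbitrary choices, not prescribed by the method:

```python
# Sketch: bandpass filter -> analytic signal via Hilbert transform ->
# instantaneous phase difference between two noisy sinusoids.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                        # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
f0 = 50.0                          # sinusoid of interest
rng = np.random.default_rng(0)
x1 = np.sin(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(t.size)
x2 = np.sin(2 * np.pi * f0 * t - 0.5) + 0.3 * rng.standard_normal(t.size)

# narrow bandpass around f0 (better filter => more noise eliminated)
b, a = butter(4, [45.0, 55.0], btype="bandpass", fs=fs)
x1f, x2f = filtfilt(b, a, x1), filtfilt(b, a, x2)

# analytic signals z[n] = x[n] + j x_hat[n], then instantaneous phases
z1, z2 = hilbert(x1f), hilbert(x2f)
dphi = np.angle(z1 * np.conj(z2))   # wrapped phase difference per sample
est = np.median(dphi[100:-100])     # ignore filter edge effects
```

With the 0.5-radian offset built into `x2` above, `est` comes out near 0.5; in practice you would inspect the whole `dphi` sequence rather than a single summary statistic.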
Difference between revisions of "Group cohomology of elementary abelian group of prime-square order" (section "Over an abelian group"; the revision corrects only whitespace inside the formula). The homology groups with coefficients in an abelian group <math>M</math> are given as follows: <math>H_q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};M) = \left\lbrace\begin{array}{rl} (M/pM)^{(q+3)/2} \oplus (\operatorname{Ann}_M(p))^{(q-1)/2}, & \qquad q = 1,3,5,\dots\\ (M/pM)^{q/2} \oplus (\operatorname{Ann}_M(p))^{(q+2)/2}, & \qquad q = 2,4,6,\dots \\ M, & \qquad q = 0 \\\end{array}\right.</math> Here, <math>M/pM</math> is the quotient of <math>M</math> by <math>pM = \{ px \mid x \in M \}</math> and <math>\operatorname{Ann}_M(p) = \{ x \in M \mid px = 0 \}</math>.
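For <math>M = \mathbb{Z}</math> the formula above specializes nicely: <math>M/pM \cong \mathbb{Z}/p\mathbb{Z}</math> and <math>\operatorname{Ann}_M(p) = 0</math>, so each positive-degree homology group is elementary abelian and only the first exponent contributes. A quick self-check of the resulting ranks (my own sketch, not part of the wiki page):

```python
# Ranks of H_q(Z/p + Z/p; Z) predicted by the formula above.
# For M = Z: M/pM = Z/p (one cyclic factor) and Ann_M(p) = 0.
def homology_rank(q):
    """Rank of H_q as an elementary abelian p-group (q >= 1)."""
    if q % 2 == 1:               # q odd: (M/pM)^((q+3)/2)
        return (q + 3) // 2
    return q // 2                # q even: (M/pM)^(q/2)

print([homology_rank(q) for q in range(1, 6)])   # [2, 1, 3, 2, 4]
```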
Revision as of 16:06, 24 October 2011

Contents
1 Particular cases
2 Homology groups for trivial group action
3 Cohomology groups for trivial group action

Particular cases

Homology groups for trivial group action

FACTS TO CHECK AGAINST (homology group for trivial group action):
First homology group: first homology group for trivial group action equals tensor product with abelianization.
Second homology group: formula for second homology group for trivial group action in terms of Schur multiplier and abelianization (Hopf's formula for the Schur multiplier).
General: universal coefficients theorem for group homology; homology group for trivial group action commutes with direct product in second coordinate; Künneth formula for group homology.

Over the integers

The first few homology groups of <math>G = \mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z}</math> are given below:

q: 0, 1, 2, 3, 4, 5
H_q(G;Z): Z, (Z/p)^2, Z/p, (Z/p)^3, (Z/p)^2, (Z/p)^4
rank of H_q as an elementary abelian p-group: --, 2, 1, 3, 2, 4

Over an abelian group

The homology groups with coefficients in an abelian group <math>M</math> are given as follows:

<math>H_q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};M) = \left\lbrace\begin{array}{rl} (M/pM)^{(q+3)/2} \oplus (\operatorname{Ann}_M(p))^{(q-1)/2}, & \qquad q = 1,3,5,\dots\\ (M/pM)^{q/2} \oplus (\operatorname{Ann}_M(p))^{(q+2)/2}, & \qquad q = 2,4,6,\dots \\ M, & \qquad q = 0 \\\end{array}\right.</math>

Here, <math>M/pM</math> is the quotient of <math>M</math> by <math>pM = \{ px \mid x \in M \}</math> and <math>\operatorname{Ann}_M(p) = \{ x \in M \mid px = 0 \}</math>. These homology groups can be computed in terms of the homology groups over the integers using the universal coefficients theorem for group homology.

Important case types for abelian groups

- M is uniquely p-divisible, i.e., every element of M can be divided uniquely by p (this includes the case that M is a field of characteristic not p): all odd-indexed and all even-indexed homology groups are zero.
- M is p-torsion-free, i.e., no nonzero element of M multiplies by p to give zero: H_q ≅ (M/pM)^{(q+3)/2} for odd q and (M/pM)^{q/2} for even q > 0.
- M is p-divisible, but not necessarily uniquely so: H_q ≅ (Ann_M(p))^{(q-1)/2} for odd q and (Ann_M(p))^{(q+2)/2} for even q > 0.
- M is a finite abelian group: H_q ≅ (Z/p)^{(q+1)r} for all q > 0, where r is the rank (i.e., minimum number of generators) of the p-Sylow subgroup of M.
- M is a finitely generated abelian group: H_q ≅ (Z/p)^{(q+1)r + f(q+3)/2} for odd q and (Z/p)^{(q+1)r + fq/2} for even q > 0, where r is the rank of the p-Sylow subgroup of the torsion part of M and f is the free rank of M (i.e., the rank of the torsion-free part as a free abelian group).

Cohomology groups for trivial group action

FACTS TO CHECK AGAINST (cohomology group for trivial group action):
First cohomology group: first cohomology group for trivial group action is naturally isomorphic to the group of homomorphisms.
Second cohomology group: formula for second cohomology group for trivial group action in terms of Schur multiplier and abelianization.
In general: dual universal coefficients theorem for group cohomology, relating cohomology with arbitrary coefficients to homology with coefficients in the integers; cohomology group for trivial group action commutes with direct product in second coordinate; Künneth formula for group cohomology.

Over the integers

The first few cohomology groups are given below:

q: 0, 1, 2, 3, 4, 5
H^q(G;Z): Z, 0, (Z/p)^2, Z/p, (Z/p)^3, (Z/p)^2
rank of H^q as an elementary abelian p-group: --, 0, 2, 1, 3, 2

Over an abelian group

The cohomology groups with coefficients in an abelian group <math>M</math> are given as follows:

<math>H^q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};M) = \left\lbrace\begin{array}{rl} (\operatorname{Ann}_M(p))^{(q+3)/2} \oplus (M/pM)^{(q-1)/2}, & \qquad q = 1,3,5,\dots\\ (\operatorname{Ann}_M(p))^{q/2} \oplus (M/pM)^{(q+2)/2}, & \qquad q = 2,4,6,\dots \\ M, & \qquad q = 0 \\\end{array}\right.</math>

Here, <math>M/pM</math> and <math>\operatorname{Ann}_M(p)</math> are as above. These can be deduced from the homology groups with coefficients in the integers using the dual universal coefficients theorem for group cohomology.

Important case types for abelian groups

- M is uniquely p-divisible, i.e., every element of M can be divided uniquely by p (this includes the case that M is a field of characteristic not p): all odd-indexed and all even-indexed cohomology groups are zero.
- M is p-torsion-free, i.e., no nonzero element of M multiplies by p to give zero: H^q ≅ (M/pM)^{(q-1)/2} for odd q and (M/pM)^{(q+2)/2} for even q > 0.
- M is p-divisible, but not necessarily uniquely so: H^q ≅ (Ann_M(p))^{(q+3)/2} for odd q and (Ann_M(p))^{q/2} for even q > 0.
- M is a finite abelian group: H^q ≅ (Z/p)^{(q+1)r} for all q > 0, where r is the rank (i.e., minimum number of generators) of the p-Sylow subgroup of M.
- M is a finitely generated abelian group: H^q ≅ (Z/p)^{(q+1)r + f(q-1)/2} for odd q and (Z/p)^{(q+1)r + f(q+2)/2} for even q > 0, where r is the rank of the p-Sylow subgroup of the torsion part of M and f is the free rank of M.
While trying to solve the exercise below, I came up with a wrong conclusion, but I can't see why it's wrong. I'm also open to suggestions for getting the right solution. This is Problem 17 from Chapter 10 of Rudin's Functional Analysis. Let $A$ be a Banach algebra. Suppose that the spectrum of $x\in A$ is not connected. Prove that $A$ contains a nontrivial idempotent $z$. My attempt: Let $F_1, F_2$ be two disjoint closed non-empty sets in $\sigma(x)$ such that $F_1\cup F_2 = \sigma(x)$. There is a function $f$, holomorphic on a neighborhood $\Omega$ of $\sigma(x)$, such that $f=1$ on $F_1$ and $f=0$ on $F_2$. Denote $$\tilde{f}(x) = \frac{1}{2\pi i}\int_\Gamma f(\lambda)(\lambda e-x)^{-1}\ d\lambda,$$ which comes from the functional (or symbolic) calculus; $\Gamma$ is a contour surrounding $\sigma(x)$ in $\Omega$. The idea is to show that $\tilde{f}(x)$ is idempotent. This idea of proof was used in some books and I'm trying to follow it. My problem is this: it's clear that $f(\sigma(x)) = F_1$ from the very definition of $f$. I also know that $\sigma(\tilde{f}(x)) = f(\sigma(x)) = F_1$. But from this post, for instance, we have that the spectrum of idempotent elements is $\{0,1\}$. If $\tilde{f}(x)$ were idempotent, then $F_1=\{0,1\}$, but this is not necessarily the case. Thank you for your help.
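For reference, the property of the symbolic calculus that this approach leans on (as I understand it) is multiplicativity:

```latex
% Multiplicativity of the holomorphic functional calculus:
\widetilde{(fg)}(x) \;=\; \tilde f(x)\,\tilde g(x)
\qquad \text{for } f, g \text{ holomorphic on } \Omega,
% so, since f^2 = f on a neighborhood of \sigma(x),
\tilde f(x)^2 \;=\; \widetilde{(f^2)}(x) \;=\; \tilde f(x).
```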
MinHash is said to estimate the Jaccard coefficient, supposedly because it's faster to compute. Given two sets $A$ and $B$, MinHash with $k$ hash functions takes $O(k(|A|+|B|))$ time to compute. But isn't it possible to compute the Jaccard coefficient $\frac {|A \cap B|} {|A \cup B|}$ in $O(|A|+|B|)$ time by simply doing:

1. Given collections A' and B', create hash set data structures A and B from them.
2. foreach(a in A){ if(a in B) sizeOfIntersection++; }
3. sizeOfUnion = A.size() + B.size() - sizeOfIntersection;
4. Jaccard = sizeOfIntersection / sizeOfUnion;

Assuming [an amortized] O(1) hash set membership lookup, the complexity of the above is $O(|A|+|B|)$, which is less than the $O(k(|A|+|B|))$ of $k$-hash-function MinHash. So how is MinHash more efficient, and what benefit does it provide?
Suppose $f$ is continuous on $[a, b]$ and that for every continuous function $g$ on $[a, b]$ with $g(a) = g(b) = 0$ we have $\int_{a}^{b}f(x)g(x)\,dx = 0$. Show $f(x) = 0, \forall x \in [a, b]$. I want to prove this by contradiction: exhibit a continuous $g$ with $g(a) = g(b) = 0$ but $\int_{a}^{b}f(x)g(x)\,dx \neq 0$. Proof: Suppose by contradiction that $f(x_0) > 0$ for some $x_0 \in [a, b]$ (the case $f(x_0) < 0$ is symmetric). Since $f$ is continuous, $\exists \delta > 0$ such that $f(x) > 0, \forall x \in [x_0 - \delta, x_0 + \delta]$. Take $g(x) = \begin{cases} 0 & \text{ if } x \in (a, x_0 - \delta) \cup (x_0+\delta, b) \\ -(x - x_0 - \delta)(x - x_0 + \delta) & \text{ if } x \in (x_0 - \delta, x_0 + \delta) \end{cases}$ Now, since $fg$ is continuous on $[a, b]$, it is integrable. Thus $\int_{a}^{b} f(x)g(x)\,dx = \sup_P L(fg, P)$. However, since some lower sum is $> 0$, it follows that the supremum is also $> 0$. Therefore $\int_{a}^{b} f(x)g(x)\,dx > 0$. A contradiction. However, I do not know how to show that $L(fg, P) > 0$ for some partition $P$.
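A sketch of the bound I am after (my own attempt at filling the gap, using the bump $g$ as above, which satisfies $g(x) = \delta^2 - (x-x_0)^2$ on the middle interval):

```latex
% On the middle half of the bump both factors are bounded below:
% f \ge m := \min_{[x_0-\delta,\,x_0+\delta]} f > 0 by continuity and compactness,
% g(x) = \delta^2 - (x-x_0)^2 \ge \tfrac{3\delta^2}{4} for |x - x_0| \le \delta/2, hence
\int_a^b f(x)g(x)\,dx \;\ge\; \int_{x_0-\delta/2}^{x_0+\delta/2} f(x)g(x)\,dx
\;\ge\; m \cdot \frac{3\delta^2}{4} \cdot \delta \;>\; 0,
% so any partition containing [x_0 - \delta/2, x_0 + \delta/2] as a union of
% subintervals has a positive lower sum.
```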
From Artin: When $d$ is congruent to $2$ or $3$ modulo $4$, an integer prime $p$ remains prime in the ring of integers of $\Bbb{Q}[\sqrt{d}]$ if the polynomial $x^2-d$ is irreducible modulo $p$. a) Prove that this is also true when $d \equiv 1$ modulo $4$ and $p\neq 2$. b) What happens to $p=2$ when $d \equiv 1$ modulo $4$? I have been stuck on this problem for a while; can anyone give some tips on how to approach it? I was thinking maybe I can use the fact that when $d \equiv 1 \pmod 4$ and $h = \frac{1}{4}(1-d)$, a prime $p$ generates a prime ideal $(p)$ of the ring of integers if and only if the polynomial $x^2-x+h$ is irreducible modulo $p$. But I really don't know how to apply this, or whether I even should. Any help is appreciated, thanks.
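That fact is exactly the right one to use: for odd $p$, the two quadratics $x^2 - d$ and $x^2 - x + h$ have the same discriminant $d$ (since $1 - 4h = d$), so they are irreducible mod $p$ simultaneously. A quick numerical sanity check of that equivalence (my own illustration, not Artin's proof; the choices of $d$ and primes are arbitrary):

```python
# For d = 1 (mod 4), h = (1-d)/4, and odd primes p, the quadratics
# x^2 - d and x^2 - x + h are reducible mod p simultaneously.
def has_root(coeffs, p):
    """True if the monic quadratic x^2 + b*x + c (coeffs = (b, c)) has a root mod p."""
    b, c = coeffs
    return any((x * x + b * x + c) % p == 0 for x in range(p))

def check(d, primes):
    h = (1 - d) // 4
    return all(
        has_root((0, -d), p) == has_root((-1, h), p)
        for p in primes if p != 2
    )

odd_primes = [3, 5, 7, 11, 13, 17, 19, 23]
print(all(check(d, odd_primes) for d in (-3, 5, -7, 13, 17, 21)))  # True
```

Note that a monic quadratic over a field is irreducible exactly when it has no root, which is why the root count suffices; the equivalence genuinely fails at $p = 2$, which is part b).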
ISSN: 1930-5346 eISSN: 1930-5338 Advances in Mathematics of Communications, November 2013, Volume 7, Issue 4. Abstract: We enumerate $H$-phase Golay sequences for $H\le 36$ and lengths up to 33. Our enumeration method is based on filtering by the power spectra. Some of the hexaphase Golay sequence pairs are new. We provide a compact way to reconstruct all these Golay sequences from specific Golay arrays. The Golay arrays are part of the three-stage construction introduced by Fiedler, Jedwab, and Parker. All such minimal Golay arrays can be constructed from a small set of Golay sequence pairs with binary, quaternary, or hexaphase alphabet adjoining 0. We also prove some non-existence results for Golay sequences when $H/2$ is odd. Abstract: Families of $m$-sequences with low correlation property have important applications in communication systems. In this paper, for a prime $p\equiv 1\ \mathrm{mod}\ 4$ and an odd integer $k$, we study the cross correlation between a $p$-ary $m$-sequence $\{s_t\}$ of period $p^n-1$ and its decimated sequence $\{s_{dt}\}$, where $d=\frac{(p^k+1)^2}{2(p^e+1)}$, $e|k$ and $n = 2k$. Using quadratic form polynomial theory, we obtain the distribution of the cross correlation which is six-valued. Specially, our results show that the magnitude of the cross correlation is upper bounded by $2\sqrt{p^n}+1$ for $p=5$ and $e=1$, which is meaningful in CDMA communication systems. Abstract: For weakly regular bent functions in odd characteristic the dual function is also bent. We analyse a recently introduced construction of non-weakly regular bent functions and show conditions under which their dual is bent as well. This leads to the definition of the class of dual-bent functions containing the class of weakly regular bent functions as a proper subclass. We analyse self-duality for bent functions in odd characteristic, and characterize quadratic self-dual bent functions.
We construct non-weakly regular bent functions with and without a bent dual, and bent functions with a dual bent function of a different algebraic degree. Abstract: Let $F$ be a number field with ring of integers $\boldsymbol{O}_F$ and $D$ a division $F$-algebra with a maximal cyclic subfield $K$. We study rings occurring as quotients of a natural $\boldsymbol{O}_F$-order $\Lambda$ in $D$ by two-sided ideals. We reduce the problem to studying the ideal structure of $\Lambda/q^s\Lambda$, where $q$ is a prime ideal in $\boldsymbol{O}_F$, $s\geq 1$. We study the case where $q$ remains unramified in $K$, both when $s=1$ and $s>1$. This work is motivated by its applications to space-time coded modulation. Abstract: In this paper we introduce the notion of cyclic ($f(t),\sigma,\delta$)-codes for $f(t)\in A[t;\sigma,\delta]$. These codes generalize the $\theta$-codes as introduced by D. Boucher, F. Ulmer, W. Geiselmann [2]. We construct generic and control matrices for these codes. As a particular case the ($\sigma,\delta$)-$W$-code associated to a Wedderburn polynomial are defined and we show that their control matrices are given by generalized Vandermonde matrices. All the Wedderburn polynomials of $\mathbb F_q[t;\theta]$ are described and their control matrices are presented. A key role will be played by the pseudo-linear transformations. Abstract: In this paper, new constructions of the binary sequence families of period $q-1$ with large family size and low correlation, derived from multiplicative characters of finite fields for odd prime powers, are proposed. For $m ≥ 2$, the maximum correlation magnitudes of new sequence families $\mathcal{S}_m$ are bounded by $(2m-2)\sqrt{q}+2m+2$, and the family sizes of $\mathcal{S}_m$ are given by $q-1$ for $m=2$, $2(q-1)-1$ for $m=3$, $(q^2-1)q^{\frac{m-4}{2}}$ for $m$ even, $m>2$, and $2(q-1)q^{\frac{m-3}{2}}$ for $m$ odd, $m>3$. 
It is shown that the known binary Sidel'nikov-based sequence families are equivalent to the new constructions for the case $m=2$. Abstract: A significant amount of effort has been devoted to improving divisor arithmetic on low-genus hyperelliptic curves via explicit versions of generic algorithms. Moderate and high genus curves also arise in cryptographic applications, for example, via the Weil descent attack on the elliptic curve discrete logarithm problem, but for these curves, the generic algorithms are to date the most efficient available. Nagao [22] described how some of the techniques used in deriving efficient explicit formulas can be used to speed up divisor arithmetic using Cantor's algorithm on curves of arbitrary genus. In this paper, we describe how Nagao's methods, together with a sub-quadratic complexity partial extended Euclidean algorithm using the half-gcd algorithm can be applied to improve arithmetic in the degree zero divisor class group. We present numerical results showing which combination of techniques is more efficient for hyperelliptic curves over $\mathbb{F}_{2^n}$ of various genera. Abstract: A computer calculation with Magma shows that there is no extremal self-dual binary code $\mathcal{C}$ of length $72$ whose automorphism group contains the symmetric group of degree $3$, the alternating group of degree $4$ or the dihedral group of order $8$. Combining this with the known results in the literature one obtains that $Aut(\mathcal{C})$ has order at most $5$ or is isomorphic to the elementary abelian group of order $8$.
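The defining property behind the Golay sequence pairs of the first abstract above is that the aperiodic autocorrelations of the two sequences cancel at every nonzero shift. A small sketch verifying this directly (my own illustration, using the classical binary pairs of lengths 2 and 4):

```python
# Verify the defining property of a Golay complementary pair:
# C_a(u) + C_b(u) = 0 for every shift u != 0.
def acf(seq, u):
    """Aperiodic autocorrelation of a real-valued sequence at shift u >= 0."""
    return sum(seq[i] * seq[i + u] for i in range(len(seq) - u))

def is_golay_pair(a, b):
    assert len(a) == len(b)
    return all(acf(a, u) + acf(b, u) == 0 for u in range(1, len(a)))

# classical binary examples of lengths 2 and 4
print(is_golay_pair([1, 1], [1, -1]))                 # True
print(is_golay_pair([1, 1, 1, -1], [1, 1, -1, 1]))    # True
```

For $H$-phase sequences as in the abstract, the entries would be complex $H$-th roots of unity and `acf` would conjugate the shifted factor; the binary case above keeps the check integer-valued.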