Do Reed-Muller codes $\mathrm{RM}(r,m)$ exist for $r = 0$ and $m = 0$? Namely, does $\mathrm{RM}(0,0)$ exist? Some books say that $m$ should be a natural number, whereas others say it can be a whole number including $0$, for example Shu Lin and Daniel Costello's Error Control Coding. The codewords in a Reed-Muller code of degree $d$ and length $2^n$ are binary vectors of the form $$\big(f(0,0,\ldots, 0, 0), f(0,0, \ldots, 0, 1), f(0,0, \ldots, 1, 0), f(0,0, \ldots, 1, 1), \cdots, f(1,1,\ldots, 1,1) \big)$$ where $f$ is the corresponding binary polynomial of degree at most $d$ in $n$ variables. Suppose first that $n > 0$. The dimension of the code is $$k = \sum_{i=0}^d \binom{n}{i}\tag{1}$$ If $d=0$, then $k = \binom{n}{0}=1$. There are only two "polynomials" of degree $0$ in $n$ variables, namely the constants $0$ and $1$, and consequently the codewords of length $2^n$ are $000\cdots 0$ and $111\cdots 1$. If $n=0$ as well, the codewords (if any) are of length $2^0 = 1$. Now, you might want to claim from $(1)$ that $k = \binom{0}{0} = \frac{0!}{0!\,(0-0)!} = 1$ and so there are $2^k = 2$ codewords (which, of course, are $0$ and $1$). Alternatively, the $k$ in Equation $(1)$ is the total number of subsets of cardinality $d$ or less of a set of $n$ elements. There is exactly one subset (of cardinality $0$) of the set of $0$ elements, namely the empty set $\emptyset$ itself. Thus, the Reed-Muller code of order $0$ and length $2^0$ is the $(1,1)$ code consisting of the two codewords $0$ and $1$. As a linear code, this is the identity map.
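To make the degenerate case concrete, here is a short Python sketch (my own illustration, not from any coding-theory library) that builds $\mathrm{RM}(r,m)$ by evaluating all monomials of degree at most $r$ on the points of $\mathbb{F}_2^m$ and spanning them over $\mathbb{F}_2$; for $r=m=0$ it returns exactly the two length-$1$ codewords $\{0, 1\}$.

```python
from itertools import combinations, product

def rm_codewords(r, m):
    """Return the set of codewords (as tuples) of the binary code RM(r, m).

    Codewords are evaluations over F_2^m of polynomials of degree <= r;
    the degenerate case m = 0 gives a single evaluation point, length 2^0 = 1.
    """
    points = list(product([0, 1], repeat=m))          # the 2^m evaluation points
    # One basis row per monomial x_{i1}...x_{ik} with k <= r.
    basis = []
    for deg in range(r + 1):
        for idx in combinations(range(m), deg):
            basis.append(tuple(
                1 if all(p[i] for i in idx) else 0 for p in points))
    # Span the basis over F_2 to enumerate all codewords.
    code = set()
    for coeffs in product([0, 1], repeat=len(basis)):
        word = tuple(sum(c * row[j] for c, row in zip(coeffs, basis)) % 2
                     for j in range(2 ** m))
        code.add(word)
    return code
```

For example, `rm_codewords(0, 0)` gives the two codewords of the $(1,1)$ code discussed above, and `rm_codewords(1, 3)` gives the familiar $[8,4,4]$ first-order Reed-Muller code.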
Sorry if this is a super easy question, I am really not a signals guy (more of a statistician/computer scientist). I was trying to reconstruct/learn a sinusoidal function $x(t)$ with linear regression $f(x) = \langle w , \phi(x) \rangle$ (adding as many polynomial features as I may need) over a fixed interval $[-1,1]$. Then I realized that there is this thing called the Nyquist-Shannon sampling theorem that might help me figure out whether I have enough samples. The reason I care is that the performance of my regression model is really bad on the test set, and I am wondering if I am just not getting enough samples from my ground-truth signal (which for the moment I control synthetically). I know now that I need my sample frequency $f_s$ to be larger than twice the bandwidth $B$ of my signal: $$ f_s > 2B$$ To my understanding, bandwidth is just the range of the frequencies present in the signal $x(t)$ (I even asked for clarification here: What is the definition of the bandwidth of a signal?). If that is correct, then the bandwidth is: $$ B = f_{max} - f_{min} $$ Is there a general way to get $f_{max}$, $f_{min}$ from a signal in the time domain? If I understand this correctly, I just need to get the frequencies that make up $x(t)$, right? The only way I know is when the analytic form of $x(t)$ is available, which it is in my example: I just read off the frequencies from the Fourier series $ f(x) = \frac{1}{2}a_0 + \sum^{\infty}_{n=1} a_n \cos(nx) + \sum^{\infty}_{n=1} b_n \sin(nx) $. Say $n_{max},n_{min}$ are the largest and smallest values appearing inside the sin or cos terms; then we read them off and do: $$ B = \frac{n_{max} }{2 \pi} - \frac{n_{min}}{2 \pi} $$ Right? What happens if there is only a single term, say just a sine like $x(t) = \sin(n t)$: how do we get the smaller frequency if there is no other frequency?
Now that I understand much better what the term frequency means, I assume the answer will point out that if I don't know the analytic form of my signal (as probably happens most often in practice) we need to resort to some type of transform. Probably the Fourier transform? A quick Google search on the frequency domain yielded a list of methods: Fourier series (repetitive signals, oscillating systems); Fourier transform (nonrepetitive signals, transients); Laplace transform (electronic circuits and control systems); Z transform (discrete-time signals, digital signal processing); wavelet transform (image analysis, data compression). I don't know when to apply each, but the Wikipedia article links to more articles for each one... Is this correct or am I totally off? (Which might be the case, since I'm not very familiar with this field.) I am assuming there are probably engineering details I am totally missing, like low-pass filters or something else that might be important for real signals in practice. Though that's just a guess based on the definition of a low-pass filter (it filters high frequencies out). Further reading of What is the definition of the bandwidth of a signal? makes me suspect that the actual correct answer is as follows: consider a time-domain signal $x(t)$ and its frequency-domain description $X(\omega)$; then the support of $X(\omega)$ is the bandwidth. In other words, the "summation" of the points in the support (to be more precise, its integral). So if the support is the whole interval $[-W,W]$ then the bandwidth would be: $$ B = \int^{W}_{-W}d \omega = 2W $$ so in general my guess is: $$ B = \int_{\omega \in \operatorname{supp}(X(\omega))} d\omega $$
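If only samples of $x(t)$ are available, one rough way to estimate the occupied band is indeed the discrete Fourier transform: take the support of the FFT magnitude above some threshold and read off its extremes. A minimal sketch under my own assumptions (the sampling rate, the two-tone test signal, and the 1% threshold are all illustrative, not from the question):

```python
import numpy as np

fs = 1000.0                            # sampling rate in Hz (assumed known)
t = np.arange(0, 1.0, 1.0 / fs)        # 1 second of samples
# Synthetic ground truth: tones at 50 Hz and 120 Hz.
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

X = np.fft.rfft(x)                     # one-sided spectrum of the real signal
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
# Crude support estimate: bins whose magnitude exceeds 1% of the peak.
support = freqs[np.abs(X) > 0.01 * np.abs(X).max()]

f_min, f_max = support.min(), support.max()
B = f_max - f_min    # occupied bandwidth; a single tone gives f_min == f_max
```

For real, noisy signals the threshold choice matters (spectral leakage smears energy across bins), which is where windowing and filtering details come in.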
Category:Generalized Sums Let $\subseteq$ denote the subset relation on $\mathcal F$. Define the net $\phi: \mathcal F \to G$ by: $\displaystyle \phi \left({F}\right) = \sum_{i \mathop \in F} g_i$ Then $\phi$ is denoted: $\displaystyle \sum \left\{{g_i: i \in I}\right\}$ and referred to as a generalized sum.
I don't understand the different behaviour of the advection-diffusion equation when I apply different boundary conditions. My motivation is the simulation of a real physical quantity (particle density) under diffusion and advection. Particle density should be conserved in the interior unless it flows out through the edges. By this logic, if I enforce Neumann boundary conditions at the ends of the system, such as $\frac{\partial \phi}{\partial x}=0$ (on the left and right sides), then the system should be "closed", i.e. if the flux at the boundary is zero then no particles can escape. For all the simulations below I have applied the Crank-Nicolson discretization to the advection-diffusion equation, and all simulations have $\frac{\partial \phi}{\partial x}=0$ boundary conditions. However, for the first and last rows of the matrix (the boundary-condition rows) I allow $\beta$ to be changed independently of the interior value. This allows the end points to be fully implicit. Below I discuss 4 different configurations; only one of them is what I expected. At the end I discuss my implementation. Diffusion-only limit Here the advection terms are turned off by setting the velocity to zero. Diffusion only, with $\boldsymbol{\beta}$=0.5 (Crank-Nicolson) at all points The quantity is not conserved, as can be seen by the pulse area reducing. Diffusion only, with $\boldsymbol{\beta}$=0.5 (Crank-Nicolson) at interior points, and $\boldsymbol{\beta}$=1 (fully implicit) at the boundaries By using the fully implicit equation on the boundaries I achieve what I expect: no particles escape. You can see this by the area being conserved as the particles diffuse. Why should the choice of $\beta$ at the boundary points influence the physics of the situation? Is this a bug or expected? Diffusion and advection When the advection term is included, the value of $\beta$ at the boundaries does not seem to influence the solution. However, in all cases the boundaries seem to be "open", i.e.
particles can escape. Why is this the case? Advection and diffusion with $\boldsymbol{\beta}$=0.5 (Crank-Nicolson) at all points Advection and diffusion with $\boldsymbol{\beta}$=0.5 (Crank-Nicolson) at interior points, and $\boldsymbol{\beta}$=1 (fully implicit) at the boundaries Implementation of the advection-diffusion equation Starting with the advection-diffusion equation, $ \frac{\partial \phi}{\partial t} = D\frac{\partial^2 \phi}{\partial x^2} + \boldsymbol{v}\frac{\partial \phi}{\partial x} $ Discretizing with the $\beta$-weighted scheme gives, $ \frac{\phi_{j}^{n+1} - \phi_{j}^{n}}{\Delta t} = D \left[ \frac{1 - \beta}{(\Delta x)^2} \left( \phi_{j-1}^{n} - 2\phi_{j}^{n} + \phi_{j+1}^{n} \right) + \frac{\beta}{(\Delta x)^2} \left( \phi_{j-1}^{n+1} - 2\phi_{j}^{n+1} + \phi_{j+1}^{n+1} \right) \right] + \boldsymbol{v} \left[ \frac{1-\beta}{2\Delta x} \left( \phi_{j+1}^{n} - \phi_{j-1}^{n} \right) + \frac{\beta}{2\Delta x} \left( \phi_{j+1}^{n+1} - \phi_{j-1}^{n+1} \right) \right] $ Note that $\beta$=0.5 gives Crank-Nicolson, $\beta$=1 gives fully implicit, and $\beta$=0 gives fully explicit.
To simplify the notation let's make the substitution, $ s = D\frac{\Delta t}{(\Delta x)^2} \\ r = \boldsymbol{v}\frac{\Delta t}{2 \Delta x} $ and move the known value $\phi_{j}^{n}$ of the time derivative to the right-hand side, $ \phi_{j}^{n+1} = \phi_{j}^{n} + s \left( 1-\beta \right) \left( \phi_{j-1}^{n} - 2\phi_{j}^{n} + \phi_{j+1}^{n} \right) + s \beta \left( \phi_{j-1}^{n+1} - 2\phi_{j}^{n+1} + \phi_{j+1}^{n+1} \right) + r \left( 1 - \beta \right) \left( \phi_{j+1}^{n} - \phi_{j-1}^{n} \right) + r \beta \left( \phi_{j+1}^{n+1} - \phi_{j-1}^{n+1} \right) $ Factoring the $\phi$ terms gives, $ \underbrace{\beta(r - s)\phi_{j-1}^{n+1} + (1 + 2s\beta)\phi_{j}^{n+1} -\beta(s + r)\phi_{j+1}^{n+1}}_{\boldsymbol{A}\cdot\boldsymbol{\phi^{n+1}}} = \underbrace{ (1-\beta)(s - r)\phi_{j-1}^{n} + (1-2s[1-\beta])\phi_{j}^{n} + (1-\beta)(s+r)\phi_{j+1}^{n}}_{\boldsymbol{M\cdot}\boldsymbol{\phi^n}} $ which we can write in matrix form as $\boldsymbol{A}\cdot\boldsymbol{\phi^{n+1}} = \boldsymbol{M}\cdot\boldsymbol{\phi^{n}}$ where, $ \boldsymbol{A} = \left( \begin{matrix} 1+2s\beta & -\beta(s + r) & & 0 \\ \beta(r-s) & 1+2s\beta & -\beta (s + r) & \\ & \ddots & \ddots & \ddots \\ & \beta(r-s) & 1+2s\beta & -\beta (s + r) \\ 0 & & \beta(r-s) & 1+2s\beta \\ \end{matrix} \right) $ $ \boldsymbol{M} = \left( \begin{matrix} 1-2s(1-\beta) & (1 - \beta)(s + r) & & 0 \\ (1 - \beta)(s - r) & 1-2s(1-\beta) & (1 - \beta)(s + r) & \\ & \ddots & \ddots & \ddots \\ & (1 - \beta)(s - r) & 1-2s(1-\beta) & (1 - \beta)(s + r) \\ 0 & & (1 - \beta)(s - r) & 1-2s(1-\beta) \\ \end{matrix} \right) $ Applying Neumann boundary conditions NB: working through the derivation again, I think I have spotted the error. I assumed a fully implicit scheme ($\beta$=1) when writing the finite difference of the boundary condition. If you assume a Crank-Nicolson scheme here, the complexity becomes too great and I could not solve the resulting equations to eliminate the nodes which are outside the domain.
However, it would appear possible; there are two equations with two unknowns, but I couldn't manage it. This probably explains the difference between the first and second plots above. I think we can conclude that only the plots with $\beta$=1 at the boundary points are consistent with this derivation. Assuming the flux at the left-hand side is known (assuming a fully implicit form), $ \frac{\partial\phi_1^{n+1}}{\partial x} = \sigma_L $ Writing this as a centred difference gives, $ \frac{\partial\phi_1^{n+1}}{\partial x} \approx \frac{\phi_2^{n+1} - \phi_0^{n+1}}{2\Delta x} = \sigma_L $ therefore, $ \phi_0^{n+1} = \phi_{2}^{n+1} - 2 \Delta x\sigma_L $ Note that this introduces a node $\phi_0^{n+1}$ which is outside the domain of the problem. This node can be eliminated by using a second equation. We can write the $j=1$ node as, $ \beta(r - s)\phi_0^{n+1} + (1+2s\beta)\phi_1^{n+1} - \beta(s+r)\phi_2^{n+1} = (1-\beta)(s - r)\phi_{0}^{n} + (1-2s[1-\beta])\phi_{1}^{n} + (1-\beta)(s+r)\phi_{2}^{n} $ Substituting in the value of $\phi_0^{n+1}$ found from the boundary condition gives the following result for the $j$=1 row, $ (1+2s\beta)\phi_1^{n+1} - 2s\beta\phi_2^{n+1} = (1-\beta)(s - r)\phi_{0}^{n} + (1-2s[1-\beta])\phi_{1}^{n} + (1-\beta)(s+r)\phi_{2}^{n} + 2\beta(r-s)\Delta x\sigma_L $ Performing the same procedure for the final row (at $j$=$J$) yields, $ -2s\beta\phi_{J-1}^{n+1} + (1+2s\beta)\phi_J^{n+1} = (1-\beta)(s - r)\phi_{J-1}^{n} + (1 - 2s(1-\beta))\phi_{J}^{n} + 2\beta(s+r)\Delta x\sigma_R $ Finally, making the boundary rows implicit (setting $\beta$=1) makes all the $(1-\beta)$ terms vanish and gives, $ (1+2s)\phi_1^{n+1} - 2s\phi_2^{n+1} = \phi_{1}^{n} + 2(r-s)\Delta x\sigma_L $ $ -2s\phi_{J-1}^{n+1} + (1+2s)\phi_J^{n+1} = \phi_{J}^{n} + 2(s+r)\Delta x\sigma_R $ Therefore with Neumann boundary conditions we can write the matrix equation, $\boldsymbol{A}\cdot\phi^{n+1} = \boldsymbol{M}\cdot\phi^{n} + \boldsymbol{b_N}$, where, $ \boldsymbol{A} = \left( \begin{matrix} 1+2s & -2s & & 0 \\ \beta(r-s) &
1+2s\beta & -\beta (s + r) & \\ & \ddots & \ddots & \ddots \\ & \beta(r-s) & 1+2s\beta & -\beta (s + r) \\ 0 & & -2s & 1+2s \\ \end{matrix} \right) $ $ \boldsymbol{M} = \left( \begin{matrix} 1 & 0 & & 0 \\ (1 - \beta)(s - r) & 1-2s(1-\beta) & (1 - \beta)(s + r) & \\ & \ddots & \ddots & \ddots \\ & (1 - \beta)(s - r) & 1-2s(1-\beta) & (1 - \beta)(s + r) \\ 0 & & 0 & 1 \\ \end{matrix} \right) $ $ \boldsymbol{b_N} = \left( \begin{matrix} 2 (r - s) \Delta x \sigma_L & 0 & \ldots & 0 & 2 (s + r) \Delta x \sigma_R \end{matrix} \right)^{T} $ My current understanding I think the difference between the first and second plots is explained by the error outlined above. Regarding the conservation of the physical quantity: I believe the cause is that, as pointed out here, the advection equation in the form I have written it doesn't allow propagation in the reverse direction, so the wave just passes through even with zero-flux boundary conditions. My initial intuition regarding conservation only applies when the advection term is zero (this is the solution in plot number 2, where the area is conserved). Even with Neumann zero-flux boundary conditions $\frac{\partial \phi}{\partial x} = 0$ the mass can still leave the system; this is because the correct boundary conditions in this case are Robin boundary conditions, in which the total flux is specified: $j = D\frac{\partial \phi}{\partial x} + \boldsymbol{v}\phi = 0$. Moreover, the Neumann condition only specifies that mass cannot leave the domain via diffusion; it says nothing about advection. In essence, what we have here are boundary conditions that are closed to diffusion and open to advection. For more information see the answer here: Implementation of gradient zero boundary condition in advection-diffusion equation. Would you agree?
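The diffusion-only limit of the matrix form above can be sketched in a few lines of NumPy. This is a minimal sketch under my own assumptions (grid size, $D$, $\Delta t$, and the Gaussian pulse are illustrative values, not from the post), taken fully implicit ($\beta=1$) at every node so it is consistent with the ghost-node elimination, which assumed a fully implicit boundary. With zero-flux boundaries ($\sigma_L=\sigma_R=0$) the trapezoid-weighted mass is then conserved to machine precision.

```python
import numpy as np

# Diffusion-only limit (v = 0, so r = 0), beta = 1 everywhere.
J, D, dt, dx = 101, 1.0, 0.01, 0.1
s = D * dt / dx**2

A = np.zeros((J, J))
for j in range(1, J - 1):                       # interior rows, beta = 1
    A[j, j - 1], A[j, j], A[j, j + 1] = -s, 1 + 2 * s, -s
A[0, 0], A[0, 1] = 1 + 2 * s, -2 * s            # ghost node eliminated, sigma_L = 0
A[-1, -1], A[-1, -2] = 1 + 2 * s, -2 * s        # same on the right
M = np.eye(J)                                   # beta = 1 makes M the identity

xg = np.arange(J) * dx
phi = np.exp(-((xg - 5.0) ** 2))                # Gaussian pulse in the middle
# Trapezoid-weighted mass is the discretely conserved quantity for this scheme.
mass = lambda p: dx * (p[0] / 2 + p[1:-1].sum() + p[-1] / 2)
m0 = mass(phi)
for _ in range(200):
    phi = np.linalg.solve(A, M @ phi)
```

Summing the update equations with weights $(\tfrac12, 1, \ldots, 1, \tfrac12)$ makes the flux terms telescope to zero, which is why this particular weighted mass (rather than the plain sum) is conserved exactly.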
Under the auspices of the Computational Complexity Foundation (CCF)

The degree-$d$ Chow parameters of a Boolean function $f: \{-1,1\}^n \to \mathbb{R}$ are its degree at most $d$ Fourier coefficients. It is well-known that degree-$d$ Chow parameters uniquely characterize degree-$d$ polynomial threshold functions (PTFs) within the space of all bounded functions. In this paper, we prove a robust ...

We study the problem of testing identity against a given distribution (a.k.a. goodness-of-fit) with a focus on the high confidence regime. More precisely, given samples from an unknown distribution $p$ over $n$ elements, an explicitly given distribution $q$, and parameters $0< \epsilon, \delta < 1$, we wish to distinguish ...

We study the problem of generalized uniformity testing [BC17] of a discrete probability distribution: given samples from a probability distribution $p$ over an unknown discrete domain $\mathbf{\Omega}$, we want to distinguish, with probability at least $2/3$, between the case that $p$ is uniform on some subset of $\mathbf{\Omega}$ ...

We study the general problem of testing whether an unknown discrete distribution belongs to a given family of distributions. More specifically, given a class of distributions $\mathcal{P}$ and sample access to an unknown distribution $\mathbf{P}$, we want to distinguish (with high probability) between the case that $\mathbf{P} \in \mathcal{P}$ and ...

We study the fundamental problems of (i) uniformity testing of a discrete distribution, and (ii) closeness testing between two discrete distributions with bounded $\ell_2$-norm. These problems have been extensively studied in distribution testing and sample-optimal estimators are known for them [Paninski:08, CDVV14, VV14, DKN:15]. In this work, we show ...

We prove the first Statistical Query lower bounds for two fundamental high-dimensional learning problems involving Gaussian distributions: (1) learning Gaussian mixture models (GMMs), and (2) robust (agnostic) learning of a single unknown mean Gaussian. In particular, we show a super-polynomial gap between the (information-theoretic) sample complexity and the ...

We study problems in distribution property testing: given sample access to one or more unknown discrete distributions, we want to determine whether they have some global property or are $\epsilon$-far from having the property in $\ell_1$ distance (equivalently, total variation distance, or "statistical distance"). In this work, we give a ...

We give a deterministic algorithm for approximately computing the fraction of Boolean assignments that satisfy a degree-$2$ polynomial threshold function. Given a degree-2 input polynomial $p(x_1,\dots,x_n)$ and a parameter $\epsilon > 0$, the algorithm approximates \[ \Pr_{x \sim \{-1,1\}^n}[p(x) \geq 0] \] to within an additive $\pm \epsilon$ in ...

Let $g: \{-1,1\}^k \to \{-1,1\}$ be any Boolean function and $q_1,\dots,q_k$ be any degree-2 polynomials over $\{-1,1\}^n.$ We give a deterministic algorithm which, given as input explicit descriptions of $g,q_1,\dots,q_k$ and an accuracy parameter $\epsilon>0$, approximates \[ \Pr_{x \sim \{-1,1\}^n}[g(\mathrm{sign}(q_1(x)),\dots,\mathrm{sign}(q_k(x)))=1] \] to within an additive $\pm \epsilon$. For any constant ...

For $f$ a weighted voting scheme used by $n$ voters to choose between two candidates, the $n$ Shapley-Shubik Indices (or Shapley values) of $f$ provide a measure of how much control each voter can exert over the overall outcome of the vote. Shapley-Shubik indices were introduced by Lloyd Shapley ...

We initiate the study of inverse problems in approximate uniform generation, focusing on uniform generation of satisfying assignments of various types of Boolean functions. In such an inverse problem, the algorithm is given uniform random satisfying assignments of an unknown function $f$ belonging to a class $\mathcal{C}$ of Boolean functions ...

The Chow parameters of a Boolean function $f: \{-1,1\}^n \to \{-1,1\}$ are its $n+1$ degree-0 and degree-1 Fourier coefficients. It has been known since 1961 [Chow:61, Tannenbaum:61] that the (exact values of the) Chow parameters of any linear threshold function $f$ uniquely specify $f$ within the space of all Boolean ...

Let $x$ be a random vector coming from any $k$-wise independent distribution over $\{-1,1\}^n$. For an $n$-variate degree-2 polynomial $p$, we prove that $E[\mathrm{sgn}(p(x))]$ is determined up to an additive $\epsilon$ for $k = \mathrm{poly}(1/\epsilon)$. This answers an open question of Diakonikolas et al. (FOCS 2009). Using standard constructions of ...

We show that any distribution on $\{-1,1\}^n$ that is $k$-wise independent fools any halfspace $h$ with error $\epsilon$ for $k = O(\log^2(1/\epsilon)/\epsilon^2)$. Up to logarithmic factors, our result matches a lower bound by Benjamini, Gurel-Gurevich, and Peled (2007) showing that $k = \Omega(1/(\epsilon^2 \cdot \log(1/\epsilon)))$. Using standard constructions of $k$-wise ...

We describe a general method for testing whether a function on $n$ input variables has a concise representation. The approach combines ideas from the junta test of Fischer et al. with ideas from learning theory, and yields property testers that make $\mathrm{poly}(s/\epsilon)$ queries (independent of $n$) for Boolean function classes ...
So I was playing around with solving polynomials last night and realized that I had no idea how to solve a polynomial with no rational roots, such as $$x^4+3x^3+6x+4=0$$ Using the rational roots test, the possible roots are $\pm1, \pm2, \pm4$, but none of these work. Because there were no rational linear factors, I had to assume that the quartic separated into two quadratics yielding either imaginary or irrational "pairs" of roots. My initial attempt was to solve for the coefficients of these factors. I assumed that $x^4+3x^3+6x+4=0$ factored into something that looked like $$(x^2+ax+b)(x^2+cx+d)=0$$ because the coefficient of the leading term is one. Expanding this out I got $$x^4+ax^3+cx^3+bx^2+acx^2+dx^2+adx+bcx+bd=0$$ $$x^4+(a+c)x^3+(b+ac+d)x^2+(ad+bc)x+bd=0$$ Equating the coefficients of the two equations gave me these relationships between the coefficients: $$a+c = 3$$ $$b+ac+d = 0$$ $$ad+bc = 6$$ $$bd = 4$$ Solving this system using the two middle equations (with $c = 3-a$ and $d = \frac4b$): $$\begin{cases} b+a(3-a)+\frac4b=0 \\ a\frac4b+b(3-a)=6 \end{cases}$$ From the first equation: $$a = \frac{3\pm\sqrt{9+4b+\frac{16}{b}}}{2}$$ Substituting this into the second equation: $$\frac{3\pm\sqrt{9+4b+\frac{16}{b}}}{2}\cdot\frac4b+b\cdot(3-\frac{3\pm\sqrt{9+4b+\frac{16}{b}}}{2})=6$$ $$3(b-2)^2 = (b^2-4)\cdot\pm\sqrt{9+4b+\frac{16}b}$$ $$0 = (b-2)^2\cdot((b+2)^2(9+4b+\frac{16}b)-9(b-2)^2)$$ So $b = 2$, because everything after $(b-2)^2$ did not really matter in this case. From there it was easy to get $d = 2$, $a = -1$ and $c = 4$. This meant that $$x^4+3x^3+6x+4=0 \to (x^2-x+2)(x^2+4x+2)=0$$ $$x = \frac12\pm\frac{\sqrt7}{2}i,\space x = -2\pm\sqrt2$$ These answers worked! I was pretty happy at the end that I had solved the equation, which had taken a lot of work, but my question is: is there a better way to solve this?
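As a numerical cross-check of the hand factorisation (my own sketch, not part of the original working), one can compare the roots of the quartic with the roots of the two claimed quadratic factors:

```python
import numpy as np

# Roots of x^4 + 3x^3 + 6x + 4 computed directly...
quartic_roots = np.sort_complex(np.roots([1, 3, 0, 6, 4]))
# ...and via the factorisation (x^2 - x + 2)(x^2 + 4x + 2).
factor_roots = np.sort_complex(
    np.concatenate([np.roots([1, -1, 2]), np.roots([1, 4, 2])]))
```

The two sorted arrays agree to floating-point accuracy, confirming $b=2$, $d=2$, $a=-1$, $c=4$.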
Answer $\tan 26^\circ$ Work Step by Step $\tan\theta=\frac{\sin\theta}{\cos\theta}$. For acute angles, as $\theta$ becomes larger, $\sin\theta$ becomes larger and $\cos\theta$ becomes smaller; therefore $\tan\theta$ becomes larger. Hence $\tan 26^\circ$ is larger than $\tan 25^\circ$.
Adds the parameters for a fully connected layer and returns the output. tf.contrib.layers.legacy_fully_connected( x, num_output_units, activation_fn=None, weight_init=initializers.xavier_initializer(), bias_init=tf.zeros_initializer(), name=None, weight_collections=(ops.GraphKeys.WEIGHTS,), bias_collections=(ops.GraphKeys.BIASES,), output_collections=(ops.GraphKeys.ACTIVATIONS,), trainable=True, weight_regularizer=None, bias_regularizer=None) A fully connected layer is generally defined as a matrix multiply: y = f(w * x + b), where f is given by activation_fn. If activation_fn is None, the result of y = w * x + b is returned. If x has shape [\(\text{dim}_0, \text{dim}_1, ..., \text{dim}_n\)] with more than 2 dimensions (\(n > 1\)), then we repeat the matrix multiply along the first dimensions. The result r is a tensor of shape [\(\text{dim}_0, ..., \text{dim}_{n-1},\) num_output_units], where \( r_{i_0, ..., i_{n-1}, k} = \sum_{0 \leq j < \text{dim}_n} x_{i_0, ... i_{n-1}, j} \cdot w_{j, k}\). This is accomplished by reshaping x to the 2-D shape [\(\text{dim}_0 \cdot ... \cdot \text{dim}_{n-1}, \text{dim}_n\)] before the matrix multiply and afterwards reshaping it to [\(\text{dim}_0, ..., \text{dim}_{n-1},\) num_output_units]. This op creates w and optionally b. Bias (b) can be disabled by setting bias_init to None. Most of the details of variable creation can be controlled by specifying the initializers (weight_init and bias_init) and the collections in which to place the created variables (weight_collections and bias_collections; note that the variables are always added to the VARIABLES collection). The output of the layer can be placed in custom collections using output_collections. The collections arguments default to WEIGHTS, BIASES and ACTIVATIONS, respectively.
A per-layer regularization can be specified by setting weight_regularizer and bias_regularizer, which are applied to the weights and biases respectively, and whose output is added to the REGULARIZATION_LOSSES collection.

Args:
x: The input Tensor.
num_output_units: The size of the output.
activation_fn: Activation function, default set to None to skip it and maintain a linear activation.
weight_init: An optional weight initialization, defaults to xavier_initializer.
bias_init: An initializer for the bias, defaults to 0. Set to None in order to disable bias.
name: The name for this operation is used to name operations and to find variables. If specified it must be unique for this scope, otherwise a unique name starting with "fully_connected" will be created. See tf.compat.v1.variable_scope for details.
weight_collections: List of graph collections to which weights are added.
bias_collections: List of graph collections to which biases are added.
output_collections: List of graph collections to which outputs are added.
trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
weight_regularizer: A regularizer like the result of l1_regularizer or l2_regularizer. Used for weights.
bias_regularizer: A regularizer like the result of l1_regularizer or l2_regularizer. Used for biases.

Returns: The output of the fully connected layer.

Raises:
ValueError: If x has rank less than 2 or if its last dimension is not set.
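The shape handling described above (repeating the matrix multiply along the leading dimensions by flattening, multiplying, and reshaping) can be illustrated with a small NumPy sketch. This is my own illustration of the reshape logic, not the TensorFlow implementation itself; the function name and test shapes are mine:

```python
import numpy as np

def fully_connected_nd(x, w, b=None):
    """Compute y[..., k] = sum_j x[..., j] * w[j, k] (+ b[k]).

    Mirrors the documented behaviour: flatten all leading dimensions of x,
    do a single 2-D matrix multiply, then restore the leading dimensions.
    """
    lead, dim_n = x.shape[:-1], x.shape[-1]
    y = x.reshape(-1, dim_n) @ w          # [dim_0 * ... * dim_{n-1}, out]
    if b is not None:
        y = y + b
    return y.reshape(*lead, w.shape[1])   # [dim_0, ..., dim_{n-1}, out]

x = np.random.default_rng(0).normal(size=(2, 3, 5))
w = np.ones((5, 4))
b = np.zeros(4)
y = fully_connected_nd(x, w, b)           # shape (2, 3, 4)
```

This matches the einsum formulation \(r_{i_0,\ldots,i_{n-1},k} = \sum_j x_{i_0,\ldots,i_{n-1},j} w_{j,k}\) given in the docstring.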
10.2.2: Compressible Flow Stream Function The stream function can also be defined for compressible substances in steady state. The continuity equation is used as the base for the derivation. The continuity equation for a compressible substance is \[ \label{if:eq:continutyRho} \dfrac{\partial \left(\rho\, \pmb{U}_x\right)}{\partial x} + \dfrac{\partial \left(\rho\, \pmb{U}_y\right)}{\partial y} = 0 \tag{59} \] To absorb the density, a dimensionless density is inserted into the definition of the stream function as \[ \label{if:eq:rhoStreamFunY} \dfrac{\partial \psi }{\partial y} = \dfrac{\rho\, U_x}{\rho_0} \tag{60} \] and \[ \label{if:eq:rhoStreamFunX} \dfrac{\partial \psi }{\partial x} = -\dfrac{\rho\, U_y}{\rho_0} \tag{61} \] where \(\rho_0\) is the density at a reference location. Note that the new stream function is not identical to the previous definition, and the two cannot be combined. The stream function, as was shown earlier, describes (constant) stream lines. Using the same argument by which equation (50) and equation (??) were developed leads to equation (53), and there is no difference between the compressible flow and incompressible flow cases.
Substituting equations (60) and (61) into equation (53) yields \[ \label{if:eq:streamUcompressible} U_x \,dy - U_y\, dx = \dfrac{\rho_0}{\rho} \left( \dfrac{\partial \psi}{\partial y} \,dy + \dfrac{\partial \psi}{\partial x} \,dx \right) = \dfrac{\rho_0}{\rho} \, d\psi \tag{62} \] This suggests that the stream function should be redefined, so that expressions similar to the incompressible case can be developed for compressible flow, as \[ \label{if:eq:compressibleFlowStreamFun} d\psi = \dfrac{\rho}{\rho_0} \, \pmb{U} \boldsymbol{\cdot} \widehat{s} \, d\ell \tag{63} \] With this new definition, the flow crossing the line from \(1\) to \(2\), utilizing definition (63), is \[ \label{if:eq:mDOTcompressibleFlow} \dot{m} = \int_1^2 \rho\, \pmb{U} \boldsymbol{\cdot} \widehat{s} \, d\ell = \rho_0 \int_1^2 d\psi = \rho_0 \left( \psi_2 -\psi_1 \right) \tag{64} \]
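The definitions (60) and (61) can be checked symbolically: for any smooth \(\psi\) and \(\rho\), the velocity field they induce satisfies the compressible continuity equation (59) identically. A small SymPy sketch of this check (my own verification, not from the text):

```python
import sympy as sp

x, y, rho0 = sp.symbols('x y rho_0')
psi = sp.Function('psi')(x, y)      # arbitrary smooth stream function
rho = sp.Function('rho')(x, y)      # arbitrary smooth density field

# Velocities implied by equations (60) and (61).
U_x = rho0 / rho * sp.diff(psi, y)
U_y = -rho0 / rho * sp.diff(psi, x)

# Left-hand side of the continuity equation (59); the density cancels
# and the mixed partials of psi annihilate each other.
continuity = sp.diff(rho * U_x, x) + sp.diff(rho * U_y, y)
```

`sp.simplify(continuity)` reduces to zero, mirroring how the density is "absorbed" into the stream function.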
For example we have the vector $8i + 4j - 6k$; how can we find a unit vector perpendicular to this vector? Let $\vec{v}=x\vec{i}+y\vec{j}+z\vec{k}$ be a vector perpendicular to yours. Their inner product (the dot product $\vec{u}\cdot\vec{v}$) should be equal to $0$, therefore: $$8x+4y-6z=0 \tag{1}$$ Choose, for example, $x$ and $y$, and find $z$ from equation $(1)$. In order to make its length equal to $1$, calculate $\|\vec{v}\|=\sqrt{x^2+y^2+z^2}$ and divide $\vec{v}$ by it. Your unit vector would be: $$\vec{u}=\frac{\vec{v}}{\|\vec{v}\|}$$ Every answer here gives the equation $8a+4b-6c=0$. None mentions that this equation represents a plane perpendicular to the given vector. I am sure that the omission was an oversight of each respondent, but it deserves mention and emphasis. In the plane perpendicular to any vector, the set of vectors of unit length forms a circle, so answers will vary. The vectors $(-1,2,0)^t$ and $(3,0,4)^t$ can be chosen as a basis for the solution space of the plane (pick $b$ and $c$, then solve for $a$). You can divide each by its length, $\sqrt{5}$ and $5$ respectively, and take a trigonometric combination of them to get a general solution. I'd like to combine the above fine answers into an algorithm. Given a vector $\vec x$ not identically zero, one way to find $\vec y$ such that $\vec x^T \vec y = 0$ is: start with $\vec y' = \vec 0$ (all zeros); find $m$ such that $x_m \neq 0$, and pick any other index $n \neq m$; set $y'_n = x_m$ and $y'_m = -x_n$, making potentially two elements of $\vec y'$ non-zero (maybe only one if $x_n=0$, which doesn't matter); and finally normalize your vector to unit length: $\vec y = \frac{\vec y'}{\|\vec y'\|}.$ (I'm referring to the $n$th element of a vector $\vec v$ as $v_n$.)
An automated procedure: take a standard basis vector $\vec e_k$ which is not parallel to $\vec v$, and form the cross product $\vec e_k\times\vec v$, which is guaranteed to be orthogonal to $\vec v$. The crux of the method is to take the $\vec e_k$ which is "least parallel" to $\vec v$, i.e. the one that minimizes the dot product: take the index $k$ corresponding to the smallest $|v_k|$, and you are on the safe side. Update: In $d$ dimensions, take the standard basis vector $\vec e_k$ that forms the smallest dot product (in absolute value) with $\vec v$ and normalize the vector $$\vec e_k-\frac{\vec e_k\cdot\vec v}{\|\vec v\|^2}\vec v=\vec e_k-\frac{v_k}{\|\vec v\|^2}\vec v.$$ Two steps: First, find a vector $a\,{\bf i}+b\,{\bf j}+c\,{\bf k}$ that is perpendicular to $8\,{\bf i}+4\,{\bf j}-6\,{\bf k}$. (Set the dot product of the two equal to $0$ and solve. You can actually set $a$ and $b$ equal to $1$ here, and solve for $c$.) Then divide that vector by its length to make it a unit vector. This unit vector will still be perpendicular to $8\,{\bf i}+4\,{\bf j}-6\,{\bf k}$. A vector $v=ai+bj+ck$ is perpendicular to $w=8i+4j-6k$ if and only if $$v\cdot w=8a+4b-6c=0.$$ So for example, we could choose $a=1,b=1,c=2$, so that $v=i+j+2k$. But this is not a unit vector: $$\|v\|=\sqrt{a^2+b^2+c^2}=\sqrt{1^2+1^2+2^2}=\sqrt{6}.$$ However, for any number $t$, it is the case that $\|tv\|=|t|\cdot\|v\|$, and $(tv)\cdot w=t(v\cdot w)$. This shows us how to modify our vector $v$ to get a unit vector that still retains the property of being perpendicular to $w$. Specifically, $$u=\frac{1}{\sqrt{6}}\cdot v=\left(\frac{1}{\sqrt{6}}\right)i+\left(\frac{1}{\sqrt{6}}\right)j+\left(\frac{2}{\sqrt{6}}\right)k$$ satisfies $$\|u\|=\frac{1}{\sqrt{6}}\|v\|=\frac{1}{\sqrt{6}}\cdot\sqrt{6}=1,$$ so that $u$ is a unit vector, and $$u\cdot w=\frac{1}{\sqrt{6}}(v\cdot w)=\frac{1}{\sqrt{6}}\cdot0=0,$$ so that $u$ is perpendicular to $w=8i+4j-6k$. You are just looking for a vector $(x,y,z)$ s.t. $8x+4y-6z=0$.
Take $(1,-2,0)$ for example, and then divide it by its norm to make it a unit vector. Another method: starting from any vector non-collinear with the first, you can apply the Gram-Schmidt process to obtain an orthogonal vector. THEORY: Suppose we have a given vector $A$ and we need to find a vector $B$ perpendicular to it. We know that $A\cdot B=0$ (since the angle between them is 90° and cos(90°)=0). The algorithm below can be applied to 2D as well as 3D vectors. Given $A = 8i+4j-6k$, there are three steps. Step 1: Suppose a vector $B = i+j+k$. Step 2: Divide the coefficient of each unit vector in $B$ by the corresponding coefficient of $A$: $B=(1/8)i+(1/4)j+(1/{-6})k$. Step 3: Multiply any one of the three coefficients of $B$ by $(-2)$; here I have multiplied the coefficient of $k$ (it could equally be done with $i$ or $j$): $B=(1/8)i+(1/4)j+(1/3)k$. Now for the unit vector we divide $B$ by its magnitude, which in this case is $|B|= \sqrt{(1/8)^2 +(1/4)^2 +(1/3)^2}$, giving $\mathrm{Unit}(B)=B/|B|$. Here $B$ is the vector perpendicular to $A$, and $\mathrm{Unit}(B)$ is the required unit vector. Verification: $A\cdot B = 1 + 1 - 2 = 0$. In addition to Yves Daoust's answer: in $d$ dimensions, take any random vector $w$ which is not parallel to $v$; then $$u=w-\frac{v^Tw}{\|v\|^2}v$$ is orthogonal to the vector $v$. It doesn't have to be a standard basis vector (or the one that forms the smallest dot product). You need to find $a\,i + b\,j + c\,k$ so that its dot product with $8i +4j -6k$ is $0$. That means $8a + 4b - 6c = 0$. You need to choose $a,b,c$ satisfying this; for example, you can choose $a = 1$, $b = 1$, $c = 2$.
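The cross-product recipe above (pick the standard basis vector "least parallel" to $\vec v$, cross, normalise) is easy to sketch in NumPy; the function name is mine:

```python
import numpy as np

def unit_perpendicular(v):
    """Return a unit vector perpendicular to a nonzero 3-D vector v."""
    v = np.asarray(v, dtype=float)
    e = np.zeros(3)
    e[np.argmin(np.abs(v))] = 1.0      # basis vector least parallel to v
    p = np.cross(e, v)                 # orthogonal to v by construction
    return p / np.linalg.norm(p)

u = unit_perpendicular([8, 4, -6])     # e.g. (-0.6, 0, -0.8)
```

Choosing the smallest $|v_k|$ guarantees the cross product is far from zero, so the normalisation is numerically safe.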
The situation with quadrupole moments is slightly more complex than with the point-charge model of a point dipole, because a "quadrupole moment" is actually a matrix rather than a vector, so there are more degrees of freedom and therefore more possibilities for the model. In general, multipole moments are tensors, and they come in two (equivalent) varieties, cartesian and spherical. The most convenient description is the spherical one, which gives you five independent components, $Q_{2,m}$ with $m=-2,-1,0,1,2$, given by\begin{align}Q_{2,0} & = \int (x^2+y^2-2z^2)\rho(\mathbf r)\mathrm d\mathbf r \\Q_{2,\pm 1} & = \int (x\pm i y)z \, \rho(\mathbf r)\mathrm d\mathbf r \\Q_{2,\pm 2} & = \int (x\pm i y)^2\rho(\mathbf r)\mathrm d\mathbf r.\end{align}In terms of visualization, it is common to switch out the complex-valued moments for the equivalent real combinations\begin{align}Q_{2,xz} & = \int xz \, \rho(\mathbf r)\mathrm d\mathbf r \\Q_{2,yz} & = \int yz \, \rho(\mathbf r)\mathrm d\mathbf r \\Q_{2,xy} & = \int xy \, \rho(\mathbf r)\mathrm d\mathbf r \\Q_{2,x^2-y^2} & = \int (x^2-y^2) \rho(\mathbf r)\mathrm d\mathbf r .\end{align} These can be modelled by suitable combinations of point charges as follows: $Q_{2,xz}$ is produced by two point charges $+q$ at $(\pm d,0,\pm d)$ and two opposite point charges $-q$ at $(\pm d,0,\mp d)$, respectively; $Q_{2,yz}$ is produced by two point charges $+q$ at $(0,\pm d,\pm d)$ and two opposite point charges $-q$ at $(0,\pm d,\mp d)$, respectively; $Q_{2,xy}$ is produced by two point charges $+q$ at $(\pm d,\pm d,0)$ and two opposite point charges $-q$ at $(\pm d,\mp d,0)$, respectively; $Q_{2,x^2-y^2}$ is produced by two point charges $+q$ at $(\pm d,0,0)$ and two opposite point charges $-q$ at $(0,\pm d,0)$, respectively; and finally $Q_{2,0}$ is produced by two point charges $+q$ at $(0,0,\pm d)$ and a single point charge $-2q$ at the origin.
In all cases, you obtain the point quadrupole by taking the limit $d\to0$ while letting the charge $q\to\infty$ with $qd^2$ held constant. Given any quadrupole moment, you can find the corresponding point quadrupole by choosing an appropriate linear combination of the five configurations described above. However, this is a slightly misleading statement, because this scheme asks you to put a bunch of charges around (as many as 19), which is not a minimal number. To get that minimal number, you need to use the cartesian form of the tensor, which has components\begin{align}Q_{ij} & = \int (x_ix_j -\frac13 \delta_{ij}r^2)\rho(\mathbf r)\mathrm d\mathbf r,\end{align}and which describes a traceless, symmetric, rank-two tensor (i.e. a matrix). The nice thing about this formulation is that all such matrices can be diagonalized, so there exists a frame of reference in which only the $Q_{ii}$ are nonzero. This means, in turn, that there will always exist a point-charge model of the following form: two point charges $q_1$ at $\pm d \hat{\mathbf e}_1$, for some unit vector $\hat{\mathbf e}_1$; two point charges $q_2$ at $\pm d \hat{\mathbf e}_2$, for some unit vector $\hat{\mathbf e}_2$ orthogonal to $\hat{\mathbf e}_1$; two point charges $q_3$ at $\pm d \hat{\mathbf e}_3$, for some unit vector $\hat{\mathbf e}_3$ orthogonal to both $\hat{\mathbf e}_1$ and $\hat{\mathbf e}_2$; and a single point charge $-2(q_1+q_2+q_3)$ at the origin. This point-charge approach can be extended to higher multipole moments, but it quickly becomes cumbersome; instead, I normally recommend visualizing point multipoles (starting with point dipoles!) as a surface charge distribution on a sphere given by the corresponding spherical harmonic; to see this in action two levels up, see Hexadecapole potential using point particles?.
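As a quick sanity check of the point-charge models above, here is a small sketch (variable names are my own) that evaluates the discrete moments $Q = \sum_i q_i f(\mathbf r_i)$ for the $Q_{2,xy}$ arrangement and confirms that it produces only the intended component:

```python
# Point-charge model of the Q_{2,xy} quadrupole: +q at (±d, ±d, 0),
# -q at (±d, ∓d, 0).  The moments are discrete versions of the
# integrals above: Q = sum_i q_i * f(x_i, y_i, z_i).
q, d = 1.0, 0.5
charges = [(+q, ( d,  d, 0)), (+q, (-d, -d, 0)),
           (-q, ( d, -d, 0)), (-q, (-d,  d, 0))]

def moment(f):
    return sum(qi * f(*r) for qi, r in charges)

Q_xy   = moment(lambda x, y, z: x * y)                   # 4*q*d**2, the target
Q_xz   = moment(lambda x, y, z: x * z)                   # 0
Q_yz   = moment(lambda x, y, z: y * z)                   # 0
Q_x2y2 = moment(lambda x, y, z: x**2 - y**2)             # 0
Q_20   = moment(lambda x, y, z: x**2 + y**2 - 2 * z**2)  # 0 (charges sum to 0)
print(Q_xy, Q_xz, Q_yz, Q_x2y2, Q_20)
```

The same check, with the appropriate coordinate function, works for each of the other four configurations.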
I want to use the align environment inside a list, but the equation shouldn't be centered with respect to the page width; it should be centered with respect to the indented block of the list. A minimal example:

\documentclass[12pt,a4paper]{amsart}
\usepackage[utf8]{inputenc}
\begin{document}
Some text..
\begin{itemize}
  \item Very important point:
  \begin{enumerate}
    \item Yet another very important point substantiated by a long equation
    \begin{align*}
      x^2+y^2+5\int_{10003}^{10033455}\sin x\;dx+\tan^2(74638263x^2)=1.
    \end{align*}
  \end{enumerate}
\end{itemize}
\end{document}

I want to have the long equation centered under point (1).
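One possible workaround (a sketch, not necessarily the canonical fix): since amsart centers displays on the full text width, you can box the display in a minipage of the current line width, so the equation is centered relative to the indented block:

```latex
\item Yet another very important point substantiated by a long equation
  \begin{minipage}{\linewidth}
    \begin{align*}
      x^2+y^2+5\int_{10003}^{10033455}\sin x\;dx+\tan^2(74638263x^2)=1.
    \end{align*}
  \end{minipage}
```

Inside a list, \linewidth is the reduced line width of the list item, so the align* inside the minipage is centered under point (1) rather than on the page.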
Let a function $f(x)$ be continuous at some point $x_0$ with $f(x_0) \ne 0$. Prove that there exists a number $C > 0$ and a neighbourhood of $x_0$ such that for all $x$ in that neighbourhood the following inequality holds: $$ |f(x)| \ge C $$ The problem statement says that $f(x_0) \ne 0$, so it is either greater than $0$ or less than $0$. Let's consider the case $f(x_0) > 0$. In that case, by continuity of $f(x)$ we know: $$ \lim_{x\to x_0} f(x) = f(x_0) > 0 $$ Or in other words: $$ \forall \epsilon > 0\ \exists \delta_\epsilon > 0\ \forall x: |x-x_0| < \delta_\epsilon \implies |f(x) - f(x_0)| < \epsilon $$ Since $f(x_0) \ne 0$ we may let $\epsilon = {f(x_0)\over 2}$. In that case: $$ |f(x) - f(x_0)| < \epsilon \\ |f(x) - f(x_0)| < { f(x_0)\over 2 } \\ -{ f(x_0)\over 2 } < f(x) - f(x_0) < { f(x_0)\over 2 }\\ 0<{f(x_0)\over 2} < f(x) < {3f(x_0)\over 2} $$ So we may now choose any $C \in \left(0, {f(x_0)\over 2}\right)$, which implies $|f(x)| \ge C$ on the neighbourhood. Similar reasoning applies to the case $f(x_0) < 0$ (or apply the above to $-f$). I'm not sure my reasoning above is valid, so I would like to kindly ask for verification, or for a correct proof if the above makes no sense. Thank you!
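The argument can be illustrated numerically; here is a small sketch with $f = \cos$ and $x_0 = 1$ (my choice of example), checking that $|f(x)| \ge C = |f(x_0)|/2$ on a suitable neighbourhood:

```python
import numpy as np

# f(x) = cos(x), x0 = 1, so f(x0) ≈ 0.5403 != 0.
f = np.cos
x0 = 1.0
C = abs(f(x0)) / 2            # the bound C = |f(x0)|/2 from the proof

# Since |cos(x) - cos(x0)| <= |x - x0|, delta = C works for epsilon = C.
delta = C
xs = np.linspace(x0 - delta, x0 + delta, 10001)
print(np.min(np.abs(f(xs))) >= C)   # True: |f| >= C on the neighbourhood
```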
I have a bilinear functional that is bounded from below. I try to approximate the minimum by an ansatz function that is a linear combination of independent functions from the proper function space, and I obtain an expression that is bilinear in the coefficients. Using the stationarity condition (all derivatives of the functional w.r.t. the coefficients equal to zero) I get a set of $n$ linear homogeneous equations in the $n$ coefficients. Now, instead of directly attempting to solve the equations for the coefficients, I look at the secular determinant, which must be zero, since otherwise no nontrivial solution exists. This "characteristic polynomial" directly yields all permissible approximation values of the functional from my linear ansatz, avoiding the necessity of solving for the coefficients. I have trouble formulating the question precisely, but it strikes me that a direct solution of the equations can be circumvented, with the values of the functional obtained directly from the condition that the determinant is zero. I wonder if there is something deeper in the background, so to say a more general principle. If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome and satisfies digitsum(z) = digitsum(x). > Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel. (Translation: It is well known that Paul du Bois-Reymond was the first to demonstrate the existence of an everywhere-continuous function whose Fourier series diverges at a point. Afterwards, Hermann Amandus Schwarz gave a simpler example.)
It's discussed very carefully (though no formula is explicitly given) in my favorite introductory book on Fourier analysis, Körner's Fourier Analysis; see pp. 67–73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!
I don't know what your "oracle" is, but without further information what you are asking is whether the following language is decidable: $L:=\{ (t,u)\in\Lambda\times\Lambda \mathrel{|} t \rightarrow^* u, u \textrm{ normal}\}$ (with $\Lambda$ being the set of $\lambda$-terms). This is obviously not the case: for every Turing machine $M$, there is a $\lambda$-term $t_M$ (effectively computable from the description of $M$) such that $t_M$ reduces to a fixed normal form $u$ if $M$ terminates on the empty string, and has no normal form otherwise; therefore, if $L$ were decidable, we could decide the halting problem for $M$ on empty input by asking whether $(t_M,u)\in L$. If your "oracle" gives more information, then of course the situation may well be different. For instance, if the oracle also gives an upper bound on the number of reduction steps needed to reach the purported normal form, then it is obviously decidable whether the oracle's answer is correct. If you are actually asking about proof-carrying code (as D.W. suggests), please modify your question accordingly and I'll be happy to modify/remove my answer.
It is not clear to me what you meant by "I measured the size of the image on the surface of the lens". Your calculated value of approximately 2 was obtained on the assumption that the object was in the focal plane of the convex lens of focal length $f = 12$ cm. If the height of the object is $h$, the best the naked eye can do is to view the object at the least distance of distinct vision $D$, which is generally taken to be $25$ cm. The angular magnification is then $\frac{25}{12} \approx 2$. The highest magnification is obtained when the final virtual image is at the least distance of distinct vision, which would require the object to be approximately $8$ cm from the lens. This gives an angular magnification of $\frac{25}{8} \approx 3$. To measure the magnification you need to set your lens $12$ cm from a grid and observe the image of the grid through the lens, which is formed at infinity. At the same time you need to observe another grid which is $25$ cm from the other eye. This is very difficult to do. Much easier is setting up the lens so that the final image is at the near point, and I have done a simple experiment which produced a surprisingly good result. I found a hand magnifier whose focal length was approximately $5$ cm and set it up about $4$ cm from a grid, so that the virtual image would be about $25$ cm from the lens. I then put another grid $25$ cm from the lens, as shown in the photograph. What was pleasing was that the iPhone simultaneously brought into focus the grid viewed through the lens and the grid $25$ cm below the lens. Note that the grid $4$ cm from the lens was out of focus and "bigger" than the grid $25$ cm from the lens; if I had used that as my direct-view reference grid, the magnification found would have been in error. $10$ magnified small squares were equal to $63$ unmagnified small squares, which gives a magnification of approximately $6$, not bad when compared with the theoretical value of $\frac{25}{4} \approx 6$.
So perhaps it is worth having another go at measuring the magnification of your $12$ cm lens, noting that it is not only the focal length but also the optical configuration that determines the magnification. Later: the magnification $M$ of a magnifying glass is defined as $$M = \dfrac{\text{angle subtended by the image of the object when 25 cm from the lens}}{\text{angle subtended by the object when 25 cm from the naked eye}} = \dfrac {\alpha '}{\alpha}$$ The HyperPhysics article Simple Magnifier gives some more theory.
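For reference, the two standard magnifier formulas used above can be checked in a few lines (assuming $D = 25$ cm; the function names are my own):

```python
# Angular magnification of a simple magnifier, with the least distance
# of distinct vision D = 25 cm.
D = 25.0

def M_relaxed(f_cm):        # final image at infinity: M = D / f
    return D / f_cm

def M_near_point(f_cm):     # final image at the near point: M = 1 + D / f
    return 1 + D / f_cm

print(M_relaxed(12))        # ~2, the value quoted for the 12 cm lens
print(M_near_point(5))      # 6, matching the hand-magnifier experiment
```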
So, I created a planet that has a civilization that is beyond advanced. The planet's radius is 10x the radius of Earth, and it has the same density as Earth. What issues would I have with the gravity? The surface gravity would be 10 times larger than Earth's. Since mass grows with the cube of the radius and gravity falls off with the square, your planet would have 1000 Earth masses and a surface gravity of 10g, assuming the same density. And since your planet is very large, its uncompressed density would have to be much lower than Earth's. I hope that you didn't plan to have humans there. In general you can't have more than 1.6 Earth radii if you want an Earth-like planet: planets larger than that tend to keep their hydrogen and become gas giants, sort of mini-Neptunes, though the Universe is large and there might be exceptions. If you want more handwavium you can use a Chthonian planet that somehow got its atmosphere stripped and, after some collision, moved back into the habitable zone. Whatever you decide to use, 10g is way too much gravity. Surface gravity $\hat{g}$ is a function of the mass $M$ and radius $r$ of the planet: $$\hat{g} = \frac{G\cdot M}{r^2},$$ where $G$ is the universal gravitation constant $6.67\times10^{-11}\,\frac{\text{N}\cdot\text{m}^2}{\text{kg}^2}$. If you assume your planet is in hydrostatic equilibrium (a good assumption for any planet with noticeable surface gravity), then mass is in turn a function of radius and density $\rho$: $$M = \rho\frac{4}{3}\pi r^3.$$ Put these together and you get: $$\hat{g} = \frac{4}{3}\pi G\rho r.$$
The radius of Earth is 6371 km and its density is 5515 kg/m$^3$, so $$ \hat{g}_{earth} = \frac{4}{3}\pi \left(6.67\times10^{-11}\,\frac{\text{N}\cdot\text{m}^2}{\text{kg}^2}\right) \left(5515 \,\frac{\text{kg}}{\text{m}^3}\right) \left(6\,371\,000\ \text{m}\right) \approx 9.8 \,\frac{\text{m}}{\text{s}^2}.$$ Surface gravity scales linearly with radius and density: if you double your radius but want surface gravity to stay the same, you must halve the density of your planet. Example: if Earth had the density of the Moon (3348 kg/m$^3$), its density would be 0.607 of its current value, so its radius would have to grow by a factor of 1/0.607 = 1.65 to give the same surface gravity; the new radius would be about 10500 km. Size and density reference: from Wikipedia, here is a chart with both size and density for many solar system objects. You can read about what these objects are made of (iron, rocks, ice, hydrogen, etc.) to find out what a reasonable density would be. Use a spreadsheet, plug in the equation above, and you can calculate surface gravity for all sorts of fantastical worlds.
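The spreadsheet suggested above can just as well be a few lines of code; a sketch:

```python
import math

G = 6.67e-11  # universal gravitation constant, N·m²/kg²

def surface_gravity(radius_m, density_kg_m3):
    """g = (4/3)·pi·G·rho·r for a uniform-density sphere in
    hydrostatic equilibrium."""
    return (4.0 / 3.0) * math.pi * G * density_kg_m3 * radius_m

g_earth = surface_gravity(6_371_000, 5515)
print(g_earth)                  # ~9.8 m/s²

# The 10x-radius, same-density planet from the question:
g_big = surface_gravity(10 * 6_371_000, 5515)
print(g_big / g_earth)          # 10x Earth's surface gravity
```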
Does anyone know how to find the exact sum of $$ \sum_{n = 1}^{\infty} \frac{(-1)^{n + 1}}{n} \,? $$ I've only taken second-semester calculus and don't see how to go about computing this sum. The only way I know how to find the sum of an infinite series is if it is a geometric series. Using WolframAlpha, I found that the sum of this series is $\log(2)$.
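You can at least watch the value $\log 2$ emerge numerically from the partial sums; for an alternating series the error is bounded by the first omitted term:

```python
import math

# Partial sum of the alternating harmonic series sum (-1)^(n+1)/n.
s = 0.0
N = 100_000
for n in range(1, N + 1):
    s += (-1) ** (n + 1) / n

print(s)             # close to 0.693...
print(math.log(2))   # the exact sum, ln 2
# |s - ln 2| <= 1/(N+1), the first omitted term
```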
From my understanding of University Physics from Pearson (page ~20): the dot product multiplies "like" unit-vector terms and the cross product multiplies "unlike" unit-vector terms. So why does the cross product retain its unit vector ($\hat n$), but the dot product does not? Example: $A\cdot B = AB\cos(\theta) = $ the magnitude of $A$ times the component of $B$ that is parallel to $A$. So shouldn't the unit vector point in the direction of $A$? Do mathematicians drop unit vectors when stuff only goes in one direction and then call it a scalar?
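A concrete illustration of the asymmetry (a sketch using NumPy): the dot product returns a single number with no direction attached, while the cross product returns a vector perpendicular to both inputs:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

print(np.dot(a, b))               # 32.0 -- a scalar, no unit vector left
print(np.cross(a, b))             # [-3.  6. -3.] -- a vector
print(np.dot(np.cross(a, b), a))  # 0.0 -- perpendicular to a (and to b)
```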
Let $S^{[2]}$ be the Hilbert scheme of two points on a smooth projective surface $S$ (actually, right now I am particularly interested in del Pezzo surfaces). Let $B$ be the exceptional divisor of the Hilbert-Chow morphism $S^{[2]} \rightarrow \operatorname{Sym}^2 S$. Let $L$ be a divisor on $S$ and $\tilde L$ the corresponding "symmetrized" divisor on $S^{[2]}$, i.e. the set of $D \in S^{[2]}$ such that $\operatorname{Supp} D \cap \operatorname{Supp} L \neq \emptyset$. I would like to have a formula for $\chi(\tilde L+\frac{n}{2}B)$ in terms of $n$ and the invariants of $S$ and $L$. (We have a conjecture for del Pezzo surfaces from a different computation that we expect to match.) In, e.g., [Li, Qin, and Wang], they say that there is an algorithm to compute the cup product of any two cohomology classes for an arbitrary $S$. Furthermore, Boissiere has a paper about finding "universal formulas" for Chern classes of tangent bundles of $S^{[n]}$. If I understood the Chern classes of the tangent bundle, maybe I could understand the Todd class, and assuming I could convert $\tilde L + \frac{n}{2}B$ into generators suitable for the algorithm mentioned by L, Q, and W, maybe I could use GRR to compute the desired Euler characteristic. I've been looking at these and the surrounding papers, but I'm having a hard time getting a feel for what they can do. I don't mind putting in some time to learn some new stuff, but I'd like to be sure I'm going in the right direction first. Question: Is the plan described above realistic? If so, do you have any recommendations for where to start reading? Or is there a different strategy that looks better?
First, a streaming algorithm running in space $s(n)$ for a problem $C$ implies a communication protocol for $C$ using $s(n)$ bits of communication: Alice, on input $x$, runs the streaming algorithm on $x$ and then hands the configuration of the machine at the end of the stream to Bob ($s(n)$ bits), and then Bob, who has input $y$, runs the streaming algorithm starting from the provided configuration, and returns the result. Therefore, communication lower bounds imply streaming lower bounds. Second, consider the following problem, called INDEX: Alice gets an $n$-bit string $x$ and Bob gets an integer $i \in [n]$, and they want to determine the value of the $i$th bit of $x$, $x_i$. I'll quote the following result, which can be proved using an information-statistics argument. Claim 1: Any randomized (1/3-error, say) protocol for INDEX requires $\Omega(n)$ communication. Now we can reduce from INDEX to the triangle-freeness problem. In particular, given an instance $(x, i)$ of INDEX, we can build edge sets $E_A$ and $E_B$ over $V = L \cup R \cup \{s\}$, where $L = \{y_1, ..., y_{\sqrt{n}}\}$, $R = \{z_1, ..., z_{\sqrt{n}}\}$, and $s$ is some fixed extra node (so $2\sqrt{n} + 1 = O(\sqrt{n})$ nodes in total), with the property that the resulting graph $G$ contains a triangle iff $x_i = 1$. Additionally, Alice can compute $E_A$ herself and Bob can compute $E_B$ himself. To do this, pick some pairing function $p: [n] \to [\sqrt{n}] \times [\sqrt{n}]$ and say $p_L(k)$ is the first element in the pair corresponding to $p(k)$ and $p_R(k)$ is the second. We'll use this to map bit positions to edges. Then let $E_A = \{(y_{p_L(k)}, z_{p_R(k)})\ |\ x_k = 1\}$. Note that this is a bipartite graph, so it contains no triangles on its own. Now let $E_B = \{(s, y_{p_L(i)}), (s, z_{p_R(i)})\}$. $E_B$ forms a triangle with $E_A$ only when there is an edge between $y_{p_L(i)}$ and $z_{p_R(i)}$, which happens exactly when $x_i = 1$.
An $o(N^2)$ communication protocol for determining whether a graph on $N$ nodes is triangle-free readily contradicts the INDEX lower bound: Alice gets $x$ and Bob gets $i$; Alice and Bob compute $E_A$ and $E_B$ respectively with no communication, then use the triangle-freeness protocol as a subroutine on the graph above, which has $N = O(\sqrt{n})$ nodes, so the protocol uses only $o(\sqrt{n}^2) = o(n)$ communication, contradicting Claim 1.
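To make the reduction concrete, here is a small Python sketch of it (the names and the specific pairing function are my choices); for each index $i$ the constructed graph has a triangle exactly when $x_i = 1$:

```python
import math
from itertools import combinations

def reduction_edges(x, i):
    """Alice's and Bob's edge sets for the INDEX -> triangle reduction.
    x: bit string whose length n is a perfect square; i: index in [0, n).
    Nodes are ('y', a) and ('z', b) for a, b in [0, sqrt(n)), plus 's'."""
    n = len(x)
    m = math.isqrt(n)
    p = lambda k: (k // m, k % m)        # pairing function [n] -> [m] x [m]
    E_A = {(('y', p(k)[0]), ('z', p(k)[1])) for k in range(n) if x[k] == '1'}
    E_B = {('s', ('y', p(i)[0])), ('s', ('z', p(i)[1]))}
    return E_A, E_B

def has_triangle(E_A, E_B):
    # Brute-force triangle check on the combined (undirected) graph.
    edges = {frozenset(e) for e in E_A | E_B}
    nodes = {u for e in edges for u in e}
    return any(frozenset((a, b)) in edges and frozenset((b, c)) in edges
               and frozenset((a, c)) in edges
               for a, b, c in combinations(nodes, 3))

x = '0110'                               # n = 4, a 2x2 grid of possible edges
for i in range(4):
    E_A, E_B = reduction_edges(x, i)
    print(i, has_triangle(E_A, E_B) == (x[i] == '1'))   # True for every i
```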
I'm trying to edit multifile beamer presentation documents in AUCTeX. I use Emacs 24.3.1 / AUCTeX 11.87 / TeX 3.1415926 (TeX Live 2012/Debian) in Ubuntu 12.04. The simplest code that exhibits my issue is attached below.

Master file (mainFile.tex):

\documentclass[compress, 9pt, t, xcolor={usenames,dvipsnames,svgnames,table}]{beamer}
\mode<presentation>{\usetheme{Madrid}}
% include packages
\usepackage{graphicx}
\title[ShortTitle]{Beamer in AUCTeX}
\subtitle{Beamer in AUCTeX}
\author[WM]{\small WanderingMind}
\institute[World]{
  \begin{center}
    \includegraphics[scale=0.3]{figure1}
  \end{center}
  Department \\
  Organization \\ % Your institution for the title page
  \medskip
  \textit{email@email.com} % Your email address
}
\date{\today}
\begin{document}
\frame{\titlepage}
\section{Introduction}
\label{sec:Intro}
\input{sec1_Intro}
\end{document}

Slave file (sec1_Intro.tex):

\section{Introduction}
\begin{frame}{Introduction}
  \begin{block}{}
    \begin{figure}
      \begin{center}
        \includegraphics[width=0.35\textwidth,keepaspectratio]{figure1}
      \end{center}
    \end{figure}
    \begin{itemize}
      \item Beamer with AUCTeX trial.
      \item Issue with MultiFile.
      \item Schrodinger Eq. $i\hbar\frac{d\left|\psi(t)\right>}{dt}=H\left|\psi(t)\right>$.
    \end{itemize}
  \end{block}
\end{frame}

%%% Local Variables:
%%% mode: latex
%%% TeX-master: mainFile
%%% End:

However, I'm having two important issues. (1) I'm unable to compile the beamer presentation from one of the section-file buffers (in this case sec1_Intro.tex); AUCTeX throws an error stating ERROR: Undefined control sequence. I do include the local-variables definition at the end of the section files (see sec1_Intro.tex). Further, in order to avoid an automatic master file, I also set (setq-default TeX-master nil) in my init file, in addition to the other AUCTeX options described here. (2) The EPS figure (figure1.eps) does not show up in xdvi when performing the View (C-c C-v) command. The EPS figure is visible from other viewers such as evince.
But this is not helpful since I cannot perform inverse search from evince to emacs.
Prove that the following system is invertible. $$y(t) = \mathcal{T}\{x(t)\} = \int_{-\infty}^{3t} x(\tau) \,\mathrm d \tau$$ Answer: yes, the system is invertible. I need some hint here, not the full solution. In this problem of finding the inverse system (if it exists) it's intuitive to try differentiating the integral, as the system input/output is given by: $$y(t) = \mathcal{T}\{x(t)\} = \int_{-\infty}^{3t} x(\tau) d\tau$$ Before differentiating the integral, however, I would like to make this little change, which I assume is quite clear: $$y(t/3) = \int_{-\infty}^{t} x(\tau) d\tau$$ Then I differentiate both sides of the equality: $$ \frac{d}{dt} \left( y(t/3) \right) = \frac{d}{dt} \int_{-\infty}^{t} x(\tau) d\tau $$ which proceeds as: $$ \frac{1}{3} y'(t/3) = x(t), $$ which yields the inverse system. Note that differentiation of an integral with variable limits is known as the Leibniz rule and can be summarized as follows: given $$F(x) = \int_{\alpha(x)}^{\beta(x)} g(x,t) dt$$ then $$ \frac{d}{dx} F(x) = \frac{d}{dx} \int_{\alpha(x)}^{\beta(x)} g(x,t) dt $$ $$ F'(x) = g(x,\beta(x)) \beta'(x) - g(x,\alpha(x)) \alpha'(x) + \int_{\alpha(x)}^{\beta(x)} \frac{\partial}{\partial x} g(x,t)\, dt $$
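The derivation can be checked numerically; below is a sketch (with my own test signal) that builds $y$ from $x$ on a grid and recovers $x$ via $\frac{1}{3} y'(t/3)$:

```python
import numpy as np

# Numerical check: build y(t) = integral of x from -inf to 3t for a
# compactly supported test signal, then recover x(t) = (1/3) y'(t/3).
t = np.linspace(-10, 10, 200001)
dt = t[1] - t[0]
x = np.exp(-t**2) * np.sin(2 * t)      # arbitrary smooth test input

X = np.cumsum(x) * dt                  # running integral of x on the grid
y = np.interp(3 * t, t, X)             # y(t_k) = X evaluated at 3*t_k

dy = np.gradient(y, dt)                # y'(t) by central differences
x_rec = (1.0 / 3.0) * np.interp(t / 3, t, dy)

mask = np.abs(t) < 3                   # stay clear of the grid edges
err = np.max(np.abs(x_rec[mask] - x[mask]))
print(err)                             # small (discretization error only)
```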
Article Keywords: singular cohomology with local coefficients Summary: Let $\Cal Z$ be the set of all possible nonequivalent systems of local integer coefficients over the classifying space $BO(n_1)\times \dots \times BO(n_m)$. We introduce a cohomology ring $\bigoplus_{\Cal G\in \Cal Z} H^*(BO(n_1)\times \dots \times BO(n_m);\Cal G)$, which has the structure of a $\Bbb Z\oplus (\Bbb Z_2)^m$-graded ring, and describe it in terms of generators and relations. The cohomology ring with integer coefficients is contained as a subring. This result generalizes both the description of the cohomology with the nontrivial system of local integer coefficients of $BO(n)$ in [Č] and the description of the cohomology with integer coefficients of $BO(n_1)\times \dots \times BO(n_m)$ in [M]. References: [B] Brown E.H. Jr.: The cohomology of $BSO(n)$ and $BO(n)$ with integer coefficients. Proc. Amer. Math. Soc. 85 (1982), 283-288. MR 0652459 [Č] Čadek M.: The cohomology of $BO(n)$ with twisted integer coefficients. J. Math. Kyoto Univ. 39 2 (1999), 277-286. MR 1709293 | Zbl 0946.55009 [F] Feshbach M.: The integral cohomology rings of the classifying spaces of $O(n)$ and $SO(n)$. Indiana Univ. Math. J. 32 (1983), 511-516. MR 0703281 | Zbl 0507.55014 [M] Markl M.: The integral cohomology rings of real infinite dimensional flag manifolds. Rend. Circ. Mat. Palermo, Suppl. 9 (1985), 157-164. MR 0853138 | Zbl 0591.55007 [MS] Milnor J.W., Stasheff J.D.: Characteristic Classes. Princeton University Press and University of Tokyo Press, Princeton, New Jersey, 1974. MR 0440554 | Zbl 1079.57504 [T] Thomas E.: On the cohomology of the real Grassman complexes and the characteristic classes of the $n$-plane bundle. Trans. Amer. Math. Soc. 96 (1960), 67-89. MR 0121800
Let $B$ be a subset of an (additive) abelian group $F$. Then $F$ is free abelian with basis $B$ if the cyclic subgroup $\langle b \rangle$ is infinite cyclic for each $b \in B$ and $F=\sum_{b \in B} \langle b \rangle $ (direct sum). A free abelian group is thus a direct sum of copies of $\Bbb Z$. A typical element $x \in F$ has a unique expression $$x = \sum m_b b$$ where $m_b \in \Bbb Z$ and almost all $m_b$ (all but a finite number) are zero. I understand that by $F = \sum_{b \in B} \langle b \rangle$ (direct sum), they mean an external direct product of the infinite cyclic subgroups $\langle b \rangle$. But what does $x=\sum m_b b$ mean? Since $F = \sum_{b \in B} \langle b \rangle = \{ \dots,-b_1, 0, b_1, \dots\} \oplus \dots \oplus \{\dots, -b_n, 0, b_n, \dots\}$ and $x \in F$, then $x=(m_1b_1, \dots, m_nb_n)$. But each $b$ is an element of $F$, and therefore by the same logic shouldn't each $b$ be of the form $(j_1b_1, \dots, j_nb_n)$? And therefore $x=(m_1(j_1b_1, \dots, j_nb_n), \dots)$? What exactly am I misunderstanding here? This seems like a circular definition.
Complex Numbers cannot be Extended to an Algebra in Three Dimensions with Real Scalars. Theorem. Proof: Let $1$ and $i$ have their usual properties as they do as complex numbers: $\forall a: 1 a = a 1 = a$ and $i \cdot i = -1$, and suppose the algebra has basis $\{1, i, j\}$ over $\R$. Then: $i j = a_1 + a_2 i + a_3 j$ for some $a_1, a_2, a_3 \in \R$. Multiplying through by $i$: $$(1): \quad i \left({i j}\right) = \left({i i}\right) j = -j$$ and: $$\begin{aligned} i \left({a_1 + a_2 i + a_3 j}\right) &= a_1 i - a_2 + a_3 \, i j \\ &= a_1 i - a_2 + a_3 \left({a_1 + a_2 i + a_3 j}\right) \\ &= a_1 i - a_2 + a_1 a_3 + a_2 a_3 i + {a_3}^2 j. \end{aligned}$$ Combining with $(1)$: $$0 = \left({a_1 a_3 - a_2}\right) + \left({a_1 + a_2 a_3}\right) i + \left({ {a_3}^2 + 1}\right) j.$$ Since $1$, $i$, $j$ are linearly independent over $\R$, this implies ${a_3}^2 = -1$, which contradicts our supposition that $a_3 \in \R$. Hence the result, by Proof by Contradiction. $\blacksquare$
Night Side Maneuvers

We can minimize night light pollution (NLP) by turning the thinsat as we approach eclipse. The goal will be to perform one complete rotation of the thinsat per orbit, with it perpendicular to the sun on the day side of the earth, but turning it by varying amounts on the night side. Another advantage of the turn is that if thinsat maneuverability is destroyed by radiation or a collision on the night side, it will come out of the night side with a slow tumble that won't be corrected. The passive radar signature of the tumble will help identify the destroyed thinsat to other thinsats in the array, allowing another sacrificial thinsat to perform a "rendezvous and de-orbit". If the destroyed thinsat is in shards, the shards will tumble. The tumbling shards (or a continuously tumbling thinsat) will eventually fall out of the normal orbit, no longer get $J_2$ correction, and the thinsat orbit will "eccentrify", decay, and reenter. This is the fail-safe way the arrays will reenter, if all active control ceases.

Maneuvering Thrust and Satellite Power

Neglecting tides, the synodic angular velocity of the m288 orbit is $\omega = 4.3633\times10^{-4}$ rad/s $= 0.025$°/s.
The angular acceleration of a thinsat is 13.056e-6 rad/s² = 7.481e-4 °/s² with a sun angle of 0°, and 3.740e-4 °/s² at a sun angle of 60°. Because of tidal forces, a thinsat entering eclipse will start to turn towards sideways alignment with the center of the earth; it will come out of eclipse at a different velocity and angle than it went in with. If the thinsat is rotating at \omega and either tangential or perpendicular to the gravity vector, it will not turn while it passes into eclipse. Otherwise, the tidal acceleration is \ddot\theta = (3/2) \omega^2 \sin 2\delta, where \delta is the angle to the tangent of the orbit. If we enter eclipse with the thinsat not turning, and oriented directly to the sun, then \delta = 30°.

Three Strategies and a Worst Case Failure Mode

There are many ways to orient thinsats in the night sky, with tradeoffs between light power harvest, light pollution, and orbit eccentricity. If we reduce power harvest, we will need to launch more thinsats to compensate, which makes more problems if the system fails. I will present three strategies for light harvest and night light pollution. The actual strategies chosen will be a blend of those.

Tumbling

If things go very wrong, thinsats will be out of control and tumbling. In the long term, the uncontrolled thinsats will probably orient flat to the orbital plane, and reflect very little light into the night sky, but in the short term (less than decades), they will be oriented in all directions. This is equivalent to mapping the reflective area of front and back (2πR²) onto a sphere (4πR²). Light with intensity I shining onto a sphere of radius R is reflected uniformly in all directions. So if the sphere intercepts πR²I units of light, it scatters eIR²/4 units of light (e is albedo) per steradian in all directions.
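The per-steradian bookkeeping above can be sketched numerically (the function name and the sample numbers are illustrative, not from the article):

```python
import math

# Sketch of the tumbling-scatter estimate: a sphere of radius R intercepts
# pi*R^2*I of sunlight and, with albedo e, re-emits e*I*R^2/4 per steradian.
def scattered_per_steradian(I, R, e):
    intercepted = math.pi * R**2 * I       # light falling on the equivalent sphere
    return e * intercepted / (4 * math.pi) # = e * I * R**2 / 4

print(round(scattered_per_steradian(1366.0, 1.0, 0.5), 2))  # 170.75 W/sr
```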
While we will try to design our thinsats with low albedo (high light absorption on the front, high emissivity on the back), we can assume they will get sanded down and more reflective because of space debris, and they will get broken into fragments of glass with shiny edges, adding to the albedo. Assume the average albedo is 0.5, and assume the light scattering is spherical for tumbling. Source for the above animation: g400.c

Three design orientations

All three orientations shown are oriented perpendicularly in the daytime sky. Max remains perpendicular in the night sky, Min is oriented vertically in the night sky, and Zero is edge on to the terminator in the night sky. All lose orientation control and are tilted by tidal forces in eclipse - the compensatory thrusts are not shown. Min and Zero are accelerated into a slow turn before eclipse, so they come out of the eclipse in the correct orientation. In all cases, there will probably be some disorientation and sun-seeking coming out of eclipse, until each thinsat calibrates to the optimum inertial turn rate during eclipse. So, there may be a small bit of sky glow at the 210° position, directly overhead at 2am and visible in the sky between 10pm and 6am.

Max NLP: Full power night sky coverage, maximum night light pollution

The most power is harvested if the thinsats are always oriented perpendicular to the sun. During the half of their orbit into the night sky, there will be some diffuse reflection to the side, and some of that will land in the earth's night sky. The illumination is maximum along the equator. For the m288 orbit, about 1/6th of the orbit is eclipsed, and 1/2 of the orbit is in daylight with the diffuse (Lambertian) reflection scattering towards the sun and onto the day side of the earth. Only the two "horns" of the orbit, the first between 90° and 150° (6pm to 10pm) and the second between 210° and 270° (2am to 6am), will reflect light into the night sky.
The light harvest averages 83% around the orbit. This is the worst case for night sky illumination. Though it is tempting to run thinsats in this regime, extracting the maximum power per thinsat, it is also the worst case for eccentricity caused by light pressure, and the thinsats must be heavier to reduce that eccentricity.

Min NLP: Partial night sky coverage, some night light pollution

This maneuver will put some scattered light into the night sky, but not much compared to perpendicular solar illumination all the way into shadow. In the worst case, assume that the surface has an albedo of 0.5 (typical solar cells with an antireflective coating are less than 0.3) and that the reflected light is entirely Lambertian (isotropic) without specular reflections (which will all be away from the earth). At a 60° angle, just before shadow, the light emitted by the front surface will be 1366 W/m² × 0.5 (albedo) × 0.5 (cos 60°), and it will be scattered over 2π steradians, so the illumination per steradian will be 54 W/m²-steradian just before entering eclipse. Estimate that the light pollution varies from 0 W to 54 W between 90° and 150°, and that the average light pollution is half of 54 W, for 1/3 of the orbit. Assuming an even distribution of thinsat arrays in the constellation, that works out to an average of 9 W/m²-steradian for all thinsats in m288 orbit. The full moon illuminates the night side of the equatorial earth with 27 mW/m² near the equator. A square meter of thinsat at 6400 km distance produces 9 W / (6 400 000 m)², or 0.22 picowatts per m² on the ground per m² of thinsat. If thinsat light pollution is restricted to 5% of full moon brightness (1.3 mW/m²), then we can have 6000 km² of thinsats up there, at an average of 130 W/m², or about 780 GW of thinsats at m288. That is about a million tons of thinsats.
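A rough numeric sketch of the budget above (all inputs are the article's assumptions; rounding differs slightly from the quoted figures):

```python
import math

# Min NLP budget sketch. Inputs are the article's assumptions: 1366 W/m^2
# sunlight, albedo 0.5, Lambertian scattering over 2*pi sr, 6400 km distance,
# and a limit of 5% of the 27 mW/m^2 full-moon illumination.
per_sr_max = 1366 * 0.5 * math.cos(math.radians(60)) / (2 * math.pi)  # ~54 W/m^2-sr
avg_per_sr = (per_sr_max / 2) / 3      # half the peak, for 1/3 of the orbit: ~9

ground = avg_per_sr / 6.4e6**2         # ~0.22 pW on the ground per m^2 of thinsat
limit = 0.05 * 27e-3                   # ~1.3 mW/m^2 allowed
area_m2 = limit / ground               # ~6e9 m^2, i.e. ~6000 km^2
power_GW = area_m2 * 130 / 1e9         # at 130 W/m^2: roughly 800 GW
print(round(per_sr_max, 1), round(area_m2 / 1e6), round(power_GW))
```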
The orientation of the thinsat over a 240 minute synodic m288 orbit at the equinox is as follows, relative to the sun:

time (min) | orbit degrees | rotation rate | sun angle   | Illumination | Night Light
0 to 60    | 0° to 90°     | 0 ~\omega     | 0°          | 100%         | 0 W
60 to 100  | 90° to 150°   | 1 ~\omega     | 0° to 60°   | 100% to 50%  | 0 W to 54 W
100 to 140 | 150° to 210°  | 4 ~\omega     | 60° to 300° | Eclipse      | 0 W
140 to 180 | 210° to 270°  | 1 ~\omega     | 300° to 0°  | 50% to 100%  | 54 W to 0 W
180 to 240 | 270° to 0°    | 0 ~\omega     | 0°          | 100%         | 0 W

The angular velocity change at 0° takes (0.025 °/s) / (7.481e-4 °/s²) = 33.4 seconds, and during that time the thinsat turns 0.42°, with negligible effect on thrust or power. The angular velocity change at 60° takes (0.075 °/s) / (3.74e-4 °/s²) = 200.5 seconds, and during that time the thinsat turns 12.5°, perhaps from 53.7° to 66.3°, reducing power and thrust from 59% to 40%, a significant change. The actual thrust change versus time will be more complicated (especially with tidal forces), but however it is done, the acceleration must be accomplished before the thinsat enters eclipse. The light harvest averages 78% around the orbit.

Zero NLP: Partial night sky coverage, no night light pollution

In this case, in the night half of the sky the edge of the thinsat is always turned towards the terminator. As long as the thinsats stay in control, they will never produce any nighttime light pollution, because the illuminated side of the thinsat is always pointed away from the night side of the earth. The average illumination fraction is around 68%.
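The two burn times quoted in the Min NLP discussion above (33.4 s and 200.5 s) follow from dividing the required rate change by the angular acceleration; a quick check:

```python
# Check of the burn-time arithmetic: time = (rate change) / (angular
# acceleration), plus the angle swept during the burn. Figures from the text:
# omega = 0.025 deg/s; acceleration 7.481e-4 deg/s^2 at 0 deg sun angle,
# 3.74e-4 deg/s^2 at 60 deg.
omega = 0.025  # deg/s

def burn(d_omega, accel, base_rate):
    t = d_omega / accel                        # burn duration, seconds
    turn = base_rate * t + 0.5 * accel * t**2  # degrees turned during the burn
    return t, turn

t0, turn0 = burn(1 * omega, 7.481e-4, 0.0)     # 0 -> 1 omega at 0 deg sun angle
t60, turn60 = burn(3 * omega, 3.74e-4, omega)  # 1 -> 4 omega at 60 deg sun angle
print(round(t0, 1), round(turn0, 2))    # 33.4 0.42
print(round(t60, 1), round(turn60, 1))  # 200.5 12.5
```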
The orientation of the thinsat over a 240 minute synodic m288 orbit at the equinox is as follows, relative to the sun:

time (min) | orbit degrees | average rotation rate            | sun angle   | Illumination | Night Light
0 to 60    | 0° to 90°     | 0 \omega                         | 0°          | 100%         | 0 W
60 to 100  | 90° to 150°   | 1.5 \omega                       | 0° to 90°   | 100% to 0%   | 0 W
100 to 140 | 150° to 210°  | 3 \omega (start at 3.333 \omega) | 90° to 270° | Eclipse      | 0 W
140 to 180 | 210° to 270°  | 1.5 \omega                       | 270° to 0°  | 0% to 100%   | 0 W
180 to 240 | 270° to 0°    | 0 \omega                         | 0°          | 100%         | 0 W

Pedants and thinsat programmers take note: the actual synodic orbit period is 240 minutes and 6.57 seconds long; that results in 2190.44 rather than 2191.44 sidereal orbits per year, accounting for the annual apparent motion of the sun around the sky.

The light harvest averages 67% around the orbit. Why would a profit-maximizing operator settle for 67% when 83% was possible? Infrared-filtering thinsats reduce launch weight and can use the Zero NLP flip to increase the minimum temperature of a thinsat during eclipses. An IR-filtering thinsat in maximum night light pollution mode will have the emissive backside pointed at 2.7 K space when it enters eclipse; the thinsat temperature will drop towards 20 K if it cannot absorb the 64 W/m² of 260 K black body radiation reaching it from the earth through the 3.5 μm front side infrared filter. The thinsat will become very brittle at those temperatures, and the thermal shock could destroy it. If the high thermal emissivity back side is pointed towards the 260 K earth, the temperature will drop to 180 K - still challenging, but the much higher thermal mobility may heal atomic-scale damage.

Details of the Zero NLP maneuver

In the night sky, assuming balanced thrusters, only tidal forces act on the thinsat: \ddot\theta = -(3/2) \omega^2 \sin(2\theta).
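A minimal numerical sketch of this tidal tumble (the 30° initial tilt and the 10/9ths initial-rate factor are the article's figures; the semi-implicit Euler step and the symmetry check are my own illustration):

```python
import math

# Integrate theta'' = -(3/2) omega^2 sin(2 theta) from a 30 deg tilt until
# 150 deg, starting at 10/9ths of the average earth-relative rate (120 deg
# over a ~40 min eclipse). Step size and checks are illustrative.
omega = 4.3633e-4                    # rad/s, m288 synodic rate
theta = math.radians(30)
avg_rate = math.radians(120) / 2400  # average earth-relative rate over eclipse
rate = (10 / 9) * avg_rate
dt, t = 0.1, 0.0
while theta < math.radians(150):
    rate += -1.5 * omega**2 * math.sin(2 * theta) * dt
    theta += rate * dt
    t += dt

# The tidal potential is symmetric about 90 deg, so the thinsat slows toward
# 90 deg and speeds back up: the exit rate nearly equals the entry rate.
print(round(rate / avg_rate, 2))  # ~1.11, i.e. back to ~10/9
```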
Integrating numerically from the proper choice of initial rotation rate (10/9ths of the average, due to tidal deceleration), we go from 30 degrees "earth relative tilt" at the beginning of eclipse to 150 degrees tilt at the end of eclipse, with the night side of the earth sending 260 K infrared at the back side throughout the maneuver. The entire disk of the earth is always in view, and always emits the same amount of infrared to a given radius, but the Lambertian angle of absorption changes.

[Figures: "Power versus angle" and "Night light pollution versus hour" - night light pollution for 1 Terawatt of thinsats at m288; mirror the graph for midnight to 6am.]

Some light is also put into the daytime sky (early morning and late afternoon), but it will be difficult to see in the glare of sunlight. The Zero NLP option puts no light in the night sky, so that curve is far below the bottom of this graph. Source for the above two graphs: nl02.c

Note to Astronomers

Yes, we will slightly occult some of your measurements, though we won't flash your images like Iridium. We will also broadcast our array ephemerides, and deliver occultation schedules accurate to the microsecond, so you can include that in your luminosity calculations. Someday, server sky is where you will perform those calculations, rather than in your own coal-powered (and haze-producing) computer.
Energy of the system is the quantity that is conserved as a result of invariance with respect to translations in time, i.e. the laws that govern your system are the same irrespective of the time (the actual behaviour may depend on time, given suitable initial conditions). I like this definition since it focuses on what is important: not some magic formula, but the general (usually desired) property of the system. What follows relies on Lagrangian mechanics; in case you are lost, I suggest looking into Goldstein's "Classical Mechanics". Kinetic energy is the type of energy that a 'free' particle would have. So let's look at the Lagrangian for a free (non-relativistic) particle with mass $m$: $L=\frac{1}{2}m\left(\frac{d\mathbf{r}}{dt}\right)^2$, where $\mathbf{r}$ is the position of the particle, and $t$ is time. For a general Lagrangian $L=L\left(t,\,\mathbf{r},\,\frac{d\mathbf{r}}{dt}\right)$: $\left(\frac{\partial L}{\partial t}\right)_{\mathbf{r},\,\frac{d\mathbf{r}}{dt}}=\frac{dL}{dt}-\frac{d\mathbf{r}}{dt}.\left(\frac{\partial L}{\partial \mathbf{r}}\right)_{t,\frac{d\mathbf{r}}{dt}}-\frac{d^2\mathbf{r}}{dt^2}.\left(\frac{\partial L}{\partial\left( d\mathbf{r}/dt\right)}\right)_{t,\mathbf{r}}=\frac{d}{dt}\left(L-\frac{d \mathbf{r}}{dt}.\left(\frac{\partial L}{\partial \left(d\mathbf{r}/dt\right)}\right)_{t,\mathbf{r}}\right)$ The second step requires use of the Euler-Lagrange (E-L) equations. Thus the quantity in the brackets on the right is conserved if $\left(\frac{\partial L}{\partial t}\right)_{\mathbf{r},\,\frac{d\mathbf{r}}{dt}}=0$, i.e. if the system does not explicitly depend on time (and is thus invariant under time-translations).
This is true for the free-particle Lagrangian, where the conserved quantity is: $L-\frac{d \mathbf{r}}{dt}.\left(\frac{\partial L}{\partial \left(d\mathbf{r}/dt\right)}\right)_{t,\mathbf{r}}=\frac{1}{2}m\left(\frac{d\mathbf{r}}{dt}\right)^2-\frac{d\mathbf{r}}{dt}.m\frac{d\mathbf{r}}{dt}=-\frac{1}{2}m\left(\frac{d\mathbf{r}}{dt}\right)^2$ which is minus the energy of the system (the overall sign is a matter of convention). So let the energy of the free particle be $U=\frac{1}{2}m\left(\frac{d\mathbf{r}}{dt}\right)^2$. If the energy is changing, the change in energy (work) is given by $\Delta U=\int^{t_2}_{t_1}\frac{dU}{dt} dt=\int^{t_2}_{t_1} \frac{d\mathbf{r}}{dt}.m\frac{d^2\mathbf{r}}{dt^2} dt= \int^{\mathbf{r}_2}_{\mathbf{r}_1} m\frac{d^2\mathbf{r}}{dt^2}.d\mathbf{r}=\int^{\mathbf{r}_2}_{\mathbf{r}_1} \mathbf{F}.d\mathbf{r}$ where $\mathbf{F}$ is the force, defined as the rate of change of momentum, which for a free particle is $\mathbf{F}=m\frac{d^2\mathbf{r}}{dt^2}$.
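The free-particle computation above is easy to verify symbolically (a one-dimensional sympy sketch, purely illustrative):

```python
# Symbolic check of the free-particle result: the conserved Noether quantity
# L - v*(dL/dv) is minus the kinetic energy.
from sympy import symbols, Rational, diff, simplify

m, v = symbols('m v', positive=True)
L = Rational(1, 2) * m * v**2      # free-particle Lagrangian (1D for brevity)
conserved = L - v * diff(L, v)
print(simplify(conserved))  # -m*v**2/2
```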
The problem is that any statement that a dimensionful physical constant has changed is meaningless if you do not state what it is you are keeping constant. When you do state it, and you work it out in full, it turns out you're measuring the change of a dimensionless physical constant. Suppose, for example, that I claim "the speed of light has increased by 10% since last year". My experimental evidence for this is simple: I have a ruler and a clock, and every day I use them (the exact same ruler and clock) to measure the speed of light. I then calculate the time $T$ - as a quantity, in seconds, given by my clock - taken by light to cross the ruler, and I divide it into the length $L$ of the ruler. Over a year, this quantity $L/T$ increases by 10%. For the metrologically minded, let's take a look at my experimental apparatus: My ruler consists, essentially, of a bunch of atoms in a linear arrangement. For example, I could make my ruler by putting a bunch of hydrogen atoms in a line, separated from each other by one Bohr radius (which can be experimentally determined, at least in principle, as the distance at which the ground state charge density decreases to $e^{-2}$ of the maximum). If my ruler is 1m long, it will have $N=1\:\mathrm m/a_0\approx1.89\times10^{10}$ atoms in it. Moreover, since it's the same ruler, and because I'm lazy, I don't recalibrate it to the SI every day before I measure (which would have catastrophic consequences for my result!). Instead, what I do before I measure is look at all the bonds to ensure they're still one Bohr radius long, and re-count the atoms to ensure the ruler still has the exact same number $N$ of atoms. I should also note that, despite sounding esoteric, this ruler is close to the best possible model for an actual physical ruler made of platinum or whatever, since its length is governed by the same physics (nonrelativistic QM plus electrostatics, a.k.a. chemistry) that governs the size of all everyday objects.
It's been distilled into a form that's clearly definable, but the essence of the definition is along the lines of "this ruler here, so long as the ends don't get worn out and it doesn't get bent and there's no monkey business with thermal expansion or whatever". My clock is a caesium clock as per the SI second. That is, it has a bunch of caesium atoms which are placed in a specific state (technically, a coherent superposition of their ground state and the first hyperfine excited state) which emits microwave radiation. The clock measures this radiation and counts the number of maxima; every 9,192,631,770 cycles it increases the counter by 1 second. With this apparatus, then, I observe the measured speed of light to change. What does this result mean? The easiest interpretation is simple: The speed of light has changed. Light simply travels faster than it did last year. That's pretty mysterious, but then the result is pretty weird too. However, there are other possible interpretations for the result. For example, the size of the hydrogen atoms might have changed. That is, the speed of light is still the same, but for some mysterious reason all my hydrogen atoms are 10% bigger than they were. This is not that crazy at all: the Bohr radius is determined by the Schrödinger equation under the electrostatic force, so the constants $\hbar,m_e$ and $e^2$ determine $a_0$ to be $a_0=\hbar^2/m_ee^2$. If any of those changed - say, all electrons are suddenly 10% lighter, which is as mysterious as light being faster - then I'd see exactly the same result I do observe. It's important to note that this is the reading of my result that would come from a strict application of the current SI system of units, since the SI meter is defined as the distance covered by light in 1/299,792,458 of a second as given by my clock. However, if my ruler has "shrunk" then so have I (as I'm made of atoms) and so has every piece of equipment in my lab and elsewhere on Earth.
I could then speak of the "mysterious growth of the SI meter" equally meaningfully. Alternatively, the caesium atoms could be getting progressively more sluggish. They could simply no longer be giving out microwave peaks and troughs as fast as they used to (or as I'd see if I teleported myself to the past), so even though the speed of light is the same, the length of the ruler is the same, and the traversal time is the same, the counter on the clock now reads 10% fewer seconds than it would have a year ago. These three explanations are all equally mysterious, or equally reasonable, as each other. Moreover, from my experiment I have no way to test which is right, as I have no way of teleporting myself back to the past to compare my (possibly) engorged hydrogens or my (allegedly) sluggish caesiums with their previous versions, in the same way that I cannot set up a race between my (supposedly) faster light and the light of yesteryear. It is clear that something in physics has indeed changed, but you can pin the change on different factors depending on your perspective. As it turns out, of course, the thing in physics which has changed, and which is the only thing that can unambiguously be said to have changed, is the fine structure constant. This $\alpha$, as any atomic physicist will tell you, is one over the speed of light in atomic units, which is exactly what we're measuring. More specifically,$$\alpha=\frac{e^2}{\hbar c},$$and $e^2/\hbar$ is easily seen to be the atomic unit of velocity. This abstract 'atomic unit' is of course a very physical quantity: it is, within a constant, calculable factor, the mean velocity of any electron around its atom. Now, as it happens, this crazy experiment I proposed is indeed being performed.
The actual experimental realization does not rely on meter-long rulers or on external clocks, but relies instead on the natural length and time scales of ytterbium ions, which of course are completely determined by the same length and time scales as all atomic physics. Atoms, it turns out, have natural built-in velocimeters for measuring the speed of light: their kinetic energies $p^2/2m$ change by relativistic corrections on the order of $p^4/8m^3c^2$, and the different electronic and spin currents have magnetic interactions on the order of $p/c$. Different states respond differently, and even in different directions, to these perturbations, so it's possible to monitor $c$ by observing the precise location of the different energy levels. (For more information, see NPL | Physics | PRL | arXiv.) There is, so far, no observed change in $\alpha$. But if there is, we simply won't be able to tell whether a change in $\alpha=e^2/\hbar c$ is because of a change in the speed of light $c$, the size $\hbar$ of the fundamental phase-space cell, or the strength $e^2$ of the electrostatic interaction, or of the finely-tuned joint changes of these constants that would implement the three interpretations stated above. Those individual changes have no meaning by themselves. Finally, some useful references for further reading are: How fundamental are fundamental constants? M.J. Duff, Contemp. Phys. 56 no. 1, 35-47 (2014), arXiv:1412.2040, which supersedes arXiv:hep-th/0208093 (Comment on time-variation of fundamental constants, 2002); and Trialogue on the number of fundamental constants, M.J. Duff, L.B. Okun and G. Veneziano, J. High Energy Phys. 03 (2002) 023, arXiv:physics/0110060.
As far as I know, it is not possible to drive Color Ramp or Mapping nodes from another socket (but I am not super experienced with drivers). However, I have managed to re-create the color ramp and mapping nodes with math nodes, which you can plug inputs into directly.

Color Ramp

Unfortunately there is no way to create a group node exactly like the Color Ramp, with addable and removable color swatches. To get around this I have created a node with two movable swatches; you can then combine multiple of these nodes together to get the functionality of multiple swatches.

The theory: The two input colors are plugged directly into a Mix RGB node. The two Pos inputs need to be sent through a function and plugged into the mix factor. The position of the first swatch needs to be mapped to $0$ to get just the first color out of the mix node. The position of the second swatch needs to be mapped to $1$ to get just the second color out of the mix node.

The math: Here's a graph to visualize what we are trying to do: on the x-axis is the input factor of the color ramp, on the y-axis is the desired output; $a$ and $b$ are the positions of the two swatches. With some simple algebra we can find the equation of the line to be: $$y = \frac{1}{b-a}x + \frac{a}{a-b}$$ The math nodes below simply replicate this equation. The final Add node also has Clamp checked to clamp the output to the interval $[0,1]$, which is what the mix node accepts.

Mapping node

The mapping node has three functions: it can Translate (move), Rotate, and Scale the texture. Since often not all of these are needed, I will create separate group nodes for each of these functions. If you don't have a decent understanding of how texture coordinates work, you may want to read my answer here.
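For reference, the color ramp math above behaves like this Python sketch (the function names are mine, not Blender's):

```python
# Two-swatch ramp: position a maps to factor 0, position b maps to 1,
# clamped to [0, 1] exactly as the clamped Add node does.
def ramp_factor(x, a, b):
    y = x / (b - a) + a / (a - b)   # the line y = (x - a) / (b - a)
    return min(max(y, 0.0), 1.0)    # Clamp, as on the final Add node

def mix_rgb(c1, c2, fac):
    # linear interpolation, like a Mix RGB node in Mix mode
    return tuple(u + fac * (v - u) for u, v in zip(c1, c2))

black, white = (0, 0, 0), (1, 1, 1)
fac = ramp_factor(0.4, 0.2, 0.6)  # halfway between the swatches
print(tuple(round(c, 3) for c in mix_rgb(black, white, fac)))  # (0.5, 0.5, 0.5)
```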
The basic theory of manipulating texture coordinates is to separate the components of the vector with a Separate XYZ node, manipulate the components individually, then combine them back into a vector with a Combine XYZ node.

Mapping Node - Translation

Translation is easy: to translate $V$ to $V^\prime$, just add the desired amounts to each of the components of $V$. In the above node setup I actually used Subtract nodes instead of Add so positive values will move the texture to the right instead of the left.

Mapping Node - Scaling

Scaling is the same as translating, just multiply the individual coordinates by the scale factor instead of adding.

Mapping Node - Rotation

Rotation is a little more complicated and I won't take the time to derive the equations here, but they are pretty standard pre-calc formulas. Here are the formulas for rotation. Around the x-axis $$\begin{aligned}x^\prime &= x\\y^\prime &= y\cos{\theta} - z\sin{\theta}\\z^\prime &= y\sin{\theta} + z\cos{\theta}\end{aligned}$$ Around the y-axis $$\begin{aligned}x^\prime &= x\cos{\theta} - z\sin{\theta}\\y^\prime &= y\\z^\prime &= x\sin{\theta} + z\cos{\theta}\end{aligned}$$ Around the z-axis$$\begin{aligned}x^\prime &= x\cos{\theta} - y\sin{\theta}\\y^\prime &= x\sin{\theta} + y\cos{\theta}\\z^\prime &= z\end{aligned}$$ Note: the first two nodes in each of the rotation setups convert the degree input to radians, which is what the trig math nodes want. And finally, here's a .blend file with all the above node groups.
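The three rotation formulas can be sketched directly (plain Python; the function names are mine, not node names):

```python
import math

# The three rotation setups: separate (x, y, z), apply the standard rotation
# formulas, recombine. The degree input is converted to radians first, as the
# node groups do.
def rotate_x(v, deg):
    x, y, z = v
    t = math.radians(deg)
    return (x, y*math.cos(t) - z*math.sin(t), y*math.sin(t) + z*math.cos(t))

def rotate_y(v, deg):
    x, y, z = v
    t = math.radians(deg)
    return (x*math.cos(t) - z*math.sin(t), y, x*math.sin(t) + z*math.cos(t))

def rotate_z(v, deg):
    x, y, z = v
    t = math.radians(deg)
    return (x*math.cos(t) - y*math.sin(t), x*math.sin(t) + y*math.cos(t), z)

v = rotate_z((1.0, 0.0, 0.0), 90)
print(tuple(round(c, 6) for c in v))  # (0.0, 1.0, 0.0)
```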
[m0036_Wave_Equations_SourceFree_Lossless] Electromagnetic waves are solutions to a set of coupled simultaneous differential equations – namely, Maxwell’s Equations. The general solution to these equations includes constants whose values are determined by the applicable electromagnetic boundary conditions. However, this direct approach can be difficult and is often not necessary. In unbounded homogeneous regions that are “source free” (containing no charges or currents), a simpler approach is possible. In this section, we reduce Maxwell’s Equations to wave equations that apply to the electric and magnetic fields in this simpler category of scenarios. Before reading further, the reader should consider a review of Section [m0074_Fundamentals_of_Waves] (noting in particular Equation [m0074_eAcousticWaveEquation]) and Section [m0027_Wave_Equations_for_a_Transmission_Line] (wave equations for voltage and current on a transmission line). This section seeks to develop the analogous equations for electric and magnetic waves. We can get the job done using the differential “point” phasor form of Maxwell’s Equations, developed in Section [m0042_Phasor_Form_of_Maxwells_Equations_Differential]. Here they are: \[\nabla \cdot \widetilde{\bf D} = \widetilde{\rho}_v\] \[\nabla \times \widetilde{\bf E} = -j\omega\widetilde{\bf B}\] \[\nabla \cdot \widetilde{\bf B} = 0\] \[\nabla \times \widetilde{\bf H} = \widetilde{\bf J} + j\omega\widetilde{\bf D}\] In a source-free region, there is no net charge and no current, hence \(\widetilde{\rho}_v=0\) and \(\widetilde{\bf J}=0\) in the present analysis.
The above equations become \[\nabla \cdot \widetilde{\bf D} = 0\] \[\nabla \times \widetilde{\bf E} = -j\omega\widetilde{\bf B}\] \[\nabla \cdot \widetilde{\bf B} = 0\] \[\nabla \times \widetilde{\bf H} = +j\omega\widetilde{\bf D}\] Next, we recall that \(\widetilde{\bf D}=\epsilon\widetilde{\bf E}\) and that \(\epsilon\) is a real-valued constant for a medium that is homogeneous, isotropic, and linear (Section [m0007_Properties_of_Materials]). Similarly, \(\widetilde{\bf B}=\mu\tilde{\bf H}\) and \(\mu\) is a real-valued constant. Thus, under these conditions, it is sufficient to consider either \(\widetilde{\bf D}\) or \(\widetilde{\bf E}\) and either \(\widetilde{\bf B}\) or \(\widetilde{\bf H}\). The choice is arbitrary, but in engineering applications it is customary to use \(\widetilde{\bf E}\) and \(\widetilde{\bf H}\). Eliminating the now-redundant quantities \(\widetilde{\bf D}\) and \(\widetilde{\bf B}\), the above equations become \[\nabla \cdot \widetilde{\bf E} = 0\label{m0036_eGL3}\] \[\nabla \times \widetilde{\bf E} = -j\omega\mu\widetilde{\bf H}\label{m0036_eMFE3}\] \[\nabla \cdot \widetilde{\bf H} = 0\] \[\nabla \times \widetilde{\bf H} = +j\omega\epsilon\widetilde{\bf E}\label{m0036_eAL3}\] It is important to note that requiring the region of interest to be source-free precludes the possibility of loss in the medium. To see this, let’s first be clear about what we mean by “loss.” For an electromagnetic wave, loss is observed as a reduction in the magnitude of the electric and magnetic field with increasing distance. This reduction is due to the dissipation of power in the medium. This occurs when the conductivity \(\sigma\) is greater than zero because Ohm’s Law for Electromagnetics (\(\widetilde{\bf J}=\sigma\widetilde{\bf E}\); Section [m0010_Conductivity]) requires that power in the electric field be transferred into conduction current, and is thereby lost to the wave (Section [m0106_Power_Dissipation_in_Conducting_Media]). 
When we required \({\bf J}\) to be zero above, we precluded this possibility; that is, we implicitly specified \(\sigma=0\). The fact that the constitutive parameters \(\mu\) and \(\epsilon\) appear in Equations [m0036_eGL3]–[m0036_eAL3], but \(\sigma\) does not, is further evidence of this. Equations [m0036_eGL3]–[m0036_eAL3] are Maxwell’s Equations for a region comprised of isotropic, homogeneous, and source-free material. Because there can be no conduction current in a source-free region, these equations apply only to material that is lossless (i.e., having negligible \(\sigma\)). Before moving on, one additional disclosure is appropriate. It turns out that there actually is a way to use Equations [m0036_eGL3]–[m0036_eAL3] for regions in which loss is significant. This requires a redefinition of \(\epsilon\) as a complex-valued quantity. We shall not consider this technique in this section. We mention this simply because one should be aware that if permittivity appears as a complex-valued quantity, then the imaginary part represents loss. To derive the wave equations we begin with the MFE, Equation [m0036_eMFE3]. 
Taking the curl of both sides of the equation, we obtain \[\nabla \times \left(\nabla \times \widetilde{\bf E}\right) = \nabla \times \left(-j\omega\mu\widetilde{\bf H}\right) = -j\omega\mu\left(\nabla \times \widetilde{\bf H}\right)\label{m0036_eWE1}\] On the right we can eliminate \(\nabla \times \widetilde{\bf H}\) using Equation [m0036_eAL3]: \[-j\omega\mu\left(\nabla \times \widetilde{\bf H}\right) = -j\omega\mu\left(+j\omega\epsilon\widetilde{\bf E}\right) = +\omega^2\mu\epsilon\widetilde{\bf E}\] On the left side of Equation [m0036_eWE1], we apply the vector identity \[\nabla \times \nabla \times {\bf A} = \nabla\left(\nabla\cdot {\bf A}\right) -\nabla^2{\bf A}\] which in this case is \[\nabla \times \nabla \times \widetilde{\bf E} = \nabla\left(\nabla\cdot \widetilde{\bf E}\right) -\nabla^2\widetilde{\bf E}\] We may eliminate the first term on the right using Equation [m0036_eGL3], yielding \[\nabla \times \nabla \times \widetilde{\bf E} = -\nabla^2\widetilde{\bf E}\] Substituting these results back into Equation [m0036_eWE1] and rearranging terms we have \[\nabla^2\widetilde{\bf E} +\omega^2\mu\epsilon \widetilde{\bf E} = 0 \label{m0036_eWEE1}\] This is the wave equation for \(\widetilde{\bf E}\). Note that it is a homogeneous (in the mathematical sense of the word) differential equation, which is expected since we have derived it for a source-free region. It is common to make the following definition \[\boxed{\beta \triangleq \omega\sqrt{\mu\epsilon} }\label{m0036_eBetaDef}\] so that Equation [m0036_eWEE1] may be written \[\boxed{\nabla^2\widetilde{\bf E} +\beta^2 \widetilde{\bf E} = 0 }\label{m0036_eWEE2}\] Why go to the trouble of defining \(\beta\)? One reason is that \(\beta\) conveniently captures the contribution of the frequency, permittivity, and permeability all in one constant. Another reason is to emphasize the connection to the parameter \(\beta\) appearing in transmission line theory (see Section [m0080_Wave_Propagation_on_a_Transmission_Line] for a reminder). It should be clear that \(\beta\) is a phase propagation constant, having units of 1/m (or rad/m, if you prefer), and indicates the rate at which the phase of the propagating wave progresses with distance.
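As a quick numerical illustration (the 1 GHz value is an example, not from the text), in free space \(\beta = \omega\sqrt{\mu_0\epsilon_0}\) reduces to \(2\pi f/c\):

```python
import math

# beta = omega*sqrt(mu*epsilon); in free space sqrt(mu0*eps0) = 1/c,
# so beta = 2*pi*f/c.
mu0 = 4e-7 * math.pi         # H/m
eps0 = 8.8541878128e-12      # F/m
f = 1e9                      # 1 GHz, example frequency
omega = 2 * math.pi * f

beta = omega * math.sqrt(mu0 * eps0)  # rad/m
print(round(beta, 2))  # 20.96
```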
The wave equation for \(\widetilde{\bf H}\) is obtained using essentially the same procedure, which is left as an exercise for the reader. It should be clear from the duality apparent in Equations [m0036_eGL3]-[m0036_eAL3] that the result will be very similar. One finds: \[\boxed{ \nabla^2\widetilde{\bf H} +\beta^2 \widetilde{\bf H} = 0 } \label{m0036_eWEH2}\] Equations [m0036_eWEE2] and [m0036_eWEH2] are the wave equations for \(\widetilde{\bf E}\) and \(\widetilde{\bf H}\), respectively, for a region comprised of isotropic, homogeneous, lossless, and source-free material. Looking ahead, note that \(\widetilde{\bf E}\) and \(\widetilde{\bf H}\) are solutions to the same homogeneous differential equation. Consequently, \(\widetilde{\bf E}\) and \(\widetilde{\bf H}\) cannot differ by more than a constant factor and a direction. In fact, we can also determine something about the factor simply by examining the units involved: since \(\widetilde{\bf E}\) has units of V/m and \(\widetilde{\bf H}\) has units of A/m, this factor will be expressible in units of the ratio of V/m to A/m, which is \(\Omega\). This indicates that the factor will be an impedance. This factor is known as the wave impedance and will be addressed in Section [m0039_Uniform_Plane_Waves_Characteristics]. This impedance is analogous to the characteristic impedance of a transmission line (Section [m0052_Characteristic_Impedance]). Additional Reading:
For $r=1$, you can see that $\big(\Bbb Z\big/p\Bbb Z\big)^\times$ is cyclic of order $p-1$ by realizing it as the multiplicative group of the finite field $\Bbb Z\big/p\Bbb Z$. Now suppose $r > 1$, so that $\big(\Bbb Z\big/p^r\Bbb Z\big)^\times$ is an abelian group of order $p^{r-1}(p-1)$. Factor $p-1$ into primes as$$p-1 = q_1^{s_1}\dotsb q_t^{s_t}.$$ We will prove that each Sylow subgroup of $\big(\Bbb Z\big/p^r\Bbb Z\big)^\times$ is cyclic and use the following lemma: Lemma. Let $G$ be a finite group, and let $P_1,\dots,P_r$ be its nontrivial Sylow subgroups. If each $P_i$ is a normal subgroup of $G$, then $$ G \cong P_1\times \dotsb \times P_r. $$ Since $\big(\Bbb Z\big/p^r\Bbb Z\big)^\times$ is an abelian group, each of its Sylow subgroups is normal, so we will be able to use the Chinese Remainder Theorem (CRT) to conclude that$$\big(\Bbb Z\big/p^r\Bbb Z\big)^\times \cong \Bbb Z\big/p^{r-1}\Bbb Z\times \Bbb Z\big/q_1^{s_1}\Bbb Z\times\dotsb\times\Bbb Z\big/q_t^{s_t}\Bbb Z \underbrace{\cong}_\text{CRT} \Bbb Z\big/p^{r-1}q_1^{s_1}\dotsb q_t^{s_t}\Bbb Z,$$i.e. that $\big(\Bbb Z\big/p^r\Bbb Z\big)^\times$ is cyclic of order $p^{r-1}q_1^{s_1}\dotsb q_t^{s_t} = p^{r-1}(p-1)$, as desired. Proof sketch that the Sylow subgroups are cyclic: First show that the Sylow $p$-subgroup of $\big(\Bbb Z\big/p^r\Bbb Z\big)^\times$ is cyclic as follows. Use the Binomial Theorem and induction on $r$ to show that if $p$ is an odd prime then $(1+p)^{p^{r-1}}\equiv 1\bmod p^r$ but $(1+p)^{p^{r-2}}\not\equiv 1\bmod p^r$. Deduce that $1+p$ is an element of order $p^{r-1}$ in $\big(\Bbb Z\big/p^r\Bbb Z\big)^\times$. (Recall that if $g^n = 1$, then the order of $g$ divides $n$.) Now we know that the Sylow $p$-subgroup of $\big(\Bbb Z\big/p^r\Bbb Z\big)^\times$ is cyclic. 
Consider the canonical homomorphism $\varphi\colon\big(\Bbb Z\big/p^r\Bbb Z\big)^\times\to \big(\Bbb Z\big/p\Bbb Z\big)^\times$ defined by$$a + (p^r)\mapsto a+(p).$$Note that $\varphi$ is a surjection, and that it is indeed the map you specified. By the first isomorphism theorem,$$\frac{p^{r-1}(p-1)}{|\!\ker\varphi|} = \frac{\big|\big(\Bbb Z\big/p^r\Bbb Z\big)^\times\big|}{|\!\ker\varphi|} = \big|\big(\Bbb Z\big/p\Bbb Z\big)^\times\big|=p-1 \implies |\!\ker\varphi| = p^{r-1},$$hence the kernel of $\varphi$ is precisely the Sylow $p$-subgroup of $\big(\Bbb Z/p^r\Bbb Z\big)^\times$. For each prime $q\ne p$ dividing the order of $\big(\Bbb Z\big/p^r\Bbb Z\big)^\times$, the Sylow $q$-subgroup of $\big(\Bbb Z\big/p^r\Bbb Z\big)^\times$ therefore maps injectively into $\big(\Bbb Z\big/p\Bbb Z\big)^\times$ under $\varphi$, and can thereby be identified with a (cyclic) subgroup of the cyclic group $\big(\Bbb Z\big/p\Bbb Z\big)^\times$. Hence we have shown that every Sylow subgroup of $\big(\Bbb Z\big/p^r\Bbb Z\big)^\times$ is cyclic. $\qquad\blacksquare$ Thus, by the lemma, we may write $\big(\Bbb Z\big/p^r\Bbb Z\big)^\times$ as a direct product of its Sylow subgroups, each of which we know to be cyclic:$$\big(\Bbb Z\big/p^r\Bbb Z\big)^\times \cong \Bbb Z\big/p^{r-1}\Bbb Z\times\Bbb Z\big/q_1^{s_1}\Bbb Z\times\dotsb\times\Bbb Z\big/q_t^{s_t}\Bbb Z.$$ By the Chinese Remainder Theorem, since all the primes $p,q_i$ are distinct, we have$$\big(\Bbb Z\big/p^r\Bbb Z\big)^\times \cong \Bbb Z\big/p^{r-1}q_1^{s_1}\dotsb q_t^{s_t}\Bbb Z,$$i.e., that $\big(\Bbb Z\big/p^r\Bbb Z\big)^\times$ is cyclic, as desired.
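As a sanity check (not part of the proof), the conclusion can be confirmed by brute force for small odd prime powers; the helper names below are my own:

```python
# Brute-force check: (Z/p^r Z)^x is cyclic for small odd prime powers,
# i.e. it contains an element whose multiplicative order equals the
# group order p^(r-1)(p-1).
from math import gcd

def multiplicative_order(a, n):
    """Order of a in (Z/nZ)^x; assumes gcd(a, n) == 1."""
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

def is_cyclic_unit_group(n):
    units = [a for a in range(1, n) if gcd(a, n) == 1]
    return any(multiplicative_order(a, n) == len(units) for a in units)

# Odd prime powers: the unit group is cyclic in every case.
for n in [3, 9, 27, 5, 25, 125, 7, 49, 343]:
    assert is_cyclic_unit_group(n)

# The element 1+p has order p^(r-1), as claimed: e.g. 1+3 = 4 mod 27.
assert multiplicative_order(4, 27) == 9   # 9 = 3^(3-1)

# Contrast: powers of 2 with r >= 3 are the well-known exception.
assert not is_cyclic_unit_group(8)
```

The last assertion is a reminder that the oddness of $p$ is genuinely used (in the Binomial Theorem step of the proof).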
I'm looking for the smallest possible theoretical size of a spherical uniform star in rotation, after it loses some energy as heat. I'm puzzled by the following calculations. Initially, the star has a radius $R_0$ and angular velocity $\omega_0$. Its mass $M$ is conserved, and its mechanical energy is: \begin{equation}\tag{1} E_0 = \frac{1}{2} \, I_0 \, \omega_0^2 - \frac{3 G M^2}{5 R_0} = \frac{1}{5} \, M R_0^2 \, \omega_0^2 - \frac{3 G M^2}{5 R_0}. \end{equation} Then for some reason, the star releases (or produces) some heat $Q$ and shrinks to radius $R$. Angular momentum is conserved: \begin{equation}\tag{2} I_0 \, \omega_0 = I \, \omega \quad \Rightarrow \quad \omega = \frac{R_0^2}{R^2} \, \omega_0. \end{equation} Conservation of energy (with heat released), $E_0 = E + Q$, gives this equation: \begin{equation}\tag{3} \frac{1}{5} \, M R_0^2 \, \omega_0^2 - \frac{3 G M^2}{5 R_0} = \frac{1}{5} \, M R^2 \, \Big( \frac{R_0^2}{R^2} \, \omega_0 \Big)^2 - \frac{3 G M^2}{5 R} + Q. \end{equation} I then could do two things: Isolate $R$ as a function of $R_0$, $\omega_0$ and $Q$ (equation (3) gives a second-order equation with two roots), then ask which value of $Q$ gives the smallest value of $R$, i.e. $\frac{\partial R}{\partial Q} = 0$. The calculations are a bit messy, and I'm not sure of the result. Or express $E$ as a function of $R$ while considering $R_0$, $\omega_0$ and $Q$ as fixed parameters. Then $\frac{dE}{dR} = 0$ would minimize the final energy, but I'm not sure that considering $Q$ as fixed makes any sense. This calculation is easy though and gives: \begin{align}\tag{4} \tilde{R} &= \frac{2 \omega_0^2 \, R_0^4}{3 G M}, & E_{\text{min}} \equiv E(\tilde{R}) = -\, \frac{9 G^2 M^3}{20 \omega_0^2 \, R_0^4}. \end{align} Notice that $\tilde{R}$ is not the minimal value of $R$, but the value giving the minimal final $E$. I think this approach is wrong.
Maybe there's another way of finding the minimal value of $R$ that a star can achieve, for given $R_0$ and $\omega_0$ ($Q$ is a variable, and $M$ is conserved). Apparently, even when $Q$ is arbitrarily large, conservation of energy and angular momentum implies there's a non-trivial solution to this problem, but I may be wrong. What is the best way of doing this?
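For what it's worth, the first approach can be organized numerically: multiplying equation (3) through by $5R^2/M$ gives the quadratic $\left(\omega_0^2 R_0^2 - \frac{3GM}{R_0} - \frac{5Q}{M}\right) R^2 + 3GM\,R - \omega_0^2 R_0^4 = 0$ in $R$. A short script can then solve it; this is a sketch with toy dimensionless values of my own choosing, not physical data:

```python
# Sketch: equation (3) multiplied through by 5R^2/M becomes a quadratic in R:
#   (w0^2 R0^2 - 3GM/R0 - 5Q/M) R^2 + 3GM R - w0^2 R0^4 = 0.
# Toy (dimensionless) values below are illustrative only.
import math

def final_radii(R0, w0, M, Q, G=1.0):
    a = w0**2 * R0**2 - 3*G*M/R0 - 5*Q/M
    b = 3*G*M
    c = -w0**2 * R0**4
    disc = b*b - 4*a*c
    if disc < 0:
        return []  # no real final radius for this Q
    roots = [(-b + s*math.sqrt(disc)) / (2*a) for s in (+1, -1)]
    return [r for r in roots if r > 0]

# Sanity check: with Q = 0 nothing has happened, so R = R0 must be a root.
roots = final_radii(R0=1.0, w0=0.5, M=1.0, Q=0.0)
assert any(abs(r - 1.0) < 1e-9 for r in roots)
```

Sweeping `Q` over a grid and tracking the smallest positive root would then give a numerical estimate of the minimal $R$, which can be checked against whatever closed form the messy algebra produces.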
Black holes cannot be seen because they do not emit visible light or any electromagnetic radiation. Then how do astronomers infer their existence? I think it's now almost established in the scientific community that black holes do exist and certainly, there is a supermassive black hole at the centre of our galaxy. What is the evidence for this? Black holes cannot be seen because they do not emit visible light or any electromagnetic radiation. This is not absolutely correct, in the sense that visible light is emitted by charged matter as it falls into the strong gravitational potential of the black hole, but this radiation is not strong enough to characterize a discovery of a black hole. X-rays are also emitted if the acceleration of the charged particles is high, as is expected near a black hole's attractive sink. The suspicion of the existence of a black hole comes from kinematic irregularities in orbits. For example: Doppler studies of this blue supergiant in Cygnus indicate a period of 5.6 days in orbit around an unseen companion. ..... An x-ray source was discovered in the constellation Cygnus in 1972 (Cygnus X-1). X-ray sources are candidates for black holes because matter streaming into black holes will be ionized and greatly accelerated, producing x-rays. A blue supergiant star, about 25 times the mass of the sun, was found which is apparently orbiting about the x-ray source. So something massive but non-luminous is there (a neutron star or a black hole). Doppler studies of the blue supergiant indicate a revolution period of 5.6 days about the dark object. Using the period plus spectral measurements of the visible companion's orbital speed leads to a calculated system mass of about 35 solar masses. The calculated mass of the dark object is 8-10 solar masses; much too massive to be a neutron star, which has a limit of about 3 solar masses - hence a black hole.
This is of course not a proof of a black hole - but it convinces most astronomers. Further evidence that strengthens the case for the unseen object being a black hole is the emission of X-rays from its location, an indication of temperatures in the millions of Kelvins. This X-ray source exhibits rapid variations, with time scales on the order of a millisecond. This suggests a source not larger than a light-millisecond or 300 km, so it is very compact. The only possibilities that we know that would place that much matter in such a small volume are black holes and neutron stars, and the consensus is that neutron stars can't be more massive than about 3 solar masses. From frequently asked questions, What evidence do we have for the existence of black holes?, first in a Google search: Astronomers have found convincing evidence for a supermassive black hole in the center of our own Milky Way galaxy, the galaxy NGC 4258, the giant elliptical galaxy M87, and several others. Scientists verified the existence of the black holes by studying the speed of the clouds of gas orbiting those regions. In 1994, Hubble Space Telescope data measured the mass of an unseen object at the center of M87. Based on the motion of the material whirling about the center, the object is estimated to be about 3 billion times the mass of our Sun and appears to be concentrated into a space smaller than our solar system. Again, it is only a black hole that fits these data in our general relativity model of the universe. So the evidence for our galaxy is based on kinematic behavior of the stars and star systems at the center of our galaxy. If the black hole was situated in the middle of nowhere with no matter surrounding it, then indeed it would be quite hard to observe it. Any black hole with a considerable mass emits extremely tiny amount of Hawking radiation and that's it. However the black hole in the center of our galaxy is surrounded by matter. 
Thus we can observe it by its gravitational pull on this matter. The periods are small, with S2 completing its orbit in only 15.2 years (the observations over 15 years can be seen in this clip; thanks to luk32 for the link to the image). Such short-period orbits signify the presence of the supermassive object. But there's also matter in the vicinity of the black hole. Under its huge gravitational pull most of the matter is scattered around, whereas a little bit is driven to inspiral until it falls onto the black hole in the process known as accretion. The falling matter radiates primarily in the radio spectrum, which results in it losing energy and falling further. We can see this radiation from the accreting matter. What we don't see, however, is radiation from the object this matter falls on. Because of all the stuff falling in, compressed at the surface and overheated, any ordinary object would be very bright. Instead it's very dim, as if all this matter simply disappeared at some point. This is consistent with the existence of a black hole horizon. A similar principle works for other black hole candidates: we can observe their gravitational pull on the surrounding matter and radiation from the accretion disc in their vicinity. Sagittarius A* (the black hole at the center of our galaxy) has some of the best observational evidence for a black hole I have ever seen. Here, check out the animations from UCLA made from our observations. This is from data taken over a span of 20 years. You can see the bright spots (stars) orbiting around a patch of nothingness. The ones that get really close whip around at some insane speed but slow down quickly as they move away. Obviously, whatever is at the center has got a respectable amount of mass to it. But notice also that the stars always seem to move around something that is at the dead center (and their orbits are ellipses, which shows that we aren't just moving the camera to keep it at the center).
Consider that the mass of those stars must be insanely small next to the mass of the central body, otherwise a star would be flung out into space the other direction when it gets really close. So, here you can see massive stars orbiting something that gives off no light and must be orders of magnitude more massive than any of the stars around it. Well, that seems to fit the profile of a black hole. Plus the mass that we calculate it must have is high enough that anything that massive and that compact would have to collapse into a black hole. P.S. If you didn't check out the videos, do. They're great; I love them. Short answer: There is compelling evidence for the existence of a supermassive dark compact object at the center of the Milky Way; however, the conclusion that this compact object is a black hole (and thus has a horizon) is far from established. Moreover, the statement “black holes exist in our Universe” may be fundamentally unfalsifiable, but alternatives to black holes can be ruled out or confirmed by experiments. Longer answer: The center of our Galaxy hosts the supermassive black hole candidate that is best constrained by observations among purported black holes. Its mass and distance have been accurately determined from the orbits of nearby stars and from proper-motion studies, and it has been established that the high-frequency radio, and highly variable near-infrared and X-ray, emission from this object originates from within a few Schwarzschild radii of this very compact object. Other answers list this evidence in greater detail, but let me emphasize the following: this is all evidence of a massive dark compact object, not necessarily of a black hole. If we assume the validity of classical general relativity, there is only one possible interpretation: there is a black hole at the center of our galaxy.
However, there is always a possibility that there is some new physics that becomes relevant in situations where a black hole would be formed in ordinary GR, physics that could prevent the formation of the horizon, the defining feature of a black hole. So, what could be an alternative to a black hole? The general name is an exotic compact object (ECO). It can be seen as two regions glued together: the exterior of the black hole solution starting from some distance $r_g\cdot\epsilon$ (where $r_g$ is the Schwarzschild radius for a given mass of an ECO) outside the supposed horizon, and an interior composed of some exotic stuff in a way that does not lead to the formation of horizons. If the parameter $\epsilon$ is small enough, then most of the features one could expect from black holes could be present in this kind of object: strong gravitational lensing, general relativistic behavior of the orbits near the ECO including a photon sphere, an ergosphere, formation of gravitational waves by mergers of colliding ECOs, etc. In classical GR it is impossible for any signal (EM or GW) to escape from inside the horizon. So for any given effect of a black hole it would be possible to choose a small enough $\epsilon$ so that it would be impossible to distinguish between a true black hole and a hypothetical ECO without a horizon, meaning that the existence of black holes is unfalsifiable. There are various theoretical models that lead to the formation of ECOs instead of black holes. Most of them are based on somewhat speculative assumptions about the behavior of a particular model of quantum gravity (or specific properties of the matter content) in the strong regime. An overview of various types of black hole alternatives can be found in a recent paper: Cardoso, V., & Pani, P. (2017). Tests for the existence of black holes through gravitational wave echoes. Nature Astronomy, 1(9), 586, doi, arXiv, extended version.
This paper is quite accessible and has the following figure, giving an overview of various types of ECOs. Example: the $2-2$ hole. Here is a proposal for a specific type of ECO (chosen mostly because I have not seen this before): It is based on a conjectured analogy between quantum chromodynamics and quadratic quantum gravity: above a certain scale $\Lambda_{QQG}$ quantum gravity exhibits spin-2 ghosts, so the strong-gravity regime would be quite different from the infrared regime, which is classical quadratic gravity. A $2-2$ hole is a solution that is the exterior of the Schwarzschild solution down to about a Planck proper length of the would-be horizon, while inside there is a phase of strong quantum gravity without a horizon. Experimental tests: While there are many models for various types of ECOs, they can be constrained by experiments: Rapidly spinning ECOs often exhibit instability and lose angular momentum, so observation of large angular momentum in black hole candidates would rule such models out. Merging ECOs would have echoes in their gravitational-wave signature; analysis of data from LIGO and future generations of GW detectors could support or disprove some ECO models. See for instance this paper: Abedi, J., Dykaar, H., & Afshordi, N. (2017). Echoes from the Abyss: Tentative evidence for Planck-scale structure at black hole horizons. Physical Review D, 96(8), 082004, doi. The proposed Event Horizon Telescope could collect data on ECOs. And so this cottage industry of black hole alternatives is mostly driven by the hope (however small) that observations of near-would-be-horizon structure would give us a window into quantum gravity. First of all, black holes' accretion disks do emit radiation. That is one way that astronomers use to detect black holes, that is, observing incoming radiation. Another way is comparing the motion of objects with the motion expected from objects near black holes.
This is relevant to your question: many astronomers have noticed that the motion of stars close to the center of our galaxy matches the expected motion of stars in the presence of a black hole. This is evidence of the presence of a massive black hole in the center of the Milky Way. Black holes are like your Death Metal loving neighbors who never leave their apartment: You can't see them, but you know for sure they are there. When you state that "black holes cannot be seen because they do not emit any electromagnetic radiation" you are nominally correct: The amount of Hawking radiation the large ones emit is so tiny that they actually form a shadow in front of the microwave background. But black holes interact gravitationally with their environment, in a dramatic fashion. The otherworldly acceleration of matter orbiting and falling into the black hole can result in rather spectacular emissions of radiation. This article from 2015 reports on X-ray emissions observed by the Chandra observatory in the radio galaxy Pictor A. The hypothesis is that the X-rays are synchrotron radiation originating in a high-energy particle jet which in turn originates close to the black hole in the center of the galaxy. In order to gain some perspective on the scale of events let's examine a few numbers. The power of the observed radiation is estimated to be around $2\times 10^{35}$ W. That's half a billion times the sun's entire energy output. The observable radiation is only a small fraction of the jet's power, which is estimated to be 100 or 1000 times larger. That would make the energy output of the beams equal the energy output of a small-ish galaxy of stars. The beam has a diameter of several kiloparsecs. That makes it wider than our galaxy's height, and a substantial fraction of our galaxy's diameter. No known mechanism outside black holes would be able to effect anything on that scale.
You know your neighbors are there because they produce more noise than the entire rest of your apartment block.
LaTeX supports many worldwide languages by means of some special packages. In this article it is explained how to import and use those packages to create documents in Italian. The Italian language has some accented words. For this reason the preamble of your document must be modified accordingly to support these characters and some other features. \documentclass{article} \usepackage[utf8]{inputenc} \usepackage[italian]{babel} \usepackage[T1]{fontenc} \begin{document} \tableofcontents \vspace{2cm} %Add a 2cm space \begin{abstract} Questo è un breve riassunto dei contenuti del documento scritto in italiano. \end{abstract} \section{Sezione introduttiva} Questa è la prima sezione, possiamo aggiungere alcuni elementi aggiuntivi e tutto digitato correttamente. Inoltre, se una parola è troppo lunga e deve essere troncato babel cercherà per troncare correttamente a seconda della lingua. \section{Teoremi Sezione} Questa sezione è quello di vedere cosa succede con i comandi testo definendo \[ \lim x = \sin{\theta} + \max \{3.52, 4.22\} \] \end{document} There are two packages in this document related to the encoding and the special characters. These packages will be explained in the next sections. If you are looking for instructions on how to use more than one language in a single document, for instance English and Italian, see the International language support article. Modern computer systems allow you to input letters of national alphabets directly from the keyboard. In order to handle a variety of input encodings used for different groups of languages and/or on different computer platforms, LaTeX employs the inputenc package to set up the input encoding. In this case the package properly displays characters in the Italian alphabet. To use this package add the next line to the preamble of your document: \usepackage[utf8]{inputenc} The recommended input encoding is utf-8. You can use other encodings depending on your operating system.
For proper LaTeX document generation you must also choose a font encoding which supports the specific characters of the Italian language; this is accomplished by the package fontenc: \usepackage[T1]{fontenc} Even though the default encoding works well for Italian, using this specific encoding will avoid glitches with some specific characters. The default LaTeX encoding is OT1. To extend the default LaTeX capabilities, for proper hyphenation and for translating the names of document elements, import the babel package for the Italian language: \usepackage[italian]{babel} As you may see in the example in the introduction, instead of "abstract" and "Contents" the Italian words "Sommario" and "Indice" are used. Sometimes for formatting reasons some words have to be broken up into syllables separated by a - (hyphen) to continue the word on a new line. For example, matematica could become mate-matica. The package babel, whose usage was described in the previous section, usually does a good job breaking up the words correctly, but if this is not the case you can use a couple of commands in your preamble. \usepackage{hyphenat} \hyphenation{mate-mati-ca recu-perare} The first command imports the package hyphenat and the second line is a list of space-separated words with defined hyphenation rules. On the other hand, if you want a word not to be broken automatically, use the {\nobreak word} command within your document. For more information see
This question already has an answer here: In my .bib file, there is a reference including the following name in the authors list: Mitášová, Helena When the PDF is generated, this name does not display correctly: MitÃaÅaovÃa I searched for related topics, and found the following link: I tried including the package Alegreya mentioned therein, but the problem persists. Can someone tell me if there is a package which I can use to get all the Unicode text displayed correctly in the output (I am presuming that this will take care of Eastern European names as well, or please correct me if I am wrong)? TIA I tried adding \usepackage[utf8]{inputenc} but get an inputenc Error: Unicode char \u8” for the equation $\mu = 4\pi \times 10^{-7}$. I tried using \usepackage[utf8x]{inputenc} instead based on a search, but that shows a different error not seen before. If I remove both packages, the PDF document is generated as before, so I am not sure whether the utf8 package is the way forward for me, as I do not wish to spend a lot of time debugging LaTeX issues... unless someone can tell me a one-time fix to make all the issues associated with the introduction of the utf8 package go away, or maybe suggest an alternate package which could display all the European names correctly.
The Annals of Mathematical Statistics Ann. Math. Statist. Volume 42, Number 1 (1971), 401-404. Almost Certain Summability of Independent, Identically Distributed Random Variables Abstract The Strong Law of Large Numbers, valid for independent, identically distributed (i.i.d.) random variables $\{X_n, n \geqq 1\}$ with finite first moment, may be regarded as merely one of a host of summability methods applicable to the divergent sequence $\{X_n\}$. Here, a subclass of regular (Toeplitz) summability methods will be considered and concern will focus on the almost certain (a.c.) convergence to zero of the transformed sequence \begin{equation*}\tag{1}T_n = A_n^{-1} \sum^n_{j=1} a_jX_j\end{equation*} when centered where \begin{equation*}\tag{i}a_n \geqq 0,\quad A_n = \sum^n_{j=1} a_j \rightarrow \infty,\end{equation*} thereby ensuring regularity. If $T_n - C_n \rightarrow_{\operatorname{a.c.}} 0$ for some choice of centering constants $C_n$, the i.i.d. random variables $\{X_n\}$ will be called $a_n$-summable with probability one or simply $a_n$-summable. The Strong Law is the special case $(a_n \equiv 1)$ of Cesaro-one summability with $C_n \equiv EX$. Of course, if $X^\ast_n = X_n - X'_n, n \geqq 1$ are the symmetrized $X_n$ (i.e., $\{X'_n\}$ is i.i.d., independent of $\{X_n\}$ with the same distribution), then $a_n$-summability of $\{X_n\}$ implies $a_n$-summability of $\{X^\ast_n\}$ with vanishing centering constants, i.e. \begin{equation*}\tag{2}T^\ast_n = A_n^{-1} \sum^n_{j=1} a_jX^\ast_j \rightarrow_{\operatorname{a.c.}} 0.\end{equation*} It will be shown, on the one hand, that no such choice of $\{a_n\}$ and $\{C_n\}$ will render i.i.d. $\{X_n\}$ with the St. Petersburg (mass $2^{-n}$ at the point $2^n, n \geqq 1$) or Cauchy distribution $a_n$-summable. On the other hand, necessary and sufficient conditions for certain types of $a_n$-summability more refined than (implied by) Cesaro-one will be proffered. 
The prototype of these appears in Corollary 1 and Corollary 2. Article information: Ann. Math. Statist., Volume 42, Number 1 (1971), 401-404. First available in Project Euclid: 27 April 2007. DOI: 10.1214/aoms/1177693533. Mathematical Reviews (MathSciNet): MR279862. Zentralblatt MATH: 0229.60022. Permanent link: https://projecteuclid.org/euclid.aoms/1177693533. Citation: Chow, Y. S.; Teicher, H. Almost Certain Summability of Independent, Identically Distributed Random Variables. Ann. Math. Statist. 42 (1971), no. 1, 401--404.
Brief solution ideas to the least solved Crypto CTF challenges. Midnight Moon We can see that the primes are generated as follows. Let $m$ be the right half of the flag (as an integer) and $l$ be its byte length. We repeat the transformation $m \mapsto ((m+1)\cdot l)\oplus l$ until $m$ is prime. This is the first prime $p$. Then we set $m = (2m+1)$ and repeat the first transformation until we get another prime. As a result, the second prime $q$ is approximately equal to $2 l^e p$, where $e$ is the number of iterations in the last process. We can guess $e$ and $l$ (note that $e$ is upper bounded by $\log_l n$) and then apply the Fermat factorization method to the number $2 l^e n = 2 l^e p q$. It should work since $2 l^e p \approx q$. Since $e$ can vary a lot, we can only hope that the approximation is good enough. However, there is a little subtlety with the Fermat method. It works only if both close factors have the same parity, which is not the case: $2l^e p$ is even and $q$ is odd. This of course can be easily overcome by multiplying the number by $4$, which corresponds to multiplying both close factors by $2$. In the challenge the modulus can be factored with $l=27$ and $e=442$. The next step is to guess the number of applications of the transformation $m \mapsto ((m+1)\cdot l)\oplus l$ when going from the initial $m$ to the first prime $p$. After trivial inversion of the transformation, we obtain the right half of the flag: “4D3_1n__m1dNi9hT_witH_L0v3!}”. We could study the encryption function to decrypt the first half, but we can already guess the whole flag: “CCTF{M4D3_1n__m1dNi9hT_witH_L0v3!}”, which is correct. Starving Parrot The primes are generated by taking two random values and applying some fixed unknown polynomial to them. Once the two values are primes, the process is finished. Since we have access to the polynomial, we can easily recover it, for example by putting 10000000000000000000000000000000 as input we can clearly see the output 100…003700..002019.
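The Fermat step above can be sketched as follows; the toy primes are my own illustration, not the challenge parameters:

```python
# Sketch of Fermat factorization: walk x upward from ceil(sqrt(n)) until
# x^2 - n is a perfect square y^2; then n = (x - y)(x + y).
# Fast only when the two factors are close and share the same parity.
import math

def fermat_factor(n):
    x = math.isqrt(n)
    if x * x < n:
        x += 1
    while True:
        y2 = x * x - n
        y = math.isqrt(y2)
        if y * y == y2:
            return x - y, x + y
        x += 1

# Same-parity case: two close odd primes.
assert fermat_factor(10007 * 10037) == (10007, 10037)

# Mixed-parity trick from the write-up: if one close factor is even and
# the other odd, factor 4n instead, which doubles both factors.
p_even, q_odd = 5000, 5003   # toy stand-ins for 2*l^e*p and q
a, b = fermat_factor(4 * p_even * q_odd)
assert (a // 2, b // 2) == (p_even, q_odd)
```

In the actual challenge one would loop this over guesses of $l$ and $e$, giving up quickly (bounding the number of iterations) when a guess makes the two factors too far apart.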
The polynomial is trivially deduced: $x^{13} + 37x+2019$. It follows that $$n = (r^{13} + 37r+2019) \cdot (s^{13} + 37s+2019)$$ for some $r,s$ of size roughly 55 bits. Observe that $\sqrt[13]{n}$ provides a good approximation of $rs$. In fact, in the given setting $\lfloor \sqrt[13]{n} \rfloor = rs$. This number has 108 bits and can be easily factored: $$\begin{multline*}251970989651144357978582196759904 = 2^5 \cdot 11 \cdot 13^2 \cdot 61 \cdot 239 \cdot 491\cdot\\ \cdot 3433 \cdot 137383 \cdot 1254599604823.\end{multline*}$$ This number has 2304 divisors in total. Each divisor gives a candidate for $r$ and its complement is a candidate for $s$. By applying the polynomial to them, we can obtain potential prime factors and check if they result in the same modulus. In the challenge we obtain $$\begin{align*}r &= 30132816491977336,\\ s &= 15416171199104228.\end{align*}$$ Given the factorization, we can easily decrypt the message (don’t forget to invert the operations applied to the message before squaring): “CCTF{it5____Williams____ImprOv3d_M2_Crypt0yst3m!!!}”. Oliver Twist We can recognize the (twisted) Edwards curve addition formulas: $$\begin{align*} x_3 &\equiv (x_1 y_2 + y_1 x_2) / (1 + d x_1 x_2 y_1 y_2)\pmod{p},\\ y_3 &\equiv (y_1 y_2 - a x_1 x_2) / (1 - d x_1 x_2 y_1 y_2) \pmod{p}. \end{align*}$$ In our case $a=3$ and $d=2$. You can learn a bit about twisted Edwards curves for example from slides by Christiane Peters. We are given the $y$-coordinate of a point that was generated from the flag in a particular way. In order to find the $x$-coordinate, we have to plug the $y$-coordinate into the curve equation and solve for $x$: $$3x^2 + y^2 - 2x^2y^2-1\equiv 0 \pmod{p}.$$ This is a quadratic equation in $x$ and we can easily find both roots. One of them is significantly smaller than $p$, so we can assume that this is the one we are looking for.
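The root-and-divisor step can be sketched like this; the helper names and the small toy values of $r,s$ are mine, standing in for the 55-bit challenge values:

```python
# Sketch of the Starving Parrot step: floor(n^(1/13)) recovers r*s exactly,
# then every divisor d of r*s gives a candidate pair (d, rs // d).
def iroot(n, k):
    """Largest integer x with x**k <= n, via binary search (exact, no floats)."""
    lo, hi = 0, 1
    while hi**k <= n:
        hi *= 2
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if mid**k <= n:
            lo = mid
        else:
            hi = mid
    return lo

def divisors(n):
    small, large = [], []
    d = 1
    while d * d <= n:
        if n % d == 0:
            small.append(d)
            if d != n // d:
                large.append(n // d)
        d += 1
    return small + large[::-1]

f = lambda x: x**13 + 37*x + 2019   # the recovered polynomial
r, s = 97, 101                      # toy stand-ins for the 55-bit values
n = f(r) * f(s)
rs = iroot(n, 13)                   # the lower-order terms are far too small to matter
assert rs == r * s
assert any(f(d) * f(rs // d) == n for d in divisors(rs))
```

For the real 110-bit $rs$ one would factor it first (as in the write-up) and enumerate divisors from the prime factorization rather than by trial division.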
This point $(x,y)$ is obtained from doubling the point $(m’, y)$ $(m’ \bmod 3)$ times, where $m’$ is generated from the flag. Since we found a small $x$, we can guess that $m’ \bmod 3 = 0$, so that no doublings occurred and $x = m’$. Now, $m’$ is generated from the flag $m$ by adding $\sum_{i=1}^t 2^i + 3i$ to it for some small integer $t \ge 313$. We guess $t$ and subtract the added terms. For $t=313$ we obtain the flag: “CCTF{N3w_But_3a5Y_Twisted_Edwards_curv3_Crypt0sys7em}”.
Introduction Algorithms in the Q-learning family learn a mapping from state-action pairs to an expected reward, the so-called Q function. Classical Q-learning algorithms store this information in tabular form, whose memory complexity and non-linearly growing size make problems with large state spaces infeasible, e.g. playing video games from video input. We will look at how deep Q-learning helps mitigate this problem. We will start this article by recalling some concepts and formalizing the problem as a Markov Decision Process, which will be important in understanding the assumptions behind the Q-learning algorithm. Markov Decision Process Physical processes are inherently causal, so there is no forward dependency in time (we haven’t invented the time machine yet!). A physical process or system can be seen as transitions between so-called states. States are unique representations of a system’s internal parameters, e.g. position, velocity or a discrete identifier. Markov processes take this assumption one step further and satisfy the Markovian property: The conditional probability distribution of future states of the process depends only upon the present state, not on the sequence of events that preceded it. A Markov process is defined by a set of states and their transitions together with the respective probabilities of occurrence. The agent interacts with this process by choosing its actions and receives rewards if goals are achieved. In the above image you can see a visual representation of a discrete Markov decision process with three states (S, R and F) and the respective transition probabilities. Q-Learning Going quickly over some important notions, the objective of algorithms in the Q-learning family is to learn a mapping between a state-action pair $(s, a)$ and the corresponding expected future reward $Q(s, a)$. This function allows us to interact with the environment, maximizing the future expected reward at every step.
The Markovian property, introduced in the previous section, is important in Q-learning since it means the state-action pairs are independent of the past sequence of events. That is, $Q(s, a)$ only depends on the current state and the action. For discrete systems, the function’s values can be stored in tabular form with dimensions $S \times A$. This table is initialized with zeros and filled with an iterative algorithm called Q-learning. The algorithm explores the environment through the so-called exploration vs. exploitation trade-off, that is, it selects either a random action with probability $\epsilon$ or one that maximizes the stored Q-table. After executing the action, the update step uses the returned reward to adjust the Q table. If you have not yet studied the classic Q-learning algorithm, I strongly encourage you to understand the concepts in the tabular algorithm first [1]. The most important part of this algorithm is the update equation for the Q table: $Q^{t+1}(s_t, a_t) \Leftarrow (1-\alpha)\cdot Q^t(s_t, a_t) + \alpha \cdot (r_t + \gamma\cdot \max_a Q^t(s_{t+1}, a))$ This expression is a so-called running mean and, as we will now see, is very similar to the gradient descent update common in machine learning.
Rewriting this expression slightly, the equation becomes: $Q^{t+1}(s_t, a_t) \Leftarrow Q^t(s_t, a_t) + \alpha \cdot (r_t + \gamma\cdot \max_a Q^t(s_{t+1}, a)-Q^t(s_t, a_t))$ If we substitute $r_t + \gamma\cdot \max_a Q^t(s_{t+1}, a)$ by an auxiliary function $\mathrm{target}(r_t, s_{t+1}, Q^t)$, we can see that the expression simplifies to: $Q^{t+1}(s_t, a_t) \Leftarrow Q^t(s_t, a_t) + \alpha \cdot (\mathrm{target}(r_t, s_{t+1}, Q^t)-Q^t(s_t, a_t))$ The last equation can be seen as a gradient descent step if we interpret $e^t = \mathrm{target}(r_t, s_{t+1}, Q^t)-Q^t(s_t, a_t)$ as an error term, rewriting the expression as: $Q^{t+1}(s_t, a_t) \Leftarrow Q^t(s_t, a_t) + \alpha \cdot e^t$ By this point we are starting to see how we will be able to generalize the Q-learning algorithm from the tabular form to a function approximation. In the next section we will integrate the two pieces and show how we are able to tackle large and continuous state spaces.

Q-Function Approximation

In the previous section we saw that classical Q-learning algorithms store the Q-function in tabular form. This tabular form makes large state spaces infeasible, as the memory grows with the number of state-action pairs and suffers from the curse of dimensionality. Deep Q-learning mitigates this problem by introducing a smoothness prior (Q-function approximation). This methodology was shown to be feasible, among others, in Mnih et al. (2013) with the introduction of improvements such as the experience replay technique. Revisiting the rewritten Q update expression from the previous section: $Q^{t+1}(s_t, a_t) \Leftarrow Q^t(s_t, a_t) + \alpha \cdot e^t$ In this expression the Q-function is stored in tabular form, but could we not use a parameterized function to store the same information? Instead of storing the value of the Q-function in a table and indexing it with the state-action pairs, couldn't we calculate it using a function? Yes we can! (but there are some caveats we have to deal with first!)
Let's not get ahead of ourselves; we just saw that we can update our Q-function using something similar to gradient descent. What would happen if we substituted the Q-function with a smooth parameterized function? Starting from the expression we arrived at earlier, but with a parameterized $Q$ function with parameters $\theta$: $Q_\theta^{t+1}(s_t, a_t) \Leftarrow Q_\theta^t(s_t, a_t) + \alpha \cdot (\mathrm{target}(r_t, s_{t+1}, Q_\theta^t)-Q_\theta^t(s_t, a_t))$ Since $Q_\theta^t$ is smooth, we can realize this update as gradient descent on a squared error cost function, treating the target as a constant: $\theta^{t+1} \Leftarrow \theta^t - \frac{\alpha}{2} \cdot \nabla_\theta C$ where $C$ is defined by: $C = \left(\mathrm{target}(r_t, s_{t+1}, Q_\theta^t)-Q_\theta^t(s_t, a_t)\right)^2$ Comparing this with a supervised learning squared error function: $C= \left(y - \hat y\right)^2$ we can see that we can interpret the reference value as the measured (bootstrapped) reward and the estimated value as the reward estimated by our network. That is: $y = \mathrm{target}(r_t, s_{t+1}, Q_\theta^t) \quad \quad \quad \hat y = Q_\theta^t(s_t, a_t)$ Which looks great: we were able to transform the problem into a supervised learning setting! Before we celebrate, there are two important things we have to deal with: correlation between trajectory samples and the non-stationary, self-referential target.

Experience Replay

Stochastic gradient descent optimization requires samples to be (approximately) uncorrelated; in other words, if you feed a neural network more images of cats than dogs, it develops a natural bias towards cats. The same is true for Q-learning function approximators: if consecutive samples are drawn from the same trajectory, the resulting Q-function is skewed in favor of this trajectory. Another issue related to this approach is the forgetting of old trajectories as new ones overwrite the function's weights. Experience replay seeks to mitigate these two issues by keeping a buffer of experienced trajectories.
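Such a buffer can be sketched in a few lines (a minimal sketch; the class name, capacity, and transition tuple layout are my own choices, not from the paper):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size buffer of (s, a, r, s_next, done) transitions.

    Uniform random sampling de-correlates consecutive trajectory samples,
    and keeping old transitions around lets past trajectories keep
    contributing to updates instead of being forgotten.
    """
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions are evicted first

    def push(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```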
The samples for the Q-function update step are then drawn from this buffer instead of directly from experience. This helps de-correlate samples and remember old trajectories.

Non-Stationary Reference

As we have seen in previous sections, the reference value is non-stationary; that is, the target value depends directly on $Q_\theta^t(s_{t+1}, a)$. This stands in contrast to classical supervised learning approaches, where the reference is a fixed label. Here the network follows a reference which is changing in time. A common approach to overcoming this problem is to freeze the target network for a given number of steps while updating a training network, and after that number of steps to synchronize the weights between the two networks. This allows us to somewhat stabilize the reference value and avoid some common pitfalls in this kind of training.

Conclusion

In this article we started by introducing the Markov Decision Process and showing the importance of its assumptions in the Q-learning algorithm. Afterwards we gave a short overview of the Q-learning algorithm before showing how we can generalize tabular Q-learning to a function approximator. We demonstrated how to arrive at the update formulas for the Q-learning algorithm with a function approximator, and introduced some common pitfalls in the training procedure and how to avoid them. In the next article we will look at how to implement the algorithm from scratch in PyTorch! Stay tuned for updates by starring this repository! Curious to apply some of these algorithms on different environments? Then you have come to the right place! Explore our database of 100+ open source reinforcement learning environments at RLenv.directory! :-)
I have an optimization problem as follows: $$\underset{X}{\operatorname{argmin}}\sum _s \left \| T_sX_{:,s} - Y_{:,s} \right \|^2_2 +\lambda\left \| GX \right \|_{2,1} \tag{1}$$ I have introduced a new variable: $$\underset{X,Z}{\operatorname{argmin}}\sum _s\left \| T_sX_{:,s}-Y_{:,s} \right \|^2_2 +\gamma\left \| GX -Z\right \|^2_2+\lambda\left \| Z \right \|_{2,1} \tag{2}$$ and divided this problem into two sub-problems: $$\underset{Z}{\operatorname{argmin}} \gamma\left \| GX -Z\right \|^2_2+\lambda\left \| Z \right \|_{2,1}\tag{3.1}$$ $$\underset{X}{\operatorname{argmin}}\sum _s\left \| T_s X_{:,s} - Y_{:,s} \right \|^2_2 +\gamma\left \| GX -Z\right \|^2_2 \tag{3.2}$$ I have found (from a paper) that the solution of (3.1) is: $$Z_{n,:}=[GX]_{n,:} \max\left \{1-\frac{\lambda}{2\gamma \left \| [GX]_{n,:} \right \|_2} ,0\right\}\tag{4}$$ I am solving this problem iteratively: first I solve for $Z$ using equation (4), then I solve for $X$ using equation (3.2) with a nonlinear conjugate gradient method. Something is clearly wrong, since I cannot reach a solution. The iterates are not diverging; rather, the solution is always 0. Do you see anything wrong so far? If not, I can upload my nonlinear conjugate gradient method. Thanks in advance and have a nice day! The problem is given by: $$\begin{equation} \arg \min_{X} \frac{1}{2} \sum_{k} {\left\| {T}_{k} {X}_{:, k} - {Y}_{:, k} \right\|}_{2}^{2} + \lambda {\left\| G X \right\|}_{2, 1} \\ = \arg \min_{X} \frac{1}{2} \sum_{k} {\left\| {T}_{k} {X}_{:, k} - {Y}_{:, k} \right\|}_{2}^{2} + \lambda \sum_{l} {\left\| G {X}_{:, l} \right\|}_{2} \end{equation}$$ In the above, the MATLAB notation of : is used to select a column. It is also assumed that the mixed norm $ {\left\| \cdot \right\|}_{2, 1} $ operates on each column. In case working on each row is needed, one could easily transpose $ X $. One can see above that the problem can be solved per column of $ X $ independently.
Hence it can be solved in a separable manner, treating each column of $ X $ as a vector: $$ {X}_{:, i} = \arg \min_{x} \frac{1}{2} {\left\| {T}_{i} x - {Y}_{:, i} \right\|}_{2}^{2} + \lambda {\left\| G x \right\|}_{2} $$ Now this is a simple problem which can be solved using subgradient descent and its accelerated variants. The OP also mentions that $ {T}_{k} $ is the DFT matrix, which is a unitary matrix and hence preserves the $ {L}_{2} $ norm. So the problem can be written: $$\begin{aligned} \arg \min_{x} \frac{1}{2} {\left\| T x - y \right\|}_{2}^{2} + \lambda {\left\| G x \right\|}_{2} & = \arg \min_{x} \frac{1}{2} {\left\| {T}^{H} T x - {T}^{H} y \right\|}_{2}^{2} + \lambda {\left\| G x \right\|}_{2} \\ & = \arg \min_{x} \frac{1}{2} {\left\| x - {T}^{H} y \right\|}_{2}^{2} + \lambda {\left\| G x \right\|}_{2} \end{aligned}$$ In this form there is no need to calculate the DFT of $ x $ at each iteration. Some of the math derivation is given in my answer at The Sub Gradient and the Prox Operator of the $ {L}_{2, 1} $ Norm (Mixed Norm).
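For reference, the $Z$-update of Equation (4), the row-wise shrinkage step, can be sketched as follows (a sketch; the function name and the small epsilon guard against division by zero are my own additions):

```python
import numpy as np

def z_update(GX, lam, gamma):
    """Solve argmin_Z gamma*||GX - Z||_F^2 + lam*||Z||_{2,1} row-wise (Eq. 4).

    Each row n of Z is [GX]_{n,:} scaled by max(1 - lam/(2*gamma*||[GX]_{n,:}||_2), 0).
    """
    norms = np.linalg.norm(GX, axis=1, keepdims=True)             # ||[GX]_{n,:}||_2
    scale = np.maximum(1.0 - lam / (2.0 * gamma * np.maximum(norms, 1e-12)), 0.0)
    return GX * scale

# Rows whose norm is below lam/(2*gamma) are shrunk exactly to zero:
Z = z_update(np.array([[3.0, 4.0], [0.1, 0.0]]), lam=2.0, gamma=1.0)
```

One diagnostic this suggests for the "always 0" symptom: if $\lambda/(2\gamma)$ exceeds every row norm of $GX$, every row of $Z$ is thresholded to zero, so it is worth checking the row norms of $GX$ against $\lambda/(2\gamma)$ at each iteration.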
I am learning about instantaneous speed and velocity in Khan Academy. I understand the concept and calculation, but how do I know the different instantaneous speeds at different points in real life? closed as unclear what you're asking by Bill N, ZeroTheHero, tom, sammy gerbil, Daniel Griscom Apr 4 '18 at 2:09 The instantaneous speed can be measured using the Doppler effect, which is how radar guns work. A radar gun sends a pulse of light to a moving object, and when the pulse gets reflected off the object and returns, there is a change in frequency given by $$v = \frac{c}{2}\,\frac{\Delta f}{f}$$ From this we can measure the instantaneous velocity of the object. But we cannot find it to absolute precision, due to the finite speed of light (the velocity we obtain is the velocity an instant ago) and also the uncertainty principle, because of which we cannot measure with zero error. Short answer: You can't. Medium answer: You can't; it is a mathematical construct. However, we can measure the average velocity, $\bar{v} = \frac{\Delta x}{\Delta t}$, such that the difference between the measured average velocity and the instantaneous velocity doesn't matter to the answer of any question you might care about. Long answer: Any real measurement takes a finite amount of time and consists of discrete data points. As such it is technically impossible to measure the instantaneous velocity. However, we can do well enough. A realistic measurement of velocity may be done as follows. We can measure the position of an object, $x_i = x(t_i)$, periodically, sampling the position every $\Delta t$.
We will then get a series of positions $x_i$ corresponding to times $t_i = t_0 + i \Delta t$. We can then calculate the average velocity during any time interval from $t_i$ to $t_{i+1}$ by $$ v_i = \frac{x_{i+1} - x_i}{\Delta t} $$ This is not the instantaneous velocity but rather the average velocity over a time interval $\Delta t$. Now I need to justify to you why this can be "good enough" for a given experiment or calculation. Imagine you are running up and down a hill. The hill is 100 m long. Say you can run 5 m/s up the hill and 10 m/s down the hill. Then it will take you 20 s to run up the hill and 10 s to run down, 30 s total. Now let's imagine someone is measuring your velocity by measuring your position at various times separated by $\Delta t$, as described above. First imagine they ONLY measure how long it takes you to run the whole race. Their best estimate for your velocity during the whole race would be $v = \frac{200 \text{ m}}{30 \text{ s}} = 6.67 \frac{\text{m}}{\text{s}}$. This measurement is clearly missing some details, because you were never actually running 6.67 m/s; you were running either 5 m/s or 10 m/s the whole time. The average is just an estimate. Now imagine instead they measure your time at the top of the hill and at the bottom. This measurement would give them full details about your velocity, because they would now know your average speed was 5 m/s up the hill and 10 m/s down the hill. So the point is: with a shorter $\Delta t$ between measurements you get finer resolution on the velocity, and you can track changes in velocity that occur over the period of the measurement. Let's now imagine the person measures your velocity every second. For the first 20 seconds they would always measure your velocity to be 5 m/s, and for the last 10 seconds they would measure your velocity to be 10 m/s. However, they haven't gained any additional information over the previous case where they measured at the top and the bottom.
That is, they are making measurements faster than your velocity is changing! This is the crux of my answer. You cannot actually measure instantaneous velocity; however, if you measure average velocity with a measurement every $\Delta t$, such that $\Delta t$ is shorter than the time scale on which $v(t)$ varies, then you can get a "good enough" estimate of $v(t)$ for whatever application/experiment/problem you are considering. How do you determine what $\Delta t$ you should use? That depends on the thing you are measuring. For example, a runner probably doesn't change their pace very much over the course of 5 s, so you could probably measure every 5 s and get a pretty good estimate of their instantaneous velocity. But now consider trying to measure the velocity of a pinball in a pinball machine. The pinball's velocity can undergo many major changes in the course of 5 s. This means you should measure its velocity much more often! Modern video uses a framerate of 25 frames per second, meaning the $\Delta t$ between measurements would be 40 ms. You can imagine that this may still not capture all of the motion of a pinball, and you may instead want to use a high-speed camera with a $\Delta t$ of 1 ms or less! One parting note: this concept of identifying appropriate scales for a given problem is very important in physics, in a much broader context than just measuring instantaneous velocity. Very often we approximate discrete series as continuous ones and vice versa. The license to make these approximations comes from identifying a smallest time scale, length scale, energy scale or whatever, below which nothing in the problem is changing, so that making the approximation doesn't significantly affect the end result. This is a trick I highly recommend thinking about and becoming familiar with. You can never calculate an exact instantaneous velocity in real life.
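The hill example above can be checked numerically. A minimal sketch (the function names and sampling choices are my own) that computes finite-difference average velocities at two sampling rates:

```python
import numpy as np

def position(t):
    """Distance covered in the hill run: 5 m/s for the first 20 s, then 10 m/s for 10 s."""
    return np.where(t <= 20.0, 5.0 * t, 100.0 + 10.0 * (t - 20.0))

def avg_velocities(dt, total=30.0):
    """Average velocities v_i = (x_{i+1} - x_i) / dt on each sampling interval."""
    t = np.arange(0.0, total + dt / 2, dt)  # sample times t_i = i * dt
    return np.diff(position(t)) / dt

coarse = avg_velocities(30.0)  # one interval: only the overall average, ~6.67 m/s
fine = avg_velocities(1.0)     # resolves the two plateaus at 5 m/s and 10 m/s
```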
You can get an approximate value, but you can't get an exact one. Two problems prevent this: classically, the lack of perfect knowledge; and, quantum mechanically, the uncertainty principle. Classically, I can never exactly measure anything. I can make measurements relative to other things of known magnitude, but the limit of accuracy is always finite. I can never be exact. The uncertainty principle is a principle from quantum mechanics. In simplistic terms it means I can't ever know both position and velocity exactly, and if I know either exactly, the other value has an infinitely large margin of error. In practice, what you do with a macroscopic object is observe it and work out a velocity that fits a model of its trajectory. Let's say I make a measurement of velocity $v_k \pm \delta v_k$ with an error range estimate of $\delta v_k$. How do I remove the error? Well, I can't. I can only average it out. If I think the object is moving on a path $\vec{r}(t)$ whose formula I know (but not the values of the parameters), I can try to fit my data to that formula, get approximate values for the parameters, and use those to work out the instantaneous velocity using $\vec{v}(t)=\frac{d}{dt} \vec{r}(t)$. But there will always be some error, because I did not have exact measurements of the velocities to fit the equations to. We can reduce the margin of error in our parameters by fitting more data, but it will never quite disappear. The uncertainty principle (which doesn't matter much for large, everyday-scale objects) has a more extreme effect. It means that once I measure something like velocity, the measurement process itself will have changed the value to something else that is (more or less) random. The very act of measuring something changes it. So at a quantum mechanical level, just trying to determine something's velocity will change the velocity to some unknown value.
So the margin of error cannot be reduced by making repeated measurements, as all repeated measurements do is introduce more uncertainty, not less. And because uncertainties in, e.g., position and velocity (momentum) are linked, if I measure velocity, I've probably changed the path the object is on to some new unknown path. So I can't even fit my motion to a fixed path using repeated measurements. So quantum theory is fundamentally different in this sense. Here is another possible method with a couple of severe limitations, but probably acceptable for educational purposes: an accelerometer. If the initial speed of an object is zero (first limitation) and the accelerometer does not have any errors (second limitation), the accelerometer output will let you calculate a precise instantaneous speed of the object at any moment in time as: $$v(t) = \int_0^t a(\tau)\, d\tau$$ I am not selling anything here, but if you can afford a 3-axis accelerometer, you can track the instantaneous speed of an object in 3D space. With real-life accelerometers, the error accumulation will not allow you to measure the speed accurately beyond a short time window after the start.
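A sketch of the accelerometer idea under the two stated limitations (zero initial speed, error-free readings); the trapezoidal integration scheme and the function name are my own choices:

```python
import numpy as np

def speed_from_accel(a, dt, v0=0.0):
    """Integrate acceleration samples to speed via the trapezoidal rule.

    v(t) = v0 + integral of a(tau) up to t. With a real sensor, any bias in
    `a` accumulates linearly in v, which is why the estimate only stays
    accurate over a short window after the start.
    """
    increments = (a[1:] + a[:-1]) / 2.0 * dt
    return v0 + np.concatenate(([0.0], np.cumsum(increments)))

# Constant 2 m/s^2 for 1 s sampled at 100 Hz: the speed ramps from 0 to 2 m/s.
a = np.full(101, 2.0)
v = speed_from_accel(a, dt=0.01)
```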
I'm looking for a comprehensive description of, and justification for, a Normal hierarchical model where both the means and the standard deviations of the groups are modelled. It is common to find something like the following model in many textbooks (e.g. Gelman et al., p. 288): $$y_{ij} \sim \text{Normal}(\mu_j,\sigma) \\ \mu_j \sim \text{Normal}(M, S)$$ where $y_{ij}$ is the $i$th datapoint from group $j$, and where non-informative priors are proposed for $M$, $S$ and $\sigma$. What I'm looking for is an extension of this model where the standard deviations of the groups are also modelled and given hyperparameters (rather than assuming a single $\sigma$ for all groups). That is, something like: $$y_{ij} \sim \text{Normal}(\mu_j,\sigma_j) \\ \mu_j \sim \text{Normal}(M, S) \\ \sigma_j \sim \text{SomeDistribution}(P_1,P_2,\dots)\\ $$ but where proposals are given for: The distribution of the $\sigma_j$s ($\text{SomeDistribution}$ in the model above). Non-informative prior distributions for the hyperparameters $M$, $S$ and the parameters of $\text{SomeDistribution}$. I have not been able to find this in the literature, and my question is: Where can I find this model described in the literature? Or alternatively: What should such a model look like? References Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian data analysis. CRC Press.
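For concreteness, one common proposal for the distribution of the group standard deviations is a half-Cauchy, following Gelman's (2006) recommendation of half-Cauchy priors for group-level scale parameters. A forward-sampling sketch of the resulting generative model (the hyperparameter values, group counts, and variable names are illustrative assumptions only; a full Bayesian treatment would place weakly informative priors on them):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hyperparameters (illustrative values, not recommendations)
M, S = 0.0, 1.0   # mean and spread of the group means
tau = 1.0         # scale of the half-Cauchy on the group standard deviations
J, n = 8, 50      # number of groups, observations per group

mu = rng.normal(M, S, size=J)                   # mu_j ~ Normal(M, S)
sigma = np.abs(tau * rng.standard_cauchy(J))    # sigma_j ~ Half-Cauchy(0, tau)
y = rng.normal(mu[:, None], sigma[:, None], size=(J, n))  # y_ij ~ Normal(mu_j, sigma_j)
```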
Authors: Hariwan Zikri Ibrahim. Source: [J]. Journal of Advanced Studies in Topology, 2015, Vol. 6(2), pp. 61-68. 现代科学出版社. Abstract: The purpose of this paper is to introduce the new concepts namely, α-g-closed, pre-g-closed, semi-g-closed, b-g-closed, β-g-closed, α-g-open, pre-g-open, semi-g-open, b-g-open and β-g-open sets in ditopological texture spaces. The relationships between these classes of sets ...

Authors: A. D. Nezhad, S. Shahriyari. Source: [J]. Journal of Advanced Studies in Topology, 2015, Vol. 6(2), pp. 43-55. 现代科学出版社. Abstract: In this paper we follow the work of J. A. Alvarez Lopez and X. M. Masa (Problem 31.10) given in [1]. The purpose of this work is to study the new concepts of pseudomonoids. We also obtain some interesting results.

Authors: V. Inthumathi, M. Maheswari, A. Anis Fathima. Source: [J]. Journal of Advanced Studies in Topology, 2015, Vol. 6(2), pp. 56-60. 现代科学出版社. Abstract: In this paper, the notions of pairwise \(\delta_{\mathcal{I}}\)-semi-continuous functions and pairwise \(\delta_{\mathcal{I}}\)-semi-irresolute functions are introduced and investigated in ideal bitopological spaces.

Authors: N. Selvanayaki, Gnanmbal Ilango. Source: [J]. Journal of Advanced Studies in Topology, 2015, Vol. 6(2), pp. 38-42. 现代科学出版社. Abstract: In this paper, we introduce the notions of \(\alpha grw\)-separated sets and \(\alpha grw\)-connectedness in topological spaces and study some of their properties.

Authors: V. A. Khan, Mohd Shafiq, Rami Kamel Ahmad Rababah. Source: [J]. Journal of Advanced Studies in Topology, 2015, Vol. 6(1), pp. 28-37. 现代科学出版社. Abstract: In this article we introduce and study \(I\)-convergent sequence spaces \(\mathcal{S}^{I}(M)\), \(\mathcal{S}^{I}_{0}(M)\) and \(\mathcal{S}^{I}_{\infty}(M)\) with the help of a compact operator \(T\) on the real space \(\mathbb{R}\) and an Orlicz function \(M\). We study some top...

Authors: H. M. Abu-Donia, Mona Bakri. Source: [J]. Journal of Advanced Studies in Topology, 2015, Vol. 6(1), pp. 9-19. 现代科学出版社. Abstract: In this paper we introduce two new classes of sets in bitopological spaces: the first type is weaker than \(ij\)-\(\Omega\)-closed sets, namely \(ij\)-\(\Omega^{^{*}}\)-closed sets, and the second type, called \(ij\)-\(\Omega^{^{**}}\)-closed sets, lies between the class of ...

Authors: Hariwan Zikri Ibrahim. Source: [J]. Journal of Advanced Studies in Topology, 2015, Vol. 6(1), pp. 20-27. 现代科学出版社. Abstract: The purpose of this present paper is to study some new classes of sets by using the open sets and functions in topological spaces. For this aim, the notions of \(\delta^{*}\)-open, \(\delta\)-\(\delta^{*}\)-\(\alpha\)-open, \(\delta\)-\(\delta^{*}\)-preopen, ...

Authors: Hariwan Z. Ibrahim. Source: [J]. Journal of Advanced Studies in Topology, 2014, Vol. 6(1), pp. 1-8. 现代科学出版社. Abstract: In this paper, the author introduces and studies new notions of continuity, compactness and stability in ditopological texture spaces based on the notions of \(\alpha\)-\(g\)-open and \(\alpha\)-\(g\)-closed sets, and some of their characterizations are obtained.
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology. For continuous functions f and g, the cross-correlation is defined as: $(f \star g)(t) \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\, g(\tau+t)\,d\tau$ ... That seems like what I need to do, but I don't know how to actually implement it... how wide a time window is needed for the $Y_{t+\tau}$? And how on earth do I load all that data at once without it taking forever? And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time? Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"? Alternatively, where could I go in order to have such a question answered? @tpg2114 For reducing the number of data points when calculating a time correlation, you can run two copies of the same simulation in parallel, separated by the time lag dt. Then there is no need to store all snapshots and spatial points. @DavidZ I wasn't trying to justify its existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably an historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series. Each one is equally spaced, 1e-9 seconds apart. The black line is $\frac{dT}{dt}$ and doesn't have an axis -- I don't care what the values are. The solid blue line is the abs(shear strain) and is valued on the right axis. The dashed blue line is the result from scipy.signal.correlate and is valued on the left axis. So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how, because I don't know how the result is indexed in time. Related: Why don't we just ban homework altogether? Banning homework: vote and documentation. We're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th... So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month (2) do we reword the homework policy, and how (3) do we get rid of the tag I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating.
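On questions 2) and 3) above: for two length-$N$ series, the full cross-correlation has length $2N-1$ (399 for $N=200$, i.e. roughly 400), zero lag sits at index $N-1$, and the argmax gives the lead/lag. A sketch with numpy (`scipy.signal.correlate` in 'full' mode indexes the same way); on question 1), subtracting each signal's mean before correlating usually makes the sign of the raw correlation interpretable:

```python
import numpy as np

def find_lag(a, v):
    """Number of samples by which `a` trails `v` (positive: `a` is delayed)."""
    a = a - a.mean()                         # de-mean so the sign is meaningful
    v = v - v.mean()
    c = np.correlate(a, v, mode="full")      # length len(a) + len(v) - 1
    return int(np.argmax(c)) - (len(v) - 1)  # index len(v)-1 is zero lag

# `a` is `v` delayed by 2 samples, so the recovered lag is 2.
v = np.array([0.0, 1.0, 0.5, 0.0, 0.0, 0.0])
a = np.array([0.0, 0.0, 0.0, 1.0, 0.5, 0.0])
```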
Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question. It'd be better if we could do it quickly enough that no answers get posted until the question is clarified to satisfy the current HW policy. For the SHO, our teacher told us to scale $$p\rightarrow \sqrt{m\omega\hbar}\,p \qquad x\rightarrow \sqrt{\frac{\hbar}{m\omega}}\,x$$ and then define the following: $$K_1=\tfrac{1}{4} (p^2-q^2) \qquad K_2=\tfrac{1}{4} (pq+qp) \qquad J_3=\frac{H}{2\hbar\omega}=\tfrac{1}{4}(p^2+q^2)$$ The first part is to show that $$Q \ ...$$ Okay. I guess we'll have to see what people say, but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics. Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay. @jinawee oh, that I don't think will happen. In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have. So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual." I.e.
is it conceptual to be stuck when deriving the distribution of microstates because somebody doesn't know what Stirling's approximation is? Some have argued that is on topic even though there's nothing really physical about it, just because it's 'graduate level'. Others would argue it's not on topic because it's not conceptual. How can one prove that $$ \operatorname{Tr} \log \mathcal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$ for a sufficiently well-behaved operator $\mathcal{A}$? How (mathematically) rigorous is the expression? I'm looking at the $d=2$ Euclidean case, as discuss... I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed. And what about selfies in the mirror? (I didn't try yet.) @KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote" -- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean. Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods. Or maybe that can be a second step. If we can reduce visibility of HW, then the tag becomes less of a bone of contention. @jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as homework. @Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter. @Dilaton also, have a look at the topvoted answers on both. Afternoon folks.
I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no, because it's really a math question, but I figured I'd ask anyway) @DavidZ Ya, I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on. hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least. Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes. MO is for research-level mathematics, not "how do I compute X" user54412 @KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube @ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (cf. Hawking's recent paper)
Now showing items 1-10 of 15

A free-floating planet candidate from the OGLE and KMTNet surveys (2017): Current microlensing surveys are sensitive to free-floating planets down to Earth-mass objects. All published microlensing events attributed to unbound planets were identified based on their short timescale (below 2 d), ...

OGLE-2016-BLG-1190Lb: First Spitzer Bulge Planet Lies Near the Planet/Brown-Dwarf Boundary (2017): We report the discovery of OGLE-2016-BLG-1190Lb, which is likely to be the first Spitzer microlensing planet in the Galactic bulge/bar, an assignation that can be confirmed by two epochs of high-resolution imaging of the ...

OGLE-2015-BLG-1459L: The Challenges of Exo-Moon Microlensing (2017): We show that dense OGLE and KMTNet $I$-band survey data require four bodies (sources plus lenses) to explain the microlensing light curve of OGLE-2015-BLG-1459. However, these can equally well consist of three lenses ...

OGLE-2017-BLG-1130: The First Binary Gravitational Microlens Detected From Spitzer Only (2018): We analyze the binary gravitational microlensing event OGLE-2017-BLG-1130 (mass ratio $q \sim 0.45$), the first published case in which the binary anomaly was only detected by the Spitzer Space Telescope. This event provides ...

OGLE-2017-BLG-1434Lb: Eighth $q < 1 \times 10^{-4}$ Mass-Ratio Microlens Planet Confirms Turnover in Planet Mass-Ratio Function (2018): We report the discovery of a cold super-Earth planet ($m_p = 4.4 \pm 0.5\,M_\oplus$) orbiting a low-mass ($M = 0.23 \pm 0.03\,M_\odot$) M dwarf at projected separation $a_\perp = 1.18 \pm 0.10$ AU, i.e., about 1.9 times the snow line. ...

OGLE-2017-BLG-0373Lb: A Jovian Mass-Ratio Planet Exposes A New Accidental Microlensing Degeneracy (2018): We report the discovery of microlensing planet OGLE-2017-BLG-0373Lb. We show that while the planet-host system has an unambiguous microlens topology, there are two geometries within this topology that fit the data equally ...
OGLE-2017-BLG-1522: A giant planet around a brown dwarf located in the Galactic bulge (2018) We report the discovery of a giant planet in the OGLE-2017-BLG-1522 microlensing event. The planetary perturbations were clearly identified by high-cadence survey experiments despite the relatively short event timescale ... Spitzer Opens New Path to Break Classic Degeneracy for Jupiter-Mass Microlensing Planet OGLE-2017-BLG-1140Lb (2018) We analyze the combined Spitzer and ground-based data for OGLE-2017-BLG-1140 and show that the event was generated by a Jupiter-class (m_p\simeq 1.6 M_jup) planet orbiting a mid-late M dwarf (M\simeq 0.2 M_\odot) that ... OGLE-2016-BLG-1266: A Probable Brown-Dwarf/Planet Binary at the Deuterium Fusion Limit (2018) We report the discovery, via the microlensing method, of a new very-low-mass binary system. By combining measurements from Earth and from the Spitzer telescope in Earth-trailing orbit, we are able to measure the ... KMT-2016-BLG-0212: First KMTNet-Only Discovery of a Substellar Companion (2018) We present the analysis of KMT-2016-BLG-0212, a low flux-variation $(I_{\rm flux-var}\sim 20$) microlensing event, which is well-covered by high-cadence data from the three Korea Microlensing Telescope Network (KMTNet) ...
The concept of a lower bound has been introduced with reference to the work formula approach to analyse deformation. This approach generally results in an underestimate of the required load. Clearly, there also will be an "upper bound", i.e. an overestimate of the load that needs to be applied to effect a given deformation. The two approaches together are called "limit analysis", since the actual load required will lie between the lower and upper bounds. In practice, limit analysis is much easier to apply to a problem than the slip-line field approach and can be reasonably accurate. The upper bound is particularly useful for the study of metalworking processes, in which it is essential to ensure sufficient forces are applied to cause the required deformation. In contrast, the lower bound is valuable in engineering where failure of a component must be avoided and hence an estimate of the minimum collapse load is needed. The approach taken for estimating the upper bound is based on suggesting a likely deformation pattern, i.e. lines along which slip would be expected to occur for a given loading situation. Then the rate at which energy is dissipated by shear along these lines can be calculated and equated to the work done by an (unknown) external force. By refining the geometry of the deformation pattern, the minimum upper bound can be determined. Frictional forces can be accommodated in this approach. The approach utilises hodographs, which are self-consistent plots of velocity for different regions within a body being deformed; the different regions are assigned by considering how the overall body will deform for a particular deformation process, and their relative velocities are estimated by assuming that the applied external force has unit velocity. For both upper and lower bounds, one of the following two conditions has to be satisfied: geometrical compatibility between internal and external displacements or strains (this is usually concerned with kinetic conditions – velocities must be compatible to ensure no gain or loss of material at any point), or stress equilibrium, i.e. the internal stress fields must balance the externally applied stresses (forces). The basis of limit analysis rests upon two theorems, which can be proved mathematically. In simple terms, these theorems are: Lower Bound: any stress system in which the applied forces are just sufficient to cause yielding gives a lower bound. Upper Bound: any velocity field that can operate is associated with an upper bound solution. Example \(\PageIndex{1}\): Notched bar in tension The plane strain condition is satisfied when the breadth \( b \gg h \), the depth of the bar. Lower-Bound: Find a stress system, e.g. \( \sigma = 0\) in the length of the bar where there is the notch, \(\sigma = 2k\) elsewhere. Therefore, for a breadth \( b \), \( P = 2khb \) (\( \text{load} = \text{stress} \times \text{area} \)). Upper-Bound: Postulate a suitable simple deformation pattern. Assume yielding by slip on 45º shear planes with shear yield stress k. Let the displacement along shear plane \( \mathrm{AB}=\delta x \). Then internal work done = \( k\left| AB \right| b \delta x=k \sqrt{2} b h \delta x\), where the force is \( k\left| {AB} \right|b \) acting on the shear plane AB. The distance moved by the external load \( P \) is \( \delta x \cos 45^{\circ}=\frac{\delta x}{\sqrt{2}}\) \(\Rightarrow P \frac{\delta x}{\sqrt{2}}=k \sqrt{2} b h \delta x \) \( \Rightarrow P = 2kbh\) So here we obtain the same result for the upper bound and the lower bound \( \Rightarrow P = 2kbh\) is the true failure load, the load required to cause plastic flow. Example \(\PageIndex{2}\): Notched bar in plane bending Lower-Bound: The area immediately under the notch, above the neutral axis, is in tension, \( \sigma=2 k \). The area below the neutral axis is in compression, \( \sigma=2 k \). where: \( h \) = thickness of slab beneath the notch. \( 2 k .
\frac{h}{2} \cdot b \) = magnitude of forces in tensile and compressive regions. \( \frac{h}{2} \) = distance between the two. Equating the couples, \( M=\left(2 k \cdot \frac{h}{2} \cdot b\right) \frac{h}{2}=0.5 k h^{2} b \) Upper-Bound: Assume failure occurs by sliding around a 'plastic hinge' along a circular arc of length \( l \) and radius \( r \). If the rotation is \( \delta\theta \), the internal work done \( =k\, l b\, r \delta \theta \) along one arc. External work \( =M \delta \theta \) by one moment. \( \Rightarrow M=k l b r \), where no assumptions have been made regarding \( l \) and \( r \). The upper bound theorem states that whatever values are taken for \( l \) and \( r \) will lead to an upper bound. Clearly we wish to find the lowest possible value. From the above geometry, \( l=r \alpha \) and \( r=\frac{h}{2 \sin \alpha / 2} \) \( \Rightarrow M=\dfrac{k h^{2} b}{4} \frac{\alpha}{\sin ^{2} \alpha / 2} \) and so to find the lowest possible value of M, we minimise the function \( \dfrac{\alpha}{\sin ^{2} \alpha / 2} \). Let \[ Y=\frac{\alpha}{\sin ^{2} \alpha / 2} \nonumber \] \[ \frac{\mathrm{d} Y}{\mathrm{d} \alpha}=\frac{1}{\sin ^{4} \alpha / 2}\left\{\sin ^{2} \frac{\alpha}{2}-2 \frac{\alpha}{2} \cos \frac{\alpha}{2} \sin \frac{\alpha}{2}\right\} \nonumber \] \[ =0 \quad \text { when } \quad \sin \frac{\alpha}{2}=\alpha \cos \frac{\alpha}{2} \nonumber \] \[ \Rightarrow \tan \frac{\alpha}{2}=\alpha \nonumber \] \[ \Rightarrow M=\frac{k h^{2} b}{4} \cdot \frac{1}{\sin \alpha / 2 \cos \alpha / 2}=\frac{k h^{2} b}{2 \sin \alpha} \cong 0.69 k h^{2} b \nonumber \] Taking the lower bound and the upper bound as limits, we therefore find \[\Rightarrow 0.5 \leq \frac{M}{k h^{2} b} \leq 0.69 \nonumber \] This forms a good example of constraining the value of the external force between a lower bound and an upper bound. It is also a good example of how minimising over the assumed geometry produces the least (best) upper bound.
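The stationarity condition \( \tan(\alpha/2) = \alpha \) has no closed-form root, but it is easy to check numerically. A minimal sketch (my addition, not part of the original text) using plain bisection:

```python
import math

# Solve tan(alpha/2) = alpha for alpha in (2, 3) rad by bisection,
# then evaluate the coefficient in M = coeff * k h^2 b.
def g(alpha):
    return math.tan(alpha / 2) - alpha

lo, hi = 2.0, 3.0            # g(2) < 0, g(3) > 0, and alpha/2 < pi/2 on this bracket
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
alpha = 0.5 * (lo + hi)      # ~2.331 rad (~133.6 degrees)

coeff = 1 / (2 * math.sin(alpha))                 # from M = k h^2 b / (2 sin(alpha))
coeff_raw = alpha / (4 * math.sin(alpha / 2)**2)  # same value, unminimised form
print(alpha, coeff)
```

Both expressions agree at the root and give \( M \approx 0.69\,k h^2 b \), matching the quoted bound.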
Hodographs I A hodograph is a diagram showing the relative velocities of the various parts of a deformation process. To analyse a complicated deformation process with many shear planes, it is worth looking at the basic equation for the rate of energy dissipation in an upper bound situation in more detail. ABCD is distorted into A'B'C'D' by shear along \(\overrightarrow {SS'} \) at a velocity \( \underline{v_{s}} \) in the metal. Suppose ABCD moves towards the shear plane SS' at a velocity \({v_1}\) and suppose that there is a pressure \( p \) acting on the area \( al \) (where \( l \) is the dimension out of the plane of the paper) helping to cause this movement. Rate of performance of work externally \( = pal\left| \underline {v_1} \right|\) Rate of performance of work internally \( = k\left| {SS'} \right|l\left| \underline{v_s} \right|\) since the only internal work assumed to occur is that required to effect the shear deformation so that ABCD → A'B'C'D'. Equating these, \( \Rightarrow p a l\left|\underline{v_{1}}\right|=k\left|S S^{\prime}\right| l \left|\underline{v_{s}}\right| \) \(\Rightarrow p a=k\left|S S^{\prime}\right| \frac{\left|\underline{v_{s}}\right|}{\left|\underline{v_{1}}\right|} \) Simple vector algebra relates \( \underline{v_1} \), \( \underline{v_2} \) and \( \underline{v_s} \) as on the diagram below: If in a deformation process there are n such shear planes of the type SS', then \( p a=k \sum_{n}\left|S S^{\prime}_{n}\right|\left|v_{s n}\right| \) setting \( \left|\underline{v_{1}}\right|=1.0\), i.e. unit velocity. Rules for constructing a hodograph The animation below illustrates the seven rules for constructing a hodograph, for the case of a constrained punch. An analysis of the geometry of the hodograph enables an upper bound for the applied force to be calculated. Let \( Oq \) be a velocity vector of unit magnitude in the hodograph, i.e. \( v_{Oq} = 1 \). Due to the dead metal zone, Q and Q' move at the same velocity.
\( O \) is a stationary component of the system, anywhere in the surrounding perfectly rigid metal which has not yielded at all. \( Oq \) and \( Oq' \) are in essence vectors defining the motion of particles in region Q'. \( Or \) is the velocity of a particle in region R. \( q'r \) is a vector defining the shear velocity parallel to Q'R. \( Os \) is the velocity of a particle in region S. \( rs \) is a vector defining the shear velocity parallel to RS. Hence, \[ v_{O r}=\frac{1}{\tan \theta}, v_{q^{\prime} r}=\frac{1}{\sin \theta} \text { and } v_{O s}=v_{r s}=\frac{1}{2 \sin \theta} \nonumber \] \( \Rightarrow \) Using the rate of performance of work formula we have: \[p\left( {\frac{b}{2}} \right) = k\left\{ {Q'R\,{v_{q'r}} + OR\,{v_{Or}} + RS\,{v_{rs}} + OS\,{v_{Os}}} \right\}\] where Q'R is the length of the line dividing regions Q' and R, OR is the length of the line dividing regions O and R, RS is the length of the line dividing regions R and S, and OS is the length of the line dividing regions O and S. \[p\left(\frac{b}{2}\right)=k b\left\{\frac{1}{2 \cos \theta} \cdot \frac{1}{\sin \theta}+1 \cdot \frac{1}{\tan \theta}+\frac{1}{2 \sin \theta} \cdot \frac{1}{2 \sin \theta}+\frac{1}{2 \cos \theta} \cdot \frac{1}{2 \sin \theta}\right\} \nonumber \] \[ =k b\left\{\frac{1}{\sin \theta \cos \theta}+\frac{\cos \theta}{\sin \theta}\right\}=k b\left\{\frac{1+\cos ^{2} \theta}{\cos \theta \sin \theta}\right\} \nonumber \] \[ \Rightarrow \frac{p}{2 k}=\frac{1+\cos ^{2} \theta}{\sin \theta \cos \theta}=f(\theta) \nonumber \] \( f \) is then a minimum when \( \frac{d f}{d \theta}=0 \), i.e. when \[ \cos 2 \theta=-\frac{1}{3} \Rightarrow \theta=54.74^{\circ} \text { when } \sin \theta=\frac{\sqrt{2}}{\sqrt{3}} \text { and } \cos \theta=\frac{1}{\sqrt{3}} \nonumber \] \( \Rightarrow \) minimum \(\dfrac{p}{2k}\)\( = 2\sqrt 2 = 2.83\) from this upper bound analysis. When indenting using a sliding (frictionless) punch, we can postulate a different deformation pattern without the dead metal zone.
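The stationary condition \( \cos 2\theta = -1/3 \) and the minimum value \( 2\sqrt{2} \) quoted above can be checked numerically; a small sketch (my addition, not from the original page):

```python
import math

def f(theta):
    # upper-bound expression p/2k = (1 + cos^2(theta)) / (sin(theta) cos(theta))
    c, s = math.cos(theta), math.sin(theta)
    return (1 + c * c) / (s * c)

# crude grid search over (0, pi/2)
N = 200000
best_t = min((0.01 + (math.pi / 2 - 0.02) * i / N for i in range(N + 1)), key=f)

theta_star = 0.5 * math.acos(-1.0 / 3.0)   # analytic stationary point, ~54.74 degrees
print(best_t, f(best_t))
```

The grid minimum lands on the analytic stationary point, with \( f \approx 2.828 = 2\sqrt{2} \).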
The system also has a plane of symmetry and a hodograph can be constructed as follows: As before, \( v_{Oq} = 1.0 \) Material in R travels in the direction shown with velocity \( Or = \frac{1}{\sin \theta } \) \[ |O r|=|r s|=\frac{1}{\sin \theta}=|s t|=|O t| \nonumber \] \[ \left| {Os} \right| = \frac{2\cos \theta }{\sin \theta } \nonumber \] \[ \left| {qr} \right| = \frac{1}{\tan \theta }= \frac{\cos \theta }{\sin \theta } \nonumber \] Lengths in drawing of indent: \(QR = SO = \dfrac{b}{2}\) \( OR = RS = ST = OT = \dfrac{b}{4\cos \theta} \) We therefore have: \( \dfrac{pb}{2} = k\left\{ OR\,v_{Or} + RS\,v_{rs} + OS\,v_{Os} + ST\,v_{st} + TO\,v_{tO} \right\} \) \[ =k b\left\{\frac{1}{4 \cos \theta} \cdot \frac{1}{\sin \theta}+\frac{1}{4 \cos \theta} \cdot \frac{1}{\sin \theta}+\frac{1}{2} \cdot \frac{2 \cos \theta}{\sin \theta}+\frac{1}{4 \cos \theta} \cdot \frac{1}{\sin \theta}+\frac{1}{4 \cos \theta} \cdot \frac{1}{\sin \theta}\right\} \nonumber \] \[ =k b\left\{\frac{1}{\sin \theta \cos \theta}+\frac{\cos \theta}{\sin \theta}\right\} \nonumber \] \( \Rightarrow \dfrac{P}{2 k}=\left\{\dfrac{1+\cos ^{2} \theta}{\sin \theta \cos \theta}\right\} \) as before for the case of the constrained punch. This analysis has assumed that no friction occurs at the punch face to cause particles in R to move parallel to OR. If there is friction, we can take it to be sticking friction, so that there is a shear stress k acting and slippage velocity \( = {v_{qr}}\) \( \Rightarrow \) in this case \(\frac{P}{2k} = \left\{ \frac{2 + 3{\cos }^2 \theta }{2\sin \theta \cos \theta } \right\}\) \( \Rightarrow \) of the three possible upper bound solutions, the 'best' answer is \(\frac{P}{2k} = 2\sqrt 2 = 2.83\) This is a 10% overestimate of the true value of \(\dfrac{P}{2k}\) found from slip-line field theory. Hodographs II Extrusion is an important working process.
A simple form of extrusion used for non-ferrous metals involves a smooth square die. We define the extrusion ratio, R, as the ratio of areas: \( R=\frac{A_{0}}{A_{1}}=\frac{H}{h} \) for plane strain (R > 1), e.g. R = 4 corresponds to a 75% reduction in area. For a square die with sliding on the die face in plane strain, the hodograph can be constructed as follows: A full mathematical analysis of this hodograph is given separately. An alternative approach to an extrusion hodograph assumes there is a 'dead metal' zone. Then \( p \frac{H}{2}=k\left\{PQ\,v_{pq}+DQ\,v_{dq}+QR\,v_{qr}\right\} \) After similar algebra to the previous example, we obtain \[ \frac{p}{2 k}= \frac{1}{2(\sin \varphi-\cos \varphi)} \left\{\frac{R+1}{\sin \varphi}-2(R-1) \cos \varphi\right \} \] Minimising, \( \cot \varphi=1-\frac{2}{\sqrt{R+1}} \) After more algebra, it is found that \[ \frac{p_{\min }}{2 k}=2(\sqrt{R+1}-1) \] Note that for low R (< 4) this value is less than that for sliding on the die face, even if the die face is frictionless. ⇒ For R < 4 this is a better upper bound solution for extrusion problems. A full mathematical analysis of this hodograph is given separately.
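As a numerical sanity check (my addition, not part of the original page), minimising the dead-metal-zone expression over \( \varphi \) reproduces the closed form \( 2(\sqrt{R+1}-1) \) for several extrusion ratios:

```python
import math

def p_over_2k(phi, R):
    # upper-bound expression for extrusion with a dead metal zone (from the text)
    s, c = math.sin(phi), math.cos(phi)
    return ((R + 1) / s - 2 * (R - 1) * c) / (2 * (s - c))

results = {}
for R in (2.0, 4.0, 9.0):
    # valid range needs sin(phi) > cos(phi); the optimum can lie past 90 degrees
    lo, hi = math.pi / 4 + 0.01, 2.8
    grid = [lo + (hi - lo) * i / 200000 for i in range(200001)]
    best = min(p_over_2k(phi, R) for phi in grid)
    results[R] = (best, 2 * (math.sqrt(R + 1) - 1))
print(results)
```

For R = 2 the optimum \( \varphi \) actually exceeds 90° (the minimising condition \( \cot\varphi = 1 - 2/\sqrt{R+1} \) goes negative), which is why the scan range extends well beyond \( \pi/2 \).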
Why is the IK formulation like that? It looks like it's trying to find the optimum dq/dt that will minimize the magnitude dq/dt. How is that giving IK solution? and isn't the optimum value be dq/dt=0? The formulation is typical for redundant robots, in which there are an infinite number of joint velocity vectors that could satisfy the $\dot{r}_{t}$ goal. In the version you cite, the $Q$ matrix would allow you to weight the different joint velocities in order to create an optimal solution that matches that $Q$. Other formulations of this approach specifically call out the null space of the manipulator Jacobian to be used for satisfying the optimization criteria. Also, that last equation is inverted to solve for joint velocities, which makes this an approach to inverse kinematics. With nonsquare Jacobians, pseudoinverses are used for this. Now that I have investigated the paper in question, here is my new answer. It turns out that the paper actually discusses task-based motion generation. That is, given a task (end-effector velocity trajectory) $\dot{r}(t)$, find a robot input (joint velocity trajectory) $\dot{q}(t)$ that realizes the task. The formulated optimization problem is happening at a time instant on the trajectory. So if the Jacobian is square and full-rank, you wouldn't have to worry about this optimization problem since there can be only one $\dot{q}$ corresponding to a given $\dot{r}$. However, when the robot is redundant, there can be infinitely many $\dot{q}$ satisfying $J\dot{q} = \dot{r}$. Now that you have choices, you can do fancier things. In this case, the minimization objective is $\Vert \dot{q}^TQ\dot{q} \Vert$ to minimize the magnitude of the joint velocity. The reason is simple: one wouldn't want the joint velocity to just go unreasonably high. So, to really answer your questions Why is the IK formulation like that? I think it is just that this formulation is mentioned as an example that leads to the main content of the paper. 
IK problems don't have to be formulated this way. There are other formulations out there. How is that giving IK solution? $\dot{r}$ implicitly specifies the next end-effector pose. A solution $\dot{q}$ implicitly specifies the next IK solution to reach the specified pose. isn't the optimum value be dq/dt=0? Without the equality constraint, yes. However, there is the equality constraint $J\dot{q} = \dot{r}$, so $\dot{q}$ must in general be non-zero. (Below is my previous answer) The formulation is general, i.e., it can be used with any type of robot. I think it may be easier to consider the discrete-time analogy of the problem. Given an initial point $q$ (i.e., a robot configuration), this optimization problem is actually trying to pull $q$ towards some point $q^\ast$ that is an IK solution to your problem. What is effectively steering the point $q$ towards a solution is the difference $\Delta r$ between your desired end-effector position and your current end-effector position. However, what you control is actually the joint values. They are related by $$J\Delta q = \Delta r.$$ You can then compute the joint velocity that satisfies the above equation (given the current $\Delta r$). And since $\Delta q$ is essentially the difference of joint values between consecutive time steps, $\Delta q = q[t + \Delta t] - q[t]$, you will obtain a joint value for the next time step. That's why this is the constraint that the optimization problem is subject to. At the end, the system should stop moving when it reaches an IK solution. This means $\Delta q$ should be zero. That's why the minimization objective is to minimize a norm of $\dot{q}$. I think the positive semi-definite matrix $Q$ in the objective function will control how each of the joint values converges to an IK solution. For example, if $Q$ is an identity matrix, it means you place equal importance on each DOF (that is, you do not have any bias in favor of some DOFs converging faster than the others).
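For a concrete picture (my own sketch, not from the paper being discussed): with positive definite $Q$ and full row-rank $J$, the minimiser of $\dot{q}^T Q \dot{q}$ subject to $J\dot{q} = \dot{r}$ has the closed form $\dot{q} = Q^{-1}J^T(JQ^{-1}J^T)^{-1}\dot{r}$, a $Q$-weighted pseudoinverse, which a few lines of NumPy can verify. The 7-joint, 3-dof dimensions below are arbitrary demo values:

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((3, 7))          # redundant arm: 7 joints, 3-dof task
rdot = rng.standard_normal(3)            # commanded end-effector velocity
Q = np.diag(rng.uniform(0.5, 2.0, 7))    # joint-velocity weights (positive definite)

# weighted minimum-norm solution of the equality-constrained QP
Qinv = np.linalg.inv(Q)
qdot = Qinv @ J.T @ np.linalg.solve(J @ Qinv @ J.T, rdot)

# the task constraint is satisfied exactly
assert np.allclose(J @ qdot, rdot)

# any other feasible velocity (qdot plus null-space motion) costs at least as much
z = (np.eye(7) - np.linalg.pinv(J) @ J) @ rng.standard_normal(7)
obj = lambda v: v @ Q @ v
print(obj(qdot), obj(qdot + z))
```

The null-space perturbation `z` keeps the end-effector velocity unchanged, illustrating why redundancy leaves room for secondary optimisation criteria.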
Find the Nash equilibrium of Cournot's game when there are two firms, the inverse demand function is $P(Q) = \alpha - Q$ when $\alpha \ge Q$ and $0$ otherwise, and the cost function of each firm $i$ is $C_i(q_i) = q_i^2$. If the firms collude in this situation to create a cartel to maximize their profits, how much would each firm produce? I'm able to find the Nash equilibrium quantities when there is no collusion - however, I'm stuck on the cartel part. We want to find the joint profit maximizing quantities under collusion here. When the marginal cost to each firm is constant this is easy to do, since $C(Q) = C(q_1) + C(q_2)$. However, in this case, $C(Q) = q_1^2 + q_2^2$. Thus, our joint profits are $\pi(Q) = Q(\alpha - Q) - (q_1^2 + q_2^2) \neq Q(\alpha - Q) - Q^2$. I'm not sure how to take this first order condition (with respect to $Q = q_1 + q_2$) given the non-linearity of the cost functions. Any ideas as to how to solve this would be greatly appreciated (not looking for a direct answer; simply a nudge in the right direction).
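A numeric nudge in that direction (my addition, not part of the original thread, using an arbitrary demo value $\alpha = 10$): for any fixed total $Q$, the cartel will split production so as to minimise total cost, and with $C_i(q_i) = q_i^2$ the cost-minimising split is $q_1 = q_2 = Q/2$, so $C(Q) = Q^2/2$ and the first-order condition can then be taken in $Q$ alone:

```python
# alpha is an assumed demo value; the argument goes through for any alpha > 0
alpha = 10.0

# Step 1: for a fixed total Q, the cost q1^2 + (Q - q1)^2 is minimised at q1 = Q/2
Q_fixed = 4.0
q1_grid = [Q_fixed * i / 100000 for i in range(100001)]
best_q1 = min(q1_grid, key=lambda q1: q1**2 + (Q_fixed - q1)**2)

# Step 2: with the optimal split, C(Q) = Q^2/2, so maximise Q*(alpha - Q) - Q^2/2
Q_grid = [alpha * i / 100000 for i in range(100001)]
best_Q = max(Q_grid, key=lambda Q: Q * (alpha - Q) - Q**2 / 2)
print(best_q1, best_Q)
```

The key observation is the first step: joint profit maximisation decomposes into cost minimisation at fixed output plus a one-variable revenue-cost trade-off in $Q$.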
I'm going to prove the statement by contradiction. The assumption that the weak topology $\tau_w$ on the $\infty$-dimensional normed linear space $X$ is first-countable is equivalent to saying that each point in $X$ has a countable neighborhood basis. In particular, for $0_X \in X$ there exists a neighborhood basis $(U_n)_n\subset\tau_w$. We define the sequence $(A_n)_n\subset\tau_w $ by$$A_n:=\bigcap_{i=1}^n U_i$$obtaining another neighborhood basis of $0_X$, but this time decreasing. It's easy to prove that in a normed linear space, each weakly open neighborhood of $0$ contains a non-trivial closed subspace. So for every $n$, $A_n$ contains a non-trivial subspace $Y_n$ of $X$, and we can pick a vector $y_n\in Y_n\setminus\{0_X\}$. We then consider $x_n:=n\frac{y_n}{\lVert y_n\rVert}\in Y_n$. Because $\lVert x_n\rVert=n$, $(x_n)_n$ is an unbounded sequence, so it is not weakly convergent (this is also easy to show: prove the contrapositive statement by using the canonical embedding and the uniform boundedness theorem). On the other hand, for any $\varepsilon>0$ and $\phi \in X^*$ the set $$U_{\varepsilon,\phi}=\{x\in X :\ \lvert \phi(x) \rvert<\varepsilon\}$$ is a neighborhood of $0_X$, and because $(A_n)_n$ is a decreasing neighborhood basis, there exists an $N \in \mathbb{N}$ such that $$\forall n\ge N: U_{\varepsilon,\phi}\supseteq A_n\supseteq Y_n$$ In particular $\lvert\phi(x_n)\rvert<\varepsilon$ for all $n \ge N$, since $x_n\in Y_n$. Because $\varepsilon$ was picked arbitrarily, we get $\lvert \phi(x_n)\rvert\rightarrow0$. We also considered an arbitrary $\phi \in X^*$, and hence $x_n\overset{w}\rightarrow 0$. This is a contradiction to what we showed in the above paragraph. This proves the statement.
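The step "each weakly open neighborhood of $0$ contains a non-trivial closed subspace" deserves one line of justification; the following is the standard finite-codimension argument, added here for completeness:

```latex
% Any basic weak neighborhood of 0 has the form
U = \{\, x \in X \;:\; |\phi_i(x)| < \varepsilon,\ i = 1,\dots,k \,\},
\qquad \phi_1,\dots,\phi_k \in X^* .
% The common kernel
Y = \bigcap_{i=1}^{k} \ker \phi_i \;\subseteq\; U
% is a closed subspace of codimension at most k, hence non-trivial when
% \dim X = \infty; a general weak neighborhood contains such a basic one.
```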
Night Side Maneuvers We can minimize night light pollution (NLP), and advance perigee against light pressure orbit distortion, by turning the thinsat as we approach eclipse. The overall goal is to perform 1 complete rotation of the thinsat per orbit, with it perpendicular to the sun on the day side of the earth, but turning it by varying amounts on the night side. Another advantage of the turn is that if thinsat maneuverability is destroyed by radiation or a collision on the night side, it will come out of the night side with a slow tumble that won't be corrected. The passive radar signature of the tumble will help identify the destroyed thinsat to other thinsats in the array, allowing another sacrificial thinsat to perform a "rendezvous and de-orbit". If the destroyed thinsat is in shards, the shards will tumble.
The tumbling shards (or a continuously tumbling thinsat) will eventually fall out of the normal orbit, no longer get J_2 correction, and the thinsat orbit will "eccentrify", decay, and reenter. This is the fail-safe way the arrays will reenter, if all active control ceases. Maneuvering Thrust and Satellite Power Neglecting tides, the synodic angular velocity of the m288 orbit is ω = 4.3633e-4 rad/s = 0.025°/s. The angular acceleration of a thinsat is 13.056e-6 rad/s² = 7.481e-4 °/s² with a sun angle of 0°, and 3.740e-4 °/s² at a sun angle of 60°. Because of tidal forces, a thinsat entering eclipse will start to turn towards sideways alignment with the center of the earth; it will come out of eclipse at a different velocity and angle than it went in with. If the thinsat is rotating at ω and either tangential or perpendicular to the gravity vector, it will not turn while it passes into eclipse. Otherwise, the tidal acceleration is θ̈ = (3/2) ω² sin 2δ, where δ is the angle to the tangent of the orbit. If we enter eclipse with the thinsat not turning, and oriented directly to the sun, then δ = 30°. Three Strategies and a Worst Case Failure Mode There are many ways to orient thinsats in the night sky, with tradeoffs between light power harvest, light pollution, and orbit eccentricity. If we reduce power harvest, we will need to launch more thinsats to compensate, which makes more problems if the system fails. I will present three strategies for light harvest and night light pollution. The actual strategies chosen will be a blend of those. Tumbling If things go very wrong, thinsats will be out of control and tumbling. In the long term, the uncontrolled thinsats will probably orient flat to the orbital plane, and reflect very little light into the night sky, but in the short term (less than decades), they will be oriented in all directions.
This is equivalent to mapping the reflective area of front and back (2πR²) onto a sphere (4πR²). Light with intensity I shining onto a sphere of radius R is reflected uniformly in all directions. So if the sphere intercepts πR²I units of light, it scatters eIR²/4 units of light (e is albedo) per steradian in all directions. While we will try to design our thinsats with low albedo (high light absorption on the front, high emissivity on the back), we can assume they will get sanded down and made more reflective by space debris, and they will get broken into fragments of glass with shiny edges, adding to the albedo. Assume the average albedo is 0.5, and assume the light scattering is spherical for tumbling. Source for the above animation: g400.c Three design orientations All three orientations shown are oriented perpendicularly in the daytime sky. Max remains perpendicular in the night sky, Min is oriented vertically in the night sky, and Zero is edge-on to the terminator in the night sky. All lose orientation control and are tilted by tidal forces in eclipse - the compensatory thrusts are not shown. Min and Zero are accelerated into a slow turn before eclipse, so they come out of the eclipse in the correct orientation. In all cases, there will probably be some disorientation and sun-seeking coming out of eclipse, until each thinsat calibrates to the optimum inertial turn rate during eclipse. So, there may be a small bit of sky glow at the 210° position, directly overhead at 2am and visible in the sky between 10pm and 6am. Max NLP: Full power night sky coverage, maximum night light pollution The most power is harvested if the thinsats are always oriented perpendicular to the sun. During the half of their orbit in the night sky, there will be some diffuse reflection to the side, and some of that will land in the earth's night sky. The illumination is maximum along the equator.
For the M288 orbit, about 1/6th of the orbit is eclipsed, and 1/2 of the orbit is in daylight with the diffuse (Lambertian) reflection scattering towards the sun and onto the day side of the earth. Only the two "horns" of the orbit, the first between 90° and 150° (6pm to 10pm) and the second between 210° and 270° (2am to 6am), will reflect light into the night sky. The light harvest averages to 83% around the orbit. This is the worst case for night sky illumination. Though it is tempting to run thinsats in this regime, extracting the maximum power per thinsat, it is also the worst case for eccentricity caused by light pressure, and the thinsats must be heavier to reduce that eccentricity. Min NLP: Partial night sky coverage, some night light pollution This maneuver will put some scattered light into the night sky, but not much compared to perpendicular solar illumination all the way into shadow. In the worst case, assume that the surface has an albedo of 0.5 (typical solar cells with an antireflective coating are less than 0.3) and that the reflected light is entirely Lambertian (isotropic) without specular reflections (which will all be away from the earth). At a 60° angle, just before shadow, the light emitted by the front surface will be 1366 W/m² × 0.5 (albedo) × 0.5 (cos 60°), and it will be scattered over 2π steradians, so the illumination per steradian will be 54 W/m²-steradian just before entering eclipse. Estimate that the light pollution varies from 0 W to 54 W between 90° and 150°, and that the average light pollution is half of 54 W, for 1/3 of the orbit. Assuming an even distribution of thinsat arrays in the constellation, that works out to an average of 9 W/m²-steradian for all thinsats in M288 orbit. The full moon illuminates the night side of the equatorial earth with 27 mW/m² near the equator. A square meter of thinsat at 6400 km distance produces 9 W / (6,400,000 m)², or 0.22 picowatts per m² on the ground per m² of thinsat, times the area of all thinsats.
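This chain of estimates can be reproduced in a few lines (my sketch; the 0.5 albedo, 60° exit angle, and 6400 km distance are the text's own assumptions):

```python
import math

SOLAR = 1366.0                      # W/m^2, solar constant
ALBEDO = 0.5                        # assumed worst case (from the text)
DIST = 6.4e6                        # m, ~6400 km

# Lambertian scatter just before eclipse (sun angle 60 deg), over 2*pi steradians
per_sr = SOLAR * ALBEDO * math.cos(math.radians(60)) / (2 * math.pi)  # ~54 W/m^2-sr

# pollution ramps 0 -> 54 W over the two "horns" (1/3 of the orbit),
# averaged over the whole orbit and constellation
avg_sr = (per_sr / 2) * (1 / 3)                                       # ~9 W/m^2-sr

# ground flux per square meter of thinsat at 6400 km
flux = avg_sr / DIST**2                                               # ~0.22 pW/m^2
print(per_sr, avg_sr, flux)
```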
If thinsat light pollution is restricted to 5% of full moon brightness (1.3 mW/m²), then we can have 6000 km² of thinsats up there, at an average of 130 W/m², or about 780 GW of thinsats at m288. That is about a million tons of thinsats. The orientation of the thinsat over a 240 minute synodic m288 orbit at the equinox is as follows, relative to the sun:

time (min) | orbit degrees | rotation rate | sun angle | illumination | night light
0 to 60 | 0° to 90° | ~0 ω | 0° | 100% | 0 W
60 to 100 | 90° to 150° | ~1 ω | 0° to 60° | 100% to 50% | 0 W to 54 W
100 to 140 | 150° to 210° | ~4 ω | 60° to 300° | eclipse | 0 W
140 to 180 | 210° to 270° | ~1 ω | 300° to 0° | 50% to 100% | 54 W to 0 W
180 to 240 | 270° to 0° | ~0 ω | 0° | 100% | 0 W

The angular velocity change at 0° takes 0.025 / (7.481e-4) = 33.4 seconds, and during that time the thinsat turns 0.42°, with negligible effect on thrust or power. The angular velocity change at 60° takes 0.075 / (3.74e-4) = 200.5 seconds, and during that time the thinsat turns 12.5°, perhaps from 53.7° to 66.3°, reducing power and thrust from 59% to 40%, a significant change. The actual thrust change versus time will be more complicated (especially with tidal forces), but however it is done, the acceleration must be accomplished before the thinsat enters eclipse. The light harvest averages 78% around the orbit. Zero NLP: Partial night sky coverage, no night light pollution In this case, in the night half of the sky the edge of the thinsat is always turned towards the terminator. As long as the thinsats stay in control, they will never produce any nighttime light pollution, because the illuminated side of the thinsat is always pointed away from the night side of the earth. The average illumination fraction is around 68%.
The orientation of the thinsat over a 240 minute synodic m288 orbit at the equinox is as follows, relative to the sun:

time (min) | orbit degrees | average rotation rate | sun angle | illumination | night light
0 to 60 | 0° to 90° | 0 ω | 0° | 100% | 0 W
60 to 100 | 90° to 150° | 1.5 ω | 0° to 90° | 100% to 0% | 0 W
100 to 140 | 150° to 210° | 3 ω (starting at 3.333 ω) | 90° to 270° | eclipse | 0 W
140 to 180 | 210° to 270° | 1.5 ω | 270° to 0° | 0% to 100% | 0 W
180 to 240 | 270° to 0° | 0 ω | 0° | 100% | 0 W

Pedants and thinsat programmers take note: the actual synodic orbit period is 240 minutes and 6.57 seconds long; that results in 2190.44 rather than 2191.44 sidereal orbits per year, accounting for the annual apparent motion of the sun around the sky. The light harvest averages 67% around the orbit. Why would a profit maximizing operator settle for 67% when 83% was possible? Infrared-filtering thinsats reduce launch weight and can use the Zero NLP flip to increase the minimum temperature of a thinsat during eclipses. An IR-filtering thinsat in maximum night light pollution mode will have the emissive backside pointed at 2.7 K space when it enters eclipse; the thinsat temperature will drop towards 20 K if it cannot absorb the 64 W/m² of 260 K black body radiation reaching it from the earth through the 3.5 μm front side infrared filter. The thinsat will become very brittle at those temperatures, and the thermal shock could destroy it. If the high thermal emissivity back side is pointed towards the 260 K earth, the temperature will drop to 180 K - still challenging, but the much higher thermal mobility may heal atomic-scale damage. Details of the Zero NLP maneuver In the night sky, assuming balanced thrusters, only tidal forces act on the thinsat: θ̈ = −(3/2) ω² sin(2θ).
Integrating numerically from the proper choice of initial rotation rate (10/9ths the average, due to tidal deceleration), we go from 30 degrees "earth relative tilt" at the beginning of eclipse to 150 degrees tilt at the end of eclipse, with the night sky sending 260 K infrared at the back side throughout the maneuver. The entire disk of the earth is always in view, and always emits the same amount of infrared to a given radius, but the Lambertian angle of absorption changes.

[Figure: Power versus angle]

[Figure: Night light pollution versus hour - the night light pollution for 1 terawatt of thinsats at m288. Mirror the graph for midnight to 6am.]

Some light is also put into the daytime sky (early morning and late afternoon), but it will be difficult to see in the glare of sunlight. The Zero NLP option puts no light in the night sky, so that curve is far below the bottom of this graph. Source for the above two graphs: nl02.c

Note to Astronomers

Yes, we will slightly occult some of your measurements, though we won't flash your images like Iridium. We will also broadcast our array ephemerides, and deliver occultation schedules accurate to the microsecond, so you can include that in your luminosity calculations. Someday, server sky is where you will perform those calculations, rather than in your own coal powered (and haze producing) computer.
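The libration during eclipse can be checked numerically. This is a minimal sketch (my own, not from the Server Sky code), assuming a 240-minute orbit, a 40-minute eclipse, and an initial earth-relative rate of 10/9 of the average rate needed to cover 120° during the eclipse:

```python
import numpy as np

# Integrate the tidal libration equation from the text,
#   theta'' = -(3/2) * omega^2 * sin(2*theta),
# across the 40-minute m288 eclipse.  The orbit period, eclipse length,
# 30-degree starting tilt, and the 10/9 factor come from the text; the
# integrator and step size are my own choices.
omega = 2 * np.pi / (240 * 60)        # orbital rate, rad/s (240 min orbit)
t_end = 40 * 60                       # eclipse duration, s
avg_rate = np.radians(120) / t_end    # average earth-relative rate over eclipse
theta = np.radians(30)                # initial earth-relative tilt, rad
rate = (10 / 9) * avg_rate            # "10/9ths the average"

dt = 0.1
angles = [theta]
for _ in range(int(t_end / dt)):
    acc = -1.5 * omega**2 * np.sin(2 * theta)   # tidal angular acceleration
    rate += acc * dt                            # semi-implicit Euler step
    theta += rate * dt
    angles.append(theta)

final_deg = np.degrees(theta)   # tilt increases monotonically, ending in the
                                # neighborhood of the 150 degrees quoted above
```

The rotation rate dips near 90°, where the tidal torque opposes the spin, and recovers on the far side, which is the symmetry the maneuver relies on.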
Let us denote $X^\top X$ by $A$. By construction, it is an $n\times n$ square symmetric positive semi-definite matrix, i.e. it has an eigenvalue decomposition $A=V\Lambda V^\top$, where $V$ is the matrix of eigenvectors (each column is an eigenvector) and $\Lambda$ is a diagonal matrix of non-negative eigenvalues $\lambda_i$ sorted in descending order. You want to maximize $$\operatorname{Tr}(D^\top A D),$$ where $D$ has $l$ orthonormal columns. Let us write it as $$\operatorname{Tr}(D^\top V\Lambda V^\top D)=\operatorname{Tr}(\tilde D^\top\Lambda \tilde D)=\operatorname{Tr}\big(\tilde D^\top \operatorname{diag}\{\lambda_i\}\, \tilde D\big)=\sum_{i=1}^n\lambda_i\sum_{j=1}^l\tilde D_{ij}^2.$$ This algebraic manipulation corresponds to rotating the coordinate frame such that $A$ becomes diagonal. The matrix $D$ gets transformed as $\tilde D=V^\top D$, which also has $l$ orthonormal columns. And the whole trace is reduced to a linear combination of eigenvalues $\lambda_i$. What can we say about the coefficients $a_i=\sum_{j=1}^l\tilde D_{ij}^2$ in this linear combination? They are row sums of squares in $\tilde D$, and hence (i) they are all $\le 1$ and (ii) they sum to $l$. If so, then it is rather obvious that to maximize the sum, one should take these coefficients to be $(1,\ldots, 1, 0, \ldots, 0)$, simply selecting the top $l$ eigenvalues. Indeed, if e.g. $a_1<1$ then the sum will increase if we set $a_1=1$ and reduce the size of the last non-zero $a_i$ term accordingly. This means that the maximum will be achieved if $\tilde D$ is the first $l$ columns of the identity matrix, and accordingly if $D$ is the first $l$ columns of $V$, i.e. the first $l$ eigenvectors. QED. (Of course this is not a unique solution: $D$ can be rotated/reflected by any $l\times l$ orthogonal matrix without changing the value of the trace.) This is very close to my answer in Why does PCA maximize total variance of the projection?
This reasoning follows @whuber's comment in that thread: [I]s it not intuitively obvious that given a collection of wallets of various amounts of cash (modeling the non-negative eigenvalues), and a fixed number $k$ that you can pick, that selecting the $k$ richest wallets will maximize your total cash? The proof that this intuition is correct is almost trivial: if you haven't taken the $k$ largest, then you can improve your sum by exchanging the smallest one you took for a larger amount.
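The wallet argument is easy to sanity-check numerically; the sketch below (variable names are mine) compares the eigenvector choice of $D$ against random orthonormal candidates:

```python
import numpy as np

# Check that Tr(D^T A D) over matrices D with l orthonormal columns is
# maximized by the top-l eigenvectors of A = X^T X.
rng = np.random.default_rng(0)
n, l = 6, 2
X = rng.standard_normal((20, n))
A = X.T @ X                                  # symmetric positive semi-definite

eigvals, V = np.linalg.eigh(A)               # eigenvalues in ascending order
D_best = V[:, -l:]                           # eigenvectors of the l largest
best = np.trace(D_best.T @ A @ D_best)
assert np.isclose(best, eigvals[-l:].sum())  # trace = sum of top-l eigenvalues

# no random orthonormal D beats the eigenvector choice
for _ in range(500):
    Q, _ = np.linalg.qr(rng.standard_normal((n, l)))
    assert np.trace(Q.T @ A @ Q) <= best + 1e-9
```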
Whether you scale the output of your DFT, forward or inverse, has nothing to do with convention or what is mathematically convenient. It has everything to do with the input to the DFT. Allow me to show some examples where scaling is either required or not required for both the forward and inverse transform. Must scale a forward transform by 1/N. To start ...

I can think of several reasons involving computational precision issues, but that probably would not do justice, because mathematically we're defining it the same way no matter what, and mathematics knows no precision issues. Here's my take on it. Let's conceptually think about what the DFT means in a signal processing sense, not just purely as a transform. In ...

The fast Fourier transform ($\textrm{FFT}$) algorithms are fast algorithms for computing the discrete Fourier transform ($\textrm{DFT}$). This is achieved by successive decomposition of the $N$-point $\textrm{DFT}$ into smaller-block $\textrm{DFT}$s, taking advantage of periodicity and symmetry. Now, the $N$-point $\textrm{DFT}$ of a sequence $\{x[0], x[...

You need Parseval's theorem. For the discrete-time Fourier transform (DTFT) you have the following relation: $$\sum_{n=-\infty}^{\infty}|x[n]|^2=\int_{-1/2}^{1/2}|X(f)|^2df\tag{1}$$ where $f$ is normalized by the sampling frequency. For the DFT you have $$\sum_{n=0}^{N-1}|x[n]|^2=\frac{1}{N}\sum_{k=0}^{N-1}|X[k]|^2$$ So due to Parseval's theorem it is ...

Actually, 3 different ways to put the scale factors are common in various FFT/IFFT implementations: 1.0 forward and 1.0/N back, 1.0/N forward and 1.0 back, and 1.0/sqrt(N) both forward and back. These 3 scaling variations all allow an IFFT(FFT(x)) round trip, using generic unscaled sin() and cos() trig functions for the twiddle factors, to ...

Check out this classic example from Oppenheim, A. V., & Lim, J. S. (1981). "The importance of phase in signals".
a) and b) are the original images, c) is the image created using the phase of a) with the magnitude of b), d) is the image created using the phase of b) and the magnitude of a). Phase carries most of the information in an image.

The correct way to group multiple bins together is to multiply each complex FFT bin output by its complex conjugate (which gives the bin power), then add all the bin powers together and divide by the number of bins in the group. If you want to display in dB (which is the conventional approach), then take 10*log10() of the result. Negative dB values are normal ...

Warning: $|e^{j\omega}|$ is equal to $1$ if and only if $\omega$ is a real number. More generally, for $z$ complex, $|e^{z}|=e^{\Re{z}}$. This really depends on your prior knowledge, because functions like exponentials, sines or cosines can be developed in different ways. Here, I'll assume you know about sines and cosines, as you refer to angular ...

All effects you see have to do with windowing. Your signal can be seen as a truncated (i.e., rectangularly windowed) sinusoid. If $s[n]$ is your signal, and $w[n]$ is the window, the signal you analyze is $$\tilde{s}[n]=w[n]\cdot s[n]\tag{1}$$ With the discrete-time Fourier transform (DTFT) defined by $$S(f)=\sum_{n=-\infty}^{\infty}s[n]e^{-j2\pi nf}\tag{...

If you want to code your own application that plots the magnitude response, you first need to extract the poles and zeros from your transfer function in the $Z$ domain. The process that follows can be either analytic or graphical. I will try to cover both, starting with the analytic approach, then graphical. Extracting the poles and zeros: taking your time ...

By Euler's formula, assuming $\omega$ is a real number: $$e^{j \omega} = \cos(\omega) + j \sin(\omega)$$ The definition of magnitude for a complex number $z = x + jy$ is: $$|z| = \sqrt{x^2 + y^2},$$ therefore: $$\left|e^{j \omega} \right| = \sqrt{\cos^2\omega + \sin^2\omega} = 1$$ by trigonometric identity.
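Two identities quoted above, Parseval's relation for the DFT and $|e^{j\omega}|=1$ for real $\omega$, can be sanity-checked with numpy's unscaled-forward convention (a sketch; the test signal is arbitrary):

```python
import numpy as np

# numpy's convention: no scaling on fft(), 1/N on ifft().
rng = np.random.default_rng(1)
x = rng.standard_normal(64)
X = np.fft.fft(x)

# Parseval for the DFT: sum |x[n]|^2 = (1/N) sum |X[k]|^2
assert np.isclose(np.sum(np.abs(x)**2), np.sum(np.abs(X)**2) / len(x))

# the round trip is exact (up to rounding) because ifft carries the 1/N
assert np.allclose(np.fft.ifft(X).real, x)

# |e^{jw}| = 1 for real w
w = rng.uniform(-10.0, 10.0, size=5)
assert np.allclose(np.abs(np.exp(1j * w)), 1.0)
```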
The quickest way would be

[b,a] = ellip(8,1.5,60,[.2 .3]);

This designs a bandpass from .2 to .3 with 1.5 dB ripple and 60 dB stop band attenuation. You don't get to pick the edges of the stop band, since you already pre-selected the order. You can do one or the other, but not both at the same time. However, the 8th order filter is more than enough to meet ...

Use the transformation $z = e^{j\omega}$, and you will get $$H(e^{j\omega}) = \frac{1-e^{-j\omega}}{5\left(1+2e^{-j\omega}\right)}$$ Solving this (using $e^{j\omega} = \cos \omega + j \sin \omega$), you should get something like (please double check): $$H(\omega) = \frac{\left(\cos\omega-1 \right) + j3\sin\omega}{5\left(5+4\cos\omega\right)}$$ Here, $$\text{...

Is this a well-known phenomenon? Yes, of course. You will see harmonics as soon as your clip point is lower than the maximum amplitude in the time domain. The latter is a function of the relative phases between the harmonic components. In your case the max amplitude is indeed 2.5 (plus whatever the noise adds). If you change the phases you will get a ...

The FFT magnitude, for non-zero sinusoidal-like signals, will only be the same, given a change in phase relative to the window, for signals that are exactly periodic in the window width (the non-zero padded portion of the FFT aperture).

It's derived using Euler's formula: http://en.wikipedia.org/wiki/Euler's_formula. You start with $$H(z) = \left(\frac{1}{1 - \frac{1}{5}z^{-1}}\right) \left(\frac{1}{1 + \frac{1}{2}z^{-1}}\right)$$ Then plug in $e^{-j\omega T}$ for $z^{-1}$ and you get $$H(z) = \left(\frac{1}{1 - \frac{1}{5}e^{-j\frac{\pi }{2}}}\right) \left(\frac{1}{1 + \frac{1}{2}e^{-j\frac{\pi }{2}}}\right)$$ Now, according to Euler ...
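The $H(e^{j\omega})$ algebra above explicitly asks for a double check; evaluating both forms numerically (a quick sketch) confirms the closed form, with the numerator's imaginary part coming out as $3\sin\omega$:

```python
import numpy as np

# H(z) = (1 - z^{-1}) / (5 (1 + 2 z^{-1})) evaluated on the unit circle,
# compared against the worked-out closed form.
w = np.linspace(-np.pi, np.pi, 201)
zinv = np.exp(-1j * w)

H_direct = (1 - zinv) / (5 * (1 + 2 * zinv))
H_closed = ((np.cos(w) - 1) + 3j * np.sin(w)) / (5 * (5 + 4 * np.cos(w)))
assert np.allclose(H_direct, H_closed)
```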
Many audio FFT visualizations use log(magnitude) instead of linear, as that makes it easier to find a scale that makes the data visible in a graph. The energy contained in a band that contains multiple FFT result bins would be the sum, not the max. If the "certain amount" of samples you grab for each FFT is a small fraction of those per 1/24 second, then you may ...

If the magnitude spectrum is symmetric, $$M(\omega)=M(-\omega)\tag{1}$$ (as I assume), then your system is real-valued. The phase response of a real-valued system is asymmetric: $$\phi(\omega)=-\phi(-\omega)\quad(\mod 2\pi)\tag{2}$$ This means that there can be two cases: the phase goes through zero at $\omega=0$, i.e. the phase is given by $\phi(\omega)=...

Some of the cues you want to look for are the following: a zero on the unit circle causes a zero in the magnitude response at the corresponding frequency ($z=1$ is DC, and $z=-1$ is Nyquist); for a zero at DC ($z=1$), the sum of all impulse response coefficients must be zero; a zero near the unit circle causes a dip in the magnitude response at the ...

I did a fair amount of frequency-content visualization in the past, although I don't have the code in front of me... Sorry I don't have more time, but I think this might be of help, so here's what I recall: first off, I'm not sure if this is part of what is giving you headaches, but since a real signal run through an FFT doesn't have a complex (phase) signal ...

For a rough sketch, you can eyeball or measure the distance of the poles and zeros to a point on the unit circle, multiply/divide to get a magnitude, and sum/difference the angles from the poles and zeros to that point to get a phase. A protractor and ruler might be useful. The angles and distances change more rapidly when a pole or zero is near the unit ...
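The bin-grouping recipe described earlier (conjugate multiply for power, sum, divide by the bin count, then dB) is only a few lines of numpy; the signal and group boundaries here are arbitrary examples:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(256)
X = np.fft.fft(x)

group = X[10:20]                          # ten adjacent bins (arbitrary choice)
powers = (group * np.conj(group)).real    # bin power = X * conj(X)
group_power = powers.sum() / len(group)   # sum the powers, divide by bin count
group_db = 10 * np.log10(group_power)     # conventional dB display

assert np.allclose(powers, np.abs(group)**2)   # same quantity, two spellings
```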
I tried running the same code, and it works fine, except for 2 minor points: it's preferable to use exp(1i*phase) instead of exp(sqrt(-1)*phase), and the final result will deviate from the original signal due to rounding noise (typically around 1e-16). Also, some of this noise will appear in the imaginary part of the result (so, even if y was real, yinv will be ...

An ideal low pass filter is supposed to attenuate/remove all the frequencies above the cutoff, but in practice filters have non-zero magnitude even for frequencies in their attenuation region; still, they are called low pass. If the magnitude of a filter is quite high for frequencies above cutoff, it might be called a bad lowpass filter, but it is still a low pass ...

In the image you gave, the right-hand scale is in decibels (dB). So, essentially, the square turns into an affine scaling in the logarithm domain, which essentially yields the same image, at least the same relative dynamic range. So, in that case, it does not matter. Aside, $x \to x^\alpha$ transformations with $x\ge 0$ (a positive signal, a magnitude ...

Actually we are also facing the same problem: instead of (Z,P,k), even if you use (A,B,C,D), as soon as sos comes in syntax with g, your magnitude response will be scaled by the attenuation factor. If you are concerned only with plotting the magnitude response, you can plot with freqz(sos); you will get the exact magnitude response without scaling.

It is obvious that abs(real(fft(x))) != abs(fft(x)), since you are removing the imaginary part when taking the real function. One way, if you want a signal with all DFT components real, is to take abs(fft(x)) as the spectrum itself rather than taking the real components. In this way you maintain the magnitude, and all the components have zero phase. ...

Audio for humans?
The human ear-brain system appears to be insensitive to absolute phase within stationary pitches, but more sensitive or responsive to differential phase (between the two ears), periodic or patterned changes in phase (pitch modulations), and perhaps phase coincidence within the spectrum to help characterize transient envelopes. Animals ...
It's my understanding that the colder liquids get (or anything else, for that matter), the slower the constituent particles move. That being the case, why is H$_2$O either 'water' or 'ice'? Given that temperature is continuous, why isn't there a continual physical change depending on temperature?

Water, like most liquids, does indeed get more viscous as its temperature approaches the freezing point. See the graph below, which I took from the "Engineering Toolbox". However, what's interesting about this curve is that it does not diverge as $T\to 0^\circ{\rm C}$. The reason is that a phase change really is, for all effective purposes, a discontinuous phenomenon: at $0+\epsilon^\circ{\rm C}$ the water molecules have enough kinetic energy to avoid binding to one another in a solid lattice; at $0-\epsilon^\circ{\rm C}$ they do not, and this argument holds for any $\epsilon>0$. Otherwise put: the freezing point is an energy threshold, in the same way that the behavior of a body in the neighborhood of the Earth is a similarly discontinuous function of the body's total energy. If the body's speed exceeds the escape velocity at a point by an arbitrarily small amount, the body will follow a hyperbolic path and leave the neighborhood forever; lower the speed to an arbitrarily small amount below escape velocity, and the path will be elliptic and the motion periodic. The path topologies change discontinuously from noncompact to compact.
I've recently started learning SR, and while the Lorentz transformation for space is pretty obvious, just the Galilean transformation combined with space contraction, I can't figure out the explanation for the $\frac{vx}{c^2}$ term in the time transformation. I understand it has something to do with the time needed for knowledge of the event at $x$ to reach the origin, that is, $\frac{x}{c}$, but I can't find an explanation for the $\frac{v}{c}$ term. I've tried many directions, drawing the frames at different times, drawing clocks, analyzing Einstein's train thought experiment, but I haven't found anything. I can follow the mathematical derivation, both the one from interval invariance with the assumption of a linear transformation and the one utilizing the space transformation and symmetry, but I want a more visual, intuitive explanation. Anyone got some insight?

The Lorentz transformation is given as \begin{align*} t' &= \gamma (t - \tfrac{v}{c^{2}}x), \\ x' &= \gamma (x - vt). \end{align*} The $vt$ term in the second equation has to do with the $t$-axis being tilted to the right. The $vx/c^{2}$ in the first equation has to do with the $x$-axis being tilted upward. Now, the change in the $x$-axis comes from the fact that the notion of simultaneity changes between different frames. In the $(t, x)$-frame, simultaneity is shown by the black horizontal $x$-axis, but in the $(t', x')$-frame, simultaneity is shown by the blue $x'$-axis. Contrast this to Galilean relativity: in Galilean transformations, simultaneity never changes, so you never tilt the $x$-axis, as shown here. You can get a clearer picture of relativity of simultaneity using exactly the same spacetime diagrams I'm presenting, and even relate them to the train thought experiments. The term $vx/c^2$ represents the apparent lack of synchronisation of moving clocks.
In other words, if $S$, $S'$ are frames moving relative to each other, then each of them sees the other's clocks to be unsynchronised, and this term gives the phase difference between them. This is most easily seen from the Lorentz transformations. Consider an event $A(t,x)$ in $S$. If the corresponding event occurs at a time $t'$ in $S'$, we have, by the inverse transformation, $$t=\gamma\left(t'+\frac{vx'}{c^2}\right).$$ Clearly, since $t$ is fixed, the value $t'$ depends on $x'$: the clocks at different locations will thus record different times, and there will be a corresponding phase difference between them.

Let us do the following thought experiment to see this more explicitly. Consider what happens when you try synchronising clocks and they start moving. If clocks $A,B$ are at rest, then they can be synchronised (say, set to zero) using a light beam reaching each of them from their midpoint simultaneously. Now, if the clocks start moving (i.e., you move to a frame where the clocks are moving), this notion of simultaneity is lost. Moreover, the distance $\Delta x$ between them will be Lorentz-contracted. The light signals emanating from the midpoint will reach $A,B$ at distinct times, and so, in the frame where the clocks are moving, the two clocks will be set to zero at different times: there is now a phase difference between them. It is simple to show that this difference in readings is in fact $v\Delta x/c^2$, i.e. this is in fact the term encoding this lack of synchronisation (use the fact that the distance between them is contracted, and that the speed of light is the same in this frame too).

As a final remark, note that this DOES NOT mean that the moving frame has no clear sense of time. The moving frame has its own set of synchronised clocks which are stationary in its frame, and it uses only those clocks to make measurements. See the section 'Failure of relativity' in Schutz for a geometric discussion of this and how this helps resolve apparent 'paradoxes'.
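The desynchronisation term can be illustrated numerically. In this minimal sketch (values arbitrary), two events simultaneous in $S$ but separated by $\Delta x$ receive distinct times in $S'$, offset by $\gamma v\,\Delta x/c^2$ (the extra $\gamma$ appears because $\Delta x$ here is the separation as measured in $S$):

```python
import math

c = 299_792_458.0          # speed of light, m/s
v = 0.6 * c                # relative speed of the frames (arbitrary)
gamma = 1 / math.sqrt(1 - (v / c) ** 2)

def to_primed(t, x):
    """Lorentz-transform an event (t, x) from S to (t', x')."""
    return gamma * (t - v * x / c**2), gamma * (x - v * t)

dx = 1000.0                          # clock separation in S, m
t1, _ = to_primed(0.0, 0.0)          # both events at t = 0 in S
t2, _ = to_primed(0.0, dx)

assert math.isclose(t1 - t2, gamma * v * dx / c**2)
```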
Actually this is pretty simple: the Bayes classifier chooses the class that has the greatest a posteriori probability of occurrence (so-called maximum a posteriori estimation). The 0-1 loss function penalizes misclassification, i.e. it assigns the smallest loss to the solution that has the greatest number of correct classifications. So in both cases we are talking about estimating the mode. Recall that the mode is the most common value in the dataset, or the most probable value, so both maximizing the posterior probability and minimizing the 0-1 loss leads to estimating the mode. If you need a formal proof, one is given in the Introduction to Bayesian Decision Theory paper by Angela J. Yu:

The 0-1 binary loss function has the following form: $$ l_\boldsymbol{x}(\hat s, s^*) = 1 - \delta_{\hat ss^*} = \begin{cases} 1 & \text{if} \quad \hat s \ne s^* \\ 0 & \text{otherwise} \end{cases} $$ where $\delta$ is the Kronecker delta function. (...) the expected loss is: $$ \begin{align} \mathcal{L}_\boldsymbol{x}(\hat s) &= \sum_{s^*} l_\boldsymbol{x}(\hat s, s^*) \; P(s = s^* \mid \boldsymbol{x}) \\ &= \sum_{s^*} (1 - \delta_{\hat ss^*}) \; P(s = s^* \mid \boldsymbol{x}) \\ &= \sum_{s^*} P(s = s^* \mid \boldsymbol{x}) - \sum_{s^*} \delta_{\hat ss^*} P(s = s^* \mid \boldsymbol{x}) \\ &= 1 - P(s = \hat s \mid \boldsymbol{x}) \end{align} $$

This is true for maximum a posteriori estimation in general. So if you know the posterior distribution, then assuming 0-1 loss, the optimal classification rule is to take the mode of the posterior distribution; we call this an optimal Bayes classifier. In real life we usually do not know the posterior distribution, but rather we estimate it. The naive Bayes classifier approximates the optimal classifier by looking at the empirical distribution and by assuming independence of predictors. So the naive Bayes classifier is not itself optimal, but it approximates the optimal solution. In your question you seem to confuse those two things.
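A tiny numerical illustration of the conclusion (the posterior values are made up): under 0-1 loss, the class minimizing expected loss is exactly the posterior mode.

```python
import numpy as np

posterior = np.array([0.5, 0.3, 0.2])   # P(s = s* | x) for three classes

# expected 0-1 loss of predicting class s_hat is 1 - P(s = s_hat | x)
expected_loss = 1 - posterior
s_hat = int(np.argmin(expected_loss))

assert s_hat == int(np.argmax(posterior))            # the posterior mode
assert np.isclose(expected_loss[s_hat], 1 - posterior.max())
```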
Proceedings of the Japan Academy, Series A, Mathematical Sciences, Volume 90, Number 8 (2014), 113-118.

Numerical Godeaux surfaces with an involution in positive characteristic

Abstract. A numerical Godeaux surface $X$ is a minimal surface of general type with $\chi(\mathcal{O}_{X})=K_{X}^{2}=1$. Over $\mathbf{C}$ such surfaces have $p_{g}(X)=h^{1}(\mathcal{O}_{X})=0$, but $p_{g}=h^{1}(\mathcal{O}_{X})=1$ also occurs in characteristic $p>0$. Keum and Lee [9] studied Godeaux surfaces over $\mathbf{C}$ with an involution, and these were classified by Calabri, Ciliberto, and Mendes Lopes [4]. In characteristic $p\ge 5$, we obtain the same bound $|\mathrm{Tors}\,X|\le 5$ as in characteristic 0, and we show that the quotient $X/\sigma$ of $X$ by its involution is rational, or is birational to an Enriques surface. Moreover, we give explicit examples in characteristic 5 of quintic hypersurfaces $Y$ with an action of each of the group schemes $G$ of order 5, having extra symmetry by $\mathrm{Aut}\,G\cong\mathbf{Z}/4\mathbf{Z}$, hence by the holomorph $H_{20}=\mathrm{Hol}\,G=G\rtimes\mathbf{Z}/4\mathbf{Z}$ of $G$.

Article information. First available in Project Euclid: 3 October 2014. Permanent link: https://projecteuclid.org/euclid.pja/1412341994. DOI: 10.3792/pjaa.90.113. MathSciNet: MR3266744. Zentralblatt MATH: 1338.14042. Subjects: Primary 14J29 (surfaces of general type).

Citation: Kim, Soonyoung. Numerical Godeaux surfaces with an involution in positive characteristic. Proc. Japan Acad. Ser. A Math. Sci. 90 (2014), no. 8, 113-118. doi:10.3792/pjaa.90.113.
The orbital speed of a body, generally a planet, a natural satellite, an artificial satellite, or a multiple star, is the speed at which it orbits around the barycenter of a system, usually around a more massive body. It can be used to refer to either the mean orbital speed, i.e. the average speed as it completes an orbit, or the speed at a particular point in its orbit, such as periapsis. The orbital speed at any position in the orbit can be computed from the distance to the central body at that position and the specific orbital energy, which is independent of position: the kinetic energy is the total energy minus the potential energy.

Radial trajectories

In the case of radial motion:

Transverse orbital speed

The transverse orbital speed is inversely proportional to the distance to the central body because of the law of conservation of angular momentum, or equivalently, Kepler's second law. This states that as a body moves around its orbit during a fixed amount of time, the line from the barycenter to the body sweeps a constant area of the orbital plane, regardless of which part of its orbit the body traces during that period of time. [1] This law implies that the body moves slower near its apoapsis than near its periapsis, because at the greater distance it needs to trace only a shorter arc to sweep the same area.

Mean orbital speed

For orbits with small eccentricity, the length of the orbit is close to that of a circular one, and the mean orbital speed can be approximated either from observations of the orbital period and the semimajor axis of its orbit, or from knowledge of the masses of the two bodies and the semimajor axis.
$$v_o \approx \frac{2 \pi a}{T} \qquad\qquad v_o \approx \sqrt{\frac{\mu}{a}}$$

where $v_o$ is the orbital velocity, $a$ is the length of the semimajor axis, $T$ is the orbital period, and $\mu = GM$ is the standard gravitational parameter. Note that this is only an approximation that holds true when the orbiting body is of considerably lesser mass than the central one, and the eccentricity is close to zero.

Taking into account the mass of the orbiting body,

$$v_o \approx \sqrt{\frac{G (m_1 + m_2)}{r}}$$

where $m_1$ is the mass of the orbiting body, $m_2$ is the mass of the body being orbited, $r$ is specifically the distance between the two bodies (which is the sum of the distances from each to the center of mass), and $G$ is the gravitational constant. This is still a simplified version; it doesn't allow for elliptical orbits, but it does at least allow for bodies of similar masses.

When one of the masses is almost negligible compared to the other, as is the case for Earth and the Sun, one can approximate the previous formula to get

$$v_o \approx \sqrt{\frac{GM}{r}}$$

or, assuming $r$ equal to the body's radius,

$$v_o \approx \frac{v_e}{\sqrt{2}}$$

where $M$ is the (greater) mass around which this negligible mass or body is orbiting, and $v_e$ is the escape velocity.

For an object in an eccentric orbit orbiting a much larger body, the length of the orbit decreases with orbital eccentricity $e$, and is an ellipse. This can be used to obtain a more accurate estimate of the average orbital speed:

$$v_o = \frac{2\pi a}{T}\left[1-\frac{1}{4}e^2-\frac{3}{64}e^4 -\frac{5}{256}e^6 -\frac{175}{16384}e^8 - \dots \right]$$ [2]

The mean orbital speed decreases with eccentricity.
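As a quick check of the two approximations, here is a sketch with standard values for the Sun-Earth system; agreement is only approximate because Earth's eccentricity is small but nonzero:

```python
import math

mu = 1.327e20          # GM of the Sun, m^3 s^-2
a = 1.496e11           # Earth's semimajor axis, m
T = 365.25 * 86400.0   # orbital period, s

v_from_period = 2 * math.pi * a / T   # from period and semimajor axis
v_from_mu = math.sqrt(mu / a)         # from the gravitational parameter

# both land near Earth's mean orbital speed of ~29.8 km/s
assert abs(v_from_period - v_from_mu) / v_from_mu < 0.01
```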
Precise orbital speed

For the precise orbital speed of a body at any given point in its trajectory, both the mean distance and the precise distance are taken into account:

$$v = \sqrt{\mu \left(\frac{2}{r} - \frac{1}{a}\right)}$$

where $\mu$ is the standard gravitational parameter, $r$ is the distance at which the speed is to be calculated, and $a$ is the length of the semi-major axis of the elliptical orbit. For the Earth at perihelion,

$$v = \sqrt{1.327 \times 10^{20}~\mathrm{m^3\,s^{-2}} \cdot \left(\frac{2}{1.471 \times 10^{11}~\mathrm{m}} - \frac{1}{1.496 \times 10^{11}~\mathrm{m}}\right)} \approx 30{,}300~\mathrm{m/s}$$

which is slightly faster than Earth's average orbital speed of 29,800 m/s, as expected from Kepler's second law.

Tangential velocities at altitude

orbit | center-to-center distance | altitude above the Earth's surface | speed | orbital period | specific orbital energy
Standing on Earth's surface at the equator (for comparison; not an orbit) | 6,378 km | 0 km | 465.1 m/s (1,040 mph) | 1 day (24 h) | −62.6 MJ/kg
Orbiting at Earth's surface (equator) | 6,378 km | 0 km | 7.9 km/s (17,672 mph) | 1 h 24 min 18 sec | −31.2 MJ/kg
Low Earth orbit | 6,600 to 8,400 km | 200 to 2,000 km | circular orbit: 6.9 to 7.8 km/s (15,430 to 17,450 mph); elliptic orbit: 6.5 to 8.2 km/s | 1 h 29 min to 2 h 8 min | −29.8 MJ/kg
Molniya orbit | 6,900 to 46,300 km | 500 to 39,900 km | 1.5 to 10.0 km/s (3,335 to 22,370 mph) | 11 h 58 min | −4.7 MJ/kg
Geostationary | 42,000 km | 35,786 km | 3.1 km/s (6,935 mph) | 23 h 56 min | −4.6 MJ/kg
Orbit of the Moon | 363,000 to 406,000 km | 357,000 to 399,000 km | 0.97 to 1.08 km/s (2,170 to 2,416 mph) | 27.3 days | −0.5 MJ/kg

References

^
^ Horst Stöcker; John W. Harris (1998). Handbook of Mathematics and Computational Science. Springer. p. 386.

This article was sourced from Creative Commons Attribution-ShareAlike License; additional terms may apply.
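The perihelion figure follows directly from the vis-viva expression above; a one-line check with the same numbers:

```python
import math

mu = 1.327e20        # standard gravitational parameter of the Sun, m^3 s^-2
r_peri = 1.471e11    # Earth's perihelion distance, m
a = 1.496e11         # Earth's semimajor axis, m

v_peri = math.sqrt(mu * (2 / r_peri - 1 / a))   # vis-viva equation
# ~30,300 m/s, above the ~29,800 m/s mean, as stated in the text
assert 30000 < v_peri < 30600
```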
An Artificial Intelligence for the 2048 game

A discussion about possible algorithms which solve the 2048 game arose on StackOverflow. The main discussed algorithms are:

The solution I propose is very simple and easy to implement. Nonetheless, it has reached a score of 131040. Several benchmarks of the algorithm's performance are presented.

The assumption on which my algorithm is based is rather simple: if you want to achieve a higher score, the board must be kept as tidy as possible. In particular, the optimal setup is given by a linear and monotonically decreasing order of the tile values. This intuition also gives you an upper bound for a tile value: $2^{n} \rightarrow 2^{16} = 65536$, where $n$ is the number of tiles on the board. (There is a possibility of reaching the 131072 tile if the 4-tile is randomly generated instead of the 2-tile when needed.)

Two possible ways of organizing the board are shown in the following images.

To enforce the ordering of the tiles in a monotonically decreasing order, the score is computed as the sum of the linearized values on the board multiplied by the values of a geometric sequence with common ratio $r<1$. Several linear paths can be evaluated at once; the final score will be the maximum score of any path.

An implementation of minimax or expectiminimax will surely improve the algorithm. Obviously, a more sophisticated decision rule will slow down the algorithm and will require some time to be implemented. I will try a minimax implementation in the near future.
(stay tuned)

https://github.com/ov3y/2048-AI (ovolve): a minimax approach which takes several heuristic functions into account. I tested it out, and it turned out to have some problems reaching the 4096 tile in a reliable way.

https://github.com/nneonneo/2048-ai (nneonneo): a very efficient implementation of an expectiminimax approach with a quite simple heuristic. AFAIK he has reached the highest score of 173364, but I have not tried it yet.

For a path $p_n \in Path_{0 \cdots N-1}$, the heuristic score is

$$score = \sum_{n=0}^{N-1} value(p_n) \cdot r^n$$

The decision rule implemented is not very smart; the code in Python is presented here:
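As a rough reconstruction of the geometric-weight scoring (my own sketch, not the original snippet, using a toy 2x2 board; `snake` is a hypothetical linearization path):

```python
def path_score(board, path, r):
    """Sum of board values along `path`, weighted by the geometric sequence r^n."""
    return sum(board[cell] * r**n for n, cell in enumerate(path))

def heuristic(board, paths, r=0.25):
    """Final score is the maximum path score over all candidate paths."""
    return max(path_score(board, p, r) for p in paths)

# toy 2x2 board (cells indexed 0..3): a tidy, monotonically decreasing
# layout along the path outscores a scrambled one with the same tiles
snake = [0, 1, 3, 2]
tidy = [8, 4, 1, 2]      # decreasing when read along `snake`
messy = [1, 8, 2, 4]
assert heuristic(tidy, [snake]) > heuristic(messy, [snake])
```

A real board would use 16 cells and several snake-like paths, as in the images above.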
Obviously a more sophisticated decision rule will slow down the algorithm and it will require some time to be implemented.I will try a minimax implementation in the near future. (stay tuned) Benchmark T1 - 121 tests - 8 different paths - $r = 0.125$ T2 - 122 tests - 8-different paths - $r = 0.25$ T3 - 132 tests - 8-different paths - $r = 0.5$ T4 - 211 tests - 2-different paths - $r = 0.125$ T5 - 274 tests - 2-different paths - $r = 0.25$ T6 - 211 tests - 2-different paths - $r = 0.5$ In case of T2, four tests in ten generate the 4096 tile with an average score of $\sim 42000$ Code You can upvote my answer on StackOverflow here :)
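The path-scoring heuristic above can be sketched in a few lines. This is a hypothetical reconstruction: the board layout, path encoding, and function names are my own assumptions, not the post's original snippet.

```python
# Hypothetical sketch of the geometric-sequence board score described above.
# `board` is a 4x4 grid of tile values; a path is a list of (row, col)
# coordinates giving one linearization of the board.

def path_score(board, path, r):
    """score = sum_n value(p_n) * r**n for one linearization of the board."""
    return sum(board[i][j] * r ** n for n, (i, j) in enumerate(path))

def board_score(board, paths, r=0.25):
    """Several linear paths are evaluated at once; keep the maximum."""
    return max(path_score(board, p, r) for p in paths)

# A "snake" path that sweeps the rows, alternating direction:
snake = [(i, j if i % 2 == 0 else 3 - j) for i in range(4) for j in range(4)]

board = [[16, 8, 4, 2],
         [0, 0, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
print(board_score(board, [snake]))  # monotonically ordered boards score high
```

A full decision rule would then simulate each of the four moves and pick the one whose resulting board maximizes this score.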
I am a little confused as to what the magnitude of acceleration is and what it means.

Your question is kind of vague but I will try to respond. Acceleration is defined as the time rate of change of velocity. Since velocity has both magnitude and direction, so does acceleration. In other words, acceleration is a vector. The length of the vector is its magnitude. Its direction is the direction of the vector. So the magnitude of acceleration is the magnitude of the acceleration vector, while the direction of the acceleration is the direction of the acceleration vector. This is, of course, true of all physical quantities defined as having a magnitude and a direction. As an example, if a car is traveling north and accelerating at a rate of 10 feet per second per second, then the magnitude of the acceleration is 10 feet per second per second and the direction of the acceleration is north. If the car were traveling south but accelerating at the same rate, then the magnitude of its acceleration vector would be the same but its direction would be south.

Acceleration is simply a rate of change of velocity. So the magnitude tells you how quickly velocity changes. If you are talking about linear motion, then the magnitude of acceleration is simply a measurement of change in speed per unit time. As an example, say you are in a car starting from rest and you begin to speed up. Say that you reach a speed of $20 {m \over s}$ in $2$ seconds. This means the magnitude of your acceleration is: $$ a = {20 {m \over s} \over 2s} = 10 {m \over s^2}$$ That is, your speed changed by $20 {m \over s}$ every $2$ seconds, or $10 {m \over s}$ every second. Thus, when we talk about the magnitude of acceleration, we are talking about how quickly your speed changes in a given unit of time. It is important to note that this is only the magnitude of acceleration. Acceleration is a vector, meaning it has both magnitude and direction.
Therefore, the magnitude only describes part of any accelerated motion. Also, as is pointed out in a comment below, a more precise definition of acceleration is needed when talking about nonlinear motion.

In the context of linear motion (as BMS correctly points out in a comment of a different answer), the magnitude of acceleration is a measure of how much speed you are gaining per second. The difference with the acceleration vector is that the vector form also encapsulates the direction in which this gain in speed is happening. So, as an example, an acceleration magnitude of $2$ $m/s^2$ means that your speed is $2$ $m/s$ higher every second. Therefore, if my starting speed is $0$ $m/s$, my speed is $2$ $m/s$ after 1 second, $4$ $m/s$ after 2 seconds, $6$ $m/s$ after 3 seconds, and so on...

When a particle moves along a prescribed path, with tangent vector $\hat{e}(t)$ and normal vector $\hat{n}(t)$, then the velocity and acceleration vectors are decomposed as such: $$ \vec{v} = v(t) \hat{e}(t) $$ $$ \vec{a} = \dot{v}(t) \hat{e}(t) + \frac{v(t)^2}{\rho(t)} \hat{n}(t) $$ which is interpreted as follows. The magnitude of the velocity vector is the speed along the path. The direction of the velocity vector is tangent to the path. The magnitude of the acceleration vector along the path is the time rate of change of speed. The magnitude of the acceleration vector normal to the path is the centripetal acceleration as it goes around the instantaneous radius of curvature $\rho(t)$. The combined magnitude is the combination of the above and does not have a direct interpretation. See https://physics.stackexchange.com/a/99570/392 for more details. Note that item 3 forms a screw vector field, but item 4 does not.

A. Putting it physically

The magnitude of acceleration tells you how much the rest of the world is affecting a particle's state of motion. One curious characteristic of our universe is that it has a natural state of motion.
If a particle is left alone, such a particle will: move along a straight line; its linear velocity won't change, i.e., the rate of change of its velocity along that straight line will remain 0 (zero). This is known as Newton's first law, or Galileo's law of inertia. So, every time the rest of the world messes (the actual word is interacts) with a particle, it will cause either or both of these conditions to change. Now, any time these conditions change, the magnitude of acceleration will change as well, because

Magnitude of acceleration = rate of change in the magnitude of velocity + rate of changing the direction of motion

The rate of change in the magnitude of velocity is known as linear acceleration (let it be $a_{\text{linear}}$), and the rate of changing the direction of motion is known as centripetal acceleration (let it be $a_{\text{curve}}$).

B. Putting it mathematically:

$$a_{\text{linear}} = \frac{v_2 - v_1}{t_2 - t_1}$$ $$a_{\text{curve}} = \frac{\theta_2 - \theta_1}{t_2 - t_1}$$

where $v$ is the magnitude of the velocity, $t$ is the time, and $\theta$ is the angle between the first and the second direction. Quantities with subscript 2 represent final states, and those with subscript 1 represent initial states. In terms of calculus, these equations should be written as $$a_{\text{linear}} = \frac{dv}{dt}$$ $$a_{\text{curve}} = \frac{d\theta}{dt}$$

When you are asked to "find" or "use" the "magnitude of the acceleration," what it means is that you need not be concerned with its direction, just its value! For example: a boat on a river is traveling at 4 m/s, and there is a cross wind pushing it at 3 m/s. What is the resultant magnitude of the velocity of the boat? Answer: $V = (4^2 + 3^2)^{1/2} = 5$ m/s. The same applies to acceleration.

Magnitude refers to size or quantity alone. When it comes to movement, magnitude refers to the speed at which an object is traveling or its size.
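The decompositions above combine a "speed-change" part and a "direction-change" part. A minimal numeric sketch of the tangential/normal form (all values are illustrative, not from the answers):

```python
import math

# Total acceleration magnitude from the tangential part (rate of change of
# speed) and the normal/centripetal part v**2 / rho. Values are made up.

def acceleration_magnitude(dv_dt, v, rho):
    a_tangential = dv_dt       # time rate of change of speed along the path
    a_normal = v ** 2 / rho    # centripetal term, rho = radius of curvature
    return math.hypot(a_tangential, a_normal)

# Gaining 3 m/s of speed per second while rounding a 100 m curve at 20 m/s:
print(acceleration_magnitude(3.0, 20.0, 100.0))  # hypot(3, 4) = 5.0 m/s^2
```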
In physics, magnitude is the size of a physical object, a property by which the object can be compared as larger or smaller than other objects of the same kind. More formally, an object's magnitude is an ordering (or ranking) of the class of objects to which it belongs.

Acceleration is basically: (final velocity - initial velocity) / change in time.
Acoustic Topology Optimization with Thermoviscous Losses

Today, guest blogger René Christensen of GN Hearing discusses including thermoviscous losses in the topology optimization of microacoustic devices.

Topology optimization helps engineers design applications in an optimized manner with respect to certain a priori objectives. Mainly used in structural mechanics, topology optimization is also used for thermal, electromagnetics, and acoustics applications. One physics that was missing from this list until last year is microacoustics. This blog post describes a new method for including thermoviscous losses in microacoustics topology optimization.

Standard Acoustic Topology Optimization

A previous blog post on acoustic topology optimization outlined the introductory theory and gave a couple of examples. The description of the acoustics was the standard Helmholtz wave equation. With this formulation, we can perform topology optimization for many different applications, such as loudspeaker cabinets, waveguides, room interiors, reflector arrangements, and similar large-scale geometries. The governing equation is the standard wave equation with material parameters given in terms of the density $\rho$ and the bulk modulus $K$. For topology optimization, the density and the bulk modulus are interpolated via a variable, $\epsilon$. This interpolation variable ideally takes binary values: 0 represents air and 1 represents a solid. During the optimization procedure, however, its value follows an interpolation scheme, such as a solid isotropic material with penalization (SIMP) model, as shown in Figure 1.

Figure 1: The density and bulk modulus interpolation for standard acoustic topology optimization. The units have been omitted to have both values in the same plot.

This approach works for applications where the so-called thermoviscous losses (close to walls in the acoustic boundary layers) are of little importance.
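The SIMP interpolation in Figure 1 can be sketched numerically. The penalization exponent and the material values below are illustrative assumptions, not the post's actual curves:

```python
# Generic SIMP-style interpolation between air (eps = 0) and solid (eps = 1)
# material parameters, as in Figure 1. The exponent p and the material
# values are illustrative assumptions, not data from the post.

RHO_AIR, K_AIR = 1.2, 1.4e5        # density [kg/m^3], bulk modulus [Pa]
RHO_SOLID, K_SOLID = 2700.0, 7e10  # aluminum-like solid values

def simp(eps, lo, hi, p=3):
    """Interpolate between lo (eps = 0) and hi (eps = 1), penalization p."""
    return lo + eps ** p * (hi - lo)

def material(eps):
    return simp(eps, RHO_AIR, RHO_SOLID), simp(eps, K_AIR, K_SOLID)

print(material(0.0))  # pure air
print(material(1.0))  # pure solid
```

The penalization exponent pushes intermediate values of the interpolation variable toward the air/solid extremes, which is why optimized designs tend to end up (nearly) binary.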
The optimization domain can be coupled to narrow regions described by, for example, a homogenized model (this is the Narrow Region Acoustics feature in the Pressure Acoustics, Frequency Domain interface). However, if the narrow regions where the thermoviscous losses occur change shape themselves, this procedure is no longer valid. An example is when the cross section of a waveguide changes shape. Thermoviscous Acoustics (Microacoustics) For microacoustic applications, such as hearing aids, mobile phones, and certain metamaterial geometries, the acoustic formulation typically needs to include the so-called thermoviscous losses explicitly. This is because the main losses occur in the acoustic boundary layer near walls. Figure 2 below illustrates these effects. Figure 2: The volume field is the acoustic pressure, the surface field is the temperature variation, and the arrows indicate the velocity. An acoustic wave travels from the bottom to the top of a tube with a circular cross section. The pressure is shown in a ¾-revolution plot. The arrows indicate the particle velocity at this particular frequency. Near the boundary, the velocity is low and tends to zero on the boundary, whereas in the bulk, it takes on the velocity expected from standard acoustics via Euler’s equation. At the boundary, the velocity is zero because of viscosity, since the air “sticks” to the boundary. Adjacent particles are slowed down, which leads to an overall loss in energy, or rather a conversion from acoustic to thermal energy (viscous dissipation due to shear). In the bulk, however, the molecules move freely. Governing Equations of Thermoviscous Acoustics Modeling microacoustics in detail, including the losses associated with the acoustic boundary layers, requires solving the set of linearized Navier-Stokes equations with quiescent conditions. 
These equations are implemented in the Thermoviscous Acoustics physics interfaces available in the Acoustics Module add-on to the COMSOL Multiphysics® software. However, this formulation is not suited for topology optimization, where certain assumptions can be used instead. A formulation based on a Helmholtz decomposition is presented in Ref. 1. The formulation is valid in many microacoustic applications and allows decoupling of the thermal, viscous, and compressible (pressure) waves. An approximate, yet accurate, expression (Ref. 1) links the velocity and the pressure gradient, where the viscous field $\Psi_{v}$ is a scalar, nondimensional field that describes the variation between bulk conditions and boundary conditions.

In the figure above, the surface color plot shows the acoustic temperature variation. The variation on the boundary is zero due to the high thermal conductivity in the solid wall, whereas in the bulk, the temperature variation can be calculated via the isentropic energy equation. Again, the relationship between temperature variation and acoustic pressure can be written in a general form (Ref. 1), where the thermal field $\Psi_{h}$ is a scalar, nondimensional field that describes the variation between bulk conditions and boundary conditions. As will be shown later, these viscous and thermal fields are essential for setting up the topology optimization scheme.

Topology Optimization for Thermoviscous Acoustics Applications

For thermoviscous acoustics, there is no established interpolation scheme, as opposed to standard acoustics topology optimization. Since there is no one-equation system that accurately describes the thermoviscous physics (typically, it requires three governing equations), there are no obvious variables to interpolate. However, I will describe a novel procedure in this section. For simplicity, we look only at wave propagation in a waveguide of constant cross section.
This is equivalent to the so-called Low Reduced Frequency model, which may be known to those working with microacoustics. The viscous field can be calculated (Ref. 1) via Equation (1), where $\Delta_{cd}$ is the Laplacian in the cross-sectional direction only. For certain simple geometries, the fields can be calculated analytically (as done in the Narrow Region Acoustics feature in the Pressure Acoustics, Frequency Domain interface). However, when used for topology optimization, they must be calculated numerically for each step in the optimization procedure.

In standard acoustics topology optimization, an interpolation variable varies between 0 and 1, where 0 represents air and 1 represents a solid. To have a similar interpolation scheme for the thermoviscoacoustic topology optimization, I came up with a heuristic approach, where the thermal and viscous fields are used in the interpolation strategy. The two typical boundary conditions for the viscous field (Ref. 1) give us insight into how to perform the optimization procedure, since an air-solid interface could be represented by the former boundary condition and an air-air interface by the latter. We write the governing equation in a more general manner: we already know that for air domains, $(a_v, f_v) = (1,1)$, since that gives us the original Equation (1). If we instead set $a_v$ to a large value so that the gradient term becomes insignificant, and set $f_v$ to zero, we get an equation that corresponds exactly to the boundary condition for no-slip boundaries, just as at a solid-air interface, but obtained via the governing equation. We need this property, since we have no way of applying explicit boundary conditions during the optimization. So, for solids, $(a_v, f_v)$ should have the values ("large", 0). Thus, we have established our interpolation extremes. I carried out a comparison between the explicit boundary conditions and the interpolation extremes, with the test geometry shown in Figure 3.
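The extremes $(a_v, f_v) = (1, 1)$ for air and ("large", 0) for solids can be bridged by a RAMP-style interpolation, as mentioned later in the post. The exact scheme is not given, so the formula and the "large" value below are illustrative assumptions:

```python
# Sketch of interpolating the governing-equation coefficients (a_v, f_v)
# between the air extreme (1, 1) and the solid extreme ("large", 0).
# The RAMP-style formula and the value A_LARGE are illustrative assumptions.

A_LARGE = 1e7  # "large" value that suppresses the gradient term in solids

def ramp(eps, q=3.0):
    """RAMP interpolation: maps 0 -> 0 and 1 -> 1 with tunable curvature q."""
    return eps / (1.0 + q * (1.0 - eps))

def coefficients(eps):
    t = ramp(eps)
    a_v = 1.0 + t * (A_LARGE - 1.0)   # 1 in air, "large" in solid
    f_v = 1.0 - t                     # 1 in air, 0 in solid
    return a_v, f_v

print(coefficients(0.0))  # air extreme
print(coefficients(1.0))  # solid extreme
```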
On the left side, boundary conditions are used, whereas in the adjacent domains on the right, the suggested values of $a_v$ and $f_v$ are input.

Figure 3: On the left, standard boundary conditions are applied. On the right, black domains indicate a modified field equation that mimics a solid boundary. White domains are air.

The field in all domains is now calculated for a frequency with a boundary layer thick enough to visually take up some of the domain. It can be seen that the field is symmetric, which means that the extreme field values can describe either air or a solid. In a sense, that is comparable to using the actual corresponding boundary conditions.

Figure 4: The resulting field with contours for the setup in Figure 3.

The actual interpolation between the extremes is done via SIMP or RAMP schemes (Ref. 2), for example, as with the standard acoustic topology optimization. The viscous field, as well as the thermal field, can be linked to the acoustic pressure variable via equations. With this, the world's first acoustic topology optimization scheme that incorporates accurate thermoviscous losses has come to fruition.

Optimizing an Acoustic Loss Response

Here, we give an example that shows how the optimization method can be used for a practical case. A tube with a hexagonally shaped cross section has a certain acoustic loss due to viscosity effects. Each side length in the hexagon is approximately 1.1 mm, which gives an area equivalent to a circular area with a radius of 1 mm. Between 100 and 1000 Hz, this acoustic loss increases by a factor of approximately 2.6, as shown in Figure 7. Now, we seek to find an optimal topology so that we obtain a flatter acoustic loss response in this frequency range, with no regard to the actual loss value. The resulting geometry looks like this:

Figure 5: The topology for a maximally flat acoustic loss response and resulting viscous field at 1000 Hz.
A simpler geometry that resembles the optimized topology was created, where explicit boundary conditions can be applied.

Figure 6: A simplified representation of the optimized topology, with the viscous field at 1000 Hz.

The normalized acoustic loss for the initial hexagonal geometry and the topology-optimized geometry are compared in Figure 7. For each tube, the loss is normalized to the value at 100 Hz.

Figure 7: The acoustic loss normalized to the value at 100 Hz for the initial cross section (dashed) and the topology-optimized geometry (solid), respectively.

For the optimized topology, the acoustic loss at 1000 Hz is only 1.5 times higher than at 100 Hz, compared to the 2.6 times for the initial geometry. The overall loss is larger for the optimized geometry, but as mentioned before, we do not consider this in the example. This novel topology optimization strategy can be expanded to a more general 1D method, where pressure can be used directly in the objective function. A topology optimization scheme for general 3D geometries has also been established, but its implementation is still ongoing. It would be very advantageous for those of us working with microacoustics to focus on improving topology optimization, in both universities and industry. I hope to see many advances in this area in the future.

References

1. W.R. Kampinga, Y.H. Wijnant, A. de Boer, "An Efficient Finite Element Model for Viscothermal Acoustics," Acta Acustica united with Acustica, vol. 97, pp. 618–631, 2011.
2. M.P. Bendsoe, O. Sigmund, Topology Optimization: Theory, Methods, and Applications, Springer, 2003.

About the Guest Author

René Christensen has been working in the field of vibroacoustics for more than a decade, both as a consultant (iCapture ApS) and as an engineer in the hearing aid industry (Oticon A/S, GN Hearing A/S). He has a special interest in the modeling of viscothermal effects in microacoustics, which was also the topic of his PhD.
René joined the hardware platform R&D acoustics team at GN Hearing as a senior acoustic engineer in 2015. In this role, he works with the design and optimization of hearing aids.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
When we say "there is no voltage drop in an ideal wire", what we really mean is "In many circuits, a wire may be approximated as having no voltage drop because the drop across the wire is insignificant compared to the drops across other elements in the circuit." In your circuit, there is no other element aside from the source, so this approximation is invalid. Therefore you must treat your wire as a low-valued resistor rather than an ideal wire with no voltage drop. Then, for any one electron, isn't potential - with whatever reference - dependent on its POSITION in the wire? Yes, the potential in a resistor depends on the position within the resistor. To be as clear as I know how: In your circuit the potential does vary with the position along the wire just as you say it should. In this circuit you cannot say the wire defines a node with equal potential at all points. The wire acts as a low-valued resistor and the potential is indeed different at different points in the wire. And - what does it mean for voltage to 'drop' across a resistor - and WHY does it drop - in terms of ELECTRIC FIELDS? Electrostatic potential difference between points $a$ and $b$ is defined as $$V_{ab} = -\int_b^a \vec{E}\cdot{\rm d}\vec{\ell},$$ where $\ell$ is some path between the points. As you correctly stated, when a source is applied to a resistor, a field is developed in the resistor, which exerts a force on charged particles, causing some of them to move, producing a current. Since there is a field in the resistor, a potential difference is developed between its two terminals. Edit In comments you asked, I'm asking why do we need a resistor at all for there to be a voltage drop? You don't. If you just have a battery sitting on a table top with nothing connected to it, there will be a potential difference (aka "voltage drop") between its two terminals. 
Why, then, with an 'ideal wire' and a resistor, does the potential at a point in the wire not vary with distance from the terminals of the battery? In this situation, you no longer have uniform field strength along the path from one battery terminal to the other. The E-field in the wire will be much weaker than it is in the resistor. If the field in the wire were stronger, a large current would flow in the wire (because it has very low resistivity). But this current would also have to flow through the resistor (because of conservation of charge), which would require a large potential across the resistor. Things naturally balance out so that most of the potential drop is across the resistor. And very often that means the drop across the wire is negligible and we can model it as an ideal wire.
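The balancing argument amounts to a series voltage divider: the same current flows through the wire and the resistor, so the source voltage splits in proportion to resistance. A minimal sketch with made-up values:

```python
# Treating the wire as a small series resistance, the source voltage divides
# in proportion to resistance, so almost all of the drop lands on the
# resistor. Component values are illustrative.

def series_drops(v_source, r_wire, r_load):
    i = v_source / (r_wire + r_load)   # same current through both elements
    return i * r_wire, i * r_load      # (drop across wire, drop across load)

v_wire, v_load = series_drops(9.0, 0.01, 1000.0)
print(v_wire, v_load)  # the wire's share of the 9 V is negligible
```

With the load removed (the battery shorted by the wire alone), the same formula shows the entire source voltage appearing across the wire, which is why the wire can no longer be idealized in that case.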
If you use the operators $\{+,-,\times,/\}$ (i.e., you don't include the power operator), then all of your problems are likely decidable.

Testing equality with zero

For instance, let's consider $L = \mathbb{Z} \cup \{\pi\}$. Then you can treat $\pi$ as a formal symbol, so that each leaf is a polynomial in $\mathbb{Z}[\pi]$ (e.g., the integer $5$ is the constant polynomial $5$; $\pi$ is the polynomial $\pi+0$ of degree 1). Now you can express the tree as a rational polynomial over $\mathbb{Z}$, with $\pi$ as the formal unknown. Suppose this polynomial is $p(\pi)/q(\pi)$. Test whether $p(\pi)$ is the zero polynomial (degree $-\infty$). If it is not the zero polynomial, then the expression is not equal to zero. If $p(\pi)$ is the zero polynomial, and $q(\pi)$ is not the zero polynomial, then the expression is equal to zero. The correctness of this procedure follows from the fact that $\pi$ is transcendental.

What's the complexity of this procedure? The answer depends on the computational model. Let's assume that each operator takes constant time to evaluate (regardless of the size of the operands). Then the complexity depends on the size of the resulting polynomials. The degree of the polynomial can grow exponentially with the depth of the tree, so if you build the polynomial recursively and express it explicitly (in coefficient form), the running time will be at most exponential in the depth of the tree. Fortunately, the degree grows at most linearly in the number of leaves in the tree, so the running time of a deterministic algorithm is linear in the size of the tree. Therefore, assuming a straightforward representation of the tree and a simplistic computational model, this gives you a linear time algorithm for zero testing when the operators are $\{+,-,\times,/\}$.

This procedure works not just for $L=\mathbb{Z} \cup \{\pi\}$, but also for $\mathbb{N} \cup \{\pi\}$ and $\mathbb{Q} \cup \{\pi\}$.
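The deterministic zero test above can be sketched directly: treat $\pi$ as a formal unknown, evaluate the tree with exact polynomial arithmetic, and check the numerator. The tuple-based expression encoding is my own choice, not part of the answer:

```python
from fractions import Fraction

# Sketch of the zero test for L = Z ∪ {pi}: evaluate the expression tree as
# a rational function p(pi)/q(pi), with polynomials stored as coefficient
# lists, then check whether p is the zero polynomial. Expressions are nested
# tuples ('op', left, right), integer leaves, or the symbol 'pi'.

def padd(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def pmul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1) if p and q else []
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def as_rational(expr):
    """Return (num, den) coefficient lists in the formal unknown pi."""
    if expr == 'pi':
        return [Fraction(0), Fraction(1)], [Fraction(1)]
    if isinstance(expr, int):
        return [Fraction(expr)], [Fraction(1)]
    op, a, b = expr
    (pa, qa), (pb, qb) = as_rational(a), as_rational(b)
    if op == '+':
        return padd(pmul(pa, qb), pmul(pb, qa)), pmul(qa, qb)
    if op == '-':
        return padd(pmul(pa, qb), pmul([Fraction(-1)], pmul(pb, qa))), pmul(qa, qb)
    if op == '*':
        return pmul(pa, pb), pmul(qa, qb)
    if op == '/':
        return pmul(pa, qb), pmul(qa, pb)

def is_zero(expr):
    num, den = as_rational(expr)
    assert any(c != 0 for c in den), "denominator is identically zero"
    return all(c == 0 for c in num)

# (pi*pi - 1) / (pi + 1) - (pi - 1) is identically zero, by algebra:
expr = ('-', ('/', ('-', ('*', 'pi', 'pi'), 1), ('+', 'pi', 1)),
             ('-', 'pi', 1))
print(is_zero(expr))  # True
```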
The same procedure also works for $L=\mathbb{Q} \cup \{\pi,e\}$, if we can assume a reasonable conjecture: that $\pi$ and $e$ are algebraically independent. It is not known whether this conjecture is correct, but it seems likely. Anyway, here's the approach. We treat the polynomial as a multivariate polynomial over two unknowns $\pi,e$ instead of one unknown, but everything carries over as before, given the algebraic independence of $\pi$ and $e$. It also works for $L=\mathbb{Z} \cup \{\pi,e\}$ and $L=\mathbb{N} \cup \{\pi,e\}$, too, again, assuming the conjecture. If you wanted to get fancy, you could use randomized algorithms for polynomial identity testing. If $L = \mathbb{Z} \cup \{\pi\}$, they'll amount to the following: choose a random prime $r$ and a random integer $s_{\pi} \in \{0,\dots,r-1\}$; replace each instance of $\pi$ with $s_{\pi}$; and then check whether the resulting expression evaluates to $0 \bmod r$. (If you have both $\pi$ and $e$, you'll pick two random integers $s_{\pi}$ and $s_e$.) You can repeat this test multiple times. If this procedure ever gives you something non-zero (modulo $r$), then the original expression is certainly non-zero. If it always gives you zero (modulo $r$), then with high probability the original expression is equal to zero. This may be more efficient in some computational models (e.g., where the time to evaluate a single operator is dependent on the size of the operands). Sign comparison You can also find the sign of the expression using similar procedures (again, assuming you have excluded the ^ operator, and again, assuming that $\pi$ and $e$ are algebraically independent). Evaluate the expression as a rational polynomial $p(\pi,e)/q(\pi,e)$ over $\mathbb{Q}[\pi,e]$. Assume you have determined that $p(\pi,e)/q(\pi,e) \ne 0$ and $q(\pi,e) \ne 0$. You want to know whether $p(\pi,e)/q(\pi,e) > 0$ or not. Here is one approach. Note that $p(\pi,e)/q(\pi,e) > 0$ iff $p(\pi,e) \cdot q(\pi,e) > 0$. 
Therefore, we can form a new polynomial $r(\pi,e) = p(\pi,e) \cdot q(\pi,e)$ and reduce this to the problem of evaluating the sign of $r(\pi,e)$. Basically, we need to evaluate the sign of a rational polynomial in $\pi$ and $e$, which we know evaluates to something non-zero. One approach is to compute $\pi$ and $e$ to $k$ bits of precision, and then evaluate $r(\pi,e)$ accordingly, gaining lower and upper bounds on $r(\pi,e)$. If 0 is included within this interval, double $k$, until either the lower bound is strictly positive or the upper bound is strictly negative.

What's the complexity of this approach? If $|r(\pi,e)|$ evaluates to a value $\epsilon$, then I think the running time will be polynomial in the size of the input and in $\lg 1/\epsilon$. There may be a better algorithm, but this is the best I can come up with right now.

Conclusion

The takeaway is that the power operator (^) is the real source of difficulty. Without the power operator, all of your problems can be solved without too much difficulty (assuming a reasonable conjecture).
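The precision-doubling sign test can be sketched for the simple monotone case $r(\pi) = \pi - q$ with rational $q$. Here $\pi$ is bracketed with Machin's formula using exact Fraction arithmetic; the doubling of series terms stands in for doubling the bit precision $k$:

```python
from fractions import Fraction

# Sketch of the precision-doubling sign test for r(pi) = pi - q, q rational
# and assumed != pi. The number of series terms doubles until the bracket
# around pi excludes q.

def arctan_inv(x, terms):
    """Alternating series for arctan(1/x); returns (value, error_bound)."""
    s = Fraction(0)
    for k in range(terms):
        s += Fraction((-1) ** k, (2 * k + 1) * x ** (2 * k + 1))
    err = Fraction(1, (2 * terms + 1) * x ** (2 * terms + 1))
    return s, err

def pi_bracket(terms):
    a, ea = arctan_inv(5, terms)
    b, eb = arctan_inv(239, terms)
    pi_est = 16 * a - 4 * b        # Machin: pi = 16 atan(1/5) - 4 atan(1/239)
    err = 16 * ea + 4 * eb
    return pi_est - err, pi_est + err

def sign_of_pi_minus(q):
    """Sign of pi - q for rational q, doubling precision until decided."""
    terms = 2
    while True:
        lo, hi = pi_bracket(terms)
        if lo - q > 0:
            return 1
        if hi - q < 0:
            return -1
        terms *= 2                 # bracket still contains q: refine

print(sign_of_pi_minus(Fraction(355, 113)))  # 355/113 > pi, so -1
```

A general $r(\pi, e)$ would additionally need interval arithmetic through the whole expression, but the termination argument is the same: since $r$ is known to be non-zero, a sufficiently tight bracket eventually decides the sign.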
If I am trying to model the dynamics of a double pendulum (on a horizontal plane without the effects of gravity), in which the second angle is constrained to range between values of [-10 deg, 10 deg], how would I derive the equations of motion? I'm having trouble identifying whether I would use some method involving solving the Lagrangian with holonomic or non-holonomic constraints.

The derivation of the double pendulum system is conceptually simple but tedious because of the derivatives involved. The first step is to tackle the problem from a geometric perspective. My drawing of the double pendulum is shown in the following picture: From the preceding figure, we can write down some equations. The derivatives of these equations are needed subsequently; therefore, we compute them as well. The independent time variable is omitted for the sake of simplicity. $$ \begin{array}{ll} x_1 = a_1 \sin\theta, & \qquad y_1 = -a_1 \cos\theta, \\ \dot{x}_1 = a_1 \dot{\theta} \cos\theta, & \qquad \dot{y}_1 = a_1 \dot{\theta} \sin\theta, \\ x_2 = x_1 + a_2 \sin\phi, & \qquad y_2 = y_1 - a_2 \cos\phi, \\ \dot{x}_2 = \dot{x}_1 + a_2 \dot{\phi} \cos\phi, & \qquad \dot{y}_2 = \dot{y}_1 + a_2 \dot{\phi} \sin\phi \end{array} $$ To derive the dynamics equations of the system, I will use the Lagrangian approach. We need the kinetic and potential energies of the masses. Let's start off with the kinetic energy.
The kinetic energy of $m_1$ is computed as follows: $$ \begin{align} \mathcal{K}_1 &= \frac{1}{2}m_1 v^2_1 \\ &= \frac{1}{2}m_1 (\dot{x}^{2}_1+\dot{y}^{2}_1) \\ &= \frac{1}{2}m_1 (a^{2}_1 \dot{\theta}^{2} \cos^{2}\theta+a^{2}_1 \dot{\theta}^{2} \sin^{2}\theta) \\ &= \frac{1}{2}m_1 [a^{2}_1 \dot{\theta}^{2} (\cos^{2}\theta+ \sin^{2}\theta)] \\ &= \frac{1}{2}m_1 a^{2}_1 \dot{\theta}^{2} \end{align} $$ The kinetic energy of $m_2$ is computed as follows: $$ \begin{align} \mathcal{K}_2 &= \frac{1}{2}m_2 v^2_2 \\ &= \frac{1}{2}m_2 (\dot{x}^{2}_2+\dot{y}^{2}_2) \\ &= \frac{1}{2}m_2 [(\dot{x}_1 + a_2 \dot{\phi} \cos\phi)^{2}+(\dot{y}_1 + a_2 \dot{\phi} \sin\phi)^{2}] \\ %=============================================== &= \frac{1}{2}m_2 [(\dot{x}^{2}_1 + 2a_2 \dot{\phi} \cos\phi \dot{x}_1 + a^{2}_2 \dot{\phi}^{2} \cos^{2}\phi) + \\ & \qquad \ \ \ \quad (\dot{y}^{2}_1 + 2a_2 \dot{\phi} \sin\phi \dot{y}_1 + a^{2}_2 \dot{\phi}^{2} \sin^{2}\phi)] \\ %=============================================== &= \frac{1}{2}m_2 [(a^{2}_1 \dot{\theta}^{2} \cos^{2}\theta + 2a_2 \dot{\phi} \cos\phi a_1 \dot{\theta} \cos\theta + a^{2}_2 \dot{\phi}^{2} \cos^{2}\phi) + \\ & \qquad \ \ \ \quad (a^{2}_1 \dot{\theta}^{2} \sin^{2}\theta + 2a_2 \dot{\phi} \sin\phi a_1 \dot{\theta} \sin\theta + a^{2}_2 \dot{\phi}^{2} \sin^{2}\phi)] \\ %=============================================== &= \frac{1}{2}m_2 [a^{2}_1 \dot{\theta}^{2} [\cos^{2}\theta+\sin^{2}\theta] + 2a_2 a_1 \dot{\phi} \dot{\theta}[ \cos\theta \cos\phi + \sin\theta \sin\phi ] + \\ & \qquad \qquad a^{2}_2 \dot{\phi}^{2}[ \cos^{2}\phi+\sin^{2}\phi] ] \\ %=============================================== &= \frac{1}{2}m_2 [a^{2}_1 \dot{\theta}^{2} + 2a_2 a_1 \dot{\phi} \dot{\theta} \cos(\theta-\phi) + a^{2}_2 \dot{\phi}^{2} ] \end{align} $$ The kinetic energy of the whole system is then: $$ \begin{align} \mathcal{K} &= \mathcal{K}_1 + \mathcal{K}_2 \\ &= \frac{1}{2}m_1 a^{2}_1 \dot{\theta}^{2} + \frac{1}{2}m_2 [a^{2}_1 \dot{\theta}^{2} + 2a_2 a_1 
\dot{\phi} \dot{\theta} \cos(\theta-\phi) + a^{2}_2 \dot{\phi}^{2} ] \end{align} $$ The second step is to compute the potential energy of the whole system. We start off with the potential energy of $m_1$, which is computed as follows: $$ \begin{align} \mathcal{P}_1 &= m_1 y_1 g \\ &= m_1(-a_1 \cos\theta)g \\ &= -a_1 m_1 g \cos\theta \end{align} $$ The potential energy of $m_2$ is computed as follows: $$ \begin{align} \mathcal{P}_2 &= m_2 y_2 g \\ &= m_2(y_1 - a_2 \cos\phi)g \\ &= m_2(-a_1 \cos\theta - a_2 \cos\phi)g \\ &= -a_1 m_2 g \cos\theta - a_2 m_2 g \cos\phi \end{align} $$ The potential energy of the whole system is then: $$ \begin{align} \mathcal{P} &= \mathcal{P}_1 + \mathcal{P}_2 \\ &= -a_1 m_1 g \cos\theta -a_1 m_2 g \cos\theta - a_2 m_2 g \cos\phi \end{align} $$ To construct the Lagrangian, we take the difference between the kinetic $\mathcal{K}$ and potential $\mathcal{P}$ energies of the whole system, hence: $$ \begin{align} \mathcal{L} &= \mathcal{K} - \mathcal{P} \end{align} $$ At this moment, we can compute the dynamics equations of the double pendulum system. Remember, this system has two degrees of freedom; therefore, we need two differential equations to describe the motion of the system. Let's start off with the first equation (i.e., the equation for $\theta$). This can be done by determining the following quantities: $$ \frac{\partial \mathcal{L}}{\partial \dot{\theta} }, \quad \frac{d}{dt} \left( \frac{\partial \mathcal{L}}{\partial \dot{\theta} } \right) \quad \text{and} \quad \frac{\partial \mathcal{L}}{\partial \theta}. $$ For the second dynamics equation, we do the following: $$ \frac{\partial \mathcal{L}}{\partial \dot{\phi} }, \quad \frac{d}{dt} \left( \frac{\partial \mathcal{L}}{\partial \dot{\phi} } \right) \quad \text{and} \quad \frac{\partial \mathcal{L}}{\partial \phi}. $$ I will leave the rest as an exercise. Once you come up with the dynamics equations, you can use some controllers to limit the range of the second pendulum.
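The derived energies can be transcribed numerically as a sanity check. This is a direct translation of the $\mathcal{K}$ and $\mathcal{P}$ expressions above; the parameter values are illustrative:

```python
import math

# Numerical transcription of the kinetic and potential energies derived
# above. Masses, rod lengths, and g are illustrative values.

m1, m2 = 1.0, 1.0      # masses [kg]
a1, a2 = 1.0, 0.5      # rod lengths [m]
g = 9.81               # gravitational acceleration [m/s^2]

def kinetic(theta, phi, theta_dot, phi_dot):
    k1 = 0.5 * m1 * a1 ** 2 * theta_dot ** 2
    k2 = 0.5 * m2 * (a1 ** 2 * theta_dot ** 2
                     + 2 * a1 * a2 * theta_dot * phi_dot * math.cos(theta - phi)
                     + a2 ** 2 * phi_dot ** 2)
    return k1 + k2

def potential(theta, phi):
    return (-a1 * m1 * g * math.cos(theta)
            - a1 * m2 * g * math.cos(theta)
            - a2 * m2 * g * math.cos(phi))

def lagrangian(theta, phi, theta_dot, phi_dot):
    return kinetic(theta, phi, theta_dot, phi_dot) - potential(theta, phi)

# Hanging at rest: K = 0 and P is at its minimum value.
print(kinetic(0, 0, 0, 0), potential(0, 0))
```

From here, the two Euler-Lagrange equations would be obtained by differentiating `lagrangian` symbolically (by hand, or with a CAS) with respect to each coordinate and its rate.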
I don't think you need to worry about the constraint. You can derive the nonlinear equation using the Lagrangian. Once you have the nonlinear model, you will have to linearize it at the equilibrium point, which is only valid for angles around 0 degrees. This condition is much more strict than your mechanical limitations, so you don't need to consider the limitations you have mentioned. And I assume you will be modeling the friction as a viscous damper, which is a linear model. This assumption makes the system holonomic. Here I assume that you will use a linear model to control the system. However, if you want to use nonlinear control then you will probably need to worry about the constraints. First, it is important to understand the double pendulum problem as a hybrid system. To solve these kinds of tasks, a combination of a finite state machine plus mathematical equations is the best way. On page 26 of On-Line Symbolic Constraint Embedding for Simulation of Hybrid Dynamical Systems an automaton is given which consists of three modes: double-pendulum, slider-crank and stuck. Additional source code in Matlab is not given, but this paper is the way to go. Source code for a simpler version (the single pendulum problem) is the following:

def fsmstart(self):
    # strategy: if the pole is near 180 degrees, move in the other direction
    for i in range(10000):
        a1 = self.angPole()
        pygame.time.wait(50)
        a2 = self.angPole()
        middle = 160 < a1 < 200
        direction = "left"
        if a2 < a1:
            direction = "right"
        if middle:
            print(direction)
            if direction == "right":
                self.setspeed(0, 0.7)   # move left
            if direction == "left":
                self.setspeed(0, -0.7)  # move right
            pygame.time.wait(500)
            self.setspeed(0, 0)

def angPole(self):
    return self.getangle(4)
The Annals of Probability Ann. Probab. Volume 41, Number 1 (2013), 294-328. Exact thresholds for Ising–Gibbs samplers on general graphs Abstract We establish tight results for rapid mixing of Gibbs samplers for the ferromagnetic Ising model on general graphs. We show that if \[(d-1)\tanh\beta<1,\] then there exists a constant $C$ such that the discrete-time mixing time of Gibbs samplers for the ferromagnetic Ising model on any graph of $n$ vertices and maximal degree $d$, where all interactions are bounded by $\beta$ and external fields are arbitrary, is bounded by $Cn\log n$. Moreover, the spectral gap is uniformly bounded away from $0$ for all such graphs, as well as for infinite graphs of maximal degree $d$. We further show that when $d\tanh\beta<1$, with high probability over the Erdős–Rényi random graph $G(n,d/n)$, it holds that the mixing time of Gibbs samplers is \[n^{1+\Theta({1}/{\log\log n})}.\] Both results are tight, as it is known that the mixing time for random regular and Erdős–Rényi random graphs is, with high probability, exponential in $n$ when $(d-1)\tanh\beta>1$, and $d\tanh\beta>1$, respectively. To our knowledge our results give the first tight sufficient conditions for rapid mixing of spin systems on general graphs. Moreover, our results are the first rigorous results establishing exact thresholds for dynamics on random graphs in terms of spatial thresholds on trees. Article information Source Ann. Probab., Volume 41, Number 1 (2013), 294-328. Dates First available in Project Euclid: 23 January 2013 Permanent link to this document https://projecteuclid.org/euclid.aop/1358951988 Digital Object Identifier doi:10.1214/11-AOP737 Mathematical Reviews number (MathSciNet) MR3059200 Zentralblatt MATH identifier 1270.60113 Subjects Primary: 60K35: Interacting random processes; statistical mechanics type models; percolation theory [See also 82B43, 82C43] Secondary: 82B20: Lattice systems (Ising, dimer, Potts, etc.)
and systems on graphs 82C20: Dynamic lattice systems (kinetic Ising, etc.) and systems on graphs Citation Mossel, Elchanan; Sly, Allan. Exact thresholds for Ising–Gibbs samplers on general graphs. Ann. Probab. 41 (2013), no. 1, 294--328. doi:10.1214/11-AOP737. https://projecteuclid.org/euclid.aop/1358951988
In Harel's book on PDL (Propositional Dynamic Logic) I've learnt that Kripke frames are pairs of the form K = ($K$, $m_K$) where $K$ is a set of states and $m_K$ is a meaning function assigning a subset of $K$ to each atomic proposition and a binary relation on $K$ to each atomic program: $m_K(p) \subseteq K$, where $p$ is an atomic formula; $m_K(\alpha) \subseteq K^2$, where $\alpha$ is an atomic program. By definition, we have: $$m_K(\phi \to \psi) = (K - m_K(\phi)) \ \cup \ m_K(\psi)$$ $$m_K(0) = \emptyset$$ $$m_K([\alpha]\psi) = K - (m_K(\alpha) \circ (K - m_K(\psi)))$$ $$m_K(\alpha;\beta) = m_K(\alpha) \circ m_K(\beta)$$ $$m_K(\alpha \cup \beta) = m_K(\alpha) \cup m_K(\beta)$$ $$m_K(\alpha^*) = m_K(\alpha)^* = \bigcup_{n \ge 0} m_K(\alpha)^n$$ $$m_K(\psi?) = \{ (u,u) \mid u \in m_K(\psi) \}$$ The previous definition can be extended by induction to all formulas and programs. As Noah Schweber said below: "The point is that the definition of "nonstandard Kripke frame" is the same as the definition of "standard Kripke frame," except that one condition is weakened: rather than having the property that $m_N(\alpha^*)$ is the reflexive transitive closure of $m_N(\alpha)$ (which is required of standard Kripke frames), a nonstandard Kripke frame merely needs to satisfy the weaker condition." How can I show that all finite non-standard Kripke frames are standard?
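The combinatorial heart of the finite case is that, over a finite state set, the reflexive transitive closure is reached after finitely many relational compositions, so the minimal candidate for $m(\alpha^*)$ and the standard one coincide. An illustrative sketch (mine, not from Harel's book):

```python
# Over a finite state set, iterating composition reaches a fixed point,
# so the least relation containing the identity and R and closed under
# composition is exactly the reflexive transitive closure of R.
def compose(R, S):
    return {(u, w) for (u, v) in R for (v2, w) in S if v == v2}

def rtc(states, R):
    """Reflexive transitive closure of R on a finite set of states."""
    closure = {(u, u) for u in states} | set(R)
    while True:
        bigger = closure | compose(closure, closure)
        if bigger == closure:
            return closure
        closure = bigger

states = {0, 1, 2}
R = {(0, 1), (1, 2)}
assert rtc(states, R) == {(0, 0), (1, 1), (2, 2), (0, 1), (1, 2), (0, 2)}
print("fixed point reached")
```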
Why did scientists choose to go with the sine wave to represent alternating current and not other waveforms like the triangle and square? What advantage does the sine wave offer over other waveforms in representing current and voltage? Circular motion produces a sine wave naturally. It's just a very natural and fundamental thing to do, and trying to produce waveforms that are different is either more complicated or leads to unwanted side effects. Up and down motion (in nature) produces a sine wave against time. Cosine and sine waves (actually their constituents in the form of complex exponentials) are the eigenfunctions of linear, time-invariant systems, having a time-dependent system response of $$\begin{align}f\bigl(a(t)+b(t),t_0\bigr)&= f\bigl(a(t),t_0\bigr)+f\bigl(b(t),t_0\bigr)&&\text{linearity}\\ f\bigl(a(t+h),t_0\bigr)&=f\bigl(a(t),t_0+h\bigr)&&\text{time invariance}\end{align}$$ If you build any network from linear passive components (resistors, inductors, capacitors on this StackExchange) and feed it with a continuous sinusoidal signal, then any point in the network will deliver a continuous sinusoidal signal of possibly different phase and magnitude. No other waveform shape is generally preserved, since the response will be different for different input frequencies: if you decompose some input into its sinusoidal components of unique frequency, check the individual responses of the network to those, and reassemble the resulting sinusoidal signals, the result will generally not have the same relations between its sinusoidal components as the original. So Fourier analysis is pretty important: passive networks respond straightforwardly to sinusoidal signals, so decomposing everything into sinusoids and back is an important tool for analyzing circuitry.
Things oscillate according to sine and cosine. Mechanical, electrical, acoustical, you name it. Hang a mass on a spring and it will bounce up and down at its resonant frequency according to the sine function. An LC circuit will behave the same way, just with currents and voltages instead of velocity and force. A sine wave consists of a single frequency component, and other waveforms can be built up by adding up multiple different sine waves. You can see the frequency components in a signal by looking at it on a spectrum analyzer. Since a spectrum analyzer sweeps a narrow filter over the frequency range you're looking at, you will see a peak at each frequency that the signal contains. For a sine wave, you will see 1 peak. For a square wave, you will see peaks at f, 3f, 5f, 7f, etc. Sine and cosine are also the projection of things that rotate. Take an AC generator, for example. An AC generator spins a magnet around next to a coil of wire. As the magnet rotates, the field that impinges upon the coil due to the magnet will vary according to the sine of the shaft angle, generating a voltage across the coil that is also proportional to the sine function. In a more mathematical and physical sense, the reason sine and cosine happen to be the fundamentals of waves has its roots in the Pythagorean theorem and calculus. The Pythagorean theorem gave us this gem, with sines and cosines: $$ \sin^2(t) + \cos^2(t) = 1, \ t \in \mathbb{R} $$ This makes sines and cosines cancel each other out in the inverse-square laws scattered around the entire physics world. And with calculus we have this: $$ \frac{\mathrm{d}}{\mathrm{d}x}\sin x = \cos x $$ $$ \frac{\mathrm{d}}{\mathrm{d}x}\cos x = -\sin x $$ This means that any calculus operation preserves sines and cosines when exactly one of them is present.
For example, when we solve for the instantaneous position of an object under Hooke's law (a similar form appears everywhere too) we have this: $$ -kx = F = m\frac{\mathrm{d}^2}{\mathrm{d}t^2}x $$ And the solution happens to be a linear combination of \$\sin(\omega t)\$ and \$\cos(\omega t)\$ with \$\omega = \sqrt{k/m}\$. Scientists did not choose the sine wave; that's what they got from an AC generator. In an AC generator, a sine wave is generated by the rotor's motion inside a magnetic field. There is no easy way to make it otherwise. See this figure in Wikipedia: http://en.wikipedia.org/wiki/Single-phase_generator#Revolving_armature Sine waves contain only one frequency. A square or triangle wave is a sum of an infinite number of sine waves that are harmonics of the fundamental frequency. The derivative of a perfect square wave (zero rise/fall time) is infinite when it changes from low to high or vice versa. The derivative of a perfect triangle wave is infinite at the top and bottom. One practical consequence of this is that it is harder to transfer a square/triangle signal, say over a cable, than a signal that is only a sine wave. Another consequence is that a square wave tends to generate much more radiated noise than a sine wave. Because it contains a lot of harmonics, those harmonics may radiate. A typical example is the clock to an SDRAM on a PCB. If not routed with care it will generate a lot of radiated emission. This may cause failures in EMC testing. A sine wave may also radiate, but then only the sine wave frequency would radiate out. First of all, the sine and cosine functions are uniformly continuous (so there are no discontinuous points anywhere in their domain) and infinitely differentiable on the entire real line. They are also easily computed by means of a Taylor series expansion. These properties are especially useful in defining the Fourier series expansion of periodic functions on the real line.
So non-sinusoidal waveforms such as the square, sawtooth, and triangle waves can be represented as an infinite sum of sine functions. Ergo, the sine wave forms the basis of harmonic analysis and is the most mathematically simple waveform to describe. We always like to work with linear mathematical models of physical realities because of their simplicity. Sinusoidal functions are 'eigenfunctions' of linear systems. This means that if the input is \$ \sin(t) \$ the output is of the form \$ A\cdot\sin(t + \phi) \$ The function stays the same and is only scaled in amplitude and shifted in time. This gives us a good idea what happens to the signal if it propagates through the system. Sine and cosine are solutions of second-order linear differential equations: \$\sin' = \cos\$, \$\cos' = -\sin\$. Basic electronic elements such as inductors and capacitors produce either an integration or a differentiation of current to voltage. By decomposing arbitrary signals into sine waves, the differential equations can be analysed easily. One way to look at it, in a nutshell, is that a harmonic series of sine and cosine functions forms an orthogonal basis of a linear vector space of real-valued functions on a finite time interval. Thus a function on a time interval can be represented as a linear combination of harmonically related sine and cosine functions. Of course you could use some other set of functions (e.g. particular wavelets) as long as they'd form a valid basis set, and decompose the function of interest that way. Sometimes such decompositions may be useful, but so far we only know of specialized applications for them. Taking a geometrical analogy: you could use a non-orthogonal basis to describe the components of a vector. For example, a vector in an orthonormal basis may have components of [1,8,-4]. In some other, non-orthonormal basis, it may have components of [21,-43,12].
Whether this set of components is easier or harder to interpret than the usual orthonormal basis depends on what you're trying to do.
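The "one peak for a sine wave, peaks at f, 3f, 5f for a square wave" picture described above is easy to reproduce numerically. A sketch with NumPy (the sample rate and frequencies are arbitrary illustrative choices):

```python
import numpy as np

# One second of a 10 Hz sine and the corresponding square wave,
# sampled at 1 kHz.
fs, f0, n = 1000, 10, 1000
t = np.arange(n) / fs
sine = np.sin(2 * np.pi * f0 * t)
square = np.sign(sine)

def dominant_freqs(x, count):
    """Frequencies of the `count` largest FFT magnitude peaks."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return sorted(float(f) for f in freqs[np.argsort(spec)[-count:]])

print(dominant_freqs(sine, 1))     # [10.0]: a single component
print(dominant_freqs(square, 3))   # [10.0, 30.0, 50.0]: odd harmonics
```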
The question is the same as asking when are the solutions to the Schrodinger equation of the form $\psi(x)=A\sin(kx)$ or $\psi(x)=Ae^{\kappa x}$. Assume $\psi(x)=A\sin(kx)$ is your solution with $k$ constant but an unknown $V(x)$ and plug into the Schrodinger equation\begin{align}-\frac{\hbar^2}{2m}\frac{d^2}{dx^2}A\sin(kx) + V(x)A\sin(kx)&=EA\sin(kx)\, , \\\frac{\hbar^2k^2}{2m}A\sin(kx)+V(x)A\sin(kx)&=EA\sin(kx) \tag{1}\, .\end{align}Since (1) must hold for every $x$, and assuming $A\sin(kx)\ne 0$ for some $x$, we can cancel the common factor $A\sin(kx)$ for this $x$ and are left with: $$V(x)=E-\frac{\hbar^2k^2}{2m}\, .$$In other words, the potential is constant: $V(x)=V_0$. Reorganizing, we get$$\frac{\hbar^2k^2}{2m}=E-V_0\, .$$Since the left-hand side is necessarily positive, the solution $\psi(x)=A\sin(kx)$ can only occur when $E-V_0>0.$ The same manipulations repeated with $\psi(x)=Ae^{\kappa x}$ yield$$V(x)=E+\frac{\hbar^2\kappa ^2}{2m}\, ,$$also showing the potential is constant. This time, however, one concludes that$$\frac{\hbar^2\kappa ^2}{2m}=V_0-E$$so that this solution can only occur when the potential is greater than the energy.
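The same manipulation can be done symbolically; a sketch with sympy:

```python
import sympy as sp

# Assume psi = A sin(kx) and solve the time-independent
# Schrodinger equation for V(x), as in the derivation above.
x, k, A, hbar, m, E = sp.symbols('x k A hbar m E', positive=True)
psi = A * sp.sin(k * x)

V = sp.simplify(E + hbar**2 / (2 * m) * psi.diff(x, 2) / psi)
print(V)                     # a constant, E - hbar^2 k^2 / (2m)

assert V.diff(x) == 0        # no x left: the potential must be constant
```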
My question is about the following paper: In section 2 they show Equation 3 (which is just an optimization problem), which they transform into the optimization problem right before section 3. Everything is very easy to read and self-contained. However, I do not understand how they can tell so easily that the constraints in the convex program are tight (which allows them to transform the program to the one before section 3). Are they relying on some result that they are not stating, or is this a very simple argument from optimization? An edit, following a request to rewrite the question in a self-contained way. Let $w \in R^d$ and $y_i \in R^d$ and $A_i$ be a finite set of vectors in $R^d$. Consider the following optimization problem: $\min_{w,\xi} ||w||_2^2 + \sum_i \xi_i$ under the constraints that $\forall i$ we have $w^T y_i + \xi_i \ge \max_{y \in A_i} \left( w^T y \right)$ Why is it that in any solution of this optimization problem the constraints are tight, so that we can reformulate the optimization problem by replacing $\xi_i$ with $\max_{y \in A_i} \left(w^T y \right) - w^T y_i$ and get rid of the constraints?
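It is the simple optimization argument: for fixed $w$ the objective is strictly increasing in each $\xi_i$, and the constraints only bound the $\xi_i$ from below, so any slack can be removed while strictly improving the objective. A toy numerical illustration (random data; the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 5
w = rng.normal(size=d)
ys = rng.normal(size=(n, d))          # the y_i
As = rng.normal(size=(n, 4, d))       # each A_i holds 4 vectors

def objective(xi):
    return w @ w + xi.sum()

# Lower bound on each xi_i imposed by the constraint
bound = np.array([(A @ w).max() - y @ w for A, y in zip(As, ys)])

xi_tight = bound                                   # constraints hold with equality
xi_slack = bound + rng.uniform(0.1, 1.0, size=n)   # still feasible, but slack

assert objective(xi_tight) < objective(xi_slack)
print("slack is never optimal")
```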
Night Side Maneuvers We can minimize night light pollution, and advance perigee against light-pressure orbit distortion, by turning the thinsat as we approach eclipse. The overall goal is to perform 1 complete rotation of the thinsat per orbit, with it perpendicular to the sun on the day side of the earth, but turning it by varying amounts on the night side. Another advantage of the turn is that if thinsat maneuverability is destroyed by radiation or a collision on the night side, it will come out of the night side with a slow tumble that won't be corrected. The passive radar signature of the tumble will help identify the destroyed thinsat to other thinsats in the array, allowing another sacrificial thinsat to perform a "rendezvous and de-orbit". If the destroyed thinsat is in shards, the shards will tumble. The tumbling shards (or a continuously tumbling thinsat) will eventually fall out of the normal orbit, no longer get $J_2$ correction, and the thinsat orbit will "eccentrify", decay, and reenter. This is the fail-safe way the arrays will reenter, if all active control ceases. Maneuvering thrust and satellite power Neglecting tides, the synodic angular velocity of the m288 orbit is $\omega = 4.3633\times10^{-4}$ rad/s = 0.025°/s. The angular acceleration of a thinsat is $13.056\times10^{-6}$ rad/s² = $7.481\times10^{-4}$ °/s² with a sun angle of 0°, and $3.740\times10^{-4}$ °/s² at a sun angle of 60°. Because of tidal forces, a thinsat entering eclipse will start to turn towards sideways alignment with the center of the earth; it will come out of eclipse at a different velocity and angle than it went in with.
If the thinsat is rotating at $\omega$ and either tangential or perpendicular to the gravity vector, it will not turn while it passes into eclipse. Otherwise, the tidal acceleration is $\ddot\theta = \frac{3}{2} \omega^2 \sin 2\delta$ where $\delta$ is the angle to the tangent of the orbit. If we enter eclipse with the thinsat not turning, and oriented directly to the sun, then $\delta = 30°$. Three Strategies and a Worst Case Failure Mode There are many ways to orient thinsats in the night sky, with tradeoffs between light power harvest, light pollution, and orbit eccentricity. If we reduce power harvest, we will need to launch more thinsats to compensate, which makes more problems if the system fails. I will present three strategies for light harvest and night light pollution. The actual strategies chosen will be a blend of those. Tumbling If things go very wrong, thinsats will be out of control and tumbling. In the long term, the uncontrolled thinsats will probably orient flat to the orbital plane, and reflect very little light into the night sky, but in the short term (less than decades), they will be oriented in all directions. This is equivalent to mapping the reflective area of front and back ($2\pi R^2$) onto a sphere ($4\pi R^2$). Light with intensity $I$ shining onto a sphere of radius $R$ is reflected evenly in all directions. So if the sphere intercepts $\pi R^2 I$ units of light, it scatters $e I R^2/4$ units of light ($e$ is albedo) per steradian in all directions. While we will try to design our thinsats with low albedo (high light absorption on the front, high emissivity on the back), we can assume they will get sanded down and more reflective because of space debris, and they will get broken into fragments of glass with shiny edges, adding to the albedo. Assume the average albedo is 0.5, and assume the light scattering is spherical for tumbling.
Source for the above animation: g400.c Three design orientations All three orientations shown are oriented perpendicularly in the daytime sky. Max remains perpendicular in the night sky, Min is oriented vertically in the night sky, and Zero is edge-on to the terminator in the night sky. All lose orientation control and are tilted by tidal forces in eclipse - the compensatory thrusts are not shown. Min and Zero are accelerated into a slow turn before eclipse, so they come out of the eclipse in the correct orientation. In all cases, there will probably be some disorientation and sun-seeking coming out of eclipse, until each thinsat calibrates to the optimum inertial turn rate during eclipse. So, there may be a small bit of sky glow at the 210° position, directly overhead at 2am and visible in the sky between 10pm and 6am. Max Night Light Pollution (NLP): Full power night sky coverage, maximum night light pollution The most power is harvested if the thinsats are always oriented perpendicular to the sun. During the half of their orbit in the night sky, there will be some diffuse reflection to the side, and some of that will land in the earth's night sky. The illumination is maximum along the equator. For the M288 orbit, about 1/6th of the orbit is eclipsed, and 1/2 of the orbit is in daylight with the diffuse (Lambertian) reflection scattering towards the sun and onto the day side of the earth. Only the two "horns" of the orbit, the first between 90° and 150° (6pm to 10pm) and the second between 210° and 270° (2am to 6am), will reflect light into the night sky. The light harvest averages to 83% around the orbit. This is the worst case for night sky illumination. Though it is tempting to run thinsats in this regime, extracting the maximum power per thinsat, it is also the worst case for eccentricity caused by light pressure, and the thinsats must be heavier to reduce that eccentricity.
Min NLP: Partial night sky coverage, some night light pollution This maneuver will put some scattered light into the night sky, but not much compared to perpendicular solar illumination all the way into shadow. In the worst case, assume that the surface has an albedo of 0.5 (typical solar cells with an antireflective coating are less than 0.3) and that the reflected light is entirely Lambertian (isotropic) without specular reflections (which will all be away from the earth). At a 60° angle, just before shadow, the light emitted by the front surface will be 1366 W/m² × 0.5 (albedo) × 0.5 (cos 60°), and it will be scattered over 2π steradians, so the illumination per steradian will be 54 W/(m²·sr) just before entering eclipse. Estimate that the light pollution varies from 0 W to 54 W between 90° and 150° and that the average light pollution is half of 54 W, for 1/3 of the orbit. Assuming an even distribution of thinsat arrays in the constellation, that works out to an average of 9 W/(m²·sr) for all thinsats in M288 orbit. The full moon illuminates the night side of the equatorial earth with 27 mW/m² near the equator. A square meter of thinsat at 6400 km distance produces 9 W / (6,400,000 m)², or 0.22 picowatts per square meter of ground per square meter of thinsat. If thinsat light pollution is restricted to 5% of full moon brightness (1.3 mW/m²), then we can have 6000 km² of thinsats up there, at an average of 130 W/m², or about 780 GW of thinsats at m288. That is about a million tons of thinsats.
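The arithmetic above can be checked in a few lines (all inputs are the assumed values stated in the text):

```python
import math

solar, albedo = 1366.0, 0.5                      # W/m^2, assumed albedo

# Lambertian scatter per steradian just before eclipse (60 deg sun angle)
per_sr = solar * albedo * math.cos(math.radians(60)) / (2 * math.pi)
print(round(per_sr, 1))                          # ~54 W/(m^2 sr)

avg = 9.0                                        # orbit-averaged W/(m^2 sr)
R = 6.4e6                                        # m, distance used in the text
flux = avg / R**2                                # W/m^2 ground per m^2 of thinsat
print(flux)                                      # ~2.2e-13

budget = 0.05 * 27e-3                            # 5% of full-moon 27 mW/m^2
area_km2 = budget / flux / 1e6
print(round(area_km2))                           # ~6000 km^2 of thinsat area
```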
The orientation of the thinsat over a 240-minute synodic m288 orbit at the equinox is as follows, relative to the sun:

time (min)  | orbit degrees | rotation rate | sun angle   | illumination | night light
0 to 60     | 0° to 90°     | 0 ω           | 0°          | 100%         | 0 W
60 to 100   | 90° to 150°   | 1 ω           | 0° to 60°   | 100% to 50%  | 0 W to 54 W
100 to 140  | 150° to 210°  | 4 ω           | 60° to 300° | eclipse      | 0 W
140 to 180  | 210° to 270°  | 1 ω           | 300° to 0°  | 50% to 100%  | 54 W to 0 W
180 to 240  | 270° to 0°    | 0 ω           | 0°          | 100%         | 0 W

The angular velocity change at 0° takes (0.025°/s) / (7.481×10⁻⁴ °/s²) = 33.4 seconds, and during that time the thinsat turns 0.42°, with negligible effect on thrust or power. The angular velocity change at 60° takes (0.075°/s) / (3.74×10⁻⁴ °/s²) = 200.5 seconds, and during that time the thinsat turns 12.5°, perhaps from 53.7° to 66.3°, reducing power and thrust from 59% to 40%, a significant change. The actual thrust change versus time will be more complicated (especially with tidal forces), but however it is done, the acceleration must be accomplished before the thinsat enters eclipse. The light harvest averages 78% around the orbit. Zero NLP: Partial night sky coverage, no night light pollution In this case, in the night half of the sky the edge of the thinsat is always turned towards the terminator. As long as the thinsats stay in control, they will never produce any nighttime light pollution, because the illuminated side of the thinsat is always pointed away from the night side of the earth. The average illumination fraction is around 68%.
The orientation of the thinsat over a 240-minute synodic m288 orbit at the equinox is as follows, relative to the sun:

time (min)  | orbit degrees | average rotation rate  | sun angle   | illumination | night light
0 to 60     | 0° to 90°     | 0 ω                    | 0°          | 100%         | 0 W
60 to 100   | 90° to 150°   | 1.5 ω                  | 0° to 90°   | 100% to 0%   | 0 W
100 to 140  | 150° to 210°  | 3 ω (start at 3.333 ω) | 90° to 270° | eclipse      | 0 W
140 to 180  | 210° to 270°  | 1.5 ω                  | 270° to 0°  | 0% to 100%   | 0 W
180 to 240  | 270° to 0°    | 0 ω                    | 0°          | 100%         | 0 W

Pedants and thinsat programmers take note: The actual synodic orbit period is 240 minutes and 6.57 seconds long; that results in 2190.44 rather than 2191.44 sidereal orbits per year, accounting for the annual apparent motion of the sun around the sky. The light harvest averages 67% around the orbit. Why would a profit-maximizing operator settle for 67% when 83% was possible? Infrared-filtering thinsats reduce launch weight and can use the Zero NLP flip to increase the minimum temperature of a thinsat during eclipses. An IR-filtering thinsat in maximum night light pollution mode will have the emissive backside pointed at 2.7 K space when it enters eclipse; the thinsat temperature will drop towards 20 K if it cannot absorb the 64 W/m² of 260 K black body radiation reaching it from the earth through the 3.5 μm front-side infrared filter. The thinsat will become very brittle at those temperatures, and the thermal shock could destroy it. If the high-thermal-emissivity back side is pointed towards the 260 K earth, the temperature will drop to 180 K - still challenging, but the much higher thermal mobility may heal atomic-scale damage. Details of the Zero NLP maneuver In the night sky, assuming balanced thrusters, only tidal forces act on the thinsat: $\ddot\theta = -\frac{3}{2} \omega^2 \sin(2\theta)$.
Integrating numerically from the proper choice of initial rotation rate (10/9ths the average, due to tidal deceleration), we go from 30 degrees "earth relative tilt" at the beginning of eclipse to 150 degrees tilt at the end of eclipse, with the night sky sending 260 K infrared at the back side throughout the maneuver. The entire disk of the earth is always in view, and always emits the same amount of infrared to a given radius, but the Lambertian angle of absorption changes. Power versus angle. Night light pollution versus hour: the night light pollution for 1 terawatt of thinsats at M288. Mirror the graph for midnight to 6am. Some light is also put into the daytime sky (early morning and late afternoon), but it will be difficult to see in the glare of sunlight. The Zero NLP option puts no light in the night sky, so that curve is far below the bottom of this graph. Source for the above two graphs: nl02.c Note to Astronomers Yes, we will slightly occult some of your measurements, though we won't flash your images like Iridium. We will also broadcast our array ephemerides, and deliver occultation schedules accurate to the microsecond, so you can include that in your luminosity calculations. Someday, server sky is where you will perform those calculations, rather than in your own coal-powered (and haze-producing) computer.
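That numerical integration can be sketched directly. Assumptions below (mine, read off the text): a 40-minute eclipse (1/6 of the 240-minute orbit), and an initial earth-relative rate of 10/9 of the table's 3ω minus the orbital rate ω:

```python
import math

omega = 4.3633e-4                        # rad/s, m288 synodic rate from the text
t_ecl = 2400.0                           # s, assumed 40-minute eclipse

theta = math.radians(30)                 # earth-relative tilt entering eclipse
rate = (10.0 / 9.0) * 3 * omega - omega  # 3.333w sun-relative, minus orbital w

# Symplectic Euler on theta'' = -(3/2) w^2 sin(2 theta)
dt = 0.1
for _ in range(int(t_ecl / dt)):
    rate += -1.5 * omega**2 * math.sin(2 * theta) * dt
    theta += rate * dt

print(math.degrees(theta))               # comes out near 150 degrees
```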
I was reading Vector Linear Independence where I came across a matrix operation I have never seen before. What is this operation called and is it generally valid? I mean there is a $2\times3$ matrix that was reduced to $2\times2$...! $$\begin{bmatrix}1&0&\frac{16}{5}\\ 0 & 1 & \frac 25\end{bmatrix}\left\{\begin{array}{c}a_1\\a_2\\a_3\end{array}\right\}=\left\{\begin{array}{c}0\\0\end{array}\right\}$$ We can now rearrange this equation to obtain $$\begin{bmatrix}1&0\\0&1\end{bmatrix}\left\{\begin{array}{c}a_1\\a_2\end{array}\right\}=\left\{\begin{array}{c}a_1\\a_2\end{array}\right\}=-a_3\left\{\begin{array}{c}\frac{16}{5}\\\frac{2}{5}\end{array}\right\}$$ Thank you in advance.
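The "rearrangement" is just moving the column that multiplies the free variable $a_3$ to the right-hand side; nothing exotic. A quick numerical check (values from the matrix above):

```python
import numpy as np

# Reduced system [I | c] a = 0: with a3 free, (a1, a2) = -a3 * (16/5, 2/5)
R = np.array([[1.0, 0.0, 16/5],
              [0.0, 1.0, 2/5]])

a3 = 5.0                         # pick any value for the free variable
a12 = -a3 * R[:, 2]              # the "rearranged" equation
a = np.append(a12, a3)

assert np.allclose(R @ a, 0.0)   # (a1, a2, a3) = (-16, -2, 5) is in the null space
print(a)
```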
As others have said in the comments, you know what rules and axioms you can use because you are given or choose a particular collection. One of the most common misunderstandings I see is the thought that there is one set of rules and axioms that all mathematicians agree on. This is especially pernicious in logic but exists in other areas too. Usually what happens is a student sees one definition and assumes that that is the definition. It usually takes a while before they realize that all of these things have a variety of different but equivalent definitions, and in most cases also have genuinely different definitions. In the case of logic, just looking at the sheer number of entries under "logic" in the Stanford Encyclopedia of Philosophy, many of which correspond to different logics, gives an indication. And it is not comprehensive by any means. For your example, all the reasoning is equational, so which logic we're using is not as critical. We do need the laws for equality, which could be roughly described as "a congruence with respect to everything". First, equality is an equivalence relation, meaning it is reflexive, $x=x$; symmetric, if $x = y$ then $y = x$; and transitive, if $x = y$ and $y = z$ then $x = z$. Then what makes equality equality is the indiscernibility of identicals, which is usually expressed as a rule rather than an axiom and states: if $x = y$ and $P$ is some predicate with free variable $z$, then if $P[x/z]$ is provable so is $P[y/z]$, where $P[x/z]$ means $P$ with all free occurrences of $z$ replaced with $x$, i.e. substituting $x$ for $z$, and similarly for $P[y/z]$. (Again, there are other ways of presenting these rules and axioms. Indeed, this set is redundant...) The proof of a statement like $0x=0$ is common in the theory of rings. For example, using the definition of a ring given on Wikipedia, this is a theorem but not an axiom. The aspect I mentioned before strikes here too.
There are other choices you could take for the axioms of a ring, including ones where $0x=0$ is taken axiomatically. Also, the term "ring" is ambiguous as many authors consider "rings without unit" (i.e. which don't necessarily have an element that behaves like $1$). The definition Wikipedia gives is a ring with unit. These definitions are not equivalent. Anyway, using the definition on Wikipedia, one way to prove $0x=0$ is the following: $$\begin{align}&0x-0x=0 \tag{additive inverse}\\ \iff & (0+0)x-0x=0 \tag{additive identity}\\ \iff & (0x+0x)-0x = 0 \tag{left distributivity}\\ \iff & 0x+(0x-0x) = 0 \tag{additive associativity}\\ \iff & 0x + 0 = 0 \tag{additive inverse}\\ \iff & 0x = 0 \tag{additive identity}\end{align}$$ Each $\iff$ is hiding a use of the indiscernibility of identicals. For example, the first step is: let $P$ be $zx-0x=0$, the additive identity axiom for $0$ states $0+0=0$ or, by symmetry, $0=0+0$, if $P[0/z]$ is provable, then $P[(0+0)/z]$ is provable. This gives the $\Rightarrow$ direction, the $\Leftarrow$ direction just uses the same equality the other way. So why don't we just take $0x=0$ as an axiom. Well, we could. However, doing so wouldn't let us derive the other axioms and given the other axioms we can derive this one. Using a minimal collection of axioms makes it easier to verify if something is a ring (or a ring homomorphism). We would have to explicitly verify $0x=0$ while having it be a theorem means we can derive it once and for all for all rings. Another factor affecting the choice of axioms is also evident in Wikipedia's presentation. We often want to build our definitions in a modular fashion (which often leads to non-minimal lists of axioms). In this case, Wikipedia's definition starts with the axioms of a commutative group. That is, a ring is a commutative group and simultaneously a monoid whose "multiplication" distributes over the group operation.
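For instance, the $0x=0$ chain above can be written almost verbatim in a proof assistant. This is a sketch in Lean 4 with Mathlib; the lemma names (`add_mul`, `add_zero`, `add_left_cancel`) are Mathlib's, and the exact import may differ between Mathlib versions:

```lean
import Mathlib.Algebra.Ring.Basic

-- 0 * x = 0 in any ring, via 0*x + 0*x = (0+0)*x = 0*x = 0*x + 0
-- and then cancelling 0*x on the left.
example {R : Type} [Ring R] (x : R) : 0 * x = 0 := by
  have h : 0 * x + 0 * x = 0 * x + 0 := by
    rw [← add_mul, add_zero, add_zero]
  exact add_left_cancel h
```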
This way of presenting rings allows us to "import" theorems about commutative groups and monoids and apply them to rings. We could, of course, still do this if we had a different presentation of the axioms of a ring, but we'd have to derive the commutative group/monoid structure first, and this structure may not be obvious from the alternate presentation. If you really want to get a visceral feel for all of this, I recommend getting familiar with a proof assistant like Agda, Coq, LEAN, or several others. I particularly recommend Agda as it puts all the gory details of the proofs right in your face. Most other proof assistants use a tactics-based approach which means you typically write "proof scripts" which are little programs that search for proofs for you. You don't typically see the proofs in those systems. Nevertheless, any of them will make it very apparent what it means to work with a given definition, what is and is not available at any point in time, and why structuring definitions one way versus another may be desirable. They all have pretty steep learning curves though and LEAN and Coq have better introductory material than Agda.
$x,y \in \mathbb{R}$ and $F: D\subset \mathbb{R}^2 \to \mathbb{R}$ where $$ F =\sqrt{x^2 - y^2} + \arccos\Big(\frac{x}{y}\Big) = 0 \tag{1}$$ Noticing that the square-root and inverse cosine functions only return nonnegative values, the only way for $(1)$ to hold is if both terms are $0$. Therefore $(1)$ defines an implicit function $y = g(x) = x$, excluding the point $(0,0)$. Unfortunately, you cannot conclude from the implicit function theorem that $(1)$ defines an implicit function. Consider the point $(2,2)$, and try to apply the implicit function theorem around it. The partial derivatives $F_x$ and $F_y$ are undefined at $(2,2)$, and indeed at every point $(x, y=x)$ where $F(x,y) = 0$, so we cannot use the theorem, which only gives a sufficient condition for the existence of an implicit function. So when it doesn't apply, who knows if a function exists. In this case, we were able to find one. If the implicit function theorem holds for a function $F$, guaranteeing the existence of some function, we can apply implicit differentiation to the equation. If the implicit function theorem doesn't hold, but we know an implicit function exists, you'd think (or maybe hope) 'well, the function exists, so I might as well try implicit differentiation.' In $(1)$, we know that $g'(x) = 1$. Yet implicit differentiation gives you junk (you get a square-root in the denominator, which vanishes along $y=x$). How can an implicit function $y(x)$ exist but implicit differentiation not work? Differentiation is just an operation. If you have an object to work on, shouldn't differentiation give something reasonable when the object is reasonable? $(1)$ must be very different from $y = x$ excluding the origin.
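A quick numerical check makes the failure visible. This is a sketch in NumPy (the formula for $F_y$ is hand-derived here, not taken from the post): the point $(2,2)$ satisfies equation $(1)$, yet $F_y$ is not even defined there.

```python
import numpy as np

def F(x, y):
    # F(x, y) = sqrt(x^2 - y^2) + arccos(x/y), equation (1)
    return np.sqrt(x**2 - y**2) + np.arccos(x / y)

def Fy(x, y):
    # Hand-derived partial dF/dy; both terms have a sqrt(...) = 0 denominator on y = x
    with np.errstate(divide="ignore", invalid="ignore"):
        return -y / np.sqrt(x**2 - y**2) + x / (y**2 * np.sqrt(1 - (x / y)**2))

print(F(2.0, 2.0))    # 0.0: the point lies on the zero set of F
print(Fy(2.0, 2.0))   # nan: the theorem's hypothesis F_y != 0 cannot even be checked
```

So the implicit function theorem's hypotheses fail at every point of the zero set, even though the implicit function $g(x)=x$ exists.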
A commonly encountered form of the ABC-conjecture is the following: For all $\epsilon > 0$, there is a constant $\kappa_{\epsilon} > 0$ (depending only on $\epsilon$) such that for all coprime integers $a$, $b$ with sum $c = a + b$, \begin{eqnarray} \text{rad}(abc)^{1 + \epsilon} > \kappa_{\epsilon} \max ( |a|, |b|, |c| ). \end{eqnarray} Related to his work on linear forms in logarithms, Baker conjectured the inequality: \begin{eqnarray} (\epsilon^{-\omega(abc)} \text{rad}(abc))^{1 + \epsilon} > \kappa_{\epsilon} \max ( |a|, |b|, |c| ). \end{eqnarray} What are the consequences, if any, of allowing $\kappa_{\epsilon}$ to no longer depend solely on $\epsilon$ but also have a dependence on $\omega(abc)$ (the number of distinct prime divisors of $abc$) in the original form of the conjecture? That is, for example, somewhere in between the two conjectures: For all $\epsilon > 0$, there is a constant $\kappa_{\epsilon, n} > 0$ (depending only on $\epsilon$ and $n = \omega(abc)$) such that for all coprime integers $a$, $b$ with sum $c = a + b$, \begin{eqnarray} \text{rad}(abc)^{1 + \epsilon} > \kappa_{\epsilon, n} \max ( |a|, |b|, |c| ). \end{eqnarray} Question: Are there any neat and non-trivial consequences preserved with this latter form of the conjecture, e.g., finiteness of solutions of certain Diophantine equations, asymptotic bounds on linear forms in logarithms, etc.? Thanks in advance.
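For intuition about the quantities in these inequalities, the radical is cheap to compute, and known "high-quality" triples show how tight the plain conjecture is. A sketch (trial-division radical; the triple $2 + 3^{10}\cdot 109 = 23^5$ is the standard record example, with quality $q = \log c / \log \mathrm{rad}(abc) \approx 1.63$):

```python
from math import log

def rad(n):
    # Product of the distinct prime divisors of n, by trial division.
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * (n if n > 1 else 1)

a, b = 2, 3**10 * 109
c = a + b                              # = 23**5: an abc triple of record quality
q = log(c) / log(rad(a * b * c))
print(rad(a * b * c), round(q, 4))     # 15042, quality ≈ 1.6299
```

Here $\mathrm{rad}(abc) = 2\cdot 3\cdot 23\cdot 109 = 15042$ is far smaller than $c$, which is what makes this triple exceptional.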
I'm just beginning to learn about Fourier series and I'm trying to figure out how to find the Fourier series coefficients for $$x(t) = e^{j100\pi t}$$ I know that $$x(t) = \sum_{-\infty}^{\infty} a_{k} e^{jk(2\pi/T)t}$$ How do I go about finding the coefficient(s)? I know that I can get T = 1/50, but beyond that I don't even know where to begin. I think I'm supposed to be able to do this just by looking at it without having to solve the integral equation for $a_k$, but I don't know what I'm supposed to be looking for.
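One way to see the answer (my addition, not part of the original question): since $e^{j100\pi t} = e^{j\cdot 1\cdot(2\pi/T)t}$ with $T = 1/50$, the signal is exactly the $k=1$ harmonic, so $a_1 = 1$ and every other $a_k = 0$. A numerical sketch confirms this via the analysis integral, approximated by a mean over uniform samples:

```python
import numpy as np

T = 1 / 50                    # 100*pi = 2*pi/T, so the fundamental period is 1/50
w0 = 2 * np.pi / T            # fundamental angular frequency
t = np.linspace(0, T, 4096, endpoint=False)   # uniform samples over one period
x = np.exp(1j * 100 * np.pi * t)

def a(k):
    # a_k = (1/T) * integral over one period of x(t) e^{-j k w0 t} dt,
    # approximated by the mean over uniform samples (discrete orthogonality)
    return (x * np.exp(-1j * k * w0 * t)).mean()

print(abs(a(1)))              # ≈ 1: the signal IS the k = 1 harmonic
print(abs(a(0)), abs(a(2)))   # ≈ 0 for every other k
```

This is the "by inspection" trick: match the given exponential to one term of the synthesis sum and read off the coefficient.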
1. Observation of a peaking structure in the J/psi phi mass spectrum from B-+/- -> J/psi phi K-+/- decays. PHYSICS LETTERS B, ISSN 0370-2693, 06/2014, Volume 734, pp. 261 - 281. A peaking structure in the J/psi phi mass spectrum near threshold is observed in B-+/- -> J/psi phi K-+/- decays, produced in pp collisions at root s = 7 TeV... PHYSICS, NUCLEAR | ASTRONOMY & ASTROPHYSICS | PHYSICS, PARTICLES & FIELDS | Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment | scattering [p p] | J/psi --> muon+ muon | experimental results | Particle Physics - Experiment | Nuclear and High Energy Physics | Phi --> K+ K | vertex [track data analysis] | CERN LHC Coll | B+ --> J/psi Phi K | Peaking structure | hadronic decay [B] | Integrated luminosity | Data sample | final state [dimuon] | mass enhancement | width [resonance] | (J/psi Phi) [mass spectrum] | Breit-Wigner [resonance] | 7000 GeV-cms | leptonic decay [J/psi] | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS Journal Article 2.
Measurement of the ratio of the production cross sections times branching fractions of B_c± → J/ψπ± and B± → J/ψK±, and of ℬ(B_c± → J/ψπ±π±π∓)/ℬ(B_c± → J/ψπ±), in pp collisions at √s = 7 TeV. Journal of High Energy Physics, ISSN 1029-8479, 1/2015, Volume 2015, Issue 1, pp. 1 - 30. The ratio of the production cross sections times branching fractions σ(B_c±)ℬ(B_c± → J/ψπ±)/σ(B±)ℬ(B± → J/ψK±)... B physics | Branching fraction | Hadron-Hadron Scattering | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory Journal Article Physics Letters B, ISSN 0370-2693, 05/2016, Volume 756, Issue C, pp. 84 - 102 A measurement of the ratio of the branching fractions of the meson to and to is presented. The , , and are observed through their decays to , , and ,...
scattering [p p] | pair production [pi] | statistical | Physics, Nuclear | 114 Physical sciences | Phi --> K+ K | Astronomy & Astrophysics | LHC, CMS, B physics, Nuclear and High Energy Physics | f0 --> pi+ pi | High Energy Physics - Experiment | Compact Muon Solenoid | pair production [K] | Science & Technology | mass spectrum [K+ K-] | Ratio B | Large Hadron Collider (LHC) | Nuclear & Particles Physics | 7000 GeV-cms | leptonic decay [J/psi] | J/psi --> muon+ muon | experimental results | Nuclear and High Energy Physics | Physics and Astronomy | branching ratio [B/s0] | CERN LHC Coll | B/s0 --> J/psi Phi | CMS collaboration ; proton-proton collisions ; CMS ; B physics | Physics | Física | Physical Sciences | hadronic decay [f0] | [PHYS.HEXP]Physics [physics]/High Energy Physics - Experiment [hep-ex] | Physics, Particles & Fields | 0202 Atomic, Molecular, Nuclear, Particle And Plasma Physics | colliding beams [p p] | hadronic decay [Phi] | mass spectrum [pi+ pi-] | B/s0 --> J/psi f0
Journal Article PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 03/2007, Volume 98, Issue 13, p. 132002 We present an analysis of angular distributions and correlations of the X(3872) particle in the exclusive decay mode X(3872)-> J/psi pi(+)pi(-) with J/psi... PHYSICS, MULTIDISCIPLINARY | CHARMONIUM | DETECTOR | Physics - High Energy Physics - Experiment | PARTICLE DECAY | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | SPIN | PIONS MINUS | QUANTUM NUMBERS | FERMILAB COLLIDER DETECTOR | MUONS PLUS | PARITY | ANGULAR DISTRIBUTION | J PSI-3097 MESONS | MUONS MINUS | TEV RANGE 01-10 | PIONS PLUS | PROTON-PROTON INTERACTIONS | PAIR PRODUCTION Journal Article Journal of High Energy Physics, ISSN 1126-6708, 2012, Volume 2012, Issue 5 Journal Article 6. Search for rare decays of Z and Higgs bosons to J/ψ and a photon in proton-proton collisions at √s = 13 TeV. The European Physical Journal C, ISSN 1434-6044, 2/2019, Volume 79, Issue 2, pp. 1 - 27 A search is presented for decays of Z and Higgs bosons to a J/ψ meson and a photon, with the subsequent decay of the...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology Journal Article Physics Letters B, ISSN 0370-2693, 06/2014, Volume 734, pp. 261 - 281 A peaking structure in the mass spectrum near threshold is observed in decays, produced in pp collisions at collected with the CMS detector at the LHC. The... Journal Article 8. Suppression of non-prompt J/psi, prompt J/psi, and Upsilon(1S) in PbPb collisions at root s(NN)=2.76 TeV. JOURNAL OF HIGH ENERGY PHYSICS, ISSN 1029-8479, 05/2012, Issue 5 Yields of prompt and non-prompt J/psi, as well as Upsilon(1S) mesons, are measured by the CMS experiment via their mu(+)mu(-) decays in PbPb and pp collisions... P(P)OVER-BAR COLLISIONS | CROSS-SECTIONS | PERSPECTIVE | MOMENTUM | ROOT-S=7 TEV | LHC | COLLABORATION | QUARK-GLUON PLASMA | PP COLLISIONS | NUCLEUS-NUCLEUS COLLISIONS | Heavy Ions | PHYSICS, PARTICLES & FIELDS Journal Article 9. Search for rare decays of Z and Higgs bosons to J/ψ and a photon in proton-proton collisions at √s = 13 TeV. European Physical Journal C, ISSN 1434-6044, 02/2019, Volume 79, Issue 2, p. 94 A search is presented for decays of Z and Higgs bosons to a J/ψ meson and a photon, with the subsequent decay of the J/ψ to... Journal Article 10.
Relative Modification of Prompt ψ(2S) and J/ψ Yields from pp to PbPb Collisions at √s_NN = 5.02 TeV. Physical Review Letters, ISSN 0031-9007, 04/2017, Volume 118, Issue 16 Journal Article Physics Letters B, ISSN 0370-2693, 12/2013, Volume 727, Issue 4-5, pp. 381 - 402 Journal Article 12. Measurement of the ratio of the production cross sections times branching fractions of B-c(+/-) -> J/psi pi(+/-) and B-+/- -> J/psi K-+/- and B(B-c(+/-) -> J/psi pi(+/-)pi(+/-)pi(-/+))/B(B-c(+/-) -> J/psi pi(+/-)) in pp collisions at root s = 7 TeV. JOURNAL OF HIGH ENERGY PHYSICS, ISSN 1029-8479, 01/2015, Issue 1 Journal Article
Functional analysis is a branch of mathematical analysis, the core of which is formed by the study of vector spaces endowed with some kind of limit-related structure (e.g. inner product, norm, topology, etc.) and the linear functions defined on these spaces and respecting these structures in a suitable sense. The historical roots of functional analysis lie in the study of spaces of functions and the formulation of properties of transformations of functions such as the Fourier transform as transformations defining continuous, unitary etc. operators between function spaces. This point of view turned out to be particularly useful for the study of differential and integral equations. The usage of the word functional as a noun goes back to the calculus of variations, implying a function whose argument is a function. The term was first used in Hadamard's 1910 book on that subject. However, the general concept of a functional had previously been introduced in 1887 by the Italian mathematician and physicist Vito Volterra. [1] [2] The theory of nonlinear functionals was continued by students of Hadamard, in particular Fréchet and Lévy. Hadamard also founded the modern school of linear functional analysis further developed by Riesz and the group of Polish mathematicians around Stefan Banach. In modern introductory texts to functional analysis, the subject is seen as the study of vector spaces endowed with a topology, in particular infinite-dimensional spaces. In contrast, linear algebra deals mostly with finite-dimensional spaces, and does not use topology. An important part of functional analysis is the extension of the theory of measure, integration, and probability to infinite dimensional spaces, also known as infinite dimensional analysis. The basic and historically first class of spaces studied in functional analysis are complete normed vector spaces over the real or complex numbers. Such spaces are called Banach spaces. 
An important example is a Hilbert space, where the norm arises from an inner product. These spaces are of fundamental importance in many areas, including the mathematical formulation of quantum mechanics. An important object of study in functional analysis are the continuous linear operators defined on Banach and Hilbert spaces. These lead naturally to the definition of C*-algebras and other operator algebras. Hilbert spaces can be completely classified: there is a unique Hilbert space up to isomorphism for every cardinality of the orthonormal basis. [3] Finite-dimensional Hilbert spaces are fully understood in linear algebra, and infinite-dimensional separable Hilbert spaces are isomorphic to $\ell^2(\aleph_0)$. General Banach spaces are more complicated than Hilbert spaces, and cannot be classified in such a simple manner. In particular, many Banach spaces lack a notion analogous to an orthonormal basis. Examples of Banach spaces are the $L^p$ spaces, for any real number $p \geq 1$. Given a measure $\mu$ on a set $X$, the space $L^p(X)$ (sometimes also denoted $L^p(X,\mu)$ or $L^p(\mu)$) has as its vectors the equivalence classes $[f]$ of measurable functions $f$ whose absolute value's $p$-th power has finite integral, i.e. for which $$\int_X \left|f(x)\right|^p \, d\mu(x) < +\infty.$$ If $\mu$ is the counting measure, the integral may be replaced by a sum: $$\sum_{x \in X} \left|f(x)\right|^p < +\infty.$$ Then it is not necessary to deal with equivalence classes, and the space is denoted $\ell^p(X)$, written more simply $\ell^p$ when $X$ is the set of natural numbers. In Banach spaces, a large part of the study involves the dual space: the space of all continuous linear maps from the space into its underlying field, so-called functionals. A Banach space can be canonically identified with a subspace of its bidual, which is the dual of its dual space. The corresponding map is an isometry but in general not onto. A general Banach space and its bidual need not even be isometrically isomorphic in any way, contrary to the finite-dimensional situation. This is explained in the dual space article. Important results of functional analysis include: See main article: Banach-Steinhaus theorem. The uniform boundedness principle or Banach–Steinhaus theorem is one of the fundamental results in functional analysis.
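To make the $\ell^p$ membership condition concrete, here is a small numerical illustration (my example, not from the article): the sequence $x_n = 1/n$ lies in $\ell^2$ but not in $\ell^1$, since $\sum 1/n^2$ converges while $\sum 1/n$ diverges.

```python
import numpy as np

def lp_partial_norm(p, N):
    # N-term partial sum approximation to the l^p norm of x_n = 1/n
    n = np.arange(1, N + 1, dtype=float)
    return np.sum((1.0 / n) ** p) ** (1.0 / p)

print(lp_partial_norm(1, 10**6))  # ~14.4, grows like log N: x is not in l^1
print(lp_partial_norm(2, 10**6))  # ~1.2825, converges to sqrt(pi^2/6): x is in l^2
```

The same sequence thus belongs to some $\ell^p$ spaces and not others, which is why the exponent $p$ genuinely matters in the definition.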
Together with the Hahn–Banach theorem and the open mapping theorem, it is considered one of the cornerstones of the field. In its basic form, it asserts that for a family of continuous linear operators (and thus bounded operators) whose domain is a Banach space, pointwise boundedness is equivalent to uniform boundedness in operator norm. Theorem (Uniform Boundedness Principle). Let $X$ be a Banach space and $Y$ be a normed vector space. Suppose that $F$ is a collection of continuous linear operators from $X$ to $Y$. If for all $x$ in $X$ one has $\sup_{T \in F} \|T(x)\|_Y < \infty$, then $\sup_{T \in F} \|T\|_{B(X,Y)} < \infty$. See main article: Spectral theorem. The spectral theorem states that any bounded self-adjoint operator $A$ on a Hilbert space $H$ is unitarily equivalent to a multiplication operator: there exist a measure space $(X,\mu)$, a real-valued essentially bounded measurable function $f$ on $X$, and a unitary operator $U : H \to L^2(X,\mu)$ such that $U^* T U = A$, where $T$ is the multiplication operator: $[T \varphi](x) = f(x) \varphi(x)$, and $\|T\| = \|f\|_\infty$. This is the beginning of the vast research area of functional analysis called operator theory; see also the spectral measure. There is also an analogous spectral theorem for bounded normal operators on Hilbert spaces. The only difference in the conclusion is that now $f$ may be complex-valued. See main article: Hahn–Banach theorem. The Hahn–Banach theorem is a central tool in functional analysis. It allows the extension of bounded linear functionals defined on a subspace of some vector space to the whole space, and it also shows that there are "enough" continuous linear functionals defined on every normed vector space to make the study of the dual space "interesting". If $p : V \to \mathbb{R}$ is a sublinear function and $\varphi : U \to \mathbb{R}$ is a linear functional on a linear subspace $U \subseteq V$ dominated by $p$ on $U$, i.e. $\varphi(x) \leq p(x)$ for all $x \in U$, then there exists a linear extension of $\varphi$ to the whole space, i.e., there exists a linear functional $\psi$ such that $\psi(x) = \varphi(x)$ for all $x \in U$, and $\psi(x) \le p(x)$ for all $x \in V$. See main article: Open mapping theorem (functional analysis). The open mapping theorem, also known as the Banach–Schauder theorem (named after Stefan Banach and Juliusz Schauder), is a fundamental result which states that if a continuous linear operator between Banach spaces is surjective then it is an open map. More precisely: [4] Open mapping theorem.
If X and Y are Banach spaces and A : X → Y is a surjective continuous linear operator, then A is an open map (i.e. if U is an open set in X, then A(U) is open in Y). The proof uses the Baire category theorem, and completeness of both X and Y is essential to the theorem. The statement of the theorem is no longer true if either space is just assumed to be a normed space, but it is true if X and Y are taken to be Fréchet spaces. Most spaces considered in functional analysis have infinite dimension. To show the existence of a vector space basis for such spaces may require Zorn's lemma. However, a somewhat different concept, the Schauder basis, is usually more relevant in functional analysis. Many very important theorems require the Hahn–Banach theorem, which is usually proved using the axiom of choice, although the strictly weaker Boolean prime ideal theorem suffices. The Baire category theorem, needed to prove many important theorems, also requires a form of the axiom of choice. Functional analysis in its present form includes the following tendencies:
First, to dispel a possible cognitive dissonance: reasoning about infinite structures is not a problem, we do it all the time. As long as the structure is finitely describable, that's not a problem. Here are a few common types of infinite structures:languages (sets of strings over some alphabet, which may be finite);tree languages (sets of trees over some ... First off, you're absolutely right: you're on to a real concern. Formal verification transfers the problem of confidence in program correctness to the problem of confidence in specification correctness, so it is not a silver bullet.There are several reasons why this process can still be useful, though.Specifications are often simpler than the code ... This is a standard notation for an inference rule. The premises are put above a horizontal line, and the conclusion is put below the line. Thus, it ends up looking like a "fraction", but with one or more logical propositions above the line and a single proposition below the line. If you see a label (e.g., "LET" or "VAR" in your example) next to it, that's ... Can I find a general algorithm to solve the halting problem for some possible program input pairs?Yes, sure. For example you could write an algorithm that returns "Yes, it terminates" for any program which contains neither loops nor recursion and "No, it does not terminate" for any program that contains a while(true) loop that will definitely be reached ... Let us consider the following inductive definition:$\qquad \displaystyle \begin{align*}&\phantom{\Rightarrow} \quad \varepsilon \in \mathcal{T} \\w \in \mathcal{T} \quad &\Rightarrow \quad aw \in \mathcal{T}\\aw \in \mathcal{T} \quad &\Rightarrow \quad baw \in \mathcal{T}\end{align*}$What is $\mathcal{T}$? Clearly, the set of ... D.W.'s answer is great, but I'd like to expand on one point. A specification is not just a reference against which the code is verified. 
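The inductive definition of $\mathcal{T}$ quoted above can be explored mechanically. Here is a sketch (function name is mine) that saturates the three rules up to a length bound, which is a standard way to get a feel for what an inductively defined set contains:

```python
def gen_T(max_len):
    # Rules: eps in T;  w in T => "a"+w in T;  aw in T => "ba"+w (= "b"+aw) in T
    T = {""}
    while True:
        new = set()
        for w in T:
            if len(w) < max_len:
                new.add("a" + w)                 # rule 2: prepend 'a' to any word
            if w.startswith("a") and len(w) < max_len:
                new.add("b" + w)                 # rule 3: prepend 'b' to words starting with 'a'
        if new <= T:                             # fixed point reached
            return T
        T |= new

print(sorted(gen_T(3), key=lambda w: (len(w), w)))
# ['', 'a', 'aa', 'ba', 'aaa', 'aba', 'baa']
```

Enumerating small members like this often makes the closed-form description of the set (here, which strings over {a, b} are generable) easy to conjecture before proving it by induction.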
One of the reasons to have a formal specification is to validate it by proving some fundamental properties. Of course, the specification cannot be completely validated — the validation would be as complex as the ... In contrast to what the nay-sayers say, there are many effective techniques for doing this. Bisimulation is one approach. See, for example, Gordon's paper on Coinduction and Functional Programming. Another approach is to use operational theories of program equivalence, such as the work of Pitts. A third approach is to verify that both programs satisfy the ... The constructive equivalence of linear-time fixed point formulae (the logic is called $\nu$TL by some) and Buechi automata is given in a paper by Mads Dam from 1992. Fixed Points of Buchi Automata, FST&TCS 1992. See page 4 for the construction of a $\nu$TL formula from a Buechi automaton. The construction of a Buechi automaton from a $\nu$TL formula ... In the preface of his book “Mathematical Discovery, On Understanding, Learning, and Teaching Problem Solving” George Pólya writes: Solving problems is a practical art, like swimming, or skiing, or playing the piano: you can learn it only by imitation and practice. This book cannot offer you a magic key that opens all the doors and solves all the ... Because a CCS process is worth a thousand pixels – and it is easy to see the underlying LTS – here are two processes that simulate each other but are not bisimilar: $$P = ab + a$$ $$Q = ab$$ $\mathcal{R_1}=\{(ab+a, ab), (b, b), (0,b), (0, 0)\}$ is a simulation. $\mathcal{R_2}=\{(ab, ab+a), (b, b), (0,0)\}$ is a simulation. $P\ \mathcal R_1\ Q$ and $Q\ \... This is clearly reducible from the Halting Problem. If a machine $M$ does not stop on input $x$ then any final state is "useless". Given an input $M,x$ for the Halting problem, it is easy to construct $M_x$ that halts on every input (thus its final state is not useless) if and only if $M$ halts on $x$. That way you can decide the Halting Problem if you can ...
To elaborate slightly on the "it's impossible" statements, here's a simple proof sketch. We can model algorithms with output by Turing Machines which halt with their output on their tape. If you want to have machines that can halt by either accepting with output on their tape or rejecting (in which case there's no output) you can easily come up with an ... After reading your question, the only way I could see and had enough knowledge to tie the topics together was to give a high-level set of articles that drill down from software verification, ending up with trying to unite model checking and theorem proving. Hopefully my comment did that: Take a look at Software verification, then Formal verification, then Model ... The intuitive answer is that if you don't have unbounded loops and you don't have recursion and you don't have goto, your programs terminate. This isn't quite true; there are other ways to sneak non-termination in, but it's good enough for most practical cases. Of course the converse is wrong: there are languages with these constructs that do not allow non-... Even if there's a simulation in each direction, the simulations back and forth may not relate the same sets of states. Sometimes you have a simulation $R_1$ in one direction, and a simulation $R_2$ in the other direction, and two states $p_1$ and $q$ which are related by $R_1$ but not by $R_2$ nor by any other simulation in the same direction. The canonical ... The state can change in subsequent reduction steps because on the right hand side of $$\langle while\ B\ do\ S, \sigma \rangle\quad\rightarrow\quad\langle if\ B\ then\ ( {\color{red}{S}};\ while\ B\ do\ S )\ else\ skip, \sigma\rangle$$ the $while$-loop is guarded (preceded) by $S$. The computation of $S$ may change the state so that the ... In programming language semantics, the notion of program state is not a vague philosophical notion, but a very precise mathematical one.
A state $s$ in this small-step operational semantics is a partial function $$ s : \mathbf{Var} \hookrightarrow \mathbb{Z} $$ that records the values of the variables. So if $s\, x = v$, then variable $x$ has value $v$. ... First order logic is undecidable, so SAT solving does not really help. That said, techniques exist for bounded model checking of first order formulas. This means that only a fixed number of objects can be considered when trying to determine whether the formula is true or false. Clearly, this is not complete, but if a counter-example is found, then it truly ... The state $\sigma$ does not change when we consider $B$ to decide whether to perform one iteration of the loop, but it can change later when we run the body $S$. And so, the next time we consider $B$, there can be a change of $\sigma$. Infinite-state system verification is indeed a rather broad topic. First of all, all computers used nowadays can only have a finite number of states, as the amount of RAM is fixed. But that's mainly a matter of terminology -- any verification technique that needs to iterate over all possible states is doomed from the beginning due to computation time.... John Harrison's book is an exception in going all the way from theory to practice and making all the source code available. I think you will find it difficult to find an equivalent book for model checking, but there are a few that achieve a close approximation. Principles of Model Checking by Baier and Katoen contains a lot of examples and pretty detailed ... Here is a very informal explanation that might help people unfamiliar with formal notations to get a foot in the door. It does not replace a formal definition! The Ap is the state of your system or your running program. "State" can mean a lot of things but in this case it seems to include a list of all defined local variables and their values. Why is Ap a ...
Disclaimer: I'm not sure how useful any of this is for getting this done practically since you have a program, not a Turing Machine. The Cook-Levin Theorem essentially states that you can translate the execution of a Turing Machine into a boolean formula that is polynomial in the length of the TM's execution such that the formula is satisfiable iff the TM ... Probably the most common fixpoint expressions in model checking are things like $\mu X.A\cup(B\cap\circ X)$ and $\nu X.A\cap(B\cup\circ X)$, where $\circ$ is some flavour of "next state" operator. That is, the least $X$ such that $X = A\cup(B\cap\circ X)$, and the greatest $X$ such that $X = A\cap(B\cup\circ X)$, respectively. More generally, we are talking ... I really think "formal" methods are not a very good idea for educational purposes. For that matter, programming a computer is a "formal" method. Does it succeed as an educational tool? What is needed is understanding, intuition, and the ability to deal with abstraction. Formal methods hinder all that. Rather, they promote trial and error, hacking, ... An initial algebra is an initial object in the category of $F$-algebras for a given endofunctor $F : \mathcal{C} \rightarrow \mathcal{C}$. This construction is widely used to give semantics to data-structures in (functional) programming languages. Intuitively, the functor $F$ captures the "shape" of the data-structure (e.g., $F(X) = 1 + A \times X$, with $... Java Bytecode: Similar to Microsoft's CLS, the Java Bytecode that the Java virtual machine executes gives you the (theoretical) possibility of using libraries from one JVM-targeting language with another JVM-targeting language. For example, Java libraries can be used in Scala, which IMO is a much better language than Java itself. Libraries written in Scala ... Partial correctness does not mean that not all statements of a specification are met by an algorithm.
Have a look at the Wikipedia article about correctness: Partial correctness of an algorithm means that it returns the correct answer if it terminates. Total correctness means that it is additionally guaranteed that the algorithm terminates. Such a proof ... The first step should be

count = 0
while (x*2 > x)
    x = x*2
    count++

to find the largest power of 2 that can fit into the variable. Note that doing +1 instead of *2 is not only much slower but also fails for floating point numbers (the gaps between large consecutive floating point numbers are bigger than 1). The above procedure should work for both ...
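The floating-point caveat in the last excerpt is easy to demonstrate (a standalone sketch, assuming IEEE-754 doubles): above $2^{53}$, consecutive doubles are more than 1 apart, so repeated `+1` stalls while doubling still makes progress.

```python
x = 2.0 ** 53          # first double whose gap to its successor exceeds 1
print(x + 1 == x)      # True: adding 1 rounds back to x, so +1 loops forever
print(x * 2 > x)       # True: doubling still advances toward overflow
```

This is exactly why the `while (x*2 > x)` formulation terminates for floats while a `+1`-based loop would not.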
In many applications involving electromagnetic waves, one is less concerned with the instantaneous values of the electric and magnetic fields than with the power associated with the wave. In this section, we address the issue of how much power is conveyed by an electromagnetic wave in a lossless medium. The relevant concepts are readily demonstrated in the context of uniform plane waves, as shown in this section. A review of Section [m0039_Uniform_Plane_Waves_Characteristics] (“Uniform Plane Waves: Characteristics”) is recommended before reading further. Consider the following uniform plane wave, described in terms of the phasor representation of its electric field intensity: \[\widetilde{\bf E} = \hat{\bf x}E_0 e^{-j\beta z} \label{m0041_eE}\] Here \(E_0\) is a complex-valued constant associated with the source of the wave, and \(\beta\) is the positive real-valued propagation constant. Therefore, the wave is propagating in the \(+\hat{\bf z}\) direction in lossless media. The first thing that should be apparent is that the amount of power conveyed by this wave is infinite. The reason is as follows. If the power passing through any finite area is greater than zero, then the total power must be infinite because, for a uniform plane wave, the electric and magnetic field intensities are constant over a plane of infinite area. In practice, we never encounter this situation because all practical plane waves are only “locally planar” (see Section [m0142_Types_of_Waves] for a refresher on this idea). Nevertheless, we seek some way to express the power associated with such waves. The solution is not to seek total power, but rather power per unit area. This quantity is known as the spatial power density, or simply “power density,” and has units of W/m\(^2\). Then, if we are interested in total power passing through some finite area, we may simply integrate the power density over this area. Let’s skip to the answer, and then consider where this answer comes from.
It turns out that the instantaneous power density of a uniform plane wave is the magnitude of the Poynting vector \[{\bf S} \triangleq {\bf E} \times {\bf H} \label{m0041_eS}\] Note that this equation is dimensionally correct; i.e. the units of \({\bf E}\) (V/m) times the units of \({\bf H}\) (A/m) yield the units of spatial power density (V\(\cdot\)A/m\(^2\), which is W/m\(^2\)). Also, the direction of \({\bf E} \times {\bf H}\) is in the direction of propagation (reminder: Section [m0039_Uniform_Plane_Waves_Characteristics]), which is the direction in which the power is expected to flow. Thus, we have some compelling evidence that \(\left|{\bf S}\right|\) is the power density we seek. However, this is not proof – for that, we require the Poynting Theorem, which is a bit outside the scope of the present section, but is addressed in the “Additional Reading” at the end of this section. A bit later we’re going to need to know \({\bf S}\) for a uniform plane wave, so let’s work that out now. From the plane wave relationships (Section [m0039_Uniform_Plane_Waves_Characteristics]) we find that the magnetic field intensity associated with the electric field in Equation \ref{m0041_eE} is \[\widetilde{\bf H} = \hat{\bf y}\frac{E_0}{\eta} e^{-j\beta z} \label{m0041_eH}\] where \(\eta=\sqrt{\mu/\epsilon}\) is the real-valued impedance of the medium. Let \(\psi\) be the phase of \(E_0\); i.e., \(E_0 = |E_0|e^{j\psi}\). Then \[{\bf E} = \hat{\bf x}\left|E_0\right|\cos\left( \omega t - \beta z + \psi \right)\] and \[{\bf H} = \hat{\bf y}\frac{\left|E_0\right|}{\eta}\cos\left( \omega t - \beta z + \psi \right)\] Now applying Equation \ref{m0041_eS}, \[{\bf S} =\hat{\bf z} \frac{|E_0|^2}{\eta} \cos^2\left( \omega t - \beta z + \psi \right)\] As noted earlier, \(\left|{\bf S}\right|\) is only the instantaneous power density, which is still not quite what we are looking for. What we are actually looking for is the time-average power density \(S_{ave}\) – that is, the average value of \(\left|{\bf S}\right|\) over one period \(T\) of the wave.
This may be calculated as follows: \[S_{ave} = \frac{1}{T}\int_0^T \frac{|E_0|^2}{\eta} \cos^2\left( \omega t - \beta z + \psi \right) \, dt\] Since \(\omega =2\pi f = 2\pi/T\), the definite integral equals \(T/2\). We obtain \[\boxed{ S_{ave} = \frac{|E_0|^2}{2\eta} } \label{m0041_eWEPWD}\]

It is useful to check units again at this point. Note that (V/m)\(^2\) divided by \(\Omega\) is W/m\(^2\), as expected.

Equation \ref{m0041_eWEPWD} is the time-average power density (units of W/m\(^2\)) associated with a sinusoidally-varying uniform plane wave in lossless media.

Note that Equation \ref{m0041_eWEPWD} is analogous to a well-known result from electric circuit theory. Recall that the time-average power \(P_{ave}\) (units of W) associated with a voltage phasor \(\widetilde{V}\) across a resistance \(R\) is \[P_{ave} = \frac{|\widetilde{V}|^2}{2R} \label{m0041_ePWLM-EA}\] which closely resembles Equation \ref{m0041_eWEPWD}. The result is also analogous to the result for a voltage wave on a transmission line (Section [m0090_Power_Flow_on_Transmission_Lines]), for which: \[P_{ave} = \frac{|V_0^+|^2}{2Z_0}\] where \(V_0^+\) is a complex-valued constant representing the magnitude and phase of the voltage wave, and \(Z_0\) is the characteristic impedance of the transmission line.

Here is a good point at which to identify a common pitfall. \(|E_0|\) and \(|\widetilde{V}|\) are the peak magnitudes of the associated real-valued physical quantities. However, these quantities are also routinely given as root mean square (“rms”) quantities. Peak magnitudes are greater by a factor of \(\sqrt{2}\), so Equation \ref{m0041_eWEPWD} expressed in terms of the rms quantity lacks the factor of \(1/2\).

Example \(\PageIndex{1}\): Power density of a typical radio wave.

A radio wave transmitted from a distant location may be perceived locally as a uniform plane wave if there is no nearby structure to scatter the wave; a good example of this is the wave arriving at the user of a cellular telephone in a rural area with no significant terrain scattering.
The range of possible signal strengths varies widely, but a typical value of the electric field intensity arriving at the user’s location is 10 \(\mu\)V/m rms. What is the corresponding power density?

Solution

From the problem statement, \(|E_0|=10~\mu\)V/m rms. We assume propagation occurs in air, which is indistinguishable from free space at cellular frequencies, so \(\eta \cong 376.7~\Omega\). If we use Equation \ref{m0041_eWEPWD}, then we must first convert \(|E_0|\) from rms to peak magnitude, which is done by multiplying by \(\sqrt{2}\). Thus: \[S_{ave} = \frac{\left(\sqrt{2} \cdot 10~\mu\mbox{V/m}\right)^2}{2 \cdot 376.7~\Omega} \cong 0.265~\mbox{pW/m}^2\] Alternatively, we can just use a version of Equation \ref{m0041_eWEPWD} which is appropriate for rms units: \[S_{ave} = \frac{\left(10~\mu\mbox{V/m rms}\right)^2}{376.7~\Omega} \cong 0.265~\mbox{pW/m}^2\] Either way, we obtain the correct answer, 0.265 pW/m\(^2\) (that’s picowatts per square meter).

Considering the prevalence of phasor representation, it is useful to have an alternative form of the Poynting vector which yields time-average power by operating directly on field quantities in phasor form. This is \({\bf S}_{ave}\), defined as: \[{\bf S}_{ave} \triangleq \frac{1}{2} \mbox{Re} \left\{ \widetilde{\bf E} \times \widetilde{\bf H}^* \right\}\] (Note that the magnetic field intensity phasor is conjugated.) The above expression gives the expected result for a uniform plane wave. Using Equations \ref{m0041_eE} and \ref{m0041_eH}, we find \[{\bf S}_{ave} = \frac{1}{2} \mbox{Re} \left\{ \left( \hat{\bf x}E_0 e^{-j\beta z} \right) \times \left( \hat{\bf y}\frac{E_0}{\eta} e^{-j\beta z} \right)^* \right\}\] which yields \[{\bf S}_{ave} = \hat{\bf z}\frac{|E_0|^2}{2\eta}\] as expected.

Contributors

Ellingson, Steven W. (2018) Electromagnetics, Vol. 1. Blacksburg, VA: VT Publishing. https://doi.org/10.21061/electromagnetics-vol-1 Licensed with CC BY-SA 4.0 https://creativecommons.org/licenses/by-sa/4.0.
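The worked example above can be checked numerically. The following is a minimal sketch (mine, not from the textbook), assuming the free-space impedance \(\eta \approx 376.73~\Omega\); the function name is my own:

```python
# Hedged sketch: numerically checking the worked example above.
# Assumes free-space intrinsic impedance eta ~ 376.73 ohms.
import math

ETA_0 = 376.73  # intrinsic impedance of free space, ohms (assumed)

def power_density_from_rms(e_rms):
    """Time-average power density (W/m^2) of a uniform plane wave,
    given the rms electric field magnitude (V/m).  Equivalent to
    |E0|^2 / (2*eta) with peak magnitude |E0| = sqrt(2)*e_rms."""
    e_peak = math.sqrt(2.0) * e_rms
    return e_peak**2 / (2.0 * ETA_0)

s_ave = power_density_from_rms(10e-6)   # 10 uV/m rms, as in the example
print(f"S_ave = {s_ave * 1e12:.3f} pW/m^2")
```

This reproduces the example's 0.265 pW/m²; note the rms form simply drops the factor of 1/2 along with the factor of \(\sqrt{2}\) squared.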
Be careful: The quantities power spectral density (W/Hz) and power flux density (W/(m\(^2\cdot\)Hz)) are also sometimes referred to as “power density.” In this section, we will limit the scope to spatial power density (W/m\(^2\)).
We have seen how regular expressions can be used to generate languages mechanically. How might languages be recognized mechanically? The question is of interest because if we can mechanically recognize languages like L = {all legal C++ programs that will not go into infinite loops on any input}, then it would be possible to write uber-compilers that can do semantic error-checking like testing for infinite loops, in addition to the syntactic error-checking they currently do. What formalism might we use to model what it means to recognize a language “mechanically”? We look for inspiration to a language-recognizer with which we are all familiar, and which we’ve already in fact mentioned: a compiler. Consider how a C++ compiler might handle recognizing a legal if statement. Having seen the word if, the compiler will be in a state or phase of its execution where it expects to see a ‘(’; in this state, any other character will put the compiler in a “failure” state. If the compiler does in fact see a ‘(’ next, it will then be in an “expecting a boolean condition” state; if it sees a sequence of symbols that make up a legal boolean condition, it will then be in an “expecting a ‘)’” state; and then “expecting a ‘{’ or a legal statement”; and so on. Thus one can think of the compiler as being in a series of states; on seeing a new input symbol, it moves on to a new state; and this sequence of transitions eventually leads to either a “failure” state (if the if statement is not syntactically correct) or a “success” state (if the if statement is legal). We isolate these three concepts—states, input-inspired transitions from state to state, and “accepting” vs “non-accepting” states— as the key features of a mechanical language-recognizer, and capture them in a model called a finite-state automaton. (Whether this is a successful distillation of the essence of mechanical language recognition remains to be seen; the question will be taken up later in this chapter.) 
A finite-state automaton (FSA), then, is a machine which takes, as input, a finite string of symbols from some alphabet Σ. There is a finite set of states in which the machine can find itself. The state it is in before consuming any input is called the start state. Some of the states are final or accepting. If the machine ends in such a state after completely consuming an input string, the string is said to be accepted by the machine. The actual functioning of the machine is described by something called a transition function, which specifies what happens if the machine is in a particular state and looking at a particular input symbol. (“What happens” means “in which state does the machine end up”.)

Example 3.9. Below is a table that describes the transition function of a finite-state automaton with states p, q, and r, on inputs 0 and 1. The table indicates, for example, that if the FSA were in state p and consumed a 1, it would move to state q.

state | on 0 | on 1
p     | p    | q
q     | q    | r
r     | r    | r

FSAs actually come in two flavours depending on what properties you require of the transition function. We will look first at a class of FSAs called deterministic finite-state automata (DFAs). In these machines, the current state of the machine and the current input symbol together determine exactly which state the machine ends up in: for every <current state, current input symbol> pair, there is exactly one possible next state for the machine.

Definition 3.5. Formally, a deterministic finite-state automaton M is specified by 5 components: \(M=\left(Q, \Sigma, q_{0}, \delta, F\right)\) where

Q is a finite set of states;

Σ is an alphabet called the input alphabet;

\(q_{0} \in Q\) is a state which is designated as the start state;

F is a subset of Q; the states in F are designated as final or accepting states;

δ is a transition function that takes <state, input symbol> pairs and maps each one to a state: \(\delta : Q \times \Sigma \rightarrow Q\).
To say \(\delta(q, a)=q^{\prime}\) means that if the machine is in state q and the input symbol a is consumed, then the machine will move into state q′. The function δ must be a total function, meaning that δ(q,a) must be defined for every state q and every input symbol a. (Recall also that, according to the definition of a function, there can be only one output for any particular input. This means that for any given q and a, δ(q,a) can have only one value. This is what makes the finite-state automaton deterministic: given the current state and input symbol, there is only one possible move the machine can make.)

Example 3.10. The transition function described by the table in the preceding example is that of a DFA. If we take p to be the start state and r to be a final state, then the formal description of the resulting machine is M = ({p,q,r},{0,1},p,δ,{r}), where δ is given by \(\begin{array}{ll}{\delta(p, 0)=p} & {\delta(p, 1)=q} \\ {\delta(q, 0)=q} & {\delta(q, 1)=r} \\ {\delta(r, 0)=r} & {\delta(r, 1)=r}\end{array}\)

The transition function δ describes only individual steps of the machine as individual input symbols are consumed. However, we will often want to refer to “the state the automaton will be in if it starts in state q and consumes input string w”, where w is a string of input symbols rather than a single symbol. Following the usual practice of using ∗ to designate “0 or more”, we define δ∗(q,w) as a convenient shorthand for “the state that the automaton will be in if it starts in state q and consumes the input string w”. For any string, it is easy to see, based on δ, what steps the machine will make as those symbols are consumed, and what δ∗(q, w) will be for any q and w. Note that if no input is consumed, a DFA makes no move, and so δ∗(q, ε) = q for any state q.

Example 3.11. Let M be the automaton in the preceding example.
Then, for example:

\(\delta^{*}(p, 001)=q,\) since \(\delta(p, 0)=p, \delta(p, 0)=p,\) and \(\delta(p, 1)=q\)

\(\delta^{*}(p, 01000)=q\)

\(\delta^{*}(p, 1111)=r\)

\(\delta^{*}(q, 0010)=r\)

We have divided the states of a DFA into accepting and non-accepting states, with the idea that some strings will be recognized as “legal” by the automaton, and some not. Formally:

Definition 3.6. Let \(M=\left(Q, \Sigma, q_{0}, \delta, F\right)\) be a deterministic finite-state automaton. A string \(w \in \Sigma^{*}\) is accepted by \(M\) iff \(\delta^{*}\left(q_{0}, w\right) \in F\). (Don’t get confused by the notation. Remember, it’s just a shorter and neater way of saying “\(w \in \Sigma^{*}\) is accepted by M if and only if the state that M will end up in if it starts in q0 and consumes w is one of the states in F.”) The language accepted by M, denoted L(M), is the set of all strings \(w \in \Sigma^{*}\) that are accepted by \(M : L(M)=\left\{w \in \Sigma^{*} | \delta^{*}\left(q_{0}, w\right) \in F\right\}\).

Note that we sometimes use a slightly different phrasing and say that a language L is accepted by some machine M. We don’t mean by this that L and maybe some other strings are accepted by M; we mean L = L(M), i.e. L is exactly the set of strings accepted by M.

It may not be easy, looking at a formal specification of a DFA, to determine what language that automaton accepts. Fortunately, the mathematical description of the automaton \(M=\left(Q, \Sigma, q_{0}, \delta, F\right)\) can be neatly and helpfully captured in a picture called a transition diagram. Consider again the DFA of the two preceding examples. It can be represented pictorially as: [transition diagram] The arrow on the left indicates that p is the start state; double circles indicate that a state is accepting. Looking at this picture, it should be fairly easy to see that the language accepted by the DFA M is L(M) = \(\left\{x \in\{0,1\}^{*} | n_{1}(x) \geq 2\right\}\)

Example 3.12. Find the language accepted by the DFA shown below (and describe it using a regular expression!) The start state of M is accepting, which means ε ∈ L(M).
If M is in state \(q_{0}\), a sequence of two a’s or three b’s will move M back to \(q_{0}\) and hence be accepted. So \(L(M)=L\left((a a | b b b)^{*}\right)\). The state \(q_{4}\) in the preceding example is often called a garbage or trap state: it is a non-accepting state which, once reached by the machine, cannot be escaped. It is fairly common to omit such states from transition diagrams. For example, one is likely to see the diagram: Note that this cannot be a complete DFA, because a DFA is required to have a transition defined for every state-input pair. The diagram is “short for” the full diagram: As well as recognizing what language is accepted by a given DFA, we often want to do the reverse and come up with a DFA that accepts a given language. Building DFAs for specified languages is an art, not a science. There is no algorithm that you can apply to produce a DFA from an English-language description of the set of strings the DFA should accept. On the other hand, it is not generally successful, either, to simply write down a half-dozen strings that are in the language and design a DFA to accept those strings—invariably there are strings that are in the language that aren’t accepted, and other strings that aren’t in the language that are accepted. So how do you go about building DFAs that accept all and only the strings they’re supposed to accept? The best advice I can give is to think about relevant characteristics that determine whether a string is in the language or not, and to think about what the possible values or “states” of those characteristics are; then build a machine that has a state corresponding to each possible combination of values of relevant characteristics, and determine how the consumption of inputs affects those values. I’ll illustrate what I mean with a couple of examples. Example 3.13. 
Find a DFA with input alphabet \(\Sigma=\{a, b\}\) that accepts the language \(L=\left\{w \in \Sigma^{*} | n_{a}(w) \text { and } n_{b}(w) \text { are both even }\right\}\) The characteristics that determine whether or not a string w is in L are the parity of \(n_{a}(w)\) and \(n_{b}(w)\). There are four possible combinations of “values” for these characteristics: both numbers could be even, both could be odd, the first could be odd and the second even, or the first could be even and the second odd. So we build a machine with four states \(q_{1}, q_{2}, q_{3}, q_{4}\) corresponding to the four cases. We want to set up δ so that the machine will be in state q1 exactly when it has consumed a string with an even number of a’s and an even number of b’s, in state \(q_{2}\) exactly when it has consumed a string with an odd number of a’s and an odd number of b’s, and so on. To do this, we first make the state \(q_{1}\) into our start state, because the DFA will be in the start state after consuming the empty string ε, and ε has an even number (zero) of both a’s and b’s. Now we add transitions by reasoning about how the parity of a’s and b’s is changed by additional input. For instance, if the machine is in \(q_{1}\) (meaning an even number of a’s and an even number of b’s have been seen) and a further a is consumed, then we want the machine to move to state \(q_{3}\), since the machine has now consumed an odd number of a’s and still an even number of b’s. So we add the transition \(\delta\left(q_{1}, a\right)=q_{3}\) to the machine. Similarly, if the machine is in \(q_{2}\) (meaning an odd number of a’s and an odd number of b’s have been seen) and a further b is consumed, then we want the machine to move to state \(q_{3}\) again, since the machine has still consumed an odd number of a’s, and now an even number of b’s. So we add the transition \(\delta\left(q_{2}, b\right)=q_{3}\) to the machine. 
Similar reasoning produces a total of eight transitions, one for each state-input pair. Finally, we have to decide which states should be final states. The only state that corresponds to the desired criteria for the language L is \(q_{1}\), so we make \(q_{1}\) a final state. The complete machine is shown below.

Example 3.14. Find a DFA with input alphabet Σ = {a, b} that accepts the language \(L=\left\{w \in \Sigma^{*} | n_{a}(w) \text { is divisible by } 3\right\}\). The relevant characteristic here is of course whether or not the number of a’s in a string is divisible by 3, perhaps suggesting a two-state machine. But in fact, there is more than one way for a number to not be divisible by 3: dividing the number by 3 could produce a remainder of either 1 or 2 (a remainder of 0 corresponds to the number in fact being divisible by 3). So we build a machine with three states \(q_{0}, q_{1}, q_{2}\), and add transitions so that the machine will be in state \(q_{0}\) exactly when the number of a’s it has consumed is evenly divisible by 3, in state \(q_{1}\) exactly when the number of a’s it has consumed is equivalent to 1 mod 3, and similarly for \(q_{2}\). State \(q_{0}\) will be the start state, as ε has 0 a’s and 0 is divisible by 3. The completed machine is shown below. Notice that because the consumption of a b does not affect the only relevant characteristic, b’s do not cause changes of state.

Example 3.15. Find a DFA with input alphabet Σ = {a, b} that accepts the language \(L=\left\{w \in \Sigma^{*} | w \text { contains three consecutive a's }\right\}\) Again, it is not quite so simple as making a two-state machine where the states correspond to “have seen aaa” and “have not seen aaa”. Think dynamically: as you move through the input string, how do you arrive at the goal of having seen three consecutive a’s?
You might have seen two consecutive a’s and still need a third, or you might just have seen one a and be looking for two more to come immediately, or you might just have seen a b and be right back at the beginning as far as seeing 3 consecutive a’s goes. So once again there will be three states, with the “last symbol was not an a” state being the start state. The complete automaton is shown below.

Exercises

1. Give DFAs that accept the following languages over Σ = {a, b}.

a) \(L_{1}=\{x | x \text { contains the substring aba }\}\)

b) \(L_{2}=L\left(a^{*} b^{*}\right)\)

c) \(L_{3}=\left\{x | n_{a}(x)+n_{b}(x) \text { is even }\right\}\)

d) \(L_{4}=\left\{x | n_{a}(x) \text { is a multiple of } 5\right\}\)

e) \(L_{5}=\{x | x \text { does not contain the substring } abb\}\)

f) \(L_{6}=\left\{x | x \text { has no } a\text{'s} \text { in the even positions }\right\}\)

g) \(L_{7}=L\left(a a^{*} | a b a^{*} b^{*}\right)\)

2. What languages do the following DFAs accept?

3. Let Σ = {0, 1}. Give a DFA that accepts the language \(L=\left\{x \in \Sigma^{*} | x \text { is the binary representation of an integer divisible by } 3\right\}\)
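The definitions in this chapter (δ, δ∗, and acceptance) translate directly into code. Here is a sketch (mine, not from the text) of a simulator for the DFA M of Example 3.10; the function names are my own:

```python
# Hypothetical sketch of a simulator for M = ({p,q,r}, {0,1}, p, delta, {r})
# from Example 3.10.  delta_star and accepts follow Definitions 3.5-3.6.

delta = {
    ('p', '0'): 'p', ('p', '1'): 'q',
    ('q', '0'): 'q', ('q', '1'): 'r',
    ('r', '0'): 'r', ('r', '1'): 'r',
}

def delta_star(state, w):
    """Extended transition function: the state reached from `state` after
    consuming the string w.  With no input, delta_star(q, '') == q."""
    for symbol in w:
        state = delta[(state, symbol)]
    return state

def accepts(w, start='p', final=frozenset({'r'})):
    """w is accepted iff delta_star(q0, w) is a final state."""
    return delta_star(start, w) in final

print(delta_star('p', '001'))   # -> q, as in Example 3.11
print(accepts('1111'))          # True: the string has at least two 1s
print(accepts('1000'))          # False: only one 1
```

Running this reproduces the δ∗ values of Example 3.11 and the characterization L(M) = {x | n₁(x) ≥ 2}.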
It is well-known that a linear combination of 2 random normal variables is also a random normal variable. Are there any common non-normal distribution families (e.g., Weibull) that also share this property? There seem to be many counterexamples. For instance, a linear combination of uniforms is not typically uniform. In particular, are there any non-normal distribution families where both of the following are true: A linear combination of two random variables from that family is equivalent to some distribution in that family. The resulting parameter(s) can be identified as a function of the original parameters and the constants in the linear combination. I'm especially interested in this linear combination: $Y = X_1 \cdot w + X_2 \cdot \sqrt{(1-w^2)}$ where $X_1$ and $X_2$ are sampled from some non-normal family, with parameters $\theta_1$ and $\theta_2$, and $Y$ comes from the same non-normal family with parameter $\theta_Y = f(\theta_1, \theta_2, w)$. I'm describing a distribution family with 1 parameter for simplicity, but I'm open to distribution families with multiple parameters. Also, I'm looking for example(s) where there is plenty of parameter space on $\theta_1$ and $\theta_2$ to work with for simulation purposes. If you can only find an example that works for some very specific $\theta_1$ and $\theta_2$, that would be less helpful.
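One family with exactly this closure property is the stable family (the normal is the α = 2 special case). For example, if $X_1, X_2$ are independent standard Cauchy (α = 1 stable), then $aX_1 + bX_2$ is again Cauchy with scale $|a| + |b|$, so for your combination $\theta_Y = w + \sqrt{1-w^2}$. A simulation sketch (my own example, not from the question) checking this via the interquartile range, which for a Cauchy with scale $s$ equals $2s$:

```python
# Sketch: the Cauchy family is closed under linear combinations.
# If X1, X2 ~ standard Cauchy (independent), a*X1 + b*X2 is Cauchy
# with scale |a| + |b|, so Y = w*X1 + sqrt(1-w^2)*X2 has scale
# w + sqrt(1-w^2).  We check via the empirical interquartile range.
import math
import random

random.seed(0)

def standard_cauchy():
    # Inverse-CDF sampling: tan(pi*(U - 1/2)) is standard Cauchy.
    return math.tan(math.pi * (random.random() - 0.5))

w = 0.6
scale_theory = w + math.sqrt(1 - w * w)          # 0.6 + 0.8 = 1.4

n = 200_000
y = sorted(w * standard_cauchy() + math.sqrt(1 - w * w) * standard_cauchy()
           for _ in range(n))
# For Cauchy(0, s) the quartiles sit at -s and +s, so IQR = 2s.
iqr = y[3 * n // 4] - y[n // 4]
print(iqr / 2, "vs theoretical scale", scale_theory)
```

The parameter space is unrestricted (any location/scale), which should make it convenient for simulation; the trade-off is that Cauchy has no mean or variance.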
Difference between revisions of "Lower attic" From Cantor's Attic Line 1: Line 1: [[File:SagradaSpiralByDavidNikonvscanon.jpg | right | Sagrada Spiral photo by David Nikonvscanon]] [[File:SagradaSpiralByDavidNikonvscanon.jpg | right | Sagrada Spiral photo by David Nikonvscanon]] − Welcome to the lower attic, where + Welcome to the lower attic, where the ordinals . − + * [[aleph_1 | $\omega_1$]], the first uncountable ordinal, and the other uncountable cardinals of the [[middle attic]] * [[aleph_1 | $\omega_1$]], the first uncountable ordinal, and the other uncountable cardinals of the [[middle attic]] Line 12: Line 11: * [[admissible#relativized_admissible | $\omega_1^x$]] * [[admissible#relativized_admissible | $\omega_1^x$]] * [[admissible]] ordinals * [[admissible]] ordinals + * [[Gamma | $\Gamma$]] * [[Gamma | $\Gamma$]] − * [[epsilon naught | $\epsilon_0$]] and the hierarchy of [[epsilon naught#epsilon_numbers | $\epsilon_\alpha$ numbers]] * [[epsilon naught | $\epsilon_0$]] and the hierarchy of [[epsilon naught#epsilon_numbers | $\epsilon_\alpha$ numbers]] * the [[small countable ordinals]], those below [[epsilon naught | $\epsilon_0$]] * the [[small countable ordinals]], those below [[epsilon naught | $\epsilon_0$]] Revision as of 20:24, 28 December 2011 Welcome to the lower attic, where the countably infinite ordinals climb ever higher, one upon another, in an endlessly self-similar reflecting ascent, whose life-giving everlasting beauty enraptures us. $\omega_1$, the first uncountable ordinal, and the other uncountable cardinals of the middle attic stable ordinals The ordinals of infinite time Turing machines, including $\omega_1^x$ admissible ordinals Church-Kleene $\omega_1^{ck}$, the supremum of the computable ordinals $\Gamma$ $\epsilon_0$ and the hierarchy of $\epsilon_\alpha$ numbers the small countable ordinals, those below $\epsilon_0$ Hilbert's hotel $\omega$, the smallest infinity down to the subattic, containing very large finite numbers
For a harmonic oscillator in one dimension, there is an uncertainty relation between the number of quanta $n$ and the phase of the oscillation $\phi$. There are all kinds of technical complications arising from the fact that $\phi$ can't be made into a single-valued and continuous operator (Carruthers 1968), but roughly speaking, you can write an uncertainty relation like this: $\Delta n \Delta \phi \ge 1$ The fact that the right-hand side is 1 rather than $\hbar$ is not just from using natural units such that $\hbar=1$; we can see this because $n$ and $\phi$ are both unitless. This suggests that this uncertainty relation is a classical one, like the time-frequency uncertainty relation that is the reason for referring to data transmission speeds in terms of bandwidth. However, the only physical interpretation I know of seems purely quantum-mechanical (Peierls 1979): [...] any device capable of measuring the field, including its phase, must be capable of altering the number of quanta by an indeterminate amount If this uncertainty relation is classical, what is its classical interpretation? If it's not classical, then why doesn't the restriction vanish in the limit $\hbar\rightarrow0$? Carruthers and Nieto, "Phase and Angle Variables in Quantum Mechanics," Rev Mod Phys 40 (1968) 411 -- can be found online by googling Peierls, Surprises in theoretical physics, 1979
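A purely classical relative of this inequality is the time-frequency uncertainty product for a pulse envelope, $\Delta t \, \Delta \omega \ge 1/2$, with equality for a Gaussian; no $\hbar$ appears. A numerical check (my own illustration, not from the cited papers):

```python
# Sketch: the classical time-frequency uncertainty product for a
# Gaussian envelope, Delta_t * Delta_omega = 1/2 (the minimum),
# checked with a discrete Fourier transform.  No hbar anywhere.
import numpy as np

n = 4096
t = np.linspace(-40.0, 40.0, n, endpoint=False)
dt = t[1] - t[0]
sigma = 1.0
g = np.exp(-t**2 / (2.0 * sigma**2))          # amplitude envelope

def rms_width(x, weight):
    """RMS width of x under the (unnormalized) weight distribution."""
    p = weight / weight.sum()
    mean = (x * p).sum()
    return np.sqrt(((x - mean) ** 2 * p).sum())

G = np.fft.fftshift(np.fft.fft(g))
omega = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=dt))

dt_rms = rms_width(t, np.abs(g) ** 2)          # sigma / sqrt(2)
dw_rms = rms_width(omega, np.abs(G) ** 2)      # 1 / (sigma * sqrt(2))
print(dt_rms * dw_rms)                         # ~ 0.5
```

The unitless product here mirrors the unitless right-hand side of $\Delta n \, \Delta \phi \ge 1$, which is consistent with the relation having a classical reading.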
Homework Statement

A yo-yo is placed on a conveyor belt accelerating at ##a_C = 1~m/s^2## to the left. The end of the rope of the yo-yo is fixed to a wall on the right. The moment of inertia is ##I = 200~kg \cdot m^2## and the mass is ##m = 100~kg##. The radius of the outer circle is ##R = 2~m## and the radius of the inner circle is ##r = 1~m##. The coefficient of static friction is ##0.4## and the coefficient of kinetic friction is ##0.3##. Find the initial tension in the rope and the angular acceleration of the yo-yo.

Homework Equations

##T - f = ma##
##\tau_P = -fr##
##\tau_G = Tr##
##I_P = I + mr^2##
##I_G = I + mR^2##
##a = \alpha R##

First off, I was wondering if the acceleration of the conveyor belt can be considered a force, and I'm not exactly sure how to use Newton's second law if the object the forces act on is itself on an accelerating surface. Also, I don't know whether it rolls with or without slipping. I thought I could use ##a_C = \alpha R## for the angular acceleration, but the acceleration of the conveyor belt is not the only source of acceleration, since the friction and the tension also play a role. I can't find a way to combine these equations to get the answer.
I want to do a 3-dimensional FFT of this function $\frac{\cos (x) \cos (y) \cos (z)-\sin (x) \sin (y) \sin (z)}{\left((1.0001+\sin (y)+\cos (z))^2+(0.0001+\cos (x)+\sin (z))^2+(0.0001+\sin (x)+\cos (y))^2\right)^{3/2}}$ since it looks intractable via analytical Fourier expansion. Let's denote the number of numerical sampling points in each dimension as $N$. Here's the Mathematica code:

nn = 10; step = (2 \[Pi])/nn;
mx0 = 1.0001; my0 = 0.0001; mz0 = 0.0001;
data = Table[(Cos[x] Cos[y] Cos[z] - Sin[x] Sin[y] Sin[z])/((mz0 + Cos[y] + Sin[x])^2 + (mx0 + Cos[z] + Sin[y])^2 + (my0 + Cos[x] + Sin[z])^2)^(3/2), {x, 0, 2 \[Pi] - step, step}, {y, 0, 2 \[Pi] - step, step}, {z, 0, 2 \[Pi] - step, step}];
s = Fourier[data, FourierParameters -> {-1, -1}];
s[[1, 1, 1]]

As far as I've tried, using the Fast Fourier Transform routine from either the C++ MKL library or Mathematica, the transformation result doesn't converge even at $N=500$; in fact it oscillates extravagantly with respect to $N$. I checked the programs with many other non-singular functions, and they turned out fine. So I guess the problem may be caused by the special form of this function (the denominator can nearly vanish at some points). I tried $\frac{1}{0.9-\sin{(x+y+z)}}$. It doesn't oscillate so much, but still considerably. Can anyone shed some light on this problem? Thanks in advance!
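The slow convergence is expected: FFT (trapezoidal) estimates of Fourier coefficients of a smooth periodic function converge like $e^{-\alpha N}$, where $\alpha$ is the distance of the nearest complex singularity from the real axis. When the denominator can come within $10^{-4}$ of zero, that distance is tiny, so enormous $N$ is needed. A 1-D sketch (my own illustration, not the question's function) using $f(x) = 1/(a - \sin x)$, whose exact mean coefficient is $1/\sqrt{a^2-1}$ and whose pole distance is $\operatorname{arccosh}(a)$:

```python
# Sketch: FFT estimates of Fourier coefficients converge geometrically,
# at a rate set by the distance of the nearest complex pole from the
# real axis.  For f(x) = 1/(a - sin x), the exact k=0 coefficient is
# 1/sqrt(a^2 - 1) and the pole distance is arccosh(a) -- tiny when a
# is barely above 1, as with the 1.0001 in the question.
import numpy as np

def c0_error(a, n):
    """Error of the n-point FFT estimate of the k=0 Fourier coefficient."""
    x = 2.0 * np.pi * np.arange(n) / n
    c0_fft = np.mean(1.0 / (a - np.sin(x)))     # the FFT's k=0 output
    return abs(c0_fft - 1.0 / np.sqrt(a * a - 1.0))

for a in (1.1, 1.0001):
    print(f"a={a}: pole distance {np.arccosh(a):.4f}, "
          f"error at N=256: {c0_error(a, 256):.3e}")
```

With $a=1.1$ the estimate is converged to machine precision at $N=256$; with $a=1.0001$ (pole distance $\approx 0.014$) the error is still of order one, matching the non-convergence you see at $N=500$ in 3-D.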
=== Note to Astronomers ===

Yes, we will slightly occult some of your measurements, though we won't flash your images like Iridium. We will also broadcast our array ephemerides, and deliver occultation schedules accurate to the microsecond, so you can include that in your luminosity calculations. Someday, server sky is where you will perform those calculations, rather than in your own coal-powered (and haze-producing) computer.

=== Night Side Maneuvers ===

We can minimize night light pollution, and advance perigee against light pressure orbit distortion, by turning the thinsat as we approach eclipse. The overall goal is to perform 1 complete rotation of the thinsat per orbit, with it perpendicular to the sun on the day side of the earth, but turning it by varying amounts on the night side.

Another advantage of the turn is that if thinsat maneuverability is destroyed by radiation or a collision on the night side, it will come out of the night side with a slow tumble that won't be corrected. The passive radar signature of the tumble will help identify the destroyed thinsat to other thinsats in the array, allowing another sacrificial thinsat to perform a "rendezvous and de-orbit". If the destroyed thinsat is in shards, the shards will tumble. The tumbling shards (or a continuously tumbling thinsat) will eventually fall out of the normal orbit, no longer get J_2 correction, and the thinsat orbit will "eccentrify", decay, and reenter. This is the fail-safe way the arrays will reenter, if all active control ceases.

=== Maneuvering thrust and satellite power ===

Neglecting tides, the synodic angular velocity of the m288 orbit is \omega = 4.3633e-4 rad/s = 0.025°/s. The angular acceleration of a thinsat is 13.056e-6 rad/s² = 7.481e-4 °/s² with a sun angle of 0°, and 3.740e-4 °/s² at a sun angle of 60°.
Because of tidal forces, a thinsat entering eclipse will start to turn towards sideways alignment with the center of the earth; it will come out of eclipse at a different velocity and angle than it went in with. If the thinsat is rotating at \omega and is either tangential or perpendicular to the gravity vector, it will not turn while it passes into eclipse. Otherwise, the tidal angular acceleration is \ddot\theta = (3/2) \omega^2 \sin( 2 \delta ), where \delta is the angle to the tangent of the orbit. If we enter eclipse with the thinsat not turning, and oriented directly to the sun, then \delta = 30°.

=== Three Strategies and a Worst Case Failure Mode ===

There are many ways to orient thinsats in the night sky, with tradeoffs between light power harvest, light pollution, and orbit eccentricity. If we reduce power harvest, we will need to launch more thinsats to compensate, which makes more problems if the system fails. I will present three strategies for light harvest and nightlight pollution. The actual strategies chosen will be a blend of those.

=== Tumbling ===

If things go very wrong, thinsats will be out of control and tumbling. In the long term, the uncontrolled thinsats will probably orient flat to the orbital plane, and reflect very little light into the night sky, but in the short term (less than decades), they will be oriented in all directions. This is equivalent to mapping the reflective area of front and back (2πR²) onto a sphere (4πR²). Light with intensity I shining onto a sphere of radius R is reflected evenly in all directions. So if the sphere intercepts πR²I units of light, it scatters eIR²/4 units of light (e is albedo) per steradian in all directions. While we will try to design our thinsats with low albedo (high light absorption on the front, high emissivity on the back), we can assume they will get sanded down and made more reflective by space debris, and that they will get broken into fragments of glass with shiny edges, adding to the albedo.
Assume the average albedo is 0.5, and assume the light scattering is spherical for tumbling. Source for the above animation: g400.c

=== Three design orientations ===

All three orientations shown are oriented perpendicularly in the daytime sky. Max remains perpendicular in the night sky, Min is oriented vertically in the night sky, and Zero is edge-on to the terminator in the night sky. All lose orientation control and are tilted by tidal forces in eclipse - the compensatory thrusts are not shown. Min and Zero are accelerated into a slow turn before eclipse, so they come out of the eclipse in the correct orientation. In all cases, there will probably be some disorientation and sun-seeking coming out of eclipse, until each thinsat calibrates to the optimum inertial turn rate during eclipse. So, there may be a small bit of sky glow at the 210° position, directly overhead at 2am and visible in the sky between 10pm and 6am.

=== Max NLP: Full power night sky coverage, maximum night light pollution ===

The most power is harvested if the thinsats are always oriented perpendicular to the sun. During the half of their orbit in the night sky, there will be some diffuse reflection to the side, and some of that will land in the earth's night sky. The illumination is maximum along the equator. For the M288 orbit, about 1/6th of the orbit is eclipsed, and 1/2 of the orbit is in daylight with the diffuse (Lambertian) reflection scattering towards the sun and onto the day side of the earth. Only the two "horns" of the orbit, the first between 90° and 150° (6pm to 10pm) and the second between 210° and 270° (2am to 6am), will reflect light into the night sky. The light harvest averages to 83% around the orbit.

This is the worst case for night sky illumination. Though it is tempting to run thinsats in this regime, extracting the maximum power per thinsat, it is also the worst case for eccentricity caused by light pressure, and the thinsats must be heavier to reduce that eccentricity.
Min NLP: Partial night sky coverage, some night light pollution

This maneuver will put some scattered light into the night sky, but not much compared to perpendicular solar illumination all the way into shadow. In the worst case, assume that the surface has an albedo of 0.5 (typical solar cells with an antireflective coating are less than 0.3) and that the reflected light is entirely Lambertian (isotropic), without specular reflections (which will all be away from the earth). At a 60° angle, just before shadow, the light emitted by the front surface will be 1366W/m² × 0.5 (albedo) × 0.5 (cos 60°), scattered over 2π steradians, so the illumination will be 54W/m²-steradian just before entering eclipse. Estimate that the light pollution varies from 0W to 54W between 90° and 150°, and that the average light pollution is half of 54W for 1/3 of the orbit. Assuming an even distribution of thinsat arrays in the constellation, that works out to an average of 9W/m²-steradian for all thinsats in M288 orbit. The full moon illuminates the night side of the earth with 27mW/m² near the equator. A square meter of thinsat at 6400km distance delivers 9W/6400000², or 0.22 picowatts per square meter of ground, times the total area of thinsats. If thinsat light pollution is restricted to 5% of full moon brightness (1.3mW/m²), then we can have 6000 km² of thinsats up there, at an average of 130 W/m², or about 780GW of thinsats at m288. That is about a million tons of thinsats.
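The chain of estimates above can be checked with a few lines of arithmetic (a sketch; the 6400 km distance and the 5% moon-brightness budget are taken directly from the text):

```python
import math

S = 1366.0                      # solar constant, W/m^2
albedo = 0.5
# Lambertian scattering just before eclipse, at a 60 degree sun angle:
per_sr = S * albedo * math.cos(math.radians(60)) / (2 * math.pi)  # ~54 W/m^2-sr
avg = per_sr / 2 / 3            # half the peak value, over 1/3 of the orbit: ~9

d = 6.4e6                       # distance from thinsat to the ground, m
ground_flux = avg / d**2        # W/m^2 on the ground, per m^2 of thinsat

moon = 27e-3                    # full-moon illumination near the equator, W/m^2
budget = 0.05 * moon            # allowed night light pollution, ~1.35 mW/m^2
area_km2 = budget / ground_flux / 1e6   # total thinsat area within the budget
```

Running this reproduces the numbers in the text: roughly 54 W/m²-steradian at the terminator, a 9 W/m²-steradian constellation average, and an allowed area of about 6000 km².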
The orientation of the thinsat over a 240 minute synodic m288 orbit at the equinox is as follows, relative to the sun:

time (min)   orbit degrees   rotation rate   sun angle     Illumination   Night Light
0 to 60      0° to 90°       0 ω             0°            100%           0W
60 to 100    90° to 150°     1 ω             0° to 60°     100% to 50%    0W to 54W
100 to 140   150° to 210°    4 ω             60° to 300°   Eclipse        0W
140 to 180   210° to 270°    1 ω             300° to 0°    50% to 100%    54W to 0W
180 to 240   270° to 0°      0 ω             0°            100%           0W

The angular velocity change at 0° takes 250/7.481 = 33.4 seconds, and during that time the thinsat turns 0.42°, with negligible effect on thrust or power. The angular velocity change at 60° takes 750/3.74 = 200.5 seconds, and during that time the thinsat turns 12.5°, perhaps from 53.7° to 66.3°, reducing power and thrust from 59% to 40%, a significant change. The actual thrust change versus time will be more complicated (especially with tidal forces), but however it is done, the acceleration must be accomplished before the thinsat enters eclipse. The light harvest averages 78% around the orbit.

Zero NLP: Partial night sky coverage, no night light pollution

In this case, in the night half of the sky the edge of the thinsat is always turned towards the terminator. As long as the thinsats stay in control, they will never produce any nighttime light pollution, because the illuminated side of the thinsat is always pointed away from the night side of the earth. The average illumination fraction is around 68%.
The orientation of the thinsat over a 240 minute synodic m288 orbit at the equinox is as follows, relative to the sun:

time (min)   orbit degrees   average rotation rate    sun angle     Illumination   Night Light
0 to 60      0° to 90°       0 ω                      0°            100%           0W
60 to 100    90° to 150°     1.5 ω                    0° to 90°     100% to 0%     0W
100 to 140   150° to 210°    3 ω (start at 3.333 ω)   90° to 270°   Eclipse        0W
140 to 180   210° to 270°    1.5 ω                    270° to 0°    0% to 100%     0W
180 to 240   270° to 0°      0 ω                      0°            100%           0W

Pedants and thinsat programmers take note: The actual synodic orbit period is 240 minutes and 6.57 seconds long; that results in 2190.44 rather than 2191.44 sidereal orbits per year, accounting for the annual apparent motion of the sun around the sky. The light harvest averages 67% around the orbit. Why would a profit maximizing operator settle for 67% when 83% was possible? Infrared-filtering thinsats reduce launch weight and can use the Zero NLP flip to increase the minimum temperature of a thinsat during eclipses. An IR filtering thinsat in maximum night light pollution mode will have the emissive backside pointed at 2.7K space when it enters eclipse; the thinsat temperature will drop towards 20K if it cannot absorb the 64W/m² of 260K black body radiation reaching it from the earth through the 3.5μm front side infrared filter. The thinsat will become very brittle at those temperatures, and the thermal shock could destroy it. If the high thermal emissivity back side is pointed towards the 260K earth, the temperature will drop to 180K - still challenging, but the much higher thermal mobility may heal atomic-scale damage.

Details of the Zero NLP maneuver

In the night sky, assuming balanced thrusters, only tidal forces act on the thinsat: \ddot\theta = -(3/2) \omega^2 \sin( 2 \theta ).
Integrating numerically from the proper choice of initial rotation rate (10/9ths the average, due to tidal deceleration), we go from 30 degrees "earth relative tilt" at the beginning of eclipse to 150 degrees tilt at the end of eclipse, with the night side of the earth sending 260K infrared at the back side throughout the maneuver. The entire disk of the earth is always in view, and always emits the same amount of infrared to a given radius, but the Lambertian angle of absorption changes.

Power versus angle

Night light pollution versus hour

The night light pollution for 1 Terawatt of thinsats at M288. Mirror the graph for midnight to 6am. Some light is also put into the daytime sky (early morning and late afternoon), but it will be difficult to see in the glare of sunlight. The Zero NLP option puts no light in the night sky, so that curve is far below the bottom of this graph. Source for the above two graphs: nl02.c

Note to Astronomers

Yes, we will slightly occult some of your measurements, though we won't flash your images like Iridium. We will also broadcast our array ephemerides, and deliver occultation schedules accurate to the microsecond, so you can include that in your luminosity calculations. Someday, server sky is where you will perform those calculations, rather than in your own coal powered (and haze producing) computer.
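The eclipse flip can be sketched numerically. This is a simplified integration (assumptions: a 240-minute orbit, an eclipse lasting 1/6 of it, and the 10/9 initial-rate factor quoted above); because the tidal potential is symmetric about 90°, the thinsat should exit the 150° tilt with the same turn rate it entered at 30°:

```python
import math

w = 2 * math.pi / (240 * 60)       # orbital rate for a 240 min orbit, rad/s
T_ecl = 240 * 60 / 6               # eclipse duration: 1/6 of the orbit, s

theta = math.radians(30)           # earth-relative tilt entering eclipse
rate0 = (math.radians(120) / T_ecl) * 10 / 9   # 10/9 of the average turn rate
rate, t, dt = rate0, 0.0, 0.05

# integrate theta'' = -(3/2) w^2 sin(2 theta) until the tilt reaches 150 deg
while theta < math.radians(150):
    rate += -1.5 * w**2 * math.sin(2 * theta) * dt   # tidal torque
    theta += rate * dt                               # semi-implicit Euler step
    t += dt
```

With these assumptions the crossing takes roughly the 40-minute eclipse duration, and the exit rate matches the entry rate to well under a percent.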
Normalized frequency is frequency in units of cycles/sample or radians/sample, commonly used as the frequency axis for the representation of digital signals. When the units are cycles/sample, the sampling rate is 1 (1 cycle per sample) and the unique digital signal in the first Nyquist zone resides between -0.5 and +0.5 cycles per sample. This is the frequency equivalent of representing the time axis in units of samples instead of an actual time interval such as seconds. When the units are radians/sample, the sampling rate is $2\pi$ ($2\pi$ radians per sample) and the unique digital signal in the first Nyquist zone resides between $-\pi$ and $+\pi$ radians per sample. How this comes about can be seen from the following expressions: For an analog signal given as $$x(t)=\sin(2\pi F t)$$ where F is the analog frequency in Hz, when sampled at a sampling frequency of $F_s$ Hz, the sampling interval is $T_s=1/F_s$, so the signal after being sampled is given as: $$x(nT_s)=\sin(2\pi F nT_s) = \sin\left(\frac{2\pi F}{F_s}n\right)$$ where the normalized frequency, either $\frac{F}{F_s}$ in cycles/sample or $\frac{2\pi F}{F_s}$ in radians/sample, is clearly shown. This is illustrated below using $\Omega = 2\pi F$. Update: As @Fat32 points out in the comments, the units for sampling rate $F_s$ in the figure below should be "samples/sec" in order for the normalized frequency to become radians/sample. To visually see the concept of "radians/sample" (and most other DSP concepts dealing with frequency and time), it has helped me considerably to get away from viewing individual frequency tones as sines and/or cosines and instead view them as spinning phasors ($e^{j\omega t} = 1 \angle (\omega t)$), as depicted in the graphic below, which shows a complex phasor spinning at a rate of 2 Hz and its associated cosine and sine (the real and imaginary axes). Each point in a DFT is an individual frequency tone represented as a single rotating phasor in time.
Such a tone in an analog system would continuously rotate (counter-clockwise if a positive frequency and clockwise if a negative frequency) at F rotations per second where F is the frequency in Hz, or cycles/second. Once sampled, the rotation will be at the same rate but will be in discrete samples where each sample is a constant angle in radians, and thus the frequency can be quantified as radians/sample representing the rate of rotation of the phasor.
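A minimal numerical illustration of both views (the 1 kHz tone and 8 kHz sampling rate are made-up values):

```python
import numpy as np

F, Fs = 1000.0, 8000.0           # analog frequency (Hz), sampling rate (samples/s)
n = np.arange(8)                 # sample index
x = np.sin(2 * np.pi * F / Fs * n)   # the sampled sinusoid

f_norm = F / Fs                  # 0.125 cycles/sample
w_norm = 2 * np.pi * F / Fs      # pi/4 radians/sample

# a spinning phasor advances by w_norm radians on every sample;
# the sampled sine is just its imaginary part
phasor = np.exp(1j * w_norm * n)
```

Each sample advances the phasor by a constant angle of π/4 radians, which is exactly what "radians/sample" quantifies.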
Suppose the lens has an aperture radius of $r$ and focal length $f$. The image of an object at infinity will then appear at a distance $f$ from the lens. The angular separation between two points of the object is then the same as the angle between the images of the two points in the focal plane, seen from the lens at a distance of $f$. The area of the image of the object is thus given by $\pi\alpha^2 f^2$, where $\alpha$ is the angle between the center of the object (assumed to be spherical) and the edge, if we assume that this angle is small. If the object radiates as a black body, has a radius of $R$, a temperature of $T$, and is a distance $d$ away, then the flux of radiation reaching the lens is: $$F = \sigma T^4 \left(\frac{R}{d}\right)^2 = \sigma T^4 \alpha^2$$ where $\sigma$ is the Stefan–Boltzmann constant. The total power of the radiation entering the lens, $P$, is the area of the lens opening times the flux: $$P = \pi r^2 F = \pi \sigma T^4\alpha^2r^2$$ This power ends up heating the area of the image in the focal plane. The flux of radiation there is: $$F_{\text{im}} = \frac{P}{\pi\alpha^2f^2} = \sigma T^4\frac{r^2}{f^2}$$ Suppose then that you put a black body in the image plane; its temperature would be $T_{\text{im}}$, where $\sigma T_{\text{im}}^4 = F_{\text{im}}$, therefore: $$T_{\text{im}} = \sqrt{\frac{r}{f}}T$$ The ratio of the focal length $f$ to the lens diameter is called the F-number, and in practice it is larger than 1. So, the factor multiplying $T$ in the above equation will always be smaller than 1; therefore you can never reach a higher temperature than the temperature of the object in this way.
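A quick numeric check of the result (the aperture radius and focal length are made-up values; the source temperature is the Sun's effective temperature):

```python
import math

sigma = 5.670e-8       # Stefan-Boltzmann constant, W/m^2/K^4
T = 5772.0             # temperature of the source (the Sun), K
r, f = 0.05, 0.20      # assumed aperture radius and focal length, m

F_im = sigma * T**4 * (r / f) ** 2   # flux in the image plane, W/m^2
T_im = math.sqrt(r / f) * T          # blackbody temperature reached there
```

With r/f = 1/4 the image-plane blackbody settles at half the source temperature, illustrating that the multiplying factor stays below 1.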
I have a fairly good understanding of Wavelet Analysis, but what are these bilinear distributions and how do they differ from the Wavelet Transform? As you mentioned, all these methods share a common principle: they allow for representation of our signal in the time-frequency domain. The first thing to notice is that wavelets are very different from the STFT-like (linear) methods, as they provide scale-dependent frequency resolution (which in the case of the STFT is constant and governed by the Heisenberg uncertainty principle). Probably you've seen this famous picture: The so-called bilinear transform mostly refers to the Wigner Transform, which can be considered as the Fourier Transform of the signal's instantaneous autocorrelation function. It is governed by the following equation: $$W(t,f)=\int_{-\infty}^{\infty} x \left( t+ \frac{\tau}{2} \right) x^* \left( t-\frac{\tau}{2} \right) e^{-2\pi i f \tau} d\tau$$ where $^*$ denotes the complex conjugate and $\tau$ is the time delay. One might notice that, in analogy to the STFT, the shifted signal itself acts as the window. It is also worth mentioning that the Wigner Transform provides perfect time and frequency resolution at the same time - this is simply great for signals with fast changes of instantaneous frequency. On the other hand, the WT is a bilinear transform, which implies a major disadvantage: for each linear combination of two signals $x(t)$ and $y(t)$, the result consists of two parts: auto-terms and cross-terms. This issue produces interferences in the time-frequency representation known as "beats", which can make interpretation of the WT difficult. If you are looking for a more specific mathematical description, then please refer to the article posted below - I see no point in rewriting equations. An example is shown in the figure below, where two linear chirps were analysed. The cross-term is clearly visible in between, which makes interpretation of the results troublesome. Sometimes it happens that the magnitude of the cross-term is even higher than the auto-terms.
Some methods for dealing with this problem have been described in the literature; please refer to the following article for more details.
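The cross-term behavior can be demonstrated with a small discrete Wigner-Ville sketch (a simplified version using two complex tones instead of chirps; note that because the discrete lag is 2τ, a tone at normalized frequency f lands in FFT bin 2fN):

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of a complex (analytic) signal x."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        taumax = min(n, N - 1 - n)
        r = np.zeros(N, dtype=complex)          # instantaneous autocorrelation
        for tau in range(-taumax, taumax + 1):
            r[tau % N] = x[n + tau] * np.conj(x[n - tau])
        W[n] = np.fft.fft(r).real               # FFT over the lag variable
    return W

N = 128
n = np.arange(N)
f1, f2 = 0.125, 0.25                            # two tones, cycles/sample
x = np.exp(2j * np.pi * f1 * n) + np.exp(2j * np.pi * f2 * n)
W = wigner_ville(x)

# auto-terms sit at bins 2*f1*N = 32 and 2*f2*N = 64;
# the cross-term appears halfway between them, at bin 48
row = W[N // 2]
```

At the center time the cross-term peak is actually larger than either auto-term, matching the observation above that cross-terms can dominate.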
Finding Answers to the Tuning Fork Mystery with Simulation When a tuning fork is struck, and held against a tabletop, the peak frequency of the emitted sound doubles — a mysterious behavior that has left many people baffled. In this blog post, we explain the tuning fork mystery using simulation and provide some fun facts about tuning forks along the way. Explaining the Tuning Fork Mystery In a recent video on YouTube from standupmaths, science enthusiasts Matt Parker and Hugh Hunt discuss and demonstrate the “mystery” of a tuning fork. When you strike a tuning fork and hold it against a tabletop, it seems to double in frequency. As it turns out, the explanation behind this mystery can be boiled down to nonlinear solid mechanics. How Does Sound Reach Our Ears? When you hold a vibrating tuning fork in your hand, the bending motion of the prongs sets the air around them in motion. The pressure waves in the air propagate as sound. You can hear it, but it is not a very efficient conversion of the mechanical vibration into acoustic pressure. When you hold the stem of the tuning fork to a table, an axial motion in the stem connects to the tabletop. The motion is much smaller than the transverse motion of the prongs, but it has the potential to set the large flat tabletop in motion — a surface that is a far better emitter of sound than the thin prongs of a tuning fork. The tabletop surface will act as a large loudspeaker diaphragm. Our tuning fork. To investigate this interesting behavior, we created a solid mechanics computational model of a tuning fork. The model is based on a tuning fork that one of my colleagues keeps in her handbag. The tone of the device is a reference A4 (440 Hz), the material is stainless steel, and the total length is about 12 cm. First, let’s have a look at the displacement as the tuning fork is vibrating in its first eigenmode: The mode shape for the fundamental frequency of the tuning fork. 
If we study the displacements in detail, it turns out that even though the overall motion of the prongs is in the transverse direction (the x direction in the picture), there are also some small vertical components (in the z direction), consisting of two parts:

The bending of the prongs is accompanied by an up-down motion that varies linearly over the prong cross section

The stem has an essentially rigid axial motion, which is necessary for keeping the center of mass in a fixed position, as required by Newton's second law

The displacements are shown in the figures below. The mode is normalized so that the maximum total displacement is 1. The peak axial displacement is 0.03 on the prongs and 0.01 in the stem. Total displacement vectors in the first eigenmode. Axial displacements only. Note that the scales differ between figures. The center of gravity is indicated by the blue sphere. Now, let's turn to the sound emission. By adding a boundary element representation of the acoustic field to the model, the sound pressure level in the surrounding air can be computed. The amplitude of the vibration at the prong tips is set to 1 mm. This is approximately the maximum feasible value if the tuning fork is not to be overloaded from a stress point of view. As can be seen in the figure below, the intensity of the sound decreases rather fast with the distance from the tuning fork, and also has a large degree of directionality. Actually, if you turn a tuning fork around its axis beside your ear, the near-silence in the 45-degree directions is striking. Sound pressure level (dB) and radiation pattern (inset) around the tuning fork. We now add a 2-cm-thick wooden table surface to the model. It measures 1 by 1 m and is supported at the corners. The stem of the tuning fork is in contact with a point at the center of the table. As can be seen below, the sound pressure levels are quite significant in a large portion of the air domain above and outside the table.
Sound pressure levels above the table when the stem of the tuning fork is attached to the table. For comparison, we plot the sound pressure level for the same air domain when the tuning fork is held up. The difference is quite stunning, with very low sound pressure levels in all parts of the air above the table except in the vicinity of the tuning fork. This matches our experience with tuning forks as shown in the original YouTube video. Sound pressure levels for the tuning fork when held up.

Is the Double Frequency a Natural Frequency?

So far, we have not touched on the original question: Why does the frequency double when the tuning fork is placed on the table? One possible explanation could be that there is such a natural frequency, which has a motion that is more prominent in the vertical direction. For a vibrating string, for example, the natural frequencies are integer multiples of the fundamental frequency. This is not the case for a tuning fork. If the prongs are approximated as cantilever beams in bending, the lowest natural frequency is given by the expression

f_1 = \frac{1.875^2}{2 \pi L^2} \sqrt{\frac{EI}{\rho A}}

The quantities in this expression are:

Length of the prong, L
Young's modulus, E; usually around 200 GPa for steel
Mass density, ρ; approximately 7800 kg/m³
Area moment of inertia of the prong cross section, I
Cross-sectional area of the prong, A

For our tuning fork, this evaluates to 435 Hz, so the formula provides a good approximation. The second natural frequency of a cantilever beam is

f_2 = \frac{4.694^2}{2 \pi L^2} \sqrt{\frac{EI}{\rho A}}

This frequency is a factor 6.27 higher than the fundamental frequency. It cannot be involved in the frequency doubling. However, there are other mode shapes besides those with symmetric bending. Could one of them be involved in the frequency doubling? This is unlikely for two reasons.
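Plugging in numbers reproduces the quoted values (a sketch: the prong side d is an assumed value chosen to land near 435 Hz, since the blog does not give the cross-section dimensions; L is the 80 mm prong length used later in the post):

```python
import math

# standard Euler-Bernoulli cantilever natural frequencies:
# f_n = (lambda_n^2 / (2*pi*L^2)) * sqrt(E*I / (rho*A)),
# with lambda_1 = 1.875 and lambda_2 = 4.694
E, rho = 200e9, 7800.0       # steel, Pa and kg/m^3
L = 0.080                    # prong length, m (from the text)
d = 0.0034                   # assumed square prong side, m (not given in the text)
I = d**4 / 12                # area moment of inertia, square cross section
A = d * d                    # cross-sectional area

f1 = (1.875**2 / (2 * math.pi * L**2)) * math.sqrt(E * I / (rho * A))
f2 = (4.694**2 / (2 * math.pi * L**2)) * math.sqrt(E * I / (rho * A))
```

The ratio f2/f1 depends only on the beam eigenvalues, so it comes out near 6.27 regardless of the assumed cross section.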
The first reason is that the frequency doubling phenomenon can be observed for tuning forks with different geometries, and it would be too much of a coincidence if all of them had an eigenmode with exactly twice the fundamental natural frequency. The second reason is that nonsymmetrical eigenmodes have a significant transverse displacement at the stem, where the tuning fork is clenched. Such eigenmodes will thus be strongly damped by your hand, and have an insignificant amplitude. One such mode, with a natural frequency of 1242 Hz, is shown in the animation below. The tuning fork's first eigenmode at 440 Hz, an out-of-plane mode with an eigenfrequency of 1242 Hz, and the second bending mode with an eigenfrequency of 2774 Hz.

The Probable Cause of the Tuning Fork Mystery

Let's summarize what we know about the frequency-doubling phenomenon. Since it is only experienced when we press the tuning fork to the table, the double frequency vibration has a strong axial motion in the stem. Also, we can see from a spectrum analyzer (you can download such an app on a smartphone) that the level of vibration at the double frequency decays relatively quickly. There is a transition back to the fundamental frequency as the dominant one. The dependency on the amplitude suggests a nonlinear phenomenon. The axial movement of the stem indicates that the stem compensates for a change in the location of the center of mass of the prongs. Without going into details with the math, it can be shown that for the bending cantilever, the center of mass shifts down by a distance, relative to the original length L, of

\Delta \approx \beta \frac{a^2}{L}

Here, a is the transverse motion at the tip and the coefficient β ≈ 0.2. The important observation is that the vertical movement of the center of mass is proportional to the square of the vibration amplitude. Also, the center of mass will be at its lowest position twice per cycle (both when the prong bends inward and when it bends outward), thus the double frequency.
With a = 1 mm and a prong length of L = 80 mm, the maximum shift in the position of the center of mass of the prongs can be estimated to Δ ≈ 0.2 × 1²/80 mm ≈ 0.0025 mm. The stem has a significantly smaller mass than the prongs, so it has to move even more for the total center of gravity to maintain its position. The stem displacement amplitude can thus be estimated to 0.005 mm. This should be seen in relation to what we know from the numerical experiments above. The linear (440 Hz) part of the axial motion is of the order of a/100; in this example, 0.01 mm. In reality, the tuning fork is a more complex system than a pure cantilever beam, and the connection region between the stem and the prongs will affect the results. For the tuning fork analyzed here, the second-order displacements are actually less than half of the back-of-the-envelope predicted 0.005 mm. Still, the axial displacement caused by the second-order moving mass effect is significant. Furthermore, when it comes to emitting sound, it is the velocity, not the displacement, that is important. So, if displacement amplitudes are equal at 440 Hz and 880 Hz, the velocity at the double frequency is twice that at the fundamental frequency. Since the amplitude of the axial vibration at 440 Hz is proportional to the prong amplitude a, and the amplitude of the 880-Hz vibration is proportional to a², it is necessary that we strike the tuning fork hard enough to experience the frequency-doubling effect. As the vibration decays, the relative importance of the nonlinear term decreases. This is clearly seen on the spectrum analyzer. The behavior can be investigated in detail by performing a geometrically nonlinear transient dynamic analysis. The tuning fork is set in motion by a symmetric impulse applied horizontally on the prongs, and is then left free to vibrate. It can be seen that the horizontal prong displacement is almost sinusoidal at 440 Hz, while the stem moves up and down in a clearly nonlinear manner.
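The back-of-the-envelope numbers can be reproduced directly (a sketch; the factor of 2 between center-of-mass shift and stem amplitude is an assumption that matches the text's "stem is lighter, so it moves more" argument):

```python
beta = 0.2       # coefficient from the text
a = 1.0          # prong tip amplitude, mm
L = 80.0         # prong length, mm

com_shift = beta * a**2 / L    # lowering of the prongs' center of mass, mm
stem_amp = 2 * com_shift       # lighter stem must move roughly twice as far, mm
linear_term = a / 100          # 440 Hz axial stem motion (~a/100 per the model), mm
```

The second-order (880 Hz) stem amplitude of 0.005 mm thus sits at half the linear 0.01 mm term, which is why a hard strike is needed to make it prominent.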
The stem displacement is highly nonsymmetrical, since the 440 Hz contribution is synchronous with the prong displacement, while the 880-Hz term always gives an additional upward displacement. Due to the nonlinearity of the system, the vibration is not completely periodic. Even the prong displacement amplitude can vary from one cycle to another. The blue line shows the transverse displacement at the prong tip, and the green line shows the vertical displacement at the bottom of the stem. If the frequency spectrum of the stem displacement plotted above is computed using the FFT, there are two significant peaks, at 440 Hz and 880 Hz. There is also a small third peak around the second bending mode. Frequency spectrum of the vertical stem displacement. To actually see the second-order term at 880 Hz in action, we can subtract the part of the stem vibration that is in phase with the prong bending from the total stem displacement. This displacement difference is seen in the graph below as the red curve. The total axial stem displacement (blue), the prong bending proportional stem displacement (dashed green), and the remaining second-order displacement (red). How did we perform this calculation? Well, we know from the eigenfrequency analysis that the amplitude of the axial stem vibration is about 1% of the transverse prong displacement (actually 0.92%). In the graph above, the dashed green curve is 0.0092 times the current displacement of the prong tip (not shown in the graph). This curve can be considered as showing the linear 440 Hz term, a more or less pure sine wave. That value is then subtracted from the total stem displacement, and what is left is the red curve. The second-order displacement is zero when the prong is straight, and peaks both when the prong has its maximum inward bending and when it has its maximum outward bending. Indeed, the red curve looks very much like a time variation proportional to sin²(ωt).
It should, since that displacement, according to the analysis above, is proportional to the square of the prong displacement. Using a well-known trigonometric identity, \sin^2(\omega t) = \dfrac{1-\cos(2 \omega t)}{2}. Enter the double frequency!

Different Tuning Forks

Commenters on the original video from standupmaths have noticed that some tuning forks work better than others, and with some tuning forks it is difficult to see the frequency doubling at all. As discussed above, the first criterion is that you hit it hard enough to get into the nonlinear regime. But there are also geometrical differences influencing the ratio between the amplitudes of the two types of vibration. For instance, prongs that are heavy relative to the stem will cause large double-frequency displacements, since the stem must move more in order to maintain the center of gravity. Slender prongs can have a larger amplitude-length (a/L) ratio, thus increasing the nonlinear term. The design of the region where the prongs meet the stem is important. If it is stiff, then the amplitude of the fundamental frequency vibration in the stem will be reduced, and the relative importance of the double-frequency vibration is larger. The cross section of the prongs will also have an influence. If we return to the expression for the natural frequency, it can be seen that the moment of inertia of the cross section plays a role. A prong with a square cross section with side d has

I = \frac{d^4}{12}

while a prong with a circular cross section with diameter d has

I = \frac{\pi d^4}{64}

Thus, for two tuning forks that look the same when viewed from the side, the one with a square profile must have prongs that are a factor 1.14 longer to give the same fundamental frequency. If we assume the same maximum stress due to bending in the two tuning forks, the one with the square profile can have a transverse displacement amplitude that is 1.14² larger than the circular one because of its higher load-carrying capacity.
In addition, if the stem is kept at a fixed size, then it will become proportionally lighter when compared to the longer prongs. All these contributions add up to a 70% increase in vertical stem vibration amplitude when moving from a circular profile to a square profile. In addition, tuning forks with a circular cross section usually have a design that is more flexible at the connection between the prongs and the stem, and thus a higher level of vibration at the fundamental frequency. The conclusion is that a tuning fork with a square cross section is more likely to exhibit the frequency-doubling behavior than one with a circular cross section.

Do We Hear the Frequency Doubling?

In most cases, the answer is "no." The fundamental frequency is still there, even though it may have a lower amplitude than the one at the double frequency. But the way our senses work, we hear the fundamental frequency, although with a different timbre. It is difficult, but not impossible, to strike the tuning fork so hard that the sound level of the double frequency is significantly dominant.

Conclusions

The frequency doubling occurs due to a nonlinear phenomenon, where the stem of the tuning fork must move upward in order to compensate for the small lowering of the center of mass of the prongs as they approach the outermost positions of their bending motion. Note that it is not the fact that the tuning fork is connected to the table that causes the frequency doubling. The reason that we measure it in that case is that the sound emitted by the resonating table surface is caused by the axial stem motion, whereas the sound we hear from the tuning fork that is held up is dominated by the prong bending. The motion is the same in both cases, as long as the impedance of the table is ignored. In fact, you can measure the doubled frequency with a tuning fork that is held up as well, but it is 30 dB or so below the fundamental frequency.
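The spectral mechanism described above can be reproduced with a toy signal: a stem displacement built from a linear 440 Hz term plus a small squared term (the amplitudes are made-up values mimicking the estimates earlier in the post):

```python
import numpy as np

fs = 44100                      # sample rate, Hz
t = np.arange(fs) / fs          # one second of signal
prong = np.sin(2 * np.pi * 440 * t)

# linear term (~a/100) plus second-order term proportional to prong**2
stem = 0.01 * prong + 0.0025 * prong**2

mag = np.abs(np.fft.rfft(stem)) / len(t)
# sin^2 = (1 - cos(2wt))/2, so the squared term lands at 880 Hz (plus DC)
```

The spectrum shows exactly two peaks, at 440 Hz and 880 Hz, with nothing in between, matching the FFT of the simulated stem displacement.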
Next Steps

Watch the original videos from standupmaths on YouTube. Read more about the intersection of tuning forks and simulation on the COMSOL Blog.
No, you cannot skip it. Uninformative priors contain information, and for multivariate regression they assure that the sum of the probabilities will be unity. In fact, you cannot use a uniform prior on a multivariate regression with three or more independent variables, or the sum of the probabilities of your posterior will not equal one. This is likely the reason that Stein's lemma in Frequentist statistics exists. If you try it and you get lucky, then you will get screwy results that warn you something is wrong. If you get unlucky, the results will not permit you to detect the problems. Let me give you some examples of the information in uninformative prior distributions. For the binomial distribution whose true parameter value is unknown, a simple prior is the beta distribution. This does not mean you should use a beta distribution, merely that it is very convenient. There are three known uninformative priors for the binomial; each is a form of the beta distribution. The first will result in a point estimate identical to the Frequentist solution. It is $$p^{-1}(1-p)^{-1}.$$ It provides an unbiased estimator for $p$ in the sense that the maximum a posteriori (MAP) estimator and the Frequentist estimator match for the estimated value of $p$. If you look at that prior, however, it provides infinite weight on either zero or one and minimal weight to $p=\frac{1}{2}$. It is uninformative in the sense that it does not influence the location of the MAP estimator. It does not weight the tails uniformly, though. The second is the Jeffreys' prior, which is $$p^{-1/2}(1-p)^{-1/2}.$$ As with the first one, it provides infinite weight on either extreme and minimal weight at $p=1/2$. It is a biased estimator in that it adds one half of a success and one half of a failure to the ultimate solution. It is equivalent to tossing a coin one time and having it land on its side.
The reason to use it is that it permits you to do something that cannot be done in non-Bayesian methods and cannot be done with either of the other two uninformative priors: it allows you to transform the variable of interest and get the same statistical results under the transformation as you get on the raw data. A common transformation is to take the logarithm of data, but $\hat{\mu}_{raw}\ne\exp(\log(\hat{\mu}_{\log}))$ in most cases. If you are careful in your use of the prior, then $\hat{\mu}_{raw}=\exp(\log(\hat{\mu}_{\log}))$, as well as all other moments and intervals. A Jeffreys' prior preserves results across transformations. An important area of research in Bayesian methods is how to create invariant results. The third common uninformative prior is the uniform distribution, which is also a beta distribution. The uniform distribution is $\Pr(p)=1,\forall{p}\in(0,1)$. This is equivalent to adding one success and one failure to the final answer. A consequence of this is that although each possible solution is equally probable prior to seeing the data, the expectation of $p$ will be biased toward the center. This is also true for multivariate regression. The more important issue, however, is whether or not you are in possession of prior information. Is there no prior research among any of the variables? It is generally the case that prior information exists; the more important issue is the disciplined incorporation of prior learning. Let me give you a silly example that, at another level, is not so silly. Imagine you were eating green beans and you wondered how many calories were in the forkful you were holding. You decide to be careful and sample from the can of beans in equal amounts. You have many samples. Strangely, you also own a calorimeter. You want to estimate the calories per gram. At first you consider using the maximum likelihood estimate (MLE), but then you realize that you could do better.
The MLE is equivalent to using a uniform prior over the half plane, since you also have to estimate the standard deviation as a nuisance parameter. You realize that you cannot have negative calories, so you only need a quarter plane. That is information, even though it gives equal weight to all positive solutions. Then you realize you could do better. The recommended intake for an adult male is 2000 calories, and you do not believe that a person could survive on one forkful of green beans, so you restrict it to uniform up to 2000 calories and half normal above 2000 calories so it quickly tapers off. Of course, you then realize the can itself tells the FDA estimate of the calories, but you are aware there have been significant errors in the FDA estimates for specific isolated foods. According to the FDA there are .31 calories per gram, but how big should the standard deviation be? You don't trust the FDA estimate, so you reason that if three standard deviations are to the left, you should allow three more to the right, so you have a truncated normal whose $\pm{3}\sigma=.31$ and whose center equals .31. You have just gone from an uninformative prior to a weakly informative prior. If you collected multiple studies, you could weight them into your prior for a strongly informative prior distribution. This would protect you from happening to get a weird sample of beans by random chance. The prior is the most difficult thing to define, particularly if you have academic adversaries, as it should be no stronger than their beliefs if you are to convince them. It sounds like you are using conjugate priors in order to get mathematical simplicity. Although there are giant discussions on this, I will give you a weak heuristic to find your prior. First, gather as much academic research, Frequentist or Bayesian, on the topic as you can. You are looking for sample size, slope, intercept, covariance and variance estimates.
If you think a study was very poorly done, then multiply its variance/covariance estimate by some suitable number, such as 25 or 100. This will weaken its influence in the process but preserve the estimate of the location. You are keeping poor data, by weakening its value, but not excluding it entirely. Consider that your first prior, then look at the second study. That study, assuming it does not use the same data set, will be your likelihood. If you are using conjugate priors, update so that your posterior is now the product of the two studies. Now get your third study and treat your last posterior as your new prior and the third study as your likelihood to get a new posterior. Continue this until you run out of studies, and the final posterior will be your prior. Again, if the research isn't exactly on your question, then multiply the variances and covariances by some large enough number that it weakens the impact of the prior research. If some of the research uses different variables, then you will have to make judgment calls, as there is no simple solution. If you are wed to an uninformative prior because you are afraid of criticism, then you want the parameters of the prior to make a distribution that is as spread out as possible, but still a proper distribution. That is to say, it integrates to one. You can, for example, give a million-unit standard deviation when you believe it will probably be .01 units. If the prior is diffuse enough, it won't matter, except that it trivially avoids the problem brought about in Stein's lemma. A solution is to look at the smallest reported digit. If you report out to five digits past the decimal, then you want your variance to be large enough that it only impacts the sixth decimal and beyond in the calculation. There are far better solutions than this, but you really need to sit down with a statistician to do them. Bayesian methods are not DIY methods at first.
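The chain of updates described above can be sketched for the simplest case: a single scalar effect with known variances, where the normal-normal conjugate update reduces to precision weighting. The study numbers and the 25x inflation factor are hypothetical, chosen only to illustrate the mechanics:

```python
# Hypothetical sketch of the prior-updating chain: each study contributes
# an (estimate, variance) pair; a distrusted study has its variance
# multiplied by a factor such as 25 before entering the chain.
def update(prior_mean, prior_var, est, var):
    """One conjugate normal-normal update with known variances
    (precision weighting)."""
    prec = 1.0 / prior_var + 1.0 / var
    mean = (prior_mean / prior_var + est / var) / prec
    return mean, 1.0 / prec

# (estimate, variance, inflation factor for poor quality)
studies = [(2.0, 1.0, 1), (2.4, 0.5, 1), (5.0, 0.5, 25)]
mean, var = studies[0][0], studies[0][1] * studies[0][2]  # first study = initial prior
for est, v, inflate in studies[1:]:
    mean, var = update(mean, var, est, v * inflate)
# The outlying third study barely moves the result, because its inflated
# variance (12.5) gives it very little precision in the weighted average.
```

Without the 25x inflation the third study would drag the combined mean to about 3.4; with it, the result stays near the first two studies, which is exactly the "keep poor data but weaken it" behavior described above.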
As is pointed out elsewhere where tf-idf is discussed, there is no universally agreed single formula for computing tf-idf or even (as in your question) idf. The purpose of the $+1$ is to accomplish one of two objectives: (a) to avoid division by zero, as when a term appears in no documents, even though this would not happen in a strictly "bag of words" approach, or (b) to set a lower bound to avoid a term being given a zero weight just because it appeared in all documents. I've actually never seen the formulation $\log(1+\frac{N}{n_t})$, although you mention a textbook. But the purpose would be to set a lower bound of $\log(2)$ rather than zero, as you correctly interpret. I have seen $1 + \log(\frac{N}{n_t})$, which sets a lower bound of 1. The most commonly used computation seems to be $\log(\frac{N}{n_t})$, as in Manning, Christopher D., Prabhakar Raghavan, and Hinrich Schütze (2008) Introduction to Information Retrieval, Cambridge University Press, p. 118, or Wikipedia (based on similar sources). Not directly relevant to your query, but the upper bound is not $\infty$, but rather $k + \log(N/s)$ where $k, s \in \{0, 1\}$ depending on your smoothing formulation. This happens for terms that appear in 0 or 1 documents (again, it depends on whether you smooth with $s$ to make idf defined for terms with zero document frequency; if not, then the maximum value occurs for terms that appear in just one document). IDF $\rightarrow \infty$ only when $1 + n_t=1$ and $N \rightarrow \infty$.
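The three variants discussed above differ only in where the constant enters, which determines the lower bound reached when a term appears in every document. A small sketch (function names are mine, not from any library):

```python
import math

# Three idf variants for a term appearing in n_t of N documents.
def idf_plain(N, n_t):
    """log(N / n_t): lower bound 0 when the term is in every document."""
    return math.log(N / n_t)

def idf_inside(N, n_t):
    """log(1 + N / n_t): lower bound log(2), the textbook variant asked about."""
    return math.log(1 + N / n_t)

def idf_outside(N, n_t):
    """1 + log(N / n_t): lower bound 1, the other common variant."""
    return 1 + math.log(N / n_t)

# For a term in all N = 1000 documents, the three variants give
# 0, log(2), and 1 respectively, matching the lower bounds above.
```

Rarer terms always score higher under all three; the variants only disagree about how harshly ubiquitous terms are penalized.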
I suggest a different approach, not optimal (see What do the pgfkeys key handlers .get and .store in do?), but it works. Indeed, one problem in your code is that, inside the arrow inside style, the keys do not inherit the correct path /tikz/arrow inside/key-name. To make it work you can do something like:

\[ \int_{ \tikz[scale=0.3]{ \path[fill=lightgray] (0,0) rectangle (1,1); \draw (1.5,1) -- (0,1) -- (0,0) -- (1.5,0); \draw[/tikz/arrow inside/pos = 0.4,/tikz/arrow inside/end = |,arrow inside] (1,-0.5) -- (1,1.5); } } \vec{B} \cdot \vec{n} \, df \]

which is not so convenient, IMHO. The other problem is that if you define a style that does something with keys, those keys have to be set not inside the style, but before, as I did in the code above. Notice that:

\[ \int_{ \tikz[scale=0.3]{ \path[fill=lightgray] (0,0) rectangle (1,1); \draw (1.5,1) -- (0,1) -- (0,0) -- (1.5,0); \draw[arrow inside={/tikz/arrow inside/pos = 0.4,/tikz/arrow inside/end = |}] (1,-0.5) -- (1,1.5); } } \vec{B} \cdot \vec{n} \, df \]

does not do anything, as the keys still have the values 0.5 and > respectively. So, my approach will be based on the "triple" of handlers .initial, .get and .store in, and will let you use arrow inside={pos = 0.4,end = |} inside \draw. First I would define the keys:

\pgfkeys{/arrow inside/.cd, pos/.initial = 0.5, pos/.get = \arrow@inside@pos, pos/.store in = \arrow@inside@pos, end/.initial = >, end/.get = \arrow@inside@end, end/.store in = \arrow@inside@end, }

They are under the path /arrow inside/, so later on we should take care of this. Second, I would define a style to place the arrow:

place arrow/.style = { postaction = { decorate, decoration={ markings, mark=at position \arrow@inside@pos with {\arrow{\arrow@inside@end}} } } },

Unlike the keys, the style belongs to the usual /tikz/ path.
To combine the keys and the aforementioned style, I define:

arrow inside/.style={place arrow,/arrow inside/.cd,#1}

as a style which "places" the arrow and, by changing the default /tikz/ path, allows you to use the previously defined keys. An MWE:

\documentclass{article} \usepackage{tikz} \usetikzlibrary{decorations.markings} \makeatletter \pgfkeys{/arrow inside/.cd, pos/.initial = 0.5, pos/.get = \arrow@inside@pos, pos/.store in = \arrow@inside@pos, end/.initial = >, end/.get = \arrow@inside@end, end/.store in = \arrow@inside@end, } \tikzset{arrow inside/.style={place arrow,/arrow inside/.cd,#1}, place arrow/.style = { postaction = { decorate, decoration={ markings, mark=at position \arrow@inside@pos with {\arrow{\arrow@inside@end}} } } }, } \makeatother \begin{document} \[ \int_{ \tikz[scale=0.3]{ \path[fill=lightgray] (0,0) rectangle (1,1); \draw (1.5,1) -- (0,1) -- (0,0) -- (1.5,0); \draw[arrow inside={pos = 0.4,end = |}] (1,-0.5) -- (1,1.5); } } \vec{B} \cdot \vec{n} \, df \] \[ \int_{ \tikz[scale=0.3]{ \path[fill=lightgray] (0,0) rectangle (1,1); \draw (1.5,1) -- (0,1) -- (0,0) -- (1.5,0); \draw[arrow inside={pos = 0.7,end = stealth}] (1,-0.5) -- (1,1.5); } } \vec{B} \cdot \vec{n} \, df \] \end{document}

The result: Notice that the keys now correctly update their values.
Let $\mathcal{F}$ be a finite alphabet and $\mathcal{D}$ be a finite set of distributions over $\mathcal{F}$. A Generalized Santha-Vazirani (GSV) source of type $(\mathcal{F}, \mathcal{D})$, introduced by Beigi, Etesami and Gohari (ICALP 2015, SICOMP 2017), is a random sequence $(F_1, \dots, F_n)$ in $\mathcal{F}^n$, where $F_i$ is a sample from some distribution $d \in \mathcal{D}$ whose choice may depend on $F_1, \dots, F_{i-1}$. We show that all GSV source types $(\mathcal{F}, \mathcal{D})$ fall into one of three categories: (1) non-extractable; (2) extractable with error $n^{-\Theta(1)}$; (3) extractable with error $2^{-\Omega(n)}$. This rules out other error rates like $1/\log n$ or $2^{-\sqrt{n}}$. We provide essentially randomness-optimal extraction algorithms for extractable sources. Our algorithm for category (2) sources extracts with error $\epsilon$ from $n = \mathrm{poly}(1/\epsilon)$ samples in time linear in $n$. Our algorithm for category (3) sources extracts $m$ bits with error $\epsilon$ from $n = O(m + \log 1/\epsilon)$ samples in time $\min\{O(nm2^m),n^{O(|\mathcal{F}|)}\}$. We also give algorithms for classifying a GSV source type $(\mathcal{F}, \mathcal{D})$: membership in category (1) can be decided in $\mathrm{NP}$, while membership in category (3) is polynomial-time decidable.
I am studying a proof of a version of Poincaré's inequality for the Sobolev space $H^1_0(\Omega)$, where $\Omega\subseteq\{x\in\mathbb{R}^n:0<x_n<a\}$, for some $a>0$. I am trying to prove that, for any $u\in H^1_0(\Omega)$, $$\int_\Omega |u(x)|^2 dx \leq a^2\int_{\Omega}\left|\dfrac{\partial u}{\partial x_n}\right|^2dx.$$ By density, it is enough to prove this fact for $u\in C_c^{\infty}(\Omega)$. So write $x=(x',x_n)$, with $x'=(x_1,...,x_{n-1})$. We know that $$|u(x',x_n)|=\left|\int_0^{x_n} \dfrac{\partial u}{\partial x_n}(x',t)dt\right|\leq \int_0^{a} \left|\dfrac{\partial u}{\partial x_n}(x',t)\right|dt\leq \sqrt{a}\left(\int_0^{a} \left|\dfrac{\partial u}{\partial x_n}(x',t)\right|^2dt\right)^{1/2},$$ by Hölder's inequality. Hence $$|u(x',x_n)|^2\leq a \int_0^{a} \left|\dfrac{\partial u}{\partial x_n}(x',t)\right|^2dt.$$ Now the idea is to integrate first with respect to $x_n$ and then with respect to $x'$, and this is what I cannot do properly. Integrating this inequality with respect to $x_n$, we get $$\int_0^{a} \left|u(x',x_n)\right|^2dx_n\leq a^2 \int_0^{a} \left|\dfrac{\partial u}{\partial x_n}(x',t)\right|^2dt.$$ Now the author of the proof claims that integrating with respect to $x'$ yields $$\int_\Omega |u(x)|^2dx\leq a^2 \int_\Omega\left|\dfrac{\partial u}{\partial x_n}(x)\right|^2 dx,$$ as desired. And I do not see why this is true; I understand that we integrate and then use Fubini, but it looks like he says that the "strip" where $x_n$ lives is $(0,a)$, and that need not be true, right? I would write something like $$\int_\Omega |u(x)|^2dx=\int_{\Omega_{x'}}\left( \int_{\Omega_{x_n}}|u(x',x_n)|^2 dx_n\right)dx' \leq \int_{\Omega_{x'}}\left( \int_0^a|u(x',x_n)|^2 dx_n\right)dx'\leq a^2\int_{\Omega_{x'}}\left( \int_0^a\left|\dfrac{\partial u}{\partial x_n}(x)\right|^2 dx_n\right)dx'$$ and now I don't know how to make the integral over $\Omega$ appear on the right-hand side. I would really appreciate some hint on how to conclude! Thank you!
Given the adjacency matrix $A_{ij}$ of a graph with $N$ vertices and $M$ links (or any binary symmetric matrix of size $N \times N$), is it possible to establish lower and upper bounds on its eigenvalues? I mean, do $N$ and $M$ determine the lowest and the largest possible eigenvalues of the matrix? In other words, let $L$ be the Laplacian of $A_{ij}$. We know that its eigenvalues obey the following: $\lambda_1 = 0 \leq \lambda_2 \leq \lambda_3 \leq \ldots \leq \lambda_N$. The lowest eigenvalue is always $\lambda_1 = 0$ and hence the eigenvalues of $L$ have a lower bound. Is there an upper bound for $\lambda_N$ that depends only on the size $N$ of the graph and/or on $M$? Stated yet in another manner: does anyone know a graph $G'$ whose $\lambda'_N$ is larger than the $\lambda_N$ of any other graph of the same size $N$ and/or number of links $M$? Thank you!
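As a quick numerical illustration (my own, not part of the question): for the Laplacian $L = D - A$ of a simple graph on $N$ vertices, the largest eigenvalue is known to satisfy $\lambda_N \leq N$, with equality when a connected component is a complete graph. A small check with NumPy:

```python
import numpy as np

def laplacian_spectrum(A):
    """Sorted eigenvalues of L = D - A for a 0/1 symmetric adjacency matrix."""
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))

# Complete graph K4: the extreme case for the bound lambda_N <= N.
A = np.ones((4, 4)) - np.eye(4)
vals = laplacian_spectrum(A)
# vals is approximately [0, 4, 4, 4]: lambda_1 = 0 and lambda_N = N = 4.
```

So the answer to the last question, at least for fixed $N$, is yes: the complete graph attains the maximal $\lambda_N$ among graphs on $N$ vertices.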
In an equation, I want to display three points like "...", but instead of horizontally, they should be placed diagonally. I'm sure there must be a command to do that, but a Google search did not really help. Any idea? The command is called \ddots. See also How to look up a symbol or identify a math symbol or character? (esp. detexify). The mathdots package (besides fixing up the behaviour of (La)TeX \ddots and \vdots when the font size changes) provides an "inverse diagonal" ellipsis \iddots. That is, \iddots is three dots sloping forwards while \ddots is three dots sloping backwards. An alternative to the \iddots (inverse diagonal dots) from the mathdots package: \makeatletter\def\Ddots{\mathinner{\mkern1mu\raise\p@\vbox{\kern7\p@\hbox{.}}\mkern2mu\raise4\p@\hbox{.}\mkern2mu\raise7\p@\hbox{.}\mkern1mu}}\makeatother Then call it with \Ddots. It's not perfect, but you can use \dots and \cdot and align them using subscript _ and superscript ^ as demonstrated in row 2 below. You get the vertical dots on row 1 as a bonus, since the need to write diagonal dots is typically related to building a "dotted" matrix like in the picture below. \newcommand{\vertdots}{\underset{\big{\overset{\cdot}{\cdot}}}{\cdot}} \newcommand{\diagdots}{_{^{\big\cdot}\cdot _{\big\cdot}}}\begin{equation*}A = \begin{bmatrix} a & \dots & b \\ \vertdots & \diagdots & \vertdots \\c & \dots & d \\\end{bmatrix}\end{equation*} This produces a matrix that looks like this: I tried to align the diagonal dots better but didn't succeed. This will be good enough for me.
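For readers who want a compilable starting point, here is a minimal document (my own sketch) showing \ddots next to the \iddots provided by the mathdots package mentioned above:

```latex
\documentclass{article}
\usepackage{amsmath}  % for pmatrix
\usepackage{mathdots} % provides \iddots and size-robust \ddots, \vdots
\begin{document}
\[
A =
\begin{pmatrix}
a      & \cdots & b      \\
\vdots & \ddots & \vdots \\
c      & \cdots & d
\end{pmatrix}
\qquad
B =
\begin{pmatrix}
a      & \cdots  & b      \\
\vdots & \iddots & \vdots \\
c      & \cdots  & d
\end{pmatrix}
\]
\end{document}
```

The first matrix uses the backward-sloping \ddots, the second the forward-sloping \iddots.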
1. Homework Statement A particle is in a linear superposition of two states with energy [tex] E_0 \ and\ E_1 [/tex] [tex] |\phi> = A|E_0> + \frac{A}{(3-\epsilon)^{1/2}}|E_1>[/tex] where: [tex] A \ > \ 0, \ 0\ <\ \epsilon \ <\ 3[/tex] What is the value of A expressed as a function of epsilon? 2. Homework Equations [tex]P(E_0) \ +\ P(E_1) = 1\\ P(E_0) = |<E_0|\phi>|^2\\ P(E_1) = |<E_1|\phi>|^2 [/tex] 3. The Attempt at a Solution My attempt was to normalise the state to find a value for A in terms of epsilon. [tex] <E_0|\phi> = A\\ <E_1|\phi> = \frac{A}{\sqrt{(3-\epsilon)}} \\ |<E_1|\phi>|^2 + |<E_0|\phi>|^2 = A^2 + \frac{A^2}{(3-\epsilon)} = 1\\ A^2(1 + \frac{1}{3-\epsilon}) = 1\\ 4A^2 -\epsilon A^2 = 3-\epsilon\\ A^2 = \frac{3-\epsilon}{4-\epsilon}\\ A = \sqrt\frac{3-\epsilon}{4-\epsilon}\\ [/tex] But this does not give me a value of 1 when I put it back in? I'm unsure where I am going wrong - only had 2 QM lectures so my knowledge is limited.
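A quick numeric sanity check (my own, not part of the homework) shows the derived value of $A$ does normalize the state: substituting $A^2 = \frac{3-\epsilon}{4-\epsilon}$ back gives $P(E_0)+P(E_1) = A^2\left(1 + \frac{1}{3-\epsilon}\right) = \frac{3-\epsilon}{4-\epsilon}\cdot\frac{4-\epsilon}{3-\epsilon} = 1$ for every $\epsilon \in (0,3)$:

```python
# Verify that A^2 = (3 - eps) / (4 - eps) normalizes the superposition:
# P(E0) + P(E1) = A^2 * (1 + 1 / (3 - eps)) should equal 1 for all
# eps in (0, 3).
def prob_sum(eps):
    A2 = (3 - eps) / (4 - eps)
    return A2 * (1 + 1 / (3 - eps))

# Checking a few values of eps across the allowed range gives 1.0
# each time (up to floating-point error).
```

So the algebra in the attempt is correct; the sum of probabilities is 1, not the value of $A$ itself.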
Submitted by Lanowen on Sun, 05/24/2015 - 02:15 A friend asked me about this problem: how to calculate the distance an object needs to be from the camera to appear a certain height on the screen. The solution is simpler when dealing with an object centered in the screen, where a right angle can be created from the camera to the object. Fig. 1: right angle triangle diagram. Solving the triangle in Fig. 1: \begin{equation} \tan{\left( \frac{\theta}{2} \right)} = \frac{h}{2d} \label{eq:tanTriangle} \end{equation}
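Rearranging the triangle relation gives a direct formula for the distance. This sketch is mine (the function name and the screen-fraction parameterization are assumptions, not from the post): from $\tan(\theta/2) = h/(2d)$, the visible frustum height at distance $d$ is $2d\tan(\theta/2)$, so an object of height $H$ fills fraction $f$ of the screen at $d = H / (2f\tan(\theta/2))$.

```python
import math

def distance_for_screen_fraction(obj_height, vertical_fov_deg, fraction):
    """Distance at which a centered object of height obj_height fills
    `fraction` of the vertical screen, for a camera with the given
    vertical field of view. Derived from tan(theta/2) = h / (2d)."""
    half = math.radians(vertical_fov_deg) / 2
    return obj_height / (2 * fraction * math.tan(half))

# With a 90-degree vertical FOV, tan(45 deg) = 1, so an object of
# height 2 fills half the screen at distance 2 / (2 * 0.5) = 2.
```

Moving the camera twice as far halves the on-screen height, as the linear relation in the formula suggests.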
Generalization of function update is override of partial functions, $f \oplus g$

==== Range, Image, and Composition ====

The following properties follow from the definitions:
\[ S \bullet r = ran(\Delta_S \circ r) \]
\[ (S \bullet r_1) \bullet r_2 = S \bullet (r_1 \circ r_2) \]

===== Further references =====

  * [[sav08:discrete_mathematics_by_rosen|Discrete Mathematics by Rosen]]
  * [[:Gallier Logic Book]], Chapter 2