I wonder if the sequence $\{S_n\}$, where $S_n=\sum_{k=1}^{n}\frac{1}{\sum_{i=1}^{k} \frac{i(i+1)}{2}}$, is bounded and has a limit. Also calculate $1+\frac{1}{1+3}+\frac{1}{1+3+6}+\dots+\frac{1}{1+3+6+\dots+5050}$.

First note that $$\begin{eqnarray} \sum_{i=1}^{k} \frac{i(i+1)}{2}&=& \frac{1}{2}\left(\sum_{i=1}^{k} i^2 + \sum_{i=1}^{k} i\right)\\ &=&\frac{1}{2}\left(\frac{k(k+1)(2k+1)}{6}+\frac{k(k+1)}{2}\right)\\ &=&\frac{ k(k+1)(k+2)}{6}\end{eqnarray}$$ so we have $$S_n=6\sum\limits_{k=1}^n \frac{1}{ k(k+1)(k+2)}\leq 6\sum\limits_{k=1}^n \frac{1}{k^2}\leq \pi^2$$ thanks to Euler's celebrated result that $\sum\limits_{k=1}^n \frac{1}{k^2}\to \frac{\pi^2}{6}$. Thus the sequence $(S_n)$ is bounded, and since it is clearly increasing, it has a limit.

First, $$\sum_{i=1}^k \frac{i(i+1)}2 = \frac12 \left(\sum i + \sum i^2 \right) = \frac{k(k+1)(k+2)}{6}.$$ We are looking at summing the reciprocal of this as $k$ varies. The reciprocal can be decomposed by partial fractions: $$\frac6{k(k+1)(k+2)} = \frac{3}{k} + \frac{-6}{k+1} + \frac{3}{k+2}.$$ The sum $$S_n = \sum_{k=1}^n \left( \frac{3}{k} + \frac{-6}{k+1} + \frac{3}{k+2}\right)$$ telescopes, leaving only $$\frac31 + \frac{-6+3}2 + \frac{3+(-6)}{n+1} + \frac{3}{n+2} = \frac32 - \frac{3}{n+1} + \frac{3}{n+2}$$ for $n > 1$. This clearly has an upper bound, and a limit of $3/2$ as $n$ goes to infinity. Since $5050 = \frac{100\cdot101}{2}$ is the $100$th triangular number, the requested sum is $S_{100}$: $$\frac32 - \frac{3}{101} + \frac{3}{102} = \frac{2575}{1717} \approx 1.4997.$$

Simpler and less accurate: the first sum has denominators that are greater than a constant times $n^3$, and the second sum has denominators that are greater than a constant times $n^2$. In both cases, since all terms are positive and $\sum 1/n^2$ converges, the sum is bounded, converges, and must have a limit.
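Not part of the original answers, but the closed form can be spot-checked with exact rational arithmetic (a quick sketch; `S` and `closed` are my names):

```python
from fractions import Fraction

def S(n):
    # Partial sum of 1 / (1 + 3 + 6 + ... + T_k), where T_k = k(k+1)/2.
    total = Fraction(0)
    tri_sum = Fraction(0)
    for k in range(1, n + 1):
        tri_sum += Fraction(k * (k + 1), 2)  # running sum of triangular numbers
        total += 1 / tri_sum
    return total

def closed(n):
    # Closed form from the telescoping argument: 3/2 - 3/(n+1) + 3/(n+2).
    return Fraction(3, 2) - Fraction(3, n + 1) + Fraction(3, n + 2)

print(S(100))                 # 2575/1717
print(S(100) == closed(100))  # True
```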
I am writing an FMM (Fast Multipole Method) algorithm in 3D. I generated the mesh and, currently, I am developing the expansion and the three translation operators (M2M, M2L, L2L) using spherical harmonics. See reference [1]; specifically, these equations are taken from equations 3.36, 3.37, 3.55, 3.56, 3.57. I am confused about how to apply the operators and I can't seem to grasp how to use them effectively. So far my aim is to (after the mesh generation, of course): 1 - Perform multipole expansions at the finest level. As I understand it, this is done using the equations $$\phi(P) = \sum_{n=0}^\infty \sum_{m=-n}^n \frac{M_n^m}{r^{n+1}} Y_n^m(\theta,\phi) \enspace ,$$ where $$M_n^m=\sum_i q_i \rho_i^n Y_n^{-m}(\alpha_i,\beta_i) \enspace .$$ In the above equations, $P(r,\theta,\phi)$ is the coordinate of the center of the cube and $Q_i(\rho_i,\alpha_i,\beta_i)$ are the source points present in each cube being evaluated, with $q_i$ as the corresponding weight of each source point. 2 - Perform the upward pass (M2M). The translation operators, for instance the M2M (i.e. multipole-to-multipole) in the upward pass, have the following equations: $$\phi(G) = \sum_{j=0}^\infty \sum_{k=-j}^j \frac{M_j^k}{r^{j+1}} Y_j^k(\theta,\phi)$$ where $$M_j^k=\sum_{n=0}^j \sum_{m=-n}^n \frac{O_{j-n}^{k-m}\, i^{|k|-|m|-|k-m|}\, A_n^m A_{j-n}^{k-m}\, \rho^n Y_n^{-m}(\alpha,\beta) }{A_j^k}$$ with $$A_n^m=\frac{(-1)^n}{\sqrt{(n-m)!(n+m)!}}$$ In the above equations, $G(r,\theta,\phi)$ is the center of the parent cube and $P(\rho,\alpha,\beta)$ is the center of the child cube. Here is where I am stuck: What is $O_{j-n}^{k-m}$? Is it the multipole expansion I performed before initiating the M2M translations, i.e. the multipole moment that was used to compute the multipole expansion before starting the M2M (namely $M_n^m$)?
If so, then each parent has 8 children at the adjacent finer level, hence there exist multiple sets of $M_n^m$; how then should I choose which one to use in place of $O_{j-n}^{k-m}$ in the M2M translation? P.S.: As you might have noticed already, this is the first time I am implementing the FMM, and specifically in 3D. I have to accomplish this as it is a minor step in a larger project with an approaching deadline, hence I appreciate any help anyone can provide. Reference: [1] Greengard, Leslie. The Rapid Evaluation of Potential Fields in Particle Systems. ACM Distinguished Dissertations, 1988.
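Not part of the question, but while implementing, the coefficient $A_n^m=\frac{(-1)^n}{\sqrt{(n-m)!(n+m)!}}$ from the equations above is easy to tabulate and sanity-check (a hedged sketch; `a_coeff` is a made-up name, not from the reference):

```python
from math import factorial, sqrt

def a_coeff(n, m):
    # A_n^m = (-1)^n / sqrt((n - m)! (n + m)!), defined for |m| <= n.
    if abs(m) > n:
        raise ValueError("require |m| <= n")
    return (-1) ** n / sqrt(factorial(n - m) * factorial(n + m))

# A few values to check against hand computation:
# A_0^0 = 1, A_1^0 = -1, A_1^{±1} = -1/sqrt(2)
print(a_coeff(0, 0), a_coeff(1, 0), a_coeff(1, 1))
```

Note the symmetry $A_n^m = A_n^{-m}$, which is a cheap invariant to assert while debugging the translation loops.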
Problem Statement: You are given an array/sequence of positive numbers $a_1,a_2,a_3,\cdots,a_n$ and you need to execute $q$ queries on the array; each query takes a positive number as input, and you must find and print the count of all multiples of that number in the given array.

Input: 2 4 9 15 21 20
q1 = 2, q2 = 3, q3 = 5
Output: 3 3 2

To solve this problem I thought of an algorithm which works as explained below: Create an array named freq_data[] whose length equals the maximum element of the array; it stores the count of each and every number occurring in the input array. For example:

array[] = {2, 4, 9, 15, 21, 20}
max_element = 21
freq_data[] = {0,1,0,1,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,1,1}

Create an array named multiples[] whose length will also equal the maximum element encountered in the array. This array stores, for every number in $[1,\text{maximum value}]$, the count of its multiples present in the input. Using the multiples[] array, I can answer each query in $O(1)$ time by printing the value at multiples[q - 1]. Initialise multiples[0] = size of arr[], because every number in the array is divisible by 1. If the query is greater than the maximum value in the array, then the answer is zero, because for one number to be a multiple of another, it must be greater than or equal to the divisor.

Time Complexity: $O(\max \times \log(\max))$
Space Complexity: $O(\max)$

Now I was wondering: what if I changed the question so that, instead of taking queries from the user, I am interested in finding, for each arr[i], how many of its multiples are present in the left sub-array? Formally: You are given a sequence of positive numbers $a_1,a_2,a_3,\cdots,a_n$. For each index $i$ with $0 \leq i \lt n$, count the number of indices $j$ with $0 \leq j \lt i$ such that $a_j$ is a multiple of $a_i$. For example: 1.
Input: arr[] = {2, 6, 3}
Output: no_of_multiples[] = {0, 0, 1} // for 2 and 6 there are no earlier multiples present, but for 3 there is one multiple present, namely 6

2. Input: arr[] = {16,25,63,12,65,45,23,65,78,99,36,12,36,41,36,2,3}
Output: no_of_multiples[] = {0,0,0,0,0,0,0,1,0,0,0,2,1,0,2,7,9}

To solve the above problem, the algorithm I designed is based on the previous solution, but rather than filling freq_data[] all at once, I first check how many multiples of arr[i] are already present in freq_data[], and only after counting them do I increment freq_data[arr[i] - 1]; finally, I print the counts after processing the whole array arr[].

Time Complexity: $O\big(\sum_{i} \max/arr[i]\big)$, which can degrade toward $O(n \times \max)$ when small values repeat
Space Complexity: $O(\max)$

My question is: Is there any better algorithm to solve the second part, i.e., for each index $i$, counting the multiples of $a_i$ among $a_0,a_1,\cdots,a_{i-1}$?

Edit-1: With reference to @gnasher729's points, there are constraints on the inputs, because if the size of the input gets larger the algorithm will not be efficient enough:
$1\leq n \leq 10^5$ — constraint on the length of the sequence
$1 \leq arr[i] \leq 10^6$ — constraint on the values of the sequence
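A minimal sketch of the second algorithm as described (my naming, not from the post), assuming a non-empty array of positive integers:

```python
def count_left_multiples(arr):
    # For each arr[i], count how many earlier elements are multiples of arr[i].
    m = max(arr)
    freq = [0] * (m + 1)   # freq[v] = occurrences of value v seen so far
    result = []
    for x in arr:
        # Sum counts of all multiples of x seen so far: O(max / x) per element.
        result.append(sum(freq[v] for v in range(x, m + 1, x)))
        freq[x] += 1       # only now record x, so the count stays left-strict
    return result

print(count_left_multiples([2, 6, 3]))  # [0, 0, 1]
```

Incrementing `freq[x]` only after the count is what makes the answer refer strictly to the left sub-array, including the duplicate-value cases (a second 65 counts the first one).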
Using Self-Organizing Maps to solve the Traveling Salesman Problem Published on January 21, 2018 The Traveling Salesman Problem is a well known challenge in Computer Science: it consists of finding the shortest route possible that traverses all cities in a given map only once. Despite its simple statement, this problem is, indeed, NP-Complete. This implies that the difficulty of solving it increases rapidly with the number of cities, and no efficient general solution is known. For that reason, we currently consider that any method able to find a sub-optimal solution is generally good enough (most of the time we cannot even verify whether the returned solution is the optimal one). To solve it, we can try to apply a modification of the Self-Organizing Map (SOM) technique. Let us take a look at what this technique consists of, and then apply it to the TSP once we understand it better. Note (2018-02-01): You can also read this post in Chinese, translated by Yibing Du. Some insight on Self-Organizing Maps The original paper released by Teuvo Kohonen in 1998 [1] consists of a brief, masterful description of the technique. In it, a self-organizing map is described as a (usually two-dimensional) grid of nodes, inspired by a neural network. Closely related to the map is the idea of the model, that is, the real-world observation the map is trying to represent. The purpose of the technique is to represent the model with a lower number of dimensions, while maintaining the relations of similarity of the nodes contained in it. To capture this similarity, the nodes in the map are spatially organized to be closer the more similar they are to each other. For that reason, SOMs are a great way for pattern visualization and organization of data. To obtain this structure, a regression operation is applied to the map to modify the position of the nodes, one element of the model (\(e\)) at a time.
The expression used for the regression is: \[ n_{t+1} = n_{t} + h(w_{e}) \cdot \Delta(e, n_{t}) \] This means that the position of the node \(n\) is updated by adding the distance from it to the given element, multiplied by the neighborhood factor of the winner neuron, \(w_{e}\). The winner of an element is the most similar node in the map to it, usually the closest node by Euclidean distance (although it is possible to use a different similarity measure if appropriate). On the other hand, the neighborhood is defined as a convolution-like kernel over the map around the winner. Doing this, we are able to update the winner and the nearby neurons toward the element, obtaining a smooth and proportional result. The function is usually defined as a Gaussian distribution, but other implementations exist as well. One worth mentioning is the bubble neighborhood, which updates the neurons that are within a radius of the winner (based on a discrete Kronecker delta function); this is the simplest neighborhood function possible. Modifying the technique To use the network to solve the TSP, the main concept to understand is how to modify the neighborhood function. If instead of a grid we declare a circular array of neurons, each node will only be conscious of the neurons in front of and behind it. That is, the inner similarity will work just in one dimension. Making this slight modification, the self-organizing map will behave as an elastic ring, getting closer to the cities while trying to minimize its perimeter thanks to the neighborhood function. Although this modification is the main idea behind the technique, it will not work as is: the algorithm will hardly ever converge. To ensure convergence, we can include a learning rate, \(\alpha\), to control the exploration and exploitation of the algorithm.
To obtain high exploration first, and high exploitation afterwards, we must include a decay in both the neighborhood function and the learning rate. Decaying the learning rate will ensure less aggressive displacement of the neurons around the model, and decaying the neighborhood will result in a more moderate exploitation of the local minima of each part of the model. Then, our regression can be expressed as: \[ n_{t+1} = n_{t} + \alpha_{t} \cdot g(w_{e}, h_{t}) \cdot \Delta(e, n_{t}) \] where \(\alpha_t\) is the learning rate at a given time, and \(g\) is the Gaussian function centered on the winner with a neighborhood dispersion of \(h_t\). The decay function consists of simply multiplying the learning rate and the neighborhood distance by their respective discounts, \(\gamma\): \[ \alpha_{t+1} = \gamma_{\alpha} \cdot \alpha_{t} , \ \ h_{t+1} = \gamma_{h} \cdot h_{t} \] This expression is indeed quite similar to that of Q-Learning, and convergence is sought in a similar fashion. Decaying the parameters can be useful in unsupervised learning tasks like the aforementioned ones. It is also similar to the functioning of the Learning Vector Quantization technique, also developed by Teuvo Kohonen. Finally, to obtain the route from the SOM, it is only necessary to associate each city with its winner neuron, traverse the ring starting from any point, and sort the cities by order of appearance of their winner neuron in the ring. If several cities map to the same neuron, it is because the order of traversing those cities has not been contemplated by the SOM (due to lack of relevance for the final distance or insufficient precision). In that case, any possible order can be considered for such cities. Implementing and testing the SOM For the task, an implementation of the previously explained technique is provided in Python 3.
It is able to parse and load any 2D instance problem modelled as a TSPLIB file and run the regression to obtain the shortest route. This format is chosen because the testing and evaluation use the problems in the National Traveling Salesman Problem instances offered by the University of Waterloo, which also provides the optimal value of the route for these instances and will allow us to check the quality of our solutions. On a lower level, the numpy package was used for the computations, which enables vectorization and higher performance, as well as more expressive and concise code. pandas is used for loading the .tsp files to memory easily, and matplotlib is used to plot the graphical representation. These dependencies are all included in the Anaconda distribution of Python, or can be easily installed using pip. To evaluate the implementation, we will use some instances provided by the aforementioned National Traveling Salesman Problem library. These instances are inspired by real countries and also include the optimal route for most of them, which is a key part of our evaluation. The evaluation strategy consists in running several instances of the problem and studying some metrics: Execution time invested by the technique to find a solution. Quality of the solution, measured as a function of the optimal route: a route that we say is "10% longer than the optimal route" is exactly 1.1 times the length of the optimal one. The parameters used in the evaluation are the ones found by parametrization of the technique, using the ones provided in previous works [2] as a starting point. These parameters are: A population size of 8 times the number of cities in the problem. An initial learning rate of 0.8, with a discount rate of 0.99997. An initial neighbourhood of the number of cities, decayed by 0.9997. These parameters were applied to the following instances: Qatar, containing 194 cities with an optimal tour of 9352.
Uruguay, containing 734 cities with an optimal tour of 79114. Finland, containing 10639 cities with an optimal tour of 520527. Italy, containing 16862 cities with an optimal tour of 557315. The implementation also stops the execution if any of the variables decays under a useful threshold. A uniform way of running the algorithm is tested, although finer-grained parameters can be found for each instance. The following table gathers the evaluation results, with the average result of 5 executions on each of the instances.

| Instance | Iterations | Time (s) | Length    | Quality |
|----------|------------|----------|-----------|---------|
| Qatar    | 14690      | 14.3     | 10233.89  | 9.4%    |
| Uruguay  | 17351      | 23.4     | 85072.35  | 7.5%    |
| Finland  | 37833      | 284.0    | 636580.27 | 22.3%   |
| Italy    | 39368      | 401.1    | 723212.87 | 29.7%   |

The implementation yields great results: we are able to obtain sub-optimal solutions in barely 400 seconds of execution, returning acceptable results overall, with some remarkable cases like Uruguay, where we are able to find a route traversing 734 cities only 7.5% longer than the optimal in less than 25 seconds. Final remarks Although not thoroughly tested, this seems like an interesting application of the technique, which is able to produce some impressive results when applied to sets of cities distributed more or less uniformly across the map. The code is available on my GitHub and licensed under MIT, so feel free to tweak it and play with it as much as you wish. Finally, if you have any doubts or inquiries, do not hesitate to contact me. Also, I wanted to thank Leonard Kleinans for his help during our Erasmus in Norway, tuning the first version of the code.
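As a recap, the regression update described earlier can be sketched in a few lines of numpy. This is not the repository code — the function and variable names are mine, and the decay is left to the caller:

```python
import numpy as np

def som_update(network, city, learning_rate, radius):
    """One regression step of the circular (ring) SOM toward a single city.

    network: (n, 2) array of neuron positions; city: (2,) point.
    """
    # Winner: the node closest to the city (Euclidean distance).
    winner = np.linalg.norm(network - city, axis=1).argmin()
    # Distance along the ring from each node to the winner.
    n = len(network)
    d = np.abs(np.arange(n) - winner)
    ring_dist = np.minimum(d, n - d)
    # Gaussian neighborhood g(w_e, h) centered on the winner.
    g = np.exp(-ring_dist ** 2 / (2 * radius ** 2))
    # n_{t+1} = n_t + alpha_t * g(w_e, h_t) * delta(e, n_t)
    return network + learning_rate * g[:, np.newaxis] * (city - network)

# After each step, decay both parameters as in the post, e.g.:
# learning_rate *= 0.99997; radius *= 0.9997
```

Picking cities in random order and repeating this step until the parameters decay below a useful threshold reproduces the elastic-ring behavior described above.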
Inverse scattering problem on the axis for the triangular $2\times 2$ matrix potential with a virtual level Methods Funct. Anal. Topology 15 (2009), no. 4, 301-321 The characteristic properties of the scattering data for the Schrödinger operator on the axis with a triangular $2\times 2$ matrix potential are obtained in the case when simple or multiple virtual levels may be present. In the case of a multiple virtual level, the reflection coefficient may have a pole at $k=0$. For this case, a modified Parseval equality is constructed. Methods Funct. Anal. Topology 15 (2009), no. 4, 322-332 A subset $A$ of $\mathbb R^n$ is said to be max-min convex if, for any $x,y\in A$ and any $t\in \mathbb R$, we have $x\oplus t\otimes y\in A$ (here $\oplus$ stands for the coordinatewise maximum of two elements in $\mathbb R^n$ and $t\otimes (y_1,\dots,y_n)=(\min\{t,y_1\},\dots, \min\{t,y_n\})$). It is proved that the hyperspace of compact max-min convex sets in the Euclidean space $\mathbb R^n$, $n\ge2$, is homeomorphic to the punctured Hilbert cube. This is a counterpart of the result by Nadler, Quinn and Stavrokas proved for the hyperspace of compact convex sets. We also investigate the maps of the hyperspaces of compact max-min convex sets induced by the projection maps of Euclidean spaces. It is proved that this map is a Hilbert cube manifold bundle. Methods Funct. Anal. Topology 15 (2009), no. 4, 333-355 In this article we investigate an inverse spectral problem for three-diagonal block Jacobi type Hermitian real-valued matrices with "almost" semidiagonal matrices on the side diagonals. Methods Funct. Anal. Topology 15 (2009), no. 4, 356-360 Let $X$ be a Baire space, $Y$ be a compact Hausdorff space and $f:X \times Y \to \mathbb{R}$ be a separately continuous mapping.
For each $y \in Y$, we define a game $G(Y, \{ y \})$ between players $O$ and $P$, to show that if in this game either player $O$ has a winning strategy, or $X$ is $\alpha$-favorable and player $P$ does not have a winning strategy, then for each countable subset $E$ of $Y$, there exists a dense $G_\delta$ subset $D$ of $X$ such that $f$ is jointly continuous on $D \times E$. Methods Funct. Anal. Topology 15 (2009), no. 4, 361-368 Walter Roth studied one form of the uniform boundedness theorem in \cite{Rot98}. We investigate some other versions of the uniform boundedness theorem for barreled and upper-barreled locally convex cones. Finally, we show some applications of this theorem. Methods Funct. Anal. Topology 15 (2009), no. 4, 369-383 A model operator $H$ associated to a system describing four particles in interaction, without conservation of the number of particles, is considered. We describe the essential spectrum of $H$ by the spectrum of the channel operators and prove the Hunziker-van Winter-Zhislin (HWZ) theorem for the operator $H.$ We also give some variational principles for the boundaries of the essential spectrum and interior eigenvalues. Methods Funct. Anal. Topology 15 (2009), no. 4, 384-390 A classification of irreducible *-representations of a certain deformation of twisted canonical commutation relations is given. Methods Funct. Anal. Topology 15 (2009), no. 4, 391-400 We introduce the notion of an $ls$-Ponomarev-system $(f, M, X, \{\mathcal{P}_{\lambda,n}\})$, and give necessary and sufficient conditions such that the mapping $f$ is a compact (compact-covering, sequence-covering, pseudo-sequence-covering, sequentially-quotient) mapping from a locally separable metric space $M$ onto a space $X$. As applications of these results, we systematically get characterizations of certain compact images of locally separable metric spaces.
@Secret et al hows this for a video game? OE Cake! fluid dynamics simulator! have been looking for something like this for yrs! just discovered it wanna try it out! anyone heard of it? anyone else wanna do some serious research on it? think it could be used to experiment with solitons=D OE-Cake, OE-CAKE! or OE Cake is a 2D fluid physics sandbox which was used to demonstrate the Octave Engine fluid physics simulator created by Prometech Software Inc. It was one of the first engines with the ability to realistically process water and other materials in real-time. In the program, which acts as a physics-based paint program, users can insert objects and see them interact under the laws of physics. It has advanced fluid simulation, and support for gases, rigid objects, elastic reactions, friction, weight, pressure, textured particles, copy-and-paste, transparency, foreground a... @NeuroFuzzy awesome what have you done with it? how long have you been using it? it definitely could support solitons easily (because all you really need is to have some time dependence and discretized diffusion, right?) but I don't know if it's possible in either OE-cake or that dust game As far as I recall, being a long term powder gamer myself, powder game does not really have a diffusion-like algorithm written into it. The liquids in powder game are sort of dots that move back and forth and are subjected to gravity @Secret I mean more along the lines of the fluid dynamics in that kind of game @Secret Like how in the dan-ball one air pressure looks continuous (I assume) @Secret You really just need a timer for particle extinction, and something that affects adjacent cells. Like maybe a rule for a particle that says: particles of type A turn into type B after 10 steps, particles of type B turn into type A if they are adjacent to type A. I would bet you get lots of cool reaction-diffusion-like patterns with that rule.
(Those that don't understand cricket, please ignore this context, I will get to the physics...) England are playing Pakistan at Lords and a decision has once again been overturned based on evidence from the 'snickometer'. (see over 1.4) It's always bothered me slightly that there seems to be a ... Abstract: Analyzing the data from the last replace-the-homework-policy question was inconclusive. So back to the drawing board, or really back to this question: what do we really mean when we vote to close questions as homework-like? As some/many/most people are aware, we are in the midst of a... Hi, I am trying to understand the concept of dex and how to use it in calculations. The usual definition is that it is the order of magnitude, so $10^{0.1}$ is $0.1$ dex. I want to do a simple exercise of calculating the value of the RHS of Eqn 4 in this arXiv paper, the gammas are incompl... @ACuriousMind Guten Tag! :-) Dark Sun also has a lot of frightening characters. For example, Borys, the 30th level dragon. Or different stages of the defiler/psionicist 20/20 -> dragon 30 transformation. It is only a tip, if you start to think on your next avatar :-) What is the maximum distance for eavesdropping on pure sound waves? And what kind of device do I need to use for eavesdropping? Actually a microphone with a parabolic reflector or laser-reflection listening devices are available on the market, but is there any other device on the planet which should allow ... and endless whiteboards get doodled with boxes, grids circled in red markers and some scribbles The documentary then showed one of the bird's eye views of the farmlands (pardon my sketchy drawing skills...) Most of the farmland is tiled into grids Here there are two distinct columns and rows of tiled farmlands to the left and top of the main grid.
They are the index arrays and they notate the range of indices of the tensor array In some tiles, there's a swirl of a dirt mound; they represent components with nonzero curl, and in others grass grew Two blue steel bars were visible lying across the grid, holding up a triangular pool of water Next, in an interview, they mentioned that experimentally the process is quite simple. The tall guy is seen using a large crowbar to pry away a screw that held a road sign under a skyway; i.e. occasionally, mishaps can happen, such as too much force applied and the sign snapping in the middle. The boys will then be forced to take the broken sign to the nearest roadworks workshop to mend it At the end of the documentary, near a university lodge area, I walked towards the boys and expressed interest in joining their project. They then said that you will be spending quite a bit of time on the theoretical side and doodling on whiteboards. They also asked about my recent trip to London and Belgium. Dream ends Reality check: I have been to London, but not Belgium Idea extraction: The tensor array mentioned in the dream is a multi-index object where each component can be a tensor of different order Presumably one can formulate it (using an example of a 4th order tensor) as follows: $$A^{\alpha}{}_{\beta}{}_{\gamma,\delta,\epsilon}$$ and then allow the indices $\alpha,\beta$ to run from 0 to the size of the matrix representation of the whole array, while the indices $\gamma,\delta,\epsilon$ can be taken from a subset of the range of the $\alpha,\beta$ indices.
For example to encode a patch of nonzero curl vector field in this object, one might set $\gamma$ to be from the set $\{4,9\}$ and $\delta$ to be $\{2,3\}$ However even if taking indices to have certain values only, it is unsure if it is of any use since most tensor expressions have indices taken from a set of consecutive numbers rather than random integers @DavidZ in the recent meta post about the homework policy there is the following statement: > We want to make it sure because people want those questions closed. Evidence: people are closing them. If people are closing questions that have no valid reason for closure, we have bigger problems. This is an interesting statement. I wonder to what extent not having a homework close reason would simply force would-be close-voters to either edit the post, down-vote, or think more carefully whether there is another more specific reason for closure, e.g. "unclear what you're asking". I'm not saying I think simply dropping the homework close reason and doing nothing else is a good idea. I did suggest that previously in chat, and as I recall there were good objections (which are echoed in @ACuriousMind's meta answer's comments). @DanielSank Mostly in a (probably vain) attempt to get @peterh to recognize that it's not a particularly helpful topic. @peterh That said, he used to be fairly active on physicsoverflow, so if you really pine for the opportunity to communicate with him, you can go on ahead there. But seriously, bringing it up, particularly in that way, is not all that constructive. @DanielSank No, the site mods could have caged him only in the PSE, and only for a year. That he got. After that his cage was extended to a 10 year long network-wide one, it couldn't be the result of the site mods. Only the CMs can do this, typically for network-wide bad deeds. @EmilioPisanty Yes, but I had liked to talk to him here. @DanielSank I am only curious, what he did. Maybe he attacked the whole network? 
Or he took a site-level conflict to the IRL world? As I know, network-wide bans happen for such things. @peterh That is pure fear-mongering. Unless you plan on going on extended campaigns to get yourself suspended, in which case I wish you speedy luck. 4 Seriously, suspensions are never handed out without warning, and you will not be ten-year-banned out of the blue. Ron had very clear choices and a very clear picture of the consequences of his choices, and he made his decision. There is nothing more to see here, and bringing it up again (and particularly in such a dewy-eyed manner) is far from helpful. @EmilioPisanty Although it is already not about Ron Maimon, I can't see that the meaning of "campaign" is well-defined enough here. And yes, it is a bit of a source of fear for me that maybe my behavior could also be measured as if "I were campaigning for my own caging".
Is there an algorithm/systematic procedure to test whether a language is regular? In other words, given a language specified in algebraic form (think of something like $L=\{a^n b^n : n \in \mathbb{N}\}$), test whether the language is regular or not. Imagine we are writing a web service to help students with all their homework; the user specifies the language, and the web service responds with "regular", "not regular", or "I don't know". (We'd like the web service to answer "I don't know" as infrequently as possible.) Is there any good approach to automating this? Is this tractable? Is it decidable (i.e., is it possible to guarantee that we never need to answer "I don't know")? Are there reasonably efficient algorithms for solving this problem that are able to provide an answer other than "don't know" for many/most languages that are likely to arise in practice? The classic method for proving that a language is not regular is the pumping lemma. However, it looks like it requires manual insight at some point (e.g., to choose the word to pump), so I'm not clear on whether this can be turned into something algorithmic. A classic method for proving that a language is regular would be to use the Myhill–Nerode theorem to derive a finite-state automaton. This looks like a promising approach, but it does require the ability to perform basic operations on languages in algebraic form. It's not clear to me whether there's a systematic way to symbolically perform all of the operations that may be needed on languages in algebraic form. To make this question well-posed, we need to decide how the user will specify the language. I'm open to suggestions, but I'm thinking something like this: $$L = \{E : S\}$$ where $E$ is a word-expression and $S$ is a system of linear inequalities over the length-variables, with the following definitions: Each of $x,y,z,\dots$ is a word-expression. (These represent variables that can take on any word in $\Sigma^*$.)
Each of $x^r,y^r,z^r,\dots$ is a word-expression. (Here $x^r$ represents the reverse of the string $x$.) Each of $a,b,c,\dots$ is a word-expression. (Implicitly, $\Sigma=\{a,b,c,\dots\}$, so $a,b,c,\dots$ represent a single symbol in the underlying alphabet.) Each of $a^\eta,b^\eta,c^\eta,\dots$ is a word-expression, if $\eta$ is a length-variable. The concatenation of word-expressions is a word-expression. Each of $m,n,p,q,\dots$ is a length-variable. (These represent variables that can take on any natural number.) Each of $|x|,|y|,|z|,\dots$ is a length-variable. (These represent the length of a corresponding word.) This seems broad enough to handle many of the cases we see in textbook exercises. Of course, you can substitute any other textual method of specifying a language in algebraic form, if you have a better suggestion.
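As an aside on the Myhill–Nerode route mentioned above: for a fixed example language such as $L=\{a^n b^n\}$, the argument can at least be spot-checked mechanically. Here is a hedged Python sketch (`in_L` and `distinguishes` are made-up names, not part of any proposed web service):

```python
def in_L(w):
    # Membership test for L = { a^n b^n : n in N }.
    n = len(w) // 2
    return w == "a" * n + "b" * n

def distinguishes(i, j, suffix):
    # suffix separates prefixes a^i and a^j iff exactly one extension lies in L.
    return in_L("a" * i + suffix) != in_L("a" * j + suffix)

# Myhill–Nerode: the suffix b^i completes a^i but not a^j for j != i,
# so the prefixes a^0, a^1, a^2, ... are pairwise inequivalent.
# Infinitely many classes => L is not regular.
assert all(distinguishes(i, j, "b" * i)
           for i in range(10) for j in range(10) if i != j)
```

Of course, this only verifies finitely many pairs; an actual decision procedure would need to carry out the inequivalence argument symbolically over the algebraic language description.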
Methods Funct. Anal. Topology 13 (2007), no. 3, 201-210 To extend the concepts of $p$-frame, frame for Banach spaces, and atomic decomposition, we define the concepts of $pg$-frame and $g$-frame for Banach spaces, by which each $f\in X$ ($X$ is a Banach space) can be represented by an unconditionally convergent series $f=\sum g_{i}\Lambda_{i},$ where $\{\Lambda_{i}\}_{i\in J}$ is a $pg$-frame, $\{g_{i}\}\in(\sum\oplus Y_{i}^{*})_{l_q}$ and $\frac{1}{p}+\frac{1}{q}=1$. In fact, a $pg$-frame $\{\Lambda_{i}\}$ is a kind of overcomplete basis for $X^{*}.$ We also show that every separable Banach space $X$ has a $g$-Banach frame with bounds equal to $1.$ Methods Funct. Anal. Topology 13 (2007), no. 3, 211-222 We define the $\varepsilon_{\infty}$-product of a Banach space $G$ by a quotient bornological space $E\mid F$, which we denote by $G\varepsilon_{\infty}(E\mid F)$, and we prove that $G$ is an $\mathcal{L}_{\infty}$-space if and only if the quotient bornological spaces $G\varepsilon_{\infty}(E\mid F)$ and $(G\varepsilon E)\mid (G\varepsilon F)$ are isomorphic. Also, we show that the functor $\mathbf{.\varepsilon}_{\infty}\mathbf{.}:\mathbf{Ban\times qBan\longrightarrow qBan}$ is left exact. Finally, we define the $\varepsilon_{\infty}$-product of a b-space by a quotient bornological space and we prove that if $G$ is an $\varepsilon$b-space and $E\mid F$ is a quotient bornological space, then $(G\varepsilon E)\mid (G\varepsilon F)$ is isomorphic to $G\varepsilon_{\infty}(E\mid F)$. Methods Funct. Anal. Topology 13 (2007), no. 3, 223-235 We consider a non-densely defined Hermitian contractive operator which is unitarily equivalent to its linear-fractional transformation. We show that such an operator always admits self-adjoint extensions which are also unitarily equivalent to their linear-fractional transformation. Methods Funct. Anal. Topology 13 (2007), no.
3, 236-261 We give a survey on generalized Krein algebras $K_{p,q}^{\alpha,\beta}$ and their applications to Toeplitz determinants. Our methods originated in a 1966 paper by Mark Krein, where he showed that $K_{2,2}^{1/2,1/2}$ is a Banach algebra. Subsequently, Widom proved the strong Szego limit theorem for block Toeplitz determinants with symbols in $(K_{2,2}^{1/2,1/2})_{N\times N}$, and later two of the authors studied symbols in the generalized Krein algebras $(K_{p,q}^{\alpha,\beta})_{N\times N}$, where $\lambda:=1/p+1/q=\alpha+\beta$ and $\lambda=1$. We here extend these results to $0< \lambda <1$. The entire paper is based on fundamental work by Mark Krein, ranging from operator ideals through Toeplitz operators up to Wiener-Hopf factorization. Methods Funct. Anal. Topology 13 (2007), no. 3, 262-266 It is known that if the Euler--Lagrange variational equation is fulfilled everywhere in the classical case $C^1$, then its solution is twice continuously differentiable. The present note is devoted to the study of a similar problem for the Euler--Lagrange equation in the Sobolev space $W_{2}^{1}$. Direct theorems in the theory of approximation of Banach space vectors by exponential type entire vectors Methods Funct. Anal. Topology 13 (2007), no. 3, 267-278 For an arbitrary operator $A$ on a Banach space $X$ which is the generator of a $C_0$-group with a certain growth condition at infinity, direct theorems on the connection between the degree of smoothness of a vector $x\in X$ with respect to the operator $A$, the rate of convergence to zero of the best approximation of $x$ by exponential type entire vectors for the operator $A$, and the $k$-module of continuity are established. The results allow one to obtain Jackson-type inequalities in a number of classic spaces of periodic functions and weighted $L_p$ spaces. Methods Funct. Anal. Topology 13 (2007), no. 
3, 279-283 We extend the result of Beurling on the closure in $H^p$ of the linear manifold $F(z)\cdot\{$polynomials of $z\}$ to the classes of entire functions of finite gamma-growth. The set of discontinuity points of separately continuous functions on the products of compact spaces Methods Funct. Anal. Topology 13 (2007), no. 3, 284-295 We solve the problem of constructing separately continuous functions on the product of compact spaces with a given set of discontinuity points. We obtain the following results. 1. For arbitrary \v{C}ech complete spaces $X$, $Y$, and a separable compact perfect projectively nowhere dense zero set $E\subseteq X\times Y$ there exists a separately continuous function $f:X\times Y\to\mathbb R$ whose set of discontinuity points coincides with $E$. 2. For arbitrary \v{C}ech complete spaces $X$, $Y$, and nowhere dense zero sets $A\subseteq X$ and $B\subseteq Y$ there exists a separately continuous function $f:X\times Y\to\mathbb R$ such that the projections of the set of discontinuity points of $f$ coincide with $A$ and $B$, respectively. We construct an example of Eberlein compacts $X$, $Y$, and nowhere dense zero sets $A\subseteq X$ and $B\subseteq Y$ such that the set of discontinuity points of every separately continuous function $f:X\times Y\to\mathbb R$ does not coincide with $A\times B$, and a $CH$-example of separable Valdivia compacts $X$, $Y$ and separable nowhere dense zero sets $A\subseteq X$ and $B\subseteq Y$ such that the set of discontinuity points of every separately continuous function $f:X\times Y\to\mathbb R$ does not coincide with $A\times B$. Methods Funct. Anal. Topology 13 (2007), no. 3, 296-300 Let a sequence $\left\{v_k \right\}^{+\infty}_{-\infty}\in l_2$, a real sequence $\left\{\lambda_k \right\}^{+\infty}_{-\infty}$ such that $\left\{\lambda_k^{-1} \right\}^{+\infty}_{-\infty}\in l_2$, and an orthonormal basis $\left\{e_k \right\}^{+\infty}_{-\infty}$ of a Hilbert space be given. 
We describe a sequence $M=\left\{\mu_k \right\}^{+\infty}_{-\infty}$, $M\cap \mathbb{R}=\varnothing$, such that the families $$ f_k = \sum\limits_{j\in\mathbb{Z}} {v_j\left(\lambda_j-\bar{\mu}_k \right)^{-1}}e_j, \quad k\in \mathbb{Z} $$ form an unconditional basis in $\mathfrak{H}$.
Methods Funct. Anal. Topology 14 (2008), no. 4, 297-301 We give some sufficient conditions under which the linear span of positive compact (resp. Dunford-Pettis, weakly compact, AM-compact) operators cannot be a vector lattice without being a sublattice of the order complete vector lattice of all regular operators. Also, some interesting consequences are obtained. On certain resolvent convergence of one non-local problem to a problem with spectral parameter in boundary condition Methods Funct. Anal. Topology 14 (2008), no. 4, 302-313 A family of non-local problems with the same finite point spectrum is given. The resolvent convergence on a dense linear subspace which gives a problem with a spectral parameter in the boundary condition is considered. The spectral eigenvalue decomposition of the last problem on the half line for the Sturm-Liouville operator with trivial potential is given. Methods Funct. Anal. Topology 14 (2008), no. 4, 314-322 In the present paper we describe semiadditive functionals and establish that the construction generated by semiadditive functionals forms a covariant functor. We show that the functor of semiadditive functionals is a normal functor acting in the category of compact sets. Methods Funct. Anal. Topology 14 (2008), no. 4, 323-329 In this paper we investigate solvability of a partial integral equation in the space $L_2(\Omega\times\Omega),$ where $\Omega=[a,b]^\nu.$ We define a determinant for the partial integral equation as a continuous function on $\Omega$, and for continuous kernels of the partial integral equation we give an explicit description of the solution. Methods Funct. Anal. Topology 14 (2008), no. 4, 330-333 In the paper we consider examples of basis families $\{\cos \lambda_k t\}^\infty_1$, $\lambda_k>0$, in the space $L_2(0,\sigma)$, such that the systems $\{e^{i\lambda_kt},e^{-i\lambda_kt}\}^\infty_1$ do not form an unconditional basis in the space $L_2(-\sigma,\sigma)$. 
Generalized stochastic derivatives on parametrized spaces of regular generalized functions of Meixner white noise Methods Funct. Anal. Topology 14 (2008), no. 4, 334-350 We introduce and study Hida-type stochastic derivatives and stochastic differential operators on the parametrized Kondratiev-type spaces of regular generalized functions of Meixner white noise. In particular, we study the interconnection between stochastic integration and differentiation. Our research is based on a general approach that covers the Gaussian, Poissonian, Gamma, Pascal and Meixner cases. Methods Funct. Anal. Topology 14 (2008), no. 4, 351-360 In this paper, we define the first topological $(\sigma,\tau)$-cohomology group and examine the vanishing of the first $(\sigma,\tau)$-cohomology groups of certain triangular Banach algebras. We apply our results to study the $(\sigma,\tau)$-weak amenability and $(\sigma,\tau)$-amenability of triangular Banach algebras. Representation of commutants for composition operators induced by a hyperbolic linear fractional automorphism of the unit disk Methods Funct. Anal. Topology 14 (2008), no. 4, 361-371 We describe the commutant of the composition operator induced by a hyperbolic linear fractional transformation of the unit disk onto itself in the class of linear continuous operators which act on the space of analytic functions. Two general classes of linear continuous operators which commute with such composition operators are constructed. The criteria of maximal dissipativity and self-adjointness for a class of differential-boundary operators with bounded operator coefficients Methods Funct. Anal. Topology 14 (2008), no. 4, 372-379 A class of second order differential-boundary operators acting in the Hilbert space of infinite-dimensional vector-functions is investigated. The domains of the considered operators are defined by nonstandard (e.g., multipoint-integral) boundary conditions. 
The criteria of maximal dissipativity and the criteria of self-adjointness for the investigated operators are established. Methods Funct. Anal. Topology 14 (2008), no. 4, 380-385 The paper gives a complete characterization of all unitary operators acting in some wide Hilbert spaces $A^2_\omega(\mathbb{C})$ of entire functions possessing weighted square integrable modulus over the whole finite complex plane, which exhaust the set of all entire functions. Methods Funct. Anal. Topology 14 (2008), no. 4, 386-396 A detailed analysis of sufficient conditions on a family of many-body potentials which ensure stability, superstability or strong superstability of a statistical system is given in the present work. An example of a superstable many-body interaction is also given.
A spin exchange relaxation-free (SERF) magnetometer is a type of magnetometer developed at Princeton University in the early 2000s. SERF magnetometers measure magnetic fields by using lasers to detect the interaction between alkali metal atoms in a vapor and the magnetic field. The name for the technique comes from the fact that spin exchange relaxation, a mechanism which usually scrambles the orientation of atomic spins, is avoided in these magnetometers. This is done by using a high (10¹⁴ cm⁻³) density of potassium atoms and a very low magnetic field. Under these conditions, the atoms exchange spin quickly compared to their magnetic precession frequency, so that the average spin interacts with the field and is not destroyed by decoherence. [1] A SERF magnetometer achieves very high magnetic field sensitivity by monitoring a high density vapor of alkali metal atoms precessing in a near-zero magnetic field. [2] The sensitivity of SERF magnetometers improves upon traditional atomic magnetometers by eliminating the dominant cause of atomic spin decoherence, spin-exchange collisions among the alkali metal atoms. SERF magnetometers are among the most sensitive magnetic field sensors and in some cases exceed the performance of SQUID detectors of equivalent size. A small 1 cm³ volume glass cell containing potassium vapor has reported 1 fT/√Hz sensitivity and can theoretically become even more sensitive with larger volumes. 
[3] They are vector magnetometers capable of measuring all three components of the magnetic field simultaneously.

Spin-exchange relaxation

Spin-exchange collisions preserve total angular momentum of a colliding pair of atoms but can scramble the hyperfine state of the atoms. 
Atoms in different hyperfine states do not precess coherently and thereby limit the coherence lifetime of the atoms. However, decoherence due to spin-exchange collisions can be nearly eliminated if the spin-exchange collisions occur much faster than the precession frequency of the atoms. In this regime of fast spin-exchange, all atoms in an ensemble rapidly change hyperfine states, spending the same amounts of time in each hyperfine state and causing the spin ensemble to precess more slowly but remain coherent. This so-called SERF regime can be reached by operating with sufficiently high alkali metal density (at higher temperature) and in sufficiently low magnetic field. [4]

[Figure: Alkali metal atoms with hyperfine state indicated by color, precessing in the presence of a magnetic field, experience a spin-exchange collision which preserves total angular momentum but changes the hyperfine state, causing the atoms to precess in opposite directions and decohere.]

[Figure: Alkali metal atoms in the spin-exchange relaxation-free (SERF) regime, with hyperfine state indicated by color, precessing in the presence of a magnetic field, experience a spin-exchange collision which preserves total angular momentum but changes the hyperfine state, causing the atoms to precess in opposite directions only slightly before a second spin-exchange collision returns them to the original hyperfine state.]

The spin-exchange relaxation rate \(R_{se}\) for atoms with low polarization experiencing slow spin-exchange can be expressed as follows: [4]

$$R_{se} = \frac{1}{2 \pi T_{se}} \left( \frac{2 I(2 I -1)}{3(2I+1)^2} \right)$$

where \(T_{se}\) is the time between spin-exchange collisions and \(I\) is the nuclear spin. (The magnetic resonance frequency \(\nu\) and the electron gyromagnetic ratio \(\gamma_e\) enter the fast spin-exchange expression below.) 
In the limit of fast spin-exchange and small magnetic field, the spin-exchange relaxation rate vanishes for sufficiently small magnetic field: [2]

$$R_{se} = \frac{\gamma_e^2 B^2 T_{se}}{2 \pi} \frac{1}{2}\left( 1-\frac{(2I+1)^2}{Q^2} \right)$$

where \(Q\) is the "slowing-down" constant that accounts for the sharing of angular momentum between the electron and nuclear spins: [5]

$$Q(I=3/2)=4\left( 2 - \frac{4}{3+P^2} \right)^{-1}$$

$$Q(I=5/2)=6\left( 3 - \frac{48(1+P^2)}{19+26 P^2+3 P^4} \right)^{-1}$$

$$Q(I=7/2)=8\left( \frac{4(1+7P^2+7P^4+P^6)}{11+35P^2+17P^4+P^6} \right)^{-1}$$

where \(P\) is the average polarization of the atoms. Atoms undergoing fast spin-exchange precess more slowly when they are not fully polarized, because they spend a fraction of the time in different hyperfine states, precessing at different frequencies (or in the opposite direction).

[Figure: Relaxation rate \(R_{tot} = Q \Delta \nu\), as indicated by the magnetic resonance linewidth, as a function of magnetic field. The curves represent operation with K vapor at 160, 180 and 200 °C (higher temperature gives a higher relaxation rate here), using a 2 cm diameter cell with 3 atm He buffer gas and 60 Torr N₂ quenching gas. The SERF regime is clearly apparent at sufficiently low magnetic fields, where the spin-exchange collisions occur much faster than the spin precession.]

Sensitivity

The sensitivity \(\delta B\) of atomic magnetometers is limited by the number of atoms \(N\) and their spin coherence lifetime \(T_2\) according to

$$\delta B = \frac{1}{\gamma} \sqrt{\frac{2 R_{tot} Q}{F_z N}}$$

where \(\gamma\) is the gyromagnetic ratio of the atom and \(F_z\) is the average polarization of the total atomic spin \(F = I+S\). 
[5] In the absence of spin-exchange relaxation, a variety of other relaxation mechanisms contribute to the decoherence of atomic spin: [2]

$$R_{tot} = R_D + R_{sd,self} + R_{sd,\mathrm{He}} + R_{sd,\mathrm{N_2}}$$

where \(R_D\) is the relaxation rate due to collisions with the cell walls and the \(R_{sd,X}\) are the spin destruction rates for collisions among the alkali metal atoms and collisions between alkali atoms and any other gases that may be present. In an optimal configuration, a density of 10¹⁴ cm⁻³ potassium atoms in a 1 cm³ vapor cell with ~3 atm helium buffer gas can achieve 10 aT/√Hz (10⁻¹⁷ T/√Hz) sensitivity with relaxation rate \(R_{tot}\) ≈ 1 Hz. [2]

Typical operation

[Figure: Atomic magnetometer principle of operation, depicting alkali atoms polarized by a circularly polarized pump beam, precessing in the presence of a magnetic field and being detected by optical rotation of a linearly polarized probe beam.]

Alkali metal vapor of sufficient density is obtained by simply heating solid alkali metal inside the vapor cell. A typical SERF atomic magnetometer can take advantage of low noise diode lasers to polarize and monitor spin precession. Circularly polarized pumping light tuned to the \(D_1\) spectral resonance line polarizes the atoms. An orthogonal probe beam detects the precession using optical rotation of linearly polarized light. In a typical SERF magnetometer, the spins merely tip by a very small angle because the precession frequency is slow compared to the relaxation rates.

Advantages and disadvantages

SERF magnetometers compete with SQUID magnetometers for use in a variety of applications. The SERF magnetometer has the following advantages:
- Equal or better sensitivity per unit volume
- Cryogen-free operation
- All-optical measurement, which enables imaging and eliminates interference

Potential disadvantages:
- Can only operate near zero field
- Sensor vapor cell must be heated 
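As a quick numerical illustration of the slowing-down constant \(Q\) quoted earlier, the following Python sketch (my own, not from the source) evaluates the three closed forms. In the fully polarized limit \(P=1\), each reduces to \(2I+1\):

```python
# Slowing-down constant Q(I, P) from the three closed forms quoted above.
# I is the nuclear spin, P the average polarization of the atoms.
def Q(I, P):
    if I == 1.5:
        return 4 / (2 - 4 / (3 + P**2))
    if I == 2.5:
        return 6 / (3 - 48 * (1 + P**2) / (19 + 26 * P**2 + 3 * P**4))
    if I == 3.5:
        num = 4 * (1 + 7 * P**2 + 7 * P**4 + P**6)
        den = 11 + 35 * P**2 + 17 * P**4 + P**6
        return 8 * den / num          # 8 * (num / den)**(-1)
    raise ValueError("no closed form quoted for this nuclear spin")

# Unpolarized (P = 0) vs fully polarized (P = 1):
for I in (1.5, 2.5, 3.5):
    print(f"I = {I}: Q(P=0) = {Q(I, 0.0):.3f}, Q(P=1) = {Q(I, 1.0):.3f}")
```

For \(I=3/2\) (e.g. ³⁹K) this gives \(Q=6\) unpolarized and \(Q=4\) fully polarized, consistent with the stated fact that higher polarization reduces the slowing-down factor toward \(2I+1\).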
Applications

Applications utilizing the high sensitivity of SERF magnetometers potentially include:

History

[Figure: SERF components mockup.]

The SERF magnetometer was developed by Michael V. Romalis at Princeton University in the early 2000s. [2] The underlying physics governing the suppression of spin-exchange relaxation was developed decades earlier by William Happer, [4] but the application to magnetic field measurement was not explored at that time. The name "SERF" was partially motivated by its relationship to SQUID detectors in a marine metaphor.

External links

Photographs of a SERF magnetometer from the Romalis Group at Princeton University.
also, if you are in the US, the next time anything important publishing-related comes up, you can let your representatives know that you care about this and that you think the existing situation is appalling @heather well, there's a spectrum so, there's things like New Journal of Physics and Physical Review X which are the open-access branch of existing academic-society publishers As far as the intensity of a single photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density $$u=\frac{\hbar\omega}{V}$$ is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) di... Minor terminology question. A physical state corresponds to an element of a projective Hilbert space: an equivalence class of vectors in a Hilbert space that differ by a constant multiple - in other words, a one-dimensional subspace of the Hilbert space. Wouldn't it be more natural to refer to these as "lines" in Hilbert space rather than "rays"? After all, gauging the global $U(1)$ symmetry results in the complex line bundle (not "ray bundle") of QED, and a projective space is often loosely referred to as "the set of lines [not rays] through the origin." 
— tparker 3 mins ago > A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers “serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service” for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @EmilioPisanty > for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @0celo7 but the bosses are forced because they must continue purchasing journals to keep up the copyright, and they want their employees to publish in journals they own, and journals that are considered high-impact factor, which is a term basically created by the journals. @BalarkaSen I think one can cheat a little. I'm trying to solve $\Delta u=f$. In coordinates that's $$\frac{1}{\sqrt g}\partial_i(\sqrt g\, g^{ij}\partial_j u)=f.$$ Buuuuut if I write that as $$\partial_i(\sqrt g\, g^{ij}\partial_j u)=\sqrt g f,$$ I think it can work... @BalarkaSen Plan: 1. Use functional analytic techniques on global Sobolev spaces to get a weak solution. 2. Make sure the weak solution satisfies weak boundary conditions. 3. Cut up the function into local pieces that lie in local Sobolev spaces. 4. Make sure this cutting gives nice boundary conditions. 5. Show that the local Sobolev spaces can be taken to be Euclidean ones. 6. Apply Euclidean regularity theory. 7. Patch together solutions while maintaining the boundary conditions. Alternative Plan: 1. Read Vol 1 of Hormander. 2. 
Read Vol 2 of Hormander. 3. Read Vol 3 of Hormander. 4. Read the classic papers by Atiyah, Grubb, and Seeley. I am mostly joking. I don't actually believe in revolution as a plan of making the power dynamic between the various classes and economies better; I think of it as a want of a historical change. Personally I'm mostly opposed to the idea. @EmilioPisanty I have absolutely no idea where the name comes from, and "Killing" doesn't mean anything in modern German, so really, no idea. Googling its etymology is impossible, all I get are "killing in the name", "Kill Bill" and similar English results... Wilhelm Karl Joseph Killing (10 May 1847 – 11 February 1923) was a German mathematician who made important contributions to the theories of Lie algebras, Lie groups, and non-Euclidean geometry. Killing studied at the University of Münster and later wrote his dissertation under Karl Weierstrass and Ernst Kummer at Berlin in 1872. He taught in gymnasia (secondary schools) from 1868 to 1872. He became a professor at the seminary college Collegium Hosianum in Braunsberg (now Braniewo). He took holy orders in order to take his teaching position. He became rector of the college and chair of the town... @EmilioPisanty Apparently, it's an evolution of ~ "Focko-ing(en)", where Focko was the name of the guy who founded the city, and -ing(en) is a common suffix for places. Which...explains nothing, I admit.
Now we will see that every linear map \(T \in \mathcal{L}(V, W) \), with \(V \) and \(W \) finite-dimensional vector spaces, can be encoded by a matrix, and, vice versa, every matrix defines such a linear map. Let \(V \) and \(W \) be finite-dimensional vector spaces, and let \(T:V\to W \) be a linear map. Suppose that \((v_1,\ldots,v_n) \) is a basis of \(V \) and that \((w_1,\ldots,w_m) \) is a basis for \(W \). We have seen in Theorem 6.1.3 that \(T \) is uniquely determined by specifying the vectors \(Tv_1,\ldots, Tv_n\in W \). Since \((w_1,\ldots,w_m) \) is a basis of \(W \), there exist unique scalars \(a_{ij}\in\mathbb{F} \) such that \begin{equation}\label{eq:Tv} Tv_j = a_{1j} w_1 + \cdots + a_{mj} w_m \quad \text{for \(1\le j\le n \).} \tag{6.6.1} \end{equation} We can arrange these scalars in an \(m\times n \) matrix as follows: \begin{equation*} M(T) = \begin{bmatrix} a_{11} & \ldots & a_{1n}\\ \vdots && \vdots\\ a_{m1} & \ldots & a_{mn} \end{bmatrix}. \end{equation*} Often, this is also written as \(A=(a_{ij})_{1\le i\le m,1\le j\le n} \). As in Section A.1.1, the set of all \(m\times n \) matrices with entries in \(\mathbb{F} \) is denoted by \(\mathbb{F}^{m\times n} \). Remark 6.6.1. It is important to remember that \(M(T) \) not only depends on the linear map \(T \) but also on the choice of the basis \((v_1,\ldots,v_n) \) for \(V \) and the choice of basis \((w_1,\ldots,w_m) \) for \(W \). The \(j^{\text{th}} \) column of \(M(T) \) contains the coefficients of \(Tv_j \), the image of the \(j^{\text{th}} \) basis vector \(v_j \), when expanded in terms of the basis \((w_1,\ldots,w_m) \), as in Equation 6.6.1. Example 6.6.2. Let \(T:\mathbb{R}^2\to \mathbb{R}^2 \) be the linear map given by \(T(x,y)=(ax+by,cx+dy) \) for some \(a,b,c,d\in\mathbb{R} \). 
Then, with respect to the canonical basis of \(\mathbb{R}^2 \) given by \(((1,0),(0,1)) \), the corresponding matrix is \begin{equation*} M(T) = \begin{bmatrix} a&b\\ c&d \end{bmatrix} \end{equation*} since \(T(1,0) = (a,c) \) gives the first column and \(T(0,1)=(b,d) \) gives the second column. More generally, suppose that \(V=\mathbb{F}^n \) and \(W=\mathbb{F}^m \), and denote the standard basis for \(V \) by \((e_1,\ldots,e_n) \) and the standard basis for \(W \) by \((f_1,\ldots,f_m) \). Here, \(e_i \) (resp. \(f_i\)) is the \(n\)-tuple (resp. \(m\)-tuple) with a one in position \(i \) and zeroes everywhere else. Then the matrix \(M(T)=(a_{ij}) \) is given by \begin{equation*} a_{ij} = (Te_j)_i, \end{equation*} where \((Te_j)_i \) denotes the \(i^{\text{th}} \) component of the vector \(Te_j \). Example 6.6.3. Let \(T:\mathbb{R}^2\to\mathbb{R}^3 \) be the linear map defined by \(T(x,y)=(y,x+2y,x+y) \). Then, with respect to the standard basis, we have \(T(1,0)=(0,1,1) \) and \(T(0,1)=(1,2,1) \) so that \begin{equation*} M(T) = \begin{bmatrix} 0&1\\ 1& 2 \\ 1&1 \end{bmatrix}. \end{equation*} However, if alternatively we take the bases \(((1,2),(0,1)) \) for \(\mathbb{R}^2 \) and \(((1,0,0),(0,1,0),(0,0,1)) \) for \(\mathbb{R}^3 \), then \(T(1,2)=(2,5,3) \) and \(T(0,1)=(1,2,1) \) so that \begin{equation*} M(T) = \begin{bmatrix} 2&1\\ 5&2 \\ 3&1 \end{bmatrix}. \end{equation*} Example 6.6.4. Let \(S:\mathbb{R}^2\to \mathbb{R}^2 \) be the linear map \(S(x,y)=(y,x) \). With respect to the basis \(((1,2),(0,1)) \) for \(\mathbb{R}^2 \), we have \begin{equation*} S(1,2) = (2,1) = 2(1,2) -3(0,1) \quad \text{and} \quad S(0,1) = (1,0) = 1(1,2)-2(0,1), \end{equation*} and so \[ M(S) = \begin{bmatrix} 2&1\\- 3& -2 \end{bmatrix}. 
\] Given vector spaces \(V \) and \(W \) of dimensions \(n \) and \(m \), respectively, and given a fixed choice of bases, note that there is a one-to-one correspondence between linear maps in \(\mathcal{L}(V,W)\) and matrices in \(\mathbb{F}^{m\times n} \). If we start with the linear map \(T \), then the matrix \(M(T)=A=(a_{ij})\) is defined via Equation 6.6.1. Conversely, given the matrix \(A=(a_{ij})\in \mathbb{F}^{m\times n} \), we can define a linear map \(T:V\to W \) by setting \[ Tv_j = \sum_{i=1}^m a_{ij} w_i. \] Recall that the set of linear maps \(\mathcal{L}(V,W) \) is a vector space. Since we have a one-to-one correspondence between linear maps and matrices, we can also make the set of matrices \(\mathbb{F}^{m\times n} \) into a vector space. Given two matrices \(A=(a_{ij}) \) and \(B=(b_{ij}) \) in \(\mathbb{F}^{m\times n} \) and given a scalar \(\alpha\in \mathbb{F} \), we define the matrix addition and scalar multiplication component-wise: \begin{equation*} \begin{split} A+B &= (a_{ij}+b_{ij}),\\ \alpha A &= (\alpha a_{ij}). \end{split} \end{equation*} Next, we show that the composition of linear maps imposes a product on matrices, also called matrix multiplication. Suppose \(U,V,W \) are vector spaces over \(\mathbb{F} \) with bases \((u_1,\ldots,u_p) \), \((v_1,\ldots,v_n) \) and \((w_1,\ldots,w_m) \), respectively. Let \(S:U\to V \) and \(T:V\to W \) be linear maps. Then the product is a linear map \(T\circ S:U\to W \). Each linear map has its corresponding matrix \(M(T)=A, M(S)=B \) and \(M(TS)=C \). The question is whether \(C \) is determined by \(A \) and \(B \). We have, for each \(j\in \{1,2,\ldots p\} \), that \begin{equation*} \begin{split} (T\circ S) u_j &= T(b_{1j}v_1 + \cdots + b_{nj} v_n) = b_{1j} Tv_1 + \cdots + b_{nj} Tv_n\\ &= \sum_{k=1}^n b_{kj} Tv_k = \sum_{k=1}^n b_{kj} \bigl( \sum_{i=1}^m a_{ik} w_i \bigr)\\ &= \sum_{i=1}^m \bigl(\sum_{k=1}^n a_{ik} b_{kj} \bigr) w_i. 
\end{split} \end{equation*} Hence, the matrix \(C=(c_{ij}) \) is given by \begin{equation} \label{eq:c} c_{ij} = \sum_{k=1}^n a_{ik} b_{kj}. \tag{6.6.2} \end{equation} Equation 6.6.2 can be used to define the \(m\times p \) matrix \(C\) as the product of an \(m\times n \) matrix \(A\) and an \(n\times p \) matrix \(B \), i.e., \begin{equation} C = AB. \tag{6.6.3} \end{equation} Our derivation implies that the correspondence between linear maps and matrices respects the product structure. Proposition 6.6.5. Let \(S:U\to V \) and \(T:V\to W \) be linear maps. Then \[ M(TS) = M(T)M(S).\] Example 6.6.6. With notation as in Examples 6.6.3 and 6.6.4, you should be able to verify that \begin{equation*} M(TS) = M(T)M(S) = \begin{bmatrix} 2&1\\ 5&2 \\ 3&1 \end{bmatrix} \begin{bmatrix} 2&1\\ -3&-2 \end{bmatrix} = \begin{bmatrix} 1&0\\ 4&1\\ 3&1 \end{bmatrix}. \end{equation*} Given a vector \(v\in V \), we can also associate a matrix \(M(v) \) to \(v \) as follows. Let \((v_1,\ldots,v_n) \) be a basis of \(V \). Then there are unique scalars \(b_1,\ldots,b_n\) such that \[ v= b_1 v_1 + \cdots + b_n v_n. \] The matrix of \(v \) is then defined to be the \(n\times 1 \) matrix \[ M(v) = \begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix}. \] Example 6.6.7. The matrix of a vector \(x=(x_1,\ldots,x_n) \in \mathbb{F}^n \) in the standard basis \((e_1,\ldots,e_n)\) is the column vector or \(n \times 1 \) matrix \begin{equation*} M(x) = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} \end{equation*} since \(x=(x_1,\ldots,x_n) = x_1 e_1 + \cdots + x_n e_n \). The next result shows how the notion of the matrix of a linear map \(T:V\to W \) and the matrix of a vector \(v\in V \) fit together. Proposition 6.6.8. Let \(T:V\to W \) be a linear map. Then, for every \(v\in V \), \begin{equation*} M(Tv) = M(T) M(v). \end{equation*} Proof. Let \((v_1,\ldots,v_n) \) be a basis of \(V \) and \((w_1,\ldots,w_m) \) be a basis for \(W \). 
Suppose that, with respect to these bases, the matrix of \(T \) is \(M(T)=(a_{ij})_{1\le i\le m, 1\le j\le n} \). This means that, for all \(j\in \{1,2,\ldots,n\} \), \begin{equation*} Tv_j = \sum_{k=1}^m a_{kj} w_k. \end{equation*} The vector \(v\in V \) can be written uniquely as a linear combination of the basis vectors as \[ v = b_1 v_1 + \cdots + b_n v_n. \] Hence, \begin{equation*} \begin{split} Tv &= b_1 T v_1 + \cdots + b_n T v_n\\ &= b_1 \sum_{k=1}^m a_{k1} w_k + \cdots + b_n \sum_{k=1}^m a_{kn} w_k\\ &= \sum_{k=1}^m (a_{k1} b_1 + \cdots + a_{kn} b_n) w_k. \end{split} \end{equation*} This shows that \(M(Tv) \) is the \(m\times 1 \) matrix \begin{equation*} M(Tv) = \begin{bmatrix} a_{11}b_1 + \cdots + a_{1n} b_n \\ \vdots \\ a_{m1}b_1 + \cdots + a_{mn} b_n \end{bmatrix}. \end{equation*} It is not hard to check, using the formula for matrix multiplication, that \(M(T)M(v)\) gives the same result. Example 6.6.9. Take the linear map \(S \) from Example 6.6.4 with basis \(((1,2),(0,1)) \) of \(\mathbb{R}^2 \). To determine the action on the vector \(v=(1,4)\in \mathbb{R}^2 \), note that \(v=(1,4)=1(1,2)+2(0,1) \). Hence, \begin{equation*} M(Sv) = M(S)M(v) = \begin{bmatrix} 2&1\\-3&-2 \end{bmatrix} \begin{bmatrix} 1\\2 \end{bmatrix} = \begin{bmatrix} 4\\ -7 \end{bmatrix}. \end{equation*} This means that \[ Sv= 4(1,2)-7(0,1)=(4,1), \] which is indeed true.
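Propositions 6.6.5 and 6.6.8 can be checked numerically on the matrices of Examples 6.6.6 and 6.6.9; a quick sketch with NumPy (`@` is NumPy's matrix product):

```python
import numpy as np

# Matrices of T and S as quoted in the examples above.
MT = np.array([[2, 1],
               [5, 2],
               [3, 1]])      # M(T), a 3x2 matrix
MS = np.array([[ 2,  1],
               [-3, -2]])    # M(S), a 2x2 matrix

# Proposition 6.6.5: M(TS) = M(T) M(S)
MTS = MT @ MS
print(MTS)        # [[1 0], [4 1], [3 1]], as in Example 6.6.6

# Proposition 6.6.8: M(Sv) = M(S) M(v) for v = (1,4) = 1*(1,2) + 2*(0,1)
Mv = np.array([[1], [2]])
print(MS @ Mv)    # [[4], [-7]], as in Example 6.6.9
```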
Introduction
Intravenous bolus injection
Intravenous infusion
Oral administration
Using different parametrizations

Objectives: learn how to define and use a PK model for a single route of administration.
Projects: bolusLinear_project, bolusMM_project, bolusMixed_project, infusion_project, oral1_project, oral0_project, sequentialOral0Oral1_project, simultaneousOral0Oral1_project, oralAlpha_project, oralTransitComp_project

Once a drug is administered, we usually describe the subsequent processes within the organism by the pharmacokinetic (PK) process known as ADME: absorption, distribution, metabolism, excretion. A PK model is a dynamical system, mathematically represented by a system of ordinary differential equations (ODEs), which describes transfers between compartments and elimination from the central compartment. See this web animation for more details. Mlxtran is remarkably efficient for implementing simple and complex PK models:

The function pkmodel can be used for standard PK models. The model is defined according to the provided set of named arguments. The pkmodel function enables different parametrizations and different models of absorption, distribution and elimination, defined here and summarized in the following.
PK macros define the different components of a compartmental model. Combining such PK components provides a high degree of flexibility for complex PK models. They can also extend a custom ODE system.
A system of ordinary differential equations (ODEs) can be implemented very easily.

It is also important to highlight that the data file used by Monolix for PK modelling only contains information about dosing, i.e., how and when the drug is administered. There is no need to include in the data file any information related to the PK model. This is an important remark, since it means that any (complex) PK model can be used with the same data file.
In particular, we make a clear distinction between administration (related to the data) and absorption (related to the model). The PK model is defined by the names of the input parameters of the pkmodel function. These names are reserved keywords.

Absorption
- p: fraction of the dose which is absorbed
- ka: absorption rate constant (first-order absorption), or Tk0: absorption duration (zero-order absorption)
- Tlag: lag time before absorption, or Mtt, Ktr: mean transit time & transit rate constant

Distribution
- V: volume of distribution of the central compartment
- k12, k21: transfer rate constants between compartments 1 (central) & 2 (peripheral), or V2, Q2: volume of compartment 2 (peripheral) & inter-compartment clearance between compartments 1 and 2
- k13, k31: transfer rate constants between compartments 1 (central) & 3 (peripheral), or V3, Q3: volume of compartment 3 (peripheral) & inter-compartment clearance between compartments 1 and 3

Elimination
- k: elimination rate constant, or Cl: clearance
- Vm, Km: Michaelis-Menten elimination parameters

Effect compartment
- ke0: effect compartment transfer rate constant

bolusLinear_project

A single iv bolus is administered at time 0 to each patient. The data file bolus1_data.txt contains 4 columns: id, time, amt (the amount of drug in mg) and y (the measured concentration). The names of these columns are recognized as keywords by Monolix. It is important to note that, in this data file, a row contains either some information about the dose (in which case y = ".") or a measurement (in which case amt = "."). We could equivalently use the data file bolus2_data.txt, which contains 2 additional columns: EVID and IGNORED OBSERVATION. Here, the EVID (event ID) column identifies the type of line; it is an integer between 0 and 4. EVID=1 means that the record describes a dose, while EVID=0 means that the record contains an observed value.
On the other hand, the IGNORED OBSERVATION column makes it possible to tag lines for which the information in the OBSERVATION column-type is missing. MDV=1 means that the observed value of this record should be ignored, while MDV=0 means that this record contains an observed value. The two data files bolus1_data.txt and bolus2_data.txt contain exactly the same information and provide exactly the same results.

A one-compartment model with linear elimination is used with this project:

$$\begin{array}{ccl} \frac{dA_c}{dt} &=& - k \, A_c(t) \\ A_c(t) &=& 0 ~~\text{for}~~ t<0 \end{array}$$

Here, \(A_c(t)\) and \(C_c(t)=A_c(t)/V\) are, respectively, the amount and the concentration of drug in the central compartment at time t. When a dose D arrives in the central compartment at time \(\tau\), an iv bolus administration assumes that

$$A_c(\tau^+) = A_c(\tau^-) + D,$$

where \(A_c(\tau^-)\) (resp. \(A_c(\tau^+)\)) is the amount of drug in the central compartment just before (resp. just after) \(\tau\). The parameters of this model are V and k. We therefore use the model bolus_1cpt_Vk from the Monolix PK library:

[LONGITUDINAL]
input = {V, k}
EQUATION:
Cc = pkmodel(V, k)
OUTPUT:
output = Cc

We could equivalently use the model bolusLinearMacro.txt (click on the button Model and select the new PK model in the library 6.PK_models/model):

[LONGITUDINAL]
input = {V, k}
PK:
compartment(cmt=1, amount=Ac)
iv(cmt=1)
elimination(cmt=1, k)
Cc = Ac/V
OUTPUT:
output = Cc

These two implementations generate exactly the same C++ code and therefore provide exactly the same results. Here, the ODE system is linear and Monolix uses its analytical solution.
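Since the bolus model above is linear, its analytical solution after a dose \(D\) at time 0 is \(C_c(t) = (D/V)\,e^{-kt}\). The following sketch (illustrative parameter values, not taken from any Monolix project) compares this closed form with a crude Euler integration of the ODE, mirroring the analytical-versus-ODE remark above:

```python
import math

# One-compartment iv bolus, hypothetical values: dose D at t = 0, parameters V, k.
D, V, k = 100.0, 10.0, 0.2                     # mg, L, 1/h -- illustrative only
cc_analytic = lambda t: (D / V) * math.exp(-k * t)

# Forward-Euler solution of dAc/dt = -k*Ac with Ac(0+) = D, up to t = 5 h
dt, steps = 0.001, 5000
Ac = D
for _ in range(steps):
    Ac += dt * (-k * Ac)
cc_numeric = Ac / V

# The two agree to ~3 decimal places; the ODE route is just slower.
print(cc_analytic(5.0), cc_numeric)
```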
Of course, it is also possible (but not recommended with this model) to use the ODE-based PK model bolusLinearODE.txt:

[LONGITUDINAL]
input = {V, k}
PK:
depot(target = Ac)
EQUATION:
ddt_Ac = - k*Ac
Cc = Ac/V
OUTPUT:
output = Cc

Results obtained with this model are slightly different from the ones obtained with the previous implementations, since a numerical scheme is used here for solving the ODE. Moreover, the computation time is longer (between 3 and 4 times longer in this case) when using the ODE compared to the analytical solution. Individual fits obtained with this model look nice, but the VPCs show some misspecification in the elimination process.

bolusMM_project

A nonlinear elimination is used with this project:

$$\frac{dA_c}{dt} = - \frac{V_m \, A_c(t)}{V \, K_m + A_c(t)}$$

This model is available in the Monolix PK library as bolus_1cpt_VVmKm:

[LONGITUDINAL]
input = {V, Vm, Km}
PK:
Cc = pkmodel(V, Vm, Km)
OUTPUT:
output = Cc

Instead of this model, we could equivalently use PK macros with bolusNonLinearMacro.txt from the library 6.PK_models/model:

[LONGITUDINAL]
input = {V, Vm, Km}
PK:
compartment(cmt=1, amount=Ac, volume=V)
iv(cmt=1)
elimination(cmt=1, Vm, Km)
Cc = Ac/V
OUTPUT:
output = Cc

or an ODE with bolusNonLinearODE:

[LONGITUDINAL]
input = {V, Vm, Km}
PK:
depot(target = Ac)
EQUATION:
ddt_Ac = -Vm*Ac/(V*Km+Ac)
Cc = Ac/V
OUTPUT:
output = Cc

Results obtained with these three implementations are identical, since no analytical solution is available for this nonlinear ODE. We can then check that this PK model seems to describe the elimination process of the data much better.

bolusMixed_project

The Monolix PK library contains "standard" PK models. More complex models should be implemented by the user in a model file.
For instance, we assume in this project that the elimination process is a combination of linear and nonlinear elimination processes:

$$\frac{dA_c}{dt} = -\frac{V_m \, A_c(t)}{V \, K_m + A_c(t)} - k \, A_c(t)$$

This model is not available in the Monolix PK library. It is implemented in bolusMixed.txt:

[LONGITUDINAL]
input = {V, k, Vm, Km}
PK:
depot(target = Ac)
EQUATION:
ddt_Ac = -Vm*Ac/(V*Km+Ac) - k*Ac
Cc = Ac/V
OUTPUT:
output = Cc

This model, with a combined error model, seems to describe the data very well.

infusion_project

Intravenous infusion assumes that the drug is administered intravenously at a constant rate (the infusion rate) during a given time (the infusion time). Since the amount is the product of infusion rate and infusion time, an additional column INFUSION RATE or INFUSION DURATION is required in the data file; Monolix can use either one. The data file infusion_rate_data.txt has an additional column rate. It can be replaced by infusion_tinf_data.txt, which contains exactly the same information.

We use with this project a two-compartment model with nonlinear elimination and parameters \(V_1\), \(Q\), \(V_2\), \(V_m\), \(K_m\):

$$\begin{aligned} k_{12} &= Q/V_1 \\ k_{21} &= Q/V_2 \\ \frac{dA_c}{dt} &= k_{21} \, A_p(t) - k_{12} \, A_c(t) - \frac{V_m \, A_c(t)}{V_1 \, K_m + A_c(t)} \\ \frac{dA_p}{dt} &= -k_{21} \, A_p(t) + k_{12} \, A_c(t) \\ C_c(t) &= \frac{A_c(t)}{V_1} \end{aligned}$$

This model is available in the Monolix PK library as infusion_2cpt_V1QV2VmKm:

[LONGITUDINAL]
input = {V1, Q, V2, Vm, Km}
PK:
V = V1
k12 = Q/V1
k21 = Q/V2
Cc = pkmodel(V, k12, k21, Vm, Km)
OUTPUT:
output = Cc

oral1_project

This project uses the data file oral_data.txt. For each patient, the information about dosing is the time of administration and the amount. A one-compartment model with first-order absorption and linear elimination is used with this project. The parameters of the model are ka, V and Cl.
We will then use the model oral1_kaVCl.txt from the Monolix PK library:

[LONGITUDINAL]
input = {ka, V, Cl}
EQUATION:
Cc = pkmodel(ka, V, Cl)
OUTPUT:
output = Cc

Both the individual fits and the VPCs show that this model doesn't describe the absorption process properly. Many options for implementing this PK model with Mlxtran exist:

- using PK macros, as in oralMacro.txt:

[LONGITUDINAL]
input = {ka, V, Cl}
PK:
compartment(cmt=1, amount=Ac)
oral(cmt=1, ka)
elimination(cmt=1, k=Cl/V)
Cc = Ac/V
OUTPUT:
output = Cc

- using a system of two ODEs, as in oralODEb.txt:

[LONGITUDINAL]
input = {ka, V, Cl}
PK:
depot(target=Ad)
EQUATION:
k = Cl/V
ddt_Ad = -ka*Ad
ddt_Ac = ka*Ad - k*Ac
Cc = Ac/V
OUTPUT:
output = Cc

- combining PK macros and an ODE, as in oralMacroODE.txt (macros are used for the absorption and an ODE for the elimination):

[LONGITUDINAL]
input = {ka, V, Cl}
PK:
compartment(cmt=1, amount=Ac)
oral(cmt=1, ka)
EQUATION:
k = Cl/V
ddt_Ac = - k*Ac
Cc = Ac/V
OUTPUT:
output = Cc

- or equivalently, as in oralODEa.txt:

[LONGITUDINAL]
input = {ka, V, Cl}
PK:
depot(target=Ac, ka)
EQUATION:
k = Cl/V
ddt_Ac = - k*Ac
Cc = Ac/V
OUTPUT:
output = Cc

Remark: models using the pkmodel function or PK macros only use an analytical solution of the ODE system.

oral0_project

A one-compartment model with zero-order absorption and linear elimination is used to fit the same PK data with this project. The parameters of the model are Tk0, V and Cl. We will then use the model oral0_1cpt_Tk0Vk.txt from the Monolix PK library:

[LONGITUDINAL]
input = {Tk0, V, Cl}
EQUATION:
Cc = pkmodel(Tk0, V, Cl)
OUTPUT:
output = Cc

Remark 1: implementing a zero-order absorption process using ODEs is not easy; on the other hand, it becomes extremely easy using either the pkmodel function or the PK macro oral(Tk0).

Remark 2: the duration of a zero-order absorption has nothing to do with an infusion time: it is a parameter of the PK model (exactly like the absorption rate constant ka, for instance); it is not part of the data.
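Mathematically, a zero-order absorption feeds the central compartment at the constant rate D/Tk0 until time Tk0 (even though, as Remark 2 stresses, Tk0 is a model parameter rather than a data item), which gives a simple piecewise closed form for the one-compartment model. A sketch with made-up parameter values:

```python
import math

# Zero-order absorption into the central compartment with linear elimination.
# Hypothetical values; Tk0 is a model parameter, not a data item.
D, Tk0, V, Cl = 100.0, 2.0, 10.0, 2.0
k = Cl / V                      # elimination rate constant, 1/h
R0 = D / Tk0                    # constant absorption rate while t <= Tk0

def Ac(t):
    """Amount in the central compartment (piecewise closed form)."""
    if t <= Tk0:
        return (R0 / k) * (1.0 - math.exp(-k * t))
    # After absorption ends, plain exponential decay from Ac(Tk0).
    return Ac(Tk0) * math.exp(-k * (t - Tk0))

# Concentration rises while absorbing, peaks at t = Tk0, then decays.
Cc = [Ac(t) / V for t in (0.0, 1.0, 2.0, 4.0)]
print(Cc)
```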
sequentialOral0Oral1_project

More complex PK models can be implemented using Mlxtran. A sequential zero-order first-order absorption process assumes that a fraction Fr of the dose is first absorbed during a time Tk0 with a zero-order process; then the remaining fraction is absorbed with a first-order process. This model is implemented in sequentialOral0Oral1.txt using PK macros:

[LONGITUDINAL]
input = {Fr, Tk0, ka, V, Cl}
PK:
compartment(amount=Ac)
absorption(Tk0, p=Fr)
absorption(ka, Tlag=Tk0, p=1-Fr)
elimination(k=Cl/V)
Cc = Ac/V
OUTPUT:
output = Cc

Both the individual fits and the VPCs show that this PK model describes the whole ADME process very well for the same PK data.

simultaneousOral0Oral1_project

A simultaneous zero-order first-order absorption process assumes that a fraction Fr of the dose is absorbed with a zero-order process while the remaining fraction is absorbed simultaneously with a first-order process. This model is implemented in simultaneousOral0Oral1.txt using PK macros:

[LONGITUDINAL]
input = {Fr, Tk0, ka, V, Cl}
PK:
compartment(amount=Ac)
absorption(Tk0, p=Fr)
absorption(ka, p=1-Fr)
elimination(k=Cl/V)
Cc = Ac/V
OUTPUT:
output = Cc

oralAlpha_project

An \(\alpha\)-order absorption process assumes that the rate of absorption is proportional to some power of the amount of drug in the depot compartment:

$$\frac{dA_d}{dt} = -r \, A_d^{\alpha}(t)$$

This model is implemented in oralAlpha.txt using ODEs:

[LONGITUDINAL]
input = {r, alpha, V, Cl}
PK:
depot(target = Ad)
EQUATION:
dAd = Ad^alpha
ddt_Ad = -r*dAd
ddt_Ac = r*dAd - (Cl/V)*Ac
Cc = Ac/V
OUTPUT:
output = Cc

oralTransitComp_project

A PK model with a transit compartment of transit rate Ktr and mean transit time Mtt can be implemented using the PK macro oral(ka, Mtt, Ktr), or using the pkmodel function, as in oralTransitComp.txt:

[LONGITUDINAL]
input = {Mtt, Ktr, ka, V, Cl}
EQUATION:
Cc = pkmodel(Mtt, Ktr, ka, V, Cl)
OUTPUT:
output = Cc

The PK macros and the function pkmodel use some preferred parametrizations and some reserved names as input arguments:
Tlag, ka, Tk0, V, Cl, k12, k21. It is however possible to use another parametrization and/or other parameter names. As an example, consider a two-compartment model for oral administration with a lag, a first-order absorption and a linear elimination. We can use the pkmodel function with, for instance, the parameters ka, V, k, k12 and k21:

[LONGITUDINAL]
input = {ka, V, k, k12, k21}
PK:
Cc = pkmodel(ka, V, k, k12, k21)
OUTPUT:
output = Cc

Imagine now that we want i) to use the clearance \(Cl\) instead of the elimination rate constant \(k\), and ii) to use capital letters for the parameter names. We can still use the pkmodel function as follows:

[LONGITUDINAL]
input = {KA, V, CL, K12, K21}
PK:
Cc = pkmodel(ka=KA, V, k=CL/V, k12=K12, k21=K21)
OUTPUT:
output = Cc
MathJax

Related links

Document structure
item 1
definition 1
item 2
definition 2-1
definition 2-2

Reference example

The vector-calculus form of Maxwell's equations:

\[\iint_{S} \mathbf{E}\cdot\,d\mathbf{S} = \frac {Q} {\varepsilon_0} \tag{1}\]
\[\int_{C} \mathbf{E}\cdot\,d\mathbf{r} =-\frac{d}{dt}\iint_{S} \mathbf{B}\cdot\,d\mathbf{S}\tag{2} \]

Example of using newcommand:

$ \newcommand{\Re}{\mathrm{Re}\,} \newcommand{\pFq}[5]{{}_{#1}\mathrm{F}_{#2} \left( \genfrac{}{}{0pt}{}{#3}{#4} \bigg| {#5} \right)} $ We consider, for various values of $s$, the $n$-dimensional integral \begin{align} \tag{3} W_n (s) &:= \int_{[0, 1]^n} \left| \sum_{k = 1}^n \mathrm{e}^{2 \pi \mathrm{i} \, x_k} \right|^s \mathrm{d}\boldsymbol{x} \end{align} which occurs in the theory of uniform random walk integrals in the plane, where at each step a unit step is taken in a random direction. As such, the integral (3) expresses the $s$-th moment of the distance to the origin after $n$ steps. By experimentation and some sketchy arguments we quickly conjectured and strongly believed that, for $k$ a nonnegative integer \begin{align} \tag{4} W_3(k) &= \Re \, \pFq32{\frac12, -\frac k2, -\frac k2}{1, 1}{4}. \end{align} Appropriately defined, (4) also holds for negative odd integers. The reason for (4) was long a mystery, but it will be explained at the end of the paper.
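The moment in (3) is easy to estimate by Monte Carlo, and for \(s=2\) one has \(W_n(2)=n\) exactly (the cross terms of \(|\sum_k e^{2\pi i x_k}|^2\) average to zero), which makes a convenient sanity check. A seeded sketch:

```python
import numpy as np

# Monte Carlo estimate of the integral in (3) for n = 3, s = 2;
# the exact value is W_3(2) = 3 (mean squared distance after 3 unit steps).
rng = np.random.default_rng(0)
n, samples = 3, 200_000
x = rng.random((samples, n))                 # x_k ~ U[0, 1]
walk = np.exp(2j * np.pi * x).sum(axis=1)    # sum of n unit steps
est = np.mean(np.abs(walk) ** 2)             # s = 2 moment
print(est)                                   # close to 3
```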
Global boundedness in higher dimensions for a fully parabolic chemotaxis system with singular sensitivity

School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China

In this paper we study the global boundedness of solutions to the fully parabolic chemotaxis system with singular sensitivity: $u_t=\Delta u-\chi\nabla\cdot(\frac{u}{v}\nabla v)$, $v_t=k\Delta v-v+u$, subject to homogeneous Neumann boundary conditions in a bounded and smooth domain $\Omega\subset\mathbb{R}^{n}$ ($n\ge 2$), where $\chi,\, k>0$. It is shown that the solution is globally bounded provided $0<\chi<\frac{-(k-1)+\sqrt{(k-1)^2+\frac{8k}{n}}}{2}$. This result removes the additional restriction $n \le 8$ of Zhao and Zheng [15].

Mathematics Subject Classification: Primary: 35B35, 35B40, 35K55; Secondary: 92C17.

Citation: Wei Wang, Yan Li, Hao Yu. Global boundedness in higher dimensions for a fully parabolic chemotaxis system with singular sensitivity. Discrete & Continuous Dynamical Systems - B, 2017, 22 (10): 3663-3669. doi: 10.3934/dcdsb.2017147

References:
[1] M. Aida, K. Osaki, T. Tsujikawa, A. Yagi and M. Mimura, Chemotaxis and growth system with singular sensitivity function.
[2]
[3]
[4] K. Fujie and T. Senba, Global existence and boundedness in a parabolic-elliptic Keller-Segel system with general sensitivity.
[5] K. Fujie and T. Senba, Global existence and boundedness of radial solutions to a two dimensional fully parabolic chemotaxis system with general sensitivity.
[6] K. Fujie, M. Winkler and T. Yokota, Blow-up prevention by logistic sources in a parabolic-elliptic Keller-Segel system with singular sensitivity.
[7] K. Fujie, M. Winkler and T. Yokota, Boundedness of solutions to parabolic-elliptic Keller-Segel systems with signal-dependent sensitivity.
[8]
[9]
[10] J. Lankeit, A new approach toward boundedness in a two-dimensional parabolic chemotaxis system with singular sensitivity.
[11]
[12] C. Stinner and M. Winkler, Global weak solutions in a chemotaxis system with large singular sensitivity.
[13]
[14] P. Zheng, C. Mu, X. Hua and Q. Zhang, Global boundedness in a quasilinear chemotaxis system with signal-dependent sensitivity.
[15] X. Zhao and S. Zheng, Global boundedness of solutions in a parabolic-parabolic chemotaxis system with singular sensitivity.
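As a quick sanity check on the smallness condition in the abstract: for \(k=1\) the bound reduces algebraically to \(\sqrt{2/n}\). A sketch (not part of the paper) verifying the arithmetic:

```python
import math

def chi_bound(k: float, n: int) -> float:
    """Upper bound on chi from the abstract: (-(k-1) + sqrt((k-1)^2 + 8k/n)) / 2."""
    return (-(k - 1) + math.sqrt((k - 1) ** 2 + 8 * k / n)) / 2

# For k = 1 the bound simplifies to sqrt(8/n)/2 = sqrt(2/n).
for n in (2, 3, 9):
    assert abs(chi_bound(1.0, n) - math.sqrt(2 / n)) < 1e-12

# The bound is positive for every n >= 2, so there is no n <= 8 restriction.
print(chi_bound(1.0, 9))
```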
Models

As indicated in the Introduction, climlab can implement different types of models out of the box. Here, we focus on Energy Balance Models, which are referred to as EBMs.

Energy Balance Model

Let's first give an overview of the different (sub)processes that are implemented:

EBM Subprocesses

Insolation

FixedInsolation defines a constant solar value for all spatial points of the domain:
\[S(\varphi) = S_{\textrm{input}}\]
P2Insolation characterizes a parabolic solar distribution over the domain's latitude on the basis of the second-order Legendre polynomial \(P_2\):
\[S(\varphi) = \frac{S_0}{4} \Big[1+ s_2 P_2 \big(\sin (\varphi) \big) \Big]\]
The variable \(\varphi\) represents the latitude.
DailyInsolation computes the daily solar insolation for each latitude of the domain on the basis of orbital parameters and astronomical formulas.
AnnualMeanInsolation computes a latitudewise yearly mean of solar insolation on the basis of orbital parameters and astronomical formulas.

Albedo

ConstantAlbedo defines constant albedo values at all spatial points of the domain:
\[\alpha(\varphi) = a_0\]
P2Albedo initializes parabolically distributed albedo values across the domain on the basis of the second-order Legendre polynomial \(P_2\):
\[\alpha(\varphi) = a_0 + a_2 P_2 \big(\sin (\varphi) \big)\]
Iceline determines which part of the domain is covered with ice according to a given freezing temperature.
StepFunctionAlbedo implements an albedo step function of the surface temperature by using instances of the albedo classes described above as subprocesses.
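The P2Insolation and P2Albedo profiles above are easy to evaluate directly; a small sketch (the values of \(S_0\), \(s_2\), \(a_0\), \(a_2\) are illustrative choices, not necessarily climlab's defaults):

```python
import numpy as np

# Second-order Legendre polynomial P2(x) = (3x^2 - 1) / 2
P2 = lambda x: 0.5 * (3 * x ** 2 - 1)

S0, s2 = 1365.0, -0.48          # illustrative values
a0, a2 = 0.33, 0.25             # illustrative values

lat = np.deg2rad(np.array([0.0, 45.0, 90.0]))   # equator, mid-latitude, pole
S = (S0 / 4) * (1 + s2 * P2(np.sin(lat)))       # insolation profile, W/m^2
alpha = a0 + a2 * P2(np.sin(lat))               # albedo profile

print(S)       # more insolation at the equator than at the pole
print(alpha)   # higher albedo toward the pole
```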
Outgoing Longwave Radiation

AplusBT calculates the Outgoing Longwave Radiation (\(\text{OLR}\)) as a linear function of the surface temperature \(T\):
\[\text{OLR} = A+B \cdot T\]
AplusBT_CO2 calculates \(\text{OLR}\) in the same way as AplusBT but uses parameters \(A\) and \(B\) that depend on the atmospheric \(\text{CO}_2\) concentration \(c\):
\[\text{OLR} = A(c)+B(c) \cdot T\]
Boltzmann calculates \(\text{OLR}\) according to the Stefan-Boltzmann law for a grey body:
\[\text{OLR} = \sigma \varepsilon T^4\]

Energy Transport

These classes calculate the transport of energy \(H(\varphi)\) across the latitude \(\varphi\) in an energy budget of the schematic form
\[C(\varphi)\, \frac{\partial T(\varphi,t)}{\partial t} = \big(1-\alpha(\varphi)\big)\, S(\varphi) - \text{OLR}(\varphi) + H(\varphi).\]
MeridionalDiffusion calculates the energy transport as a diffusion-like process along the temperature gradient:
\[H(\varphi) = \frac{D}{\cos \varphi}\frac{\partial}{\partial \varphi} \left( \cos\varphi \frac{\partial T(\varphi)}{\partial \varphi} \right)\]
BudykoTransport calculates the energy transport for each latitude \(\varphi\) depending on the global mean temperature \(\bar{T}\):
\[H(\varphi) = - b [T(\varphi) - \bar{T}]\]

EBM templates

EBM

The EBM class sets up a typical Energy Balance Model with the following subprocesses:

EBM_seasonal

The EBM_seasonal class implements Energy Balance Models with realistic daily insolation. It uses the following subprocesses:

EBM_annual

The EBM_annual class implements Energy Balance Models with annual mean insolation. It uses the following subprocesses:
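Combining a constant albedo with the AplusBT longwave scheme gives a zero-dimensional analogue of the EBM whose equilibrium temperature can be computed by hand; a sketch with classic illustrative parameter values (chosen for demonstration, not necessarily climlab's defaults):

```python
# Zero-dimensional energy balance: absorbed shortwave (1 - alpha) * S0/4
# balances OLR = A + B*T at equilibrium, so T_eq = ((1 - alpha)*S0/4 - A) / B.
# A and B are the classic linearization with T in degrees Celsius.
S0, alpha = 1365.0, 0.3
A, B = 210.0, 2.0               # W/m^2 and W/m^2 per degC

T_eq = ((1 - alpha) * S0 / 4 - A) / B
print(round(T_eq, 2))           # about 14.44 degC, a realistic global mean
```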
Find the minimum value of $$2^{\sin^2 \alpha} + 2^{\cos^2 \alpha}.$$ I can easily get the maximum value but the minimum value is kinda tricky. Please help.

$f(x)=2^x$ is a convex function. Thus, by Jensen: $$2^{\sin^2\alpha}+2^{\cos^2\alpha}\geq2\cdot2^{\frac{\sin^2\alpha+\cos^2\alpha}{2}}=2\sqrt2.$$ The equality occurs for $\alpha=45^{\circ}$, which says that we got a minimal value. Done!

Also, we can use $(x+y)^2\geq4xy$, which is $(x-y)^2\geq0$: $$2^{\sin^2\alpha}+2^{\cos^2\alpha}=\sqrt{\left(2^{\sin^2\alpha}+2^{\cos^2\alpha}\right)^2}\geq$$ $$\geq\sqrt{4\cdot2^{\sin^2\alpha}\cdot2^{\cos^2\alpha}}=\sqrt{4\cdot2^{\sin^2\alpha+\cos^2\alpha}}=\sqrt8=2\sqrt2$$

HINT: By AM-GM we have $$\frac{2^{\sin(x)^2}+2^{\cos(x)^2}}{2}\geq \sqrt{2^{\sin(x)^2+\cos(x)^2}}=...$$

Hint: Write $\cos(\alpha)^2=1-\sin(\alpha)^2$, so that \begin{align} 2^{\sin(\alpha)^2}+2^{\cos(\alpha)^2}&=2^{\sin(\alpha)^2}+2^{1-\sin(\alpha)^2} \\&=2^{\sin(\alpha)^2}+\frac{2}{2^{\sin(\alpha)^2}} \end{align} With $t={\sin(\alpha)^2}$, can you minimize the expression above?
$f'(\alpha)=2\log 2 \sin \alpha \cos \alpha \left[2^{\sin ^2\alpha}- 2^{\cos ^2\alpha}\right]$

$f'(\alpha)=0 \to 2\sin\alpha\cos\alpha=0$ or $2^{\sin ^2\alpha}- 2^{\cos ^2\alpha}=0$

$\sin 2\alpha=0\to 2\alpha=k\pi\to\alpha=\dfrac{k\pi}{2}$

$2^{\sin ^2\alpha}- 2^{\cos ^2\alpha}=0\to 2^{\sin ^2\alpha}= 2^{\cos ^2\alpha}$

$\sin^2\alpha=\cos^2\alpha\to |\sin\alpha|=|\cos\alpha|\to \alpha=\dfrac{\pi}{4}+k\dfrac{\pi}{2}$

Now it's easy to see that $\alpha=\dfrac{\pi}{4}$ etc. lead to the minimum $2^{\sin^2 \alpha} + 2^{\cos^2 \alpha}=2^{\frac12}+2^{\frac12}=2\sqrt 2$. Hope this helps.

Let $x:= \sin^2(\alpha)$, so that $1-x = \cos^2(\alpha)$. Then $f(x):= 2^x + \frac{2}{2^x}$, $0 \le x \le 1$. Substitute $z := 2^x$, so $g(z):= z + \frac{2}{z}$, $1 \le z \le 2$. By the AM-GM inequality, $\frac12 \left( z + \frac{2}{z}\right) \ge \left(z \cdot \frac{2}{z}\right)^{1/2} = \sqrt{2}$, hence $g(z) = z + \frac{2}{z} \ge 2 \sqrt{2}$, with equality for $z = \sqrt2$. Back substitution: $2^x = 2^{1/2}$, so the minimum is at $x = 1/2$, i.e. $\sin^2(\alpha) = 1/2 = \cos^2(\alpha)$, hence $\alpha = 45°$. $\min f = 2\sqrt2$.
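The minimum \(2\sqrt2\) derived above is easy to confirm numerically on a grid:

```python
import numpy as np

# Numerical confirmation of the minimum 2*sqrt(2), attained at alpha = pi/4 + k*pi/2.
alpha = np.linspace(0, 2 * np.pi, 100_001)
f = 2 ** np.sin(alpha) ** 2 + 2 ** np.cos(alpha) ** 2
print(f.min(), 2 * np.sqrt(2))   # both about 2.828427
print(f.max())                   # the maximum 2^0 + 2^1 = 3, at alpha = k*pi/2
```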
Question. Suppose $\left\{ a_n\right\} \subseteq \mathbb{N}$ is strictly increasing. Show that $\sum_{i=1}^{\infty} \frac{a_{i+1}-a_i}{a_i}$ diverges.

A geometric interpretation gives the conclusion immediately:

Solution. $\sum_{i=1}^{\infty} \frac{a_{i+1}-a_i}{a_i} \geq \sum_{i=1}^{\infty} \sum_{j=a_i}^{a_{i+1}-1} j^{-1} = \sum_{i = a_1}^{\infty} i^{-1},$ since each of the $a_{i+1}-a_i$ terms $j^{-1}$ with $a_i \le j \le a_{i+1}-1$ is at most $a_i^{-1}$. The right-hand side is a tail of the harmonic series, so it diverges, and by comparison so does our series. (Strictly speaking, the comparison should be carried out on partial sums, since both sides are divergent series.)

Now...

Question 2. Suppose $\left\{ a_n\right\} \subseteq \mathbb{R}^+$ is strictly increasing. Will $\sum_{i=1}^{\infty} \frac{a_{i+1}-a_i}{a_i}$ converge?

Extending the above proof to $\mathbb{R}$ seems like a nice idea. To do this we compare the term $\frac{a_{i+1}-a_i}{a_i}$ with the integral $\int ^{a_{i+1}}_{a_i} x^{-1}dx$. We can show that they are usually asymptotically equivalent. Why usually? Let's bound them as follows.

The lower bound is clear (we use that for divergence). Suppose the sequence is unbounded; then $\sum_{i=1}^{\infty} \frac{a_{i+1}-a_i}{a_i} \geq \sum_{i=1}^{\infty} \int ^{a_{i+1}}_{a_i} x^{-1}dx = \int ^{\infty} _{a_1} x^{-1} dx,$ which diverges. If instead the sequence is bounded, it converges to some $k\in \mathbb{R}$, and the series is bounded below by $\int^k_{a_1} x^{-1} dx = \ln (k) - \ln (a_1)$.

What bothers us is the upper bound. Since $\left\{ a_n\right\}$ is strictly increasing, we use the convention that $k$ is the limit of the sequence $\left\{ a_n\right\}$, which is infinity if it diverges. Define $N_i = \frac{a_{i+1}}{a_i}$. If $\left\{ N_i\right\}$ is bounded above by $N$, then the series is bounded above by $\int ^{ k}_{a_1} Nx^{-1}dx = N(\ln k - \ln a_1)$, so if the $N_i$ and the $a_i$ are bounded then the series is bounded as well. Is it possible that the $a_i$ are bounded but the $N_i$ unbounded? The answer is clear: no. Suppose the $a_i$ are bounded above by $A$.
Then $N_i = \frac{a_{i+1}}{a_i} \leq \frac{A}{a_1}$, so the $N_i$ are bounded above as well.

Solution. The series converges if and only if the sequence $\left\{ a_n\right\}$ converges, i.e. iff the sequence is bounded above.

Define $f: \mathbb{R}^+ \rightarrow \mathbb{R}^+$ by $f(x) = \sum_{i:\, a_i \leq x} \frac{a_{i+1}-a_i}{a_i}$. It would be nice to do some asymptotic analysis of the function $f$ when the $a_i$ diverge. From our previous analysis, if the $N_i$ are bounded then $f(x) \sim \log x$. What if the $N_i$ are unbounded? Thinking about how $N_i$ is defined, we can make up this example: take $a_n = n!$, so that $b_n = \frac{a_{n+1}-a_n}{a_n} = n$. Then the $N_i$ are unbounded, since $N_i = i+1$, and $f(n!)\sim n^2$; whereas if the $N_i$ were bounded we would have $f(n!) \sim \log (n!) \sim n\log n$ by Stirling's formula, which is a different growth rate.

Food for thought: given a strictly increasing sequence $\left\{ a_n\right\}$ with $a_n \neq 0$ for all $n\in \mathbb{N}$, define $b_i = \frac{a_{i+1}-a_i}{a_i}$.

1) Given a sequence $\left\{ a_n \right\} \subseteq \mathbb{R}\setminus\left\{ 0\right\}$, give a condition under which the series $\sum b_i$ converges.
2) Repeat Q1 if $\left\{ a_i\right\}\subseteq \mathbb{R}^+$ is increasing.
3) For $k\in \mathbb{N}$, define $b_{i,k} = \frac{a_{i+k} - a_i}{a_i}$, where $\left\{ a_i\right\}\subseteq \mathbb{R}^+$ is strictly increasing. Determine a condition for convergence of $\sum_i b_{i,k}$.
4) Back to the original question: how fast can $f(x)$ grow? Are there any asymptotic bounds?
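The divergence/convergence dichotomy above is easy to observe numerically; a small sketch (the bounded sequence $a_n = 2 - 1/n$ is an illustrative choice):

```python
import math

def partial_sum(a):
    """S = sum of (a_{i+1} - a_i) / a_i over a finite increasing sequence a."""
    return sum((a[i + 1] - a[i]) / a[i] for i in range(len(a) - 1))

N = 100_000

# a_n = n: the terms are 1/n, so the partial sums are harmonic and grow like log N.
harmonic_like = partial_sum(list(range(1, N + 1)))
print(harmonic_like, math.log(N))   # differ roughly by Euler's constant 0.577...

# a_n = 2 - 1/n is increasing and bounded, and the series converges:
# it is at most (sup a - a_1) / a_1 = 1.
bounded = partial_sum([2 - 1 / n for n in range(1, N + 1)])
print(bounded)
```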
Choosing Loans with Monte Carlo, Diversification Part 2

Introduction

In a previous article, we conducted a preliminary investigation of applying Modern Portfolio Theory to Lending Club. We assumed that each loan grade on Lending Club (\(A\) through \(G\)) corresponded to an asset class, calculated variances (\(\sigma^{2}_{i}\)) and covariances (\(\sigma^{2}_{i,j}\)) of each asset class, and constructed an efficient frontier for Lending Club portfolios. While this constituted a useful exercise in theory, the results were not applicable for an individual investor, because combining all \(A\) grade loans into a single \(A\) grade asset class is only achievable if one owned all \(A\) grade loans. Realistically, this is something no individual investor would be able to achieve.

This time we aim to find variances and covariances at the individual loan level, which allows us to calculate the variance of a portfolio by treating each loan in the portfolio as an asset. With the variance and covariance numbers, when faced with several potential new loans to invest in, we can add prospective loans to the portfolio, analyze how the standard deviation (\(\sigma\)) and Expected Return (\(E(r)\)) change, and decide which loans to invest in based on a desired risk/return profile. The following analysis was done on mature loans, with the grade breakdown depicted in the table below:

| Grade | A | B | C | D | E | F | G | Total |
|---|---|---|---|---|---|---|---|---|
| # of loans | 20,757 | 27,351 | 16,903 | 9,569 | 2,722 | 736 | 343 | 78,381 |

Assumptions

As in our first diversification article, we take our calculated Expected Return at each month as the return you could get for buying/selling your loan at that month on the secondary market. We believe our Expected Return serves as a reasonable proxy because it behaves as one would expect loans to based on maturity and status; seasoned notes that are current are likely to be more valuable than their counterparts of younger age and/or undesirable loan status.
Loans of the same grade and term are assumed to have the same Expected Return, and loans of the same grade, term, and age (or age difference) are assumed to have the same amount of expected variance (covariance). Since our methodology relies on Monte Carlo simulations, we attempt to categorize loans as specifically as possible before making our generalization assumptions. In the case of Expected Return, our granularity reached the term and grade levels; for variance/covariance we went one step deeper and found that the age of loans (months beyond issuance date, capped at the loan's term) was important as well. Further details are in the methodology section. Methods & Methodologies Given \(n\) assets with known Expected Returns, variances, and covariances between the \(n\) assets, a portfolio of those assets will have an Expected Return and standard deviation calculated with the following formulas: \[E(r) = \sum_{i=1}^{n}w_iE(r_i)\] \[\sigma = \sqrt{\sum_{i=1}^{n}w_i^2\sigma_i^2 + \sum_{i<j}^{n}2w_iw_j\sigma_{ij}^2}\] where \(w_i\) is the portfolio weight of asset \(i\) and \(\sigma_{ij}^2\) is the covariance between the \(i\)th and \(j\)th assets in the portfolio. With these, we can look at a portfolio, add new loans to it, and see how the standard deviation and Expected Return change. Then, based on an investor's selected levels of risk (standard deviation/variance) and return, we can choose the appropriate new loans to invest in. Expected Return To generate the Expected Returns of individual loans, we aggregated all of the cashflows of loans within a specified term and grade and calculated a compound annualized Internal Rate of Return (\(IRR\)). More details can be found in this article. Variance For the variance of a loan of specific term, grade, and age, we identified all matching loans, calculated the variance based on each loan's return series, and took the average of the variances.
More concretely, we calculated the variance of an \(A\) grade 36 term 0 months old loan, an \(A\) grade 36 term 1 month old loan, etc. When determining the variance of individual loans, we realized that the term, grade, and age of the loan all impact its variance. Term and grade impacting a loan's variance should be self-evident, but age requires a little explanation. Take loans that have an age of 9 months (a fairly dangerous time based on our hazard curve findings). They could have little variance if paying consistently, or lots of variance if defaulting:

            Sep 2011  Oct 2011  Nov 2011  Dec 2011  Jan 2012  Feb 2012  Mar 2012  Apr 2012  May 2012  Variance \(\sigma^{2}\)
Paying      7.60%     7.85%     8.15%     8.48%     8.88%     9.19%     9.56%     9.93%     13.42%    2.7\(\%^{2}\)
Defaulting  7.60%     7.85%     8.15%     8.48%     8.83%     -7.00%    -42.3%    -56.4%    -99.9%    1369.0\(\%^{2}\)

Compare this to loans that are 35 months old; regardless of whether the loan defaults or prepays by the end of the 35th month, the return series in each case will be similar, with differences only in the last few months of returns, and the difference in variances of the defaulting and paying loans will be much smaller than in the 9 months example above.

            Sep 2011  Oct 2011  Nov 2011  Dec 2011  …  June 2014  July 2014  Aug 2014  Sept 2014  Variance \(\sigma^{2}\)
Paying      7.60%     7.85%     8.15%     8.48%     …  14.40%     14.41%     14.42%    14.57%     5.1\(\%^{2}\)
Defaulting  7.60%     7.85%     8.15%     8.48%     …  14.34%     11.04%     3.60%     1.05%      9.8\(\%^{2}\)

So for the expected variance of a loan of specific term, grade, and age, we identified all loans that matched the criteria, calculated the variance of each loan's return series, and took the average of those variances as our expected variance.
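A minimal sketch of this averaging step (the record layout and the return numbers below are invented for illustration; the article's actual data pipeline is not shown):

```python
import statistics

def expected_variance(loans, term, grade, age):
    """Average the return-series variances of all loans matching (term, grade, age)."""
    matches = [statistics.pvariance(loan["returns"])
               for loan in loans
               if (loan["term"], loan["grade"], loan["age"]) == (term, grade, age)]
    return statistics.mean(matches)

# Two hypothetical 36-term A-grade loans of the same age: one paying, one defaulting.
loans = [
    {"term": 36, "grade": "A", "age": 9, "returns": [0.076, 0.0785, 0.0815, 0.0848]},
    {"term": 36, "grade": "A", "age": 9, "returns": [0.076, 0.0785, -0.07, -0.423]},
]
ev = expected_variance(loans, 36, "A", 9)
```

As in the 9-month table above, the defaulting series has a far larger variance than the paying one, and it dominates the bucket average.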
Covariance Our method for calculating covariance utilized Monte Carlo simulations to randomly select two loans (term1/grade1 and term2/grade2) and calculate a covariance if and where the two return series had a computable covariance (i.e., the two loans existed at the same time for at least two months). For covariance, we also found that age difference (e.g., loans issued in Jan 2010 and Feb 2010 have an age difference of 1 month) is important, for reasons similar to those demonstrated for variance; the covariance for two loans of term1, grade1, and term2, grade2 could be significantly different depending on the age difference. To illustrate this, we'll look at correlations between loans (a "normalized" covariance), since they are much more interpretable than covariances. Here's a snippet of two different correlation tables, the first being of 36 month loans with no age difference, and the second of 36 month loans with 11 months age difference. 0 months age difference: 11 months age difference: Note how if the two loans are issued at the same time (0 age difference), there is a more "average" amount of correlation (range of .24 – .69), but at 11 months age difference the correlation expands towards the extremes (range of .17 – .75). One way to interpret this is that two 36 term \(A\) grade loans issued 11 months apart are slightly more correlated than two 36 term \(A\) grade loans issued at the same time. The logic behind this can be illustrated with an example: Pretend that 36 term \(A\) grade loans have an 80% chance to do well (consistently pay until term) and a 20% chance to default. Based on the hazard curve, we know that most of those defaults will happen in the earlier months rather than the later months. If we have two 36 term \(A\) grade loans at issuance, the chance that they are positively correlated (move together) is the chance that they both do well (80% × 80% = 64%) or both default (20% × 20% = 4%), for a total of 68%.
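The Monte Carlo covariance step described above can be sketched roughly as follows. The loan record layout, function names, and numbers are simplified guesses at the procedure, not the authors' code:

```python
import random

def sample_cov(xs, ys):
    """Sample covariance of two equal-length return series."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def overlap_cov(loan_a, loan_b):
    """Covariance over the months both loans exist; None if under two months overlap.
    Each loan is (issue_month, returns) on a shared calendar."""
    (sa, ra), (sb, rb) = loan_a, loan_b
    lo, hi = max(sa, sb), min(sa + len(ra), sb + len(rb))
    if hi - lo < 2:
        return None
    return sample_cov(ra[lo - sa:hi - sa], rb[lo - sb:hi - sb])

def mc_expected_cov(pool_1, pool_2, trials=1000, seed=0):
    """Average covariance over randomly drawn loan pairs whose series overlap."""
    rng = random.Random(seed)
    draws = (overlap_cov(rng.choice(pool_1), rng.choice(pool_2))
             for _ in range(trials))
    usable = [c for c in draws if c is not None]
    return sum(usable) / len(usable)

# Degenerate sanity check: identical single-loan pools return that loan's variance.
pool = [(0, [0.01, 0.02, 0.03])]
est = mc_expected_cov(pool, pool)
```

Pairs with no usable overlap are skipped rather than counted, mirroring the "if and where computable" rule in the text.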
Now compare this to a newly issued 36 term \(A\) loan and an 11 month old 36 term \(A\) loan. The new 36 term \(A\) loan still has an 80% chance to do well and a 20% chance to default, but the 11 month old loan (having "lived" past some dangerous months) is now closer to 90% doing well and 10% defaulting. Now the probability of being positively correlated has increased to 74% (90% × 80% + 10% × 20%). So, based on where two loans are on the hazard curve (which is determined by the loan's age), the correlation (and thus covariance) is different. One thing to note is that these correlation/covariance tables are actually pseudo-correlation/covariance tables, because the main diagonals do not contain the values you'd expect (1s, meaning perfect correlation, in the correlation matrix, and variances in the covariance matrix). This is because we aren't actually comparing a loan with itself, but with a loan that has characteristics similar to its own. Results So with all the numbers we need, let's dive into an example of how we might use our findings, keeping remaining amounts invested as nice whole numbers for illustration purposes.
Pretend you have a conservative portfolio on Lending Club consisting of two loans:

Loan  Term  Grade  Age  Amount Invested
1     36    A      1    24
2     36    A      2    23

The Expected Return and standard deviation of the portfolio (in linear algebra notation) are: \[E(r)_{p} = \omega_{1}E(r_{1}) + \omega_{2}E(r_{2})\] \[\sigma_{p} = \sqrt{\left( \begin{array}{ccc} \omega_{1} & \omega_{2} \\ \end{array} \right) \left( \begin{array}{ccc} \sigma^{2}_{11} & \sigma^2_{12}\\ \sigma^{2}_{21} & \sigma^2_{22} \end{array} \right) \left( \begin{array}{ccc} \omega_{1} \\ \omega_{2} \\ \end{array} \right)}\] Plugging in the appropriate numbers yields \(E(r)_{p} = 4.34\%\) and \(\sigma_{p} = 3.49\%\). Now say you have two loans that you can potentially invest your next $25 in:

Loan  Term  Grade  Age  Amount Invested
A     36    A      0    25
D     36    D      0    25

You wonder "which loan should I invest in if I want to be conservative/aggressive?" It's easy to find out; just extend the above formulas to 3 loans and fill in Loan A's values to see the risk/return profile of the new portfolio with Loan A, then Loan D's values to see the risk/return profile with Loan D. The expanded formulas become: \[E(r)_{p} = \omega_{1}E(r_{1}) + \omega_{2}E(r_{2}) + \omega_{3}E(r_{3})\] \[\sigma_{p} = \sqrt{\left( \begin{array}{ccc} \omega_{1} & \omega_{2} & \omega_{3} \\ \end{array} \right) \left( \begin{array}{ccc} \sigma^{2}_{11} & \sigma^2_{12} & \sigma^{2}_{13}\\ \sigma^{2}_{21} & \sigma^2_{22} & \sigma^{2}_{23}\\ \sigma^{2}_{31} & \sigma^2_{32} & \sigma^{2}_{33}\end{array} \right) \left( \begin{array}{ccc} \omega_{1} \\ \omega_{2} \\ \omega_{3} \\ \end{array} \right)}\] Plugging in again, we see that if you choose Loan A you have \(E(r)_{p} = 4.34\%\) and \(\sigma_{p} = 3.38\%\), whereas if you choose Loan D you have \(E(r)_{p} = 5.49\%\) and \(\sigma_{p} = 3.21\%\). We have a surprising result: a \(D\) grade loan was actually able to reduce the portfolio's standard deviation (risk) more than an \(A\) grade loan.
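The formulas can be sketched in code. The variances and covariances below are invented for illustration (the article's actual numbers are not reproduced here); they are chosen so that the riskier candidate happens to diversify better, echoing the qualitative result above:

```python
import math

def portfolio_stats(weights, exp_returns, cov):
    """Expected return and standard deviation from weights and a covariance matrix."""
    n = len(weights)
    er = sum(weights[i] * exp_returns[i] for i in range(n))
    var = sum(weights[i] * weights[j] * cov[i][j]
              for i in range(n) for j in range(n))
    return er, math.sqrt(var)

# Existing portfolio: two highly correlated loans (hypothetical numbers).
# Candidate "A-like": similar variance, highly correlated with the existing loans.
# Candidate "D-like": higher variance but nearly uncorrelated with them.
w = [1 / 3] * 3
er_a, sd_a = portfolio_stats(w, [0.043, 0.043, 0.043],
                             [[0.0009, 0.0008, 0.0008],
                              [0.0008, 0.0009, 0.0008],
                              [0.0008, 0.0008, 0.0009]])
er_d, sd_d = portfolio_stats(w, [0.043, 0.043, 0.078],
                             [[0.0009, 0.0008, 0.0],
                              [0.0008, 0.0009, 0.0],
                              [0.0, 0.0, 0.0025]])
```

With these made-up inputs the higher-variance loan lowers the portfolio's \(\sigma\) while raising \(E(r)\), because its low covariance with the existing loans outweighs its own variance in the quadratic form.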
What about the risk/return profiles of portfolios if we'd instead tried a \(B\)/\(C\)/\(E\)/\(F\)/\(G\) loan? We ran those numbers, and here's how it looks visually (1 is the original portfolio of two loans): In this instance, moving from a portfolio of two loans to a portfolio of three, we see that anything you buy helps reduce the risk of the portfolio. This is diversification at work; spreading your money across more loans reduces your risk. But obviously, one might want to put their money into an \(E\) or \(D\) loan in this instance, because not only do they reduce the risk the most, they also happen to increase the Expected Return the most. Here's another example of a conservative portfolio (marked 2) of young \(A\), \(B\), and \(C\) loans, adding 10 notes of the same grades at $25 each to the portfolio:

# of Loans  Term  Grade  Age  Remaining Amount Invested
25          36    A      1    24
25          36    A      2    23
25          36    B      4    21
25          36    B      2    23
5           36    C      1    24
5           36    C      3    22

Again, it is easy to see how diversification helps in reducing risk: almost all expansions of our conservative portfolio (which has 110 loans originally, or 120 after investing in the additional notes) have lower risk than the first example portfolio. The importance of diversification cannot be overstated. To optimally add notes to a portfolio, we can simply add the best combination of available notes that increases return for the lowest given quantity of risk. We're near the end of this long read, but here's why it matters: 1) A conservative portfolio is not necessarily made up of only conservative loan grades; as we saw above, adding an \(E\) grade loan actually reduced the portfolio's risk more than adding another \(A\) grade loan. 2) When faced with the choice of which loans to invest in, this analysis can be done to choose the loans that best fit the desired risk/return profile of the portfolio. Disclaimer The numbers used in this article are for mature loans on the Lending Club platform.
Expected Return, variance, and covariance numbers for mature loans picked by LendingRobot's scoring algorithm are different from those presented in the article. Justin Hsi June 16, 2016 3 Comments
Find two non-zero matrices \(A\) and \(B\) such that \(AB = 0\). Note by Syed Baqir 4 years, 1 month ago

Well, this one works: \(A=B=\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\), and this one too: \(A=\begin{pmatrix} 0 & 1 \\ 0 & -1 \end{pmatrix}\quad \text{and}\quad B=\begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}\). There are lots of matrices that have this property.

But our matrices must be non-zero.

You mean non-zero determinant? That will take me a little longer.

@Michael Mendrin – I mean matrices in which none of the elements are 0.

@Syed Baqir – Here you go: \(A=\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}\quad \text{and}\quad B=\begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}\). As I said, finding two matrices with non-zero determinants will take a bit longer. Normally it shouldn't be possible, but I've heard of algebras where it is, i.e., where the elements aren't restricted to the reals. You didn't specify whether the elements had to be real or even complex.
@Michael Mendrin – You are correct, but I think B must be like this: \(B = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}\)

@Syed Baqir – Well, obviously it can work both ways.

@Michael Mendrin – But sir, can you tell us how you got the answer, I mean the method?

@Syed Baqir – Trial and error. But it's a LOT harder problem if the matrices have non-zero determinants! Don't ask me tonight, I'm going to sleep pretty soon!

@Michael Mendrin – Hehe, but I will ask you tomorrow :P. Anyway, nice solution; I was using a brute force method. If you have time, don't forget to enlighten this note with a brilliant solution!!

@Syed Baqir – I will sleep on a brilliant solution tonight! Good night!

@Michael Mendrin – Good night!!

The product of the determinants of the two matrices equals the determinant of their product, so at least one of them must have determinant 0. But the elements must not contain any 0.

@Syed Baqir – Yeah, I just said that about the determinant being 0. In fact, if exactly one 0 entry exists, a 2x2 matrix wouldn't have zero determinant.

Can you show us the method?

@Syed Baqir – I guess you need to consider 2 matrices and multiply them (I meant 2x2 matrices!!!), then calculate the determinant of the result. I hope that you can do it yourself using this.

So it's actually a simple problem if we are asked to find a pair with non-zero determinants: say that it's impossible.

lol, then you will get impossible marks in the result!!!
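The proposed pairs are easy to verify mechanically; a small sketch in plain Python (not from the thread itself):

```python
def matmul2(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

ZERO = [[0, 0], [0, 0]]

p1 = matmul2([[0, 1], [0, 0]], [[0, 1], [0, 0]])    # A = B, so A^2 = 0
p2 = matmul2([[1, 1], [1, 1]], [[-1, 1], [1, -1]])  # no zero entries in A or B
p3 = matmul2([[1, 1], [1, 1]], [[1, -1], [-1, 1]])  # the variant B also works
```

All three products are the zero matrix, even though no factor is; consistent with the determinant argument in the thread, every factor in such a pair is singular.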
Which of the following points is on a circle if its center is (-13,-12) and a point on the circumference is (-17, -12)? Use the distance formula to find the distance between those 2 points; that is the radius. Call it R (here R = 4). The formula for the circle will be \((x+13)^2+(y+12)^2=R^2\). Plug the points in and see which one makes the formula true.

The common ratio is -1.5. To find the number of terms, we have 57875 = 8000 [ 1 - (-1.5)^n ] / [ 1 - (-1.5) ], so 57875 = 3200 [ 1 - (-1.5)^n ], giving 2315/128 = 1 - (-1.5)^n, so (-1.5)^n = 1 - 2315/128 = -2187/128. n must be odd, so we can solve 1.5^n = 2187/128. Take the log of both sides: n log 1.5 = log (2187/128), so n = log(2187/128) / log (1.5) = 7. So we are looking for the 7th term, which is 8000(-1.5)^(7 - 1) = 91125.

Hello and welcome, WinterPotato! I'm so glad that you checked out this website. Hope you like it so far! :) You can post here on the forum page if you have any math questions, or simply if you just want to talk to people, like right now! You can also interact with people here to especially help you with homework and/or concepts that you don't understand. A lot of the people here are so friendly and nice! Oh, and also, if you want, you can help others by answering their questions too!! Hope you get to love this website, NoobieAtMath P.S. I started here pretty recently, but I've fallen in love with this website! P.P.S. Also, wonderful username.

1. Enter the amplitude of the function f(x), where f(x) = −2sin(3x) − 1 is in the form A sin(Bx) + C. The amplitude = | A | = | -2 | = 2

2. What is the equation of the midline for the function f(x) = 3cos(x) − 2.5? Note that the max for the function is 3(1) - 2.5 = 0.5, and the min for the function is 3(-1) - 2.5 = -5.5. To find the midline, add these values and divide the sum by 2. So we have [ 0.5 + (-5.5) ] / 2 = -5/2 = -2.5. So the equation for the midline is y = -2.5.

Tertre, your LaTeX IS nice. And it is great that you are learning and practicing your skills here.
PLUS your answer is nice, I agree, but..... This person has clearly stated that they want someone to do their homework for them. Why are you facilitating such blatant cheating? It is not like this user can beat you up and take your lunch money if you do not help. If that were the case I would understand better. This is harsh; you are a great member and I don't want you to feel bad. I'm sure I have at times been guilty of the same 'sin'. I just don't want this to be known primarily as a free cheat site. I want people to learn here, like you and I do. Perhaps you could have part-done it so that the user would at least have to finish the questions on their own.

A sine function has the following key features: Frequency = 1/8π (I assume you mean (1/8)π?) Amplitude = 6 Midline: y = 3 y-intercept: (0, 3) The function is not a reflection of its parent function over the x-axis. Use the sine tool to graph the function. The first point must be on the midline and the second point must be a maximum or minimum value on the graph closest to the first point. There are 2 answers: \(y=6\sin\left(\frac{2x}{8}\right)+3\) or \(y=-6\sin\left(\frac{2x}{8}\right)+3\). Here are the graphs

(B): We subtract 36 from both sides, leaving us with \(x^2=-36\), so \(x=\sqrt{-36}=6i\) or \(x=-\sqrt{-36}=-6i\). Thus, the two solutions are \(\boxed{x=6i,\:x=-6i}.\)

(C): There are a few ways to solve this question: Quadratic Formula: Use \(x = {-b \pm \sqrt{b^2-4ac} \over 2a}\) (thanks web2.0calc), and plugging in the values from the form \(ax^2+bx+c\), we attain \(x=\frac{-\left(-2\right)+\sqrt{\left(-2\right)^2-4\cdot \:1\cdot \:2}}{2\cdot \:1}=1+i\) as one root, and \(x=\frac{-\left(-2\right)-\sqrt{\left(-2\right)^2-4\cdot \:1\cdot \:2}}{2\cdot \:1}=1-i\) as the other. Thus, the two roots are \(\boxed{1+i, 1-i}.\) Completing the Square: Our main goal is to get it into the form \(x^2+2ax+a^2=\left(x+a\right)^2\), so we have \(x^2-2x+\left(-1\right)^2=-2+\left(-1\right)^2\), and simplifying, we get \(\left(x-1\right)^2=-1\), so \(x-1=\pm i\), giving \(x=1+i\) and \(x=1-i\).
Thus, the two roots are \(\boxed{1+i, 1-i}.\)

Maybe this will help. I have not watched it, but I would have to learn how to do this before I could teach you, and it is Internet sites, often YouTube clips, that I learn from. I just googled "How do I graph a circle in the complex plane," and this was the first clip that appeared.

Thanks Tertre, but you missed my point. I could do that too. In fact I did do that, and then I changed my mind. If people do not put effort into presenting their questions properly (or at least as well as they are capable of), then perhaps some other people cannot be bothered answering.

LaTeX is computer code used for displaying mathematics. The LaTeX box is in the ribbon; it is the button that says LaTeX, and it is there when you are writing any post. Click on it to open it, then copy some of your code and paste it in. See what happens. It is easy to learn a little bit of LaTeX, and when you know a little bit it is easy to learn more. You will find it very helpful to become comfortable with very basic coding.

Your post is displaying better now, which is odd because I see no evidence that anyone has edited it. Perhaps it was better than I could see originally and just was not displaying properly. You can still get rid of the meaningless $ signs though; that would make the question read better.

2. In triangle ABC, \(\angle ABC = 90^\circ\) and AD is an angle bisector. If AB = 90, BC = x, and AC = 2x - 6, then find the area of triangle ABC. Round your answer to the nearest integer.

Don't see where the angle bisector comes into play here..... We have that BC^2 + AB^2 = AC^2, so x^2 + 90^2 = (2x - 6)^2, i.e. x^2 + 90^2 = 4x^2 - 24x + 36, which gives 3x^2 - 24x - 8064 = 0, or x^2 - 8x - 2688 = 0. This factors as (x - 56)(x + 48) = 0, so x = 56. So the area is (1/2) the product of the leg lengths = (1/2)(AB)(BC) = (1/2)(90)(56) = 2520 units^2
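Both worked answers from this thread can be double-checked numerically (a quick sketch, independent of the original posts):

```python
import math

# Geometric series: first term 8000, ratio -1.5, partial sum 57875.
a, r = 8000, -1.5
n = round(math.log(2187 / 128) / math.log(1.5))  # solves 1.5^n = 2187/128
partial_sum = a * (1 - r ** n) / (1 - r)
nth_term = a * r ** (n - 1)

# Right triangle: legs 90 and x, hypotenuse 2x - 6, with the root x = 56.
x = 56
pythagoras_ok = (x ** 2 + 90 ** 2 == (2 * x - 6) ** 2)
area = 90 * x / 2
```

This confirms n = 7, a 7th term of 91125, and a triangle area of 2520.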
The Solution of Ordinary Differential Equations Transformed from the System of Linear Algebraic Equations with the Band-shape Coefficient Matrix

A new method of calculation for the design of multi-storey structures

Based on the EST-2001A COD monitor, a solution for a GPRS-based COD on-line monitoring system is put forward, aimed at monitoring sites that are scattered, widely distributed, and difficult to test. It introduces the application of GSM digital cellular communication in the system, introduces the application of short message (SMS) communication, proposes an SMS-based communication solution, and uses this example to look ahead to the development prospects of wireless communication.

CBWS (CORBA Based Web Services) is an implementation of CORBA-based Web services. It provides two tools: IDL2WSDL and a SOAP engine. These two tools, plus a UDDI register center and the corresponding API, form a lightweight solution for CORBA-based Web services.

The theory is applied to the case of cubic hypersurfaces, which is the one most relevant to special geometry, obtaining the solution of the two classification problems and the description of the corresponding homogeneous special Kähler manifolds.

A technique for the solution of convolution equations arising in robotics is presented, and the corresponding regularized problem is solved explicitly for particular functions.

Boundary-variation solution of eigenvalue problems for elliptic operators

A new method for the numerical solution of volume integral equations is proposed

Using these results we give an explicit solution of the problem of optimal reconstruction of functions from Sobolev's classes $W^{\gamma}_{p}(M^{d})$ in $L_{q}(M^{d}), 1 \leq q \leq p \leq \infty$.

Formulas are derived for the solution of the transient currents of dissipative low-pass T-type electric wave filters.
Oscillograms taken by cathode ray oscillograph for d-c. and a-c. cases are found to agree with results calculated from these formulas. From these calculations, the following conclusions are derived. When the terminating resistance is gradually increased from 0, the damping constants of the sine terms begin to differ from each other, ranging in decreasing magnitude from the term of the lowest frequency to the last term of cut-off frequency. Hence the transient is ultimately of the cut-off frequency. At the cut-off frequency, this constant is near to but greater than R/2L. For each increase of section, there is introduced an additional sine term with a smaller damping constant; therefore transients die out faster in filters of a smaller number of sections. Since transient amplitudes are of the same order of magnitude before and after cut-off, the filtering property only exists in the steady states.
Formulas are derived for the solution of the transient currents of resistance-terminated dissipative π-type low-pass, T- and π-type high-pass electric wave filters. Oscillograms taken by cathode ray oscillograph for d-c. and a-c. cases are found to agree with the results calculated from these formulas. From these calculations, the following conclusions are derived: (1) When the terminating resistance is gradually increased from 0, the damping constants of the damped sine terms begin to differ greatly from each other, ranging in decreasing magnitude from the first damped sine term to the last term of cut-off frequency. Hence the transient is ultimately of the cut-off frequency. At the cut-off frequency, this constant is greater than the corresponding constant (R/2L) when the termination is absent. (2) For each increase of one section, there is introduced an additional damped sine term with smaller damping constants. Therefore transients die out faster in filters of a small number of sections. (3) With the same network constants, the damping constants of π-type filters are greater than the corresponding values of T-type filters. As a result, transients die out faster in π-type filters. (4) The amplitudes of the transient terms in the attenuation and transmission ranges are of the same order of magnitude, and the filtering property only exists in the steady states. (5) The cut-off frequency of the π-type filters varies with the number of sections used. When only two sections of low- or high-pass filter are used, the variation amounts to nearly 26 per cent from the theoretical value.

Formulas are derived for the solution of the transient currents of resistance-terminated dissipative T- and π-type band-pass electric wave filters of the constant K type. Oscillograms taken by cathode ray oscillograph for d-c. and a-c. cases are found to agree with the calculated results.
From these calculations, the following conclusions are derived: (1) No matter what the impressed frequencies are, the transient is ultimately of the lower cut-off frequency. (2) The receiving-end indicial admittance consists of transient terms symmetrical with respect to the mid-frequency term. (3) The transients die out faster in filters of a smaller number of sections. (4) With the same network constants, the transients die out faster in the T-type filters. (5) The filtering property only exists in the steady state. (6) The band width increases with the number of sections. This increase is greater in π-type filters, but the band width is greater in T-type filters.
The Annals of Statistics Ann. Statist. Volume 28, Number 4 (2000), 1105-1127. Rates of convergence for the Gaussian mixture sieve Abstract Gaussian mixtures provide a convenient method of density estimation that lies somewhere between parametric models and kernel density estimators. When the number of components of the mixture is allowed to increase as the sample size increases, the model is called a mixture sieve. We establish a bound on the rate of convergence in Hellinger distance for density estimation using the Gaussian mixture sieve, assuming that the true density is itself a mixture of Gaussians; the underlying mixing measure of the true density is not necessarily assumed to have finite support. Computing the rate involves some delicate calculations, since the size of the sieve (as measured by bracketing entropy) and the saturation rate cannot be found using standard methods. When the mixing measure has compact support, using $k_n \sim n^{2/3}/(\log n)^{1/3}$ components in the mixture yields a rate of order $(\log n)^{(1+\eta)/6}/n^{1/6}$ for every $\eta > 0$. The rates depend heavily on the tail behavior of the true density. The sensitivity to the tail behavior is diminished by using a robust sieve which includes a long-tailed component in the mixture. In the compact case, we obtain an improved rate of $(\log n/n)^{1/4}$. In the noncompact case, a spectrum of interesting rates arises depending on the thickness of the tails of the mixing measure. Article information Source: Ann. Statist., Volume 28, Number 4 (2000), 1105-1127. Dates: First available in Project Euclid: 12 March 2002. Permanent link to this document: https://projecteuclid.org/euclid.aos/1015956709 Digital Object Identifier: doi:10.1214/aos/1015956709 Mathematical Reviews number (MathSciNet): MR1810921 Zentralblatt MATH identifier: 1105.62333 Citation: Genovese, Christopher R.; Wasserman, Larry. Rates of convergence for the Gaussian mixture sieve. Ann. Statist. 28 (2000), no. 4, 1105--1127.
doi:10.1214/aos/1015956709. https://projecteuclid.org/euclid.aos/1015956709
Let \(a,b,c\) be positive real numbers satisfying \(a+b+c=1\). Prove that \(\dfrac{1}{\sqrt{\left (a^2+ab+b^2\right )\left (b^2+bc+c^2\right )}}+\dfrac{1}{\sqrt{\left (b^2+bc+c^2\right )\left (c^2+ca+a^2\right )}}+\dfrac{1}{\sqrt{\left (c^2+ca+a^2\right )\left (a^2+ab+b^2\right )}}\ge 4+\dfrac{8}{\sqrt{3}}\)

(1974 Kiev Math Olympiad) The numbers 1, 2, 3, ..., 1974 are written on the board. You are allowed to replace any two of these numbers by one number, which is either the sum or the difference of those two numbers. Show that after performing this operation 1973 times, the only number left on the board cannot be 0.

hghfghfgh 26/03/2017 at 20:16 At the beginning, there are 1974/2 = 987 odd numbers. When we replace two numbers a and b by either a + b or a - b, we see that: if one of a and b is odd and the other even, then a + b or a - b is still odd; if a and b are both even, then a + b or a - b is still even; if a and b are both odd, then a + b or a - b is even. So the number of odd numbers after each such replacement either stays the same or decreases by two; in particular, its parity never changes. At the beginning there are 987 odd numbers, and 987 is odd, so the count of odd numbers stays odd throughout. At the end only one number is left, and since the count of odd numbers must still be odd, that last number must be odd; hence it cannot be 0 (because 0 is even).
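The invariant in this argument can also be checked by brute-force simulation (an illustrative sketch; the proof above does not depend on it):

```python
import random

def last_number_is_odd(n=1974, trials=10, seed=1):
    """Repeatedly replace two random numbers by their sum or difference until one
    remains; return True iff the single survivor was odd in every trial.
    Each step preserves the parity of the count of odd numbers on the board."""
    rng = random.Random(seed)
    for _ in range(trials):
        nums = list(range(1, n + 1))
        while len(nums) > 1:
            a = nums.pop(rng.randrange(len(nums)))
            b = nums.pop(rng.randrange(len(nums)))
            nums.append(a + b if rng.random() < 0.5 else a - b)
        if nums[0] % 2 == 0:
            return False  # an even survivor would contradict the argument
    return True
```

For n = 1974 the survivor is always odd (987 odd numbers to start), while for a board like 1..100 (an even count of odd numbers, 50) the survivor is always even, so it could well be 0.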
Given the square ABCD with side 20 cm. M is the midpoint of BC, N is the midpoint of CD. Segment AM and segment BN intersect each other at O. Find the area of the quadrilateral ANOD.

Phan Thanh Tinh Coordinator 06/05/2017 at 09:53 Draw the altitudes AE, MF, NG as shown. \(\dfrac{S_{\Delta ABN}}{S_{\Delta BMN}}=\dfrac{AB\cdot NG}{2}:\dfrac{BM\cdot NC}{2}=BC^2:\left(\dfrac{BC}{2}\cdot\dfrac{CD}{2}\right)=4\) \(\Delta ABN,\Delta BMN\) also have the common base BN, so AE = 4MF. \(\Delta ABO,\Delta BMO\) have the common base BO and the altitudes AE = 4MF, so \(S_{\Delta ABO}=4S_{\Delta BMO}\) \(\Rightarrow S_{\Delta ABO}=\dfrac{4}{1+4}\left(S_{\Delta ABO}+S_{\Delta BMO}\right)=\dfrac{4}{5}S_{\Delta ABM}=\dfrac{4}{5}\cdot\dfrac{20\cdot\left(20:2\right)}{2}=80\) (cm\(^2\)) \(\Rightarrow S_{ANOD}=S_{ABCD}-S_{\Delta ABO}-S_{\Delta BNC}\) = 20 · 20 - 80 - \(\dfrac{20\cdot\left(20:2\right)}{2}\) = 220 (cm\(^2\))

Vũ Hà Vy Anh 06/04/2017 at 17:17 Who said that? I just sat the city-level contest last Friday, and we even got to take the problem sheet home.

A house has 10 rooms. Ten boys stay in different rooms and count the number of doors in them. After that they sum all the results and get 25. Which proposition can't be true about the number N of doors which lead outside the house?

Phan Minh Anh 10/06/2017 at 15:57 75+25+13+91+87+52+48+9 = (75+25)+(13+87)+(91+9)+(52+48) = 100 + 100 + 100 + 100 = 100 × 4 = 400.
mathlove 12/03/2017 at 12:37 Suppose that the triangle ABC has A = 90°, B = 60°, AB = 4. We construct the rectangle ABCD with center O. By the assumption B = 60°, assume that OAB is an equilateral triangle; OB = AB = 4, BC = 8. By the Pythagorean theorem we have \(AC=\sqrt{BC^2-AB^2}=\sqrt{8^2-4^2}=4\sqrt{3}\). So \(BC=8,CA=4\sqrt{3}\). Selected by MathYouLike

Kudo Shinichi 22/03/2017 at 22:13 This will be so useful for me in the future, thank you :))

Two ducks go before two ducks. Two ducks go after two ducks. Two ducks go between two ducks. How many ducks are there?

Phan Thanh Tinh Coodinator 24/04/2017 at 13:50 For any natural number n > 1, we have: (n − 1)n(n + 1) = n(n² − 1) = n³ − n < n³ \(\Rightarrow\dfrac{1}{n^3}< \dfrac{1}{\left(n-1\right)n\left(n+1\right)}\) \(\dfrac{1}{\left(n-1\right)n\left(n+1\right)}=\dfrac{1}{n}.\dfrac{1}{\left(n-1\right)\left(n+1\right)}\) \(=\dfrac{1}{n}.\dfrac{\left(n+1\right)-\left(n-1\right)}{\left(n-1\right)\left(n+1\right)}.\dfrac{1}{2}=\dfrac{1}{2}.\dfrac{1}{n}.\left(\dfrac{1}{n-1}-\dfrac{1}{n+1}\right)\) \(=\dfrac{1}{2}.\left(\dfrac{1}{\left(n-1\right)n}-\dfrac{1}{n\left(n+1\right)}\right)\) Now we have : E < \(\dfrac{1}{2.3.4}+\dfrac{1}{3.4.5}+\dfrac{1}{4.5.6}+...+\dfrac{1}{\left(n-1\right)n\left(n+1\right)}\)
\(=\dfrac{1}{2}\left(\dfrac{1}{2.3}-\dfrac{1}{3.4}\right)+\dfrac{1}{2}\left(\dfrac{1}{3.4}-\dfrac{1}{4.5}\right)+\dfrac{1}{2}\left(\dfrac{1}{4.5}-\dfrac{1}{5.6}\right)+...+\dfrac{1}{2}\left(\dfrac{1}{\left(n-1\right)n}-\dfrac{1}{n\left(n+1\right)}\right)\) \(=\dfrac{1}{2}\left(\dfrac{1}{2.3}-\dfrac{1}{n\left(n+1\right)}\right)=\dfrac{1}{12}-\dfrac{1}{2n\left(n+1\right)}< \dfrac{1}{12}\) Hence, \(E< \dfrac{1}{12}\)

FA Liên Quân Garena 08/01/2018 at 21:09 We have: \(\dfrac{x^3+x^2-x-1}{x^2-6x+5}=\dfrac{\left(x^3+x^2\right)-\left(x+1\right)}{x^2-5x-x+5}\) \(=\dfrac{x^2\left(x+1\right)-\left(x+1\right)}{x\left(x-1\right)-5\left(x-1\right)}\) \(=\dfrac{\left(x-1\right)\left(x+1\right)^2}{\left(x-5\right)\left(x-1\right)}\) \(=\dfrac{\left(x+1\right)^2}{x-5}\) HỦY DIỆT THE WORLD selected this answer.

Thao Dola 14/03/2017 at 14:23 The first such triple is 8 = \(2^2+2^2\), 9 = \(3^2+0^2\), 10 = \(3^2+1^2\), which suggests we consider triples \(x^2-1,x^2,x^2+1\). Since \(x^2-2y^2=1\) has infinitely many positive solutions (x, y), we see that \(x^2-1=y^2+y^2\), \(x^2=x^2+0^2\) and \(x^2+1=x^2+1^2\) satisfy the requirement, and there are infinitely many such triples. Selected by MathYouLike

Such doge 14/03/2017 at 21:03 Wow, it's hard.

Given two real numbers x, y satisfying \(x^2-2y^2=xy\) (with x + y ≠ 0 and y ≠ 0):
a) Calculate the value of the expression \(P=\dfrac{x-y}{x+y}\)
b) Find the integer pairs (x, y) satisfying \(x^2+xy-2016x-2017y-2018=0\)

Dao Trong Luan Coodinator 31/12/2017 at 10:33 a.
\(x^2-2y^2=xy\) \(\Leftrightarrow x^2-2y^2-xy=0\) \(\Leftrightarrow\left(x^2-y^2\right)-\left(y^2+xy\right)=0\) \(\Leftrightarrow\left(x-y\right)\left(x+y\right)-y\left(x+y\right)=0\) \(\Leftrightarrow\left(x+y\right)\left(x-y-y\right)=0\) \(\Leftrightarrow\left(x+y\right)\left(x-2y\right)=0\) But \(x+y\ne0\) \(\Rightarrow x-2y=0\Leftrightarrow x=2y\) \(\Rightarrow P=\dfrac{x-y}{x+y}=\dfrac{2y-y}{2y+y}=\dfrac{y}{3y}=\dfrac{1}{3}\)

FA Liên Quân Garena 01/01/2018 at 10:26 I edited the subject. \(x^3-x^2-4x^2+8x-4\) \(=x^2\left(x-1\right)-\left(4x^2-8x+4\right)\) \(=x^2\left(x-1\right)-\left[\left(2x\right)^2-2\cdot2x\cdot2+2^2\right]\) \(=x^2\left(x-1\right)-\left(2x-2\right)^2\) \(=x^2\left(x-1\right)-4\left(x-1\right)^2\) \(=\left(x-1\right)\left[x^2-4\left(x-1\right)\right]\) \(=\left(x-1\right)\left(x^2-4x+4\right)\) \(=\left(x-1\right)\left(x-2\right)^2\)

Hương Yêu Dấu 31/12/2017 at 13:33 We have: (x − 1)[x² − 4(x − 1)] = (x − 1)(x² − 4x + 4) = (x − 1)(x − 2)². This is brief.

Alone 31/12/2017 at 11:07 We have: \(\left(x+1\right)\left(x-3\right)-\left(x+5\right)\left(x-5\right)\left(x-2\right)=0\) \(\Leftrightarrow x^2-2x-3-\left(x^2-25\right)\left(x-2\right)=0\) \(\Leftrightarrow x^2-2x-3-x^3+2x^2+25x-50=0\) \(\Leftrightarrow3x^2-x^3+23x-53=0\) \(\Leftrightarrow x^2\left(3-x\right)-23\left(3-x\right)+16=0\) \(\Leftrightarrow\left(x^2-23\right)\left(3-x\right)+16=0\) Since \(3-x\) is an integer, \(x^2-23\) must be a divisor of 16: \(x^2-23\in\left\{-16,-8,-4,-2,-1,1,2,4,8,16\right\}\) \(\Rightarrow x^2\in\left\{7,15,19,21,22,24,25,27,31,39\right\}\) The only perfect square in this list is 25, which would require \(x=\pm5\) and at the same time \(3-x=-8\), i.e. \(x=11\), a contradiction. So no integer x satisfies the equation. FA Liên Quân Garena selected this answer.
Solve the system of equations: \(\left\{{}\begin{matrix}17x+2y=2011\left|xy\right|\\x-2y=3xy\end{matrix}\right.\)

Ngu Ngu Ngu 13/04/2017 at 22:55 Denote the system above by \(\left(1\right)\). If \(xy>0\) then: \(\left(1\right)\Leftrightarrow\left\{{}\begin{matrix}\dfrac{17}{y}+\dfrac{2}{x}=2011\\\dfrac{1}{y}-\dfrac{2}{x}=3\end{matrix}\right.\Leftrightarrow\left\{{}\begin{matrix}\dfrac{1}{y}=\dfrac{1007}{9}\\\dfrac{1}{x}=\dfrac{490}{9}\end{matrix}\right.\Leftrightarrow\left\{{}\begin{matrix}x=\dfrac{9}{490}\\y=\dfrac{9}{1007}\end{matrix}\right.\) (satisfies) If \(xy< 0\) then: \(\left(1\right)\Leftrightarrow\left\{{}\begin{matrix}\dfrac{17}{y}+\dfrac{2}{x}=-2011\\\dfrac{1}{y}-\dfrac{2}{x}=3\end{matrix}\right.\Leftrightarrow\left\{{}\begin{matrix}\dfrac{1}{y}=-\dfrac{1004}{9}\\\dfrac{1}{x}=-\dfrac{1031}{18}\end{matrix}\right.\)\(\Rightarrow xy>0\) (contradiction) If \(xy=0\) then: \(\left(1\right)\Leftrightarrow x=y=0\) (satisfies) Conclusion: the system has two solutions, \(\left(0;0\right)\) and \(\left(\dfrac{9}{490};\dfrac{9}{1007}\right)\). Nguyễn Thị Huyền Mai selected this answer.

Ngu Ngu Ngu 14/04/2017 at 08:08 \(2\left(x^2+\dfrac{1}{x^2}\right)+3\left(x+\dfrac{1}{x}\right)-16=0\;\left(1\right)\) Condition: \(x\ne0\). Put \(t=x+\dfrac{1}{x}\Rightarrow x^2+\dfrac{1}{x^2}=t^2-2\) \(\left(1\right)\Leftrightarrow2t^2+3t-20=0\) \(\Leftrightarrow\left[{}\begin{matrix}t=-4\\t=\dfrac{5}{2}\end{matrix}\right.\) If \(t=-4\Rightarrow x=-2\pm\sqrt{3}\) If \(t=\dfrac{5}{2}\) \(\Rightarrow\left[{}\begin{matrix}x=2\\x=\dfrac{1}{2}\end{matrix}\right.\) Conclude:... Use Ka Ti selected this answer.
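The four roots found by the substitution \(t=x+\dfrac{1}{x}\) above can be checked numerically; a quick sanity check, not part of the original answer:

```python
import math

def f(x):
    # left-hand side of 2(x^2 + 1/x^2) + 3(x + 1/x) - 16 = 0
    return 2 * (x ** 2 + 1 / x ** 2) + 3 * (x + 1 / x) - 16

# t = -4 gives x = -2 +/- sqrt(3); t = 5/2 gives x = 2 or x = 1/2
roots = [2, 0.5, -2 + math.sqrt(3), -2 - math.sqrt(3)]
for r in roots:
    assert math.isclose(f(r), 0, abs_tol=1e-9), r
```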
This is the 4th article in the series on Analysis of Algorithms. In the first article, we learned about the running time of an algorithm and how to compute asymptotic bounds, covering the concepts of upper bound, tight bound, and lower bound. In the second article, we learned about best-case, average-case, and worst-case analysis. In the third article, we learned about amortized analysis for some data structures. Now we are ready to use this knowledge to analyze real code.

In this article, we learn how to estimate the running time of an algorithm by looking at the source code, without running the code on a computer. The estimated running time tells us the efficiency of the algorithm, and knowing the efficiency helps in the decision-making process. Even though there is no magic formula for analyzing the efficiency of an algorithm (it is largely a matter of judgment, intuition, and experience), there are some techniques that are often useful, which we are going to discuss here.

The approach we follow is also called a theoretical approach. In this approach, we calculate the cost (running time) of each individual programming construct and combine the costs to get the overall complexity of the algorithm.

Knowing the cost of basic operations helps to calculate the overall running time of an algorithm. The table below lists common basic operations along with their running times. It is by no means a comprehensive list, but it includes most of the operations that we come across frequently in programming.

Operation | Running Time
Integer add/subtract | $\Theta(1)$
Integer multiply/divide | $\Theta(1)$
Float add/subtract | $\Theta(1)$
Float multiply/divide | $\Theta(1)$
Trigonometric functions (sine, cosine, ...) | $\Theta(1)$
Variable declaration | $\Theta(1)$
Assignment operation | $\Theta(1)$
Logical operations ($<, >, \le, \ge$, etc.) | $\Theta(1)$
Array access | $\Theta(1)$
Array length | $\Theta(1)$
1D array allocation | $\Theta(n)$
2D array allocation | $\Theta(n^2)$
Substring extraction | $\Theta(1)$ or $\Theta(n)$
String concatenation | $\Theta(n)$

Let two independent consecutive statements be $P_1$ and $P_2$. Let $t_1$ be the cost of running $P_1$ and $t_2$ the cost of running $P_2$. The total cost of the program is the sum of the individual costs, i.e. $t_1 + t_2$. In asymptotic notation, the total time is $\Theta(\max(t_1, t_2))$ (we ignore the non-significant term).

Example: Consider the following code.

int main() {
    /* statement 1, cost Theta(n) */
    /* statement 2, cost Theta(n^2) */
}

Assume that statement 2 is independent of statement 1 and that statement 1 executes first, followed by statement 2. The total running time is $$\Theta(\max(n, n^2)) = \Theta(n^2)$$

It is relatively easier to compute the running time of a for loop than of any other loop. All we need to compute the running time is the number of times the statement inside the loop body is executed. Consider a simple for loop in C.

for (i = 0; i < 10; i++) {
    /* loop body, m operations */
}

The loop body is executed 10 times. If it takes $m$ operations to run the body, the total number of operations is $10 \times m = 10m$. In general, if the loop iterates $n$ times and the running time of the loop body is $m$, the total cost of the program is $n \times m$. Please note that we are ignoring the time taken by the expression $i < 10$ and the statement $i++$. If we include these, the total time becomes $$1 + 2\times n + mn = \Theta(mn)$$

In this analysis, we made one important assumption: that the body of the loop does not depend on $i$. Sometimes the runtime of the body does depend on $i$, and in that case our calculation becomes a little more difficult. Consider the example shown below.

for (i = 0; i < n; i++) {
    if (i % 2 == 0) {
        /* Theta(n^2) work */
    }
}

In the for loop above, control goes inside the if condition only when $i$ is an even number.
That means the body of the if condition gets executed $n/2$ times. The total cost is therefore $n/2 \times n^2 = n^3/2 = \Theta(n^3)$.

Suppose there are $p$ nested for loops that execute $n_1, n_2, \dots, n_p$ times respectively. The total cost of the entire program is $$n_1 \times n_2 \times \dots \times n_p \times \text{cost of the body of the innermost loop}$$ Consider the nested for loops given in the code below.

for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        /* body, n operations */
    }
}

There are two for loops, each of which goes around $n$ times, so the total cost is $$n \times n \times n = n^3 = \Theta(n^3)$$

while loops are usually harder to analyze than for loops because there is no obvious a priori way to know how many times we shall have to go round the loop. One way of analyzing a while loop is to find a variable that increases or decreases until the terminating condition is met. Consider the example given below.

while (i > 0) {
    /* n work per iteration */
    i = i / 2;
}

How many times does the loop repeat? In every iteration, the value of $i$ gets halved. If the initial value of $i$ is 16, after 4 iterations it becomes 1, and one more halving ends the loop. This implies that the loop repeats about $\log_2 i$ times. In each iteration it does $n$ work, therefore the total cost is $\Theta(n\log_2 i)$.

To calculate the cost of a recursive call, we first transform the recursive function into a recurrence relation and then solve the recurrence relation to get the complexity. There are many techniques to solve recurrence relations; they will be discussed in detail in the next article.

int fact(int n) {
    if (n <= 2) {
        return n;
    }
    return n * fact(n - 1);
}

We can transform the code into a recurrence relation as follows. $$T(n) = \begin{cases}a & \text{if } n \le 2\\ b + T(n-1) & \text{otherwise}\end{cases}$$ When $n$ is 1 or 2, the factorial of $n$ is $n$ itself, and we return the result in constant time $a$. Otherwise, we calculate the factorial of $n - 1$ and multiply the result by $n$; the multiplication takes a constant time $b$. We use a technique called back substitution to find the complexity.
$$\begin{align} T(n) & = b + T(n - 1) \\ &= b + b + T(n - 2) \\ &= b + b + b + T(n - 3)\\ & = 3b + T(n - 3) \\ & = kb + T(n - k) \\ & = (n-2)b + T(2) && (k = n - 2)\\ & = (n-2)b + a\\ & = \Theta(n) \end{align}$$

Let us put together all the techniques discussed above and compute the running time of some example programs.

1 int sum(int a, int b) {
2     int c = a + b;
3     return c;
4 }

The sum function has two statements. The first statement (line 2) runs in constant time, i.e. $\Theta(1)$, and the second statement (line 3) also runs in constant time, $\Theta(1)$. These are consecutive statements, so the total running time is $\Theta(1) + \Theta(1) = \Theta(1)$.

1 int array_sum(int a[], int n) {
2     int i;
3     int sum = 0;
4     for (i = 0; i < n; i++) {
5         sum = sum + a[i];
6     }
7     return sum;
8 }

Analysis: Line 2 is a variable declaration; the cost is $\Theta(1)$. Line 3 is a variable declaration and assignment; the cost is $\Theta(1)$. Lines 4-6 are a for loop that repeats $n$ times, and the body of the loop requires $\Theta(1)$ to run, so the total cost is $\Theta(n)$. Line 7 is a return statement; the cost is $\Theta(1)$. These are consecutive statements, so the overall cost is $\Theta(n)$.

1  int sum = 0;
2  for (i = 0; i < n; i++) {
3      for (j = 0; j < n; j++) {
4          for (k = 0; k < n; k++) {
5              if (i == j && j == k) {
6                  for (l = 0; l < n; l++) {
7                      sum = sum + l;
8                  }
9              }
10         }
11     }
12 }

Analysis: Line 1 is a variable declaration and initialization; the cost is $\Theta(1)$. Lines 2-12 are nested for loops. There are four for loops that each repeat $n$ times, but after the third for loop there is the condition i == j == k, which is true only $n$ times out of the $n^3$ triples. So the innermost loop contributes $n \times \Theta(n) = \Theta(n^2)$, the three outer loops contribute $\Theta(n^3)$ condition checks, and the total cost of the loops is $\Theta(n^3) + \Theta(n^2) = \Theta(n^3)$. The overall cost is $\Theta(n^3)$.

Brassard, G., & Bratley, P. (2008). Fundamentals of Algorithmics. New Delhi: PHI Learning Private Limited.
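The while-loop analysis above (the value of $i$ halving on each pass) can be sanity-checked by counting iterations empirically; a minimal Python sketch (the helper name is mine):

```python
import math

def halving_iterations(i):
    """Count how many halvings bring i down to 1, as in the while-loop example."""
    count = 0
    while i > 1:
        i = i // 2
        count += 1
    return count

# for i = 16 the body runs 4 times before i reaches 1, matching log2(16)
assert halving_iterations(16) == 4 == int(math.log2(16))
# and the count grows like log2(i) in general (powers of two shown here)
for i in [2, 64, 1024]:
    assert halving_iterations(i) == int(math.log2(i))
```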
In the previous post, we learned the theoretical (or mathematical) approach to computing the running time of an algorithm. In this post, we will learn a more practical, empirical approach. Please note that the empirical method is quite limited and does not work for all kinds of algorithms; it works best for algorithms whose running time is a power of $n$. In the theoretical approach, we use mathematical knowledge to find the running time and do not need a computer. In the empirical approach, by contrast, we run the program on an actual computer over various data sets, and the relative performance gives the approximate running time of the program. Let me say again that the theoretical approach is much superior, and I strongly recommend using it wherever applicable. The empirical approach can be used when you do not have enough theoretical background in algorithm analysis. It can also be useful when you do not have access to the source code, or when the source code is minified. In these cases, we run the algorithm with varying input sizes and record the time taken by the program on each input. We can then use a mathematical tool like regression to obtain a model that predicts the running time for an unknown input size. First, I will explain the steps needed to estimate the running time empirically, and then I will estimate the running time of selection sort using this approach. The first step is to run the program and record the result for different input sizes. We start with a small input and gradually increase it, recording the corresponding time taken by the program. We usually double the input size each time we run the program, and repeat the experiment up to 10 times. Now tabulate the input sizes and the corresponding times. A sample table is given below.
Input Size (N) | Time Taken T(N)
200 | 0.001 sec
400 | 0.005 sec
800 | 0.012 sec
… | …
16000 | 5 sec

Most of the time, 10 rows are sufficient if you double the input size on each run. Our goal is to find a mathematical function that best fits this data. We transform both the input size and the time taken to a logarithmic scale. Since we double the input size on each run, when we normalize the data to a $\log_2$ scale and plot the result (input size on the x-axis, runtime on the y-axis), we get a straight line. The slope of the line gives the polynomial degree of the running time: if the slope of the graph is 3, then the running time of the program is $\Theta(n^3)$. The equation of the straight line we get after normalizing the data to a logarithmic scale looks like $$\begin{align} \log_2(T(N)) = a \cdot \log_2(N) + b\end{align}$$ To get the required model, we need to find the values of $a$ and $b$, which we obtain by fitting the transformed data to equation (1). Once we have the model, we can simply plug in the input size and it gives the time required to run the program. If we exponentiate both sides of equation (1) (base 2), we get $$T(N) = 2^bN^a$$ We already know $a$ and $b$; the only independent variable is $N$, the size of the input. We now go through the steps discussed above and estimate the running time of selection sort. A Python implementation of selection sort is given below.

def selectionSort(alist):
    # repeatedly select the minimum of the unsorted tail and swap it into place
    for i in range(len(alist)):
        min_index = i
        for j in range(i + 1, len(alist)):
            if alist[j] < alist[min_index]:
                min_index = j
        alist[i], alist[min_index] = alist[min_index], alist[i]

The result of running the above selection sort algorithm on my computer is tabulated below.
Input Size (N) | Running Time T(N)
250 | 0.001868 sec
500 | 0.009274 sec
1000 | 0.028015 sec
2000 | 0.1062 sec
4000 | 0.46821 sec
8000 | 1.8646 sec
16000 | 7.6841 sec
32000 | 31.631799 sec

Next, we transform these values into a logarithmic scale.

$\log_2$(Input Size) | $\log_2$(Running Time)
7.96 | -9.064
8.96 | -6.75
9.96 | -5.15
10.96 | -3.23
11.96 | -1.09
12.96 | 0.89
13.96 | 2.94
14.96 | 4.98

If we do a regression fit on these transformed values, we get $a = 2.011$, $b = -25.16$, so our model becomes $$T(N) = 2^{-25.16}N^{2.011}$$ The model shows that the program has complexity $\Theta(n^{2.011})$, which is very close to the complexity of selection sort, $\Theta(n^{2})$. Using this model we can predict the running time for $N = 64000$: $$T(64000) = 2^{-25.16}\cdot 64000^{2.011} = 123.39$$ I ran the program with input size 64000 and it gave the result in 126.668 sec, which shows that our model is accurate.
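The log-log fit above can be reproduced with a few lines of Python. This is a sketch using ordinary least squares on the (rounded) measurements from the table, so the recovered constants come out slightly different from the article's $a = 2.011$, $b = -25.16$:

```python
import math

# (input size, measured seconds) from the selection-sort experiment above
data = [(250, 0.001868), (500, 0.009274), (1000, 0.028015), (2000, 0.1062),
        (4000, 0.46821), (8000, 1.8646), (16000, 7.6841), (32000, 31.631799)]

# transform to log2 scale: log2(T) = a*log2(N) + b should be a straight line
xs = [math.log2(n) for n, _ in data]
ys = [math.log2(t) for _, t in data]

# ordinary least squares for slope a and intercept b
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

# a slope near 2 confirms the roughly Theta(n^2) behaviour of selection sort
assert 1.9 < a < 2.1

# predict the running time for N = 64000 with the model T(N) = 2**b * N**a
prediction = 2 ** b * 64000 ** a
```

On this rounded data the fit gives a slope just under 2 and a prediction in the neighborhood of the article's 123 sec.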
Prove that $H_n=\{\sigma \in S_n \mid \sigma (i) \equiv i \pmod 3 \text{ for all } i\}$ with $n≥2$ is a subgroup of $S_n$. I'm doing this problem and I don't know if my approach is correct. First of all, we see that $id\in H_n$, because $id(i)=i\equiv i\! \pmod 3$, so $H_n$ is not empty. We now have to show that if $\sigma$, $\tau \in H_n$, then $\sigma\tau^{-1}\in H_n$. First, if $\tau \in H_n$, then $\tau (i) \equiv i \pmod 3$ for every $i$. Furthermore, $\tau^{-1} (i) \equiv i \pmod 3$: writing $j=\tau^{-1}(i)$, we have $i=\tau(j)\equiv j \pmod 3$. So, writing $\tau^{-1}(i)=i+3k$ for some integer $k$, $$\sigma\tau^{-1}(i)=\sigma(\tau^{-1}(i))=\sigma(i+3k)\equiv i+3k \!\!\!\pmod 3 \equiv i\!\!\! \pmod 3,$$ so $\sigma\tau^{-1}\in H_n$, and $H_n$ is a subgroup of $S_n$. I know that this is an easy exercise, but I'm having some trouble with permutation groups, so I would like someone to tell me if I made any mistakes. Thank you.
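As a cross-check of the one-step subgroup test, one can enumerate $H_n$ for a small $n$ and verify closure and inverses by brute force (a sketch, not part of the proof; helper names are mine):

```python
from itertools import permutations, product

n = 6
# represent sigma as a tuple p with p[i-1] = sigma(i)
H = [p for p in permutations(range(1, n + 1))
     if all((p[i - 1] - i) % 3 == 0 for i in range(1, n + 1))]

identity = tuple(range(1, n + 1))
assert identity in H  # H_n is nonempty

def compose(s, t):
    """(s o t)(i) = s(t(i))."""
    return tuple(s[t[i - 1] - 1] for i in range(1, n + 1))

def inverse(s):
    inv = [0] * n
    for i in range(1, n + 1):
        inv[s[i - 1] - 1] = i
    return tuple(inv)

# the one-step test: sigma * tau^{-1} stays in H for all sigma, tau in H
assert all(compose(s, inverse(t)) in H for s, t in product(H, H))
```

For $n = 6$ the residue classes mod 3 are $\{1,4\}$, $\{2,5\}$, $\{3,6\}$, so $H_6$ has $2!\cdot 2!\cdot 2! = 8$ elements.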
Abstract. The Bernoulli convolution $\nu_\lambda$ with parameter $\lambda\in(0,1)$ is the probability measure supported on $\mathbf{R}$ that is the law of the random variable $\sum\pm\lambda^n$, where the $\pm$ are independent fair coin-tosses. We prove that $\dim\nu_\lambda=1$ for all transcendental $\lambda\in(1/2,1)$.
WHY? For the audio source separation task, the traditional approach only utilized the magnitude part of the spectrogram, ignoring the phase. Previously, the deep complex network (DCN) provided complex arithmetic via convolution. WHAT? Deep Complex U-Net (DCU) modifies the plain DCN to better preserve audio information. First, DCU uses strided complex-valued convolutional layers instead of the max pooling operation. Second, complex batch normalization is used. Third, leaky CReLU is used instead of CReLU. For the source separation task, DCU uses a complex-valued mask instead of a real-valued mask: $$\hat{Y}_{t,f} = M_{t,f}\cdot X_{t,f} = (|M_{t,f}|\cdot|X_{t,f}|)\cdot e^{i(\theta_{X_{t,f}} + \theta_{M_{t,f}})}$$ The problem with a complex-valued mask is that its values can range from $-\infty$ to $\infty$ (an unbounded mask). An unbounded mask may capture any form of appropriate mask, but empirically good optimization proved difficult. Previous work suggested a sigmoid activation for each of the real and imaginary parts; however, judging from the distribution of appropriate masks, this activation can capture only a very limited area. DCU suggests polar-coordinate-wise masking to keep the mask bounded within the unit circle of the complex plane: $$M_{t,f} = |M_{t,f}|\cdot e^{i\theta_{M_{t,f}}} = M_{t,f}^{mag}\cdot M_{t,f}^{phase}, \qquad M_{t,f}^{mag} = \tanh(|O_{t,f}|), \quad M_{t,f}^{phase} = O_{t,f}/|O_{t,f}|$$ Since the usual MSE losses (spectrogram-MSE, wave-MSE) do not correlate well with the evaluation measures, DCU proposed a weighted-SDR loss. The source-to-distortion ratio (SDR) represents the distortion of the reconstructed audio: $$\max_{\hat{y}} \mathrm{SDR}(y, \hat{y}) := \max_{\hat{y}} \frac{\langle y, \hat{y}\rangle^2}{\|y\|^2\|\hat{y}\|^2 - \langle y, \hat{y}\rangle^2} \propto \min_{\hat{y}}\frac{\|y\|^2\|\hat{y}\|^2}{\langle y, \hat{y}\rangle^2} - \frac{\langle y, \hat{y}\rangle^2}{\langle y, \hat{y}\rangle^2} \propto \min_{\hat{y}}\frac{\|\hat{y}\|^2}{\langle y, \hat{y}\rangle^2}$$ To prevent division by zero and bound the loss between $0$ and $2$, the weighted-SDR loss is proposed: $$loss_{SDR}(y, \hat{y}) := -\frac{\langle y, \hat{y}\rangle}{\|y\|\|\hat{y}\|+\epsilon}$$ $$loss_{wSDR}(x, y, \hat{y}) := \alpha\, loss_{SDR}(y, \hat{y}) + (1-\alpha)\, loss_{SDR}(z, \hat{z}) + 1, \qquad \alpha = \frac{\|y\|^2}{\|y\|^2+\|z\|^2}, \quad z = x-y, \quad \hat{z} = x-\hat{y}$$ So? DCU achieved state-of-the-art results in CSIG, CBAK, COVL, PESQ, and SSNR compared to SEGAN, WaveNet, MMSE-GAAN and Deep Feature Loss. Critic Incredible application of complex networks!
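To make the masking and loss concrete, here is a minimal NumPy sketch (my own illustration, not the authors' code): `o` plays the role of the raw complex network output $O_{t,f}$, $\alpha$ is taken as the target-to-total energy ratio, and the SDR term carries a negative sign so that a lower loss means a better reconstruction.

```python
import numpy as np

def bounded_complex_mask(o, eps=1e-12):
    """Polar-coordinate-wise mask: tanh-squashed magnitude times the
    unit-modulus phase factor, so |mask| < 1 everywhere."""
    mag = np.tanh(np.abs(o))
    phase = o / (np.abs(o) + eps)
    return mag * phase

def weighted_sdr_loss(x, y, y_hat, eps=1e-8):
    """Weighted-SDR loss sketch: convex combination of negative normalized
    correlations for the target y and the noise z = x - y, shifted by +1
    so the value lies in [0, 2] and is ~0 for a perfect estimate."""
    def neg_sdr(a, b):
        # negative normalized correlation, strictly inside (-1, 1);
        # eps guards against division by zero
        return -np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    z, z_hat = x - y, x - y_hat
    alpha = np.sum(y ** 2) / (np.sum(y ** 2) + np.sum(z ** 2))
    return alpha * neg_sdr(y, y_hat) + (1 - alpha) * neg_sdr(z, z_hat) + 1

rng = np.random.default_rng(0)
o = rng.normal(size=(4, 8)) + 1j * rng.normal(size=(4, 8))
m = bounded_complex_mask(o)
print(np.abs(m).max())  # strictly below 1: the mask stays in the unit circle
```

The `tanh` squashing is what keeps the estimated spectrogram magnitude from being scaled arbitrarily, while the phase factor is passed through untouched.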
LHCb Collaboration; Aaij, R; Adeva, B; Adinolfi, M; Anderson, J; Bernet, R; Bowen, E; Bursche, A; Chiapolini, N; Chrzaszcz, M; Dey, B; Elsasser, C; Graverini, E; Lionetto, F; Lowdon, P; Mauri, A; Müller, K; Serra, N; Steinkamp, O; Storaci, B; Straumann, U; Tresch, M; Vollhardt, A; Weiden, A; et al. (2015). Measurement of the $B_s^0 \to \phi \phi$ branching fraction and search for the decay $B^0 \to \phi \phi$. Journal of High Energy Physics, 2015(10):53. Abstract Using a dataset corresponding to an integrated luminosity of 3.0 fb$^{-1}$ collected in $pp$ collisions at centre-of-mass energies of 7 and 8 TeV, the $B_s^0 \to \phi \phi$ branching fraction is measured to be \[ \mathcal{B}(B_s^0 \to \phi \phi) = ( 1.84 \pm 0.05 (\text{stat}) \pm 0.07 (\text{syst}) \pm 0.11 (f_s/f_d) \pm 0.12 (\text{norm}) ) \times 10^{-5}, \] where $f_s/f_d$ represents the ratio of the $B_s^0$ to $B^0$ production cross-sections, and the $B^0 \to \phi K^*(892)^0$ decay mode is used for normalization. This is the most precise measurement of this branching fraction to date, representing a factor five reduction in the statistical uncertainty compared with the previous best measurement. A search for the decay $B^0 \to \phi \phi$ is also made. No signal is observed, and an upper limit on the branching fraction is set as \[\mathcal{B}(B^0 \to \phi \phi) < 2.8 \times 10^{-8} \] at 90% confidence level. This is a factor of seven improvement compared to the previous best limit.
To celebrate Pi Day (3/14) adequately, a challenging math puzzle must not be missing. Rules: Fill in the numbers 1–9 exactly once in every row, column, and region. On top of that, you need to use $\pi$ exactly three times in every row, column and region to fill in the remaining gaps. What's the solution for this $\pi$-doku? Additional challenge: Try to find a "creative way" to write $\pi$ each time you're using it! For example: $4\cdot\left(\left(\frac{1}{2}\right)!\right)^2$
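As a quick check that the example expression really equals $\pi$, using $(1/2)! = \Gamma(3/2) = \frac{1}{2}\Gamma(1/2) = \sqrt{\pi}/2$:

```latex
4\cdot\left(\left(\tfrac{1}{2}\right)!\right)^2
  \;=\; 4\cdot\Gamma\!\left(\tfrac{3}{2}\right)^{2}
  \;=\; 4\cdot\left(\frac{\sqrt{\pi}}{2}\right)^{2}
  \;=\; 4\cdot\frac{\pi}{4} \;=\; \pi.
```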
The Annals of Statistics Ann. Statist. Volume 47, Number 1 (2019), 556-582. A critical threshold for design effects in network sampling Abstract Web crawling, snowball sampling, and respondent-driven sampling (RDS) are three types of network sampling techniques used to contact individuals in hard-to-reach populations. This paper studies these procedures as a Markov process on the social network that is indexed by a tree. Each node in this tree corresponds to an observation and each edge in the tree corresponds to a referral. Indexing with a tree (instead of a chain) allows for the sampled units to refer multiple future units into the sample. In survey sampling, the design effect characterizes the additional variance induced by a novel sampling strategy. If the design effect is some value $\operatorname{DE}$, then constructing an estimator from the novel design makes the variance of the estimator $\operatorname{DE}$ times greater than it would be under a simple random sample with the same sample size $n$. Under certain assumptions on the referral tree, the design effect of network sampling has a critical threshold that is a function of the referral rate $m$ and the clustering structure in the social network, represented by the second eigenvalue of the Markov transition matrix, $\lambda_{2}$. If $m<1/\lambda_{2}^{2}$, then the design effect is finite (i.e., the standard estimator is $\sqrt{n}$-consistent). However, if $m>1/\lambda_{2}^{2}$, then the design effect grows with $n$ (i.e., the standard estimator is no longer $\sqrt{n}$-consistent). Past this critical threshold, the standard error of the estimator converges at the slower rate of $n^{\log_{m}\lambda_{2}}$. The Markov model allows for nodes to be resampled; computational results show that the findings hold in without-replacement sampling. To estimate confidence intervals that adapt to the correct level of uncertainty, a novel resampling procedure is proposed. 
Computational experiments compare this procedure to previous techniques. Article information Source Ann. Statist., Volume 47, Number 1 (2019), 556-582. Dates Received: May 2017 Revised: February 2018 First available in Project Euclid: 30 November 2018 Permanent link to this document https://projecteuclid.org/euclid.aos/1543568598 Digital Object Identifier doi:10.1214/18-AOS1700 Mathematical Reviews number (MathSciNet) MR3909942 Zentralblatt MATH identifier 07036211 Subjects Primary: 62D99: None of the above, but in this section Secondary: 60J20: Applications of Markov chains and discrete-time Markov processes on general state spaces (social mobility, learning theory, industrial processes, etc.) [See also 90B30, 91D10, 91D35, 91E40] Citation Rohe, Karl. A critical threshold for design effects in network sampling. Ann. Statist. 47 (2019), no. 1, 556--582. doi:10.1214/18-AOS1700. https://projecteuclid.org/euclid.aos/1543568598 Supplemental materials Supplement: Proofs for Sections 3 and 4. Due to space constraints, this supplement contains the proofs for the results in Sections 3 and 4. Moreover, it contains an additional computational experiment to study the widths of the bootstrap confidence intervals.
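The threshold described in the abstract is easy to see in a toy simulation (my own sketch, not from the paper): take a two-block Markov chain with second eigenvalue $\lambda_2 = 0.8$ and compare a referral chain ($m = 1$, so $m\lambda_2^2 = 0.64 < 1$) with a ternary referral tree ($m = 3$, so $m\lambda_2^2 = 1.92 > 1$). The empirical design effect $n\cdot\operatorname{Var}(\bar{y})/(p(1-p))$ stays moderate in the subcritical case and is far larger in the supercritical one, even though the tree sample is much bigger.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-block Markov chain with second eigenvalue lam2; the stationary
# distribution is uniform and the outcome is y = 1 on block 0 (mean 1/2).
lam2 = 0.8
stay = (1 + lam2) / 2  # probability a referral stays in the same block

def tree_sample_mean(m, depth):
    """Mean of y over a referral tree: every node refers m others, and each
    referral's block follows the Markov transition from its parent's block."""
    states = [rng.integers(2)]  # root drawn from the stationary distribution
    total = count = 0
    for _ in range(depth):
        states = [s if rng.random() < stay else 1 - s
                  for s in states for _ in range(m)]
        total += sum(1 - s for s in states)  # y = 1 for block 0
        count += len(states)
    return total / count

design_effect = {}
for m, depth in [(1, 500), (3, 8)]:
    means = [tree_sample_mean(m, depth) for _ in range(200)]
    n = sum(m ** d for d in range(1, depth + 1))
    design_effect[m] = n * np.var(means) / 0.25
print(design_effect)  # the m = 3 design effect is dramatically larger
```

The Markov model here allows resampling of blocks, as in the paper's theoretical setup; the specific constants (depths, replicate counts) are arbitrary choices for the illustration.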
Godbole, Rohini M and Pancheri, G (2001) $\gamma \gamma$ cross-sections and $\gamma \gamma$ colliders. [Preprint] PDF 0101320.pdf Download (195kB) Abstract We summarize the predictions of different models for total $\gamma \gamma$ cross-sections. The experimentally observed rise of $\sigma_{\gamma \gamma}$ with $\sqrt{s_{\gamma \gamma}}$, faster than that of $\sigma_{\bar p p}$ and $\sigma_{\gamma p}$, is in agreement with the predictions of the Eikonalized Minijet Models (EMM) as opposed to those of the Regge-Pomeron models. We then show that a measurement of $\sigma_{\gamma \gamma}$ with an accuracy of $\lessapprox 8$-$9\%$ ($6$-$7\%$) is necessary to distinguish among different Regge-Pomeron type models (among the different parametrisations of the EMM), and a precision of $\lessapprox 20\%$ is required to distinguish between the predictions of the EMM and of those models which treat the photon 'like a proton', for the energy range $300 < \sqrt{s_{\gamma \gamma}} < 500$ GeV. We further show that the difference in model predictions for $\sigma_{\gamma \gamma}$, of about a factor 2 at $\sqrt{s_{\gamma \gamma}} = 700$ GeV, reduces to $\sim 30\%$ when folded with bremsstrahlung $\gamma$ spectra to calculate $\sigma (e^+e^- \to e^+e^-\gamma \gamma \to e^+e^- X)$. We then point out the special role that $\gamma \gamma$ colliders can play in shedding light on this all-important issue of the calculation of total hadronic cross-sections. Item Type: Preprint Additional Information: Nucl.Instrum.Meth. A472 (2001) 205-211 Department/Centre: Division of Physical & Mathematical Sciences > Centre for Theoretical Studies (Ceased to exist at the end of 2003) Depositing User: Ramnishath A Date Deposited: 13 Aug 2004 Last Modified: 19 Sep 2010 04:13 URI: http://eprints.iisc.ac.in/id/eprint/815
In this tutorial we shall derive the series expansion of $\sqrt{1 + x}$ by using Maclaurin's series expansion…
Calculus
In this tutorial we shall discuss an example of the slope of a tangent to any curve at some given…
In this tutorial we shall look at the differentials of independent and dependent variables. Some applications of differentials will be…
In this tutorial we shall develop the differentials to approximate the value of $\sqrt{49.5}$. The nearest number to…
In this tutorial we shall develop the differentials to approximate the value of $\csc{61^\circ}$. The nearest number…
Example: Integrate $\frac{x^2 - 2x^4}{x^4}$ with respect to $x$. Consider the function to be integrated \[I = \int \frac{x^2 –…
I've been working through a textbook and course on conformal field theory recently. However in a section illustrating how to calculate correlators for secondary fields (using the free boson as an example CFT), there is one line of working I don't understand. Given a secondary field $ \vert \varphi \rangle $, and a not necessarily primary field $ \vert \phi \rangle$ related by: $$ \vert \varphi \rangle = a_{-r} \vert \phi \rangle : r > 0 $$ apparently we can reverse the state-field correspondence to write: $$ \varphi \left( w \right) = \frac{1}{2 \pi i} \oint_{w} \frac{ \partial \Psi \left( z \right) \phi \left( w \right) } { \left( z - w \right)^r } \mathrm{d}z $$ I have no idea how they managed to get this from the above field. Note that here $a_{-r}$ is a creation operator, and $\Psi$ is some arbitrary field (I assume? it wasn't actually specified where $\Psi$ came from). This step seemed like a bit of a jump, and had no other explanation. However my attempt at a solution was: Given that, for the free boson: $$\partial \Psi \left( z \right) = \sum_{r \in \mathbb{Z}} a_{r} z^{-r-1}$$ We can invert this to get: $$ a_{-r} = \frac{1}{2 \pi i} \oint_0 \frac{\partial \Psi \left(z \right)} {z^{-r+1} } \mathrm{d} z $$ So: $$\varphi \left( w \right) = \left( \frac{1}{2 \pi i} \oint_0 \frac{\partial \Psi \left(z \right)} {z^{-r+1} } \mathrm{d} z \right) \ \ \phi \left( w \right) $$ $$ = \frac{1}{2 \pi i} \oint_0 \frac{\partial \Psi \left(z \right) \phi \left( w \right)} {z^{-r+1} } \mathrm{d} z $$ Now clearly this isn't quite the same as what I need to get, as I showed above. The integral is contoured around $0$ not $w$, and has the wrong denominator. It seems like this could be fixed by using a Laurent series for $\partial \Psi \left( z \right)$ centred around $w$. However surely this would mean the coefficients would no longer be $a_{-r}$.
Further, this would fix the contour so it's now around $w$, and turn the $z$ into a $\left( z-w \right)$ in the denominator, but wouldn't fix the incorrect power. This makes me think that perhaps this isn't the correct approach then.
Let $X_1, \ldots, X_{n_X}$ and $Y_1, \ldots, Y_{n_Y}$ be $n_X$ and $n_Y$ iid observations from two independent Bernoulli populations with probabilities of success $p_X$ and $p_Y$. Define the statistics $T_X = \sum_{i=1}^{n_X}X_i$ and $T_Y = \sum_{i=1}^{n_Y}Y_i$. I am testing the hypothesis: $$ H_0 : p_X = p_Y \ \ \ \text{and} \ \ \ H_1 : p_X \neq p_Y $$ Consider the test statistic: $$ T = \dfrac{\hat{p}_X-\hat{p}_Y}{\sqrt{\hat{p}(1-\hat{p})\left(\frac{1}{n_X}+\frac{1}{n_Y}\right)}} $$ where $\hat{p}_X = \frac{T_X}{n_X}$, $\hat{p}_Y = \frac{T_Y}{n_Y}$, and $\hat{p} = \frac{T_X+T_Y}{n_X+n_Y}$. I would like to derive the asymptotic distribution of $T$ under $H_0$ as both $n_X$ and $n_Y$ go to infinity; that is, that under the null: $$ T = \dfrac{\hat{p}_X-\hat{p}_Y}{\sqrt{\hat{p}(1-\hat{p})\left(\frac{1}{n_X}+\frac{1}{n_Y}\right)}} \to_D N(0,1) $$ Now this question has been asked before; see: How to derive the asymptotic distribution of the test statistic of a large sample test for equality of two binomial populations? However the answer given there is wrong because it gives a justification based on the continuous mapping theorem. The continuous mapping theorem says that for random variables $X_n$ and $X$ such that $X_n$ converges in distribution to $X$, if $g$ is a continuous function, then $g(X_n)$ converges in distribution to $g(X)$. However $g$ cannot be a function of $n$. The function $\varphi$ used in the answer is itself dependent on $n_X$ and $n_Y$, and so the continuous mapping theorem doesn't apply. In which case, how do you find the asymptotic distribution?
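Whatever the right theoretical route, the claimed limit is easy to check by simulation (a quick numerical sketch, not a proof): draw many replicates of $(T_X, T_Y)$ under $H_0$ and look at the empirical mean and standard deviation of $T$.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n_x, n_y, reps = 0.3, 2000, 3000, 5000  # H0: p_X = p_Y = p

# Vectorized: each entry of tx, ty is one replicate of T_X, T_Y
tx = rng.binomial(n_x, p, size=reps)
ty = rng.binomial(n_y, p, size=reps)
px, py = tx / n_x, ty / n_y
pooled = (tx + ty) / (n_x + n_y)
se = np.sqrt(pooled * (1 - pooled) * (1 / n_x + 1 / n_y))
t = (px - py) / se

print(t.mean(), t.std())  # both should be close to 0 and 1 if T ~ N(0,1)
```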
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
We have the following equation \begin{equation} x(k+1)=\arccos\bigg( -\frac{1}{2(Dr^{\frac {|\sin(2x(k)+\theta)|}{M\sin x(k)\sqrt{A+2B\cos(2x(k)+\theta)}}}+1)} \bigg) \nonumber \end{equation} where $A,B,D,r,M,\theta$ are constants. In Mathematica: x[k + 1] == ArcCos[-1/(2 (d r^(Abs[Sin[2 x[k] + θ]]/(m Sin[x[k]] Sqrt[a + 2 b Cos[2 x[k] + θ]])) + 1))] The equation above can be written as $x_{n+1}=f(x_{n})$. How do we show that it converges using Mathematica? How do we find the first derivative of $f$ using Mathematica? Also, is it possible to show that the first derivative is bounded?
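Outside Mathematica, the usual check is direct: iterate $x_{n+1}=f(x_n)$ and estimate $|f'|$ at the fixed point numerically; if $|f'|<1$ there, the iteration is locally contracting. A Python sketch with made-up constants (the question leaves $A,B,D,r,M,\theta$ unspecified, so these values are purely illustrative and chosen to keep the square root and arccos arguments in range):

```python
import math

# Hypothetical constants -- purely illustrative, not from the question.
A, B, D, M, r, theta = 2.0, 0.5, 1.0, 1.0, 0.9, 0.5

def f(x):
    expo = abs(math.sin(2 * x + theta)) / (
        M * math.sin(x) * math.sqrt(A + 2 * B * math.cos(2 * x + theta)))
    return math.acos(-1.0 / (2 * (D * r ** expo + 1)))

# Iterate x_{n+1} = f(x_n); for these constants the iterates settle quickly,
# since f maps (pi/2, pi) into a narrow sub-interval.
x = 2.0
for _ in range(100):
    x = f(x)

# Contraction check: central-difference estimate of |f'| at the fixed point.
h = 1e-6
deriv = (f(x + h) - f(x - h)) / (2 * h)
print(x, abs(f(x) - x), abs(deriv))
```

In Mathematica itself, the analogous tools are `FixedPoint`/`NestList` for the iteration and `D[...]` for the symbolic derivative, whose boundedness can then be examined with `Maximize` or `Reduce`.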
https://doi.org/10.1351/goldbook.S06030 Of elliptically polarized incident radiation, these are given by \[s_{0}^{0}=E_{1}^{0}+E_{2}^{0}\] \[s_{1}^{0}=E_{1}^{0}- E_{2}^{0}\] \[s_{2}^{0}=2\ \sqrt{E_{1}^{0}\ E_{2}^{0}}\ \cos \delta ^{0}\] \[s_{3}^{0}=2\ \sqrt{E_{1}^{0}\ E_{2}^{0}}\ \sin \delta ^{0}\] where \(E_{1}^{0}\) and \(E_{2}^{0}\) specify the intensities of the incident light polarized with their electric vectors vibrating perpendicular and parallel to the scattering plane, respectively, and \(\delta ^{0}\) is the phase difference between these electric vectors. See also: scattering matrix
Basically 2 strings, $a>b$, which go into the first box and do division to output $q,r$ such that $a = bq + r$ and $r<b$, then you have to check for $r=0$ which returns $b$ if we are done, otherwise inputs $b,r$ into the division box.. There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$. What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation? Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach. Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the line? Let $M$ and $N$ be $\mathbb{Z}$-modules and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ is a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$, provided $M\otimes_\mathbb{Z} H$ is an additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?" @Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider. Although not the only route, can you tell me something contrary to what I expect? It's a formula. There's no question of well-definedness. I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer. It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time. Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated. You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system. @A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago. @Eric: If you go eastward, we'll never cook! :( I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous. @TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up.
I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$) @TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite. @TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator
I'm not fully clear on this: I want to justify that if $f\in\mathcal{S}(\mathbb{R})$ then $f$ is uniformly continuous. So far, I know how to bound $|x|$ when $f$ is in the Schwartz space, but I can't proceed with the uniform continuity proof because I don't know how to bound $|y-x|$ to find a $\delta$ depending on $\varepsilon>0$ such that $|f(y)-f(x)|<\varepsilon$. Thank you so much. I'll prove a slightly stronger result. Consider the space $C_0(\mathbb R)$ of continuous functions from $\mathbb R$ to $\mathbb C$ such that $\lim_{|x|\to \infty} f(x) = 0$. The functions in this space are uniformly continuous. Consider $\epsilon >0$ and $f\in C_0(\mathbb R)$. By the limit above, there exists a radius $R>0$ such that $|f(x)|<\epsilon/2$ when $x\in \mathbb R$ and $|x|>R$. (Therefore, $f$ is small far from the point $0$.) Now, let $B=\{x\in \mathbb R:|x|\leq R+1\}$. Since $B$ is compact and $f|_B:B \to \mathbb C$ is continuous, $f|_B$ is uniformly continuous (on compact spaces, to be continuous = to be uniformly continuous). So there exists $\delta\in (0,1)$ such that $|f(x)-f(y)|<\epsilon/2$ for every $x,y\in B$ with $|x-y|<\delta$. (Therefore, $f$ is uniformly continuous near $0$.) If $x,y\in \mathbb R$ and $|x-y|<\delta$, we have the following cases: 1. If $x,y \in B$ then $|f(x)-f(y)|<\epsilon/2<\epsilon$. 2. If $x\not \in B$ then $|x|>R+1$ and therefore $|y|>R$, because $|x-y|<\delta<1$. So we obtain $|f(x)|<\epsilon/2$ and $|f(y)|<\epsilon/2$, and therefore $$|f(x)-f(y)|\leq|f(x)|+|f(y)|<\epsilon.$$ From 1. and 2. we conclude that $|f(x)-f(y)|<\epsilon$ when $|x-y|<\delta$. Now, observe that $\mathcal{S}(\mathbb R) \subset C_0(\mathbb R)$. Indeed, if $f\in \mathcal{S}(\mathbb R)$ then $(1+|x|)f(x)$ is bounded, that is, there is $B>0$ such that $(1+|x|)|f(x)|<B$ for all $x\in \mathbb R$. Observe that $$ |f(x)|<\frac{B}{1+|x|}$$ and as a consequence $\lim_{|x|\to \infty}f(x) = 0$. Therefore $f\in C_0(\mathbb R)$. Then, every $f\in \mathcal{S}(\mathbb R)$ is uniformly continuous.
There are already several good answers. However, the off-shell aspect related to Noether Theorem has not been addressed so far. (The words on-shell and off-shell refer to whether the equations of motion (e.o.m.) are satisfied or not.) Let me rephrase the problem as follows. Consider a (not necessarily isolated) Hamiltonian system with $N$ degrees of freedom (d.o.f.). The phase space has $2N$ coordinates, which we denote $(z^1, \ldots, z^{2N})$. (We shall have nothing to say about the corresponding Lagrangian problem.) 1) Symplectic structure. Usually, we work in Darboux coordinates $(q^1, \ldots, q^N; p_1, \ldots, p_N)$, with the canonical symplectic potential one-form $$\vartheta=\sum_{i=1}^N p_i dq^i.$$ However, it turns out to be more efficient in later calculations, if we instead from the beginning consider general coordinates $(z^1, \ldots, z^{2N})$ and a general (globally defined) symplectic potential one-form $$\vartheta=\sum_{I=1}^{2N} \vartheta_I(z;t) dz^I,$$ with non-degenerate (=invertible) symplectic two-form $$\omega = \frac{1}{2}\sum_{I,J=1}^{2N} \omega_{IJ} \ dz^I \wedge dz^J = d\vartheta,\qquad\omega_{IJ} =\partial_{[I}\vartheta_{J]}=\partial_{I}\vartheta_{J}-\partial_{J}\vartheta_{I}. $$ The corresponding Poisson bracket is $$\{f,g\} = \sum_{I,J=1}^{2N} (\partial_I f) \omega^{IJ} (\partial_J g), \qquad \sum_{J=1}^{2N} \omega_{IJ}\omega^{JK}= \delta_I^K. $$ 2) Action. The Hamiltonian action $S$ reads $$ S[z]= \int dt\ L_H(z^1, \ldots, z^{2N};\dot{z}^1, \ldots, \dot{z}^{2N};t),$$ where $$ L_H(z;\dot{z};t)= \sum_{I=1}^{2N} \vartheta_I(z;t) \dot{z}^I- H(z;t) $$ is the Hamiltonian Lagrangian. By infinitesimal variation $$\delta S = \int dt\sum_{I=1}^{2N}\delta z^I \left( \sum_{J=1}^{2N}\omega_{IJ} \dot{z}^J-\partial_I H - \partial_0\vartheta_I\right)+ \int dt \frac{d}{dt}\sum_{I=1}^{2N}\vartheta_I \delta z^I, \qquad \partial_0 \equiv\frac{\partial }{\partial t},$$ of the action $S$, we find the Hamilton e.o.m. 
$$ \dot{z}^I \approx \sum_{J=1}^{2N}\omega^{IJ}\left(\partial_J H + \partial_0\vartheta_J\right) = \{z^I,H\} + \sum_{J=1}^{2N}\omega^{IJ}\partial_0\vartheta_J. $$ (We will use the $\approx$ sign to stress that an equation is an on-shell equation.) 3) Constants of motion. The solution $$z^I = Z^I(a^1, \ldots, a^{2N};t)$$ to the first-order Hamilton e.o.m. depends on $2N$ constants of integration $(a^1, \ldots, a^{2N})$. Assuming appropriate regularity conditions, it is in principle possible to invert locally this relation such that the constants of integration $$a^I=A^I(z^1, \ldots, z^{2N};t)$$ are expressed in terms of the $(z^1, \ldots, z^{2N})$ variables and time $t$. These functions $A^I$ are $2N$ constants of motion (c.o.m.), i.e., constant in time $\frac{dA^I}{dt}\approx0$. Any function $B(A^1, \ldots, A^{2N})$ of the $A$'s, but without explicit time dependence, will again be a c.o.m. In particular, we may express the initial values $(z^1_0, \ldots, z^{2N}_0)$ at time $t=0$ as functions $$Z^J_0(z;t)=Z^J(A^1(z;t), \ldots, A^{2N}(z;t); t=0)$$ of the $A$'s, so that $Z^J_0$ become c.o.m. Now, let $$b^I=B^I(z^1, \ldots, z^{2N};t)$$ be $2N$ independent c.o.m., which we have argued above must exist. The question is if there exist $2N$ off-shell symmetries of the action $S$, such that the corresponding Noether currents are on-shell c.o.m.? Remark. It should be stressed that an on-shell symmetry is a vacuous notion, because if we vary the action $\delta S$ and apply e.o.m., then $\delta S\approx 0$ vanishes by definition (modulo boundary terms), independent of what the variation $\delta$ consists of. For this reason we often just shorten off-shell symmetry into symmetry. On the other hand, when speaking of c.o.m., we always assume e.o.m. 4) Change of coordinates. Since the action $S$ is invariant under change of coordinates, we may simply change coordinates $z\to b = B(z;t)$ to the $2N$ c.o.m., and use the $b$'s as coordinates (which we will just call $z$ from now on). 
Then the e.o.m. in these coordinates are just $$\frac{dz^I}{dt}\approx0,$$ so we conclude that in these coordinates, we have $$ \partial_J H + \partial_0 \vartheta_J=0$$ as an off-shell equation. [An aside: This implies that the symplectic matrix $\omega_{IJ}$ does not depend explicitly on time, $$\partial_0\omega_{IJ} =\partial_0\partial_{[I}\vartheta_{J]}=\partial_{[I} \partial_0\vartheta_{J]}=-\partial_{[I}\partial_{J]} H=0.$$ Hence the Poisson matrix $\{z^I,z^J\}=\omega^{IJ}$ does not depend explicitly on time. By Darboux Theorem, we may locally find Darboux coordinates $(q^1, \ldots, q^N; p_1, \ldots, p_N)$, which are also c.o.m.] 5) Variation. We now perform an infinitesimal variation $\delta= \varepsilon\{z^{I_0}, \cdot \}$, $$\delta z^J = \varepsilon\{z^{I_0}, z^J\}=\varepsilon \omega^{I_0 J},$$ with Hamiltonian generator $z^{I_0}$, where $I_0\in\{1, \ldots, 2N\}$. It is straightforward to check that the infinitesimal variation $\delta= \varepsilon\{z^{I_0}, \cdot \}$ is an off-shell symmetry of the action (modulo boundary terms) $$\delta S = \varepsilon\int dt \frac{d f^0}{dt}, $$ where $$f^0 = z^{I_0}+ \sum_{J=1}^{2N}\omega^{I_0J}\vartheta_J.$$ The bare Noether current is $$j^0 = \sum_{J=1}^{2N}\frac{\partial L_H}{\partial \dot{z}^J} \omega^{I_0 J}=\sum_{J=1}^{2N}\omega^{I_0J}\vartheta_J,$$ so that the full Noether current $$ J^0=j^0-f^0=-z^{I_0} $$ becomes just (minus) the Hamiltonian generator $z^{I_0}$, which is conserved on-shell $\frac{dJ^0}{dt}\approx 0$ by definition. So the answer is yes in the Hamiltonian case.
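As a concrete illustration of points 3)-5) (my own example, not part of the argument above): for the 1D harmonic oscillator $H=\frac{1}{2}(p^2+q^2)$ in Darboux coordinates, the Hamilton e.o.m. $\dot q \approx p$, $\dot p \approx -q$ are solved by rotations in phase space, and inverting the solution gives the constants of motion

```latex
b^1 \;=\; q\cos t - p\sin t, \qquad
b^2 \;=\; p\cos t + q\sin t, \qquad
\frac{db^I}{dt}\;\approx\;0 .
```

One checks directly that $\{b^1,b^2\}=\cos^2 t+\sin^2 t=1$, so these constants of motion are themselves Darboux coordinates, and the infinitesimal variation $\delta=\varepsilon\{b^{I_0},\cdot\}$ generated by either of them is an off-shell symmetry of the action with Noether charge $-b^{I_0}$, exactly as in point 5).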
Parentheses and brackets are very common in mathematical formulas. You can easily control the size and style of brackets in LaTeX; this article explains how. Contents Here's how to type some common math braces and parentheses in LaTeX: Type LaTeX markup Renders as Parentheses; round brackets (x+y) \((x+y)\) Brackets; square brackets [x+y] \([x+y]\) Braces; curly brackets \{ x+y \} \(\{ x+y \}\) Angle brackets \langle x+y \rangle \(\langle x+y\rangle\) Pipes; vertical bars |x+y| \(\displaystyle| x+y |\) Double pipes \|x+y\| \(\| x+y \|\) The size of brackets and parentheses can be manually set, or they can be resized dynamically in your document, as shown in the next example: \[ F = G \left( \frac{m_1 m_2}{r^2} \right) \] Notice that to insert the parentheses or brackets, the \left and \right commands are used. Even if you are using only one bracket, both commands are mandatory. \left and \right can dynamically adjust the size, as shown by the next example: \[ \left[ \frac{ N } { \left( \frac{L}{p} \right) - (m+n) } \right] \] When writing multi-line equations with the align, align* or aligned environments, the \left and \right commands must be balanced on each line and on the same side of &. Therefore the following code snippet will fail with errors: \begin{align*} y = 1 + & \left( \frac{1}{x} + \frac{1}{x^2} + \frac{1}{x^3} + \ldots \\ & \quad + \frac{1}{x^{n-1}} + \frac{1}{x^n} \right) \end{align*} The solution is to use "invisible" brackets to balance things out, i.e. adding a \right. at the end of the first line, and a \left. at the start of the second line after &: \begin{align*} y = 1 + & \left( \frac{1}{x} + \frac{1}{x^2} + \frac{1}{x^3} + \ldots \right. \\ & \quad \left. + \frac{1}{x^{n-1}} + \frac{1}{x^n} \right) \end{align*} The size of the brackets can also be controlled explicitly: the commands \big, \Big, \bigg and \Bigg set a delimiter to one of four fixed, increasingly large sizes. For a complete list of parentheses and sizes see the reference guide.
LaTeX markup Renders as
\big( \Big( \bigg( \Bigg(
\big] \Big] \bigg] \Bigg]
\big\{ \Big\{ \bigg\{ \Bigg\{
\big\langle \Big\langle \bigg\langle \Bigg\langle
\big\rangle \Big\rangle \bigg\rangle \Bigg\rangle
\big| \Big| \bigg| \Bigg| \(\displaystyle\big| \; \Big| \; \bigg| \; \Bigg|\)
\big\| \Big\| \bigg\| \Bigg\| \(\displaystyle\big\| \; \Big\| \; \bigg\| \; \Bigg\|\)
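One more sizing command that often comes up alongside \left and \right: \middle, which sizes a delimiter placed in the middle of a \left…\right pair, e.g. the vertical bar in set-builder notation:

```latex
\[ S = \left\{\, x \in \mathbb{R} \,\middle|\, \frac{1}{x} > 2 \,\right\} \]
```

Here the bar grows to match the surrounding braces, which a plain | would not do.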
Is it valid to say the following? $$\lim_{x \to 0} \frac{0}{x}=0$$ It seems like it is since $\frac{0}{x} \leq \frac{x^2}{x}$ which clearly converges to $0$. However, I thought $\frac{0}{0}$ was undefined? Hint: For every $x$ in a punctured neighborhood of $0$ the expression $\frac{0}{x}$ equals zero exactly. Suppose $\epsilon>0$. You want to find $\delta>0$ such that if $0<|x|<\delta$, then $\left|\frac{0}{x}\right|<\epsilon$. But $\frac{0}{x} = 0<\epsilon$, so any $\delta$ you choose works, and the limit is $0$.
Circle A circle has the diameter and the radius. Use D for diameter and R for radius; which one is correct? A) D = R + 5 B) D = R : 2 C) D = R x 2 A circle passes through two adjacent vertices of a square and is tangent to one side of the square. If the side length of the square is 2, what is the radius of the circle? Phan Thanh Tinh Coodinator 18/04/2017 at 15:16 Name the points as shown. Draw \(OH\perp BC\); then H is the midpoint of BC, so BH = 1 cm. Quadrilateral ABHK has 3 right angles at A, B and H => ABHK is a rectangle => HK = AB = 2 cm. Let r be the radius of the circle, so OH = KH - OK = 2 - r (cm). \(\Delta OHB\), right-angled at H, has \(OH^2+HB^2=OB^2\) (Pythagoras theorem) \(\Rightarrow\left(2-r\right)^2+1^2=r^2\Rightarrow4-4r+r^2+1-r^2=0\) \(\Rightarrow5-4r=0\Rightarrow r=\dfrac{5}{4}\) A circle was inscribed within the triangle ABC. The circle is tangent to AB at point M with AM = 10 and MB = 6. The area of triangle ABC is \(120\sqrt{3}\) . a) Find the perimeter of triangle ABC. b) Find the measure of angle A. mathlove 16/03/2017 at 19:06 Let N and P be the points where the circle touches BC and CA, and let \(x=CN=CP\) (equal tangent lengths). We have \(AB=AM+BM=10+6=16;BC=BN+CN=6+x;CA=CP+AP=x+10\). The triangle ABC has edges \(a=10+6=16;b=6+x;c=x+10\). Let \(p\) be the half perimeter of \(ABC\); then \(p-a=10,p-b=6,p-c=x\) and the area of ABC is \(\sqrt{p\left(p-a\right)\left(p-b\right)\left(p-c\right)}=\sqrt{60x\left(16+x\right)}\). By the assumptions we have \(60x\left(x+16\right)=\left(120\sqrt{3}\right)^2\Leftrightarrow x^2+16x-720=0\Leftrightarrow x\in\left\{20;-36\right\}\). So \(x=20\), \(p=36\), and the perimeter of ABC is 72.
Suppose CH is the height from C to AB; then \(CH=\dfrac{2\cdot 120\sqrt{3}}{16}=15\sqrt{3}.\) The triangle CHA has \(\widehat{CHA}=90^0;CH=15\sqrt{3};CA=30\), so \(\sin A=\dfrac{CH}{CA}=\dfrac{\sqrt{3}}{2}\Rightarrow A=60^0\). The measure of angle A is \(60^0\).
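Both parts can be verified directly with Heron's formula and the law of cosines, using the side lengths implied by x = 20:

```python
import math

a, b, c = 16, 6 + 20, 20 + 10        # AB, BC, CA with x = 20
p = (a + b + c) / 2                  # semiperimeter

# Heron's formula reproduces the given area, and the perimeter is 72.
area = math.sqrt(p * (p - a) * (p - b) * (p - c))
assert abs(area - 120 * math.sqrt(3)) < 1e-9
assert a + b + c == 72

# Law of cosines at vertex A (between sides AB and CA).
cosA = (a**2 + c**2 - b**2) / (2 * a * c)
assert abs(math.degrees(math.acos(cosA)) - 60) < 1e-9
```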
The title is a little general, but it should be, as I have a lot to share about forming several types of 2-cycle inner-layer odd permutation algorithms. To begin, why not start with the "Pure Edge Flip" (my favorite, obviously). I would first like to show you how to derive common OLL parity algorithms by hand. To do this, I have taken the time to make tutorial videos for three popular algorithms. Each of the three algorithms' derivations is two videos long. Note: I recommend you watch the derivation of the Standard Algorithm first before you watch the other videos. The Standard Algorithm r2 B2 U2 l U2 r' U2 r U2 F2 r F2 l' B2 r2 Derivation Part I Part II Lucasparity r U2 r U2 r' U2 r U2 l' U2 l x U2 r' U2 x' r' U2 r' Derivation Part I Part II Another algorithm which consists of all U2 face turns r U2 r U2 r' U2 r U2 l' U2 r U2 r' U2 l r2 U2 r' Derivation Part I Part II Please watch the videos before reading this spoiler: Notice that all three algorithms are derived from the same commutator, and each is closely related in structure. Only a few minor internal adjustments differentiate Lucasparity and the other algorithm from the standard algorithm. In addition, I have chosen to release... God's Algorithm? 
For my main method for pure edge flip algorithms, I have made a video on the best/briefest algorithm I have ever found in block quarter turns (BQTM) that works for all cube sizes (Very few low move count algorithms work for all cube sizes). The Holy Grail From now on, I will abbreviate BQTM with just q, and BHTM with h. (Also note that BHTM is commonly called btm. So h = BHTM = btm). On the 4X4X4: 19q/18h z d' m D l' u' 2R' u l u' l'2 b' 2R' b r' R' 2U y' m' u x2 z' On the 5X5X5: 19q/18h z d' 3m D 3l' u' 2R' u 3l u' 3l'2 b' 2R' b r' R' 2-3u y' 3m' u x2 z' On the inner-orbit of the 6X6X6: 20q/19h z 3d' 4m D 3l' 3u' 3R' 3u 3l 3u' 3l'2 2R' 3b' 3R' 3b 3r' R' 2-3u y' 4m' 3u x2 z' On the outer-orbit of the 6X6X6: 19q/19h z 3d' 4m D 3l' 3u' 2R' 3u 3l 3u' 3l'2 3R' 3b' 2R' 3b 3r' R' 2-3u y' 4m' 3u x2 z' = z 3d' 4m D 3l' 3u' 2R' 3u 3l 3u' 4l 3l 3b' 2R' 3b 3r' R' 2-3u y' 4m' 3u x2 z' On the inner-orbit of the 7X7X7: 20q/19h 3d' 5m D 4l' 3u' 3R' 3u 4l 3u' 4l'2 2R' 3b' 3R' 3b 3r' R' 2-4u y' 5m' 3u x2 z' On the outer-orbit of the 7X7X7: 19q/19h z 3d' 5m D 4l' 3u' 2R' 3u 4l 3u' 4l'2 3R' 3b' 2R' 3b 3r' R' 2-4u y' 5m' 3u x2 z' = z 3d' 5m D 4l' 3u' 2R' 3u 4l 3u' 5l 4l 3b' 2R' 3b 3r' R' 2-4u y' 5m' 3u x2 z' I point out in the latter portion in the video that its average between quarter and half turns is also less than all other algorithms which currently exist. Here is a link to the following formula in Wolfram|Alpha (the simplified formula shown near the end of the video) \( \frac{\left\lfloor \frac{n}{2} \right\rfloor }{2-2^{\left\lfloor \frac{n}{2} \right\rfloor }}+19.5 \) Just substitute an integer greater than or equal to 4 for n to obtain the average for a cube of size n. 
Formula Derivation In order to explain the simplified formula, I will explain its original form, which is a weighted average of three portions divided by the total number of cases. The First Portion \( 18.5\left( 1 \right) \) This represents the case when any big cube size has a 2-wing edge swap between every orbit in the same composite edge. For example, any even cube size reduced to a 4x4x4 form, and any odd cube size reduced to a 5x5x5 form. This case is 19q/18h, thus the average is 18.5. The Second Portion \( 19.0\left( \left\lfloor \frac{n-2}{2} \right\rfloor -1 \right) \) This represents the number of cases where there are consecutive orbits which have odd parity in the same composite edge, starting from the corner and working its way inward, but excluding when all orbits are involved (the previous case). These cases are all 19q/19h, thus the average is 19.0. The Third Portion This represents all other cases. These cases are all (proportionally) 20q/19h, thus the average is 19.5. Its count is the total number of cases of odd parity in the orbits of a big cube, minus the occurrences of the first two cases. Next, we divide everything by the total number of possible cases for odd parity in all inner-layer orbits (that same series) to get the overall average. Now, I am going to explain the series. 
Let the total number of inner-layer orbits of wing edges in a big cube \( n\ge 4 \) be \( N=\left\lfloor \frac{n-2}{2} \right\rfloor \) Then, the total number of cases of odd parity between the orbits is the sum of the combinations of the number of orbits involved: \( \left( \begin{matrix} N \\ N \\ \end{matrix} \right)+\left( \begin{matrix} N \\ N-1 \\ \end{matrix} \right)+\left( \begin{matrix} N \\ N-2 \\ \end{matrix} \right)+\cdot \cdot \cdot +\left( \begin{matrix} N \\ N-\left( N-1 \right) \\ \end{matrix} \right) \) \( =\left( \begin{matrix} N \\ N \\ \end{matrix} \right)+\left( \begin{matrix} N \\ N-1 \\ \end{matrix} \right)+\left( \begin{matrix} N \\ N-2 \\ \end{matrix} \right)+\cdot \cdot \cdot +\left( \begin{matrix} N \\ 1 \\ \end{matrix} \right) \) \( =\sum\limits_{k=0}^{N-1}{\left( \begin{matrix} N \\ N-k \\ \end{matrix} \right)} \) \( =\sum\limits_{k=0}^{\left\lfloor \frac{n-2}{2} \right\rfloor -1}{\left( \begin{matrix} \left\lfloor \frac{n-2}{2} \right\rfloor \\ \left\lfloor \frac{n-2}{2} \right\rfloor -k \\ \end{matrix} \right)} \) For example, the 10X10X10 has 4 inner-layer orbits. \( \left( \begin{matrix} N \\ N \\ \end{matrix} \right) \) or \( \left( \begin{matrix} 4\\ 4 \\ \end{matrix} \right) \) represents the case because we have 4 orbits taking all 4 at once. \( \left( \begin{matrix} N \\ N-1 \\ \end{matrix} \right) \) or \( \left( \begin{matrix} 4\\ 3\\ \end{matrix} \right) \) represents the cases: , etc. because we have 4 orbits taking 3 at once. \( \left( \begin{matrix} N \\ N-2 \\ \end{matrix} \right) \) or \( \left( \begin{matrix} 4 \\ 2 \\ \end{matrix} \right) \) represents the cases: ,etc. because we have 4 orbits taking 2 at once. and finally, \( \left( \begin{matrix} N \\ N-3 \\ \end{matrix} \right)=\left( \begin{matrix} N \\ N-\left( N-1 \right) \\ \end{matrix} \right)=\left( \begin{matrix} 4 \\ 4-\left( 4-1 \right) \\ \end{matrix} \right)=\left( \begin{matrix} 4 \\ 1 \\ \end{matrix} \right) \) represents the cases ,etc. 
because we have 4 orbits, taking only one at a time. 
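The series above counts every nonempty subset of the \( N \) orbits, so it must sum to \( 2^N - 1 \); a quick check of that identity (a sketch):

```python
from math import comb

# Sum of C(N, N-k) for k = 0..N-1 counts all nonempty subsets of N orbits.
for N in range(1, 11):
    total = sum(comb(N, N - k) for k in range(N))
    assert total == 2**N - 1   # e.g. the 10x10x10 has N = 4 orbits: 15 cases
```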
PROOF As you can see, there are many more cases with an average of 19.5 than with 18.5 or 19.0, yet the overall average is still less than 19.5. Because even and odd cubes have the same number of orbits (e.g. both the 6X6X6 and 7X7X7 have two inner-layer orbits), the floor function can be omitted and we can take the limit as the cube size gets large. \( \underset{n\to \infty }{\mathop{\lim }}\,\frac{\frac{n}{2}}{2-2^{\frac{n}{2}}}+19.5=19.5 \) This means that no matter how large \( n \) gets, the average comes arbitrarily close to, but never reaches, 19.5. (This is apparent already without calculus, but I thought a "second opinion" would further verify this statement.) My 23q/16h "cmowlaparity" obviously has an average of 19.5, but the Holy Grail beats it slightly. x' r2 U2 l' U2 r U2 l x U2 x' U r U' F2 U r' U r2 x Last edited:
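The case-by-case weighted average described above can be checked against the simplified closed form (a sketch; the function names are my own):

```python
from math import floor

def avg_direct(n):
    # n >= 4; N = number of inner-layer wing-edge orbits of an n x n x n cube.
    N = floor((n - 2) / 2)
    cases = 2**N - 1                       # nonempty subsets of orbits with odd parity
    # One 18.5 case, (N-1) cases at 19.0, and all remaining cases at 19.5.
    weighted = 18.5 * 1 + 19.0 * (N - 1) + 19.5 * (cases - 1 - (N - 1))
    return weighted / cases

def avg_closed(n):
    # The simplified formula: floor(n/2) / (2 - 2^floor(n/2)) + 19.5
    m = floor(n / 2)
    return m / (2 - 2**m) + 19.5

for n in range(4, 31):
    assert abs(avg_direct(n) - avg_closed(n)) < 1e-12
```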
WHY? Previous works achieved successful results in VQA by modeling visual attention. This paper proposes a co-attention model for VQA that attends to both the image (where to look) and the question words (what words to listen to). WHAT? The co-attention model attends to the image and the question at three levels: word level, phrase level and question level. An embedding matrix is used at the word level; 1D convolutions with three window sizes (1 to 3) followed by max pooling are used at the phrase level; an LSTM encodes the question-level vector. Two co-attention mechanisms are proposed: parallel co-attention and alternating co-attention. Parallel co-attention first forms a bilinear affinity matrix to capture the relationship between image regions and words, then produces attended visual and word features: $$\mathbf{C} = \tanh(\mathbf{Q}^{\top}\mathbf{W}_b\mathbf{V})$$ $$\mathbf{H}^v = \tanh(\mathbf{W}_v\mathbf{V} + (\mathbf{W}_q\mathbf{Q})\mathbf{C}), \quad \mathbf{H}^q = \tanh(\mathbf{W}_q\mathbf{Q} + (\mathbf{W}_v\mathbf{V})\mathbf{C}^{\top})$$ $$\mathbf{a}^v = \mathrm{softmax}(\mathbf{w}_{hv}^{\top}\mathbf{H}^v), \quad \mathbf{a}^q = \mathrm{softmax}(\mathbf{w}_{hq}^{\top}\mathbf{H}^q)$$ $$\hat{\mathbf{v}} = \sum_{n=1}^N a_n^v\mathbf{v}_n, \quad \hat{\mathbf{q}} = \sum_{t=1}^T a_t^q\mathbf{q}_t$$ Alternating co-attention first summarizes the question into a single vector \(\hat{\mathbf{s}}\), then attends to the image based on the question vector, and finally attends to the question based on the attended image feature. In the first step, \(\mathbf{X} = \mathbf{Q}\) and \(\mathbf{g} = 0\); in the second step, \(\mathbf{X} = \mathbf{V}\) and \(\mathbf{g} = \hat{\mathbf{s}}\); in the third step, \(\mathbf{X} = \mathbf{Q}\) and \(\mathbf{g} = \hat{\mathbf{v}}\). $$\hat{\mathbf{x}} = \mathcal{A}(\mathbf{X}; \mathbf{g})$$ $$\mathbf{H} = \tanh(\mathbf{W}_x\mathbf{X} + (\mathbf{W}_g\mathbf{g})\mathbf{1}^{\top})$$ $$\mathbf{a}^x = \mathrm{softmax}(\mathbf{w}^{\top}_{hx}\mathbf{H})$$ $$\hat{\mathbf{x}} = \sum_i a_i^x \mathbf{x}_i$$ Co-attention is performed at each level of the question hierarchy, and the attended features from the different levels are recursively encoded with an MLP. 
$$\mathbf{h}^w = \tanh(\mathbf{W}_w(\hat{\mathbf{q}}^w + \hat{\mathbf{v}}^w))$$ $$\mathbf{h}^p = \tanh(\mathbf{W}_p[(\hat{\mathbf{q}}^p + \hat{\mathbf{v}}^p), \mathbf{h}^w])$$ $$\mathbf{h}^s = \tanh(\mathbf{W}_s[(\hat{\mathbf{q}}^s + \hat{\mathbf{v}}^s), \mathbf{h}^p])$$ $$\mathbf{p} = \mathrm{softmax}(\mathbf{W}_h\mathbf{h}^s)$$ So? The co-attention model with pretrained image features achieved good results on the VQA dataset.
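The parallel co-attention equations can be sketched in a few lines of NumPy. This is a shape-level illustration only: the dimension names d, k, N, T are my own, and randomly initialized matrices stand in for the learned parameters.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d, k, N, T = 8, 4, 6, 5           # feature dim, hidden dim, image regions, words

V = rng.standard_normal((d, N))   # image features, one column per region
Q = rng.standard_normal((d, T))   # question features, one column per word

Wb = rng.standard_normal((d, d))  # stand-ins for learned parameters
Wv = rng.standard_normal((k, d))
Wq = rng.standard_normal((k, d))
whv = rng.standard_normal(k)
whq = rng.standard_normal(k)

C = np.tanh(Q.T @ Wb @ V)               # (T, N) affinity matrix
Hv = np.tanh(Wv @ V + (Wq @ Q) @ C)     # (k, N)
Hq = np.tanh(Wq @ Q + (Wv @ V) @ C.T)   # (k, T)
av = softmax(whv @ Hv)                  # attention over image regions
aq = softmax(whq @ Hq)                  # attention over question words
v_hat = V @ av                          # attended image feature, shape (d,)
q_hat = Q @ aq                          # attended question feature, shape (d,)
```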
Applying L'Hopital's rule once gives $$\lim_{x \to -\infty} \frac{6x-5\cos (5x)}{2x}$$ We can try to apply it again, but it won't be useful: $\lim_{x \to -\infty} \sin (5x)$ does not exist, because $\sin(5x)$ oscillates between $-1$ and $1$, so a second application of L'Hopital's rule will not give a sensible answer. Instead we may try a more elementary approach and write the quotient as follows. $$\lim_{x \to -\infty} \frac{6-5\frac{\cos(5x)}{x}}{2}$$ At this point note that the quantity $\frac{\cos (5x)}{x}$ goes to zero because the numerator is bounded between $-1$ and $1$ while the denominator grows without bound. So the answer is $3$.
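A numerical sanity check: evaluating the expression obtained after one L'Hopital step at increasingly negative $x$ shows it settling toward $3$, with error bounded by $\frac{5}{2|x|}$.

```python
import math

g = lambda x: (6 * x - 5 * math.cos(5 * x)) / (2 * x)   # after one L'Hopital step

# |g(x) - 3| = |5 cos(5x) / (2x)| <= 5 / (2|x|), which shrinks as x -> -inf.
for x in (-1e3, -1e6, -1e9):
    assert abs(g(x) - 3) <= 5 / (2 * abs(x)) + 1e-12
```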
Estimating order queue position When developing algorithmic trading strategies for FIFO markets it is beneficial to know our order's queue position in the order book. We use the queue position to calculate the intrinsic value of the order. When an order's intrinsic value is less than \(0\) we cancel it. The method of estimation is highly dependent on the particular rules and technology of the exchange in question. Some exchanges provide direct API access to your order queue positions. Most US and European equity exchanges distribute a market-by-order data feed from which you can infer exactly where your order is in the limit order book. Futures exchanges usually provide market data through a market-by-level feed where you receive the price and aggregate volume at the \(n\) topmost price levels. Obviously if you have direct API access to queue positions or a market-by-order feed it is theoretically trivial to know your order queue position, but it can be technically challenging to implement. For the market-by-level feeds it's necessary to devise a method to estimate the queue position. I will describe a simple method to do just that. First we estimate the initial queue position of a newly inserted order. We then monitor the market data feed and revise this estimate in response to market data updates: We insert an order of size \(S\) at price \(P\) at time \(t_0\). An initial estimate \(\hat{V}(t_0)\) of the queue position is \(Q(t_0)\), the current aggregate size at price \(P\) at time \(t_0\) when we send our order. Alternatively we can use the size \(Q(t_1)\) at price \(P\) at the time \(t_1\) our order is acknowledged, or an average of the two data points. Depending on whether the market data feed coalesces updates, we might see a size increase \(\Delta Q(t_3)\) of \(S\) at price \(P\) at time \(t_3\in (t_0, t_0+\delta)\); we can then update our estimate \(\hat{V}(t_3)=Q(t_3)-S\). 
The time offset \(\delta\) is chosen to be a multiple of the delay between sending an order and observing it on the market data feed. Now in event time scale: For every decrease \(\Delta Q(n) < 0\) in the size \(Q(n)\) at price \(P\) we revise our estimate \(\hat{V}(n)\). If the feed allows us to discriminate between order cancellations and order fills we update \(\hat{V}(n+1)=\max(\hat{V}(n)+\Delta Q(n),0)\) for each order fill at price \(P\). For other updates we use a model to determine the probability \(p(n)\) that the update affects \(\hat{V}(n)\). This model can be conditionally dependent on the relative size in front of and behind our order, properties of the order book and other factors. We then update \(\hat{V}(n+1)=\max(\hat{V}(n)+p(n)\,\Delta Q(n),0)\). One useful family of models can be constructed as \(p(n) = \frac{f(\hat{V}(n))}{f(\hat{V}(n)) + f(Q(n)-\hat{V}(n)-S)}\), where \(f(x)\) is an increasing function, for example \(\ln(1+x)\) or the identity function. The function \(f(x)\) would ideally be estimated using data from our own fills.
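A minimal sketch of these update rules (the class and method names are my own, and the cancel model assumes the \(p(n)\) family above, with \(f\) applied to the sizes ahead of and behind our order):

```python
import math

class QueueEstimator:
    """Tracks an estimate v_hat of the volume ahead of our order at price P."""

    def __init__(self, q0, own_size, f=math.log1p):
        self.v_hat = float(q0)   # initial estimate: aggregate size when order was sent
        self.s = float(own_size)
        self.f = f               # increasing function, e.g. log1p or the identity

    def on_fill(self, dq):
        # Fills at our price consume the front of the queue first; dq < 0.
        self.v_hat = max(self.v_hat + dq, 0.0)

    def on_ambiguous_decrease(self, dq, q_total):
        # A decrease that may be a cancel anywhere in the queue: apply it to the
        # volume ahead of us with probability p = f(ahead) / (f(ahead) + f(behind)).
        behind = max(q_total - self.v_hat - self.s, 0.0)
        denom = self.f(self.v_hat) + self.f(behind)
        p = self.f(self.v_hat) / denom if denom > 0 else 0.0
        self.v_hat = max(self.v_hat + p * dq, 0.0)

# Example: order of size 10 inserted behind 100 lots; identity f = uniform cancels.
est = QueueEstimator(q0=100, own_size=10, f=lambda x: x)
est.on_fill(-30)                   # 30 lots trade at our price -> 70 lots ahead
est.on_ambiguous_decrease(-8, 80)  # 8 lots cancelled somewhere in the 80-lot queue
```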
trigonometric-equation-calculator $$2\sin ^2\left(x\right)+3=7\sin \left(x\right), \quad x\in[0, 2\pi ]$$
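The equation reduces to a quadratic in \(s=\sin x\): \(2s^2-7s+3=0\), with roots \(s=3\) (impossible) and \(s=\tfrac12\), giving \(x=\tfrac{\pi}{6}\) and \(x=\tfrac{5\pi}{6}\) on \([0,2\pi]\). A quick check:

```python
import math

# Both candidate solutions satisfy 2 sin^2(x) + 3 = 7 sin(x).
solutions = [math.pi / 6, 5 * math.pi / 6]
for x in solutions:
    assert abs(2 * math.sin(x)**2 + 3 - 7 * math.sin(x)) < 1e-12
```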
Unitriangular matrix group:UT(3,p)
Latest revision as of 11:21, 22 August 2014 This article is about a family of groups with a parameter that is prime. For any fixed value of the prime, we get a particular group. View other such prime-parametrized groups Definition Note that the case <math>p = 2</math>, where the group becomes dihedral group:D8, behaves somewhat differently from the general case. We note on the page all the places where the discussion does not apply to <math>p = 2</math>. As a group of matrices The group is: <math>\left \{ \begin{pmatrix} 1 & a_{12} & a_{13} \\ 0 & 1 & a_{23} \\ 0 & 0 & 1 \\\end{pmatrix} \mid a_{12},a_{13},a_{23} \in \mathbb{F}_p \right \}</math> The identity element is the identity matrix. Note that all addition and multiplication in these definitions happens over the field <math>\mathbb{F}_p</math>. In coordinate form We may define the group as the set of triples <math>(a_{12},a_{13},a_{23})</math> over the prime field <math>\mathbb{F}_p</math>, with the multiplication law given by: <math>(a_{12},a_{13},a_{23}) (b_{12},b_{13},b_{23}) = (a_{12} + b_{12},a_{13} + b_{13} + a_{12}b_{23}, a_{23} + b_{23})</math>, <math>(a_{12},a_{13},a_{23})^{-1} = (-a_{12}, -a_{13} + a_{12}a_{23}, -a_{23})</math>. The matrix corresponding to the triple <math>(a_{12},a_{13},a_{23})</math> is the unipotent upper-triangular matrix with those entries, as displayed above. Definition by presentation The group can be defined by means of the following presentation: <math>\langle x,y,z \mid [x,y] = z, xz = zx, yz = zy, x^p = y^p = z^p = e \rangle</math> where <math>e</math> denotes the identity element. 
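The coordinate-form multiplication and inverse laws can be verified exhaustively for a small prime (a quick sketch, taking p = 5):

```python
import itertools
import random

p = 5  # any prime works here

def mul(a, b):
    """Coordinate-form product (a12,a13,a23)(b12,b13,b23) in UT(3,p)."""
    return ((a[0] + b[0]) % p,
            (a[1] + b[1] + a[0] * b[2]) % p,
            (a[2] + b[2]) % p)

def inv(a):
    """Coordinate-form inverse (-a12, -a13 + a12*a23, -a23)."""
    return ((-a[0]) % p, (-a[1] + a[0] * a[2]) % p, (-a[2]) % p)

e = (0, 0, 0)
for a in itertools.product(range(p), repeat=3):
    assert mul(a, inv(a)) == e and mul(inv(a), a) == e   # two-sided inverses

random.seed(0)
for _ in range(200):  # associativity spot check
    a, b, c = (tuple(random.randrange(p) for _ in range(3)) for _ in range(3))
    assert mul(mul(a, b), c) == mul(a, mul(b, c))
```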
These commutation relations resemble Heisenberg's commutation relations in quantum mechanics, and so the group is sometimes called a finite Heisenberg group. The generators x, y, z correspond to the matrices given above. Note that in the above presentation, the generator z is redundant, and the presentation can thus be rewritten as a presentation with only two generators x and y. As a semidirect product This group of order p^3 can also be described as a semidirect product of the elementary abelian group of order p^2 by the cyclic group of order p. Denote the base of the semidirect product as ordered pairs of elements from the field of p elements, acted on by a generator of the cyclic group. For instance, we can take the subgroup of matrices with a_{12} = 0 as the elementary abelian subgroup of order p^2, and the acting subgroup of order p as the subgroup of matrices with a_{13} = a_{23} = 0. Families These groups fall in the more general family UT(n,p) of unitriangular matrix groups. The unitriangular matrix group UT(n,p) can be described as the group of unipotent upper-triangular matrices in GL(n,p), which is also a p-Sylow subgroup of the general linear group GL(n,p). This further can be generalized to UT(n,q) where q is a power of a prime p; UT(n,q) is the p-Sylow subgroup of GL(n,q). These groups also fall into the general family of extraspecial groups. For any order of the form p^{1+2n}, there are two extraspecial groups of that order: an extraspecial group of "+" type and an extraspecial group of "-" type. UT(3,p) is the extraspecial group of order p^3 and "+" type. The other type, i.e., the extraspecial group of order p^3 and "-" type, is the semidirect product of cyclic group of prime-square order and cyclic group of prime order. 
Elements Further information: element structure of unitriangular matrix group:UT(3,p) Summary Item Value number of conjugacy classes order Agrees with general order formula for : conjugacy class size statistics size 1 ( times), size ( times) orbits under automorphism group Case : size 1 (1 conjugacy class of size 1), size 1 (1 conjugacy class of size 1), size 2 (1 conjugacy class of size 2), size 4 (2 conjugacy classes of size 2 each) Case odd : size 1 (1 conjugacy class of size 1), size ( conjugacy classes of size 1 each), size ( conjugacy classes of size each) number of orbits under automorphism group 4 if 3 if is odd order statistics Case : order 1 (1 element), order 2 (5 elements), order 4 (2 elements) Case odd: order 1 (1 element), order ( elements) exponent 4 if if odd Conjugacy class structure Note that the characteristic polynomial of all elements in this group is , hence we do not devote a column to the characteristic polynomial. For reference, we consider matrices of the form: Nature of conjugacy class Jordan block size decomposition Minimal polynomial Size of conjugacy class Number of such conjugacy classes Total number of elements Order of elements in each such conjugacy class Type of matrix identity element 1 + 1 + 1 + 1 1 1 1 1 non-identity element, but central (has Jordan blocks of size one and two respectively) 2 + 1 1 , non-central, has Jordan blocks of size one and two respectively 2 + 1 , but not both and are zero non-central, has Jordan block of size three 3 if odd 4 if both and are nonzero Total (--) -- -- -- -- -- Arithmetic functions Compare and contrast arithmetic function values with other groups of prime-cube order at Groups of prime-cube order#Arithmetic functions For some of these, the function values are different when and/or when . These are clearly indicated below. 
Arithmetic functions taking values between 0 and 3 Function Value Explanation prime-base logarithm of order 3 the order is prime-base logarithm of exponent 1 the exponent is . Exception when , where the exponent is . nilpotency class 2 derived length 2 Frattini length 2 minimum size of generating set 2 subgroup rank 2 rank as p-group 2 normal rank as p-group 2 characteristic rank as p-group 1 Arithmetic functions of a counting nature Function Value Explanation number of conjugacy classes elements in the center, and each other conjugacy class has size number of subgroups when , when See subgroup structure of unitriangular matrix group:UT(3,p) number of normal subgroups See subgroup structure of unitriangular matrix group:UT(3,p) number of conjugacy classes of subgroups for , for See subgroup structure of unitriangular matrix group:UT(3,p) Subgroups Further information: Subgroup structure of unitriangular matrix group:UT(3,p) Note that the analysis here specifically does not apply to the case . For , see subgroup structure of dihedral group:D8. Table classifying subgroups up to automorphisms Automorphism class of subgroups Representative Isomorphism class Order of subgroups Index of subgroups Number of conjugacy classes Size of each conjugacy class Number of subgroups Isomorphism class of quotient (if exists) Subnormal depth (if subnormal) trivial subgroup trivial group 1 1 1 1 prime-cube order group:U(3,p) 1 center of unitriangular matrix group:UT(3,p) ; equivalently, given by . 
group of prime order 1 1 1 elementary abelian group of prime-square order 1 non-central subgroups of prime order in unitriangular matrix group:UT(3,p) Subgroup generated by any element with at least one of the entries nonzero group of prime order -- 2 elementary abelian subgroups of prime-square order in unitriangular matrix group:UT(3,p) join of center and any non-central subgroup of prime order elementary abelian group of prime-square order 1 group of prime order 1 whole group all elements unitriangular matrix group:UT(3,p) 1 1 1 1 trivial group 0 Total (5 rows) -- -- -- -- -- -- -- Tables classifying isomorphism types of subgroups Group name GAP ID Occurrences as subgroup Conjugacy classes of occurrence as subgroup Occurrences as normal subgroup Occurrences as characteristic subgroup Trivial group 1 1 1 1 Group of prime order 1 1 Elementary abelian group of prime-square order 0 Prime-cube order group:U3p 1 1 1 1 Total -- Table listing number of subgroups by order Group order Occurrences as subgroup Conjugacy classes of occurrence as subgroup Occurrences as normal subgroup Occurrences as characteristic subgroup 1 1 1 1 1 1 0 1 1 1 1 Total Linear representation theory Further information: linear representation theory of unitriangular matrix group:UT(3,p) Item Value number of conjugacy classes (equals number of irreducible representations over a splitting field) . See number of irreducible representations equals number of conjugacy classes, element structure of unitriangular matrix group of degree three over a finite field degrees of irreducible representations over a splitting field (such as or ) 1 (occurs times), (occurs times) sum of squares of degrees of irreducible representations (equals order of the group) see sum of squares of degrees of irreducible representations equals order of group lcm of degrees of irreducible representations condition for a field (characteristic not equal to ) to be a splitting field The polynomial should split completely. 
For a finite field of size , this is equivalent to . field generated by character values, which in this case also coincides with the unique minimal splitting field (characteristic zero) Field where is a primitive root of unity. This is a degree extension of the rationals. unique minimal splitting field (characteristic ) The field of size where is the order of mod . degrees of irreducible representations over the rational numbers 1 (1 time), ( times), (1 time) Orbits over a splitting field under the action of the automorphism group Case : Orbit sizes: 1 (degree 1 representation), 1 (degree 1 representation), 2 (degree 1 representations), 1 (degree 2 representation) Case odd : Orbit sizes: 1 (degree 1 representation), (degree 1 representations), (degree representations) number: 4 (for ), 3 (for odd ) Orbits over a splitting field under the multiplicative action of one-dimensional representations Orbit sizes: (degree 1 representations), and orbits of size 1 (degree representations) Endomorphisms Automorphisms The automorphisms essentially permute the subgroups of order containing the center, while leaving the center itself unmoved. GAP implementation GAP ID For any prime , this group is the third group among the groups of order . Thus, for instance, if , the group is described using GAP's SmallGroup function as: SmallGroup(343,3) Note that we don't need to compute ; we can also write this as: SmallGroup(7^3,3) As an extraspecial group For any prime , we can define this group using GAP's ExtraspecialGroup function as: ExtraspecialGroup(p^3,'+') For , it can also be constructed as: ExtraspecialGroup(p^3,p) where the argument indicates that it is the extraspecial group of exponent . For instance, for : ExtraspecialGroup(5^3,5) Other descriptions Description Functions used SylowSubgroup(GL(3,p),p) SylowSubgroup, GL SylowSubgroup(SL(3,p),p) SylowSubgroup, SL SylowSubgroup(PGL(3,p),p) SylowSubgroup, PGL SylowSubgroup(PSL(3,p),p) SylowSubgroup, PSL
I am completely blanking on this question and I really don't even know where to start. What do the curves look like? Sketch! Where do they intersect? What do you know about area in polar coordinates? Solving for the points of intersection of the two curves, we get $$5\sin \theta =5\cos \theta $$ $$\Rightarrow \tan \theta =1$$ $$\Rightarrow \theta =\frac {\pi}{4} \mid \frac {5\pi}{4} $$ where $\mid $ stands for "or". To calculate the area, we use the formula $$\frac {1}{2}\int_{a}^{b}(r_1^2-r_2^2)\, d\theta $$ where $a$ and $b$ are the $\theta$-values of the intersections. Now using $r_1 = 5\sin \theta$, $r_2 = 5\cos \theta$, $a =\frac {\pi}{4}$ and $b =\frac {5\pi}{4}$, you should be able to easily calculate your integral.
Difference between standard deviation and standard error

The SD you compute from a sample is an estimate of the population SD, given that you've calculated it from one sample. The sample standard deviation, $s$, is a random quantity: it varies from sample to sample, but it stays the same on average as the sample size increases. Standard deviation describes the variability of the individual observations — the dispersion of the data around the mean — and it does not change predictably as you add more data. Standard deviation does not describe the accuracy of the sample mean.

The standard error of the mean (SEM) is different: it describes how precisely you know the true mean of the population the sample comes from, and it is what is used to construct confidence intervals for the population mean. The standard error for the mean is $\sigma/\sqrt{n}$; in practice you have to estimate it on the basis of the unique sample you have, as $s/\sqrt{n}$ (you can easily find the formula on the web), or you could bootstrap. The sample mean has about 95% probability of being within 2 standard errors of the population mean. As the sample size $n$ grows, the SEM decreases even though the population SD stays the same, so the SE from a large sample tends to be smaller than the SE from a small sample. (This is a simplification, not quite true.) More generally, the standard error of any common estimator $\hat{\theta}(\mathbf{x})$ — a realisation of the random variable $\hat{\theta}$ — is the standard deviation of $\hat{\theta}$, and the SE estimated from the sample is used the same way.

So: the SD describes the dispersion of the individual data, while the SE describes the dispersion of a statistic such as the sample mean around what it estimates; the SEM is always smaller than the SD. Be careful that you do not confuse the two when reading journal articles: check whether the tables show mean±SD or mean±SE. See also http://stats.stackexchange.com/questions/32318/difference-between-standard-error-and-standard-deviation
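The distinction can be seen in a quick simulation (a sketch, standard library only): the sample SD stays near the population value as $n$ grows, while the estimated SE of the mean shrinks roughly like $1/\sqrt{n}$.

```python
import math
import random
import statistics

random.seed(0)

def sample_stats(n):
    """Draw n standard normals; return (sample SD, estimated SE of the mean)."""
    xs = [random.gauss(0, 1) for _ in range(n)]
    sd = statistics.stdev(xs)      # describes the spread of the data
    se = sd / math.sqrt(n)         # describes the precision of the mean
    return sd, se

sd_small, se_small = sample_stats(100)
sd_big, se_big = sample_stats(10_000)

# SD hovers near the population value (1) regardless of n ...
assert abs(sd_small - 1) < 0.3 and abs(sd_big - 1) < 0.1
# ... while the SE of the mean shrinks as n grows
assert se_big < se_small
```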
A chart of wind chill values for given air temperatures and wind speeds.

Wind chill or windchill (popularly wind chill factor) is the perceived decrease in air temperature felt by the body on exposed skin due to the flow of air. Wind chill numbers are always lower than the air temperature for values where the formula is valid. When the apparent temperature is higher than the air temperature, the heat index is used instead.

Explanation

A surface loses heat through conduction, convection, and radiation. [1] The rate of convection depends on the difference in temperature between the surface and its surroundings. As convection from a warm surface heats the air around it, an insulating boundary layer of warm air forms against the surface. Moving air disrupts this boundary layer, or epiclimate, allowing cooler air to replace the warm air against the surface. The faster the wind speed, the more readily the surface cools.

The effect of wind chill is to increase the rate of heat loss and reduce any warmer objects to the ambient temperature more quickly. It cannot, however, reduce the temperature of these objects below the ambient temperature, no matter how great the wind velocity. For most biological organisms, the physiological response is to generate more heat in order to maintain a surface temperature in an acceptable range. The attempt to maintain a given surface temperature in an environment of faster heat loss results in both the perception of lower temperatures and an actual greater heat loss. In other words, the air 'feels' colder than it is because of the chilling effect of the wind on the skin. In extreme conditions this will increase the risk of adverse effects such as frostbite.
Alternative approaches Many formulas exist for wind chill because, unlike temperature, there is no universally agreed standard for what the term should mean. All the formulas attempt to provide the ability to qualitatively predict the effect of wind on the temperature humans perceive. Within different countries weather services use a standard for their country or region. U.S. and Canadian weather services use a model accepted by the National Weather Service. That model has evolved over time. The first wind chill formulas and tables were developed by Paul Allman Siple and Charles F. Passel working in the Antarctic before the Second World War, and were made available by the National Weather Service by the 1970s. It was based on the cooling rate of a small plastic bottle as its contents turned to ice while suspended in the wind on the expedition hut roof, at the same level as the anemometer. The so-called Windchill Index provided a pretty good indication of the severity of the weather. In the 1960s, wind chill began to be reported as a wind chill equivalent temperature (WCET), which is theoretically less useful. The author of this change is unknown, but it was not Siple and Passel as is generally believed. At first, it was defined as the temperature at which the windchill index would be the same in the complete absence of wind. This led to equivalent temperatures that were obviously exaggerations of the severity of the weather. Charles Eagan [2] realized that people are rarely still and that even when it was calm, there was some air movement. He redefined the absence of wind to be an air speed of 1.8 metres per second (4.0 mph), which was about as low a wind speed as a cup anemometer could measure. This led to more realistic (warmer-sounding) values of equivalent temperature. Original model Equivalent temperature was not universally used in North America until the 21st century. 
Until the 1970s, the coldest parts of Canada reported the original Wind Chill Index, a three or four digit number with units of kilocalories per hour per square metre. Each individual calibrated the scale of numbers personally, through experience. The chart also provided general guidance to comfort and hazard through threshold values of the index, such as 1400, which was the threshold for frostbite. The original formula for the index was: [3] [4]

$$WCI = (10\sqrt{V} - V + 10.5)\cdot(33 - T_{\rm a})$$

where $WCI$ = wind chill index, kcal/m²/h; $V$ = wind velocity, m/s; $T_{\rm a}$ = air temperature, °C.

North American and United Kingdom wind chill index

In November 2001, Canada, the U.S., and the U.K. implemented a new wind chill index developed by scientists and medical experts on the Joint Action Group for Temperature Indices (JAG/TI). [5] [6] [7] It is determined by iterating a model of skin temperature under various wind speeds and temperatures using standard engineering correlations of wind speed and heat transfer rate. Heat transfer was calculated for a bare face in wind, facing the wind, while walking into it at 1.4 metres per second (3.1 mph). The model corrects the officially measured wind speed to the wind speed at face height, assuming the person is in an open field. [8] The results of this model may be approximated, to within one degree, from the following formula.

The standard wind chill formula for Environment Canada is:

$$T_{\rm wc} = 13.12 + 0.6215\,T_{\rm a} - 11.37\,V^{0.16} + 0.3965\,T_{\rm a}\,V^{0.16}$$

where $T_{\rm wc}$ is the wind chill index, based on the Celsius temperature scale, $T_{\rm a}$ is the air temperature in degrees Celsius (°C), and $V$ is the wind speed at 10 metres (standard anemometer height), in kilometres per hour (km/h). [9]

The equivalent formula in US customary units is: [10]

$$T_{\rm wc} = 35.74 + 0.6215\,T_{\rm a} - 35.75\,V^{0.16} + 0.4275\,T_{\rm a}\,V^{0.16}$$

where $T_{\rm wc}$ is the wind chill index, based on the Fahrenheit scale, $T_{\rm a}$
is the air temperature, measured in °F, and $V$ is the wind speed, in mph. [11]

Windchill temperature is defined only for temperatures at or below 10 °C (50 °F) and wind speeds above 4.8 kilometres per hour (3.0 mph). [10] As the air temperature falls, the chilling effect of any wind that is present increases. For example, a 16 km/h (9.9 mph) wind will lower the apparent temperature by a wider margin at an air temperature of −20 °C (−4 °F) than a wind of the same speed would if the air temperature were −10 °C (14 °F).

[Figure: Celsius wind chill index; comparison of old and new wind chill values at −15 °C (5 °F).]

The method for calculating wind chill has been controversial because experts disagree on whether it should be based on whole body cooling, either while naked or while wearing appropriate clothing, or instead on local cooling of the most exposed skin, such as the face. The internal thermal resistance is also a point of contention: it varies widely from person to person. Had the average value for the subjects been used, calculated WCETs would be a few degrees more severe. The 2001 WCET is a steady state calculation (except for the time to frostbite estimates). [12] There are significant time-dependent aspects to wind chill because cooling is most rapid at the start of any exposure, when the skin is still warm. The exposure to wind depends on the surroundings, and wind speeds can vary widely depending on exposure and obstructions to wind flow.

The 2007 McMillan Coefficient sought to simplify wind chill: it calculates wind chill by subtracting the wind speed in mph from the Fahrenheit temperature. The 2012 Breedlove Coefficient proposed a different method: the new formula subtracts the temperature in Fahrenheit from the wind speed, rather than subtracting the wind speed from the temperature.
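The Environment Canada formula above is straightforward to evaluate; a minimal sketch (function name and range check are mine) that also reproduces the article's point that a 16 km/h wind cools −20 °C by a wider margin than −10 °C:

```python
def wind_chill_c(t_air_c, v_kmh):
    """JAG/TI wind chill, Environment Canada form.
    t_air_c: air temperature in deg C; v_kmh: wind speed at 10 m in km/h.
    The index is defined only for T <= 10 C and wind > 4.8 km/h."""
    if t_air_c > 10 or v_kmh <= 4.8:
        raise ValueError("outside the defined range of the wind chill index")
    v = v_kmh ** 0.16
    return 13.12 + 0.6215 * t_air_c - 11.37 * v + 0.3965 * t_air_c * v

# the drop below air temperature is larger at -20 C than at -10 C, same wind
drop_cold = -20 - wind_chill_c(-20, 16)
drop_mild = -10 - wind_chill_c(-10, 16)
assert drop_cold > drop_mild
```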
Australian Apparent Temperature

The Australian Bureau of Meteorology uses a different formula for cooler temperatures. [13] The formula [14] is:

$$AT = T_{\rm a} + 0.33\,e - 0.70\,ws - 4.00$$

where $T_{\rm a}$ = dry bulb temperature (°C), $e$ = water vapour pressure (hPa), and $ws$ = wind speed (m/s) at an elevation of 10 metres.

The vapour pressure can be calculated from the temperature and relative humidity using the equation:

$$e = \frac{rh}{100} \cdot 6.105 \cdot \exp\left(\frac{17.27\,T_{\rm a}}{237.7 + T_{\rm a}}\right)$$

where $rh$ = relative humidity (%) and $\exp$ is the exponential function.

The Australian formula includes the important factor of humidity and is somewhat more involved than the simpler North American model. However, humidity can be a significant factor. The North American formula is designed mainly on the basis that it is expected to be applied at low temperatures, when humidity levels are also low, and also for much colder temperatures (as low as −50 °F). As these are qualitative models, this is not necessarily a major failing. A more exhaustive model (which also factors in wind speed) was developed for US Navy personnel stationed at Parris Island in South Carolina. It was developed with a consideration for heat stroke due to the high humidity of the island during summer months, and it utilized three specialized thermometers. [15] The Australian Apparent Temperature formula was derived from this research.

References

[1] A Field Guide to the Atmosphere – Jay Pasachoff. Books.google.com.
[2] Eagan, C. (1964). Review of research on military problems in cold regions. C. Kolb and F. Holstrom, eds. TDR-64-28. Arctic Aeromed. Lab. pp. 147–156.
[3] Woodson, Wesley E. (1981). Human Factors Design Handbook, p. 815. McGraw-Hill. ISBN 0-07-071765-6.
[4] http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19690003109_1969003109.pdf
[5] "Environment Canada - Weather and Meteorology - Canada's Wind Chill Index". Ec.gc.ca.
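The two Australian formulas above combine straightforwardly; a minimal sketch (function name mine):

```python
import math

def apparent_temperature(t_c, rh_pct, wind_ms):
    """Australian BoM apparent temperature (non-radiation version).
    t_c: dry bulb temperature in deg C; rh_pct: relative humidity in %;
    wind_ms: wind speed at 10 m elevation, in m/s."""
    # water vapour pressure in hPa, from temperature and relative humidity
    e = (rh_pct / 100.0) * 6.105 * math.exp(17.27 * t_c / (237.7 + t_c))
    return t_c + 0.33 * e - 0.70 * wind_ms - 4.00

# humid, near-still air feels warmer than the dry-bulb reading ...
assert apparent_temperature(30, 80, 1) > 30
# ... and adding wind makes it feel cooler
assert apparent_temperature(30, 80, 8) < apparent_temperature(30, 80, 1)
```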
Retrieved 2013-08-09.
[6] "Meteorological Tables, Wind Chill. August, 2001 Press Release". National Weather Service. Retrieved 14 January 2013.
[7] "Wind Chill". BBC Weather, Understanding weather. BBC. Archived from the original on 11 October 2010.
[8] Osczevski, Randall, and Maurice Bluestein. "The New Wind Chill Equivalent Temperature Chart". Bulletin of the American Meteorological Society, Oct. 2005, pp. 1453–1458.
[9] "Calculation of the 1971 to 2000 Climate Normals for Canada". Climate.weatheroffice.gc.ca. 2013-07-10. Retrieved 2013-08-09.
[10] "NWS Wind Chill Index". Weather.gov. 2009-12-17. Retrieved 2013-08-09.
[11] "A chart of windchills based on this formula". Weather.gov. 2009-12-17. Retrieved 2013-08-09.
[12] Tikuisis, P., and R. J. Osczevski (2002). "Facial Cooling During Cold Air Exposure". Bull. Amer. Meteor. Soc., July 2003, pp. 927–934.
[13] "Thermal Comfort observations". Bureau of Meteorology, Australia. Bom.gov.au. 2010-02-05. Retrieved 2013-08-09.
[14] "The formula for the apparent temperature". Bureau of Meteorology, Australia. Bom.gov.au. 2010-02-05. Retrieved 2013-08-09.
[15] "About the WBGT and Apparent Temperature". Bureau of Meteorology, Australia. Bom.gov.au. 2010-02-05. Retrieved 2013-08-09.

External links

National Center for Atmospheric Research: table of wind chill temperatures in Celsius and Fahrenheit; wind chill calculator
National Weather Service: Wind Chill Temperature Index; table of wind chill temperatures in Fahrenheit with frostbite times
National Science Digital Library: wind chill temperature
Environment Canada: wind chill
Weather Images: wind chill chart and introduction
An Introduction to Wind Chill: a lesson plan on wind chill
A Cold Wind Blowing: an article on wind chill and tables in SI units
Wind Chill and Humidex: criticism about the use of wind chill and humidex
The Canadian Encyclopedia: an article about wind chill
Gorman, James (February 10, 2004). "Beyond Brrr: The Elusive Science of Cold".
This article was sourced from Creative Commons Attribution-ShareAlike License; additional terms may apply.
WHY? Former methods used element-wise sum, product, or concatenation to represent the relation of two vectors. A bilinear model (outer product) of two vectors is a more sophisticated way of representing their relation, but the dimensionality usually becomes too big. This paper suggests multimodal compact bilinear pooling (MCB) to represent compact and sophisticated relations.

WHAT? MCB utilizes the Count Sketch projection function for compact encoding of a vector. When projecting a vector $v$ of size $n$ to a vector $y$ of size $d$ ($n > d$), the Count Sketch algorithm first initializes two vectors $s \in \{-1, 1\}^n$ and $h \in \{1,...,d\}^n$; $h$ indicates the indices to which the values of $v$ are projected. It has been proven that the Count Sketch of the outer product of two vectors can be expressed as the convolution of the two Count Sketches. Also, convolution in the time domain is equivalent to element-wise product in the frequency domain:

$$\Psi(x\otimes q, h, s) = \Psi(x, h, s) \ast \Psi(q, h, s)$$
$$x' \ast q' = FFT^{-1}(FFT(x')\odot FFT(q'))$$

In the VQA architecture, MCB pooling is used between the image features and the text feature to get attention weights, and between the attended image feature and the text feature to make the prediction.

So? MCB achieved good results on VQA tasks.
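The two identities can be checked numerically. The sketch below (NumPy; variable names mine) uses an independent (h, s) pair per input vector, as the Tensor Sketch construction does, even though the summary's notation reuses a single pair:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sketch_params(n, d):
    """Random Count Sketch parameters: signs s in {-1,1}^n, buckets h in {0..d-1}^n."""
    s = rng.choice([-1.0, 1.0], size=n)
    h = rng.integers(0, d, size=n)
    return s, h

def count_sketch(v, s, h, d):
    y = np.zeros(d)
    np.add.at(y, h, s * v)   # y[h[i]] += s[i] * v[i]
    return y

n, d = 32, 16
x, q = rng.standard_normal(n), rng.standard_normal(n)
sx, hx = make_sketch_params(n, d)
sq, hq = make_sketch_params(n, d)

# Count Sketch of the outer product x (x) q, with bucket (hx[i]+hq[j]) mod d
# and sign sx[i]*sq[j] -- computed naively in O(n^2)
outer_sketch = np.zeros(d)
for i in range(n):
    for j in range(n):
        outer_sketch[(hx[i] + hq[j]) % d] += sx[i] * sq[j] * x[i] * q[j]

# Same quantity via circular convolution of the two sketches, in O(d log d):
# conv = IFFT(FFT(x') * FFT(q')), the element-wise frequency-domain product
conv = np.fft.ifft(np.fft.fft(count_sketch(x, sx, hx, d)) *
                   np.fft.fft(count_sketch(q, sq, hq, d))).real
assert np.allclose(outer_sketch, conv)
```

The identity is exact, which is what lets MCB avoid ever materializing the $n^2$-dimensional outer product.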
Let $\xi_1, \xi_2,\cdots$ be i.i.d. random elements with distribution $\mu$ in some measurable space $(S,\mathcal S)$, fix a set $A\in \mathcal S$ with $\mu A >0$, and put $\tau = \inf\{k;\ \xi_k \in A \}$ (the first hitting time). Show that $\xi_\tau$ has distribution $\mu[\cdot|A]=\mu[\cdot\cap A]/\mu A$.

Intuitively, $\xi_\tau\in B \Leftrightarrow \xi_\tau \in A\cap B$, and since the $\xi_i$ are all i.i.d., we can informally treat them as one $\xi$, whose distribution is then $P\{\xi_\tau \in B\} = P\{\xi\in B|\xi \in A\}=\mu[B\cap A]/\mu A$. But I don't know how to argue this rigorously: by definition, that $\xi_i$ and $\xi_j$ have identical distributions only means $P\{\xi_i\in B\} = P\{\xi_j\in B\}$. (I also accept answers totally unrelated to my argument.)

BTW, is it by definition that $\xi_\tau(w)=\lim_{k\rightarrow \infty} \xi_k(w)$ if $\tau(w) = \infty$? That seems strange, as $S$ is just a measurable space, so we don't know what the $\lim$ means. (Though in this problem we could ignore it, because that event simply makes $\xi_\tau(w) \notin B$, so an answer can ignore this minor issue unless it is important in the answer.)
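The claimed distribution of $\xi_\tau$ can be checked by simulation. A minimal sketch with $\mu$ uniform on $\{0,\dots,9\}$ and $A=\{0,1,2\}$ (these concrete choices are mine), where $\mu[\cdot|A]$ is uniform on $A$:

```python
import random
from collections import Counter

random.seed(1)
A = {0, 1, 2}   # mu(A) = 3/10 > 0, with mu uniform on {0,...,9}

def xi_tau():
    """Draw i.i.d. xi_1, xi_2, ... ~ mu and return the first one landing in A."""
    while True:
        x = random.randrange(10)
        if x in A:
            return x

N = 60_000
freq = Counter(xi_tau() for _ in range(N))
# xi_tau should be distributed as mu[.|A]: uniform on A, i.e. 1/3 each
for a in A:
    assert abs(freq[a] / N - 1 / 3) < 0.02
```

Here $\tau < \infty$ almost surely since $\mu A > 0$, so the simulation never needs the $\tau = \infty$ convention asked about above.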
All Questions

Please help me with these:
1. We started launching the Green Campaign a week ago. → We have ...
2. The road is too narrow for the volume of traffic to travel. → The road is not ...
3. We are having some workers repaint the fence. → We are ...
4. People made these toys from organic material. → These toys ...
5. They may have removed all of the waste paper last week. → All of the waste paper ...
6. The children used to play traditional games long ago. → Traditional games ...

Maciej carries a bookmark with this table on it to help him remember his secret four-digit number. If his secret number is 8526, all he has to do is remember the word HELP. To retrieve his number, he looks up the letters of the word HELP and finds the corresponding digits in the top row of the table. Another example: the word LOVE can be used to help Maciej remember the secret number 2525. Maciej has to remember a new secret number. Only three of the following words produce this new number. Which one does not?

1 2 3 4 5 6 7 8 9 0
A B C D E F G H I J
K L M N O P Q R S T
U V W X Y Z

Please help me with this one too.

I. Rewrite the sentences below so that each has a similar meaning to the first, beginning with the words given.
1. I find his handwriting very hard to read. → I have ...
2. He got down to writing a letter as soon as he returned from his work. → No sooner ...
3. "If I were you, I wouldn't accept his marriage proposal", said Nam to Lan. → Nam ...
4. No matter how hard I tried, I could not open the window. → Try ...
5. Please don't ask me that question. → I'd rather ...

II. Finish the second sentence so that it has the same meaning as the first one, using the given words. Do not change the given word.
1. The fridge is completely empty. (LEFT) →
2. It is pointless to have that old typewriter repaired. (WORTH) →
3. Frank never pays any attention to my advice. (NOTICE) →
4.
John only understood very little of what the teacher said. (HARDLY) →
5. Her ability to run a company really impresses me. (IMPRESSED) →

For non-negative real numbers $a, b, c$ with $a + b + c = 3$, prove that \(\left(3abc+1\right)\left(a^2b+b^2c+c^2a\right)\ge12abc\).

The width of a rectangular plot of land is reduced by 8 m, giving a new rectangular plot, and then the length is increased by 8 m. Find the area of the plot after the length and width are changed, given that the rectangular strip cut off has twice the area of the rectangular strip added.

Given two functions \(f\left(x\right)=\sqrt{25x^2-30x+9}\) and \(g\left(y\right)=y\), how many values of $a$ are there such that \(f\left(a\right)=g\left(a\right)+7\)?

Nguyễn Linh Chi 08/08/2019 at 09:44

\(f\left(a\right)=\sqrt{25a^2-30a+9}\), \(g\left(a\right)=a\). We have the following:

\(f\left(a\right)=g\left(a\right)+7\)
\(\Leftrightarrow\sqrt{25a^2-30a+9}=a+7\)
\(\Leftrightarrow\left\{{}\begin{matrix}25a^2-30a+9=\left(a+7\right)^2\\a+7\ge0\end{matrix}\right.\)
\(\Leftrightarrow\left\{{}\begin{matrix}a\ge-7\\24a^2-44a-40=0\end{matrix}\right.\Leftrightarrow\left[{}\begin{matrix}a=\dfrac{5}{2}\\a=-\dfrac{2}{3}\end{matrix}\right.\)

Finally, there are two values of $a$. Uchiha Sasuke selected this answer.

Find all positive integer values of $n$ such that \(\dfrac{n-17}{n+23}\) is the square of a rational number.

Lux Arcadia 14/06/2019 at 14:20

\(\dfrac{n-17}{n+23}=\dfrac{n+23-40}{n+23}=1-\dfrac{40}{n+23}\). Let \(d=\gcd\left(n-17,n+23\right)\); then \(d\mid40\), and we can write \(n-17=da\), \(n+23=db\) with \(\gcd\left(a,b\right)=1\) and \(b-a=\dfrac{40}{d}\). In lowest terms the fraction is \(\dfrac{a}{b}\), and a non-negative fraction in lowest terms is the square of a rational exactly when both \(a\) and \(b\) are perfect squares (and for \(n<17\) the fraction is negative, so no solutions there). So we need two coprime perfect squares differing by \(\dfrac{40}{d}\) for some divisor \(d\) of 40. Checking the possible differences \(1,2,4,5,8,10,20,40\) gives the pairs \(\left(a,b\right)\in\left\{\left(0,1\right),\left(4,9\right),\left(1,9\right),\left(9,49\right),\left(81,121\right)\right\}\), i.e. \(n\in\left\{17,49,22,26,98\right\}\). For example, \(n=49\) gives \(\dfrac{32}{72}=\dfrac{4}{9}=\left(\dfrac{2}{3}\right)^2\).

Given two functions \(f\left(x\right)=5x+1\) and \(g\left(x\right)=ax+3\).
Find the value of g(1) if \(a=f\left(2\right)-f\left(-1\right)\).

How many values of the whole number $m$ are there such that the function \(y=\left(2016-m^2\right)x+3\) is increasing?

Tôn Thất Khắc Trịnh 13/06/2019 at 03:46

For the function to be increasing, \(2016-m^2\) must be positive, so 2016 must be greater than \(m^2\). Therefore $m$ has to range from \(-\sqrt{2016}\) to \(\sqrt{2016}\); in other words, $m$ has to be between −44.899 and 44.899. Since $m$ is a whole number, the minimum value of $m$ is −44 and the maximum is 44. To calculate the number of values, we use this formula: \(N=\frac{44-(-44)}{1}+1=89\). To conclude, there are 89 values of $m$ such that the function is increasing. Uchiha Sasuke selected this answer.
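Several of the problems above lend themselves to quick brute-force checks; a sketch (helper names are mine):

```python
from fractions import Fraction
from math import isqrt

# Maciej's table: digits 1..9,0 across the top, letters filled row by row
table = {ch: str((i % 10 + 1) % 10)
         for i, ch in enumerate("ABCDEFGHIJKLMNOPQRSTUVWXYZ")}

def decode(word):
    return "".join(table[ch] for ch in word)

assert decode("HELP") == "8526" and decode("LOVE") == "2525"

# (n-17)/(n+23) is the square of a rational iff, in lowest terms,
# both numerator and denominator are perfect squares
def is_square(k):
    return isqrt(k) ** 2 == k

sols = []
for n in range(17, 10_000):          # n < 17 gives a negative fraction
    f = Fraction(n - 17, n + 23)     # Fraction reduces to lowest terms
    if is_square(f.numerator) and is_square(f.denominator):
        sols.append(n)
assert sols == [17, 22, 26, 49, 98]

# whole numbers m with 2016 - m^2 > 0
count = sum(1 for m in range(-2016, 2017) if 2016 - m * m > 0)
assert count == 89
```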
Atomic force microscopy (AFM) is one of the most popular techniques for metrology measurements such as surface roughness due to its ability to quantitatively measure the x, y, and z directions with nanoscale resolution. AFM is one of the few tools able to quantitatively measure all three dimensions of a surface: lateral (x and y) and height (z). Unlike other high-resolution microscopic characterization methods that rely on interactions of electrons with a material, in AFM there is a mechanical contact between a tip and sample, enabling an accurate measurement of sample topography and surface texture. With a resolution of 5-10 nm laterally and sub-nanometer vertically, AFM is a powerful instrument for quantitative measurements of a surface. This quantitative capability is coupled with flexibility in sample type: there are no requirements on a sample for it to be measurable by AFM other than that it fits into the instrument. Quantitative measurements of sample topography enable important metrological measurements such as the roughness profile and the detection of irregularities on the surface, as well as more advanced measurements such as skewness and kurtosis.

There are multiple modes available in atomic force microscopy for surface roughness measurements, which differ by contact type. The simplest mode is contact mode, where the tip is "dragged" across the surface at a constant cantilever deflection. The user defines the load at which the tip is "dragged" across the surface, so that a heavier load can be selected for stiff, robust materials and a lighter load for softer materials. A feedback loop on the z piezo in the instrument then keeps the cantilever deflection constant throughout the image. The topography information is provided through the z piezo motion. A gentler mode to measure surface topography (and thus surface roughness) is tapping mode.
This is a dynamic mode where the tip is oscillated at a resonance frequency and gently interacts with the surface at a constant amplitude of oscillation. The user defines the amplitude of oscillation at which the tip images the surface, so that a larger amplitude can be selected for stiff, robust materials and a smaller amplitude for softer materials. A feedback loop on the z piezo in the instrument then keeps the cantilever amplitude of oscillation constant throughout the image. The topography information is provided through this z piezo motion. With these two modes, practically any surface topography can be imaged, from soft biological cells, to polymers, to stiffer semiconductors and metals.

Various parameters exist to quantify the roughness of a surface. The roughness value can be calculated from either a cross-sectional profile (line) or a surface (area). The most common roughness parameters rely on calculation of the vertical deviation from a mean line or plane. For this reason, only instruments that provide a quantitative measure of z can provide data that can be analyzed for roughness. Images that give an "impression" of three dimensions, but where the third dimension is not quantified in height (e.g. SEM images), cannot be analyzed for a quantitative roughness.

The two most common roughness parameters are an arithmetical mean deviation from the mean and a root-mean-square deviation from the mean. For an image where an area is being analyzed, the arithmetical mean is called Sa and is defined as

\[S_a = \frac{1}{n}\sum_{i=1}^{n}\left | y_{i} \right |\]

Similarly, the RMS roughness is called Sq and is defined as

\[S_q = \sqrt{\frac{1}{n}\sum_{i=1}^{n}y_{i}^{2}}\]

More sophisticated roughness measurements are also possible.
For example, the skewness of a sample, which gives a measure of the asymmetry of the surface topography, can be measured and is defined as

\[S_{sk} = \frac{1}{nS_q^3}\sum_{i=1}^{n}y_{i}^{3}\]

Kurtosis gives information on the tails of the distribution of z values on the surface and is defined as

\[S_{ku} = \frac{1}{nS_q^4}\sum_{i=1}^{n}y_{i}^{4}\]

In all these definitions, y_i is the height (z) at a given pixel (i) in the image, measured relative to the mean.

Example of topography measurement with AFM

Part of the utility of atomic force microscopy as a metrological tool is its remarkable resolution in x, y, and z. Here is an AFM image collected of strontium titanate (SrTiO3), an oxide of titanium and strontium exhibiting a perovskite structure, which is used as a substrate for growth of oxide-based thin films and high-temperature superconductors. This material forms a layered structure where the layers are only a few angstroms thick. AFM easily handles such a material and images these structures. In this 1.1 µm x 1.1 µm image, the strontium titanate layers are easy to observe. Each layer has an RMS roughness of approximately 0.125 nanometers, caused by a non-ideal termination process during sample preparation. Below the image are shown a cross-sectional profile drawn across the image and a histogram binning the number of pixels at each sample height. In both the cross-sectional profile and the histogram, layer thicknesses of approximately 4 angstroms are clearly observed, revealing AFM's remarkable resolution in z.

Example of surface roughness measurement

For some applications, sapphire or glass surfaces need to be polished down to sub-nanometer roughness. The image obtained here shows polishing marks and small contamination particles on the surface. The image was recorded in dynamic mode.
System: NaniteAFM with 10µm scan range connected to a C3000 controller
Cantilever: NCLAuD (Nanosensors)
Image processing: Nanosurf Report software
Sa = 0.12nm

The particles on the surface cause large values for the skewness and kurtosis of 49 and 2770, respectively.
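As a sketch of how the roughness parameters defined above could be computed from raw height data (the function and the sample values here are illustrative, not taken from any AFM vendor's software):

```python
import math

def roughness(heights):
    """Compute Sa, Sq, skewness, and kurtosis from a flat list of pixel heights."""
    n = len(heights)
    mean = sum(heights) / n
    y = [h - mean for h in heights]                # deviations y_i from the mean plane
    sa = sum(abs(v) for v in y) / n                # arithmetical mean deviation, Sa
    sq = math.sqrt(sum(v * v for v in y) / n)      # RMS roughness, Sq (Rq for a line profile)
    ssk = sum(v ** 3 for v in y) / (n * sq ** 3)   # skewness: asymmetry of the height distribution
    sku = sum(v ** 4 for v in y) / (n * sq ** 4)   # kurtosis: tail weight of the height distribution
    return sa, sq, ssk, sku

# Hypothetical height samples in nanometers:
sa, sq, ssk, sku = roughness([0.1, -0.2, 0.15, -0.05, 0.0])
print(sa, sq, ssk, sku)
```

Note that a perfectly symmetric height distribution gives zero skewness, while a few tall contamination particles, as in the sapphire example above, drive both skewness and kurtosis to large positive values.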
Basically 2 strings, $a>b$, which go into the first box and do division to output $q,r$ such that $a = bq + r$ and $r<b$; then you have to check for $r=0$, which returns $b$ if we are done, and otherwise inputs $b,r$ into the division box.

There was a guy at my university who was convinced he had proven the Collatz Conjecture even though several lecturers had told him otherwise, and he sent his paper (written in Microsoft Word) to some journal, citing the names of various lecturers at the university.

Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$. What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?

Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.

Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P

Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the line?

Let $M$ and $N$ be $\mathbb{Z}$-modules and $H$ be a subset of $N$. Is it possible for $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$, but $M\otimes_\mathbb{Z} H$ is an additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
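The box construction described at the top is just the Euclidean algorithm; a minimal sketch in Python (function names are mine):

```python
def divide(a, b):
    """The 'division box': returns (q, r) with a = b*q + r and 0 <= r < b."""
    return a // b, a % b

def gcd(a, b):
    # Feed (a, b) into the division box; if r = 0 we are done and b is the answer,
    # otherwise feed (b, r) back into the box.
    while b != 0:
        q, r = divide(a, b)
        a, b = b, r
    return a

print(gcd(1071, 462))  # prints 21
```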
Well, assuming that the paper is all correct (or at least correct to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"

@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider. Although not the only route, can you tell me something contrary to what I expect?

It's a formula. There's no question of well-definedness. I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.

It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time. Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.

You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of the coordinate system.

@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at the endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.

@Eric: If you go eastward, we'll never cook! :( I'm also making a spinach soufflé tomorrow; I don't think I've done that in 30+ years. Crazy ridiculous.

@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up.
I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$).

@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.

@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ appears in the denominator.
Colloquia/Fall18

Mathematics Colloquium

All colloquia are on Fridays at 4:00 pm in Van Vleck B239, unless otherwise indicated.

Spring 2018

date | speaker | title | host(s)
January 29 (Monday) | Li Chao (Columbia) | Elliptic curves and Goldfeld's conjecture | Jordan Ellenberg
February 2 (Room 911) | Thomas Fai (Harvard) | The Lubricated Immersed Boundary Method | Spagnolie, Smith
February 5 (Monday, Room 911) | Alex Lubotzky (Hebrew University) | High dimensional expanders: From Ramanujan graphs to Ramanujan complexes | Ellenberg, Gurevitch
February 6 (Tuesday 2 pm, Room 911) | Alex Lubotzky (Hebrew University) | Groups' approximation, stability and high dimensional expanders | Ellenberg, Gurevitch
February 9 | Wes Pegden (CMU) | The fractal nature of the Abelian Sandpile | Roch
March 2 | Aaron Bertram (University of Utah) | Stability in Algebraic Geometry | Caldararu
March 16 (Room 911) | Anne Gelb (Dartmouth) | Reducing the effects of bad data measurements using variance based weighted joint sparsity | WIMAW
April 5 (Thursday, Room 911) | John Baez (UC Riverside) | Monoidal categories of networks | Craciun
April 6 | Edray Goins (Purdue) | Toroidal Belyĭ Pairs, Toroidal Graphs, and their Monodromy Groups | Melanie
April 13 | Jill Pipher (Brown) | TBA | WIMAW
April 16 (Monday) | Christine Berkesch Zamaere (University of Minnesota) | Free complexes on smooth toric varieties | Erman, Sam
April 25 (Wednesday) | Hitoshi Ishii (Waseda University) | Wasow lecture, TBA | Tran
May 4 | Henry Cohn (Microsoft Research and MIT) | TBA | Ellenberg
Spring Abstracts

January 29 Li Chao (Columbia)

Title: Elliptic curves and Goldfeld's conjecture

Abstract: An elliptic curve is a plane curve defined by a cubic equation. Determining whether such an equation has infinitely many rational solutions has been a central problem in number theory for centuries, one which led to the celebrated conjecture of Birch and Swinnerton-Dyer. Within a family of elliptic curves (such as the Mordell curve family y^2=x^3-d), a conjecture of Goldfeld further predicts that there should be infinitely many rational solutions exactly half of the time. We will start with a history of this problem, discuss our recent work (with D. Kriz) towards Goldfeld's conjecture, and illustrate the key ideas and ingredients behind this new progress.

February 2 Thomas Fai (Harvard)

Title: The Lubricated Immersed Boundary Method

Abstract: Many real-world examples of fluid-structure interaction, including the transit of red blood cells through the narrow slits in the spleen, involve the near-contact of elastic structures separated by thin layers of fluid. The separation of length scales between these fine lubrication layers and the larger elastic objects poses significant computational challenges. Motivated by the challenge of resolving such multiscale problems, we introduce an immersed boundary method that uses elements of lubrication theory to resolve thin fluid layers between immersed boundaries.
We apply this method to two-dimensional flows of increasing complexity, including eccentric rotating cylinders and elastic vesicles near walls in shear flow, to show its increased accuracy compared to the classical immersed boundary method. We present preliminary simulation results of cell suspensions, a problem in which near-contact occurs at multiple levels, such as cell-wall, cell-cell, and intracellular interactions, to highlight the importance of resolving thin fluid layers in order to obtain the correct overall dynamics.

February 5 Alex Lubotzky (Hebrew University)

Title: High dimensional expanders: From Ramanujan graphs to Ramanujan complexes

Abstract: Expander graphs in general, and Ramanujan graphs in particular, have played a major role in computer science in the last five decades, and more recently also in pure math. The first explicit construction of bounded-degree expanding graphs was given by Margulis in the early 70s. In the mid 80s, Margulis and Lubotzky-Phillips-Sarnak provided Ramanujan graphs, which are optimal such expanders. In recent years a high dimensional theory of expanders is emerging. A notion of topological expanders was defined by Gromov in 2010, who proved that the complete d-dimensional simplicial complexes are such. He raised the basic question of the existence of such bounded-degree complexes of dimension d>1. This question was recently answered affirmatively (by T. Kaufman, D. Kazhdan and A. Lubotzky for d=2, and by S. Evra and T. Kaufman for general d) by showing that the d-skeleton of (d+1)-dimensional Ramanujan complexes provides such topological expanders. We will describe these developments and the general area of high dimensional expanders.
February 6 Alex Lubotzky (Hebrew University)

Title: Groups' approximation, stability and high dimensional expanders

Abstract: Several well-known open questions, such as "are all groups sofic or hyperlinear?", have a common form: can all groups be approximated by asymptotic homomorphisms into the symmetric groups Sym(n) (in the sofic case) or the unitary groups U(n) (in the hyperlinear case)? In the case of U(n), the question can be asked with respect to different metrics and norms. We answer, for the first time, one of these versions, showing that there exist finitely presented groups which are not approximated by U(n) with respect to the Frobenius (=L_2) norm. The strategy is via the notion of "stability": a certain higher-dimensional cohomology vanishing phenomenon is proven to imply stability, and using high dimensional expanders, it is shown that some non-residually-finite groups (central extensions of some lattices in p-adic Lie groups) are Frobenius stable and hence cannot be Frobenius approximated. All notions will be explained. Joint work with M. De Chiffre, L. Glebsky and A. Thom.

February 9 Wes Pegden (CMU)

Title: The fractal nature of the Abelian Sandpile

Abstract: The Abelian Sandpile is a simple diffusion process on the integer lattice, in which configurations of chips disperse according to a simple rule: when a vertex has at least 4 chips, it can distribute one chip to each neighbor. Introduced in the statistical physics community in the 1980s, the Abelian sandpile exhibits striking fractal behavior which long resisted rigorous mathematical analysis (or even a plausible explanation). We now have a relatively robust mathematical understanding of this fractal nature of the sandpile, which involves surprising connections between integer superharmonic functions on the lattice, discrete tilings of the plane, and Apollonian circle packings. In this talk, we will survey our work in this area and discuss avenues of current and future research.
March 2 Aaron Bertram (Utah)

Title: Stability in Algebraic Geometry

Abstract: Stability was originally introduced in algebraic geometry in the context of finding a projective quotient space for the action of an algebraic group on a projective manifold. This, in turn, led in the 1960s to a notion of slope-stability for vector bundles on a Riemann surface, which was an important tool in the classification of vector bundles. In the 1990s, mirror symmetry considerations led Michael Douglas to notions of stability for "D-branes" (on a higher-dimensional manifold) that corresponded to no previously known mathematical definition. We now understand each of these notions of stability as a distinct point of a complex "stability manifold" that is an important invariant of the (derived) category of complexes of vector bundles of a projective manifold. In this talk I want to give some examples to illustrate the various stabilities, and also to describe some current work in the area.

March 16 Anne Gelb (Dartmouth)

Title: Reducing the effects of bad data measurements using variance based weighted joint sparsity

Abstract: We introduce the variance based joint sparsity (VBJS) method for sparse signal recovery and image reconstruction from multiple measurement vectors. Joint sparsity techniques employing $\ell_{2,1}$ minimization are typically used, but the algorithm is computationally intensive and requires fine tuning of parameters. The VBJS method uses a weighted $\ell_1$ joint sparsity algorithm, where the weights depend on the pixel-wise variance. The VBJS method is accurate, robust, cost efficient and also reduces the effects of false data.

April 5 John Baez (UC Riverside)

Title: Monoidal categories of networks

Abstract: Nature and the world of human technology are full of networks. People like to draw diagrams of networks: flow charts, electrical circuit diagrams, chemical reaction networks, signal-flow graphs, Bayesian networks, food webs, Feynman diagrams and the like.
Far from mere informal tools, many of these diagrammatic languages fit into a rigorous framework: category theory. I will explain a bit of how this works and discuss some applications.

April 6 Edray Goins (Purdue)

Title: Toroidal Belyĭ Pairs, Toroidal Graphs, and their Monodromy Groups

Abstract: A Belyĭ map [math] \beta: \mathbb P^1(\mathbb C) \to \mathbb P^1(\mathbb C) [/math] is a rational function with at most three critical values; we may assume these values are [math] \{ 0, \, 1, \, \infty \}. [/math] A Dessin d'Enfant is a planar bipartite graph obtained by considering the preimage of a path between two of these critical values, usually taken to be the line segment from 0 to 1. Such graphs can be drawn on the sphere by composing with stereographic projection: [math] \beta^{-1} \bigl( [0,1] \bigr) \subseteq \mathbb P^1(\mathbb C) \simeq S^2(\mathbb R). [/math] Replacing [math] \mathbb P^1 [/math] with an elliptic curve [math] E [/math], there is a similar definition of a Belyĭ map [math] \beta: E(\mathbb C) \to \mathbb P^1(\mathbb C). [/math] Since [math] E(\mathbb C) \simeq \mathbb T^2(\mathbb R) [/math] is a torus, we call [math] (E, \beta) [/math] a toroidal Belyĭ pair. The corresponding Dessin d'Enfant can be drawn on the torus by composing with an elliptic logarithm: [math] \beta^{-1} \bigl( [0,1] \bigr) \subseteq E(\mathbb C) \simeq \mathbb T^2(\mathbb R). [/math] This project seeks to create a database of such Belyĭ pairs, their corresponding Dessins d'Enfant, and their monodromy groups. For each positive integer [math] N [/math], there are only finitely many toroidal Belyĭ pairs [math] (E, \beta) [/math] with [math] \deg \, \beta = N. [/math] Using the Hurwitz Genus formula, we can begin this database by considering all possible degree sequences [math] \mathcal D [/math] on the ramification indices as multisets on three partitions of N.
For each degree sequence, we compute all possible monodromy groups [math] G = \text{im} \, \bigl[ \pi_1 \bigl( \mathbb P^1(\mathbb C) - \{ 0, \, 1, \, \infty \} \bigr) \to S_N \bigr]; [/math] they are the "Galois closure" of the group of automorphisms of the graph. Finally, for each possible monodromy group, we compute explicit formulas for Belyĭ maps [math] \beta: E(\mathbb C) \to \mathbb P^1(\mathbb C) [/math] associated to some elliptic curve [math] E: \ y^2 = x^3 + A \, x + B. [/math] We will discuss some of the challenges of determining the structure of these groups, and present visualizations of group actions on the torus. This work is part of PRiME (Purdue Research in Mathematics Experience) with Chineze Christopher, Robert Dicks, Gina Ferolito, Joseph Sauder, and Danika Van Niel, with assistance from Edray Goins and Abhishek Parab.

April 16 Christine Berkesch Zamaere (Minnesota)

Title: Free complexes on smooth toric varieties

Abstract: Free resolutions have been a key part of using homological algebra to compute and characterize geometric invariants over projective space. Over more general smooth toric varieties, this is not the case. We will discuss another family of complexes, called virtual resolutions, which appear to play the role of free resolutions in this setting. This is joint work with Daniel Erman and Gregory G. Smith.
Simply Beautiful Art

8/24/2019: I defined a neat ordinal collapsing function:

S(A) ⇔ ∀ f : sup A ↦ sup A, ∃ α ∈ A, ∀ η ∈ α (f(η) ∈ α)
B(α, κ, 0) = κ ∪ {0, K}
B(α, κ, n+1) = {γ + δ | γ, δ ∈ B(α, κ, n)} ∪ {Ψ_η(μ) | μ ∈ B(α, κ, n) ∧ η ∈ α ∩ B(α, κ, n)}
B(α, κ) = ⋃ {B(α, κ, n) | n ∈ N}
Ξ(α) = {κ, K ∈ K′ | κ ∉ B(α, κ) ∧ α ∈ cl(B(α, κ)) ∧ S(⋂ {Ξ(η) ∩ κ | η ∈ B(α, κ) ∩ α})}
Ψ_α = enum(Ξ(α))
C(α, κ, 0) = κ ∪ {0, K}
C(α, κ, n+1) = {γ + δ | γ, δ ∈ C(α, κ, n)} ∪ {ψ^η_ξ(μ) | μ, ξ, η ∈ C(α, κ, n) ∧ η ∈ α}
C(α, κ) = ⋃ {C(α, κ, n) | n ∈ N}
ψ^α_π = enum{κ, K ∈ Ξ(π) | κ ∉ C(α, κ) ∧ α ∈ cl(C(α, κ))}

where K is a weakly compact cardinal and K′ is the (K+1)th hyper-Mahlo, or alternatively the smallest ordinal larger than K closed under γ ↦ M(γ), where M(γ) is the first γ-Mahlo. On its own this doesn't make a notation for large countable ordinals, but it can be used with another ordinal collapsing function for such a purpose.

If you need me, you can find me here: or on Discord. Some of my favorite posts:
How to solve the pulley problems (hanging from the ceiling)

Pulley problems (also called Atwood machine problems) are favorites with professors, and students seem to really struggle with them. There are several ways to solve them, and some of them are too complicated to understand. Here I will try to explain a general and easier way to approach the problem.

Case I: Massless pulley (simple Atwood machine)

Suppose a string is placed over a massless and frictionless pulley. A block with mass $m_1$ is suspended at one end while another block with mass $m_2$ is suspended from the other end, like the figure below.

Say $m_1<m_2$. Then it is obvious that the pulley will move in a clockwise direction (look at the figure). Now how can you calculate the acceleration of the system?

Solution: From Newton's law we can write $$F_{net}=Ma$$ Right? And then we can rewrite it as $$a=\dfrac{F_{net}}{M}$$ Awesome, it wasn't that hard. The only things we need to figure out are $M$ and $F_{net}$. Since the pulley is massless (according to the question), we only need to consider the mass of the two blocks, i.e. $$M=m_1+m_2$$ Now consider the mass $m_1$; the only force acting on it is the gravitational force $$F_1=m_1g$$ What about the mass $m_2$? It's $$F_2=m_2g$$ Now since $m_1<m_2$, $F_2$ will move downward and $F_1$ will move upward (see the figure). If we consider the downward direction as positive, then $F_2$ would be positive and $F_1$ would be negative; make sense? Our $F_{net}$ will be $$F_{net}=m_2g-m_1g$$ Great, we have found both $M$ and $F_{net}$; you just need to plug these into the equation \begin{align*} a &=\dfrac{F_{net}}{M}\\ \Rightarrow a &=\dfrac{m_2g-m_1g}{m_1+m_2} \end{align*} Now what if the professor asks you to calculate the tensions in the strings? You could start from scratch and derive the whole formula, but during the exam that would take a lot of time.
The easiest way is to remember these two equations: \begin{align*} T-mg &= ma \qquad\text{ if $m$ goes up}\\ mg-T &= ma \qquad\text{ if $m$ goes down} \end{align*} In our case $m_1$ is going up, so the tension $T_1$ satisfies $$T_1-m_1g= m_1a$$ And $m_2$ is going down, so the tension $T_2$ satisfies $$m_2g-T_2=m_2a$$ Solve for $T_1$ and $T_2$; the total tension would be $$T=T_1+T_2$$

Case II: System with two pulleys (Atwood machine)

Now consider a slightly harder problem: you have two pulleys and they are in equilibrium. What is the tension force at each section of the rope?

Solution: From the figure it is clear that $$T_2=m_1g$$ Now look closely: both $T_1$ and $T_3$ are sharing the weight $m_1g$, which means $m_1g$ is evenly distributed between $T_1$ and $T_3$: $$T_1=\dfrac{m_1g}{2}=T_3$$ Since the system is in equilibrium, $T_5$ should apply the same amount of force as $T_3$, i.e. $$T_5=T_3=\dfrac{m_1g}{2}$$ What about $T_4$? Tension $T_4$ is actually holding both $T_3$ and $T_5$ in equilibrium. So it will be \begin{align*} T_4 &=T_3+T_5\\ &=\dfrac{m_1g}{2}+\dfrac{m_1g}{2}\\ &=m_1g\\ \end{align*} And boom! You have all the tension forces.

Case III: System with two pulleys with a weight at the end

Okay, let's add another weight to the system and consider $m_1<m_2$. Now the system has acceleration, and when the block $m_2$ goes down with acceleration $a_2$, block $m_1$ moves up with acceleration $a_1$. How do you calculate these accelerations? And also, does the tension change in this situation?

Solution: Like the first example we can use Newton's law, but we have two pulleys now. So let's apply $F=ma$ for the block $m_2$; we have two forces acting on it, the force due to gravity, which acts downward (and so enters with a negative sign), and the tension force. So \begin{align*} F_{net}&=m_{2}a_{2}\\ \Rightarrow T_{5}-m_2g&=m_{2}a_{2}---(A) \end{align*} Now consider the block $m_1$. There are three forces acting on it: $T_1$, $T_2=m_1g$, and $T_3$.
So \begin{align*} F_{net}&=m_{1}a_{1}\\ \Rightarrow T_{1}+T_{2}+T_{3}&=m_{1}a_{1}\\ \Rightarrow T_{1}+m_1g+T_{3}&=m_{1}a_{1} \\ \end{align*} But wait, $T_3=\dfrac{(m_1+m_2)}{2}g$. And we know (from the previous example) $T_3=T_1$: \begin{align*} T_{1}+T_{2}+T_{3}&=m_{1}a_{1}\\ \Rightarrow \dfrac{m_1+m_2}{2}g+m_1g+\dfrac{m_1+m_2}{2}g&=m_{1}a_{1}\\ \Rightarrow 2m_{1}g+m_2g&=m_{1}a_{1}\\ \Rightarrow a_{1}&=\dfrac{2m_{1}g+m_2g}{m_{1}} \end{align*} Now in equation (A), using $T_3=T_5$: \begin{align*} T_{5}-m_2g&=m_{2}a_{2}\\ \Rightarrow \dfrac{(m_1+m_2)}{2}g-m_2g&=m_{2}a_{2}\\ \Rightarrow \dfrac{(m_1-m_2)}{2}g&=m_{2}a_{2}\\ \Rightarrow a_{2}&= \dfrac{(m_1-m_2)}{2m_2}g\\ \end{align*}

Case IV: Pulley with weight

Till now we have only considered a massless pulley. Just for fun, let's say we have the same configuration as example 1, but this time the pulley has some mass $m_p$ and radius $r$. How can you calculate the acceleration? Before proceeding, note that to solve this type of problem (pulley with mass) you need to know about inertia and rotational motion. If your professor didn't cover that, you won't see this problem in your midterm. Anyway, now the figure looks something like this:

We know that for a rotating object the acceleration is $$a=\alpha r$$ And as in the previous examples, using Newton's law for $m_1$ we can write $$T_1-m_1g=m_1 a$$ And for $m_2$ we have $$m_2g-T_2=m_2 a$$ Since this time our pulley has mass, we need to consider the net torque acting on the pulley too. Using the laws of rotational motion, we can write $$T_2r-T_1r=I\alpha=\dfrac{I}{r}a$$ Adding all the above equations, we can solve for $a$: $$a=\dfrac{(m_2-m_1)g}{m_1+m_2+I/r^2}$$ And that's your equation for the acceleration of the massive pulley system. Does the above example make sense? Let's check. It's your turn now. Say you have four pulleys and, as before, $m_1<m_2$. Now calculate the acceleration and the tension force for this system in these cases: 1) What if the pulleys have no mass?
2) What happens if you consider that the pulleys have some mass? Let me know in the comments. See also: Pulley on inclined plane
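The acceleration formulas from Cases I and IV are easy to sanity-check numerically; here is a minimal sketch (function names and sample masses are mine, not from the post):

```python
def atwood_acceleration(m1, m2, g=9.81):
    """Case I: massless, frictionless pulley; positive means m2 falls."""
    return (m2 - m1) * g / (m1 + m2)

def atwood_with_pulley(m1, m2, I, r, g=9.81):
    """Case IV: pulley with moment of inertia I and radius r."""
    return (m2 - m1) * g / (m1 + m2 + I / r**2)

a = atwood_acceleration(1.0, 2.0)
# With I = 0 the massive-pulley formula reduces to Case I, as it should:
assert atwood_with_pulley(1.0, 2.0, I=0.0, r=0.1) == a
print(a)  # (2-1)*9.81/3 = 3.27 m/s^2
```

A nonzero moment of inertia only ever enlarges the denominator, so the massive pulley always accelerates more slowly than the massless one, which matches the physical intuition that some of the weight difference goes into spinning up the pulley.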
Question: Prolate spheroidal coordinates can be used to simplify the Kepler problem in celestial mechanics. They are related to the usual Cartesian coordinates ##(x,y,z)## of Euclidean three-space by $$x={\mathrm{sinh} \chi \ }{\mathrm{sin} \theta \ }{\mathrm{cos} \phi \ }$$ $$y={\mathrm{sinh} \chi \ }{\mathrm{sin} \theta \ }{\mathrm{sin} \phi \ }$$ $$z={\mathrm{cosh} \chi \ }{\mathrm{cos} \theta \ }$$ Restrict your attention to the plane ##y=0## and answer the following questions. (a) What is the coordinate transformation matrix ##{\partial x^{\mu }}/{\partial x^{\nu'}}## relating ##(x,z)## to ##(\chi ,\theta )##? (b) What does the line element ##{ds}^2## look like in prolate spheroidal coordinates?

Answer: why does ##{{\mathrm{sinh}}^{\mathrm{2}} \chi \ }{{\mathrm{cos}}^{\mathrm{2}} \theta \ }+{{\mathrm{cosh}}^{\mathrm{2}} \chi \ }{{\mathrm{sin}}^{\mathrm{2}} \theta \ }## equal ##{{\mathrm{sinh}}^{\mathrm{2}} \chi \ }+{{\mathrm{sin}}^{\mathrm{2}} \theta \ }##?

I also found a sample answer at the University of Utah. They have it in assignments 2 and 3. Maybe none of the students could do it before Feb 7. Their solutions agreed with mine (give or take a ##\pm##) but did not explicitly use the great general tensor coordinate transformation law; they use more informal methods. It also contains a few typos. The solution to part (b) is $${ds}^2=\left({{\mathrm{sinh}}^{\mathrm{2}} \chi \ }{{\mathrm{cos}}^{\mathrm{2}} \theta \ }+{{\mathrm{cosh}}^{\mathrm{2}} \chi \ }{{\mathrm{sin}}^{\mathrm{2}} \theta \ }\right)\left({\mathrm{d}\chi }^2+{\mathrm{d}\theta }^2\right)$$ which is their second-to-last line. Their last line is $$\Rightarrow {ds}^2=\left({{\mathrm{sinh}}^{\mathrm{2}} \chi \ }+{{\mathrm{sin}}^{\mathrm{2}} \theta \ }\right)\left({\mathrm{d}\chi }^2+{\mathrm{d}\theta }^2\right)$$ It remained a mystery [until Feb 2019] to me how to prove they are the same, but I could show it numerically with Desmos or in the graphic above.
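For the record, the two coefficients are equal by the hyperbolic identity ##{\mathrm{cosh}}^2\chi = 1+{\mathrm{sinh}}^2\chi## together with ##{\mathrm{cos}}^2\theta+{\mathrm{sin}}^2\theta=1##:

$$\begin{aligned} {\mathrm{sinh}}^2\chi\,{\mathrm{cos}}^2\theta+{\mathrm{cosh}}^2\chi\,{\mathrm{sin}}^2\theta &= {\mathrm{sinh}}^2\chi\,{\mathrm{cos}}^2\theta+\left(1+{\mathrm{sinh}}^2\chi\right){\mathrm{sin}}^2\theta\\ &= {\mathrm{sinh}}^2\chi\left({\mathrm{cos}}^2\theta+{\mathrm{sin}}^2\theta\right)+{\mathrm{sin}}^2\theta\\ &= {\mathrm{sinh}}^2\chi+{\mathrm{sin}}^2\theta \end{aligned}$$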
Read the full story (3 pages) at Ex 2.07 Kepler problem.pdf. Comparison of Utah and my solutions.
2018-08-25 06:58 Recent developments of the CERN RD50 collaboration / Menichelli, David (U. Florence (main) ; INFN, Florence)/CERN RD50 The objective of the RD50 collaboration is to develop radiation hard semiconductor detectors for very high luminosity colliders, particularly to face the requirements of the possible upgrade of the large hadron collider (LHC) at CERN. Some of the RD50 most recent results about silicon detectors are reported in this paper, with special reference to: (i) the progress in the characterization of lattice defects responsible for carrier trapping; (ii) charge collection efficiency of n-in-p microstrip detectors, irradiated with neutrons, as measured with different readout electronics; (iii) charge collection efficiency of single-type column 3D detectors, after proton and neutron irradiations, including position-sensitive measurement; (iv) simulations of irradiated double-sided and full-3D detectors, as well as the state of their production process. 2008 - 5 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 596 (2008) 48-52 In : 8th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 27 - 29 Jun 2007, pp.48-52

2018-08-25 06:58 Performance of irradiated bulk SiC detectors / Cunningham, W (Glasgow U.) ; Melone, J (Glasgow U.) ; Horn, M (Glasgow U.) ; Kazukauskas, V (Vilnius U.) ; Roy, P (Glasgow U.) ; Doherty, F (Glasgow U.) ; Glaser, M (CERN) ; Vaitkus, J (Vilnius U.) ; Rahman, M (Glasgow U.)/CERN RD50 Silicon carbide (SiC) is a wide bandgap material with many excellent properties for future use as a detector medium. We present here the performance of irradiated planar detector diodes made from 100-$\mu \rm{m}$-thick semi-insulating SiC from Cree. [...] 2003 - 5 p. - Published in : Nucl. Instrum. Methods Phys.
Res., A 509 (2003) 127-131 In : 4th International Workshop on Radiation Imaging Detectors, Amsterdam, The Netherlands, 8 - 12 Sep 2002, pp.127-131

2018-08-24 06:19 Measurements and simulations of charge collection efficiency of p$^+$/n junction SiC detectors / Moscatelli, F (IMM, Bologna ; U. Perugia (main) ; INFN, Perugia) ; Scorzoni, A (U. Perugia (main) ; INFN, Perugia ; IMM, Bologna) ; Poggi, A (Perugia U.) ; Bruzzi, M (Florence U.) ; Lagomarsino, S (Florence U.) ; Mersi, S (Florence U.) ; Sciortino, Silvio (Florence U.) ; Nipoti, R (IMM, Bologna) Due to its excellent electrical and physical properties, silicon carbide can represent a good alternative to Si in applications like the inner tracking detectors of particle physics experiments (RD50, LHCC 2002–2003, 15 February 2002, CERN, Geneva). In this work p$^+$/n SiC diodes realised on a medium-doped ($1 \times 10^{15} \rm{cm}^{-3}$), 40 $\mu \rm{m}$ thick epitaxial layer are exploited as detectors and measurements of their charge collection properties under $\beta$ particle radiation from a $^{90}$Sr source are presented. [...] 2005 - 4 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 546 (2005) 218-221 In : 6th International Workshop on Radiation Imaging Detectors, Glasgow, UK, 25-29 Jul 2004, pp.218-221

2018-08-24 06:19 Measurement of trapping time constants in proton-irradiated silicon pad detectors / Krasel, O (Dortmund U.) ; Gossling, C (Dortmund U.) ; Klingenberg, R (Dortmund U.) ; Rajek, S (Dortmund U.) ; Wunstorf, R (Dortmund U.) Silicon pad-detectors fabricated from oxygenated silicon were irradiated with 24-GeV/c protons with fluences between $2 \cdot 10^{13} \ n_{\rm{eq}}/\rm{cm}^2$ and $9 \cdot 10^{14} \ n_{\rm{eq}}/\rm{cm}^2$. The transient current technique was used to measure the trapping probability for holes and electrons. [...] 2004 - 8 p. - Published in : IEEE Trans. Nucl. Sci.
51 (2004) 3055-3062 In : 50th IEEE 2003 Nuclear Science Symposium, Medical Imaging Conference, 13th International Workshop on Room Temperature Semiconductor Detectors and Symposium on Nuclear Power Systems, Portland, OR, USA, 19 - 25 Oct 2003, pp.3055-3062

2018-08-24 06:19 Lithium ion irradiation effects on epitaxial silicon detectors / Candelori, A (INFN, Padua ; Padua U.) ; Bisello, D (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Schramm, A (Hamburg U., Inst. Exp. Phys. II) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys. II) ; Wyss, J (Cassino U. ; INFN, Pisa) Diodes manufactured on a thin and highly doped epitaxial silicon layer grown on a Czochralski silicon substrate have been irradiated by high energy lithium ions in order to investigate the effects of high bulk damage levels. This information is useful for possible developments of pixel detectors in future very high luminosity colliders, because these new devices exhibit superior radiation hardness compared to present-day silicon detectors. [...] 2004 - 7 p. - Published in : IEEE Trans. Nucl. Sci. 51 (2004) 1766-1772 In : 13th IEEE-NPSS Real Time Conference 2003, Montreal, Canada, 18 - 23 May 2003, pp.1766-1772

2018-08-24 06:19 Radiation hardness of different silicon materials after high-energy electron irradiation / Dittongo, S (Trieste U. ; INFN, Trieste) ; Bosisio, L (Trieste U. ; INFN, Trieste) ; Ciacchi, M (Trieste U.) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; D'Auria, G (Sincrotrone Trieste) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys.
II)
The radiation hardness of diodes fabricated on standard and diffusion-oxygenated float-zone, Czochralski and epitaxial silicon substrates has been compared after irradiation with 900 MeV electrons up to a fluence of $2.1 \times 10^{15} \ \rm{e}/\rm{cm}^2$. The variation of the effective dopant concentration, the current-related damage constant $\alpha$ and their annealing behavior, as well as the charge collection efficiency of the irradiated devices, have been investigated.
2004 - 7 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 530 (2004) 110-116
In : 6th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 29 Sep - 1 Oct 2003, pp.110-116

Recovery of charge collection in heavily irradiated silicon diodes with continuous hole injection / Cindro, V (Stefan Inst., Ljubljana) ; Mandić, I (Stefan Inst., Ljubljana) ; Kramberger, G (Stefan Inst., Ljubljana) ; Mikuž, M (Stefan Inst., Ljubljana ; Ljubljana U.) ; Zavrtanik, M (Ljubljana U.)
Holes were continuously injected into irradiated diodes by light illumination of the n$^+$-side. The charge of holes trapped in the radiation-induced levels modified the effective space charge. [...]
2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 518 (2004) 343-345
In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.343-345

First results on charge collection efficiency of heavily irradiated microstrip sensors fabricated on oxygenated p-type silicon / Casse, G (Liverpool U.) ; Allport, P P (Liverpool U.) ; Martí i Garcia, S (CSIC, Catalunya) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Turner, P R (Liverpool U.)
Heavy hadron irradiation leads to type inversion of n-type silicon detectors.
After type inversion, the charge collected at low bias voltages by silicon microstrip detectors is higher when read out from the n-side compared to p-side read-out. [...]
2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 518 (2004) 340-342
In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.340-342

Formation and annealing of boron-oxygen defects in irradiated silicon and silicon-germanium n$^+$–p structures / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Korshunov, F P (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) ; Abrosimov, N V (Unlisted, DE)
New findings on the formation and annealing of the interstitial boron-interstitial oxygen complex ($\rm{B_iO_i}$) in p-type silicon are presented. Different types of n$^+$–p structures irradiated with electrons and alpha-particles have been used for DLTS and MCTS studies. [...]
2015 - 4 p. - Published in : AIP Conf. Proc. 1583 (2015) 123-126
By Andy | February 7, 2005 | 0 Comment

I came across an interesting problem today. Consider a circle; inside the circle inscribe a triangle, inside that triangle inscribe a circle, then inside that circle inscribe a square, then another circle, then a pentagon and so on… The question is: does this sequence of shapes converge to a point, or a circle? And if it converges to a circle, what is its radius?

Now we can start to build up our solution as outlined below:

So we have now got a recursive definition for the radius of the circle, in the form: \(R_{n+1} = sin(\frac{n \pi - 2 \pi}{2n})R_{n}\)

If we now apply the subtraction formula for sine: \(sin(a-b) = sin(a)cos(b)-sin(b)cos(a)\)

Then we obtain: \(sin (\frac{n \pi - 2\pi}{2n}) = sin(\frac{\pi}{2} - \frac{\pi}{n}) = sin(\frac{\pi}{2})cos(\frac{\pi}{n}) - sin(\frac{\pi}{n})cos(\frac{\pi}{2})\)

But \(sin(\frac{\pi}{2}) = 1\) and \(cos(\frac{\pi}{2}) = 0\), so the above simplifies to \(sin(\frac{\pi}{2} - \frac{\pi}{n}) = cos(\frac{\pi}{n})\)

So our recursive formula is now: \(R_{n+1} = cos(\frac{\pi}{n})R_{n}\)

So we now have a formula for the radius of each circle based upon the radius of the last one. The radius of the nth circle (i.e. the circle inscribed in the polygon of n sides) can be expressed as: \(R_{n} = \displaystyle\prod_{t=3}^{n} cos(\frac{\pi}{t}) \, R_0\) where \(R_0\) represents the initial radius. We start at \(n=3\) because this represents the first polygon – the triangle.

The next question, though, is: does this tend to a limit? Using Excel I checked the first 5000 terms of \(\displaystyle\prod_{t=3}^{\infty} cos(\frac{\pi}{t})\) and it seems to tend to 0.115038839… Next I need to prove that the product actually tends to a non-zero limit! But that will have to wait till I have worked out how to prove it!!

The trick is to convert the infinite product into an infinite sum… To do this we need to find a way to turn multiplication into addition. What do we know that does this? The LOG function!!
So: \(log \Bigg[ \displaystyle\prod_{n=3}^{\infty} cos(\frac{\pi}{n}) \Bigg] = \displaystyle\sum_{n=3}^{\infty} log \Big[ cos(\frac{\pi}{n}) \Big]\)

So now that we have an infinite sum, what do we need to do to show that it converges to a limit? After much struggling and racking of my brain, I suddenly remembered the Integral Test. This tells us that if our function \(f\) is:

* Continuous
* Positive
* Decreasing

then, if the integral \(\int_1^\infty f(x) \, dx\) is convergent, the sum \(\sum_{1}^{\infty} a_i\), where \(a_i = f(i)\), is also convergent.

So what do we need to do? Well, our function \(f(x) = log(cos(\frac{\pi}{x}))\) is continuous, but it is negative and increasing… So let our new function be \(g(x) = -f(x) = -log(cos(\frac{\pi}{x}))\). Now \(g(x)\) is continuous, positive and decreasing, which means we can apply the Integral Test! We could fudge our function so that we evaluated it between \(1\) and \(\infty\), but this wouldn't affect the result. So instead we will evaluate it between \(3\) and \(\infty\).

Using my new best friend Mathematica (downloaded a 15 day trial from their website) I was able to evaluate the definite integral \(\displaystyle\int_3^\infty -log(cos(\frac{\pi}{x})) \, dx \approx 1.76859\), which is convergent!

By the *Integral Test*, this tells us our sum \(\displaystyle\sum_{n=3}^{\infty} -log \Big[ cos(\frac{\pi}{n}) \Big]\) is convergent, i.e. the sum of logs is finite. Hence our initial product \(\displaystyle\prod_{n=3}^{\infty} cos(\frac{\pi}{n})\) must converge to a non-zero value (the product is \(e\) raised to a finite number; it could only be zero if the sum of logs diverged to \(-\infty\)).

So we have now proved that if we start with a circle and inscribe a triangle, then a circle, then a square,… then this does not tend to a point. Instead it tends to a circle with a non-zero radius.
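Both numerical claims here are easy to reproduce without Mathematica. The sketch below (Python instead of Mathematica, standard library only; the helper names are mine) approximates the integral with Simpson's rule after the substitution \(u = 1/x\), which maps \([3,\infty)\) to \((0, 1/3]\), and also recovers the product by exponentiating a long partial sum of the logs:

```python
import math

def h(u):
    # integrand of ∫₃^∞ -log(cos(pi/x)) dx after substituting x = 1/u
    if u == 0.0:
        return math.pi ** 2 / 2.0  # limiting value as u -> 0
    return -math.log(math.cos(math.pi * u)) / u ** 2

def simpson(f, a, b, n):
    # composite Simpson's rule; n must be even
    step = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * step)
    return s * step / 3.0

integral = simpson(h, 0.0, 1.0 / 3.0, 2000)
partial_sum = sum(-math.log(math.cos(math.pi / n)) for n in range(3, 200000))

print(integral)                # ≈ 1.7686, in line with the Mathematica value 1.76859
print(math.exp(-partial_sum))  # ≈ 0.11494, the product recovered from the sum of logs
```

Exponentiating the (negated) partial sum gives back the partial product, which is exactly the multiplication-into-addition trick described above.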
To evaluate this radius numerically we need to use a computer, as it is not possible (as far as I am aware) to express the constant in terms of any other nice constants such as \(\pi, e\)… Using *Mathematica* again it was possible to evaluate it more accurately as: 0.1149420448532962

So the limit of the product, which is approximated by the above number, represents the ratio of the radius of the circle marking out the limit of this geometric sequence to the radius of the outermost circle… If you spot any mistakes or have anything to add stick it in the comments!
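As a cross-check of the quoted value, the partial products can be computed directly; a minimal Python sketch (the function name is mine):

```python
import math

def inscribed_radius(n, r0=1.0):
    """R_n = R_0 * prod_{t=3}^{n} cos(pi/t): the radius left after
    inscribing polygons with 3, 4, ..., n sides (circles in between)."""
    r = r0
    for t in range(3, n + 1):
        r *= math.cos(math.pi / t)
    return r

for n in (5000, 50000, 500000):
    print(n, inscribed_radius(n))
# the partial products decrease monotonically toward 0.11494204...
```

The tail of the product shrinks like \(exp(-\pi^2/(2n))\), so roughly half a million factors already pin down the constant to several decimal places.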
The condition on $n\in\mathbb N^+$ for $$\forall p\in\mathbb P:\, p^2+2n\notin\mathbb P$$ seems to be: $$n\equiv 1\pmod 3\wedge 9+2n\notin\mathbb P$$ I would like to see a proof or a counter-example.

If $\,p\in\mathbb P\,$ and $\,p\neq3$, then $$p\equiv1\ \ ({\rm mod}\ 3)\quad\text{or}\quad p\equiv2\ \ ({\rm mod}\ 3)$$ Squaring either congruence, we have $$p^2\equiv1\ \ ({\rm mod}\ 3)$$ Now, if $\ \,n\equiv1\ \ ({\rm mod}\ 3)$, $\ \ $then $$p^2+2n\equiv1+2\equiv0\ \ ({\rm mod}\ 3)$$ Thus $\,p^2+2n\,$ is divisible by $\,3\,$ and greater than $\,3$, which means that $\,p^2+2n\notin\mathbb P$. If instead $\,p=3$, then $\,p^2+2n\notin\mathbb P\,$ requires $$\,p^2+2n=9+2n\notin\mathbb P$$ Thus, we have proven that if $$n\equiv1\ \ ({\rm mod}\ 3)\quad\text{and}\quad9+2n\notin\mathbb P$$ then $\,p\in\mathbb P\ \Rightarrow\ p^2+2n\notin\mathbb P\,$ (this is the sufficiency direction; the converse is left open).
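The sufficiency direction is easy to exercise numerically. The snippet below (Python; `is_prime` and `condition` are my own helper names) lists the first few $n$ satisfying the condition and confirms that $p^2+2n$ is composite for all tested primes $p$:

```python
def is_prime(m):
    # trial division; adequate for the small numbers used here
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def condition(n):
    # n ≡ 1 (mod 3) and 9 + 2n not prime
    return n % 3 == 1 and not is_prime(9 + 2 * n)

primes = [p for p in range(2, 1000) if is_prime(p)]
satisfying = [n for n in range(1, 300) if condition(n)]
print(satisfying[:6])  # → [13, 28, 34, 43, 55, 58]

# For every such n, p^2 + 2n is composite for all tested primes p:
assert all(not is_prime(p * p + 2 * n) for n in satisfying for p in primes)
```

Note that the converse cannot be settled this way: failing to find a prime $p^2+2n$ for some $n$ outside the condition only bounds the search, it proves nothing.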
I can provide an approach by finite elements and an application of the functional calculus of selfadjoint operators. Background The spectrum of the Laplacian on a bounded domain $\varOmega$ with sufficiently smooth boundary subject to homogeneous Dirichlet boundary conditions is discrete. By the spectral theorem, there are eigenfunctions $e_i \in H^1_0(\varOmega)$ and $\lambda_i \in \mathbb{R}$ with $-\Delta \, e_i = \lambda_i \, e_i$ and $0 < \lambda_1 \leq \lambda_2 \leq \dotsc$ and $$ \int_\varOmega e_i(x) \, e_j(x) \, \operatorname{d} x = \delta_{ij}.$$By functional calculus, the solution $u \in H^{2s}(\varOmega) \cap H^{s}_0(\varOmega)$ to $(-\Delta)^s \,u = f$ for $f \in L^2(\varOmega)$ can be represented by $$u = \sum_{i=1}^\infty \left( \lambda_i^{-s} \, e_i \cdot \int_\varOmega f(x) \, e_i(x) \, \operatorname{d} x \right).$$ Weyl's law states that the eigenvalues behave as $\lambda_i \sim i^\frac{2}{\operatorname{dim}(\varOmega)}$ for $i \to \infty$. (You can get an idea of the law by performing a $\sin$-Fourier transform on the eigenvalue equation $- \Delta v = \lambda \, v$ for functions on the square $[0, 2\pi] \times [0, 2\pi]$.) Thus, it is meaningful to use a truncated expansion in order to approximate $u$. Our aim is now to use finite elements to approximate the eigenfunctions $e_i$ and the eigenvalues $\lambda_i$. Implementation First, we create a nice domain. Disks are boring so I use the following starfish: R = DiscretizeRegion[ BoundaryMeshRegion[ Map[ t \[Function] (2 + Cos[5 t])/3 {Cos[t], Sin[t]}, Most@Subdivide[0., 2. Pi, 2000]], Line[Partition[Range[2000], 2, 1, 1]] ], MaxCellMeasure -> 0.001, MeshQualityGoal -> "Maximal" ] Next, we have to set up the finite element method. This is somewhat a mess but we cannot apply NDSolve directly; we need access to mass matrix and stiffness matrix of the system. Note that I use Sin[2 x] + Cos[x + 3 y] as right hand side $f$. 
Needs["NDSolve`FEM`"]

(*Initialization of Finite Element Method*)
Rdiscr = ToElementMesh[R, "MeshOrder" -> 1];
vd = NDSolve`VariableData[{"DependentVariables", "Space"} -> {{u}, {x, y}}];
sd = NDSolve`SolutionData[{"Space"} -> {Rdiscr}];
cdata = InitializePDECoefficients[vd, sd,
  "DiffusionCoefficients" -> {{-IdentityMatrix[2]}},
  "MassCoefficients" -> {{1}},
  "LoadCoefficients" -> {{Sin[2 x] + Cos[x + 3 y]}}
  ];
bcdata = InitializeBoundaryConditions[vd, sd, {{DirichletCondition[u[x, y] == 0., True]}}];
mdata = InitializePDEMethodData[vd, sd];

(*Discretization*)
dpde = DiscretizePDE[cdata, mdata, sd];
dbc = DiscretizeBoundaryConditions[bcdata, mdata, sd];
{load, stiffness, damping, mass} = dpde["All"];
DeployBoundaryConditions[{load, stiffness}, dbc];

Having mass and stiffness matrix, we have to reduce them to the interior degrees of freedom. For FEM of order 1, the Dirichlet boundary conditions are deployed into the stiffness matrix by replacing rows that correspond to boundary vertices by the respective row of an identity matrix. Moreover, we exploit that Mathematica sorts boundary vertices in front.

(*Finding interior degrees of freedom*)
intplist = Min[UpperTriangularize[stiffness, 1]["NonzeroPositions"][[All, 1]]] ;; ;
stiffness2 = stiffness[[intplist, intplist]];
mass2 = mass[[intplist, intplist]];
load2 = load[[intplist]];

We need eigenvalues and eigenvectors. Since calculating them is expensive, we restrict our attention to the most relevant 5 percent.
(*Spectral decomposition and functional calculus*)
s = 3/4;
reducedmodeldimension = Floor[Length[stiffness2] 0.05];
{Λ, U} = Reverse /@ Eigensystem[{stiffness2, mass2}, -reducedmodeldimension,
     Method -> {"Arnoldi", "MaxIterations" -> 4000}]; // AbsoluteTiming
U = Map[u \[Function] u/Sqrt[u.mass2.u], U];

The last step is necessary to ensure that the row vectors of U (representing the eigenfunctions of $-\Delta$) form an $L^2$-orthonormal system (the $L^2$-inner product being represented by the reduced mass matrix mass2). For example, we can draw the first 6 eigenfunctions like this:

GraphicsGrid[
 Partition[
  Table[
   eigenvec = ConstantArray[0, Dimensions[stiffness][[2]]];
   eigenvec[[intplist]] = Flatten[U[[i]]];
   eigenfun = ElementMeshInterpolation[{Rdiscr}, eigenvec];
   Plot3D[eigenfun[x, y], {x, y} ∈ R], {i, 1, 6}], 3], ImageSize -> Large]

For the hidden operator L as defined in the comment below, we can easily obtain its $L^2$-Moore-Penrose pseudoinverse Lpinv by

(*L=mass2.Transpose[Λ^s U].(U.mass2);*)
Lpinv = b \[Function] Transpose[U].(Λ^-s (U.b));

Next, we solve the reduced equations and write the result into the interior degrees of freedom of a zero vector. Finally, ElementMeshInterpolation provides us with an InterpolatingFunction that can be easily plotted.

solution = ConstantArray[0, Dimensions[stiffness][[2]]];
solution[[intplist]] = Flatten[Lpinv[load2]]; // AbsoluteTiming
solfun = ElementMeshInterpolation[{Rdiscr}, solution];
Plot3D[solfun[x, y], {x, y} ∈ R]

As a plausibility check: this is the result for $(-\Delta)^s u = 1$ (upon revisiting this post, I was puzzled by the solution being asymmetric, while the OP asked for the solution of $(-\Delta)^s u = 1$, which ought to have the same symmetries as the domain):

Discussion

This method is however very slow: It has complexity $O(N^3)$, where $N$ is the number of interior vertices. Moreover, this method is not very accurate.
Kernel-based methods (e.g., convolution with the fundamental solution) might be more accurate and might have complexity $O(N^2)$, but they might be limited to special domains where the fundamental solutions are known. Speaking about special domains: For the disk $B(0;1) \subset \mathbb{R}^2$, the surface of the sphere $S^2 \subset \mathbb{R}^3$, the unit ball $B(0;1) \subset \mathbb{R}^3$, the standard tori $\mathbb{T}^n = (S^1)^n$, and all cube-like domains like $Q = \prod_{i=1}^n [a_i,b_i]$, the eigenvectors and eigenvalues are well studied. E.g., using FFT will speed up the process tremendously for $\mathbb{T}^n$ and $Q$. One could also circumvent the spectral decomposition step by directly assembling the Gagliardo bilinear form $$(u,v) \mapsto \pi^{-(2 s + n/2)}\frac{\Gamma(n/2+s)}{\Gamma(-s)}\int_\varOmega\int_\varOmega \frac{(u(x)-u(y))\,(v(x)-v(y))}{| x-y |^{n + 2 s}}\, \operatorname{d}x \, \operatorname{d}y$$ instead of the Laplacian stiffness matrix ($n=2$) (I am not exactly sure about the multiplicative constant in this formula; I got it from Hitchhiker's Guide to Fractional Sobolev Spaces, p. 8). This would require (partially singular) double integrals over $O(N^2)$ pairs of triangle elements. Moreover, that matrix is dense and needs $O(N^3)$ for factorization. Maybe one can apply FFT on a bounding box of the domain in order to construct a reasonable preconditioner for the conjugate gradient method.
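The functional-calculus formula at the top can be illustrated independently of FEM in one dimension, where the Dirichlet eigenpairs on $(0,1)$ are known in closed form: $e_i(x) = \sqrt{2}\sin(i\pi x)$, $\lambda_i = (i\pi)^2$. The Python sketch below (my own translation of the truncated expansion, not part of the Mathematica code above) solves $(-\Delta)^s u = 1$ with the same $s = 3/4$:

```python
import math

s = 0.75   # fractional power, matching the s = 3/4 of the example above
N = 2000   # number of eigenmodes kept in the truncated expansion

def u(x):
    """Truncated spectral solution of (-Laplace)^s u = 1 on (0,1) with
    u(0) = u(1) = 0, using the exact Dirichlet eigenpairs
    e_i(x) = sqrt(2) sin(i pi x), lambda_i = (i pi)^2."""
    total = 0.0
    for i in range(1, N + 1):
        lam = (i * math.pi) ** 2
        # L^2 inner product <f, e_i> for the right-hand side f = 1
        c = math.sqrt(2) * (1 - math.cos(i * math.pi)) / (i * math.pi)
        total += lam ** (-s) * c * math.sqrt(2) * math.sin(i * math.pi * x)
    return total

print(u(0.5))            # the maximum, attained at the midpoint
print(u(0.25), u(0.75))  # equal by symmetry about x = 1/2
```

This is exactly the sum $u = \sum_i \lambda_i^{-s} \, e_i \, \langle f, e_i\rangle$ from the Background section, with the FEM eigenpairs replaced by analytic ones; the solution inherits the symmetry of the domain, as it should.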
Element structure of general affine group of degree two over a finite field

Latest revision as of 22:50, 1 March 2012

This article gives specific information, namely, element structure, about a family of groups, namely: general affine group of degree two. View element structure of group families | View other specific information about general affine group of degree two

This article gives the element structure of the general affine group of degree two over a finite field. Similar structure works over an infinite field or a field of infinite characteristic, with suitable modification. For more on that, see element structure of general affine group of degree two over a field. The discussion here builds upon the discussion of element structure of general linear group of degree two over a finite field.

Summary

Item | Value
order | <math>q^2(q^2 - 1)(q^2 - q)</math>
exponent | ?
number of conjugacy classes | <math>q^2 + q - 1</math>

Particular cases

<math>q</math> (field size) | <math>p</math> (underlying prime, field characteristic) | general affine group | order of the group (= <math>q^2(q^2 - 1)(q^2 - q)</math>) | number of conjugacy classes (= <math>q^2 + q - 1</math>) | element structure page
2 | 2 | symmetric group:S4 | 24 | 5 | element structure of symmetric group:S4
3 | 3 | general affine group:GA(2,3) | 432 | 11 | element structure of general affine group:GA(2,3)
4 | 2 | general affine group:GA(2,4) | 2880 | 19 |
5 | 5 | general affine group:GA(2,5) | 12000 | 29 |

Conjugacy class structure

There is a total of <math>q^2(q^2 - 1)(q^2 - q)</math> elements, and there are <math>q^2 + q - 1</math> conjugacy classes of elements. The conjugacy class structure is closely related to that of <math>GL(2,q)</math> -- see Element structure of general linear group of degree two over a finite field#Conjugacy class structure.

We describe a generic element of the group as an affine map <math>x \mapsto Ax + v</math>, where <math>A \in GL(2,q)</math> is the dilation component and <math>v \in \mathbb{F}_q^2</math> is the translation component. Consider the quotient mapping to <math>GL(2,q)</math>, which sends the generic element to <math>A</math>.
Under this mapping, the following is true:

* For those conjugacy classes of <math>GL(2,q)</math> comprising elements that do not have 1 as an eigenvalue, the full inverse image of the conjugacy class is a single conjugacy class in the affine group. In other words, the translation component does not matter.
* For those conjugacy classes of <math>GL(2,q)</math> comprising elements that do have 1 as an eigenvalue, the conjugacy class splits into two depending on whether <math>v</math> is in the image of <math>A - 1</math>.

Nature of conjugacy class | Characteristic polynomial of <math>A</math> | Minimal polynomial of <math>A</math> | Size of conjugacy class | Number of such conjugacy classes | Is semisimple? | Is diagonalizable over <math>\mathbb{F}_q</math>?
<math>A</math> is the identity, <math>v = 0</math> | <math>(x - 1)^2</math> | <math>x - 1</math> | 1 | 1 | Yes | Yes
<math>A</math> is the identity, <math>v \ne 0</math> | <math>(x - 1)^2</math> | <math>x - 1</math> | <math>q^2 - 1</math> | 1 | Yes | Yes
<math>A</math> is diagonalizable over <math>\mathbb{F}_q</math> with equal diagonal entries not equal to 1, hence a scalar; the value of <math>v</math> does not affect the conjugacy class | <math>(x - a)^2</math>, <math>a \in \mathbb{F}_q^\ast \setminus \{ 1 \}</math> | <math>x - a</math> | <math>q^2</math> | <math>q - 2</math> | Yes | Yes
<math>A</math> is diagonalizable over <math>\mathbb{F}_{q^2}</math>, not over <math>\mathbb{F}_q</math>; necessarily no repeated eigenvalues; the value of <math>v</math> does not affect the conjugacy class | <math>x^2 - ax + b</math>, irreducible | same as characteristic polynomial | <math>q^3(q - 1)</math> | <math>q(q - 1)/2</math> | Yes | No
<math>A</math> has a Jordan block of size two, with repeated eigenvalue equal to 1, <math>v</math> in the image of <math>A - 1</math> | <math>(x - 1)^2</math> | same as characteristic polynomial | | 1 | No | No
<math>A</math> has a Jordan block of size two, with repeated eigenvalue equal to 1, <math>v</math> not in the image of <math>A - 1</math> | <math>(x - 1)^2</math> | same as characteristic polynomial | | 1 | No | No
<math>A</math> has a Jordan block of size two, with repeated eigenvalue not equal to 1 | <math>(x - a)^2</math>, <math>a \in \mathbb{F}_q^\ast \setminus \{ 1 \}</math> | same as characteristic polynomial | | | No | No
<math>A</math> diagonalizable over <math>\mathbb{F}_q</math> with distinct diagonal entries, one of which is 1, <math>v</math> in the image of <math>A - 1</math> | <math>x^2 - (\mu + 1)x + \mu</math> | same as characteristic polynomial | | | Yes | Yes
<math>A</math> diagonalizable over <math>\mathbb{F}_q</math> with distinct diagonal entries, one of which is 1, <math>v</math> not in the image of <math>A - 1</math> | <math>x^2 - (\mu + 1)x + \mu</math> | same as characteristic polynomial | <math>q(q + 1)(q^2 - q)</math> | <math>q - 2</math> | Yes | Yes
<math>A</math> diagonalizable over <math>\mathbb{F}_q</math> with distinct diagonal entries, neither of which is 1 | <math>x^2 - (\lambda + \mu)x + \lambda \mu</math> | same as characteristic polynomial | <math>q^3(q + 1)</math> | <math>(q - 2)(q - 3)/2</math> | Yes | Yes
Total | NA | NA | | <math>q^2 + q - 1</math> | |
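The order and class-count formulas can be checked by brute force for tiny fields. Below is a small Python sketch (helper names are mine; it assumes <math>q</math> prime) that builds <math>GA(2,q)</math> as pairs <math>(A, v)</math> acting by <math>x \mapsto Ax + v</math> and counts conjugacy classes directly:

```python
from itertools import product

def ga2(q):
    """Elements of GA(2,q) as (A, v): x -> A.x + v over Z/q, q prime."""
    mats = [A for A in product(range(q), repeat=4)
            if (A[0] * A[3] - A[1] * A[2]) % q != 0]  # det(A) != 0
    return [(A, v) for A in mats for v in product(range(q), repeat=2)]

def compose(g, h, q):
    # (A, v) o (B, w) : x -> A(Bx + w) + v = (AB)x + (Aw + v)
    (a, b, c, d), v = g
    (e, f, gg, hh), w = h
    AB = ((a * e + b * gg) % q, (a * f + b * hh) % q,
          (c * e + d * gg) % q, (c * f + d * hh) % q)
    Awv = ((a * w[0] + b * w[1] + v[0]) % q,
           (c * w[0] + d * w[1] + v[1]) % q)
    return (AB, Awv)

def inverse(g, q):
    # inverse affine map: x -> A^{-1} x - A^{-1} v
    (a, b, c, d), v = g
    di = pow((a * d - b * c) % q, -1, q)  # modular inverse of det (Python 3.8+)
    Ai = ((d * di) % q, (-b * di) % q, (-c * di) % q, (a * di) % q)
    w = ((-(Ai[0] * v[0] + Ai[1] * v[1])) % q,
         (-(Ai[2] * v[0] + Ai[3] * v[1])) % q)
    return (Ai, w)

def conjugacy_classes(q):
    G = ga2(q)
    seen, classes = set(), 0
    for h in G:
        if h in seen:
            continue
        classes += 1
        for g in G:  # orbit of h under conjugation g h g^{-1}
            seen.add(compose(compose(g, h, q), inverse(g, q), q))
    return len(G), classes

for q in (2, 3):
    print(q, conjugacy_classes(q))  # order q^2(q^2-1)(q^2-q), classes q^2+q-1
```

For q = 2 this reproduces the symmetric group:S4 row (order 24, 5 classes), and for q = 3 the GA(2,3) row (order 432, 11 classes).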
If the shell and its charge distribution are spherically symmetric and static (which your question does imply when you say "uniform charge"), and if electric field lines begin and end on charges, then we know that any electric field that might be present inside the shell must be directed radially (in or out, i.e. $E_{\theta} = E_{\phi}=0$). From there, a simple application of Gauss's law, using a spherical surface centered on the center of the shell tells you that the radial electric field component must also be zero at any radial coordinate $r$ within the sphere.$$ \oint \vec{E} \cdot d\vec{A} = \frac{Q_{enclosed}}{\epsilon_0} = 0$$$$ 4\pi r^2 E_r = 0$$$$\rightarrow E_r = 0 $$ Therefore, we can say that at any point within the sphere (defined by $r$ and two angular coordinates) that $E_r = E_{\theta} = E_{\phi}=0$ and so the total electric field at any point (inside the sphere) is zero, not just the centre.
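If one wants to see the same conclusion emerge from a direct Coulomb sum rather than from Gauss's law, the axial field of a uniform shell reduces to a one-dimensional integral over the polar angle, which is easy to evaluate numerically. A throwaway sketch (my own toy normalisation, with k = Q = R = 1):

```python
import math

def shell_E_axial(z0, R=1.0, Q=1.0, n=40000):
    """Axial electric field of a uniformly charged spherical shell
    (radius R, total charge Q, Gaussian units k = 1) at distance z0
    from the centre, by direct integration of Coulomb's law over the
    shell; midpoint rule in the polar angle theta.  The transverse
    components cancel by symmetry, so only the z-component is summed."""
    sigma = Q / (4.0 * math.pi * R * R)        # surface charge density
    h = math.pi / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * h
        dz = z0 - R * math.cos(theta)           # axial separation
        r2 = R * R + z0 * z0 - 2.0 * R * z0 * math.cos(theta)
        total += math.sin(theta) * dz / r2 ** 1.5
    return sigma * 2.0 * math.pi * R * R * total * h

E_inside = shell_E_axial(0.5)    # interior point: shell theorem says 0
E_outside = shell_E_axial(2.0)   # exterior point: behaves like Q / z0^2
```

The interior value vanishes to within quadrature error, while the exterior value matches the point-charge field Q / z0^2 = 0.25, exactly as the Gauss's-law argument predicts.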
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ... Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-07-10) The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T} \rangle$ for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
Search Now showing items 1-10 of 26 Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ... 
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ... $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ... Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ... Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ... Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
Trying to evaluate this sum: $$ S=\sum_{n=1}^\infty \ln(p_n^2)\,K_1(\ln(p_n^2)). $$ Here $p_n$ is the $n$-th twin prime and $K_1$ is the modified Bessel function of the second kind. So one should be summing over $p=3,5,5,7,11,13,17,19,\ldots$ ($5$ is double-counted). I want to show that $S<B$, where $B\approx 1.902$ is Brun's constant.
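For numerical experiments, $K_1$ can be computed from its integral representation $K_1(x)=\int_0^\infty \cosh t\; e^{-x\cosh t}\,dt$, so partial sums of $S$ need nothing beyond the standard library. A rough sketch (the twin-prime list is typed in by hand and covers only the first few terms):

```python
import math

def k1(x, t_max=20.0, n=20000):
    """Modified Bessel function K_1 via the integral representation
    K_1(x) = integral_0^inf cosh(t) e^{-x cosh t} dt
    (trapezoidal rule; adequate for x >= 1)."""
    h = t_max / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        c = math.cosh(t)
        arg = -x * c
        if arg < -700:            # integrand has underflowed to zero
            continue
        w = 0.5 if i in (0, n) else 1.0
        s += w * c * math.exp(arg)
    return s * h

# First few twin primes, with 5 double-counted as in the question
twin_primes = [3, 5, 5, 7, 11, 13, 17, 19, 29, 31, 41, 43, 59, 61, 71, 73]
# ln(p^2) K_1(ln(p^2)) = 2 ln(p) K_1(2 ln(p))
partial_S = sum(2 * math.log(p) * k1(2 * math.log(p)) for p in twin_primes)
```

The partial sum over these terms comes out a little below 0.6, comfortably under $B \approx 1.902$; of course this says nothing rigorous about the full tail.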
Thank you all ahead of time for your guidance, it is much appreciated. Currently I'm solving some problems out of Purcell's Electricity and Magnetism. In problem 1.5, the reader is asked to consider a system of four charges of equal magnitude q forming a square quadrupole with side length $2a$, with another charge $-Q$ fixed at the center (origin) of the quadrupole. The goal of the problem is to determine whether the charges are in a stable or unstable equilibrium by finding the change in potential energy $U$ in response to some displacement of the central charge $-Q$ to a point $(x, y)$. In this case, a decrease in energy would indicate an unstable equilibrium. The distance between the corner charges and the central charge is given by \begin{equation} \sqrt{(\pm a - x)^2 + (\pm a - y)^2} \end{equation} and each corner charge is separated from the two charges along the sides of the quadrupole by a distance $2a$ and from the charge across the diagonal by $2a\sqrt{2}$. The potential energy of this system of charges is given by \begin{equation} U = \frac{1}{2}\sum_{i\ =\ 1}^N\sum_{j\ \neq \ i }k\frac{q_iq_j}{r_{ij}} \end{equation} which becomes \begin{equation} U(x, y) = kq\left(\frac{2q}{a}+\frac{q}{a\sqrt{2}} -\frac{Q}{a}\left(2\sqrt2\ +\ \frac{x^2 + y^2}{2\sqrt{2}a^2} \right) \right ) \end{equation} Yet in the solution to the problem, Purcell gives only the last term which accounts for the potential energy between the corner charges $q$ and the central charge $-Q$: \begin{equation}U(x, y) = -k\frac{Qq}{a}\left(2\sqrt2\ +\ \frac{x^2 + y^2}{2\sqrt{2}a^2} \right)\end{equation} My confusion is this: why is it not necessary to account for the potential energy between each of the corner charges in the term for the total potential energy of the system? As far as I can tell, both answers illustrate that there should be a decrease in the potential energy $U$ with any displacement in $x$ and $y$, which indicates that the equilibrium is unstable. 
But why does Purcell not account for the energy required to assemble the quadrupole?
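Whichever convention one adopts, the displacement-dependent part can be checked numerically against Purcell's expansion. A throwaway sketch (my own normalisation, k = q = Q = a = 1) comparing the exact corner sum $\sum_i 1/r_i$ with the quadratic form $2\sqrt{2} + (x^2+y^2)/(2\sqrt{2})$:

```python
import math

def corner_sum(x, y, a=1.0):
    """Sum of 1/r from the four corner charges at (+-a, +-a)
    to the displaced centre charge at (x, y)."""
    return sum(1.0 / math.hypot(sx * a - x, sy * a - y)
               for sx in (-1, 1) for sy in (-1, 1))

def quadratic_approx(x, y, a=1.0):
    """Second-order expansion of the same sum, as in Purcell's answer."""
    return (2.0 * math.sqrt(2.0) / a
            + (x * x + y * y) / (2.0 * math.sqrt(2.0) * a ** 3))

x, y = 0.01, 0.007           # a small in-plane displacement
exact = corner_sum(x, y)
approx = quadratic_approx(x, y)
```

The corner sum exceeds $2\sqrt{2}$ for any small in-plane displacement, so the interaction term $-kQq\sum_i 1/r_i$ decreases, confirming the instability either way; the corner-corner separations are fixed while only $-Q$ moves, so those terms contribute a constant to $U$ and drop out of the change in energy, which is why Purcell omits them.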
Measurement of electrons from beauty hadron decays in pp collisions at √s = 7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT < 8 GeV/c with the ALICE experiment at the CERN LHC in ... Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... Elliptic flow of muons from heavy-flavour hadron decays at forward rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (Elsevier, 2016-02) The elliptic flow, $v_{2}$, of muons from heavy-flavour hadron decays at forward rapidity ($2.5 < y < 4$) is measured in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The scalar ... Centrality dependence of the pseudorapidity density distribution for charged particles in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2013-11) We present the first wide-range measurement of the charged-particle pseudorapidity density distribution, for different centralities (the 0-5%, 5-10%, 10-20%, and 20-30% most central events) in Pb-Pb collisions at $\sqrt{s_{NN}}$ ... Beauty production in pp collisions at √s = 2.76 TeV measured via semi-electronic decays (Elsevier, 2014-11) The ALICE Collaboration at the LHC reports measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y| < 0.8 and transverse momentum 1 < pT < 10 GeV/c, in pp ...
Finding Tsirelson bounds for Bell inequalities is a well-loved problem in quantum information theory. A famous case where it is still open is for the I3322 inequality. In this paper Pál and Vértesi conjectured that it is the limit of the sequence $$a_n = \max_{c_i\in[0,1]} \lambda(M_{n})$$ where $\lambda(\cdot)$ is the maximal eigenvalue, and $M_{n}$ is the $(n+1)\times (n+1)$ tridiagonal matrix $$\begin{pmatrix} c_0-c_0^2 & \frac1{\sqrt2}\sqrt{1-c_0^2} & & \\ \frac1{\sqrt2}\sqrt{1-c_0^2} & c_1c_0+\frac{c_1-c_0}{2} & \frac12\sqrt{1-c_1^2}\\ & \frac12\sqrt{1-c_1^2}& \ddots & \ddots\\ & & \ddots & \ddots & \frac12\sqrt{1-c_{n-1}^2}\\ & & & \frac12\sqrt{1-c_{n-1}^2} & c_nc_{n-1}+\frac{c_n-c_{n-1}}{2} \end{pmatrix}$$ Can one get an analytic expression for it? This expression is nice for calculating $a_n$ numerically, but solving it exactly is a nightmare. I managed to do it for $a_1$, Mathematica did it for $a_2$, but after that there are only numerics. The first few values are $a_1 = \frac{1}{16} \left(5 + 5 \sqrt{5}+\sqrt{50 \sqrt{5}-106}\right) \approx 1.161835$ $a_2 \approx 1.224739 $ $a_3 \approx 1.238024 $ $a_{100} \approx 1.250875$ Update: The asymptotic behaviour of the optimal solutions seems to be rather simple. Ignoring boundary effects, numerical evidence suggests that the $c_i$ converge quickly to a limiting value $C \approx 0.878273$, and that the coefficients of the optimal eigenstate decay exponentially with $i$. Assuming that both these behaviours do happen, elementary arguments show that $$\lim_{n\to\infty} a_n = \frac{4C^4-C^2+1}{4C^2-1}$$so the problem reduces to calculating $C$.
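The $a_1$ case is small enough to check by brute force: the maximal eigenvalue of the $2\times 2$ matrix has a closed form, and a grid search over $(c_0, c_1) \in [0,1]^2$ reproduces the quoted value $a_1 \approx 1.161835$. A quick sketch:

```python
import math

def lambda_max(c0, c1):
    """Largest eigenvalue of the 2x2 matrix M_1 from the conjecture:
    [[c0 - c0^2,           sqrt((1 - c0^2)/2)],
     [sqrt((1 - c0^2)/2),  c1*c0 + (c1 - c0)/2]]"""
    a11 = c0 - c0 * c0
    a22 = c1 * c0 + (c1 - c0) / 2.0
    b = math.sqrt((1.0 - c0 * c0) / 2.0)
    tr, det = a11 + a22, a11 * a22 - b * b
    return 0.5 * (tr + math.sqrt(tr * tr - 4.0 * det))

steps = 500
best = max(lambda_max(i / steps, j / steps)
           for i in range(steps + 1) for j in range(steps + 1))

a1_closed_form = (5 + 5 * math.sqrt(5)
                  + math.sqrt(50 * math.sqrt(5) - 106)) / 16
```

The grid maximum agrees with the closed form to well within the grid resolution; for larger $n$ the same brute force needs an eigenvalue routine and a real optimiser, which is where the numerics in the question take over.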
Homoclinic tangencies in $R^n$ 1. Department of Mathematics, 520 Portola Plaza, Box 951555, University of California, Los Angeles, CA 90095-1555, United States The stable and unstable invariant manifolds of a fixed point $p$ are assumed to have a degenerate homoclinic crossing at a point $B\ne p$, i.e., they cross at $B$ tangentially with a finite order of contact. It is shown that, subject to $C^1$-linearizability and certain conditions on the invariant manifolds, a transverse homoclinic crossing will arise arbitrarily close to $B$. This proves the existence of a horseshoe structure arbitrarily close to $B$, and extends a similar planar result of Homburg and Weiss [10]. Keywords: Homoclinic tangency, horseshoe structure, order of contact, $\lambda$-Lemma, invariant manifolds. Mathematics Subject Classification: 37B10, 37C05, 37C15, 37D1. Citation: Victoria Rayskin. Homoclinic tangencies in $R^n$. Discrete & Continuous Dynamical Systems - A, 2005, 12 (3) : 465-480. doi: 10.3934/dcds.2005.12.465
LaTeX uses internal counters that provide numbering of pages, sections, tables, figures, etc. This article explains how to access and modify those counters and how to create new ones. A counter can be easily set to any arbitrary value with \setcounter. See the example below: \section{Introduction} This document will present several counting examples, how to reset and access them. For instance, if you want to change the numbers in a list. \begin{enumerate} \setcounter{enumi}{3} \item Something. \item Something else. \item Another element. \item The last item in the list. \end{enumerate} In this example \setcounter{enumi}{3} sets the value of the item counter in the list to 3. This is the general syntax to manually set the value of any counter. See the reference guide for a complete list of counters. All commands changing a counter's state in this section change it globally. Counters in a document can be incremented, reset, accessed and referenced. Let's see an example: \section{Another section} This is a dummy section with no purpose whatsoever but to contain text. This section has assigned the number \thesection. \stepcounter{equation} \begin{equation} \label{1stequation} \int_{0}^{\infty} \frac{x}{\sin(x)} \end{equation} In this example, two counters are used: \thesection prints the number assigned to the current section at this point (for further methods to print a counter, take a look at how to print counters below), and \stepcounter{equation} increments the counter equation. Other similar commands are \addtocounter and \refstepcounter; see the reference guide. Further commands to manipulate counters include: \counterwithin<*>{<ctr1>}{<ctr2>} adds <ctr2> to the counters that reset <ctr1> when they're incremented. If you don't provide the *, \the<ctr1> will be redefined to \the<ctr2>.\arabic{<ctr1>}. This macro is included in the LaTeX format since April 2018; if you're using an older version, you'll have to use the chngcntr package. 
\counterwithout<*>{<ctr1>}{<ctr2>} removes <ctr2> from the counters that reset <ctr1> when they're incremented. If you don't provide the *, \the<ctr1> will be redefined to \arabic{<ctr1>}. This macro is included in the LaTeX format since April 2018; if you're using an older version, you'll have to use the chngcntr package. \addtocounter{<ctr>}{<num>} adds <num> to the value of the counter <ctr>. \setcounter{<ctr>}{<num>} sets <ctr>'s value to <num>. \refstepcounter{<ctr>} works like \stepcounter, but you can use LaTeX's referencing system to add a \label and later \ref the counter. The printed reference will be the current expansion of \the<ctr>. A new counter is created with \newcounter. Below is an example that defines a numbered environment called example: \documentclass{article} \usepackage[utf8]{inputenc} \usepackage[english]{babel} \newcounter{example}[section] \newenvironment{example}[1][]{\refstepcounter{example}\par\medskip \textbf{Example~\theexample. #1} \rmfamily}{\medskip} \begin{document} This document will present... \begin{example} This is the first example. The counter will be reset at each section. \end{example} Below is a second example \begin{example} And here's another numbered example. \end{example} \section{Another section} This is a dummy section with no purpose whatsoever but to contain text. This section has assigned the number \thesection. \stepcounter{equation} \begin{equation} \label{1stequation} \int_{0}^{\infty} \frac{x}{\sin(x)} \end{equation} \begin{example} This is the first example in this section. \end{example} \end{document} In this LaTeX snippet the new environment example is defined; this environment uses 3 counting-specific commands. \newcounter{example}[section] creates a new counter example that is reset at every section; omit the optional argument if you don't want your defined counter to be automatically reset. \refstepcounter{example} increments the counter in a way that allows you to add a \label afterwards. 
\theexample prints the formatted value of the counter. For further information on user-defined environments see the article about defining new environments. You can print the current value of a counter in different ways: \theCounterName prints the formatted value, e.g. 2.1 for the first subsection in the second section. \arabic prints an Arabic number, e.g. 2. \value returns the raw value, for use in arithmetic or assignments (e.g. \setcounter{section}{\value{subsection}}). \alph prints a lowercase letter, e.g. b. \Alph prints an uppercase letter, e.g. B. \roman prints a lowercase Roman numeral, e.g. ii. \Roman prints an uppercase Roman numeral, e.g. II. \fnsymbol prints a footnote symbol, e.g. †. \theCounterName is the macro responsible for printing CounterName's value in a formatted manner. For new counters created by \newcounter it gets initialized as an Arabic number. You can change this by using \renewcommand. For example, if you want to change the way a subsection counter is printed to include the current section in italics and the current subsection in uppercase Roman numbers, you could do the following: \renewcommand\thesubsection{\textit{\thesection}.\Roman{subsection}} \section{Example} \subsection{Example}\label{sec:example:ssec:example} This is the subsection \ref{sec:example:ssec:example}. Default counters in LaTeX: for the document structure: part, chapter, section, subsection, subsubsection, paragraph, subparagraph, page; for floats: figure, table; for footnotes: footnote, mpfootnote; for the enumerate environment: enumi, enumii, enumiii, enumiv. Counter manipulation commands: \addtocounter{CounterName}{number} adds number to the counter. \stepcounter{CounterName} increments the counter. \refstepcounter{CounterName} works like \stepcounter, but makes the counter visible to the referencing mechanism (\ref{label} returns the counter value). \setcounter{CounterName}{number} sets the counter to number. \newcounter{NewCounterName} creates a new counter; if you want the NewCounterName counter to be reset to zero every time that another OtherCounterName counter is increased, use \newcounter{NewCounterName}[OtherCounterName]. \value{CounterName} returns the counter's value, e.g. \setcounter{section}{\value{subsection}}. \theCounterName prints the formatted value, for example: \thechapter, \thesection, etc. Note that this might result in more than just the counter; for example, with the standard definitions of the article class, \thesubsection will print Section.Subsection (e.g. 2.1).
Viscosity Projection Algorithms for Pseudocontractive Mappings in Hilbert Spaces Related Articles On a greedy algorithm in the space $L_p[0, 1]$. Livshits, E. D. // Mathematical Notes; Jun 2009, Vol. 85 Issue 5/6, p751 The article discusses the characteristics of the greedy algorithm. It presents various mathematical formulas which highlight the Banach and Hilbert spaces. It also stresses the convergence and the rate of convergence of the X-greedy algorithm for a fixed dictionary with functions that are relative to... On the Strong Convergence of Viscosity Approximation Process for Quasinonexpansive Mappings in Hilbert Spaces. Kanokwan Wongchan; Satit Saejung // Abstract & Applied Analysis; 2011, Special section p1 We improve the viscosity approximation process for approximation of a fixed point of a quasi-nonexpansive mapping in a Hilbert space proposed by Maingé 2010. An example beyond the scope of the previously known result is given. Greedy expansions in Banach spaces. V. Temlyakov // Advances in Computational Mathematics; May 2007, Vol. 26 Issue 4, p431 Abstract: We study convergence and rate of convergence of expansions of elements in a Banach space X into series with regard to a given dictionary $$f\sim\sum_{j=1}^{\infty}c_{j}(f)g_{j}(f),\quad g_{j}(f)\in\mathcal{D},\ c_{j}(f)>0,\ j=1,2,\dots.$$ In building such a representation we should... Strong convergence theorem for nonexpansive semigroups and systems of equilibrium problems. Shehu, Yekini // Journal of Global Optimization; Aug 2013, Vol. 56 Issue 4, p1675 Our purpose in this paper is to prove a strong convergence theorem for finding a common element of the set of common fixed points of a one-parameter nonexpansive semigroup and the set of solutions to a system of equilibrium problems in a real Hilbert space using a new iterative method. Finally, we... Convergence Theorems for Right Bregman Strongly Nonexpansive Mappings in Reflexive Banach Spaces. Zegeye, H.; Shahzad, N. 
// Abstract & Applied Analysis; 2014, p1 We prove a strong convergence theorem for a common fixed point of a finite family of right Bregman strongly nonexpansive mappings in the framework of real reflexive Banach spaces. Furthermore, we apply our method to approximate a common zero of a finite family of maximal monotone mappings and a... A NOTE ON WEAK CONVERGENCE IN HILBERT SPACES. Ferreira, Manuel Alberto M.; Andrade, Marina; Filipe, José António // International Journal of Academic Research; Jan 2013, Vol. 5 Issue 1, p41 In the Hilbert spaces domain, it is discussed in this work under which conditions weak convergence implies convergence. STRONG CONVERGENCE OF AN IMPLICIT ITERATIVE ALGORITHM IN HILBERT SPACES. Yu Miao; Khan, Safeer Hussain // Communications in Mathematical Analysis; 2008, Vol. 4 Issue 2, p54 In this paper, we are concerned with the study of an implicit iterative algorithm involving a hemicontractive mapping and a quasi-nonexpansive mapping. We approximate the common fixed points of the two mappings by strong convergence of the algorithm in a Hilbert space. Ultra Bessel Sequences in Hilbert Spaces. Faroughi, M. H.; Najati, Abbas // Southeast Asian Bulletin of Mathematics; 2008, Vol. 32 Issue 3, p425 In this paper, we shall introduce ultra Bessel sequences in separable Hilbert spaces (i.e., Bessel sequences with the uniform convergence property) and we shall see that no frame in an infinite-dimensional Hilbert space can be an ultra Bessel sequence, but each frame in a finite-dimensional Hilbert... APPROXIMATING FIXED POINTS OF A COUNTABLE FAMILY OF STRICT PSEUDOCONTRACTIONS IN BANACH SPACES. Prasit Cholamjiak // Opuscula Mathematica; 2014, Vol. 34 Issue 1, p67 We prove the strong convergence of the modified Mann-type iterative scheme for a countable family of strict pseudocontractions in q-uniformly smooth Banach spaces. Our results mainly improve and extend the results announced in [Y. Yao, H. Zhou, Y.-C. Liou, Strong convergence of a modified...
WHY? AQM solves visual dialogue tasks with an information-theoretic approach. However, the information gain of each candidate question needs to be calculated explicitly, which leads to a lack of scalability. This paper suggests AQM+ to solve the large-scale problem. WHAT? AQM+ is a sampling-based approximation to the AQM algorithm. AQM+ differs from AQM in 3 ways. First, instead of the Q-sampler, candidate questions are sampled using beam search. Second, while the answerer model of AQM was a binary classifier, that of AQM+ is an RNN generator. Third, a portion of the answer candidates and labels (the top k) is used to approximate the information gain. $$\tilde{I}_{topk}[C, A_t; q_t, h_{t-1}] = \sum_{a_t\in \mathbf{A}_{t, topk}(q_t)}\sum_{c\in \mathbf{C}_{t, topk}} \hat{p}_{reg}(c|h_{t-1})\,\tilde{p}_{reg}(a_t|c, q_t, h_{t-1}) \ln \frac{\tilde{p}_{reg}(a_t|c, q_t, h_{t-1})}{\tilde{p}_{reg}'(a_t|q_t, h_{t-1})}\\ \hat{p}_{reg}(c|h_{t-1}) = \frac{\hat{p}(c|h_{t-1})}{\sum_{c\in \mathbf{C}_{t, topk}}\hat{p}(c|h_{t-1})}\\ \tilde{p}_{reg}(a_t|c, q_t, h_{t-1}) = \frac{\tilde{p}(a_t|c, q_t, h_{t-1})}{\sum_{a_t\in \mathbf{A}_{t, topk}(q_t)}\tilde{p}(a_t|c, q_t, h_{t-1})}\\ \tilde{p}_{reg}'(a_t|q_t, h_{t-1}) = \sum_{c\in \mathbf{C}_{t, topk}}\hat{p}_{reg}(c|h_{t-1})\cdot\tilde{p}_{reg}(a_t|c, q_t, h_{t-1})$$ $\mathbf{C}_{t, topk}$ refers to the top-K posterior test images from Qpost $\hat{p}_{reg}(c|h_{t-1})$. $\mathbf{Q}_{t, topk}$ refers to the top-K likelihood questions found by beam search from Qgen $p(q_t|h_{t-1})$. $\mathbf{A}_{t, topk}(q_t)$ refers to the top-1 generated answers for each question and each class from aprxAgen $\tilde{p}(a_t|c, q_t, h_{t-1})$. Similar to AQM, the approximation of the answer generator can be either indA or depA. So? Instead of GuessWhat, AQM+ is applied to GuessWhich, which is a more complicated version of GuessWhat. 
The key differences of GuessWhich are that the questioner has to guess one image out of 9628 images by asking questions, and that the answerer's answers are not limited to binary. Since there are many more labels in this task, analytic computation of the information gain is almost intractable in AQM. AQM+ performed better than SL-Q and RL-QA in various settings.
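The truncated, renormalised information gain defined above is just a mutual information computed over the retained top-k candidates, and on a toy distribution it can be written out in a few lines (all probabilities below are invented for illustration; in AQM+ they come from the neural questioner and answerer models):

```python
import math

def info_gain(p_c, p_a_given_c):
    """I[C; A] = sum_{c,a} p(c) p(a|c) ln( p(a|c) / p'(a) ),
    with p'(a) = sum_c p(c) p(a|c) -- the quantity an AQM-style
    questioner scores for each candidate question."""
    n_answers = len(p_a_given_c[0])
    p_a = [sum(pc * pac[a] for pc, pac in zip(p_c, p_a_given_c))
           for a in range(n_answers)]
    return sum(pc * pac[a] * math.log(pac[a] / p_a[a])
               for pc, pac in zip(p_c, p_a_given_c)
               for a in range(n_answers)
               if pac[a] > 0)

def topk_info_gain(p_c, p_a_given_c, k):
    """AQM+-style approximation: keep only the k most probable classes,
    renormalise the retained posterior, then evaluate the same sum."""
    keep = sorted(range(len(p_c)), key=lambda c: -p_c[c])[:k]
    z = sum(p_c[c] for c in keep)
    return info_gain([p_c[c] / z for c in keep],
                     [p_a_given_c[c] for c in keep])

p_c = [0.5, 0.3, 0.2]                               # posterior over images
p_a_given_c = [[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]]  # answerer model

full = info_gain(p_c, p_a_given_c)
approx = topk_info_gain(p_c, p_a_given_c, k=2)
```

Both values are nonnegative and bounded by ln 2 here (two possible answers); the approximation trades a small bias for only ever touching the retained candidates.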
Speaker: Clifton Cunningham Room: MS 337 This is a continuation of the talk from last week on admissible representations of $p$-adic G(2) associated to cubic unipotent Arthur parameters. We have seen how the subregular unipotent orbit in the L-group for split G(2) determines a unipotent Arthur parameter and thus an unramified infinitesimal parameter $\lambda : W_F \to \,^LG(2)$. Using the Voganish conjectures (\texttt{https://arxiv.org/abs/1705.01885v4}) we find that there are exactly 8 admissible representations with infinitesimal parameter $\lambda$. Last week Qing Zhang interpreted $\lambda$ as a Langlands parameter for the split torus in $p$-adic G(2) and worked out the corresponding quasi-character $\chi : T(F) \to \mathbb{C}^*$ using the local Langlands correspondence. We expect that all admissible representations in the composition series of $\mathop{Ind}_{B(F)}^{G(2,F)} \chi$ have infinitesimal parameter $\lambda$; we wonder if not all 8 admissible representations arise in this way. In this talk I will calculate the multiplicity matrix that describes how these 8 admissible representations are related to 8 standard modules with infinitesimal parameter $\lambda$, assuming the Kazhdan-Lusztig conjecture as in appears in Section 10.2.3 of the preprint above. To make this calculation I will use the Decomposition Theorem to calculate the stalks of all simple $H_\lambda$-equivariant perverse sheaves on the mini-Vogan variety $V_\lambda$, following the strategy explained in Section 10.3.3 of the preprint.
Suppose you have responses $y$ and a design matrix $X$ for the simple regression $y = a + bx$, i.e. $\beta := (a, b)^{\top}$. The estimator is $\widehat{\beta} = (X^{\top}X)^{-1}X^{\top}y$, and you can think of $(X^{\top}X)^{-1}X^{\top}$ as a constant matrix; the variance of the estimated slope is therefore $\left[\sigma^2 (X^{\top}X)^{-1}\right]_{22}$, i.e. the bottom right-hand element of the variance matrix of $\widehat{\beta}$. Recall that the regression line is the line that minimizes the sum of squared differences between the actual scores and the predicted scores. In this notation the standard error of the regression slope is $s_{b_1} = \sqrt{\dfrac{\sum_i (y_i - \hat{y}_i)^2/(n-2)}{\sum_i (x_i - \bar{x})^2}}$. Another way of understanding the $n-2$ degrees of freedom is to note that two parameters (the slope and the intercept) were estimated in order to estimate the sum of squares. The population regression function (PRF) is singular; sample regression functions (SRF) are plural. If you do an experiment where you assign different doses or treatment levels as the values of X, the terms in these equations that involve the variance or standard deviation of X merely describe the spread of your chosen design. In a calibration setting, the measurement variation can be reduced - though never completely eliminated - by making replicate measurements for each standard; the same phenomenon applies to each measurement taken in the course of constructing a calibration curve, causing a variation in the slope and intercept of the calculated regression line. If X has no measurable predictive value with respect to Y, R-squared will be zero; in that case a larger sample does not yield a systematic reduction in the standard error of the model - rather, the standard error of the regression will merely become a more accurate estimate of the standard deviation of the noise. (Often X is a variable which logically can never go to zero.) All of the calculations above can be done on a spreadsheet, including a comparison with output from RegressIt; Excel also provides a built-in function called LINEST, accessible through the Insert→Function... menu. These can be used to simplify regression calculations, although they each have their own disadvantages, too.
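The variance-matrix route $[\hat{\sigma}^2 (X^{\top}X)^{-1}]_{22}$ and the textbook slope standard-error formula are the same number, which is easy to confirm numerically; a sketch with made-up data:

```python
import math

# Toy data (invented): y roughly linear in x with noise
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]

n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))

b = sxy / sxx                  # least-squares slope
a = ybar - b * xbar            # least-squares intercept
rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
s2 = rss / (n - 2)             # residual variance estimate

# Textbook formula: SE(slope) = s / sqrt(Sxx)
se_textbook = math.sqrt(s2 / sxx)

# Matrix route: bottom-right element of s^2 (X'X)^{-1},
# with X = [1, x], so X'X = [[n, sum x], [sum x, sum x^2]]
sum_x = sum(x)
sum_x2 = sum(xi * xi for xi in x)
det = n * sum_x2 - sum_x ** 2              # determinant of X'X
se_matrix = math.sqrt(s2 * n / det)        # s^2 * [ (X'X)^{-1} ]_{22}
```

The two values agree to machine precision, since $n/(n\sum x_i^2 - (\sum x_i)^2) = 1/\sum_i (x_i - \bar{x})^2$.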
Why is Power Analysis Important? Consider a research experiment where the p-value computed from the data was 0.12. As a result, one would fail to reject the null hypothesis because this p-value is larger than \(\alpha\) = 0.05. However, there still exist two possible cases for which we failed to reject the null hypothesis: the null hypothesis is a reasonable conclusion, or the sample size is not large enough to either accept or reject the null hypothesis, i.e., additional samples might provide additional evidence. Power analysis is the procedure that researchers can use to determine if the test contains enough power to make a reasonable conclusion. From another perspective, power analysis can also be used to calculate the number of samples required to achieve a specified level of power. Example S.5.1 Let's take a look at an example that illustrates how to compute the power of the test. Let X denote the height of a randomly selected Penn State student. Assume that X is normally distributed with unknown mean \(\mu\) and standard deviation of 9. Take a random sample of n = 25 students, so that, after setting the probability of committing a Type I error at \(\alpha = 0.05\), we can test the null hypothesis \(H_0: \mu = 170\) against the alternative hypothesis \(H_A: \mu > 170\). What is the power of the hypothesis test if the true population mean were \(\mu = 175\)?
\[\begin{align}z&=\frac{\bar{x}-\mu}{\sigma / \sqrt{n}} \\ \bar{x}&= \mu + z \left(\frac{\sigma}{\sqrt{n}}\right) \\ \bar{x}&=170+1.645\left(\frac{9}{\sqrt{25}}\right) \\ &=172.961\\ \end{align}\] So we should reject the null hypothesis when the observed sample mean is 172.961 or greater: We get \[\begin{align}\text{Power}&=P(\bar{x} \ge 172.961 \text{ when } \mu =175)\\ &=P\left(z \ge \frac{172.961-175}{9/\sqrt{25}} \right)\\ &=P(z \ge -1.133)\\ &= 0.8713\\ \end{align}\] as illustrated below: In summary, we have determined that we have an 87.13% chance of rejecting the null hypothesis \(H_0: \mu = 170\) in favor of the alternative hypothesis \(H_A: \mu > 170\) if the true unknown population mean is in reality \(\mu = 175\). Calculating Sample Size If the sample size is fixed, then decreasing Type I error \(\alpha\) will increase Type II error \(\beta\). If one wants both to decrease, then one has to increase the sample size. To calculate the smallest sample size needed for specified \(\alpha\), \(\beta\), and \(\mu_a\), use the formulas below (\(\mu_a\) is the likely value of \(\mu\) at which you want to evaluate the power). Sample Size for One-Tailed Test \(n = \dfrac{\sigma^2(Z_{\alpha}+Z_{\beta})^2}{(\mu_0-\mu_a)^2}\) Sample Size for Two-Tailed Test \(n = \dfrac{\sigma^2(Z_{\alpha/2}+Z_{\beta})^2}{(\mu_0-\mu_a)^2}\) Let's investigate by returning to our previous example. Example S.5.2 Let X denote the height of a randomly selected Penn State student. Assume that X is normally distributed with unknown mean \(\mu\) and standard deviation 9. We are interested in testing, at the \(\alpha = 0.05\) level, the null hypothesis \(H_0: \mu = 170\) against the alternative hypothesis \(H_A: \mu > 170\). Find the sample size n that is necessary to achieve 0.90 power at the alternative \(\mu = 175\).
\[\begin{align}n&= \dfrac{\sigma^2(Z_{\alpha}+Z_{\beta})^2}{(\mu_0-\mu_a)^2}\\ &=\dfrac{9^2 (1.645 + 1.28)^2}{(170-175)^2}\\ &=27.72\\ n&=28\\ \end{align}\] (We round 27.72 up to the next whole student.) In summary, you should see how power analysis is very important for making the correct decision when the data indicate that one cannot reject the null hypothesis. You should also see how power analysis can be used to calculate the minimum sample size required to detect a difference that meets the needs of your research.
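Both calculations can be reproduced with Python's standard library (a sketch; the variable names are mine, and `statistics.NormalDist` requires Python 3.8+):

```python
import math
from statistics import NormalDist  # standard normal distribution

Z = NormalDist()
mu0, mu_a, sigma, n, alpha = 170, 175, 9, 25, 0.05

# Power of the one-tailed test (Example S.5.1): reject when xbar >= critical value.
crit = mu0 + Z.inv_cdf(1 - alpha) * sigma / math.sqrt(n)   # about 172.961
power = 1 - Z.cdf((crit - mu_a) / (sigma / math.sqrt(n)))  # about 0.8713

# Smallest n for 0.90 power at mu_a = 175 (Example S.5.2).
z_a, z_b = Z.inv_cdf(1 - alpha), Z.inv_cdf(0.90)
n_required = math.ceil(sigma**2 * (z_a + z_b)**2 / (mu0 - mu_a)**2)  # 28
```

Using the exact normal quantiles (1.6449 and 1.2816 rather than the rounded 1.645 and 1.28) gives the same rounded-up answer of 28 students.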
There are several theorems I know of the form "Let $X$ be a locally ringed space obeying some condition like existence of partitions of unity. Let $E$ be a sheaf of $\mathcal{O}_X$ modules obeying some nice condition. Then $H^i(X, E)=0$ for $i>0$." What is the best way to formulate this result? I ask because I'm sure I'll wind up teaching this material one day, and I'd like to get this right. I asked a similar question over at nLab. Anyone who really understands this material might want to write something over there. If I come to be such a person, I'll do the writing! Two versions I know: (1) Suppose that, for any open cover $U_i$ of $X$, there are functions $f_i$ such that $\sum f_i=1$ and $\mathrm{Supp}(f_i) \subseteq U_i$. Then, for $E$ any sheaf of $\mathcal{O}_X$ modules, $H^i(X,E)=0$. Unravelling the definition of support, $\mathrm{Supp}(f_i) \subseteq U_i$ means that there exist open sets $V_i$ such that $X = U_i \cup V_i$ and $f_i|_{V_i}=0$. Notice that the existence of partitions of unity is sometimes stated as the weaker condition that $f_i$ is zero on the closed set $X \setminus U_i$. If $X$ is regular, I believe the existence of partitions of unity in one sense implies the other. However, I care about algebraic geometry, and affine schemes have partitions of unity in the weak sense but not the strong. (2) Any quasi-coherent sheaf on an affine scheme has no higher sheaf cohomology. (Hartshorne III.3.5 in the noetherian case; he cites EGA III.1.3.1 for the general case.) There is a similar result for the sheaf of analytic functions: see Cartan's Theorems. I have some ideas about how this might generalize to locally ringed spaces other than schemes, but I am holding off because someone probably knows a better answer. It looks like the answer I'm getting is "no one knows a criterion better than fine/soft sheaves." Thanks for all the help.
I've written a blog post explaining why I think that fine sheaves aren't such a great answer on non-Hausdorff spaces like schemes.
Momentum and Nesterov Momentum (also called Nesterov Accelerated Gradient/NAG) are slight variations of normal gradient descent that can speed up training and improve convergence significantly.

First: Gradient Descent

The most common method to train a neural network is by using gradient descent (SGD, stochastic gradient descent). The way this works is you define a loss function (sometimes called cost or error function, or optimization objective) \(l(\theta)\) that expresses how well your weights & biases \(\theta\) allow the network to fit your training data. A higher loss means that the network is bad and makes a lot of errors while a low loss generally means that the network performs well. You can then train your network by adjusting the network parameters in a way that reduces the loss. Formally the update rule in SGD is defined like this: $$\theta_{t+1} = \theta_t - \eta\nabla l(\theta_t)$$ Here you take the gradient \(\nabla\) (the derivative in all dimensions) of the loss function \(l\), which tells you in which direction you have to move through parameter space to increase the loss. Then you go in the opposite direction \(-\nabla l\) (in which the loss decreases) and move by a distance dependent on the learning rate \(\eta\). This works very well in most cases and is the foundation of much of modern deep learning.

Second: Gradient Descent with Momentum

Momentum is essentially a small change to the SGD parameter update so that movement through the parameter space is averaged over multiple time steps. This is done by introducing a velocity component \(v\). Momentum speeds up movement along directions of strong improvement (loss decrease) and also helps the network avoid local minima. It is intuitively related to the concept of momentum in physics.
With momentum, the SGD update rule is changed to: $$v_{t+1} = \mu v_t-\eta\nabla l(\theta_t)\\\theta_{t+1} = \theta_t + v_{t+1}$$ Here \(v\) is the velocity and \(\mu\) is the momentum parameter which controls how fast the velocity can change and how much the local gradient influences long-term movement. At every time step the velocity is updated according to the local gradient and is then applied to the parameters.

Third: Gradient Descent with Nesterov Momentum

Nesterov momentum is a simple change to normal momentum. Here the gradient term is not computed from the current position \(\theta_t\) in parameter space but instead from a position \(\theta_{\text{intermediate}}=\theta_t+ \mu v_t\). This helps because while the gradient term always points in the right direction (the direction in which the loss decreases), the momentum term may not. If the momentum term points in the wrong direction or overshoots, the gradient can still "go back" and correct it in the same update step. The revised parameter update rule is: $$v_{t+1} = \mu v_t-\eta\nabla l(\theta_t+ \mu v_t)\\\theta_{t+1} = \theta_t + v_{t+1}$$
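The three update rules can be sketched side by side on a toy one-dimensional quadratic loss (the loss function and hyperparameter values here are illustrative choices, not part of the derivation above):

```python
# Toy quadratic loss l(theta) = theta^2 / 2, so grad l(theta) = theta.
def grad(theta):
    return theta

def sgd(theta, eta=0.1, steps=100):
    for _ in range(steps):
        theta = theta - eta * grad(theta)          # plain gradient step
    return theta

def momentum(theta, eta=0.1, mu=0.9, steps=100):
    v = 0.0
    for _ in range(steps):
        v = mu * v - eta * grad(theta)             # update velocity from local gradient
        theta = theta + v                          # apply velocity to parameters
    return theta

def nesterov(theta, eta=0.1, mu=0.9, steps=100):
    v = 0.0
    for _ in range(steps):
        v = mu * v - eta * grad(theta + mu * v)    # gradient at the look-ahead point
        theta = theta + v
    return theta
```

On this convex toy problem all three variants converge toward the minimum at 0; the difference shows up in how the velocity term behaves, with Nesterov's look-ahead gradient damping the overshoot of plain momentum.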
First of all I definitely don't want to compete with the answer by Thomas Andrews, it is definitely the answer, I just want to try and see whether it can also be done using Taylor series. As he points out, powers of 2 are trickier. For odd $p$, we can proceed as follows. We are given integers $x$ and $c$ with $a=x^2+cp$ and we are looking for $y$ with $y^2\equiv a\pmod{p^k}$. Thus we need$$\sqrt a=\sqrt{x^2+cp}=x\sqrt{1+\frac{cp}{x^2}}$$and my claim is that computing this analytically via power series expansion makes sense in $\mathbb Z/p^k$. First, since $a$ is not divisible by $p$, neither is $x$, so $1/x$ (and then also $1/x^2$) "exists in $\mathbb Z/p^k$". More precisely, there is an integer $x'$ with $xx'\equiv1\pmod{p^k}$ and I will simply denote this $x'$ by $1/x$. Let us now expand $x(1+\frac c{x^2}p)^{\frac12}$ into a Taylor series, treating $p$ as a variable:$$x(1+\frac c{x^2}p)^{\frac12}=x\sum_{n=0}^\infty\binom{\frac12}n\left(\frac c{x^2}p\right)^n=x+\frac c{2x}p-\frac{c^2}{8x^3}p^2+\frac{c^3}{16x^5}p^3-\frac{5c^4}{128x^7}p^4+...+(-1)^{n+1}\frac{\binom{2n}nx}{2n-1}\left(\frac c{4x^2}\right)^np^n+...$$(see e.g. Wikipedia). Now in fact $\frac{\binom{2n}n}{2n-1}$ is an integer (it is twice the $(n-1)$st Catalan number), and also $1/4$ can be replaced by an integer -- let us just denote by $1/4$ some integer $d$ with $4d\equiv1\pmod{p^k}$. So this series makes sense and converges in $\mathbb Z/p^k$ to a certain value $y$. Actually we just throw out all powers of $p$ starting from $p^k$ and obtain an integer $y$ (remember $1/x$ is also a notation for a certain integer). Then by the very construction $y^2\equiv a\pmod{p^k}$.
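Here is a sketch of this construction in code (the function name and arguments are my own; `pow(a, -1, m)` for modular inverses requires Python 3.8+). The loop uses the identity $\binom{1/2}{n} = (-1)^{n+1}\binom{2n}{n}/\big(4^n(2n-1)\big)$ from the series above, and truncates at $n=k-1$ since all higher terms vanish mod $p^k$:

```python
from math import comb

def padic_sqrt(a, x, p, k):
    """Given x with x^2 = a (mod p), p an odd prime not dividing a,
    return y with y^2 = a (mod p^k) by summing the binomial series
    for sqrt(x^2 + c*p) in Z/p^k."""
    pk = p**k
    c = (a - x * x) // p                 # a = x^2 + c*p, exact division
    d = pow(4, -1, pk)                   # the integer playing the role of 1/4
    invx2 = pow(x * x % pk, -1, pk)      # the integer playing the role of 1/x^2
    r = c * d % pk * invx2 % pk          # c / (4 x^2) in Z/p^k
    y = 0
    for n in range(k):                   # terms with n >= k vanish mod p^k
        # binom(2n, n)/(2n-1) is an integer: twice the (n-1)st Catalan number.
        cn = comb(2 * n, n) // (2 * n - 1) if n else -1
        term = (-1) ** (n + 1) * cn * x * pow(r, n, pk) * p**n
        y = (y + term) % pk
    return y

# Example: 3^2 = 2 (mod 7); lift to a square root of 2 mod 7^4 = 2401.
# padic_sqrt(2, 3, 7, 4) -> 2166, and indeed 2166^2 = 4691556 = 2 (mod 2401).
```

The truncation is legitimate because the full series squares to $x^2+cp$ as a formal power series in $p$ with $p$-integral coefficients, and every cross term dropped by the truncation carries a factor $p^k$.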
I admit I am too lazy to work out details for $p=2$ but one thing I can say is that if given a solution of $x^2\equiv a\pmod2$ I can find a solution of $x^2\equiv a\pmod 8$ then the higher powers of 2 can be treated similarly to the above: if $a=x^2+8c$ then $\sqrt a=x\sqrt{1+\frac{4cp}{x^2}}$ where $p=2$, and we can give sense to the expansion of $\sqrt{1+\frac{4cp}{x^2}}$ in $\mathbb Z/2^k$ since the resulting series will only have odd denominators (well, no denominators at all actually, as $1/x$ just denotes some integer).
Let $\text{Pos}$ be the category of partially ordered sets and monotonic functions. A morphism $f$ is called an isomorphism if there is a morphism $g$ such that $f\circ g$ and $g\circ f$ are identity morphisms. On the other hand, a bijective morphism seems to satisfy this condition, yet it has been asserted that the two notions are not identical. Can you explain what is happening? Thanks in advance. Take two elements, $0$ and $1$, and define the poset $X$ with $0\leq 1$ (and the reflexivity conditions), and $Y$ the poset with only $0\leq 0$ and $1\leq 1$ (i.e., we can't compare $0$ and $1$ in $Y$). Then the identity $Y\to X$ is a bijective morphism, but not an isomorphism (what would be its inverse, and why is it not a morphism?). The problem with posets is that we can't always compare elements. If we were in the category of totally ordered sets, then any bijective morphism would in fact be an isomorphism. What we did above was just take a simple poset ($X$) and weaken its structure a little bit (to obtain $Y$).
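The counterexample can be checked mechanically by encoding each poset as its set of related pairs (a small illustrative script; the encoding and helper name are mine):

```python
# Encode each poset as (elements, set of pairs (a, b) with a <= b).
X = ({0, 1}, {(0, 0), (1, 1), (0, 1)})   # chain: 0 <= 1
Y = ({0, 1}, {(0, 0), (1, 1)})           # antichain: 0 and 1 incomparable

def is_monotone(f, P, Q):
    """True iff f maps every related pair of P to a related pair of Q."""
    _, leq_P = P
    _, leq_Q = Q
    return all((f(a), f(b)) in leq_Q for (a, b) in leq_P)

identity = lambda a: a
# identity : Y -> X is a bijective morphism, but the inverse map X -> Y
# fails to be monotone, since (0, 1) is in the order of X but not of Y.
```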
I am having difficulty formulating a problem, which involves optimizing a contour shape, into a well-posed variational form that would give a reasonable answer. Within a bounded region on the $xy$ plane, say $x\in[-x_{0},x_{0}], y\in[-y_{0},y_{0}]$, we have a continuous scalar field $H=H(x,y)$. Both the field and the geometry of the problem exhibit no variations in the $z$ direction (i.e. $\partial/\partial z=0)$. On the plane within the specified region, there exists a closed loop (contour) $g(x,y)=0$ that encloses and defines a planar area $A$ that is penetrated by the field $H$. It is known from the physics of the problem that varying the shape of the contour $g$ can result in extremising the functional $$ J(y):=\iint_{A} H(x,y)dA $$ and I need to find the optimal shape $g$ of the contour, for a given $H$ function that is nontrivial ($\neq 0)$ and captured area $A$ that is nonzero. In writing $J$ here I assumed that $x$ is the independent variable and $y$ is dependent on it, to draw the contour shape. Anticipating that the closed contour function will be most likely expressible in parametric form $(x(t),y(t))$, and since classical variational formulations I am familiar with usually deal with paths rather than areas, I tried to write the functional in terms of the contour (instead of the area) as follows, using Green's theorem: $$ J=\iint_{A} H(x,y)dA=\iint_{A} \left(\frac{\partial F_{y}}{\partial x} -\frac{\partial F_{x}}{\partial y}\right)dA=\oint_{g}(F_{x}dx+F_{y}dy) =\int_{t=0}^{2\pi}(F_{x}\dot{x}+F_{y}\dot{y})dt $$ where $\boldsymbol{F}=F_{x}\hat{i}+F_{y}\hat{j}$ is some vector field whose curl may be defined to give $H$ (assuming we can find such a field), dotted symbols like $\dot{x}$ denoting derivatives with respect to the parameter $t\in[0,2\pi]$, and $\oint_{g}$ denoting the integral around the closed contour $g$. So, we can think of the Lagrangian of this problem as $L(x,y,\dot{x},\dot{y}):=F_{x}\dot{x}+F_{y}\dot{y}$.
The problem now is, if I don't impose any further constraints, the two Euler-Lagrange equations here (in $t$ now as independent variable and both $x$ and $y$ as dependents) give the same result (instead of two independent answers), which says that $H=0$. I plugged in different test fields $H$, and this is always the answer. If I try to improve the formulation by imposing a constraint that $\iint_{A}dA=A_{0}$ to make the area a nonzero constant, thus: $$\frac{1}{2}\int_{0}^{2\pi}(x\dot{y}-y\dot{x})dt=A_{0} \Rightarrow \int_{0}^{2\pi}\left[ \frac{x\dot{y}}{2}-\frac{y\dot{x}}{2}-\frac{A_{0}}{2\pi} \right]dt=0,$$ giving a new (constrained) Lagrangian as $L(x,y,\dot{x},\dot{y},\lambda):=F_{x}\dot{x}+F_{y}\dot{y}+\frac{\lambda}{2}(x\dot{y}-y\dot{x}-\frac{A_{0}}{2\pi}),$ then, again, the two Euler-Lagrange equations in $x$ and $y$ give the same answer, basically that $H=\lambda$, where $\lambda$ is the Lagrange multiplier for this constraint. What is wrong with my formulation, and how do I make it well posed for this problem so I can proceed?
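As a sanity check on the contour form of the constraint, the line integral $\frac{1}{2}\oint (x\dot{y}-y\dot{x})\,dt$ really does recover the enclosed area; for example, for an ellipse (an illustrative numerical check, with parameters chosen arbitrarily):

```python
import math

# Parametrize an ellipse x = a cos t, y = b sin t and evaluate
# (1/2) * integral of (x*ydot - y*xdot) dt with a Riemann sum over [0, 2*pi].
a, b, N = 3.0, 2.0, 10000
dt = 2 * math.pi / N
area = 0.0
for i in range(N):
    t = i * dt
    x, y = a * math.cos(t), b * math.sin(t)
    xdot, ydot = -a * math.sin(t), b * math.cos(t)
    area += 0.5 * (x * ydot - y * xdot) * dt
# area matches the exact value pi*a*b (here the integrand is the constant a*b).
```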
Consider the p-norm $$\|x\|_p = \Big(\sum_i |x_i|^p\Big)^{1/p}$$ In python this translates to:

from numpy import array

def norm1(x, p):
    "First-pass implementation of p-norm."
    return (x**p).sum() ** (1./p)

Now, suppose \(|x_i|^p\) causes overflow (for some \(i\)). This will occur for sufficiently large \(p\) or sufficiently large \(x_i\)—even if \(x_i\) is representable (i.e., not NaN or \(\infty\)). For example:

>>> big = 1e300
>>> x = array([big])
>>> norm1(x, p=2)
inf # expected: 1e+300

This fails because we can't square big:

>>> array([big])**2
array([ inf])

A little math: There is a way to avoid overflowing because of a few large \(x_i\). Here's a little fact about p-norms: for any \(p\), \(\boldsymbol{x}\), and \(\alpha > 0\), $$\|\boldsymbol{x}\|_p = \alpha\,\|\boldsymbol{x}/\alpha\|_p.$$ We'll use the following version (harder to remember): $$\|\boldsymbol{x}\|_p = \alpha \Big(\sum_i |x_i/\alpha|^p\Big)^{1/p}.$$ Don't believe it? Here's some algebra: $$\alpha \Big(\sum_i |x_i/\alpha|^p\Big)^{1/p} = \alpha\,\alpha^{-1} \Big(\sum_i |x_i|^p\Big)^{1/p} = \|\boldsymbol{x}\|_p.$$ Back to numerical stability: Suppose we pick \(\alpha = \max_i |x_i|\). Now, the largest number we have to take the power of is one—making it very difficult to overflow on account of \(\boldsymbol{x}\). This should remind you of the infamous log-sum-exp trick.

import numpy as np

def robust_norm(x, p):
    a = np.abs(x).max()
    return a * norm1(x / a, p)

Now, our example from before works :-)

>>> robust_norm(x, p=2)
1e+300

Remarks: It appears as if scipy.linalg.norm is robust to overflow, while numpy.linalg.norm is not. Note that scipy.linalg.norm appears to be a bit slower. The logsumexp trick is nearly identical, but operates in the log-domain, i.e., \(\text{logsumexp}(\log(|x|) \cdot p) / p = \log \|x\|_p\). You can implement both tricks with the same code, if you use different number classes for log-domain and real-domain—a trick you might have seen before.

from arsenal.math import logsumexp
from numpy import log, exp, abs

>>> p = 2
>>> x = array([1,2,4,5])
>>> logsumexp(log(abs(x)) * p) / p
1.91432069824
>>> log(robust_norm(x, p))
1.91432069824
a 700 Hz sine wave has an instantaneous voltage of 53.3 V when t = 25 microseconds. Calculate the maximum voltage, and find v when t = 50 microseconds. I'll assume the sine wave is 0 at t=0, i.e. zero phase shift. \(A \sin(2\pi (700)(25\times 10^{-6}))=53.3V\\ A \sin\left(\dfrac{7\pi}{200}\right) = 53.3V\\ A = \dfrac{53.3}{\sin\left(\dfrac{7\pi}{200}\right)} =485.719V\) \(v = A \sin(2\pi (700)(50\times 10^{-6})) = 105.956V\).
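The arithmetic can be double-checked numerically (a quick sketch; the variable names are mine):

```python
import math

f = 700.0                                    # frequency in Hz
t1, v1 = 25e-6, 53.3                         # given instantaneous point
A = v1 / math.sin(2 * math.pi * f * t1)      # peak voltage, about 485.719 V
v2 = A * math.sin(2 * math.pi * f * 50e-6)   # voltage at t = 50 us, about 105.956 V
```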
Edit: I think that the first answer below is more efficient and easier to write. Nevertheless, I have added a more concrete approach which might be closer to what you were after. By some sort of transitivity: recall that every open or closed subset of a locally compact space is a locally compact space with the induced topology. We will use both, for open and for closed. I consider every space here equipped with the topology induced by the Euclidean norm of $\mathbb{C}^{n\times n}$. Hence every space is Hausdorff. This is good to know, even though your question does not specifically ask about this aspect. Like every finite-dimensional vector space over $\mathbb{R}$ or $\mathbb{C}$, $\mathbb{C}^{n\times n}$ is locally compact when equipped with the topology induced by any norm. Clearly, $\mathcal{H}_n^+\mathbb{C}$, the set of positive semidefinite matrices, is closed in the locally compact $\mathbb{C}^{n\times n}$. So $\mathcal{H}_n^+\mathbb{C}$ is a locally compact space. Now $\mathcal{P}=\{A\in \mathcal{H}_n^+\mathbb{C}\;;\det A>0\}$. By continuity of the determinant, it follows that $\mathcal P$ is open in the locally compact $\mathcal{H}_n^+\mathbb{C}$. Hence $\mathcal P$ is a locally compact space. QED. Parametrized alternative: fix a positive definite hermitian matrix $A_0$, and denote by $\{t^0_1,\ldots,t_n^0\}$ its (positive) eigenvalues. Now let $\epsilon:=\min_j t_j^0/2>0$. Then denote by $U_n$ the unitary group and by $\mathcal{H}_n^{++}$ the cone of positive definite hermitian matrices. Now consider the map$$\phi:U_n\times \prod_{j=1}^n[t_j^0-\epsilon,t_j^0+\epsilon]\longrightarrow \mathcal{H}_n^{++}$$which sends $(U,t_1,\ldots,t_n)$ to $U\,\mathrm{diag}(t_1,\ldots,t_n)\,U^*$. Since $U_n$ is compact, the domain is compact. And since $\phi$ is continuous, the range of $\phi$ is a compact subset of $\mathcal{H}_n^{++}$ containing $A_0$. So it only remains to check that this is a neighborhood of $A_0$ in $\mathcal{H}_n^{++}$.
To that end, note that it contains$$\phi(U_n\times \prod_{j=1}^n(t_j^0-\epsilon,t_j^0+\epsilon))\ni A_0$$i.e. the set of positive definite hermitian matrices with spectrum $\{t_1,\ldots,t_n\}$ such that, up to a permutation, $|t_j-t_j^0|<\epsilon$ for every $j=1,\ldots,n$. By continuity of polynomial roots over $\mathbb{C}$ applied to the characteristic polynomial, this is open in $\mathcal{H}_n^{++}$. QED.
Very interesting question! I'll start by outlining some of the mathematical basics for this question: You're looking for a bound state of some potential $V$, i.e. a non-scattering state. Mathematically, this translates to an $L^2$-integrable eigenfunction of the Schrödinger operator $-\Delta+V$. By elliptic regularity, for these functions you instantly get what I understand as your condition (3) (the more precise statement would be that $\psi$ lies in the domain of the self-adjoint version of $\hat{p}$). Basically, the argument here is that $\psi$ has to be twice differentiable; by rearranging the Schrödinger equation, for your class of potentials you will have $\Delta \psi\in L^2$, and by Fourier transform you then get that the first derivative is also in $L^2$. Hence, $\hat{p}$ is well-defined for $\psi$. The basic idea why most bound states you will come across decay exponentially is the following: Assume that far away from the origin, $V$ is monotone, i.e. it doesn't oscillate. This allows us to estimate $V$ from below by a box potential, which implies that a bound state of $V$ will be dominated by a bound state of the box potential. Bound states of box potentials do decay exponentially, hence the state will decay exponentially. This argument can be made explicit using maximum principles for elliptic PDEs; you may look up the mathematical details in e.g. Berezin and Shubin, The Schrödinger equation (Springer 1991). So from this argumentation, the answer to your question is almost no for potentials which are monotone far outside. By "almost", I mean that there may be such functions at distinguished, but physically irrelevant, values of $E$. For example, consider the potential $$V(x)=\frac{6x^2-2}{(1+x^2)^2}$$which looks like this: You may now check that $\psi(x)=\sqrt{\frac{2}{\pi}}\,\frac{1}{1+x^2}$ is a normalized eigenfunction of this potential with eigenvalue 0.
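Assuming units with $\hbar^2/2m=1$, so that the operator is literally $-\frac{d^2}{dx^2}+V$, both the eigenvalue equation and the normalization can be verified numerically with a finite difference and a crude Riemann sum (an illustrative check, using $V(x)=(6x^2-2)/(1+x^2)^2$ and $\psi(x)=\sqrt{2/\pi}\,/(1+x^2)$):

```python
import math

# Claimed zero-energy bound state and the potential that produces it
# (sign convention: H = -d^2/dx^2 + V, units with hbar^2/2m = 1).
def psi(x):
    return math.sqrt(2 / math.pi) / (1 + x * x)

def V(x):
    return (6 * x * x - 2) / (1 + x * x) ** 2

def residual(x, h=1e-4):
    """(-psi'' + V psi)(x), with psi'' from a central difference; should be ~0."""
    d2 = (psi(x - h) - 2 * psi(x) + psi(x + h)) / h**2
    return -d2 + V(x) * psi(x)

# Crude Riemann sum for the L^2 norm of psi over [-100, 100]; should be ~1.
norm = sum(psi(i * 0.01) ** 2 for i in range(-10000, 10001)) * 0.01
```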
The momentum operator is well-defined for this $\psi$, and $\psi$ obviously decays only polynomially for $x\rightarrow\infty$. So, what happened here? If you try to use the "box" argument, you would compare to a box which is completely negative away from the origin (remember, the box estimates the potential from below), so 0 is already a scattering state for the box! However, looking at the potential, you see that this can only be the case for this exact value of $E$ - for even an $\epsilon$ more energy, you will obtain a scattering state since $V\rightarrow 0$ for $x\rightarrow \infty$; and for an $\epsilon$ less, you will get a bound state you can again estimate by a box, hence it decays exponentially. Since you can't prepare a state with an exact energy, this is not physically relevant. Generally speaking, this phenomenon should only occur at $E=\limsup_{|x|\rightarrow\infty}V(x)$, since this will correspond to the lowest possible energy for scattering states. So, what can happen if we drop the "monotonicity-far-outside" condition? I think that in this case, it should be possible to obtain the kind of states you are searching for. My attempt at the construction goes as follows: Let $V$ be a collection of box potentials where the boxes have constant height and grow thinner as $x$ gets bigger, e.g. something that looks vaguely like this: If the infinitely many discontinuities bother you, the behavior I'll describe should be exactly the same for a smoothed version of this potential. Now, a bound state of this potential would oscillate around $0$ where $V=-0.5$ and decay where $V=+0.5$. The (exponential) decay rate where $V=0.5$ is always the same; by controlling the width of the boxes, you can exactly control how fast your bound state decays, e.g. you can achieve that each time you pass the positive part of the box, your amplitude goes down at the rate $1/x^2$. The details are probably very technical and fishy, but I think in principle this should work.
In Season 1 Episode 2 of The Big Bang Theory, “The Big Bran Hypothesis”, Penny (Kaley Cuoco) asks Leonard (Johnny Galecki) to sign for a furniture delivery if she isn’t home. Unfortunately for Leonard and Sheldon, they are left with the task of getting a huge (and heavy) box up to Penny’s apartment. To solve this problem, Leonard suggests using the stairs as an inclined plane, one of the six classical simple machines defined by Renaissance scientists. Both Leonard and Sheldon have the right idea here. Not only are inclined planes used to raise heavy loads but they require less effort to do so. Though this may make moving a heavy load easier, the tradeoff is that the load must now be moved over a greater distance. So while, as Leonard correctly calculates, the effort required to move Penny’s furniture is reduced by half, he and Sheldon must push the furniture twice the distance needed to raise it directly. Mathematics of the Inclined Plane Effort to lift block on Inclined Plane “Now we got an inclined plane. Force required to lift is reduced by the sine of the angle of the stairs… call it 30 degrees, so about half.” To analyze the forces acting on a body, physicists and engineers use rough sketches or free body diagrams. Such a diagram can help physicists model a problem on paper and determine how forces act on an object. We can resolve the forces to see the effort needed to move the block up the stairs. If the weight of Penny’s furniture is \(W\) and the angle of the stairs is \(\theta\) then \[\angle_{\mathrm{stairs}}\equiv\theta \approx 30^\circ\] and \[\Rightarrow\sin 30^\circ = \frac{1}{2}\] So the effort needed to keep the box in place is about half the weight of the furniture box, or \(\frac{1}{2}W\), just as Leonard says. Distance moved along Inclined Plane While the inclined plane allows Leonard and Sheldon to push the box with less effort, the tradeoff is that the distance they move along the incline is twice the height needed to raise the box vertically.
Geometry shows us that \[\sin \theta = \frac{h}{d}\] If we again assume that the angle of the stairs is approximately \(30^\circ\) and \(\sin 30^{\circ} = 1/2\), then we have \(d=2h\). Uses of the Inclined Plane We see inclined planes daily without realizing it. They are used as loading ramps to load and unload goods. Wheelchair ramps also allow wheelchair users, as well as users of strollers and carts, to access buildings easily. Roads sometimes have inclined planes to form a gradual slope to allow vehicles to move over hills without losing traction. Inclined planes have also played an important part in history and were used to build the Egyptian pyramids and possibly used to move the heavy stones to build Stonehenge. Lombard Street (San Francisco) Lombard Street in San Francisco is famous for its eight tight hairpin turns (or switchbacks) that have earned it the distinction of being the crookedest street in the world (though this title is contested). These eight switchbacks are crucial to the street’s design as they reduce the hill’s natural 27° grade, which is too steep for most vehicles. It is also a hazard to pedestrians, who are accustomed to a more reasonable 4.86° incline due to wheelchair navigability concerns. Technically speaking, the “zigzag” path doesn’t make climbing or coming down the hill any easier. As we have seen, all it does is change how various forces are applied. It requires less effort to move up or down, but the tradeoff is that you travel a longer distance. This has several advantages. Car engines have to be less powerful to climb the hill and, in the case of descent, less force needs to be applied on the brakes. There are also safety considerations. A car will not accelerate down the switchback path as fast as if it were driven straight down, making speeds safer and more manageable for motorists.
This idea of using zigzagging paths to climb steep hills and mountains is also used by hikers and rock climbers for very much the same reason Lombard Street zigzags. The tradeoff is that the distance traveled along the path is greater than if a climber goes straight up. The Descendants of Archimedes “We don’t need strength, we’re physicists. We are the intellectual descendants of Archimedes. Give me a fulcrum and a lever and I can move the Earth. It’s just a matter of… I don’t have this, I don’t have this!” We see that Leonard had the right idea. If we assume — based on the size of the box — that the furniture is approximately 150 lbs (68 kg) and the effort is reduced by half, then they need to push with at least 75 lbs of force. This is equivalent to moving a 34 kg mass. If they both push equally, they are each left pushing a very manageable 37.5 lbs, the equivalent of pushing a 17 kg mass. Penny’s apartment is on the fourth floor, and if we assume a standard US building design of ten feet per floor, this means a 30 foot vertical rise. The boys are left with the choice of lifting 150 lbs vertically 30 feet or pushing 75 lbs a distance of 60 feet. The latter is more manageable but then again, neither of our heroes has any upper body strength.
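The closing estimate can be reproduced in a few lines (the numbers are the assumptions stated above):

```python
import math

W = 150.0                       # assumed weight of the box, in pounds
theta = math.radians(30)        # assumed angle of the stairs
effort = W * math.sin(theta)    # force needed along the incline: 75 lb
per_person = effort / 2         # 37.5 lb each if both push equally
h = 30.0                        # vertical rise: ~10 ft per floor, 3 floors up
d = h / math.sin(theta)         # distance pushed along the stairs: 60 ft
```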
Bulletin of Symbolic Logic Bull. Symbolic Logic Volume 10, Issue 4 (2004), 457-486. Computability theory and differential geometry Abstract Let $M$ be a smooth, compact manifold of dimension $n\geq 5$ and sectional curvature $|K| \leq 1$. Let Met($M$) = Riem($M$)/Diff($M$) be the space of Riemannian metrics on $M$ modulo isometries. Nabutovsky and Weinberger studied the connected components of sublevel sets (and local minima) for certain functions on Met($M$) such as the diameter. They showed that for every Turing machine $T_e$, $e \in \omega$, there is a sequence (uniformly effective in $e$) of homology $n$-spheres $\{P_k^e\}_{k\in\omega}$ which are also hypersurfaces, such that $P_k^e$ is diffeomorphic to the standard $n$-sphere $S^n$ (denoted $P_k^e \approx_{\mathrm{diff}} S^n$) iff $T_e$ halts on input $k$, and in this case the connected sum $N_k^e = M \# P_k^e \approx_{\mathrm{diff}} M$, so $N_k^e \in$ Met($M$), and $N_k^e$ is associated with a local minimum of the diameter function on Met($M$) whose depth is roughly equal to the settling time $\sigma_e(k)$ of $T_e$ on inputs $y<k$. At their request Soare constructed a particular infinite sequence $\{A_i\}_{i\in\omega}$ of c.e. sets so that for all $i$ the settling time of the associated Turing machine for $A_i$ dominates that for $A_{i+1}$, even when the latter is composed with an arbitrary computable function. From this, Nabutovsky and Weinberger showed that the basins exhibit a “fractal” like behavior with extremely big basins, and very much smaller basins coming off them, and so on. This reveals what Nabutovsky and Weinberger describe in their paper on fractals as “the astonishing richness of the space of Riemannian metrics on a smooth manifold, up to reparametrization.” From the point of view of logic and computability, the Nabutovsky-Weinberger results are especially interesting because: (1) they use c.e.
sets to prove structural complexity of the geometry and topology, not merely undecidability results as in the word problem for groups, Hilbert's Tenth Problem, or most other applications; (2) they use nontrivial information about c.e. sets, the Soare sequence $\{A_i\}_{i\in\omega}$ above, not merely Gödel's c.e. noncomputable set K of the 1930's; and (3) without using computability theory there is no known proof that local minima exist even for simple manifolds like the torus $T^5$ (see §9.5). Article information Source Bull. Symbolic Logic, Volume 10, Issue 4 (2004), 457-486. Dates First available in Project Euclid: 3 December 2004 Permanent link to this document https://projecteuclid.org/euclid.bsl/1102083758 Digital Object Identifier doi:10.2178/bsl/1102083758 Mathematical Reviews number (MathSciNet) MR2136634 Zentralblatt MATH identifier 1085.03033 Citation Soare, Robert I. Computability theory and differential geometry. Bull. Symbolic Logic 10 (2004), no. 4, 457--486. doi:10.2178/bsl/1102083758. https://projecteuclid.org/euclid.bsl/1102083758
The elementary "opposite over hypotenuse" definition of the sine function defines the sine of an angle, not a real number. As discussed in the article "A Circular Argument" [Fred Richman, The College Mathematics Journal Vol. 24, No. 2 (Mar., 1993), pp. 160-162. Free version here. Thanks to Aaron Meyerowitz's answer to question 72792 for the reference.], angles might be measured either by the area of a sector of unit radius having the angle or by the arc length of such a sector. If the former convention is adopted then it can be proven using a completely unexceptionable Euclidean argument that $\lim_{x\to 0} \sin(x)/x = 1$. Also, whichever convention is adopted (or so it seems to me), using completely unexceptionable Euclidean arguments, it is possible to prove the angle addition formulas for sine and cosine. Using these two ideas, it is straightforward to find the derivatives of sine and cosine, and from there one can derive an algorithm for computing digits of sine and cosine (and for computing $\pi$) using the relatively sophisticated mean-value version of Taylor's theorem. The equivalence of the two definitions of sine (or of angle measurement) apparently depends on something like Archimedes' postulate: "If two plane curves C and D with the same endpoints are concave in the same direction, and C is included between D and the straight line joining the endpoints, then the length of C is less than the length of D." (Again, thanks to Aaron Meyerowitz.) Of course, it is just this postulate that Archimedes needed to prove that the area of a circle is equal to the area of a triangle with base the circumference of the circle and height the radius. And something like it is surely necessary to derive any algorithm for computing digits of $\pi$.
(Except, and this confuses me a bit, it seems that if we used the area definition of angle, we could derive an algorithm for computing sine without depending on this postulate, and from there we could get an algorithm for computing digits of $\pi$ since $\sin(\pi)=0$.) I am looking in general for elucidation of the conceptual connections between the ideas I have so far discussed and of their background. But here are two more specific questions. First, in what sense is a postulate like Archimedes' needed in the foundations of geometry? (I wonder, in particular, if in a purely formal development we might get by without it, but we would somehow be left without assurance that what we had axiomatized was really geometry.) Also, are there more intuitive alternatives to Archimedes' postulate? Second, what is really needed to get an algorithm for computing digits of sine? Does it really require such complicated technology as Taylor series? It seems that if one uses the area definition of angle, one might be able to give an algorithm using unexceptionable Euclidean techniques and without so much as invoking the notion of limit.
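For concreteness, here is a sketch (my own, not from the question or the cited article) of the Taylor-series route to digits of sine. It uses exact rational arithmetic via `Fraction`, and bounds the error by the first omitted term, which is valid for the alternating series once the terms decrease in magnitude (e.g. for $|x| \le 1$):

```python
from fractions import Fraction

def sin_taylor(x, eps):
    """Approximate sin(x) to within eps using the Taylor series at 0.

    For |x| <= 1 the series is alternating with decreasing terms, so the
    error after truncation is bounded by the first omitted term; we stop
    once that bound drops below eps.
    """
    x = Fraction(x)
    term = x          # current term x^(2k+1) / (2k+1)!
    total = x
    k = 0
    while True:
        # next term: multiply by -x^2 / ((2k+2)(2k+3))
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
        k += 1
        if abs(term) < eps:   # error bounded by the first omitted term
            return total
        total += term

print(float(sin_taylor(1, 1e-12)))  # approx 0.8414709848...
```

Every quantity here is an exact rational, so the only approximation is the truncation of the series itself; this is roughly the "algorithm for computing digits of sine" that the mean-value Taylor theorem underwrites.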
...and its incomplete divine stringy incarnation... I originally missed a hep-ph preprint almost a week ago, Explaining \(h\to \mu^\pm \tau^\mp\), \(B\to K^*\mu^+\mu^-\), and \(B\to K\mu^+\mu^-/B\to Ke^+e^-\) in a two-Higgs-doublet model with gauged \(L_\mu-L_\tau\) by Crivellin, D'Ambrosio, and Heeck, probably because it had such a repulsively boring title. By the way, do you agree with the hype saying that the new MathJax 2.5 beta is loading 30-40 percent faster than MathJax 2.4, which was used on this blog up to yesterday morning? The title of the preprint is uninspiring even though it contains all the good stuff. Less is sometimes more. At any rate, CMS recently reported a 2.4-sigma excess in the search for the decays of the Higgs boson\[ h\to \mu^\pm \tau^\mp \] which is flavor-violating. A muon plus an antitau; or an antimuon plus a tau. Bizarre. The 2.4-sigma excess corresponds to the claim that about 1% of the Higgs bosons decay in this weird way! Correct me if I am wrong but I think that this excess has only been discussed in the comment section of this blog; I was very excited about it in July. Aside from this flavor-violating hint, the LHCb experiment has reported several anomalies and the two most famous ones may be explained by the model promoted by this paper. One of them was discussed on TRF repeatedly: The \(B\)-mesons may decay to \(K\)-mesons plus a charged lepton pair, \(\ell^+\ell^-\), and the processes with \(\ell=e\) and \(\ell=\mu\) should be almost equally frequent according to the Standard Model, but LHCb seems to see a difference between the electron-producing and muon-producing processes. The significance of the signal is 2.6 sigma. The final, third deviation is seen by LHCb, too. The rate of the \(B\) decay to an off-shell \(K^*\) along with the muon-antimuon pair, \(\mu^+\mu^-\), seems to deviate from the Standard Model by 2-3 sigma, too.
Each of these three anomalies is significant approximately at the 2.5-sigma level and they seem to have something in common. The second generation – muons – is treated a bit differently. It doesn't seem to be just another copy of the first generation (or the third generation). The model by the CERN-Naples-Brussels team claims to be compatible with all three of these anomalies. Within this model, the three anomalies are no longer independent of each other – which may strengthen your belief that they are not just flukes that will go away. If you were willing to oversimplify just a little bit, you could argue that these three anomalies are showing "almost the same thing", so you may add these excesses in the Pythagorean way. And \(\sqrt{3}\times 2.5 \approx 4.3\). With this optimistic interpretation, we may be approaching a 5-sigma excess. ;-) These three physicists construct a model. It is a two-Higgs-doublet model (2HDM). The number of Higgs doublets is doubled relative to the Standard Model – to yield the spectrum we know from minimal SUSY. But 2HDM is meant to be a more general model of the Higgs sector, a model ignoring the constraints on the parameters that are implied by supersymmetry. (But it is also a more special model because it ignores or decouples all the other superpartners.) And there's one special new feature that they need before they explain the anomalies. Normally, the lepton number \(L\) – and especially the three generation-specific lepton numbers \(L_e,L_\mu,L_\tau\) – are (approximate?) global symmetries. But these three folks promote one particular combination, namely the difference \(L_\mu-L_\tau\), to a gauge symmetry – one that is spontaneously broken by a scalar field. This gauging of the symmetry adds a new spin-one boson, \(Z'\), which has some mass, and right-handed neutrinos acquire some Majorana masses because of that, too.
These new elementary particles and interactions also influence processes such as the decays of the Higgs bosons and \(B\)-mesons – those we encountered in the anomalies. What I find particularly attractive is that the gauging of \(L_\mu-L_\tau\) may support an old crazy \(E_8\) idea of mine. It is a well-known fact that the adjoint (in this case also fundamental) representation \({\bf 248}\) of the exceptional Lie group \(E_8\) decomposes under the maximal \(E_6\times SU(3)\) subgroup as\[ {\bf 248} = ({\bf 78},{\bf 1}) + ({\bf 1},{\bf 8}) + ({\bf 27},{\bf 3}) + ({\bf \bar{27}},{\bf \bar{3}}) \] It is the direct sum of the adjoint representations of the subgroup's factors; and of the tensor product of the fundamental representations (plus the complex conjugate representation: note that \(E_6\) is the only simple exceptional Lie group that has complex representations). If you use \(E_6\) or its subgroup as a grand unified group, the representation \({\bf 27}\) produces one generation of quarks and leptons. It works, but what is very cool is that the decomposition of the representation of \(E_8\) seems to automatically produce three copies of the representation \({\bf 27}\). It almost looks as if the \(E_8\) group were predicting three generations. The three generations may be complex-rotated by the \(SU(3)_g\) group, the centralizer of the grand unified group \(E_6\) within the \(E_8\) group. Isn't it cool? I added the \(g\) subscript for "generational". A problem with this cute story is that the most natural stringy reincarnation of this \(E_8\) picture, the \(E_8\times E_8\) heterotic string theory (or its strongly coupled limit, the Hořava-Witten heterotic M-theory), doesn't normally support this way of counting the generations. Recall that in 1985, this became the first realistic embedding of the Standard Model (and SUSY and grand unification, not to mention gravity) within string theory.
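The dimension count behind that decomposition is easy to verify by hand; a two-line check (mine, just bookkeeping):

```python
# Dimensions of the pieces in 248 = (78,1) + (1,8) + (27,3) + (27bar,3bar)
dim_E6_adjoint = 78 * 1    # (78, 1): adjoint of E6
dim_SU3_adjoint = 1 * 8    # (1, 8): adjoint of SU(3)_g
dim_matter = 27 * 3        # (27, 3): three generations of the E6 fundamental 27
dim_conjugate = 27 * 3     # (27bar, 3bar): the complex conjugate piece

total = dim_E6_adjoint + dim_SU3_adjoint + dim_matter + dim_conjugate
assert total == 248        # matches dim E8
print(total)  # 248
```

The two 81-dimensional pieces are exactly the "three copies of the 27" (and their conjugates) that make the three-generation reading tempting.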
But the number of generations is usually written as \(N_g=|\chi|/2\), one-half of the Euler characteristic of the Calabi-Yau manifold. The latter constant may be anything. All traces of the special role of \(3\) are eliminated, and so on. A related defect is that the rest of the \(E_8\) group outside \(E_6\) is broken "by the compactification", which is a "stringy effect", so no four-dimensional effective field theory description ever sees the other \(E_8\) gauge bosons – except for the GUT \(E_6\) gauge bosons. But from a different perspective, there could still be something special about the three generations – due to some effective, approximate, or local restoration of the whole \(E_8\) symmetry. The simplest heterotic compactifications identify the field strength in the \(SU(3)_g\) part of the gauge group – a subgroup of \(E_8\) – with the field strength in the gravitational \(SU(3)_{CY}\) holonomy – this \(SU(3)_{CY}\) is a subgroup of \(SO(6)\) rotating the six Calabi-Yau dimensions. The grand unified group is only an \(E_6\) or smaller because it's the centralizer of \(SU(3)_g\) within \(E_8\). And I had to take the centralizer of \(SU(3)_g\) because those are the components of the field strength that break the gauge group in \(d=10\) spacetime dimensions. Perhaps we should think that this field strength – or some of its components – is "small" in magnitude, so that one generator of this \(SU(3)_g\) – and \(L_\mu-L_\tau\) is indeed one generator of \(SU(3)_g\) if \((e,\mu,\tau)\) are interpreted as the fundamental triplet of \(SU(3)_g\) – is "much less broken" than the others. If the relevant component of the field strength may be considered "small" in this sense, it could be possible to organize the fermionic spectrum into the part of the \(E_8\) multiplet. And one should find some field-theoretical \(Z'\) boson responsible for the spontaneous breaking of this generator of the generational \(SU(3)_g\).
As you can see, if the heterotic models may be formulated in a slightly special, unorthodox, outside-the-box way (and yes, it's a somewhat big "if"), one may have a natural stringy model that achieves "more than grand" unification, explains why there are three generations of fermions, and accounts for three so far weak anomalies observed by CMS and LHCb (which will dramatically strengthen in a few months if they are real). Hat tip: Tommaso Dorigo
There are infinitely many geodesics on it in each direction. The meridian, the circumference at the neck (minimum radius), and the two ruled straight-line asymptotes are the four principal geodesics you refer to. Their normal curvatures follow Euler's law $$ k_n = k_1 \cos^2 \alpha + k_2 \sin ^2 \alpha \tag{1} $$ so over a 180-degree rotation the four values of $k_n$ are minimum, 0, maximum, 0, repeating at $0, 30, 90, 150, 180, \dots$ degrees for the curvature ratio $$ \frac{k_1}{k_2} = - \frac{3}{1} \tag{2}$$ as shown for the four important geodesics. EDIT 1: As it is a surface of revolution, differential-geometric methods lead to Clairaut's law $$ r \sin \alpha = C. \tag{3} $$ After a study of the second fundamental form of surface theory you appreciate that the above says the same thing as vanishing geodesic curvature (in the tangential plane), $$ k_g = 0. \tag{4} $$ For lines of (principal) curvature $k_g=0$ and $k_n$ is a minimum or maximum, and for the slant (asymptotic) lines occurring in between them $k_g=0$ and $k_n = 0$. The geodesics are the meridian, the pair of straight lines, and the central latitude circle: $$(x^2-z^2=1,\ y=0),\quad (x \pm z=0,\ y=1),\quad (x^2+y^2=1).$$ EDIT 2: To get an $r$-$\theta$ relation for any start angle, combine the slope and Clairaut relations (3), with $a=1$, in $$ r^2 - z^2 = a^2; \quad \tan \phi = \sqrt { (r/a)^2 -1}; \quad dr/ \sin \phi = r\, d\theta \cot \psi; \tag{5}$$ and simplify, with $$ r= r_o \sin \alpha \tag{6} $$ for any chosen geodesic start angle $\alpha$ (it need not be among the four): $$ (dr/d \theta)^2 = r^2 ( r^2/r_o^2-1) ((r/a)^2-1)/(2(r/a)^2-1). \tag{7} $$ Elliptic integrals may be used for a closed form, but it is faster to numerically integrate and plot. Nature of geodesics. EDIT 3: images from WolframAlpha ("Geodesics on Hyperboloids") and from me. It may be instructive here to mention three types of geodesic behaviour around a hyperbolic point; we can see them neatly in the easier-to-handle surfaces of revolution. $ r_o < a $: as given already in the sketches, the geodesic shoots through from one horn to the other.
$ r_o = a $: the geodesic goes round and round but never reaches $ r = a $, which is an asymptote. $ r_o > a $: the geodesic U-turns ahead of $ r = r_o$. In filament-winding practice this is called a turnaround; Google images by this name if you wish intuition to match the mathematical formulation. The red wire shows the behaviour of a returning geodesic ahead of the neck of a bamboo stool, a particularly good example of our surface with its straight ruled asymptotic generators. [Please ignore this paragraph for the time being... A plane parallel to the axis and cutting exactly at the circle of minimum radius produces the asymptotes. It may confuse at the beginning that they are geodesics, asymptotes, and rulings of the ruled surface all at the same time.]
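The numerical integration suggested after equation (7) can be sketched as follows (a minimal illustration of the $r_o > a$ turnaround case; equation (7) is taken exactly as stated in the answer, with $r_o$ playing the role of the turnaround radius):

```python
import math

def dtheta_dr(r, r0, a):
    # Equation (7), as given: (dr/dtheta)^2 =
    #   r^2 (r^2/r0^2 - 1) ((r/a)^2 - 1) / (2 (r/a)^2 - 1)
    rhs = (r * r * (r * r / (r0 * r0) - 1.0)
           * ((r / a) ** 2 - 1.0) / (2.0 * (r / a) ** 2 - 1.0))
    return 1.0 / math.sqrt(rhs)

def swept_angle(r0, a, r_end, n=200_000):
    """Midpoint-rule integral of dtheta/dr from just outside the turnaround
    radius r0 out to r_end; the 1/sqrt(r - r0) singularity at the turnaround
    is integrable, so the sum stays finite."""
    r_start = r0 * (1.0 + 1e-9)
    h = (r_end - r_start) / n
    return sum(dtheta_dr(r_start + (i + 0.5) * h, r0, a) * h for i in range(n))

theta = swept_angle(r0=1.5, a=1.0, r_end=5.0)  # angle swept while r grows to 5
```

By symmetry the full turnaround sweeps twice this angle; plotting $(r\cos\theta, r\sin\theta)$ along the integration reproduces the U-turn picture described above.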
Free keywords: Mathematics, Optimization and Control, math.OC, Computer Science, Distributed, Parallel, and Cluster Computing, cs.DC Abstract: When solving massive optimization problems in areas such as machine learning, it is a common practice to seek speedup via massive parallelism. However, especially in an asynchronous environment, there are limits on the possible parallelism. Accordingly, we seek tight bounds on the viable parallelism in asynchronous implementations of coordinate descent. We focus on asynchronous coordinate descent (ACD) algorithms on convex functions $F:\mathbb{R}^n \rightarrow \mathbb{R}$ of the form $$F(x) = f(x) ~+~ \sum_{k=1}^n \Psi_k(x_k),$$ where $f:\mathbb{R}^n \rightarrow \mathbb{R}$ is a smooth convex function, and each $\Psi_k:\mathbb{R} \rightarrow \mathbb{R}$ is a univariate and possibly non-smooth convex function. Our approach is to quantify the shortfall in progress compared to the standard sequential stochastic gradient descent. This leads to a truly simple yet optimal analysis of the standard stochastic ACD in a partially asynchronous environment, which already generalizes and improves on the bounds in prior work. We also give a considerably more involved analysis for general asynchronous environments in which the only constraint is that each update can overlap with at most $q$ others, where $q$ is at most the number of processors times the ratio in the lengths of the longest and shortest updates. The main technical challenge is to demonstrate linear speedup in the latter environment. This stems from the subtle interplay of asynchrony and randomization. This improves Liu and Wright's (SIOPT'15) lower bound on the maximum degree of parallelism almost quadratically, and we show that our new bound is almost optimal.
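The baseline the paper measures against is sequential stochastic coordinate descent on $F(x)=f(x)+\sum_k \Psi_k(x_k)$. As a concrete sketch (mine, not from the paper; the asynchronous machinery is the paper's contribution and is omitted here), take the special case $\Psi_k(x_k)=\lambda|x_k|$, for which the coordinate update is a gradient step on $f$ followed by soft-thresholding:

```python
import random

def soft_threshold(z, t):
    # Proximal operator of t * |x|: shrink z toward 0 by t.
    return max(z - t, 0.0) if z > 0 else min(z + t, 0.0)

def coordinate_descent(grad_k, n, step, lam, iters, seed=0):
    """Sequential stochastic proximal coordinate descent for
    F(x) = f(x) + lam * sum_k |x_k|  (one choice of the paper's Psi_k).

    grad_k(x, k) returns the k-th partial derivative of the smooth part f.
    """
    rng = random.Random(seed)
    x = [0.0] * n
    for _ in range(iters):
        k = rng.randrange(n)                  # pick a coordinate uniformly
        z = x[k] - step * grad_k(x, k)        # gradient step on f, coordinate k
        x[k] = soft_threshold(z, step * lam)  # prox step on Psi_k
    return x

# Toy separable problem: f(x) = 0.5 * sum_k (x_k - b_k)^2, so grad_k = x_k - b_k;
# the minimizer of F is then soft_threshold(b_k, lam) coordinate-wise.
b = [3.0, -0.5, 0.1]
x = coordinate_descent(lambda x, k: x[k] - b[k], n=3, step=0.5, lam=1.0, iters=2000)
print(x)  # approaches [2.0, 0.0, 0.0]
```

An asynchronous implementation would run the loop body on many processors over shared `x`; the paper's $q$ bounds how many of those updates may overlap.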
Homework Statement: Car A drives a curve of radius 60 m with a constant velocity of 48 km/h. When A is at the given position, car B is 30 m away from the intersection and accelerating at 1.2 m/s^2 to the south. Calculate the length and direction of the acceleration that car B would measure of car A from its perspective at that instant. Homework Equations: Kinematic equations in polar and Cartesian coordinates. I think my approach is quite wrong; still, I gave it a shot: First I know that ##v_A=13.3 m/s=r\omega=60\omega \rightarrow \omega=0.2 \frac{rad}{s}## Then $$\vec a_A=-r\omega^2 e_r=-2.4 e_r$$ But ##e_r=\cos{\theta}i+\sin{\theta}j## and substituting the latter in the acceleration equation I have that ##\vec a_A= -2i-1.2j## At last: $$\vec a_{A/B}=\vec a_A - \vec a_B$$ and this is where I stopped; hope you can help me. Thanks!
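The last step the poster set up is just a vector subtraction (valid because B moves in a straight line, so its frame does not rotate). The figure is not available, so as an assumption this sketch reuses the poster's components for car A and takes "south" as the $-y$ direction:

```python
import math

# a_rel = a_A - a_B, componentwise.
# Assumed inputs (the angle theta comes from the missing figure):
a_A = (-2.0, -1.2)   # poster's components for car A, m/s^2
a_B = (0.0, -1.2)    # 1.2 m/s^2 to the south, taken as -y

a_rel = (a_A[0] - a_B[0], a_A[1] - a_B[1])
magnitude = math.hypot(*a_rel)                          # length of a_rel
direction = math.degrees(math.atan2(a_rel[1], a_rel[0]))  # angle from +x axis

print(a_rel, magnitude, direction)  # (-2.0, 0.0) 2.0 180.0
```

With these assumed numbers the southward components cancel and the relative acceleration points along $-x$ with magnitude 2.0 m/s^2; with the figure's actual angle the components of ##\vec a_A## (and hence the result) would differ.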
Coulomb's law or Coulomb's inverse-square law is a law of physics describing the electrostatic interaction between electrically charged particles. This law was first published in 1785 by French physicist Charles Augustin de Coulomb and was essential to the development of the theory of electromagnetism. It is analogous to Newton's inverse-square law of universal gravitation. Coulomb's law can be used to derive Gauss's law, and vice versa. Coulomb's law has been tested heavily and all observations are consistent with the law. History Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers. Thales of Miletus made a series of observations on static electricity around 600 BC, from which he believed that friction rendered amber magnetic, in contrast to minerals such as magnetite, which needed no rubbing. [1] [2] Thales was incorrect in believing the attraction was due to a magnetic effect, but later science would prove a link between magnetism and electricity. Electricity would remain little more than an intellectual curiosity for millennia until 1600, when the English scientist William Gilbert made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber. [1] He coined the New Latin word electricus ("of amber" or "like amber", from ήλεκτρον [ elektron], the Greek word for "amber") to refer to the property of attracting small objects after being rubbed. [3] This association gave rise to the English words "electric" and "electricity", which made their first appearance in print in Thomas Browne's Pseudodoxia Epidemica of 1646. 
[4] Early investigators of the 18th century who suspected that the electrical force diminished with distance as the gravitational force did (i.e., as the inverse square of the distance) included Daniel Bernoulli [5] and Alessandro Volta, both of whom measured the force between plates of a capacitor, and Aepinus who supposed the inverse-square law in 1758. [6] Based on experiments with charged spheres, Joseph Priestley of England was among the first to propose that electrical force followed an inverse-square law, similar to Newton's law of universal gravitation. However, he did not generalize or elaborate on this. [7] In 1767, he conjectured that the force between charges varied as the inverse square of the distance. [8] [9] In 1769, Scottish physicist John Robison announced that, according to his measurements, the force of repulsion between two spheres with charges of the same sign varied as $x^{-2.06}$. [10] In the early 1770s, the dependence of the force between charged bodies upon both distance and charge had already been discovered, but not published, by Henry Cavendish of England. [11] Finally, in 1785, the French physicist Charles Augustin de Coulomb published his first three reports of electricity and magnetism where he stated his law. This publication was essential to the development of the theory of electromagnetism. [12] He used a torsion balance to study the repulsion and attraction forces of charged particles, and determined that the magnitude of the electric force between two point charges is directly proportional to the product of the charges and inversely proportional to the square of the distance between them. The torsion balance consists of a bar suspended from its middle by a thin fiber. The fiber acts as a very weak torsion spring. In Coulomb's experiment, the torsion balance was an insulating rod with a metal-coated ball attached to one end, suspended by a silk thread.
The ball was charged with a known charge of static electricity, and a second charged ball of the same polarity was brought near it. The two charged balls repelled one another, twisting the fiber through a certain angle, which could be read from a scale on the instrument. By knowing how much force it took to twist the fiber through a given angle, Coulomb was able to calculate the force between the balls and derive his inverse-square proportionality law. The law Coulomb's law states that: The magnitude of the electrostatic force of interaction between two point charges is directly proportional to the product of the magnitudes of the charges and inversely proportional to the square of the distance between them. [12] The force is along the straight line joining them. If the two charges have the same sign, the electrostatic force between them is repulsive; if they have opposite signs, the force between them is attractive. Coulomb's law can also be stated as a simple mathematical expression. The scalar and vector forms of the mathematical equation are $$|\boldsymbol{F}|=k_e\frac{|q_1q_2|}{r^2} \quad\text{and}\quad \boldsymbol{F}_1=k_e\frac{q_1q_2}{|\boldsymbol{r}_{21}|^2}\hat{\boldsymbol{r}}_{21},$$ respectively, where $k_e$ is Coulomb's constant, $q_1$ and $q_2$ are the signed magnitudes of the charges, the scalar $r$ is the distance between the charges, the vector $\boldsymbol{r}_{21}=\boldsymbol{r}_1-\boldsymbol{r}_2$ is the vectorial distance between the charges, and $\hat{\boldsymbol{r}}_{21}$ is a unit vector pointing from $q_2$ to $q_1$. The vector form of the equation above calculates the force applied on $q_1$ by $q_2$. If $\hat{\boldsymbol{r}}_{12}$ is used instead, then the effect on $q_2$ can be found. It can also be calculated using Newton's third law: $\boldsymbol{F}_2=-\boldsymbol{F}_1$. Units Electromagnetic theory is usually expressed using the standard SI units. Force is measured in newtons, charge in coulombs, and distance in metres. Coulomb's constant is given by $k_e = \frac{1}{4\pi\varepsilon_0\varepsilon_r}$. The constant $\varepsilon_0$ is the permittivity of free space in C$^2$ m$^{-2}$ N$^{-1}$, and $\varepsilon_r$ is the relative permittivity of the material in which the charges are immersed, which is dimensionless.
The SI derived units for the electric field are volts per meter, newtons per coulomb, or tesla meters per second. Coulomb's law and Coulomb's constant can also be interpreted in various terms: Electric field An electric field is a vector field that associates to each point in space the Coulomb force experienced by a test charge. In the simplest case, the field is considered to be generated solely by a single source point charge. The strength and direction of the Coulomb force $\boldsymbol{F}$ on a test charge $q_t$ depends on the electric field $\boldsymbol{E}$ that it finds itself in, such that $\boldsymbol{F}=q_t\boldsymbol{E}$. If the field is generated by a positive source point charge $q$, the direction of the electric field points along lines directed radially outwards from it, i.e. in the direction that a positive point test charge would move if placed in the field. For a negative point source charge, the direction is radially inwards. The magnitude of the electric field can be derived from Coulomb's law. By choosing one of the point charges to be the source, and the other to be the test charge, it follows from Coulomb's law that the magnitude of the electric field created by a single source point charge $q$ at a certain distance $r$ from it in vacuum is given by $$E=k_e\frac{|q|}{r^2}.$$ Coulomb's constant Coulomb's constant is a proportionality factor that appears in Coulomb's law as well as in other electric-related formulas. Denoted $k_e$, it is also called the electric force constant or electrostatic constant, hence the subscript $e$. The exact value of Coulomb's constant is: $$k_e = \frac{1}{4\pi\varepsilon_0}=\frac{c_0^2\mu_0}{4\pi}=c_0^2\cdot10^{-7}\,\mathrm{H\ m}^{-1} = 8.987\ 551\ 787\ 368\ 176\ 4\cdot10^9\,\mathrm{N\ m^2\ C}^{-2}.$$ Conditions for validity There are two conditions to be fulfilled for the validity of Coulomb's law: The charges considered must be point charges. They should be stationary with respect to each other.
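The chain of equalities for $k_e$ can be reproduced numerically; a short check (using the pre-2019 SI convention in which $c_0$ is exact and $\mu_0 = 4\pi\times10^{-7}$ H/m by definition):

```python
import math

c0 = 299_792_458.0          # speed of light in vacuum, m/s (exact)
mu0 = 4.0e-7 * math.pi      # vacuum permeability, H/m (pre-2019 SI definition)
eps0 = 1.0 / (mu0 * c0**2)  # from c^2 = 1 / (mu0 * eps0)

k_e = 1.0 / (4.0 * math.pi * eps0)
print(k_e)  # approx 8.9875517873681764e9 N m^2 C^-2

# The 4*pi factors cancel: k_e = mu0 * c0^2 / (4*pi) = c0^2 * 1e-7
assert abs(k_e - c0**2 * 1.0e-7) < 1e-3
```

This is why the quoted "exact" value $8.987\,551\,787\ldots\times10^9$ is simply $c_0^2 \times 10^{-7}$.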
Scalar form When it is only of interest to know the magnitude of the electrostatic force (and not its direction), it may be easiest to consider a scalar version of the law. The scalar form of Coulomb's law relates the magnitude and sign of the electrostatic force $F$ acting simultaneously on two point charges $q_1$ and $q_2$ as follows: $$F=k_e\frac{q_1q_2}{r^2},$$ where $r$ is the separation distance and $k_e$ is Coulomb's constant. If the product $q_1q_2$ is positive, the force between the two charges is repulsive; if the product is negative, the force between them is attractive. [13] Vector form Coulomb's law states that the electrostatic force $\boldsymbol{F}_1$ experienced by a charge $q_1$, at position $\boldsymbol{r}_1$, in the vicinity of another charge $q_2$, at position $\boldsymbol{r}_2$, in vacuum is equal to: $$\boldsymbol{F}_1=\frac{q_1q_2}{4\pi\varepsilon_0}\frac{\hat{\boldsymbol{r}}_{21}}{|\boldsymbol{r}_{21}|^2},$$ where $\boldsymbol{r}_{21}=\boldsymbol{r}_1-\boldsymbol{r}_2$, the unit vector $\hat{\boldsymbol{r}}_{21}=\boldsymbol{r}_{21}/|\boldsymbol{r}_{21}|$, and $\varepsilon_0$ is the electric constant. The vector form of Coulomb's law is simply the scalar definition of the law with the direction given by the unit vector $\hat{\boldsymbol{r}}_{21}$, parallel with the line from charge $q_2$ to charge $q_1$. [14] If both charges have the same sign (like charges) then the product $q_1q_2$ is positive and the direction of the force on $q_1$ is given by $\hat{\boldsymbol{r}}_{21}$; the charges repel each other. If the charges have opposite signs then the product $q_1q_2$ is negative and the direction of the force on $q_1$ is given by $-\hat{\boldsymbol{r}}_{21}$; the charges attract each other. The electrostatic force experienced by $q_2$, according to Newton's third law, is $\boldsymbol{F}_2=-\boldsymbol{F}_1$. System of discrete charges The law of superposition allows Coulomb's law to be extended to include any number of point charges. The force acting on a point charge due to a system of point charges is simply the vector addition of the individual forces acting alone on that point charge due to each one of the charges. The resulting force vector is parallel to the electric field vector at that point, with that point charge removed. The force $\boldsymbol{F}$ on a small charge $q$ at position $\boldsymbol{r}$, due to a system of $N$ discrete charges in vacuum is: $$\boldsymbol{F}(\boldsymbol{r})=\frac{q}{4\pi\varepsilon_0}\sum_{i=1}^{N}q_i\frac{\hat{\boldsymbol{R}}_i}{|\boldsymbol{R}_i|^2},$$ where $q_i$ and $\boldsymbol{r}_i$ are the magnitude and position respectively of the $i$-th charge, and $\hat{\boldsymbol{R}}_i$ is a unit vector in the direction of $\boldsymbol{R}_i=\boldsymbol{r}-\boldsymbol{r}_i$ (a vector pointing from charge $q_i$ to charge $q$).
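The vector form and the superposition rule translate directly into code. A minimal numerical sketch (the helper names `coulomb_force` and `net_force` are illustrative, not from the article):

```python
import math

K_E = 8.9875517873681764e9  # Coulomb's constant, N m^2 C^-2

def coulomb_force(q1, r1, q2, r2):
    """Force on charge q1 at position r1 due to q2 at r2 (vacuum):
    F = k_e q1 q2 rhat_21 / |r_21|^2, with r_21 = r1 - r2."""
    r21 = tuple(a - b for a, b in zip(r1, r2))
    d = math.sqrt(sum(c * c for c in r21))
    scale = K_E * q1 * q2 / d**3   # k_e q1 q2 / d^2, times the unit vector r21/d
    return tuple(scale * c for c in r21)

def net_force(q, r, charges):
    """Superposition: vector sum of pairwise Coulomb forces on (q, r)
    from a list of (q_i, r_i) source charges."""
    total = (0.0, 0.0, 0.0)
    for qi, ri in charges:
        f = coulomb_force(q, r, qi, ri)
        total = tuple(t + c for t, c in zip(total, f))
    return total

# Two like 1 microcoulomb charges 1 m apart: the force on the first points
# away from the second (repulsion), i.e. in -x here.
f = coulomb_force(1e-6, (0.0, 0.0, 0.0), 1e-6, (1.0, 0.0, 0.0))
print(f)  # approx (-8.99e-3, 0.0, 0.0) N

# Symmetric pair of sources: the net force on a charge midway between them vanishes.
net = net_force(1e-6, (0.0, 0.0, 0.0), [(1e-6, (1.0, 0.0, 0.0)),
                                        (1e-6, (-1.0, 0.0, 0.0))])
```

Dividing by `d**3` rather than normalizing `r21` separately is the usual trick for combining the $1/d^2$ magnitude with the unit vector in one step.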
[14] Continuous charge distribution In this case, the principle of linear superposition is also used. For a continuous charge distribution, an integral over the region containing the charge is equivalent to an infinite summation, treating each infinitesimal element of space as a point charge $dq$. The distribution of charge is usually linear, surface or volumetric. For a linear charge distribution (a good approximation for charge in a wire), where $\lambda(\boldsymbol{r}')$ gives the charge per unit length at position $\boldsymbol{r}'$ and $dl'$ is an infinitesimal element of length, $dq=\lambda(\boldsymbol{r}')\,dl'$. [15] For a surface charge distribution (a good approximation for charge on a plate in a parallel plate capacitor), where $\sigma(\boldsymbol{r}')$ gives the charge per unit area at position $\boldsymbol{r}'$ and $dA'$ is an infinitesimal element of area, $dq=\sigma(\boldsymbol{r}')\,dA'$. For a volume charge distribution (such as charge within a bulk metal), where $\rho(\boldsymbol{r}')$ gives the charge per unit volume at position $\boldsymbol{r}'$ and $dV'$ is an infinitesimal element of volume, $dq=\rho(\boldsymbol{r}')\,dV'$. [14] The force on a small test charge $q$ at position $\boldsymbol{r}$ in vacuum is given by the integral over the distribution of charge: $$\boldsymbol{F}(\boldsymbol{r})=\frac{q}{4\pi\varepsilon_0}\int dq\,\frac{\boldsymbol{r}-\boldsymbol{r}'}{|\boldsymbol{r}-\boldsymbol{r}'|^3}.$$ Simple experiment to verify Coulomb's law It is possible to verify Coulomb's law with a simple experiment. Consider two small spheres of mass $m$ and same-sign charge $q$, hanging from two ropes of negligible mass of length $l$. The forces acting on each sphere are three: the weight $mg$, the rope tension $T$ and the electric force $F_1$. In the equilibrium state: $$T\sin\theta_1=F_1 \tag{1}$$ and $$T\cos\theta_1=mg. \tag{2}$$ Dividing (1) by (2): $$\tan\theta_1=\frac{F_1}{mg}.$$ Let $L_1$ be the distance between the charged spheres; the repulsion force between them, assuming Coulomb's law is correct, is equal to $F_1=\frac{q^2}{4\pi\varepsilon_0 L_1^2}$, so: $$\frac{q^2}{4\pi\varepsilon_0 L_1^2}=mg\tan\theta_1. \tag{3}$$ If we now discharge one of the spheres, and we put it in contact with the charged sphere, each one of them acquires a charge $q/2$. In the equilibrium state, the distance between the charges will be $L_2<L_1$, and similarly: $$\frac{(q/2)^2}{4\pi\varepsilon_0 L_2^2}=mg\tan\theta_2. \tag{4}$$ Dividing (3) by (4), we get: $$\frac{\tan\theta_1}{\tan\theta_2}=4\left(\frac{L_2}{L_1}\right)^2. \tag{6}$$ Measuring the angles $\theta_1$ and $\theta_2$ and the distances $L_1$ and $L_2$ between the charges is sufficient to verify that the equality is true, taking into account the experimental error.
In practice, angles can be difficult to measure, so if the length of the ropes is sufficiently great, the angles will be small enough to make the approximation $$\tan\theta\approx\sin\theta=\frac{L/2}{l}=\frac{L}{2l}, \quad\text{so}\quad \frac{\tan\theta_1}{\tan\theta_2}\approx\frac{L_1/2l}{L_2/2l}=\frac{L_1}{L_2}. \tag{7}$$ Using this approximation, the relationship (6) becomes the much simpler expression: $$\frac{L_1}{L_2}\approx 4\left(\frac{L_2}{L_1}\right)^2 \Longrightarrow \frac{L_1}{L_2}\approx\sqrt[3]{4}. \tag{8}$$ In this way, the verification is limited to measuring the distance between the charges and checking that the ratio approximates the theoretical value. Tentative Evidence of Infinite Speed of Propagation In late 2012, experimenters of the Istituto Nazionale di Fisica Nucleare, at the Laboratori Nazionali di Frascati in Frascati, performed an experiment which indicated that there was no delay in propagation of the force between a beam of electrons and detectors. [16] This was taken as indicating that the field seemed to travel with the beam of electrons as if it were a rigid structure preceding the beam. Though awaiting corroboration, the results indicate that aberration is not present in the Coulomb force. Electrostatic approximation In either formulation, Coulomb's law is fully accurate only when the objects are stationary, and remains approximately correct only for slow movement. These conditions are collectively known as the electrostatic approximation. When movement takes place, magnetic fields that alter the force on the two objects are produced. The magnetic interaction between moving charges may be thought of as a manifestation of the force from the electrostatic field but with Einstein's theory of relativity taken into consideration. Other theories like Weber electrodynamics predict other velocity-dependent corrections to Coulomb's law.
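In the small-angle limit, combining equilibrium with Coulomb's law predicts that halving the charge shrinks the separation by a factor of $\sqrt[3]{4}$; a quick numerical check of that prediction (illustrative values of $q$, $m$, $l$, not from the article):

```python
import math

def equilibrium_separation(q, m, l, g=9.81, k_e=8.9875517873681764e9):
    """Small-angle equilibrium of two charged hanging spheres:
    mg * L/(2l) = k_e q^2 / L^2, hence L = (2 l k_e q^2 / (m g))^(1/3)."""
    return (2.0 * l * k_e * q * q / (m * g)) ** (1.0 / 3.0)

L1 = equilibrium_separation(q=5e-8, m=1e-3, l=1.0)    # original charge q
L2 = equilibrium_separation(q=2.5e-8, m=1e-3, l=1.0)  # charge halved to q/2

print(L1 / L2)  # approx 4 ** (1/3) = 1.5874...
```

Since $L^3 \propto q^2$, the ratio $L_1/L_2 = (q_1/q_2)^{2/3} = 4^{1/3}$ independently of $m$, $l$ and the actual charge, which is exactly what makes the experiment a clean test.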
Atomic forces Coulomb's law holds even within the atoms, correctly describing the force between the positively charged nucleus and each of the negatively charged electrons. This simple law also correctly accounts for the forces that bind atoms together to form molecules and for the forces that bind atoms and molecules together to form solids and liquids. Generally, as the distance between ions increases, the energy of attraction approaches zero and ionic bonding is less favorable. As the magnitude of opposing charges increases, energy increases and ionic bonding is more favorable.