The simplest approach (in terms of programming effort) might be to try an existing graph layout tool. Those tools solve a related problem: given a graph with distances on the edges, find the best layout for drawing the graph in the plane. You can treat your problem as an instance of graph layout: create one vertex per point, and for each pair of points $v,w$ with distance bounds $[\ell,u]$, create an edge $v \to w$ with length $(\ell+u)/2$. This does have a limitation, though: typical graph layout algorithms try to get the distances between vertices correct, but they also try to avoid edges that cross each other, whereas in your case you don't care about crossings. So your problem might be easier.

Another possibility is to apply the ideas used for graph layout directly to your problem. There are several algorithmic techniques for graph layout. For instance, you could use a spring-based model, with a spring between each pair of vertices that have a distance bound; each spring tries to keep its pair of vertices a suitable distance apart.

A third approach is black-box mathematical optimization. Introduce an objective function $\Phi$ which, given a set of locations for the points, calculates a penalty value (how "badly" the arrangement violates your constraints), and then try to find an arrangement that minimizes $\Phi$. For instance, suppose for each pair $v,w$ of points you have a lower bound $\ell_{v,w}$ and an upper bound $u_{v,w}$. You could define $$\Phi(x_1,\dots,x_n) = \sum_{i,j} \frac{[\|x_i - x_j\|_2 - (u_{i,j} + \ell_{i,j})/2]^2}{(u_{i,j} - \ell_{i,j})^2},$$ and then use some optimization technique to find an arrangement $x_1,\dots,x_n$ that minimizes $\Phi(x_1,\dots,x_n)$. For instance, you could try hillclimbing, gradient descent, or other local optimization methods.
This approach might be sensitive to the initial values for $x_1,\dots,x_n$, so you might want to repeat it multiple times with different random choices for the initial value, and take the best result. Finally, you could try using simulated annealing. The latter two approaches can be easily adjusted to incorporate angle constraints, simply by modifying the objective function appropriately to add a term that penalizes angles that differ from the desired value.
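To make the penalty-minimization idea concrete, here is a minimal sketch in Python. The instance (three points with made-up distance bounds), the function names, and the hill-climbing parameters are all hypothetical illustrations, not a prescribed implementation; it does a single run from one random start rather than the multiple restarts suggested above.

```python
import math
import random

def phi(xs, bounds):
    """Penalty: squared deviation of each pairwise distance from the midpoint
    of its [lower, upper] bound, scaled by the squared bound width."""
    total = 0.0
    for (i, j), (lo, hi) in bounds.items():
        d = math.dist(xs[i], xs[j])
        total += ((d - (lo + hi) / 2) ** 2) / ((hi - lo) ** 2)
    return total

def hillclimb(bounds, n, steps=2000, seed=0):
    """Simple hill climbing: perturb all points, keep the change if phi improves."""
    rng = random.Random(seed)
    best = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(n)]
    best_phi = phi(best, bounds)
    for _ in range(steps):
        cand = [(x + rng.gauss(0, 0.1), y + rng.gauss(0, 0.1)) for x, y in best]
        c = phi(cand, bounds)
        if c < best_phi:
            best, best_phi = cand, c
    return best, best_phi

# hypothetical instance: three points with pairwise distance bounds
bounds = {(0, 1): (1.0, 2.0), (1, 2): (1.0, 2.0), (0, 2): (2.0, 3.0)}
pts, val = hillclimb(bounds, 3)
```

Since hill climbing only accepts improvements, the final penalty is never worse than that of the random starting arrangement; repeating with different seeds and keeping the best result implements the restart strategy described above.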
Consider the 1D Poisson equation $$ \frac{d^2 u}{dx^2} = -\rho $$ with Dirichlet boundary conditions $u(0) = u(l) = g$. Using a finite difference scheme with a 5-point grid $u_1,u_2,u_3,u_4,u_5$ (excluding the boundary points $u_0$ and $u_l$), we get the set of linear equations $$ \left( \begin{array}{ccc} 2 & -1 & & & \\ -1 & 2 & -1 & & \\ & -1 & 2 & -1 & \\ & & -1 & 2 & -1 \\ & & & -1 & 2 \\ \end{array} \right)\left( \begin{array}{c} u_1 \\ u_2 \\ u_3 \\ u_4 \\ u_5\end{array} \right) = \left( \begin{array}{c} \rho_1+g \\ \rho_2 \\ \rho_3 \\ \rho_4 \\ \rho_5+g\end{array} \right) $$ My question is: what would the matrix look like if I made $u_3$ a boundary point too? Would it look like $$ \left( \begin{array}{ccc} 2 & -1 & & & \\ -1 & 2 & 0 & & \\ & 0 & 1 & 0 & \\ & & 0 & 2 & -1 \\ & & & -1 & 2 \\ \end{array} \right)\left( \begin{array}{c} u_1 \\ u_2 \\ u_3 \\ u_4 \\ u_5\end{array} \right) = \left( \begin{array}{c} \rho_1+g \\ \rho_2+g \\ g \\ \rho_4+g \\ \rho_5+g\end{array} \right) $$ I ask because I have a 3D system with small but irregular internal boundary regions, and it would currently be more convenient for my purposes to leave them in the matrix (even if it means extra computational cost).
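For reference, the 5-point system above can be assembled and solved in a few lines. The sketch below uses plain Python with naive Gaussian elimination; the choice $\rho = 0$ is a made-up test case (with zero source and equal boundary values, the exact solution is the constant $u \equiv g$), and the zero-based indexing differs from the question's $u_1,\dots,u_5$.

```python
def poisson_matrix(n):
    """Tridiagonal (2, -1) matrix from the second-difference stencil."""
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 2.0
        if i > 0:
            A[i][i - 1] = -1.0
        if i < n - 1:
            A[i][i + 1] = -1.0
    return A

def solve(A, b):
    """Naive Gaussian elimination with back-substitution (no pivoting)."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for i in range(n):
        for k in range(i + 1, n):
            m = A[k][i] / A[i][i]
            for j in range(i, n):
                A[k][j] -= m * A[i][j]
            b[k] -= m * b[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

g = 1.0
rho = [0.0] * 5                        # hypothetical source term
b = [rho[0] + g] + rho[1:4] + [rho[4] + g]
u = solve(poisson_matrix(5), b)        # with rho = 0, u should be constant g
```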
On the extension of an arc-search interior-point algorithm for semidefinite optimization

Behrouz Kheirfam, Morteza Moslemi
Department of Applied Mathematics, Azarbaijan Shahid Madani University, Tabriz, I. R. Iran

This paper concerns an extension of the arc-search strategy that was proposed by Yang [

Keywords: Interior-point method, semidefinite optimization, arc-search strategy, polynomial complexity.
Mathematics Subject Classification: 90C51.
Citation: Behrouz Kheirfam, Morteza Moslemi. On the extension of an arc-search interior-point algorithm for semidefinite optimization. Numerical Algebra, Control & Optimization, 2018, 8 (2): 261-275. doi: 10.3934/naco.2018015

References:
[1] F. Alizadeh,
[2]
[3] F. Alizadeh, J. A. Haeberly and M. L. Overton, Primal-dual interior-point methods for semidefinite programming: Convergence rates, stability and numerical results,
[4]
[5]
[6] D. Herbison-Evans, Solving quartics and cubics for graphics, Technical Report R94-487, Basser Department of Computer Science, University of Sydney, Sydney, Australia, 1994. doi: 10.1016/B978-0-12-543457-7.50009-7.
[7]
[8] M. Kojima, S. Shindoh and S. Hara, Interior-point methods for the monotone semidefinite linear complementarity problem in symmetric matrices,
[9] M. Kojima, M. Shida and S. Shindoh, Local convergence of predictor-corrector infeasible interior-point algorithm for SDPs and SDLCPs,
[10] Y. Li and T. Terlaky, A new class of large neighborhood path-following interior point algorithms for semidefinite optimization with $O(\sqrt{n}\log(\frac{{\rm tr}(X^0S^0)}{\varepsilon}))$ iteration complexity,
[11] H. W. Liu, C. H. Liu and X. M. Yang, New complexity analysis of a Mehrotra-type predictor-corrector algorithm for semidefinite programming,
[12]
[13]
[14] R. D. C. Monteiro, Polynomial convergence of primal-dual algorithms for semidefinite programming based on the Monteiro and Zhang family of directions,
[15] R. D. C. Monteiro and Y. Zhang, A unified analysis for a class of long-step primal-dual path-following interior-point algorithms for semidefinite programming,
[16] Y. E. Nesterov and A. S. Nemirovskii,
[17]
[18]
[19] F. A. Potra and R. Sheng, A superlinearly convergent primal-dual infeasible-interior-point algorithm for semidefinite programming,
[20] M. J. Todd, K. C. Toh and R. H. Tütüncü, On the Nesterov-Todd direction in semidefinite programming,
[21]
[22] H. Wolkowicz, R. Saigal and L. Vandenberghe,
[23]
[24] Y. Yang, Arc-search path-following interior-point algorithm for linear programming, Optimization Online, 2009.
[25]
[26]
[27] X. Yang, Y. Zhang and H. Liu, A wide neighborhood infeasible-interior-point method with arc-search for linear programming,
[28] X. Yang, H. Liu and Y. Zhang, An arc-search infeasible-interior-point method for symmetric optimization in a wide neighborhood of the central path,
[29]

[Table 1: comparison of MTYAlgor and NewAlgor (Iter., CPU, DGAP); the DGAP values were not recovered.]

    MTYAlgor            NewAlgor
    Iter.   CPU         Iter.   CPU
    55      0.1882      11      0.0729
    73      0.2476      13      0.0679

[Table 2: comparison of MTYAlgor and NewAlgor (Iter., CPU).]

    MTYAlgor             NewAlgor
    Iter.   CPU          Iter.   CPU
    97.2    3.4915       28.8    1.1028
    117.3   18.7385      32.3    5.9383
    116.2   67.5448      33.7    23.2533
    115.9   70.7626      33.1    24.0481
    136.7   179.9818     35.4    87.9356
This library uses the LaTeX package pgfplots to produce plots. It integrates with IJulia, outputting SVG images to the notebook. This version of PGFPlots requires Julia 0.6 or later.

Pkg.add("PGFPlots")

In addition, you will need to install the following dependencies if you do not already have them on your system. On Debian-based systems, install pdf2svg by running

sudo apt-get install pdf2svg

and on RHEL/Fedora by running

sudo dnf install pdf2svg

On Windows, you can download the binaries from http://www.cityinthesky.co.uk/opensource/pdf2svg/. Be sure to add pdf2svg to your path (and restart). Once these things are installed, you should be able to run the following:

using PGFPlots

You can create a very basic plot by passing in vectors of $x$ and $y$ coordinates.

x = [1,2,3]
y = [2,4,1]
plot(x, y)

The version of the plot function above actually just creates an empty Axis and inserts a Plots.Linear instance containing the data.

Axis(Plots.Linear(x, y))

If you create the Axis object explicitly, as done above, then you can set various properties of the axis.

pushPGFPlotsOptions("scale=1.5")
a = Axis(Plots.Linear(x, y, legendentry="My Plot"), xlabel="X", ylabel="Y", title="My Title")

The options can be set after the plot a is created. Here we rotate the y-label and move the legend by setting the ylabelStyle and legendStyle:

a.ylabelStyle = "rotate = -90"
a.legendStyle = "{at={(1.05,1.0)},anchor=north west}"
a

This will remove the latest added setting:

popPGFPlotsOptions();

And to reset all options, use

resetPGFPlotsOptions();

You can set the width and height of the axis.

a = Axis(Plots.Linear(x, y), width="3cm", height="3cm")

Since latex is used to typeset everything, you can use any latex math symbols you want. If you use L"..." (as below), you do not have to escape \ and $.

Axis(Plots.Linear(x, y), xlabel=L"$X$", ylabel=L"$Y$", title=L"$\int_0^\infty e^{\pi x}dx$")

It is possible to pass a dictionary with arbitrary options to the axis with the customOptions keyword. You can pass in a function and its domain.
It will automatically be evaluated based on the provided domain at xbins evenly-spaced points.

Plots.Linear(x->sqrt(2*x) + sin(x), (0,10), xbins=51)

You can put multiple plots on the same axis and assign legend entries.

Axis([
    Plots.Linear(sin, (0,10), legendentry=L"$\sin(x)$"),
    Plots.Linear(x->sqrt(2*x), (0,10), legendentry=L"$\sqrt{2x}$")
])

You can change the legend position by setting the legendPos parameter in the axis.

Axis([
    Plots.Linear(sin, (0,10), legendentry=L"$\sin(x)$"),
    Plots.Linear(x->sqrt(2*x), (0,10), legendentry=L"$\sqrt{2x}$")
], legendPos="north west")

You can do comb plots by setting the style. The style string gets passed directly into PGFPlots, giving you full control over the plots (see the PGFPlots documentation).

Plots.Linear(1:10, sin.(1:10), style="ycomb")

You can also do horizontal comb plots.

Plots.Linear(abs.(sin.(1:10)), 1:10, style="xcomb")

You can also make it smooth.

Plots.Linear(1:10, sin.(1:10), style="smooth")

There is support for constant plots.

Plots.Linear(1:10, sin.(1:10), style="const plot")
Plots.Linear(1:10, sin.(1:10), style="ybar")
Plots.Linear(1:10, sin.(1:10), style="ybar,fill=green", mark="none")

You can give an axis a log scale by specifying the xmode or ymode parameters of Axis:

p = Plots.Linear(0.01:0.01:1, 10 .^(0.01:0.01:1), mark="none")
Axis(p, ymode="log")

Fill and fill opacity can be handled through the style parameter.

p = Plots.Linear(0:10, (0:10).^2, style="red, fill=blue, fill opacity=0.3", mark="none")

If you want the tick marks to be equal, you can set axisEqual to true (equivalent to axis equal in LaTeX). Note that this will also change the limit sizes, overriding xmax, xmin, ymin, and ymax.

p = Plots.Linear(0:10, 2*(0:10))
a = Axis(p, axisEqual=true, xmin=0, xmax=10) # note xmin and xmax are disregarded...

If this flippant disregard of your axis limit authority displeases you, you can set axisEqualImage to true (equivalent to axis equal image). This will leave the limits alone, and let you modify them.
p = Plots.Linear(0:10, 2*(0:10))
a = Axis(p, axisEqualImage=true)

You can change the size of the markers with the markSize argument. The default marker size is 2.

Plots.Linear(0:10, 2*(0:10), markSize=10)

To eliminate the line and only use marks, you can set the onlyMarks argument to true.

Plots.Linear(0:10, 2*(0:10), onlyMarks=true)

You can plot error bars for Linear and Scatter plots. Here we specify an array for the y error.

x = [1,2,3]
y = [2,4,1]
plot(x, y, errorBars = ErrorBars(y=[1, 0.3, 0.5]))

The y error does not have to be symmetric.

plot(x, y, errorBars = ErrorBars(yplus=[1, 0.3, 0.5], yminus=[0.5, 0.1, 0.1]))

You can also specify x error.

plot(x, y, errorBars = ErrorBars(y=[1, 0.3, 0.5], x=[0.1, 0.1, 0.05]))

You can change the style.

plot(x, y, errorBars = ErrorBars(y=[1, 0.3, 0.5], style="red,very thick"))

You can also specify the mark.

plot(x, y, errorBars = ErrorBars(y=[1, 0.3, 0.5], mark="diamond"))

You can control the style of the plot line along with the error bars.

plot(x, y, style="red", errorBars = ErrorBars(y=[1, 0.3, 0.5], style="black,very thick", mark="diamond"))

A simple scatter plot is just a linear plot with "only marks". The following code returns a Linear plot with "only marks" selected:

x = 1:10
y = 2x
Plots.Scatter(x, y)

PGFPlots gives you the option of picking a color for each scatter point. You can provide a third vector with the desired color values. The following code returns a Scatter plot where points with smaller z-values are blue. Points redden as the z-values increase.

z = 3x
Plots.Scatter(x, y, z)

To add a colorbar, you can set the colorbar argument to true.

p = Plots.Scatter(x, y, z)
a = Axis(p, colorbar=true)

If you want non-numeric data to determine the coloration and marking of each scatter point, you must provide the scatterClasses argument and describe what each symbol means. This is the same string that would be passed into the tex file if you were writing it yourself.
The following code colors points by their class ("a", "b", or "c").

z = ["a", "a", "a", "b", "b", "b", "b", "c", "c", "c"]
sc = "{a={mark=square,blue},b={mark=triangle,red},c={mark=o,black}}"
Plots.Scatter(x, y, z, scatterClasses=sc)

You can add a legend using the legendentry keyword.

Plots.Scatter(x, y, z, scatterClasses=sc, legendentry=["A", "B", "C"])

You can customize the legend using options to the Axis (since the legend style is a property of the Axis).

Axis(Plots.Scatter(x, y, z, scatterClasses=sc, legendentry=["A", "B", "C"]), style="legend columns=-1", legendPos="north west")

It is very easy to make histograms; Histogram is just another type under the Plots module. You should be able to use autocompletion in your editor (e.g., IJulia) to see what Plots are supported.

d = randn(100)
Axis(Plots.Histogram(d, bins=10), ymin=0)

You can even create a cumulative distribution function from the data.

Axis(Plots.Histogram(d, bins=20, cumulative=true, density=true), ymin=0)

As with the other plots, you can control the style. The documentation on tikz and pgfplots can give you more information about what styles are supported.

Axis(Plots.Histogram(d, bins=10, style="red,fill=red!10"), ymin=0, ylabel="Counts")

Sometimes you do not want to store your raw dataset in a Tikz file, especially when the dataset is large. The discretization option lets you specify what discretization algorithm to use.

Axis(Plots.Histogram(d, discretization=:auto), ymin=0)

using Random
discretizations = [
    :default,   # use PGFPlots for small data sizes and :auto for large
    :pgfplots,  # use the PGFPlots histogram function (uses nbins, which defaults to 10)
    :specified, # use Discretizers.jl but with the specified number of bins (which defaults to 10)
    :auto,      # max between :fd and :sturges; good all-round performance
    :fd,        # Freedman-Diaconis estimator, robust
    :sturges,   # R's default method, only good for near-Gaussian data
    :sqrt,      # used by Excel and others for its simplicity and speed
    :doane,     # improves Sturges' for non-normal datasets
    :scott,     # less robust estimator that takes into account data variability and data size
]
Random.seed!(0)
data = [randn(500).*1.8 .+ -5;
        randn(2000).*0.8 .+ -4;
        randn(500).*0.3 .+ -1;
        randn(1000).*0.8 .+ 2;
        randn(500).*1.5 .+ 4]
data = filter!(x->-15.0 <= x <= 15.0, data)
g = GroupPlot(3, 3, groupStyle = "horizontal sep = 1.75cm, vertical sep = 1.5cm")
for discretization in discretizations
    push!(g, Axis(Plots.Histogram(data, discretization=discretization), ymin=0, title=string(discretization)))
end
g

Bar charts differ from histograms in that they represent values assigned to distinct items. A BarChart has keys and values.

Plots.BarChart(["a", "b"], [1,2])

If only values are passed in, the keys will be set to the first $n$ integers.

Plots.BarChart([3,4,5,2,10])

Vectors of AbstractStrings will be counted and the strings used as keys.

Plots.BarChart(["cat", "dog", "cat", "cat", "dog", "mouse"])
Plots.BarChart([L"x", L"x^2", L"x^2", L"sin(x)", "hello world"])
Axis(Plots.BarChart(["potayto", "potahto", "tomayto", "tomahto"], [1,2,3,4], style="cyan"), xlabel="vegetables", ylabel="counts", style="bar width=25pt")

Error bars on the value can be added using the errorBars keyword.

Plots.BarChart(["a", "b", "c"], [2, 3, 5], errorBars=ErrorBars(y=[0.5, 0.7, 1.5]))

Image plots create a PNG bitmap and can be used to visualize functions. The second and third arguments below are tuples specifying the x and y ranges.

f = (x,y)->x*exp(-x^2-y^2)
Plots.Image(f, (-2,2), (-2,2))

You can set the zmin and zmax. By default, it uses the minimum and maximum values of the data.

Plots.Image(f, (-2,2), (-2,2), zmin=-1, zmax=1)

You can invert the Gray colormap.

Plots.Image(f, (-2,2), (-2,2), colormap = ColorMaps.GrayMap(invert = true))
I'm solving the 2D axisymmetric Euler equations in conservative form: $$ \frac{\partial U}{\partial t} + \frac{\partial F(U)}{\partial x} + \frac{\partial G(U)}{\partial r} = H(U) $$ where $$ U = \left( \begin{array}{c} r\rho \\ r\rho u \\ r\rho v \\ re\end{array} \right), \; F(U) = \left( \begin{array}{c} r\rho u \\ r(\rho u^2 + p) \\ r\rho uv \\ r(e+p)u\end{array} \right), \; G(U) = \left( \begin{array}{c} r\rho v \\ r\rho uv \\ r(\rho v^2 + p) \\ r(e+p)v\end{array} \right), \; H(U) = \left( \begin{array}{c} 0 \\ 0 \\ p \\ 0 \end{array} \right), $$ using a finite-difference WENO5 method. How do I correctly impose a discrete axisymmetric boundary condition at $r=0$? One paper regarding boundary conditions (for full cylindrical coordinates) just mentions that a symmetry condition should be used for axisymmetric flows, but without any details. When I was using MacCormack's method (2nd order; for each grid point one more neighboring point in each direction is needed), the procedure was pretty simple (C syntax):

//internal points
for (k = 1; k <= k_max - 1; k++)
{
    for (l = 1; l <= l_max - 1; l++)
    {
        R[k][l] = ... (calculated by MacCormack's method);
        U[k][l] = ...;
        V[k][l] = ...;
        P[k][l] = ...;
    }
}
//boundary at r = 0
for (k = 0; k <= k_max; k++)
{
    R[k][0] = R[k][1];
    U[k][0] = U[k][1];
    V[k][0] = 0;
    P[k][0] = P[k][1];
}
//other boundaries...

where R, U, V, P are arrays for $\rho, u, v, p$, the first array index k is for $x$, the second index l is for $r$ (uniform square grid), and l=0 corresponds to $r=0$. This boundary condition seems to work well. With WENO5, the problem is that for each grid point three more points in each direction are used in the stencil, so specifying one point at l=0 isn't enough. My current ideas are:

1. Explicitly set three near-axis points:

//internal points
for (k = 3; k <= k_max - 3; k++)
{
    for (l = 3; l <= l_max - 3; l++)
    {
        R[k][l] = ... (calculated by WENO5 method);
        U[k][l] = ...;
        V[k][l] = ...;
        P[k][l] = ...;
    }
}
//boundary at r = 0
for (k = 0; k <= k_max; k++)
{
    for (l = 0; l <= 2; l++)
    {
        //copy from the first interior point
        R[k][l] = R[k][3];
        U[k][l] = U[k][3];
        V[k][l] = 0;
        P[k][l] = P[k][3];
    }
}
//other boundaries...

2. Add three ghost points for $r<0$ (so that l=3 corresponds to $r=0$) and set them with mirrored values:

//internal points
for (k = 3; k <= k_max - 3; k++)
{
    for (l = 3; l <= l_max - 3; l++)
    {
        R[k][l] = ... (calculated by WENO5 method);
        U[k][l] = ...;
        V[k][l] = ...;
        P[k][l] = ...;
    }
}
//boundary at r = 0
for (k = 0; k <= k_max; k++)
{
    //values at r < 0
    for (l = 0; l <= 2; l++)
    {
        R[k][l] = R[k][6 - l];
        U[k][l] = U[k][6 - l];
        V[k][l] = -V[k][6 - l];
        P[k][l] = P[k][6 - l];
    }
    //values at r = 0
    R[k][3] = R[k][4];
    U[k][3] = U[k][4];
    V[k][3] = 0;
    P[k][3] = P[k][4];
}
//other boundaries...

or mirror with reversed sign (since values at ghost points are multiplied by negative $r$ in the equations):

...
for (l = 0; l <= 2; l++)
{
    R[k][l] = -R[k][6 - l];
    U[k][l] = -U[k][6 - l];
    V[k][l] = V[k][6 - l];
    P[k][l] = -P[k][6 - l];
}
...

Methods 1 and 2 generate noticeable artifacts near the symmetry axis.

3. Shift the grid by half a step so that no point is placed exactly at $r=0$, and use ghost points. This method leads to computational instability, though.

What would be the correct way?
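The mirror fill of idea 2 can be exercised in isolation. Below is a small sketch in Python (not a WENO5 solver): a single radial column with three ghost points, using even symmetry for the scalars and the axial velocity and odd symmetry for the radial velocity about the axis at index l=3. The array names follow the question; the sample values are made up.

```python
def fill_axis_ghosts(R, U, V, P):
    """Fill ghost points l=0..2 by mirroring about the axis at l=3:
    density, axial velocity, and pressure are even in r,
    the radial velocity is odd; the on-axis radial velocity vanishes."""
    for l in range(3):
        R[l] = R[6 - l]
        U[l] = U[6 - l]
        V[l] = -V[6 - l]
        P[l] = P[6 - l]
    V[3] = 0.0
    return R, U, V, P

# one radial column: three ghost slots followed by interior values (made up)
R = [0.0] * 3 + [1.0, 1.1, 1.2, 1.3]
U = [0.0] * 3 + [0.5, 0.6, 0.7, 0.8]
V = [0.0] * 3 + [0.0, 0.1, 0.2, 0.3]
P = [0.0] * 3 + [2.0, 2.1, 2.2, 2.3]
fill_axis_ghosts(R, U, V, P)
```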
Linear Factors

Let $f(x)$ be the proper rational function $f(x)=\frac{P(x)}{Q(x)}$. To use the method of partial fractions to integrate $f(x)$, we first factor $Q(x)$. In the case when

The video below goes over some examples.

More Examples
----------------------------------------------------------------------------------

Example 2: Find the partial fraction decomposition of $$ \frac{1}{x^4-3x^3+3x^2-x}. $$

DO: The issue here is factoring a quartic, which can be quite difficult. Hint: $Q(x)=x\,\left(x-1\right)^3$. Work on this before proceeding.

Solution 2: Since $Q(x)=x\,\left(x-1\right)^3$, we have both a non-repeated factor, $x$, and a repeated factor, $(x-1)^3$, so we're looking for a decomposition of the form $$ \frac{1}{x\,\left(x-1\right)^3}=\frac{A}{x}+\frac{B_1}{x-1}+\frac{B_2}{\left(x-1\right)^2}+\frac{B_3}{\left(x-1\right)^3}. $$ Taking the common denominator, we have $$ \frac{1}{x\,\left(x-1\right)^3}=\frac{A\left(x-1\right)^3+B_1x\,(x-1)^2+B_2x\,\left(x-1\right)+B_3x} {x\left(x-1\right)^3}. $$ This is true if the numerators are equal, i.e., $$ 1=A\left(x-1\right)^3+B_1x\,(x-1)^2+B_2x\,\left(x-1\right)+B_3x.$$ To find $B_3$, substitute $x=1$: then $1=0+B_3$, so $B_3=1$.
Similarly, substituting $x=0$ we find that $A=-1$. To find the other coefficients, we need to choose other values for $x$. For example, taking $x=2$ yields $ 1=A+2B_1+2B_2+2B_3=-1+2B_1+2B_2+2, $ i.e., $ B_1+B_2=0. $ Taking $x=-1$ yields $ 1=-8A-4B_1+2B_2-B_3=8-4B_1+2B_2-1, $ i.e., $ -2B_1+B_2=-3. $ Solving the system $$ \begin{cases} B_1+B_2=0\\ -2B_1+B_2=-3 \end{cases} $$ yields $B_1=1$ and $B_2=-1$, so that our decomposition becomes $$ \frac{1}{x\,\left(x-1\right)^3}=-\frac{1}{x}+\frac{1}{x-1}-\frac{1}{\left(x-1\right)^2}+\frac{1}{\left(x-1\right)^3}. $$ ---------------------------------------------------------------------------------- Example 3: DO: Use your answer above to find $\displaystyle\int\frac{1}{x(x-1)^3}\,dx$. Solution 3: $\displaystyle\int\frac{1}{x\,\left(x-1\right)^3}\,dx=-\int\frac{1}{x}\,dx+\int\frac{1}{x-1}\,dx-\int\frac{1}{\left(x-1\right)^2}\,dx+\int\frac{1}{\left(x-1\right)^3}\,dx$ $\displaystyle=-\ln|x|+\ln|x-1|+\frac{1}{x-1}-\frac{1}{2(x-1)^2}+C$, using $u$-substitution on the last two terms. Differentiate to see if this answer is right. DO:
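Candidate partial fraction coefficients are easy to verify with exact rational arithmetic. A quick sketch in Python checking that $A=-1$, $B_1=1$, $B_2=-1$, $B_3=1$ reproduces $\frac{1}{x(x-1)^3}$ at several test points (the function names are ad hoc):

```python
from fractions import Fraction

A, B1, B2, B3 = -1, 1, -1, 1  # candidate coefficients for the decomposition

def lhs(x):
    return Fraction(1, x * (x - 1) ** 3)

def rhs(x):
    return (Fraction(A, x) + Fraction(B1, x - 1)
            + Fraction(B2, (x - 1) ** 2) + Fraction(B3, (x - 1) ** 3))

# check at several integer points away from the poles x = 0 and x = 1
print(all(lhs(x) == rhs(x) for x in [2, 3, -1, 5]))
```

Since both sides are rational functions of degree at most 4 in the denominator, agreement at enough distinct points confirms the identity; the same check with wrong coefficients (say $B_1=2$, $B_2=-2$) fails immediately.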
A Belyi-extender (or dessinflateur) is a rational function $q(t) = \frac{f(t)}{g(t)} \in \mathbb{Q}(t)$ that defines a map \[ q : \mathbb{P}^1_{\mathbb{C}} \rightarrow \mathbb{P}^1_{\mathbb{C}} \] unramified outside $\{ 0,1,\infty \}$, and has the property that $q(\{ 0,1,\infty \}) \subseteq \{ 0,1,\infty \}$. An example of such a Belyi-extender is the power map $q(t)=t^n$, which is totally ramified in $0$ and $\infty$, and we clearly have that $q(0)=0,~q(1)=1$ and $q(\infty)=\infty$. The composition of two Belyi-extenders is again an extender, and we get a rather mysterious monoid $\mathcal{E}$ of all Belyi-extenders. Very little seems to be known about this monoid. Its units form the symmetric group $S_3$, which is the automorphism group of $\mathbb{P}^1_{\mathbb{C}} \setminus \{ 0,1,\infty \}$, and mapping an extender $q$ to its degree gives a monoid map $\mathcal{E} \rightarrow \mathbb{N}_+^{\times}$ to the multiplicative monoid of positive natural numbers. If one relaxes the condition $q(t) \in \mathbb{Q}(t)$ to being defined over the algebraic closure $\overline{\mathbb{Q}}$, then such maps/functions have been known for some time under the name of dynamical Belyi-functions, for example in Zvonkin's Belyi Functions: Examples, Properties, and Applications (section 6). Here, one is interested in the complex dynamical system of iterations of $q$, that is, the limit behaviour of the orbits \[ \{ z,q(z),q^2(z),q^3(z),\dots \} \] for all complex numbers $z \in \mathbb{C}$. In general, the $2$-sphere $\mathbb{P}^1_{\mathbb{C}} = S^2$ has a finite number of open sets (the Fatou domains) on which the orbits show similar limit behaviour, and the union of these open sets is dense in $S^2$. The complement of the Fatou domains is the Julia set of the function, of which we might expect a nice fractal picture. Let's take again the power map $q(t)=t^n$.
For a complex number $z$ lying outside the unit circle, the orbit $\{ z,z^n,z^{n^2},\ldots \}$ has limit point $\infty$, and for those lying inside the unit circle, this limit is $0$. So, here we have two Fatou domains (interior and exterior of the unit circle) and the Julia set of the power map is the (boring?) unit circle. Fortunately, there are indeed dynamical Belyi-maps having a more pleasant looking Julia set, such as this one. But then, many dynamical Belyi-maps (and Belyi-extenders) are systems of an entirely different nature: they are completely chaotic, meaning that their Julia set is the whole $2$-sphere! Nowhere do we find an open region where points share the same limit behaviour… (the butterfly effect). There’s a nice sufficient condition for chaotic behaviour, due to Dennis Sullivan, which is pretty easy to check for dynamical Belyi-maps. A periodic point for $q(t)$ is a point $p \in S^2 = \mathbb{P}^1_{\mathbb{C}}$ such that $p = q^m(p)$ for some $m \geq 1$. A critical point is one such that either $q(p) = \infty$ or $q'(p)=0$. Sullivan’s result is that $q(t)$ is completely chaotic when all its critical points $p$ become eventually periodic, that is, some $q^k(p)$ is periodic but $p$ itself is not periodic. For a Belyi-map $q(t)$ the critical points are either complex numbers mapping to $\infty$ or the inverse images of $0$ or $1$ (that is, the black or white dots in the dessin of $q(t)$) which are not leaf-vertices of the dessin. Let’s do an example, already used by Sullivan himself: \[ q(t) = \left(\frac{t-2}{t}\right)^2 \] This is a Belyi-function, and in fact a Belyi-extender, as it is defined over $\mathbb{Q}$ and we have that $q(0)=\infty$, $q(1)=1$ and $q(\infty)=1$. The corresponding dessin is (inverse images of $\infty$ are marked with an $\ast$) The critical points $0$ and $2$ are not periodic, but they become eventually periodic: \[ 2 \rightarrow^q 0 \rightarrow^q \infty \rightarrow^q 1 \rightarrow^q 1 \] and $1$ is periodic.
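The critical orbits in Sullivan's example can be checked mechanically. Here is a small sketch (not from the original post) that iterates $q(t)=((t-2)/t)^2$ on the critical points $2$ and $0$, tracking the point at infinity with the sentinel string `'inf'`:

```python
# Follow the orbits of the critical points of q(t) = ((t-2)/t)^2.
from fractions import Fraction

def q(z):
    if z == 'inf':       # q(infinity) = ((1 - 2s)^2 at s=0) = 1
        return Fraction(1)
    if z == 0:           # 0 is a pole of q, so q(0) = infinity
        return 'inf'
    return ((z - 2) / z) ** 2

def orbit(z, steps):
    points = [z]
    for _ in range(steps):
        z = q(z)
        points.append(z)
    return points

# The chain 2 -> 0 -> inf -> 1 -> 1 from the text:
assert orbit(Fraction(2), 4) == [Fraction(2), Fraction(0), 'inf', Fraction(1), Fraction(1)]
# 1 is a fixed (hence periodic) point:
assert q(Fraction(1)) == 1
```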
For a general Belyi-extender $q$, we have that the image under $q$ of any critical point is among $\{ 0,1,\infty \}$, and because we demand that $q(\{ 0,1,\infty \}) \subseteq \{ 0,1,\infty \}$, every critical point of $q$ eventually becomes periodic. If we want the corresponding dynamical system not to be completely chaotic, we have to ensure that one of the periodic points among $\{ 0,1,\infty \}$ (and there is at least one of those) is itself critical. Let’s consider the very special Belyi-extenders $q$ having the additional property that $q(0)=0$, $q(1)=1$ and $q(\infty)=\infty$; then all three of them are periodic. So, the system is completely chaotic unless the black dot at $0$ is not a leaf-vertex of the dessin, or the white dot at $1$ is not a leaf-vertex, or the degree of the region determined by the starred $\infty$ is at least two. Going back to the mystery Manin-Marcolli sub-monoid of $\mathcal{E}$, it might explain why it is a good idea to restrict to very special Belyi-extenders having as associated dessin a $2$-coloured tree, for then the periodic point $\infty$ is critical (the degree of the outside region is at least two), and therefore the conditions of Sullivan’s theorem are not satisfied. So, these Belyi-extenders do not necessarily have to be completely chaotic. (tbc)
In the current assignment for real analysis, we consider the following function \[ f(x) = \begin{cases} e^{-1/x^2} & x \neq 0 \\ 0 & x = 0. \end{cases} \] We are asked to show that \(f^{(n)}(0) = 0\) for all \(n\), and that the Taylor series for \(f\) at \(0\) therefore fails to converge to \(f(x)\) for \(x \neq 0\). To compute the derivatives \(f^{(n)}(0)\), it is easiest to use induction on \(n\). For \(n = 1\), \[ f'(0) = \lim_{x \to 0} \frac{f(x) - f(0)}{x - 0} = \lim_{x \to 0} \frac 1 x e^{-1 / x^2}. \] To compute the limit, we first use the change of variables \(y = 1 / x\), so that \(y \to \pm \infty\) corresponds to \(x \to 0\). Then \[ \lim_{x \to 0} \frac 1 x e^{-1 / x^2} = \lim_{y \to \pm \infty} \frac{y}{e^{y^2}}. \] Using L’Hopital’s rule, it is easy to verify that the latter limit is zero. In fact, a similar argument shows that for any \(k\) \[ \lim_{x \to 0} \frac{1}{x^k} e^{-1 / x^2} = 0. \] We will need this fact later on. At any rate, we’ve shown that \(f'(0) = 0\). For the inductive step, assume that \(f^{(n)}(0) = 0\). Then \[ f^{(n + 1)}(0) = \lim_{x \to 0} \frac{f^{(n)}(x) - f^{(n)}(0)}{x - 0} = \lim_{x \to 0} \frac{1}{x} f^{(n)}(x). \] After computing the first few derivatives \(f'(x),\ f''(x), \ldots\) for \(x \neq 0\), you should be convinced that \(f^{(n)}(x)\) is of the form \[ f^{(n)}(x) = \left (\frac{a_k}{x^k} + \frac{a_{k-1}}{x^{k-1}} + \cdots + a_0 \right ) e^{-1 / x^2} \] for some constants \(a_0, \ldots, a_k\) depending on \(n\). Using the same trick as before, we can compute \[ \lim_{x \to 0} \frac{a_i}{x^{i+1}} e^{-1 / x^2} = 0, \] hence \[ f^{(n + 1)}(0) = \lim_{x \to 0} \frac{1}{x} f^{(n)}(x) = 0. \] Since all of the derivatives of \(f\) are \(0\) at \(x = 0\), the Taylor series there is \[ \sum_{k = 0}^\infty \frac{f^{(k)}(0)}{k!} x^k = 0. \] However, \(f(x) \neq 0\) for \(x \neq 0\), so the Taylor series only agrees with \(f\) at \(x = 0\).
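A sketch (not part of the assignment) that double-checks the first few derivatives symbolically: sympy can differentiate \(e^{-1/x^2}\) for \(x \neq 0\) and take the limit at \(0\), matching \(f^{(n)}(0) = 0\):

```python
# Check that the first few derivatives of exp(-1/x^2) tend to 0 at x = 0.
from sympy import symbols, exp, diff, limit

x = symbols('x')
f = exp(-1 / x**2)

for n in range(1, 4):
    dn = diff(f, x, n)           # n-th derivative, valid for x != 0
    assert limit(dn, x, 0) == 0  # agrees with f^(n)(0) = 0
```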
I am simulating the motion of a rocket, and I want to plot its position relative to time ($t$) in three dimensions $(x, y, z)$. I have all the information about the rocket. Can you help me find the equations of motion for the coordinates? closed as off-topic by Gert, stafusa, John Rennie, Kyle Kanos, JMac Dec 28 '17 at 18:05 You can use these equations, assuming that the acceleration of the rocket is constant and that air friction is negligible. There are 3 equations: one for the z-axis, which is the up-down axis; one for the y-axis, which is the left-right axis; and one for the x-axis, which is the forward-backward axis. Let $\theta$ be the elevation angle between the rocket and the ground, and $\alpha$ the azimuth angle between the rocket's horizontal heading and the x-axis. The z-axis equation is: $$z = z_0 + (v_0 \sin\theta)\, t + \tfrac{1}{2}(a\sin\theta - g)\, t^2 \;,$$ where $z_0$ is the starting position on the z-axis, $v_0$ is the starting speed, $g$ is the (positive) acceleration due to gravity, $t$ is the time, and $a$ is the acceleration of the rocket; gravity acts downward, so it enters with a minus sign. The x-axis equation is $$x = x_0 + (v_0\cos\theta\cos\alpha)\, t + \tfrac{1}{2} (a\cos\theta\cos\alpha)\, t^2 \;,$$ where $x_0$ is the starting position on the x-axis and the factor $\cos\theta$ projects the motion onto the horizontal plane. Finally, the equation for the y-axis is $$y = y_0 + (v_0\cos\theta\sin\alpha)\, t + \tfrac{1}{2} (a\cos\theta\sin\alpha)\, t^2 \;,$$ where $y_0$ is the starting position on the y-axis. Hopefully those equations help you out in your simulation.
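A minimal simulation sketch using the closed-form equations above; the parameter values ($v_0$, the angles, the thrust acceleration $a$) are made up for illustration, and the model assumes constant acceleration and no drag:

```python
# Sample a rocket trajectory from the constant-acceleration equations.
import math

def position(t, v0=100.0, a=20.0, theta=math.radians(80), alpha=math.radians(30),
             x0=0.0, y0=0.0, z0=0.0, g=9.81):
    """Return (x, y, z) at time t for constant thrust acceleration a."""
    horiz = math.cos(theta)  # projection of the motion onto the horizontal plane
    x = x0 + v0 * horiz * math.cos(alpha) * t + 0.5 * a * horiz * math.cos(alpha) * t**2
    y = y0 + v0 * horiz * math.sin(alpha) * t + 0.5 * a * horiz * math.sin(alpha) * t**2
    z = z0 + v0 * math.sin(theta) * t + 0.5 * (a * math.sin(theta) - g) * t**2
    return (x, y, z)

# Sample the trajectory once per second for 10 s; feed these points to any 3D plotter.
trajectory = [position(t) for t in range(11)]
print(trajectory[-1])
```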
One-dimensional parameter-dependent boundary-value problems in Hölder spaces Hanna Masliuk, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, Peremogy Avenue 37, 03056, Kyiv-56, Ukraine; Vitalii Soldatov, Institute of Mathematics, National Academy of Sciences of Ukraine, Tereshchenkivska Str. 3, 01004 Kyiv-4, Ukraine Abstract We study the most general class of linear boundary-value problems for systems of $r$-th order ordinary differential equations whose solutions range over the complex Hölder space $C^{n+r,\alpha}$, with $0\leq n\in\mathbb{Z}$ and $0<\alpha\leq1$. We prove a constructive criterion under which the solution to an arbitrary parameter-dependent problem from this class is continuous in $C^{n+r,\alpha}$ with respect to the parameter. We also prove a two-sided estimate for the degree of convergence of this solution to the solution of the corresponding nonperturbed problem. Key words: Differential system, boundary-value problem, continuity in parameter, Hölder space Source: Methods Funct. Anal. Topology, Vol. 24 (2018), no. 2, 143-151. Milestones: Received 23/01/2018; Revised 20/02/2018. Copyright The Author(s) 2018 (CC BY-SA). Citation: Hanna Masliuk and Vitalii Soldatov, One-dimensional parameter-dependent boundary-value problems in Hölder spaces, Methods Funct. Anal. Topology 24 (2018), no. 2, 143-151. http://mfat.imath.kiev.ua/article/?id=1054
If $j$ is a function with $V_{\lambda+1}\subseteq\mathrm{Dom}(j)$, then define a mapping $j\upharpoonright_{\lambda+1}:V_{\lambda+1}\rightarrow V_{\lambda+1}$ by letting $j\upharpoonright_{\lambda+1}(A)=j(A)\cap V_{\lambda}$ for each $A\subseteq V_{\lambda}$. Suppose that $\lambda$ is an inaccessible cardinal. Then we shall say that a function $f:V_{\lambda+1}\rightarrow V_{\lambda+1}$ is an extendibility mapping if for all $\alpha>\lambda$ there is some elementary embedding $j:V_{\alpha}\rightarrow V_{\beta}$ where $f=j\upharpoonright_{\lambda+1}$. If $\kappa<\lambda$ and $\lambda$ is inaccessible, then $\kappa$ is extendible if and only if there is some extendibility mapping $f:V_{\lambda+1}\rightarrow V_{\lambda+1}$ with $\mathrm{crit}(f)=\kappa.$ Let $\Phi(f_{1},...,f_{n},\lambda)$ denote the following formula. i. $f_{1},...,f_{n}:V_{\lambda+1}\rightarrow V_{\lambda+1}$ are non-trivial extendibility mappings with $\mathrm{crit}(f_{1})<...<\mathrm{crit}(f_{n})<\lambda$ and $\lambda$ inaccessible, and ii. whenever $1\leq i<j\leq n$ and $\alpha>\lambda$ and $k:V_{\alpha}\rightarrow V_{\beta}$ is an elementary embedding with $k\upharpoonright_{\lambda+1}=f_{j}$, then $(k(f_{i}))\upharpoonright_{\lambda+1}=f_{i}$. Is the existence of $n$ extendible cardinals equiconsistent with the existence of extendibility mappings $f_{1},...,f_{n}:V_{\lambda+1}\rightarrow V_{\lambda+1}$ where $\Phi(f_{1},...,f_{n},\lambda)$ is false? The motivation behind asking this question is that the extendible cardinals begin to exhibit some aspects of the algebras of elementary embeddings, and these algebraic properties of the extendible cardinals may be easier to investigate than the algebraic properties at the level around $n$-hugeness and rank-into-rank. I will probably ask more questions like this one in the near future.
Introduction Formatting of time-to-event data in the MonolixSuite Single event Repeated events User defined likelihood function for time-to-event data Objectives: learn how to implement a model for (repeated) time-to-event data with different censoring processes. Projects: tte1_project, tte2_project, tte3_project, tte4_project, rtteWeibull_project, rtteWeibullCount_project Here, observations are the “times at which events occur”. An event may be one-off (e.g., death, hardware failure) or repeated (e.g., epileptic seizures, mechanical incidents, strikes). Several functions play key roles in time-to-event analysis: the survival, hazard and cumulative hazard functions. We are still working under a population approach here, so these functions, detailed below, are individual functions, i.e., each subject has its own. As we are using parametric models, this means that these functions depend on individual parameters \((\psi_i)\). The survival function \(S(t, \psi_i)\) gives the probability that the event happens to individual \(i\) after time \(t>t_{\text{start}}\): $$S(t,\psi_i) = \mathbb{P}(T_i>t; \psi_i) $$ The hazard function \(h(t,\psi_i)\) is defined for individual \(i\) as the instantaneous rate of the event at time \(t\), given that the event has not already occurred: $$h(t, \psi_i) = \lim_{dt \to 0} \frac{S(t, \psi_i) - S(t + dt, \psi_i)}{ S(t, \psi_i)\, dt} $$ This is equivalent to $$h(t, \psi_i) = -\frac{d}{dt} \left(\log{S(t, \psi_i)}\right)$$ Another useful quantity is the cumulative hazard function \(H(a,b; \psi_i)\), defined for individual \(i\) as $$H(a,b; \psi_i) = \int_a^b h(t,\psi_i)\, dt $$ Note that \(S(t, \psi_i) = e^{-H(t_{\text{start}},t; \psi_i)}\). The hazard function \(h(t,\psi_i)\) therefore characterizes the problem, because knowing it is the same as knowing the survival function \(S(t, \psi_i)\). The probability distribution of survival data is therefore completely defined by the hazard function.
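The identity \(S(t) = e^{-H(t_{\text{start}},t)}\) can be checked numerically. This sketch (not from the documentation) uses a Weibull hazard \(h(t) = (\beta/\lambda)(t/\lambda)^{\beta-1}\), whose cumulative hazard has the closed form \((t/\lambda)^\beta\), and integrates the hazard with a trapezoidal rule:

```python
# Numerically verify S(t) = exp(-H(0, t)) for a Weibull hazard.
import math

LAM, BETA = 10.0, 2.0  # illustrative parameter values

def hazard(t):
    return (BETA / LAM) * (t / LAM) ** (BETA - 1)

def cumulative_hazard(a, b, n=10000):
    # Trapezoidal approximation of the integral of the hazard over [a, b].
    step = (b - a) / n
    total = 0.5 * (hazard(a) + hazard(b))
    for i in range(1, n):
        total += hazard(a + i * step)
    return total * step

t = 7.0
S_numeric = math.exp(-cumulative_hazard(0.0, t))
S_exact = math.exp(-(t / LAM) ** BETA)  # closed-form Weibull survival
print(S_numeric, S_exact)
assert abs(S_numeric - S_exact) < 1e-6
```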
Time-to-event (TTE) models are thus defined in Monolix via the hazard function. Monolix also holds a TTE library that contains typical hazard functions for time-to-event data. More details and modeling guidelines can be found on the TTE dedicated webpage, along with case studies. In the data set, exactly observed events, interval censored events and right censoring are recorded for each individual. Contrary to other software for survival analysis, the MonolixSuite requires the user to specify the time at which the observation period starts. This makes it possible to define the data set using absolute times, in addition to durations (if the start time is zero, the records represent durations between the start time and the event). The column TIME also contains the end of the observation period or the time intervals for interval censoring. The column OBSERVATION contains an integer that indicates how to interpret the associated time. The different values for each type of event and observation are summarized in the table below. The figure below summarizes the different situations with examples. For instance, for single events, exactly observed (with or without right censoring), one must indicate the start time of the observation period (Y=0), and the time of event (Y=1) or the time of the end of the observation period if no event has occurred (Y=0). In the following example:

ID TIME Y
1  0    0
1  34   1
2  0    0
2  80   0

the observation period lasts from starting time t=0 to the final time t=80. For individual 1, the event is observed at t=34, and for individual 2, no event is observed during the period. Thus all we know is that no event had occurred for individual 2 by the final time (t=80).
Using absolute times instead of durations, we could equivalently write:

ID TIME Y
1  20   0
1  54   1
2  33   0
2  113  0

The durations between start time and event (or end of the observation period) are the same as before, but this time we record the days at which the patients enter the study and the days at which they have events or leave the study. Different patients may enter the study at different times. To begin with, we will consider a one-off event. Depending on the application, the length of time to this event may be called the survival time (until death, for instance), failure time (until hardware fails), and so on. In general, we simply say “time-to-event”. The random variable representing the time-to-event for subject \(i\) is typically written \(T_i\). tte1_project(data = tte1_data.txt , model=lib:exponential_model_singleEvent.txt) The event time may be exactly observed at time \(t_i\), but if we assume that the trial ends at time \(t_{\text{stop}}\), the event may happen after the end. This is “right censoring”. Here, Y=0 at time t means that the event happened after t, and Y=1 means that the event happened at time t. The rows with t=0 are included to show the trial start time \(t_{\text{start}}=0\). By clicking on the button Observed data, it is possible to display the Kaplan-Meier plot (i.e. the empirical survival function) before fitting any model. A very basic model with constant hazard is used for this data:

[LONGITUDINAL]
input = Te

EQUATION:
h = 1/Te

DEFINITION:
Event = {type=event, maxEventNumber=1, hazard=h}

OUTPUT:
output = {Event}

Here, Te is the expected time to event.
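With a constant hazard \(h = 1/T_e\), event times are exponential with mean \(T_e\). This hedged sketch (not part of the documentation) generates synthetic single-event data in the same ID/TIME/Y layout, with right censoring at \(t_{\text{stop}}\); the parameter values are invented for illustration:

```python
# Simulate single events under a constant hazard h = 1/Te, censored at t_stop.
import random

random.seed(0)
Te, t_stop = 30.0, 80.0

rows = []  # (ID, TIME, Y) triples in the Monolix-style format
for i in range(1, 6):
    rows.append((i, 0, 0))                      # start of the observation period
    t_event = random.expovariate(1.0 / Te)      # exponential event time, mean Te
    if t_event <= t_stop:
        rows.append((i, round(t_event, 1), 1))  # event exactly observed
    else:
        rows.append((i, t_stop, 0))             # right censored at t_stop

for row in rows:
    print(*row)
```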
Specification of the maximum number of events is required both for the estimation procedure and for the diagnostic plots based on simulation, such as the prediction interval for the Kaplan-Meier plot, which is obtained by Monte Carlo simulation. tte2_project(data = tte2_data.txt , model=exponentialIntervalCensored_model.txt) We may know the event has happened in an interval \(I_i\) but not know the exact time \(t_i\). This is interval censoring. Here, Y=0 at time t means that the event happened after t, and Y=1 means that the event happened before time t. The event for individual 1 happened between t=10 and t=15. No event was observed until the end of the experiment (t=100) for individual 5. We use the same basic model, but we now need to specify that the events are interval censored:

[LONGITUDINAL]
input = Te

EQUATION:
h = 1/Te

DEFINITION:
Event = {type=event, maxEventNumber=1, eventType=intervalCensored,
         hazard=h,
         intervalLength=5 ; used for the plots (not mandatory)
}

OUTPUT:
output = Event

Sometimes, an event can potentially happen again and again, e.g., epileptic seizures, heart attacks.
For any given hazard function h, the survival function S for individual i now represents the survival since the previous event at \(t_{i,j-1}\), given here in terms of the cumulative hazard from \(t_{i,j-1}\) to \(t_{i,j}\): $$S(t_{i,j} | t_{i,j-1}; \psi_i) = \mathbb{P}(T_{i,j} > t_{i,j} | T_{i,j-1} = t_{i,j-1}; \psi_i) = \exp\left(-\int_{t_{i,j-1}}^{t_{i,j}}h(t,\psi_i)\, dt\right)$$ tte3_project(data = tte3_data.txt , model=lib:exponential_model_repeatedEvents.txt) A sequence of \(n_i\) event times is precisely observed before \(t_{\text{stop}} = 200\). We can then display the Kaplan-Meier plot for the first event and the mean number of events per individual. After fitting the model, prediction intervals for these two curves can also be displayed on the same graph, as in the following: tte4_project(data = tte4_data.txt , model=exponentialIntervalCensored_repeated_model.txt) We do not know the exact event times, but only the number of events that occurred for each individual in each interval of time. weibullRTTE(data = weibull_data.txt , model=weibullRTTE_model.txt) A Weibull model is used in this example:

[LONGITUDINAL]
input = {lambda, beta}

EQUATION:
h = (beta/lambda)*(t/lambda)^(beta-1)

DEFINITION:
Event = {type=event, hazard=h, eventType=intervalCensored, intervalLength=5}

OUTPUT:
output = Event

weibullCount(data = weibull_data.txt , model=weibullCount_model.txt) Instead of defining the data as events, it is possible to consider the data as count data: indeed, we count the number of events per interval. An additional column with the start of the interval is added in the data file and defined as a regression variable. We then use a model for count data (see rtteWeibullCount_model.txt).
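For the Weibull hazard above, the cumulative hazard is \(H(0,t) = (t/\lambda)^\beta\), so event times can be simulated by inverting it: \(T = \lambda(-\log U)^{1/\beta}\) with \(U\) uniform. A sketch (not from the documentation, parameter values invented) that checks the simulated mean against the known Weibull mean \(\lambda\,\Gamma(1 + 1/\beta)\):

```python
# Simulate Weibull event times by inverting the cumulative hazard.
import math
import random

random.seed(1)
LAM, BETA = 10.0, 2.0

def first_event_time():
    u = random.random()
    return LAM * (-math.log(u)) ** (1.0 / BETA)

samples = [first_event_time() for _ in range(20000)]
mean = sum(samples) / len(samples)
expected = LAM * math.gamma(1.0 + 1.0 / BETA)  # Weibull mean
print(mean, expected)
```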
Surjection iff Cardinal Inequality Theorem Let $S$ and $T$ be sets such that $S \sim \card S$ and $T \sim \card T$. Furthermore, let $S$ be non-empty. Then: there exists a surjection $f: S \to T$ if and only if $0 < \card T \le \card S$. Proof Necessary Condition Suppose $f: S \to T$ is a surjection. Then $\Img f = T$ by definition, so $T$ is the image of $S$ under $f$, and hence: $\card T \le \card S$ Furthermore, since $S$ is non-empty, $\map f x \in T$ for some $x \in S$, so $0 < \card T$. $\Box$ Sufficient Condition Suppose that $0 < \card T \le \card S$. Then there exists an injection $g: T \to S$. Take an arbitrary $y \in T$. Define the function $f: S \to T$ as follows: $\map f x = \begin{cases} \map {g^{-1} } x & : x \in \Img g \\ y & : x \notin \Img g \end{cases}$ For any $z \in T$, $\map g z \in \Img g$ and $\map f {\map g z} = z$. Thus, $\map f x = z$ for some $x \in S$. It follows that $f: S \to T$ is a surjection. $\blacksquare$
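A toy illustration (not on the original page) of the sufficient-condition construction on small finite sets: starting from an injection $g: T \to S$, build $f$ exactly as in the proof and check it is surjective. The particular sets and the choice of $g$ and $y$ are arbitrary:

```python
# Build the surjection f: S -> T from an injection g: T -> S, as in the proof.
S = {1, 2, 3, 4, 5}
T = {'a', 'b', 'c'}

g = {'a': 1, 'b': 2, 'c': 3}              # an injection T -> S (card T <= card S)
g_inverse = {v: k for k, v in g.items()}  # g^{-1} on the image of g
y = 'a'                                   # arbitrary fixed element of T

def f(x):
    # g^{-1}(x) on Img g, the constant y elsewhere
    return g_inverse.get(x, y)

image = {f(x) for x in S}
assert image == T  # f is a surjection onto T
```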
The rate at which a stationary clock slows down near a massive object, relative to one far away, can be read off from the Schwarzschild metric: $$c^2d\tau^2=\left(1-\frac{r_s}{r}\right)c^2dt^2-\left(1-\frac{r_s}{r}\right)^{-1}dr^2-r^2\left(d\theta^2+\sin^2\theta d\phi^2\right)$$ by setting ##dr=d\theta=d\phi=0## to give: $$\frac{d\tau}{dt}=\left(1-\frac{r_s}{r}\right)^{1/2}$$ where the Schwarzschild radius ##r_s=2GM/c^2##. If the clock is running slowly compared to a distant clock, is this equivalent to the clock having a lower energy compared to a distant clock? If the clock were an atomic system, then the frequency of its oscillation would be lower near the massive object. As energy is proportional to frequency for atomic systems, I would have thought that this implies that the energy of the atomic system is less near the massive object than it was far away.
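As a numeric aside (not in the original post), the factor \(d\tau/dt\) can be evaluated for a concrete case, a clock at the surface of the Sun, using \(r_s = 2GM/c^2\); the physical constants below are standard approximate values:

```python
# Gravitational time-dilation factor at the surface of the Sun.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.957e8    # solar radius, m

r_s = 2 * G * M_sun / c**2          # Schwarzschild radius, about 3 km
dtau_dt = math.sqrt(1 - r_s / R_sun)
print(r_s, dtau_dt)  # the clock rate differs from a distant clock by about 2e-6
```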
We will now see a way of evaluating the line integral of a smooth vector field around a simple closed curve. A vector field \(\textbf{f}(x, y) = P(x, y)\textbf{i} + Q(x, y)\textbf{j}\) is smooth if its component functions \(P(x, y)\) and \(Q(x, y)\) are smooth. We will use Green’s Theorem (sometimes called Green’s Theorem in the plane) to relate the line integral around a closed curve with a double integral over the region inside the curve: Theorem 4.7: Green's Theorem Let \(R\) be a region in \(\mathbb{R}^2\) whose boundary is a simple closed curve \(C\) which is piecewise smooth. Let \(\textbf{f}(x, y) = P(x, y)\textbf{i}+Q(x, y)\textbf{j}\) be a smooth vector field defined on both \(R\) and \(C\). Then \[\oint_C \textbf{f}\cdot d\textbf{r} = \iint\limits_R \left ( \dfrac{∂Q}{ ∂x} - \dfrac{∂P}{ ∂y} \right )\,dA, \label{Eq4.21}\] where \(C\) is traversed so that \(R\) is always on the left side of \(C\). Proof: We will prove the theorem in the case for a simple region \(R\), that is, where the boundary curve \(C\) can be written as \(C = C_1 \cup C_2\) in two distinct ways: \[\begin{align} C_1 &= \text{ the curve }y = y_1(x)\text{ from the point }X_1 \text{ to the point }X_2 \label{Eq4.22} \\ C_2 &=\text{ the curve }y = y_2(x) \text{ from the point }X_2 \text{ to the point } X_1 , \label{Eq4.23} \\ \end{align}\] where \(X_1\) and \(X_2\) are the points on \(C\) farthest to the left and right, respectively; and \[\begin{align} C_1 &= \text{ the curve }x = x_1(y)\text{ from the point }Y_2 \text{ to the point } Y_1 \label{Eq4.24} \\ C_2 &= \text{ the curve } x = x_2(y)\text{ from the point } Y_1 \text{ to the point }Y_2,\label{Eq4.25} \\ \end{align}\] where \(Y_1\) and \(Y_2\) are the lowest and highest points, respectively, on \(C\). See Figure 4.3.1. Figure 4.3.1 Integrate \(P(x, y)\) around \(C\) using the representation \(C = C_1 \cup C_2\) given by Equation \ref{Eq4.22} and Equation \ref{Eq4.23}.
Since \(y = y_1(x) \text{ along }C_1\) (as \(x\) goes from \(a \text{ to }b)\) and \(y = y_2(x) \text{ along }C_2\) (as \(x\) goes from \(b \text{ to }a)\), as we see from Figure 4.3.1, then we have \[\nonumber \begin{align} \oint_C P(x, y)\,dx&=\int_{C_1}P(x, y)\,dx+\int_{C_2}P(x, y)\,dx \\ \nonumber &=\int_a^b P(x, y_1(x))\,dx+\int_b^a P(x, y_2(x))\,dx \\ \nonumber &=\int_a^b P(x, y_1(x))\,dx - \int_a^b P(x, y_2(x))\,dx \\ \nonumber &=-\int_a^b (P(x, y_2(x)) - P(x, y_1(x)))\, dx \\ \nonumber &=-\int_a^b \left ( P(x, y) \Big |_{y=y_1(x)}^{y=y_2(x)} \right )\,dx \\ \nonumber &=-\int_a^b \int_{y_1(x)}^{y_2(x)} \dfrac{∂P(x, y)}{ ∂y}\,dy\,dx \text{ (by the Fundamental Theorem of Calculus)} \\ &=-\iint\limits_R \dfrac{∂P}{ ∂y}\,dA. \\ \label{Eq4.26} \end{align}\] Likewise, integrate \(Q(x, y)\) around \(C\) using the representation \(C = C_1 \cup C_2\) given by Equation \ref{Eq4.24} and Equation \ref{Eq4.25}. Since \(x = x_1(y) \text{ along }C_1\) (as \(y\) goes from \(d\) to \(c\)) and \(x = x_2(y) \text{ along }C_2\) (as \(y\) goes from \(c\) to \(d\)), as we see from Figure 4.3.1, then we have \[\nonumber \begin{align} \oint_C Q(x, y)\,dy&=\int_{C_1}Q(x, y)\,dy+\int_{C_2}Q(x, y)\,dy \\ \nonumber &=\int_d^c Q(x_1(y), y)\,dy+\int_c^d Q(x_2(y), y)\,dy \\ \nonumber &=-\int_c^d Q(x_1(y), y)\,dy + \int_c^d Q(x_2(y), y)\,dy \\ \nonumber &=\int_c^d (Q(x_2(y), y) - Q(x_1(y), y))\, dy \\ \nonumber &=\int_c^d \left ( Q(x, y) \Big |_{x=x_1(y)}^{x=x_2(y)} \right )\,dy \\ \nonumber &=\int_c^d \int_{x_1(y)}^{x_2(y)} \dfrac{∂Q(x, y)}{ ∂x}\,dx\,dy \text{ (by the Fundamental Theorem of Calculus)} \\ \nonumber &=\iint\limits_R \dfrac{∂Q}{ ∂x}\,dA,\text{ and so} \\ \end{align}\] \[\nonumber \begin{align} \oint_C \textbf{f}\cdot d\textbf{r} &= \oint_C P(x, y)\,dx + \oint_C Q(x, y)\,d y \\ \nonumber &= -\iint_R \dfrac{∂P}{ ∂y}\,dA + \iint_R \dfrac{∂Q}{∂x}\,dA \\ \nonumber &= \iint_R \left ( \dfrac{∂Q}{ ∂x}-\dfrac{∂P}{ ∂y} \right ) \,dA.
\\ \end{align}\] \(\tag{\(\textbf{QED}\)}\) Though we proved Green’s Theorem only for a simple region \(R\), the theorem can also be proved for more general regions (say, a union of simple regions). Example 4.7 Evaluate \(\oint_C (x^2 + y^2 )\,dx+2x y\, d y\), where \(C\) is the boundary (traversed counterclockwise) of the region \(R = \{(x, y) : 0 ≤ x ≤ 1,\ 2x^2 ≤ y ≤ 2x\}\). Solution: \(R\) is the shaded region in Figure 4.3.2. By Green’s Theorem, for \(P(x, y) = x^2 + y^2 \text{ and }Q(x, y) = 2x y\), we have \[\nonumber \begin{align} \oint_C (x^2+y^2)\,dx+2x y \,d y &=\iint_R \left ( \dfrac{∂Q}{ ∂x}-\dfrac{∂P}{ ∂y} \right ) \, dA \\ \nonumber &=\iint_R (2y−2y)\,d A = \iint_R 0\,dA = 0. \\ \end{align}\] Figure 4.3.2 We actually already knew that the answer was zero. Recall from Example 4.5 in Section 4.2 that the vector field \(\textbf{f}(x, y) = (x^2 + y^2 )\textbf{i}+2x y\textbf{j}\) has a potential function \(F(x, y) = \dfrac{1}{3} x^3 + x y^2\) , and so \(\oint_C \textbf{f}\cdot d\textbf{r} = 0\) by Corollary 4.6. Example 4.8 Let \(\textbf{f}(x, y) = P(x, y)\textbf{i}+Q(x, y)\textbf{j}\), where \[\nonumber P(x, y) =\dfrac{-y}{x^2+y^2} \text{ and }Q(x, y) =\dfrac{x}{x^2+y^2},\] and let \(R = \{(x, y) : 0 < x^2 + y^2 ≤ 1\}\). For the boundary curve \(C : x^2 + y^2 = 1\), traversed counterclockwise, it was shown in Exercise 9(b) in Section 4.2 that \(\oint_C \textbf{f}\cdot d\textbf{r} = 2π\). But \[\nonumber \dfrac{∂Q }{∂x} = \dfrac{y^2-x^2}{(x^2+y^2)^2} = \dfrac{∂P }{∂y} \Rightarrow \iint\limits_R \left ( \dfrac{∂Q}{ ∂x} - \dfrac{ ∂P}{ ∂y} \right )\,dA= \iint\limits_R 0\,dA = 0\] This would seem to contradict Green’s Theorem. However, note that \(R\) is not the entire region enclosed by \(C\), since the point \((0,0)\) is not contained in \(R\). That is, \(R\) has a “hole” at the origin, so Green’s Theorem does not apply.
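The discrepancy in Example 4.8 is easy to confirm numerically; this sketch (not part of the text) integrates \(P\,dx + Q\,dy\) around the unit circle with a midpoint rule and recovers \(2\pi\), even though \(\partial Q/\partial x - \partial P/\partial y = 0\) on \(R\):

```python
# Numerically evaluate the line integral of Example 4.8 around the unit circle.
import math

def P(x, y):
    return -y / (x**2 + y**2)

def Q(x, y):
    return x / (x**2 + y**2)

# Parametrize C by x = cos t, y = sin t for t in [0, 2*pi] and apply
# a midpoint rule to the integral of P dx + Q dy.
n = 100000
total = 0.0
for i in range(n):
    t = (i + 0.5) * 2 * math.pi / n
    x, y = math.cos(t), math.sin(t)
    dx, dy = -math.sin(t), math.cos(t)  # derivatives of the parametrization
    total += (P(x, y) * dx + Q(x, y) * dy) * (2 * math.pi / n)

print(total)  # approximately 6.2831853 = 2*pi, not 0: the hole at the origin matters
```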
If we modify the region \(R\) to be the annulus \(R = \{(x, y) : 1/4 ≤ x^2 + y^2 ≤ 1\}\) (see Figure 4.3.3), and take the “boundary” \(C \text{ of }R \text{ to be }C = C_1 \cup C_2\) , where \(C_1\) is the unit circle \(x^2 + y^2 = 1\) traversed counterclockwise and \(C_2\) is the circle \(x^2 + y^2 = 1/4\) traversed clockwise, then it can be shown (see Exercise 8) that \[\nonumber \oint_C \textbf{f} \cdot d\textbf{r} = 0 \] Figure 4.3.3 The annulus \(R\) We would still have \(\iint\limits_R \left ( \dfrac{∂Q}{∂x} − \dfrac{∂P}{ ∂y } \right )\,d A = 0\), so for this \(R\) we would have \[\nonumber \oint_C \textbf{f}\cdot d\textbf{r} = \iint\limits_R \left ( \dfrac{∂Q}{ ∂x} - \dfrac{∂P}{ ∂y} \right ) \, dA,\] which shows that Green’s Theorem holds for the annular region \(R\). It turns out that Green’s Theorem can be extended to multiply connected regions, that is, regions like the annulus in Example 4.8, which have one or more regions cut out from the interior, as opposed to discrete points being cut out. For such regions, the “outer” boundary and the “inner” boundaries are traversed so that \(R\) is always on the left side. Figure 4.3.4 Multiply connected regions The intuitive idea for why Green’s Theorem holds for multiply connected regions is shown in Figure 4.3.4 above. The idea is to cut “slits” between the boundaries of a multiply connected region \(R\) so that \(R\) is divided into subregions which do not have any “holes”. For example, in Figure 4.3.4(a) the region \(R\) is the union of the regions \(R_1 \text{ and }R_2\) , which are divided by the slits indicated by the dashed lines. Those slits are part of the boundary of both \(R_1 \text{ and }R_2\) , and we traverse them in the manner indicated by the arrows. Notice that along each slit the boundary of \(R_1\) is traversed in the opposite direction as that of \(R_2\) , which means that the line integrals of \(\textbf{f}\) along those slits cancel each other out.
Since \(R_1 \text{ and }R_2\) do not have holes in them, then Green’s Theorem holds in each subregion, so that \[\nonumber \oint_{bdy\,of\,R_1}\textbf{f} \cdot d\textbf{r} = \iint\limits_{R_1}\left (\dfrac{ ∂Q }{∂x} - \dfrac{∂P }{∂y} \right )\,dA \text{ and }\oint_{bdy\,of\,R_2}\textbf{f}\cdot d\textbf{r} = \iint\limits_{R_2} \left ( \dfrac{∂Q }{∂x} - \dfrac{∂P}{ ∂y} \right )\,dA.\] But since the line integrals along the slits cancel out, we have \[\nonumber \oint_{C_1 \cup C_2} \textbf{f}\cdot d\textbf{r} = \oint_{bdy\,of\,R_1} \textbf{f} \cdot d\textbf{r} +\oint_{bdy\,of\,R_2}\textbf{f}\cdot d\textbf{r},\] and so \[\nonumber \oint_{C_1 \cup C_2} \textbf{f}\cdot d\textbf{r} = \iint\limits_{R_1} \left ( \dfrac{∂Q}{ ∂x} − \dfrac{∂P}{ ∂y} \right ) \,dA + \iint\limits_{R_2} \left ( \dfrac{∂Q}{ ∂x} − \dfrac{∂P}{ ∂y} \right ) \,dA = \iint\limits_R \left ( \dfrac{∂Q}{ ∂x} - \dfrac{∂P }{∂y} \right ) \,dA,\] which shows that Green’s Theorem holds in the region \(R\). A similar argument shows that the theorem holds in the region with two holes shown in Figure 4.3.4(b). We know from Corollary 4.6 that when a smooth vector field \(\textbf{f}(x, y) = P(x, y)\textbf{i}+Q(x, y)\textbf{j}\) on a region \(R\) (whose boundary is a piecewise smooth, simple closed curve \(C\)) has a potential in \(R\), then \(\oint_C \textbf{f}\cdot d\textbf{r} = 0\). And if the potential \(F(x, y)\) is smooth in \(R\), then \(\dfrac{∂F}{ ∂x} = P \text{ and }\dfrac{∂F}{ ∂y} = Q\), and so we know that \[\nonumber \dfrac{∂^2F }{∂y∂x} = \dfrac{∂^2F}{ ∂x∂y} \Rightarrow \dfrac{∂P}{ ∂y} = \dfrac{∂Q }{∂x}\text{ in }R\] Conversely, if \(\dfrac{∂P}{ ∂y} = \dfrac{∂Q}{ ∂x}\) in \(R\) then \[\nonumber \oint_C \textbf{f} \cdot d\textbf{r} = \iint\limits_R \left ( \dfrac{∂Q }{∂x}-\dfrac{∂P }{∂y} \right ) \,dA = \iint\limits_R 0\,dA = 0 \] For a simply connected region \(R\) (i.e.
a region with no holes), the following can be shown: The following statements are equivalent for a simply connected region \(R\) in \(\mathbb{R}^2\): (a) \(\textbf{f}(x, y) = P(x, y)\textbf{i}+Q(x, y)\textbf{j} \) has a smooth potential \(F(x, y)\) in \(R\); (b) \(\int_C \textbf{f}\cdot d\textbf{r}\) is independent of the path for any curve \(C\) in \(R\); (c) \(\oint_C \textbf{f} \cdot d\textbf{r} = 0\) for every simple closed curve \(C\) in \(R\); (d) \(\dfrac{ ∂P}{ ∂y} = \dfrac{∂Q }{∂x} \) in \(R\) (in this case, the differential form \(P\,dx+Q\,dy\) is exact).
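These equivalences can be illustrated with the field of Example 4.7. This sketch (not part of the text) checks the mixed-partials condition with sympy and then recovers the potential \(F\) by integrating \(P\) in \(x\):

```python
# Check dP/dy == dQ/dx and recover a potential for f = (x^2+y^2)i + 2xy j.
from sympy import symbols, diff, integrate, simplify

x, y = symbols('x y')
P = x**2 + y**2
Q = 2 * x * y

# Statement (d): the mixed-partials condition holds on all of R^2,
# which is simply connected, so a potential exists.
assert simplify(diff(P, y) - diff(Q, x)) == 0

# Construct F with dF/dx = P; the "constant" of integration could depend on y.
F = integrate(P, x)                   # x**3/3 + x*y**2
assert simplify(diff(F, y) - Q) == 0  # here no extra y-term is needed
print(F)
```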
The same procedure as above applied to the following multiple integral leads to a second-order quasilinear partial differential equation. Set $$E(v)=\int_\Omega\ F(x,v,\nabla v)\ dx,$$ where \(\Omega\subset\mathbb{R}^n\) is a domain, \(x=(x_1,\ldots,x_n)\), \(v=v(x):\ \Omega\mapsto\mathbb{R}^1\), and \(\nabla v=(v_{x_1},\ldots,v_{x_n})\). Assume that the function \(F\) is sufficiently regular in its arguments. For a given function \(h\), defined on \(\partial\Omega\), set $$V=\{v\in C^2(\overline{\Omega}):\ v=h\ \mbox{on}\ \partial\Omega\}$$ and consider the variational problem $$(P)\qquad \min_{v\in V}\ E(v).$$ Euler equation. Let \(u\in V\) be a solution of (P), then $$\sum_{i=1}^n\frac{\partial}{\partial x_i}F_{u_{x_i}}-F_u=0$$ in \(\Omega\). Proof. Exercise. Hint: Extend the above fundamental lemma of the calculus of variations to the case of multiple integrals. The interval \((x_0-\delta,x_0+\delta)\) in the definition of \(\phi\) must be replaced by a ball with center at \(x_0\) and radius \(\delta\). Example 1.2.2.1: Dirichlet integral In two dimensions the Dirichlet integral is given by $$D(v)=\int_\Omega\ \left(v_x^2+v_y^2\right)\ dxdy$$ and the associated Euler equation is the Laplace equation \(\triangle u=0\) in \(\Omega\). Thus, there is a natural relationship between the boundary value problem $$\triangle u=0\ \ \mbox{in}\ \Omega,\ u=h\ \ \mbox{on}\ \ \partial\Omega$$ and the variational problem $$\min_{v\in V}\ D(v).$$ But these problems are not equivalent in general. It can happen that the boundary value problem has a solution but the variational problem has no solution; for an example see Courant and Hilbert [4], Vol. 1, p. 155, where \(h\) is a continuous function and the associated solution \(u\) of the boundary value problem has no finite Dirichlet integral. The problems are equivalent, provided the given boundary value function \(h\) is in the class \(H^{1/2}(\partial\Omega)\), see Lions and Magenes [14].
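As a sketch (not part of the notes), sympy can carry out the Euler-equation computation for the Dirichlet integrand \(F = v_x^2 + v_y^2\); the symbols `vx`, `vy` standing in for the gradient arguments of \(F\) are our own device:

```python
# Derive the Euler equation for the Dirichlet integral with sympy.
from sympy import symbols, Function, diff, simplify

x, y, vx, vy = symbols('x y vx vy')
u = Function('u')(x, y)

F = vx**2 + vy**2   # integrand of the Dirichlet integral; note F_u = 0
F_vx = diff(F, vx)  # = 2*vx
F_vy = diff(F, vy)  # = 2*vy

# Euler equation: d/dx F_{u_x} + d/dy F_{u_y} - F_u = 0, after substituting
# the actual partial derivatives of u for vx and vy.
euler = diff(F_vx.subs(vx, diff(u, x)), x) + diff(F_vy.subs(vy, diff(u, y)), y)

# The result is 2*(u_xx + u_yy), i.e. twice the Laplacian, so the Euler
# equation reduces to the Laplace equation.
assert simplify(euler - 2*diff(u, x, 2) - 2*diff(u, y, 2)) == 0
```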
Example 1.2.2.2: Minimal surface equation The non-parametric minimal surface problem in two dimensions is to find a minimizer \(u=u(x_1,x_2)\) of the problem $$\min_{v\in V}\int_\Omega\ \sqrt{1+v_{x_1}^2+v_{x_2}^2}\ dx,$$ where for a given function \(h\) defined on the boundary of the domain \(\Omega\) $$V=\{v\in C^1(\overline{\Omega}):\ v=h\ \mbox{on}\ \partial\Omega\}.$$ Figure 1.2.2.1: Comparison surface Suppose that the minimizer satisfies the regularity assumption \(u\in C^2(\Omega)\); then \(u\) is a solution of the minimal surface equation (Euler equation) in \(\Omega\) \begin{equation} \label{mse} \frac{\partial}{\partial x_1}\left(\frac{u_{x_1}}{\sqrt{1+|\nabla u|^2}}\right)+ \frac{\partial}{\partial x_2}\left(\frac{u_{x_2}}{\sqrt{1+|\nabla u|^2}}\right)=0. \end{equation} In fact, the additional assumption \(u\in C^2(\Omega)\) is superfluous, since it follows from regularity considerations for quasilinear elliptic equations of second order, see for example Gilbarg and Trudinger [9]. Let \(\Omega=\mathbb{R}^2\). Each linear function is a solution of the minimal surface equation (\ref{mse}). It was shown by Bernstein [2] that there are no other solutions of the minimal surface equation. This is true also in higher dimensions \(n\le7\), see Simons [19]. If \(n\ge8\), then there exist also other solutions, which define cones, see Bombieri, De Giorgi and Giusti [3]. The linearized minimal surface equation over \(u\equiv0\) is the Laplace equation \(\triangle u=0\). In \(\mathbb{R}^2\) linear functions are solutions, but so are many other functions, in contrast to the minimal surface equation. This striking difference is caused by the strong nonlinearity of the minimal surface equation. More general minimal surfaces are described by using parametric representations. An example is shown in Figure 1.2.2.2.\(^1\) See [18], p. 62, for example, for rotationally symmetric minimal surfaces.
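Bernstein-type statements are easy to test symbolically. A sketch with sympy confirming that an arbitrary linear function solves the minimal surface equation, plus a quadratic of my own choosing as a non-example:

```python
import sympy as sp

# Check that any linear function u = a*x1 + b*x2 + c satisfies
#   d/dx1( u_x1 / sqrt(1+|grad u|^2) ) + d/dx2( u_x2 / sqrt(1+|grad u|^2) ) = 0.
x1, x2, a, b, c = sp.symbols('x1 x2 a b c', real=True)

def mse_lhs(u):
    """Left-hand side of the minimal surface equation for u(x1, x2)."""
    W = sp.sqrt(1 + u.diff(x1)**2 + u.diff(x2)**2)
    return sp.diff(u.diff(x1)/W, x1) + sp.diff(u.diff(x2)/W, x2)

u_lin = a*x1 + b*x2 + c
assert sp.simplify(mse_lhs(u_lin)) == 0      # linear functions solve it

# Hypothetical non-example: u = x1**2 + x2**2 does NOT solve it.
u_quad = x1**2 + x2**2
assert sp.simplify(mse_lhs(u_quad)) != 0
```
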
Figure 1.2.2.2: Rotationally symmetric minimal surface \(^1\)An experiment from Beutelspacher's Mathematikum, Wissenschaftsjahr 2008, Leipzig. Neumann type boundary value problems Set \(V=C^1(\overline{\Omega})\) and $$E(v)=\int_\Omega\ F(x,v,\nabla v)\ dx-\int_{\partial\Omega}\ g(x,v)\ ds,$$ where \(F\) and \(g\) are given sufficiently regular functions and \(\Omega\subset\mathbb{R}^n\) is a bounded and sufficiently regular domain. Assume \(u\) is a minimizer of \(E(v)\) in \(V\), that is $$u\in V:\ \ E(u)\le E(v)\ \ \mbox{for all}\ v\in V,$$ then \begin{eqnarray*} \int_\Omega\ \big(\sum_{i=1}^nF_{u_{x_i}}(x,u,\nabla u)\phi_{x_i}&+& F_u(x,u,\nabla u)\phi\big)\ dx\\ &-& \int_{\partial\Omega}\ g_u(x,u)\phi\ ds =0 \end{eqnarray*} for all \(\phi\in C^1(\overline{\Omega})\). Assume additionally \(u\in C^2(\Omega)\); then \(u\) is a solution of the Neumann type boundary value problem \begin{eqnarray*} \sum_{i=1}^n\frac{\partial}{\partial x_i}F_{u_{x_i}}-F_u&=&0\ \ \mbox{in}\ \Omega\\ \sum_{i=1}^nF_{u_{x_i}}\nu_i-g_u&=&0\ \ \mbox{on}\ \partial\Omega, \end{eqnarray*} where \(\nu=(\nu_1,\ldots,\nu_n)\) is the exterior unit normal at the boundary \(\partial\Omega\). This follows after integration by parts from the basic lemma of the calculus of variations. Example 1.2.2.3: Laplace equation Set $$E(v)=\frac{1}{2}\int_\Omega\ |\nabla v|^2\ dx-\int_{\partial\Omega}\ h(x)v\ ds;$$ then the associated boundary value problem is \begin{eqnarray*} {\triangle} u&=&0\ \ \mbox{in}\ \Omega\\ \frac{\partial u}{\partial\nu}&=&h\ \ \mbox{on}\ \partial\Omega. \end{eqnarray*} Example 1.2.2.4: Capillary equation Let \(\Omega\subset\mathbb{R}^2\) and set $$E(v)=\int_\Omega\ \sqrt{1+|\nabla v|^2}\ dx+\frac{\kappa}{2}\int_\Omega\ v^2\ dx -\cos\gamma\int_{\partial\Omega}\ v\ ds.$$ Here \(\kappa\) is a positive constant (the capillarity constant) and \(\gamma\) is the (constant) boundary contact angle, i.
e., the angle between the container wall and the capillary surface, defined by \(v=v(x_1,x_2)\), at the boundary. Then the related boundary value problem is \begin{eqnarray*} \text{div}\ (Tu)&=&\kappa u\ \ \mbox{in}\ \Omega\\ \nu\cdot Tu&=&\cos\gamma \ \mbox{on}\ \partial\Omega, \end{eqnarray*} where we use the abbreviation $$Tu=\frac{\nabla u}{\sqrt{1+|\nabla u|^2}};$$ div \((Tu)\) is the left hand side of the minimal surface equation (\ref{mse}), and it is twice the mean curvature of the surface defined by \(z=u(x_1,x_2)\), see an exercise. The above problem describes the ascent of a liquid, water for example, in a vertical cylinder with cross section \(\Omega\). Assume gravity is directed downwards, in the direction of the negative \(x_3\)-axis. Figure 1.2.2.3 shows that liquid can rise along a vertical wedge, which is a consequence of the strong non-linearity of the underlying equations, see Finn [7]. This photo was taken from [15]. Figure 1.2.2.3: Ascent of liquid in a wedge
User:Pranav Rathi/Notebook/OT/2010/12/10/Olympus Water Immersion Specs

Water immersion objective details

We are using an Olympus UPLANSAPO (UIS 2) water immersion IR objective for optical tweezers (DNA overstretching and unzipping experiments). The detailed specifications of the objective can be found at the link: [1] Other specifications are as follows:

Mag: 60X
Wavelength: 1064 nm
NA: 1.2
Medium: water
Max ray angle: 64.5 degrees
f# of eyepiece: 26.5
Effective FL in water: 1.5 to 1.6 mm (distance between the focal spot and the exit aperture surface)
Entrance aperture diameter: 8.5 mm
Exit aperture diameter: 6.6 mm
Working distance: 0.28 mm
Cover glass correction (spherical aberration control): 0.13 to 0.21 (we use 0.15)

Resolution and achievable spot size

The resolution and the spotsize (beam waist) presented here are the theoretical limits; we cannot achieve better than this. Resolution and spotsize are diffraction limited, and to reach these limits our optics has to be perfect: no aberrations or other artifacts. Since our optics is neither perfect nor perfectly clean, we can hardly reach these limits in real life; the real resolution and spotsize are definitely worse than the numbers presented here. A good way to do a quick estimation of the resolution (diameter of the Airy disk) is that it is about 1/3 of a wavelength in the medium: with λ = 0.580 μm, λ/(3n) = 145 nm. Since we do all our experiments in water, we have to take the index of water into account (n = 1.33). I am ignoring the NA of the condenser in the calculations.

Wavelength of the visible light: λ_v = 0.590 μm.
Wavelength of the IR: λ_IR = 1.064 μm.
Diameter of the incident beam at the exit pupil: D = 2ω'_o = 6500 μm (ω'_o is the incident beam waist).
Focal length of the objective: f = 1500 μm.
Angular resolution inside water: [math]\mathrm{\theta} = \sin^{-1}\frac{1.22\lambda_v}{nD}= 8.1e^{-5}rad[/math] Spatial resolution in water: [math]\mathrm{\Delta l} = \frac{1.22f\lambda_v}{nD}= 122nm[/math] Since we are not too sure of the focal length of the objective, I derived the resolution formula in terms of the numerical aperture NA (the math can be seen through this link [2]). Resolution in terms of NA: [math]\mathrm{\Delta l} = \frac{1.22\lambda_v}{2n}\sqrt{(\frac{n}{NA})^{2}-1}= 127nm[/math] Now the minimum spotsize (beam waist ω_o) can be calculated using the same formula with Δl = 2ω_o, but this time for the infrared wavelength. Minimum beam waist: [math]\mathrm{\omega_o} = \frac{1.22f\lambda_{IR}}{2nD}= 112nm[/math] and the beam diameter 224 nm. In terms of NA: [math]\mathrm{\omega_o} = \frac{1.22\lambda_{IR}}{4n}\sqrt{(\frac{n}{NA})^{2}-1}= 116nm[/math] and the beam diameter 232 nm. I also used the Gaussian paraxial approach to calculate the spot size, and the results are not much different, which suggests that either approach is right. With the Gaussian approach: [math]\mathrm{\omega_o} = \frac{\lambda_{IR}}{\pi n}\sqrt{(\frac{n}{NA})^{2}-1}= 121nm[/math] and the beam diameter 242 nm. A LabVIEW code to calculate the parameters is given in the link [3]. But the zeroth-order Gaussian (paraxial) approximation is not correct for a high-NA objective lens (highly convergent beams). All the above approaches are based on the paraxial approximation (diffraction angle less than 30°). For a highly convergent beam like the one here, this approximation is no longer valid, which is why the spotsize calculated this way comes out markedly too small. A geometrical view of the problem is presented in the link: [4] For a better approximation one has to use the spotsize equation given by electromagnetic field theory, with higher-order Gaussian corrections [5][6][7][8].
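For convenience, the figures above can be recomputed from the listed specs. A rough sketch (lengths in micrometers; the small deviations from the quoted 8.1e-5 rad and 122 nm come from rounding and from whether 0.58 or 0.59 μm is taken for the visible wavelength):

```python
import math

# Recompute the resolution/spot-size figures from the listed specs.
n = 1.33          # refractive index of water
NA = 1.2          # numerical aperture
D = 6500.0        # beam diameter at the exit pupil, um
f = 1500.0        # nominal focal length, um
lam_v = 0.59      # visible wavelength, um
lam_ir = 1.064    # trapping (IR) wavelength, um

theta = math.asin(1.22 * lam_v / (n * D))                      # angular resolution, rad
dl_f = 1.22 * f * lam_v / (n * D)                              # spatial resolution via f
dl_na = 1.22 * lam_v / (2 * n) * math.sqrt((n / NA)**2 - 1)    # via NA
w0_f = 1.22 * f * lam_ir / (2 * n * D)                         # IR beam waist via f
w0_na = 1.22 * lam_ir / (4 * n) * math.sqrt((n / NA)**2 - 1)   # via NA
w0_gauss = lam_ir / (math.pi * n) * math.sqrt((n / NA)**2 - 1) # paraxial Gaussian

print(f"theta ~ {theta:.2e} rad")                      # ~8e-5 rad
print(f"dl ~ {dl_f*1e3:.0f} / {dl_na*1e3:.0f} nm")     # ~125 / ~129 nm
print(f"w0 ~ {w0_f*1e3:.0f} / {w0_na*1e3:.0f} / {w0_gauss*1e3:.0f} nm")  # ~113/117/122 nm
```
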
As we include higher-order corrections we get better and better theoretical results, but still in the range of the spotsizes calculated above. The best theoretical value for the spotsize needs to be multiplied by π (121 × π = 380 nm; this value is much closer to the experimental value, so I am taking it as the beam waist inside water for my optical trap). It gives a Rayleigh range of 567 nm. This figure is used to calculate the PSF (point spread function) for the optical trap, which will be used in calibration for the sensitivity. Also a quick way to estimate the spotsize: ω_o = λ_0/(2n) = 400 nm (if we also divide it by π we get 127 nm). The results for the beam waist can be experimentally verified by directly measuring the spot size, but the process is rather cumbersome and hardly interesting [9][10][11]. The results given here are the best in terms of the theoretical limits of aberration-free optics. Experimentally we suffer on a number of fronts. One of them is the experimental setup itself, because of aberrations introduced by the index mismatch among the water, oil, and glass interfaces, which also introduces multiple reflections. We should also not forget that the focal plane of the objective is not infinitely thin in the z-direction, so out-of-focus rays degrade the overall image, reducing the resolution and enlarging the spotsize.
Abstract A formula is given for the special multiple zeta values occurring in the title as rational linear combinations of products $\zeta(m)\pi^{2n}$ with $m$ odd. The existence of such a formula had been proved using motivic arguments by Francis Brown, but the explicit formula (more precisely, certain 2-adic properties of its coefficients) was needed for his proof in [1] of the conjecture that all periods of mixed Tate motives over $\mathbb{Z}$ are $\mathbb{Q}[(2\pi i)^{\pm1}]$-linear combinations of multiple zeta values. The formula is proved indirectly, by computing the generating functions of both sides in closed form (one as the product of a sine function and a ${}_3F_2$-hypergeometric function, and one as a sum of 14 products of sine functions and digamma functions) and then showing that both are entire functions of exponential growth and that they agree at sufficiently many points to force their equality. We also show that the space spanned by the multiple zeta values in question coincides with the space of double zeta values of odd weight, and we find a relation between this space and the space of cusp forms on the full modular group.
[1] F. Brown, "Mixed Tate motives over $\mathbb{Z}$," Ann. of Math., vol. 175, 2012, pp. 949-976.
[2] F. Brown, "On the decomposition of motivic multiple zeta values," arXiv:1102.1310v2.
[3] P. Deligne and A. B. Goncharov, "Groupes fondamentaux motiviques de Tate mixte," Ann. Sci. École Norm. Sup., vol. 38, iss. 1, pp. 1-56, 2005.
[4] L. Euler, "Meditationes circa singulare serierum genus," Novi Comm. Acad. Sci. Petropol., vol. 20, pp. 140-186, 1776; Opera Omnia, Ser. I, vol. 15, B.G. Teubner, Berlin (1927), pp. 217-267.
[5] H. Gangl, M. Kaneko, and D. Zagier, "Double zeta values and modular forms," in Automorphic Forms and Zeta Functions, World Sci. Publ., Hackensack, NJ, 2006, pp. 71-106.
[6] A. B. Goncharov, "Galois symmetries of fundamental groupoids and noncommutative geometry," Duke Math. J., vol. 128, iss. 2, pp. 209-284, 2005.
[7] M. E. Hoffman, "The algebra of multiple harmonic series," J. Algebra, vol. 194, iss. 2, pp. 477-495, 1997.
[8] K. Ihara, J. Kajikawa, Y. Ohno, and J. Okuda, "Multiple zeta values vs. multiple zeta-star values," J. Algebra, vol. 332, pp. 187-208, 2011.
[9] Y. Ohno and D. Zagier, "Multiple zeta values of fixed weight, depth, and height," Indag. Math., vol. 12, iss. 4, pp. 483-487, 2001.
[10] T. Terasoma, "Mixed Tate motives and multiple zeta values," Invent. Math., vol. 149, iss. 2, pp. 339-369, 2002.
sidagar wrote: Hi Bunuel, even if M is -2 the remainder will be 1 when divided by 3. 10^m = 10^-2 = 1/100 = 1/100/3 = 1/300 gives remainder 1. Tell me where I am going wrong. As per me the answer must be A. The question asks whether (10^M + N)/3 is an integer. If M = -2 and N = 5, (10^M + N)/3 = 167/100, which is not an integer. Below is another solution which might help: If \(M\) and \(N\) are integers, is \(\frac{10^M + N}{3}\) an integer? Basically the question asks whether \(10^m+n\) is divisible by 3. Now, in order for \(10^m+n\) to be divisible by 3: A. it must be an integer, and B. the sum of its digits must be a multiple of 3. (1) N = 5 --> if \(m<0\) (-1, -2, ...) then \(10^m+n\) won't be an integer at all (for example if \(m=-1\) --> \(10^m+n=\frac{1}{10}+5=\frac{51}{10}\neq{integer}\)), thus won't be divisible by 3, but if \(m\geq{0}\) (0, 1, 2, ...) then \(10^m+n\) will be an integer and also the sum of its digits will be divisible by 3 (for example for \(m=1\) --> \(10^m+n=10+5=15\) --> 15 is divisible by 3). Not sufficient. (2) MN is even --> clearly insufficient, as again \(m\) can be -2 and \(n\) any integer and the answer to the question will be NO, or \(m\) can be 0 and \(n\) can be 2 and the answer to the question will be YES. Not sufficient. (1)+(2) From \(mn=even\) and \(n=5\) it's still possible for \(m\) to be a negative even integer (-2, -4, ...), so \(10^m+n\) may or may not be divisible by 3. Not sufficient. Answer: E.
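The case analysis can also be brute-forced. A small sketch (the helper function is my own, not from the thread) that scans a window of integer exponents and confirms answer E:

```python
from fractions import Fraction

# For integers M, N: is (10**M + N)/3 an integer?
# Exact rational arithmetic avoids floating-point issues for negative M.
def is_int_over_3(m, n):
    val = (Fraction(10) ** m + n) / 3
    return val.denominator == 1

# Statement (1): N = 5. Both answers occur over a small window of M,
# so (1) is not sufficient.
answers1 = {is_int_over_3(m, 5) for m in range(-3, 4)}
assert answers1 == {True, False}

# Statements (1)+(2): N = 5 and M*N even forces M even, but M = -2 still
# gives a non-integer while M = 2 gives an integer: not sufficient -> E.
assert not is_int_over_3(-2, 5)
assert is_int_over_3(2, 5)
```
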
Let us prove that $$\sigma(\{w\}/ w\neq w_0) = \{A\subset \Omega, (w_0\in A \implies A^c \text{ is countable}) \text{ and } (w_0\notin A \implies A \text{ is countable})\}$$ Let $\mathcal A = \{A\subset \Omega, (w_0\in A \implies A^c \text{ is countable}) \text{ and } (w_0\notin A \implies A \text{ is countable})\}$ and $\mathcal C=\sigma(\{w\}/ w\neq w_0)$. Consider $A\in \mathcal A$. If $w_0\in A$, then $A^c=\bigcup_{w\in A^c} \{w\}$ is a countable union of elements of $\mathcal C$ (because $w_0\notin A^c$). Since $\mathcal C$ is a $\sigma$-algebra, this implies $A^c\in \mathcal C$, hence $A\in \mathcal C$. If $w_0\notin A$, then $A=\bigcup_{w\in A} \{w\}$ is a countable union of elements of $\mathcal C$ (because $w_0\notin A$). Hence $A\in \mathcal C$. This implies $\mathcal A \subset \mathcal C$. To prove $\mathcal C \subset \mathcal A$, it suffices to show that $\mathcal A$ is a $\sigma$-algebra containing all the $\{w\}$ for $w\neq w_0$.
$\bullet$ If $w\neq w_0$, $\{w\}$ is countable, hence $\{w\}\in \mathcal A$.
$\bullet$ $\mathcal A$ is stable under complement. Consider $A\in \mathcal A$. If $w_0\in A$, then $A^c$ is countable and $w_0\notin A^c$, hence $A^c\in \mathcal A$. If $w_0\notin A$, then $w_0\in A^c$ and $(A^c)^c=A$ is countable, hence $A^c\in \mathcal A$.
$\bullet$ $\mathcal A$ is stable under countable union. Consider $(A_i)\in \mathcal A^{\mathbb N}$. If no $A_i$ contains $w_0$, $w_0\notin \cup_i A_i$ and all the $A_i$ are countable, so $\cup_i A_i$ is countable. Hence $\cup_i A_i\in \mathcal A$. If WLOG $w_0\in A_1$, $w_0\in \cup_i A_i$. Note that $A_1^c$ is countable and $(\cup_i A_i)^c = \cap_i A_i^c\subset A_1^c$ is countable. Hence $\cup_i A_i\in \mathcal A$.
The rest of the exercise is simple. Let $A\in \mathcal A$ be a $\delta_{w_0}$-null set. Then $w_0\notin A$. Given the previous characterization, $A$ is countable. If $B\subset A$, then $w_0\notin B$ and $B$ is countable, hence $B\in \mathcal A$.
So all subsets of $\delta_{w_0}$-null sets are in $\mathcal A$, so the measure space is complete.
I am studying the property of the Schrödinger equation that it automatically preserves the normalization of the wave function. We know: $$\frac{d}{dt}\int_{-\infty}^\infty |\Psi(x,t)|^2\,dx = \int_{-\infty}^\infty \frac{\partial}{\partial t}|\Psi(x,t)|^2\,dx$$ By the product rule: $$\frac{\partial}{\partial t}|\Psi|^2 = \Psi^*\frac{\partial\Psi}{\partial t} + \Psi\frac{\partial\Psi^*}{\partial t}$$ Now I want to determine the value of $\frac{\partial\Psi}{\partial t}$ and $\frac{\partial\Psi^*}{\partial t}$, and to do so we just have to use the Schrödinger equation: $$i\hbar\frac{\partial\Psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2\Psi}{\partial x^2} + V\Psi$$ My book (Introduction to QM by David J. Griffiths) states: $$\frac{\partial\Psi}{\partial t} = \frac{i\hbar}{2m}\frac{\partial^2\Psi}{\partial x^2} - \frac{i}{\hbar}V\Psi$$ But I got: $$\frac{\partial\Psi}{\partial t} = -\frac{i\hbar}{2m}\frac{\partial^2\Psi}{\partial x^2} + \frac{i}{\hbar}V\Psi$$ The same happened to me with the complex conjugate. Is this a mistake from the book or mine?
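One way to settle the sign question is to let a computer algebra system do the division by \(i\hbar\); since \(1/i=-i\), the signs flip, matching the book. A sketch with sympy:

```python
import sympy as sp

# Divide the Schrödinger equation i*hbar*Psi_t = -(hbar^2/2m)*Psi_xx + V*Psi
# by i*hbar and compare with Griffiths' form. Note 1/i = -i, so the signs flip.
x, t, m, hbar = sp.symbols('x t m hbar', positive=True)
V = sp.Function('V')(x)
Psi = sp.Function('Psi')(x, t)

rhs = (-hbar**2/(2*m)) * Psi.diff(x, 2) + V * Psi   # i*hbar*Psi_t equals this
dPsi_dt = rhs / (sp.I * hbar)                        # solve for Psi_t

griffiths = (sp.I*hbar/(2*m)) * Psi.diff(x, 2) - (sp.I/hbar) * V * Psi
assert sp.simplify(dPsi_dt - griffiths) == 0         # the book's signs check out
```
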
This is one of the accompanying workshops of the 50 Years Faculty of Mathematics Anniversary Conference. The conference takes place at Bielefeld University. You can find detailed directions here. All talks will be held in T2-213. If you are planning your trip to Germany, you might also be interested in the conference Buildings 2019, which will take place in Magdeburg in the following week. This conference is supported by the DFG priority programme 2026 Geometry at Infinity.

Tuesday, 24.09
10:00 - 11:00 Rigidity of the Torelli subgroup in Out(F_n) (Camille Horbez)
11:30 - 12:30 Maximal direct products of free groups in Out(F_N) (Ric Wade)
14:00 - 15:00 Typical Trees: An Out(F_r) Excursion (Catherine Pfaff)
15:45 - 16:45 Problem session

Wednesday, 25.09
10:00 - 11:00 The geometry of hyperbolic free-by-cyclic groups (Yael Algom-Kfir)
11:30 - 12:30 The minimally displaced set of an irreducible automorphism is locally finite (Dionysios Syrigos)
14:00 - 15:00 Homotopy type of the free factor complex (Radhika Gupta)
15:45 - 16:45 Surface subgroups of Out(F_n) via combination of Veech subgroups (Funda Gultepe)
19:00 - 00:00 Conference Dinner

Thursday, 26.09
10:00 - 11:00 Aut(F_n) has property (T) (Marek Kaluba)
11:30 - 12:30 Graph products, quasi-median graphs, and automorphisms (Anthony Genevois)

For $\phi$ an outer automorphism of $F_n$, the corresponding free-by-cyclic group is the group with the following presentation: $G_\phi = \langle x_1, \dots, x_n, t \mid t x_i t^{-1} = \phi(x_i) \rangle$. There is a satisfying correspondence between properties of $\phi$ and properties of the group $G_\phi$. For example, $\phi$ is atoroidal if and only if $G_\phi$ is hyperbolic (Brinkmann). In the hyperbolic case, the Gromov boundary of $G_\phi$ contains a cut point if and only if some power of $\phi$ is reducible (Bowditch and Kapovich-Kleiner).
We restrict to the case that $G_\phi$ is hyperbolic and $\partial G_\phi$ contains no cut points; then $\partial G_\phi$ is homeomorphic to the Menger curve (Kapovich-Kleiner). Thus, from the point of view of the topology of their boundary, $G_\phi, G_\psi$ are indistinguishable for $\phi \in Out(F_n)$ and $\psi \in Out(F_m)$. However, Paulin showed that the conformal structure of a hyperbolic group is a complete quasi-isometric invariant. Is it possible that all groups of this form are quasi-isometric? In this talk we shall discuss some geometric aspects of $G_\phi$ that affect the conformal structure of $\partial G_\phi$ and, ultimately, its dimension. This is joint work with Arnaud Hilion, Emily Stark and Mladen Bestvina. This talk is dedicated to graph products of groups, a common generalisation of free products and right-angled Artin groups. I will explain how quasi-median graphs appear as natural geometric models for graph products, and how they can be used in order to study their automorphisms. Using an amalgamation of Veech subgroups of the mapping class group à la Leininger-Reid, we construct new examples of surface subgroups of \(\mathrm{Out}(F_n)\) whose elements are either conjugate to elements in the Veech group, or, except for one accidental parabolic, fully irreducible. I will talk about this construction, and on the way describe Veech subgroups of \(\mathrm{Out}(F_n)\). This is part of a joint work with Binbin Xu. The mapping class group of a surface acts on the curve complex, which is known to be homotopy equivalent to a wedge of spheres. In this talk, I will define the 'free factor complex', an analog of the curve complex, on which the group of outer automorphisms of a free group acts by isometries. This complex has many similarities with the curve complex. I will present the result (joint with Benjamin Brück) that the free factor complex is also homotopy equivalent to a wedge of spheres.
We will also look at higher connectivity results for the simplicial boundary of Outer space. I will present joint work with Sebastian Hensel and Ric Wade. We prove that when n is at least 4, every injective morphism from \(IA_n\) (outer automorphisms of a free group \(F_n\) acting trivially on homology) to \(Out(F_n)\) differs from the inclusion by a conjugation. This applies more generally to a wide collection of subgroups of \(Out(F_n)\) that we call twist-rich, which include all terms in the Andreadakis-Johnson filtration and all subgroups of \(Out(F_n)\) that contain a power of every Dehn twist. This extends previous work on commensurations of \(Out(F_n)\) and its subgroups. I will sketch the recent proof (arXiv:1812.03456) that the automorphism group of the free group on \(n \geq 6\) generators has Kazhdan's property (T). The proof proceeds by estimating the spectral gap of \(\Delta_n\), the group Laplace operator, via a sum-of-squares decomposition in the real group algebra. We use the action of the "Weyl" group to simplify the combinatorics of computing \(\Delta_n^2\) and reduce the problem of finding sum-of-squares decompositions (for all \(n \geq 6\)) to a single computation for \(n=5\). The final computation is just small enough to be performed using computer software. As a side result we produce asymptotically optimal lower estimates on Kazhdan constants for both \(\operatorname{SAut}(F_n)\) and \(\operatorname{SL}_n(\mathbb{Z})\); in the latter case these considerably narrow the gap between the upper and lower bounds. This is joint work with Dawid Kielak and Piotr W. Nowak. Random walks are not new to geometric group theory (see, for example, work of Furstenberg, Kaimanovich, Masur). However, following independent proofs by Maher and Rivin that pseudo-Anosovs are generic within mapping class groups, and then new techniques developed by Maher-Tiozzo, Sisto, and others, the field has seen in the past decade a veritable explosion of results.
In a two-paper series, we answer in fine detail a question posed by Handel-Mosher about invariants of generic outer automorphisms of free groups, and then a question posed by Bestvina as to properties of R-trees of full hitting measure in the boundary of Culler-Vogtmann Outer space. This is joint work with Ilya Kapovich, Joseph Maher, and Samuel J. Taylor. Let \(G\) be a group which splits as \(G = F_n * G_1 *...*G_k\), where every \(G_i\) is freely indecomposable and not isomorphic to the group of integers. Guirardel and Levitt generalised the Culler-Vogtmann Outer space of a free group by introducing an Outer space on which \(Out(G)\) acts. Francaviglia and Martino introduced the Lipschitz metric for the Culler-Vogtmann space and later for the general Outer space. For any automorphism in \(Out(G)\), we can define the displacement function with respect to the Lipschitz metric and the corresponding level sets. Recently, the same authors proved that for every L, the L-level set (the set of points of the Outer space which are displaced by at most L) is connected, whenever it is non-empty. In the special case of an irreducible automorphism, they proved that the Min-set (the set of points which are minimally displaced) is always non-empty and coincides with the set of points that admit train track representatives for the automorphism. In a joint paper with Francaviglia and Martino, we prove that the Min-set of a (hyperbolic) irreducible automorphism is (uniformly) locally finite, even if the relative Outer space is locally infinite. Compared to maximal-rank free abelian groups, maximal direct products of free groups in \(Out(F_N)\) are remarkably rigid. I will talk about some joint work with Martin Bridson, where we show that every subgroup of \(Out(F_N)\) isomorphic to a direct product of 2N-4 free groups fixes a splitting called an (N-2)-rose. This rose is canonical, so it also gives us information about the centralizers and normalizers of such subgroups.
As an application, we show that every endomorphism of \(Out(F_N)\) sends Nielsen automorphisms to powers of Nielsen automorphisms.
The climate of Earth has been roughed up quite a bit in the last century. But it has no idea what it's got coming with this portal of yours. Earth turns into Venus. Update: As R.M. pointed out, the amount of energy is not 'maybe a long term thing', it's the Major Issue. This has been fixed now. How much water are we talking? Let's say your portal is 10km below sea level. Dropping from that pressure to pressure at sea level gives a flow speed of somewhat over 400 meters per second: $\sqrt{10^4m * 10\frac{m}{s^2} * 2}$ (water is incompressible, so we can just use potential energy). This is well over the speed of sound in air, or comparable to the speed of a typical handgun bullet. This 400 m/s flow is through the entire portal, $\pi * 5000^2 = 7.9 * 10^7 m^2$, for a total of about $30*10^9 m^3$ of water per second; that's a cube of water about 3 km or 2 miles on a side per second. Comparing that to other rivers: the discharge of your portal is about 100 000 cubic kilometers per hour, or about three times the amount of water discharged by all of Earth's rivers in a year. Comparing it to a lahar, a very destructive mud flow: these can be 100 meters deep and run at several tens of meters per second. If the stream from your portal were to turn into a lahar 50 meters deep, it would be 2000 km wide. If we were to slow it down to a mere 40 meters per second (as fast as a car going over the speed limit), that would require it to be again 10 times as wide, so 20 000 km. Sailing around the entire African continent is only slightly more than that. So the entire African coast would turn into an extremely destructive mud flow of almost 50 m deep. Given that there are mountains to the south of the Sahara, the mud flow will probably be much deeper and mostly to the North. At this point most of my assumptions are starting to break down. I assumed the effect on the ocean surface would not be too great. It will likely be a giant maelstrom tens or maybe even hundreds of kilometers across.
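The flow numbers can be reproduced in a few lines. A sketch under the stated assumptions (10 km head, g ≈ 10 m/s², circular portal of radius 5 km; the text's "about \(30*10^9\)" comes from rounding the speed down to 400 m/s):

```python
import math

# Back-of-the-envelope flow numbers for the portal.
g = 10.0          # gravitational acceleration, m/s^2 (rounded, as in the text)
depth = 1.0e4     # head of water, m (10 km)
radius = 5.0e3    # portal radius, m

v = math.sqrt(2 * g * depth)        # Torricelli-style outflow speed
area = math.pi * radius**2          # portal area, m^2
Q = v * area                        # volumetric flow, m^3/s

print(f"speed ~ {v:.0f} m/s")                           # 447 m/s
print(f"flow ~ {Q:.1e} m^3/s")                          # about 3.5e10 m^3/s
print(f"cube side ~ {Q**(1/3):.0f} m per second")       # roughly 3.3 km
print(f"discharge ~ {Q*3600/1e9:.0f} km^3 per hour")    # ~1.3e5 km^3/h
```
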
This means the amount of water flowing through it is going to be somewhat less. Let's cut it by a factor of 10, so $3*10^9 \frac{m^3}{s}$. The volume of Earth's oceans is about 1.3 billion cubic kilometers. There are about 30 million seconds in a year, so that's about 90 million cubic kilometers per year. So it takes about 15 years for the portal to cycle through the equivalent of all the water. What happens to that water? The specific heat of water is about 4.2 kJ/(kg·K), so it takes about 4 kilojoules to heat a liter of water by 1 degree Celsius. That's about 4 MJ to heat a cubic meter by 1 degree. The potential energy in dropping a cubic meter of water from 10 km up is about 100 MJ, so you're going to heat up your water about 25 degrees by slamming it at high speed into the sand (or, quite quickly, other water). This means that, if the Earth's energy loss from radiation were to stay the same, in about 15 years all the water would have cycled through the portal once and the ocean would, on average, have heated up 25 degrees. Another 45 years and all the water will be boiling. Water has a latent heat of about 2.3 MJ/kg, or 2300 MJ/m³. So it then takes a few centuries for all the water to boil off and turn into vapor. What will happen to all that energy? Normally, the Earth radiates away energy as long-wave, or infrared, radiation. Greenhouse gases like carbon dioxide reflect this radiation back at us. There is a lot of carbon dioxide stored in the oceans, around 60 times that of the pre-industrial atmosphere. However, if water is heated, it can't store as much carbon dioxide. Water of about 30 degrees Celsius can store only about a third of what water of 4 degrees can store. This is actually much more complicated than dividing by a third. So, if all the oceans heat up 25 degrees, the carbon dioxide in the atmosphere is going to increase by a factor of 10 or so. We've managed to increase it by about 50% or so in the last hundred years.
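The cycle time and heating-per-pass estimates can be sketched the same way (assumptions as in the text; 4.2 MJ per cubic meter per kelvin for the heat capacity, which is why the numbers land slightly below the text's rounded 15 years and 25 degrees):

```python
# Energy bookkeeping for the reduced flow.
Q = 3.0e9                 # m^3/s after the maelstrom correction
ocean = 1.3e9 * 1e9       # ocean volume: 1.3e9 km^3 in m^3
year = 3.0e7              # seconds per year (rounded)

cycle_years = ocean / (Q * year)
print(f"full-ocean cycle ~ {cycle_years:.0f} years")    # ~14 years

# Heating per pass: potential energy of a 10 km drop vs. heat capacity.
pe_per_m3 = 1000 * 10 * 1.0e4    # rho*g*h ~ 1e8 J = 100 MJ per m^3
heat_per_deg = 4.2e6             # ~4.2 MJ to warm 1 m^3 of water by 1 K
dT_per_pass = pe_per_m3 / heat_per_deg
print(f"heating per pass ~ {dT_per_pass:.0f} K")        # ~24 K
```
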
To add to this, warmer water evaporates more readily, and water vapor is also a very strong greenhouse gas. This means that the Earth wouldn't cool nearly as fast as it normally would. And at some point, maybe after a few years, maybe after several decades, even if you were to turn off the portal, the increased greenhouse gases and incoming energy from the Sun will cause a runaway greenhouse effect and turn Earth into Venus.

Who dies first? The first creature to die is probably a fish being blasted into an unlucky scorpion at high speed. After that, anything going through the portal will die. Next to go is anything within a few hundred kilometers of the Mariana Trench and anything in Northern Africa. Europe and the rest of Africa will soon (within hours? days? weeks?) follow. The shortest path from the Saharan portal back to the Mariana Trench is through the Himalayas, so my guess is the water will mainly flow through Africa towards the Southern Atlantic, and through Southern Europe, the Eastern part of Africa and the Arabian Peninsula into the Indian Ocean. Everything in its way will die. So the Americas, most of Asia and Australia will likely not flood. Australia is closest to the Mariana Trench, so the weather there will turn weird after a day. It will probably take a few days for the effects of the sudden change in energy distribution to be transported by the jet stream towards the Americas and the remaining parts of Asia, so they've got maybe another week before the freak weather begins (think hurricanes, extreme rainfall, etc.). Some animals and some humans in those parts of the world could possibly survive this for a few years.

Some speculation on what would happen if the water didn't heat up: the remaining flow is still a good 20 times the amount of water the Gulf Stream transports at its peak, so I would wager all ocean currents stop doing what they're doing and start flowing towards the South-West Pacific.
This means no more warm water flowing to the North Atlantic, so that's a new ice age for Europe. The same goes for Japan. Most of North Africa is going to get flooded; I have no clue what that will do to the climate. This water is going to be under 10 degrees Celsius, so it'll cool down equatorial areas by a lot. The giant maelstrom around the Mariana Trench is going to cause a lot of mixing in the Pacific Ocean, so most of that ocean is going to be a lot colder than it was before. All of this cold water near the surface everywhere will mean that the Earth radiates away much less energy and that there is much less energy to drive atmospheric processes. The internal ocean water, however, will heat up on average.
For general discussion about Conway's Game of Life. "Very", "extra", etc. are adverbs that modify "long" (and each other), and can't directly modify "boat". They can only be used directly as adjectives in phrases like "This collision makes the object I want, plus an extra boat" or "That glider eliminated the very boat that was causing problems!" Small Life patterns were originally assigned arbitrary mnemonic names, which were adequately unique and descriptive for small patterns, but that nomenclature gets more and more strained the larger and more complicated the patterns get. It also becomes more and more futile as patterns get larger, as there are exponentially more of them at each given size. In Life's early days, there were unique names for all still-lifes up to 7 bits, about half of the 8-bit ones, and only one 9-bit one. In my pattern collections, I've tried to extrapolate meaningful names for small lists of objects (up to around a hundred objects or so), with mixed results, but after a point, using long chains of adjectives to describe a feature becomes tedious. Contractions like "long^5 boat" or "15-bit boat" or "length-10 snake" make much more sense after a point.
mniemiec wrote: Contractions like "long^5 boat" or "15-bit boat" or "length-10 snake" make much more sense after a point.

I would prefer "long^5 boat" to "very very very very long boat", personally.

EDIT: Also, Catagolue puts this in PATHOLOGICAL:

Code: Select all
x = 4, y = 8, rule = B3/S2-i34q
2bo$bobo2$b3o3$bo$3o!

"A man said to the universe: 'Sir, I exist!' 'However,' replied the universe, 'The fact has not created in me A sense of obligation.'" -Stephen Crane

Well, guess who... yup, you're right.

77topaz wrote: That's a good point. Who decided that the LifeWiki should get rid of the long^5 format names and replace them with the less adaptable adjectival names like "terribly long boat", anyway?

I remember a discussion before that, though. In any case I am not a fan, I think that the exponent notation is far more concise. Can you offhand remember which one is called 'amazingly long boat'? Guess what, it's none of them, but nobody reading this knew that.

she/they // "I'm always on duty, even when I'm off duty." -Cody Kolodziejzyk, Ph.D. Please stop using my full name. Refer to me as dani.

On the contrary, "amazingly long boat" is long^37, if you follow the footnotes for Long on the LifeWiki... which I definitely hope nobody does, since that thread has all the hallmarks of a "naming frenzy". People inevitably seem to get into naming frenzies every so often, and then hopefully come to regret them later.

danny wrote: I remember a discussion before that, though.
danny wrote: In any case I am not a fan, I think that the exponent notation is far more concise. Can you offhand remember which one is called 'amazingly long boat'? Guess what, it's none of them, but nobody reading this knew that.

Me, I was just really happy to be able to go in and delete all the over-long pattern names in the LifeWiki collection that were messing up columnar lists, like Very_very_very_very_very_very_very_long_boat... so I didn't worry too much about whether the replacement names were really a good idea. They were a huge improvement if nothing else. I believe muzik added those very very very long names also, but it was some time before the most recent cleanup project. And just by the way, that 12-bit still life project was a whole heck of a lot of work on muzik's part, and it did definitely succeed in cleaning up a lot of things... though sometimes just by drawing attention to the problems, so Ian07's follow-up work also definitely deserves a good round of applause.

I remember from the very old days, there were a small number of patterns that had qualifier names (e.g. long boat, long snake=python) but there was no real consistent nomenclature beyond that. Other than a few notable still-lifes (like paperclip), there generally wasn't even a standard nomenclature for still-lifes above 8 bits. When I started to systematically categorize pseudo-objects, it was much easier to use symbolic names (rather than empirical ones like 14.123 or 14P1.123), so I needed a way to concisely name the pieces involved. This meant coming up with ad-hoc names for objects up to 12 bits. I knew that the system would not hold up well much beyond that point, but it was never intended to. Also, as history tends to show, whenever anyone comes up with a name (regardless of how inappropriate it might turn out to be), with the lack of any better nomenclature, that name tends to stick.
coughjolsoncoughrunnynosecough

mniemiec wrote: whenever anyone comes up with a name (regardless of how inappropriate it might turn out to be)

Hard agree.

77topaz wrote: I think "long^3 boat" would be the best naming scheme/format for these pages. What do others think?

Another somewhat controversial opinion I have is that everything above a certain number doesn't really deserve a page, but let's go one step at a time.

she/they // "I'm always on duty, even when I'm off duty." -Cody Kolodziejzyk, Ph.D. Please stop using my full name. Refer to me as dani.

Fine by me. Along with this, the LifeWiki needs standard pnames -- lowercase alphanumeric-only names for the RLE and plaintext pattern files. The one good thing about the Arbitrary Adjective naming system was that it avoided the whole '"long<sup>10</sup> boat" / "Long%5E10_boat"' mess and produced decent-looking pnames.

77topaz wrote: I think "long^3 boat" would be the best naming scheme/format for these pages. What do others think?

I think "long3boat" is fine for a pname, though. I think the only Arbitrary Adjectival pname that I'm personally responsible for is "abominably long boat", which I pretty much made up in desperation to get rid of an even more abominable "very very very very..." name.
I've now removed abominablylongboat.cells and abominablylongboat.rle from the server, uploaded long10boat.cells and long10boat.rle instead, and fixed the pname in the article. If other pnames can be patched up to be consistent with this, and if someone can keep a list of all the RLE:arbitraryadjectivelongsomething pages that get moved to RLE:veryNlongsomething, then I can go through at some point and delete all the arbitraryadjectivelongsomething.cells/.rle pattern files from the server, and the auto-upload script will take care of the rest. Maybe put the list, and any further discussion on this topic, on the Tiki Bar or someone's LifeWiki user page?

Moosey Posts:2491 Joined:January 27th, 2019, 5:54 pm Location:A house, or perhaps the OCA board. Contact:

I feel that the arbitrary adjective name should be kept as an alternative name, as in: Long^103 doorjamb (or uselessly long doorjamb) is the long^103 equivalent of the doorjamb.

Minor, but prodigal is lowercase on Catagolue.

I and wildmyron manage the 5S project, which collects all known spaceship speeds in Isotropic Non-totalistic rules. Things to work on:
- Find a (7,1)c/8 ship in a Non-totalistic rule
- Finish a rule with ships with period >= f_e_0(n) (in progress)

https://catagolue.appspot.com/object/xp15_4R4Z4R4/b3s23
https://catagolue.appspot.com/census/b3 ... ?offset=-2

$$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$ http://conwaylife.com/wiki/A_for_all Aidan F.
Pierce

Sometimes when I tried to view not-yet-searched censuses, it says "No one has investigated this, investigate it yourself". But sometimes I just get an empty symmetry list. This post was brought to you by the Element of Magic. Plz correct my grammar mistakes. I'm still studying English. Working on: Nothing. Favorite gun ever:

Code: Select all
#C Favorite Gun. Found by me.
x = 4, y = 6, rule = B2e3i4at/S1c23cijn4a
o2bo$4o3$4o$o2bo!

Posts:3138 Joined:June 19th, 2015, 8:50 pm Location:In the kingdom of Sultan Hamengkubuwono X

Hunting wrote: Sometimes when I tried to view not-yet-searched censuses, it says "No one has investigated this, investigate it yourself". But sometimes I just get an empty symmetry list.

As far as I can tell, the "It appears that no-one has yet investigated this combination of rule and symmetry options.™" only appears if you also specify a symmetry. If you don't specify a symmetry (only a rule), you will get said empty symmetry list.

Airy Clave White It Nay (Check gen 2)

Code: Select all
x = 17, y = 10, rule = B3/S23
b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5bo2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo!

Moosey Posts:2491 Joined:January 27th, 2019, 5:54 pm Location:A house, or perhaps the OCA board. Contact:

Here's an oddity: https://catagolue.appspot.com/object/xs ... jns23-ckqy It seems to use a different lifeviewer theme... the "inverse" theme. EDIT: Holy cow, it's EVERYWHERE on Catagolue! EDIT: Oh.
"A man said to the universe: 'Sir, I exist!' 'However,' replied the universe, 'The fact has not created in me A sense of obligation.'" -Stephen Crane

Hdjensofjfnen wrote: This: https://catagolue.appspot.com/object/xp ... v0rr/b3s23

What's odd about that? It's a perfectly legitimate P2 oscillator.

mniemiec wrote: What's odd about that? It's a perfectly legitimate P2 oscillator.

But it is called ":D".

This post was brought to you by the Element of Magic. Plz correct my grammar mistakes. I'm still studying English. Working on: Nothing. Favorite gun ever:

Code: Select all
#C Favorite Gun. Found by me.
x = 4, y = 6, rule = B2e3i4at/S1c23cijn4a
o2bo$4o3$4o$o2bo!

Figure eight on pentadecathlon does not appear in the large objects section of the statistics page despite having a maximum population of 66. I'm guessing this is because its period is too high for Catagolue to calculate it. Wiki: http://www.conwaylife.com/wiki/User:Ian07 Discord: Ian07#6028

It's there now, perhaps the page hadn't been updated yet?

wildmyron wrote: It's there now, perhaps the page hadn't been updated yet?

I still don't see it. Are you sure you're looking at the "Large objects" section rather than the "Naturally-occurring high-period oscillators" section? Wiki: http://www.conwaylife.com/wiki/User:Ian07 Discord: Ian07#6028

Ian07 wrote: I still don't see it. Are you sure you're looking at the "Large objects" section rather than the "Naturally-occurring high-period oscillators" section?

Don't mind me, can't read properly. Sorry.

I'm confirming this odd bug. Maybe Catagolue just needs some time for Adam P.
Goucher to add xp120 to the list of objects that Catagolue considers "large". (e.g. the count doesn't consider still-life bins below cloverleaf interchange, since that would waste time)

wildmyron wrote: Don't mind me, can't read properly. Sorry.

EDIT: This looks fishy. Very fishy. http://catagolue.appspot.com/census/b345s4567/iC1

EDIT: By the way, the extremely high period of the new xp120 makes it the first object other than linear-growth patterns to break the preview.

"A man said to the universe: 'Sir, I exist!' 'However,' replied the universe, 'The fact has not created in me A sense of obligation.'" -Stephen Crane

I'm fairly certain this explanation is correct - Catagolue only displays population statistics (or, @Hdjen, animated GIFs) for objects of period <100 (or sometimes ≤100 - I think the threshold isn't always consistent across functions). So, the system doesn't know the xp120 belongs in that section.
I think it can be useful to suggest a trick which is more general, in the sense that it can be applied also in situations where it is not that easy to reformat the equation to fit. The equation number is moved down because the split environment produces a single block that has the width of the longest of its lines, thus the surrounding equation environment treats it as a single line that, in this case, is too wide to fit together with the number and hence moves it down. However, we see that the number would fit in the second line, if only this line could be regarded as an equation in itself. Now, from a logical standpoint, the markup in the question is correct, and, generally speaking, is the only one that should be employed: there is one equation, to which the number refers as a whole, and this equation consists of two lines. However, in a case like this, a little trick based on visual markup can solve the problem: specify two equations instead of one, by using an align environment, each, in theory, with its own number; but suppress the number on the first line by means of a \notag command placed at the end of the first “equation”. Here’s the code:

\documentclass[a4paper]{article}
\usepackage[T1]{fontenc}
\usepackage{amsmath}

\begin{document}

\begin{align}
\max \Biggl\{\sum_{(a,b)\in MW} a_ba_{bc}
    &- \sum_{(a,b)\in M} a_ba_{bc}
     - \sum_{a\in A} \sum_{(a,b)\in M} \sum_{(a,b)\in M} a_ba_{bc}
     - \zeta \sum_{(a,b)\in MW} a_{bc}
     \notag \\ % \notag suppresses the equation number
    &- \sum_{(a,b)\in M} a_ba_{bc}
     - \sum_{(a,b)\in M} a_b (a^+_b+a^-_b) \Biggr\}
\end{align}

\end{document}

And here’s the output it yields:
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs. Yeah, here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass rather than take the class. I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like "yeah, the algebraists all place out, so I'm assuming everyone here is an analyst and doesn't care about commutative algebra". Then the year after, another guy taught and made it mostly commutative algebra + a bit of varieties + Čech cohomology at the end from nowhere, and everyone was like "uhhh". Then apparently this year was more of an experiment, in part from requests to make things more geometric. It's got 3 "underground" floors (quotation marks because the place is on a very tall hill, so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake; it's real nice. The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly. And then there's one weird area called the math bunker that's trickier to access: you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building). It's hard to get a feel for which places are good at undergrad math.
Highly ranked places are known for having good researchers, but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad. I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore). In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things, and that seems to become a bonus. One of my professors said it to describe a bunch of REUs; it basically boils down to problems that some of these give their students which nobody really cares about, but which undergrads could work on and get a paper out of. @TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine), and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students. In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies \| x ( t ) - 0 \| < \varepsilon.$$ The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $... "If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions $f$ on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" — this took me way longer than it should have. Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied with a given vector $(x,y)$, and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2. Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$. Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements' worth of variables is taking up more than a single line in LaTeX, and that is really bugging my desire to keep things straight. Hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$. For example, writing $x = ac+bd\delta$ and $y = bc+ad$, we then have $(\alpha \otimes \beta) \otimes \gamma = (xe + yf\delta) + (ye + xf)\sqrt{\delta}$, and then you can argue with the ring properties of $\Bbb{Q}$, thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$. I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems; that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of. One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
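The multiplication rule quoted above is easy to spot-check mechanically. Here is a small sketch (the pair representation and the `mul` helper are my own illustration, not from the chat) that verifies associativity on sample rationals:

```python
from fractions import Fraction as F

# Represent a + b*sqrt(delta) as the pair (a, b) with a, b in Q.
def mul(x, y, delta):
    # (a + b*sqrt(d)) * (c + d'*sqrt(d)) = (ac + bd'*delta) + (bc + ad')*sqrt(d)
    a, b = x
    c, d = y
    return (a * c + b * d * delta, b * c + a * d)

delta = F(2)
alpha, beta, gamma = (F(1), F(3)), (F(-2), F(5)), (F(7, 2), F(-1, 3))

lhs = mul(mul(alpha, beta, delta), gamma, delta)   # (alpha ⊗ beta) ⊗ gamma
rhs = mul(alpha, mul(beta, gamma, delta), delta)   # alpha ⊗ (beta ⊗ gamma)
assert lhs == rhs   # associativity holds on this sample
```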
The axiom of triviality is also used extensively in computer verification languages... take Cantor's diagonalization theorem. It is obvious. (but seriously, the best tactic is overpowered...) Extension is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible, e.g. the surreals are the largest field possible. It says on Wikipedia that any ordered field can be embedded in the surreal number system. Is this true? How is it done, or if it is unknown (or unknowable), what is the proof that an embedding exists for any ordered field? Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's incompleteness theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? "Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example: CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH or that derive CH, thus if your set of axioms contains those, then you can decide the truth value of CH in that system. @Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder how to show that is false by finding a finite sentence and procedure that can produce infinity, but so far I have failed. Put it another way: an equivalent formulation of that (possibly open) problem is: > Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object? If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't.
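The diagonal argument mentioned above can be illustrated concretely on a finite set. This is a hand-rolled sketch (the set `S` and map `f` are made-up example data), not Isabelle/HOL code: for any map f from S into its power set, the diagonal set D = {x : x not in f(x)} is missed by f.

```python
# Cantor's diagonal set: D differs from f(x) at the element x, for every x,
# so D cannot be in the image of f, hence f is not surjective onto P(S).
def diagonal_set(S, f):
    return {x for x in S if x not in f(x)}

S = {0, 1, 2}
f = {0: {0, 1}, 1: set(), 2: {1, 2}}      # an arbitrary map S -> P(S)
D = diagonal_set(S, lambda x: f[x])       # here D = {1}
assert all(D != f[x] for x in S)          # D is missed by f
```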
In fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity, however, is not good enough, as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy-of-infinity book. The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items. The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... Oh great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem. Hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic. Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $$0<\left|x-\frac{p}{q}\right|<\frac{1}{q^n}.$$ Do these still exist if the axiom of infinity is blown up? Hmmm...
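For the knapsack definition just quoted, the standard dynamic-programming solver for the 0/1 variant fits in a few lines (the weights, values and capacity below are made-up illustration data):

```python
# 0/1 knapsack: maximise total value subject to total weight <= capacity.
def knapsack(weights, values, capacity):
    # dp[w] = best value achievable with total weight exactly <= w
    dp = [0] * (capacity + 1)
    for wt, val in zip(weights, values):
        # iterate weights downward so each item is used at most once
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[capacity]

best = knapsack([3, 4, 5], [4, 5, 6], 7)   # optimal: items of weight 3 and 4
```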
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each $M$ form a monotonically increasing sequence, which converges by the ratio test; therefore by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual, thus transcendental numbers can be constructed in a finitist framework. There's this theorem in Spivak's book of Calculus: Theorem 7. Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $$f'... and neither Rolle's theorem nor the mean value theorem needs the axiom of choice. Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure. Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else, I need to finish that book to comment. typo: neither Rolle's theorem nor the mean value theorem needs the axiom of choice nor an infinite set > are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
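The partial sums defined above can be computed exactly with rationals. A quick sketch for b = 10 (where the limit is Liouville's constant 0.110001000...), showing the monotone increase and the rapidly shrinking increments:

```python
import math
from fractions import Fraction

# S(M) = sum_{k=1}^M 1/b^(k!), computed as an exact rational.
def partial_sum(b, M):
    return sum(Fraction(1, b ** math.factorial(k)) for k in range(1, M + 1))

s3 = partial_sum(10, 3)   # = 110001/1000000, i.e. 0.110001 exactly
s4 = partial_sum(10, 4)   # adds the tiny term 1/10^24

# Monotonically increasing, with increments shrinking super-exponentially:
assert s4 > s3
assert s4 - s3 == Fraction(1, 10 ** 24)
```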
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s? @daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open-ended) but not so much for numbers (rigid format). @JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems... well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source, the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty... Consider the following MWE to be previewed in the built-in PDF previewer in Firefox:

\documentclass[handout]{beamer}
\usepackage{pgfpages}
\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]
\begin{document}
\begin{frame}
\[\bigcup_n \sum_n\]
\[\underbrace{aaaaaa}_{bbb}\]
\end{frame}
\end{d...

@Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure. @JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now @yo' that's not the issue.
with the laptop I lose access to the company network and anything I need from there during the next two months, such as the email address of payroll etc. etc., needs to be 100% collected first @yo' I'm sorry, I explain badly in English :) I mean, if the rule was to use \tl_use:N to retrieve the contents of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts. @JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing. @Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work @Manuel I've wondered if one would use registers at all if one were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable. @Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can, I am sure, tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time. @Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things @Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)] @JosephWright I'm just exploring things myself “for fun”. I don't mean them as serious suggestions, and as you say you already thought of everything. It's just that I'm getting to those points myself so I ask for opinions :) @Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not.
Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!) @JosephWright TeX being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild"; I don't see any way of having a macro that by default doesn't expand. @JosephWright it has series of footnotes for different types of footnotey thing; a quick eye over the code suggests that by default it has 10 of them, but duplicated for minipages as LaTeX footnotes do. The mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts, and more if the user declares a new footnote series @JosephWright I was thinking while writing the mail, so I've not tried it yet, but given that the new \newinsert takes from the float list, I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them; it would need a few checks but should only be a line or two of code. @PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that?
I’m trying to get into the latest Manin-Marcolli paper Quantum Statistical Mechanics of the Absolute Galois Group on how to create from Grothendieck’s dessins d’enfant a quantum system, generalising the Bost-Connes system to the non-Abelian part of the absolute Galois group $Gal(\overline{\mathbb{Q}}/\mathbb{Q})$. In doing so they want to extend the action of the multiplicative monoid $\mathbb{N}_{\times}$ by power maps on the roots of unity to the action of a larger monoid on all dessins d’enfants. Here they use an idea, originally due to Jordan Ellenberg, worked out by Melanie Wood in her paper Belyi-extending maps and the Galois action on dessins d’enfants. To grasp this, it’s best to remember what dessins have to do with Belyi maps, which are maps defined over $\overline{\mathbb{Q}}$ \[ \pi : \Sigma \rightarrow \mathbb{P}^1 \] from a Riemann surface $\Sigma$ to the complex projective line (aka the 2-sphere), ramified only in $0,1$ and $\infty$. The dessin determining $\pi$ is the 2-coloured graph on the surface $\Sigma$ with as black vertices the pre-images of $0$, white vertices the pre-images of $1$ and these vertices are joined by the lifts of the closed interval $[0,1]$, so the number of edges is equal to the degree $d$ of the map. Wood considers a very special subclass of these maps, which she calls Belyi-extender maps, of the form \[ \gamma : \mathbb{P}^1 \rightarrow \mathbb{P}^1 \] defined over $\mathbb{Q}$ with the additional property that $\gamma$ maps $\{ 0,1,\infty \}$ into $\{ 0,1,\infty \}$. The upshot being that post-compositions of Belyi’s with Belyi-extenders $\gamma \circ \pi$ are again Belyi maps, and if two Belyi’s $\pi$ and $\pi’$ lie in the same Galois orbit, then so must all $\gamma \circ \pi$ and $\gamma \circ \pi’$. The crucial Ellenberg-Wood idea is then to construct “new Galois invariants” of dessins by checking existing and easily computable Galois invariants on the dessins of the Belyi’s $\gamma \circ \pi$. 
For this we need to know how to draw the dessin of $\gamma \circ \pi$ on $\Sigma$ if we know the dessins of $\pi$ and of the Belyi-extender $\gamma$. Here’s the procedure. Here, the middle dessin is that of the Belyi-extender $\gamma$ (which in this case is the power map $t \rightarrow t^4$) and the upper graph is the unmarked dessin of $\pi$. One has to replace each of the black-white edges in the dessin of $\pi$ by the dessin of the extender $\gamma$, but one must be very careful in respecting the orientations on the two dessins. In the upper picture just one edge is replaced, and one has to do this for all edges in a compatible manner. Thus, a Belyi-extender $\gamma$ inflates the dessin of $\pi$ by a factor equal to the degree of $\gamma$. For this reason I prefer to call them dessinflateurs, a contraction of dessin+inflator. In her paper, Melanie Wood says she can separate dessins for which all known Galois invariants were the same, such as these two dessins, by inflating them with a suitable Belyi-extender and computing the monodromy group of the inflated dessin. This monodromy group is the permutation group generated by two elements, the first one giving the permutation on the edges obtained by walking counter-clockwise around all black vertices, the second by walking around all white vertices. For example, by labelling the edges of $\Delta$, its monodromy is generated by the permutations $(2,3,5,4)(1,6)(8,10,9)$ and $(1,3,2)(4,7,5,8)(9,10)$ and GAP tells us that the order of this group is $1814400$. For $\Omega$ the generating permutations are $(1,2)(3,6,4,7)(8,9,10)$ and $(1,2,4,3)(5,6)(7,9,8)$, giving an isomorphic group. Let’s inflate these dessins using the Belyi-extender $\gamma(t) = -\frac{27}{4}(t^3-t^2)$ with corresponding dessin It took me a couple of attempts before I got the inflated dessins correct (as I knew from Wood that this simple extender would not separate the dessins).
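These monodromy orders can be reproduced outside GAP as well. A small sketch using sympy's permutation groups, with the two cycles for $\Delta$ from above shifted to 0-based indexing (the variable names are my own):

```python
from sympy.combinatorics import Permutation, PermutationGroup

# 0-indexed versions of the cycles (2,3,5,4)(1,6)(8,10,9) and (1,3,2)(4,7,5,8)(9,10)
black = Permutation([[1, 2, 4, 3], [0, 5], [7, 9, 8]], size=10)  # around black vertices
white = Permutation([[0, 2, 1], [3, 6, 4, 7], [8, 9]], size=10)  # around white vertices

G = PermutationGroup([black, white])
print(G.order())  # 1814400, matching the GAP computation quoted above
```

The same two lines, fed the generators for $\Omega$, confirm the isomorphic group of the same order.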
Inflated $\Omega$ on top: Both dessins give a monodromy group of order $35838544379904000000$. Now we’re ready to do serious work. Melanie Wood uses in her paper the extender $\zeta(t)=\frac{27 t^2(t-1)^2}{4(t^2-t+1)^3}$ with associated dessin and says she can now separate the inflated dessins by the orders of their monodromy groups. She gets for the inflated $\Delta$ the order $19752284160000$ and for the inflated $\Omega$ the order $214066877211724763979841536000000000000$. It’s very easy to make mistakes in these computations, so probably I did something horribly wrong, but I get for both $\Delta$ and $\Omega$ that the order of the monodromy group of the inflated dessin is $214066877211724763979841536000000000000$. I’d be very happy if someone were able to spot the error!
A Belyi-extender (or dessinflateur) is a rational function $q(t) = \frac{f(t)}{g(t)} \in \mathbb{Q}(t)$ that defines a map \[ q : \mathbb{P}^1_{\mathbb{C}} \rightarrow \mathbb{P}^1_{\mathbb{C}} \] unramified outside $\{ 0,1,\infty \}$, and has the property that $q(\{ 0,1,\infty \}) \subseteq \{ 0,1,\infty \}$. An example of such a Belyi-extender is the power map $q(t)=t^n$, which is totally ramified in $0$ and $\infty$, and we clearly have that $q(0)=0,~q(1)=1$ and $q(\infty)=\infty$. The composition of two Belyi-extenders is again an extender, and we get a rather mysterious monoid $\mathcal{E}$ of all Belyi-extenders. Very little seems to be known about this monoid. Its units form the symmetric group $S_3$, which is the automorphism group of $\mathbb{P}^1_{\mathbb{C}} \setminus \{ 0,1,\infty \}$, and mapping an extender $q$ to its degree gives a monoid map $\mathcal{E} \rightarrow \mathbb{N}_+^{\times}$ to the multiplicative monoid of positive natural numbers. If one relaxes the condition $q(t) \in \mathbb{Q}(t)$ to being defined over the algebraic closure $\overline{\mathbb{Q}}$, then such maps/functions have been known for some time under the name of dynamical Belyi-functions, for example in Zvonkin’s Belyi Functions: Examples, Properties, and Applications (section 6). Here, one is interested in the complex dynamical system of iterations of $q$, that is, the limit-behaviour of the orbits \[ \{ z,q(z),q^2(z),q^3(z),… \} \] for all complex numbers $z \in \mathbb{C}$. In general, the 2-sphere $\mathbb{P}^1_{\mathbb{C}} = S^2$ has a finite number of open sets (the Fatou domains) where the limit behaviour of the orbits is similar, and the union of these open sets is dense in $S^2$. The complement of the Fatou domains is the Julia set of the function, of which we might expect a nice fractal picture. Let’s take again the power map $q(t)=t^n$.
For a complex number $z$ lying outside the unit disc, the sequence $\{ z,z^n,z^{n^2},… \}$ has limit point $\infty$, and for those lying inside the unit circle, this limit is $0$. So, here we have two Fatou domains (the interior and the exterior of the unit circle) and the Julia set of the power map is the (boring?) unit circle. Fortunately, there are indeed dynamical Belyi-maps having a more pleasant looking Julia set, such as this one But then, many dynamical Belyi-maps (and Belyi-extenders) are systems of an entirely different nature: they are completely chaotic, meaning that their Julia set is the whole $2$-sphere! Nowhere do we find an open region where points share the same limit behaviour… (the butterfly effect). There’s a nice sufficient condition for chaotic behaviour, due to Dennis Sullivan, which is pretty easy to check for dynamical Belyi-maps. A periodic point for $q(t)$ is a point $p \in S^2 = \mathbb{P}^1_{\mathbb{C}}$ such that $p = q^m(p)$ for some $m \geq 1$. A critical point is one such that either $q(p) = \infty$ or $q'(p)=0$. Sullivan’s result is that $q(t)$ is completely chaotic when all its critical points $p$ become eventually periodic, that is, some $q^k(p)$ is periodic, but $p$ itself is not periodic. For a Belyi-map $q(t)$ the critical points are either complex numbers mapping to $\infty$ or the inverse images of $0$ or $1$ (that is, the black or white dots in the dessin of $q(t)$) which are not leaf-vertices of the dessin. Let’s do an example, already used by Sullivan himself: \[ q(t) = \left(\frac{t-2}{t}\right)^2 \] This is a Belyi-function, and in fact a Belyi-extender, as it is defined over $\mathbb{Q}$ and we have that $q(0)=\infty$, $q(1)=1$ and $q(\infty)=1$. The corresponding dessin is (inverse images of $\infty$ are marked with an $\ast$) The critical points $0$ and $2$ are not periodic, but they become eventually periodic: \[ 2 \rightarrow^q 0 \rightarrow^q \infty \rightarrow^q 1 \rightarrow^q 1 \] and $1$ is periodic.
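The eventual periodicity of the critical orbit $2 \to 0 \to \infty \to 1 \to 1$ can be checked mechanically. A small sketch with exact rationals, using a string marker for the point at infinity and the limiting value $q(\infty)=1$:

```python
from fractions import Fraction

INF = "inf"  # marker for the point at infinity on the Riemann sphere

def q(t):
    # Sullivan's example q(t) = ((t-2)/t)^2
    if t == INF:
        return Fraction(1)   # ((t-2)/t)^2 -> 1 as t -> infinity
    if t == 0:
        return INF           # pole of q at t = 0
    return ((t - 2) / t) ** 2

orbit = [Fraction(2)]
for _ in range(4):
    orbit.append(q(orbit[-1]))
# orbit is 2 -> 0 -> inf -> 1 -> 1: the critical point 2 is eventually periodic
print(orbit)
```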
For a general Belyi-extender $q$, we have that the image under $q$ of any critical point is among $\{ 0,1,\infty \}$, and because we demand that $q(\{ 0,1,\infty \}) \subseteq \{ 0,1,\infty \}$, every critical point of $q$ eventually becomes periodic. If we want the corresponding dynamical system not to be completely chaotic, we have to ensure that one of the periodic points among $\{ 0,1,\infty \}$ (and there is at least one of those) is critical. Let’s consider the very special Belyi-extenders $q$ having the additional property that $q(0)=0$, $q(1)=1$ and $q(\infty)=\infty$; then all three of them are periodic. So, the system is always completely chaotic unless the black dot at $0$ is not a leaf-vertex of the dessin, or the white dot at $1$ is not a leaf-vertex, or the degree of the region determined by the starred $\infty$ is at least two. Going back to the mystery Manin-Marcolli sub-monoid of $\mathcal{E}$, this might explain why it is a good idea to restrict to very special Belyi-extenders whose associated dessin is a $2$-coloured tree, for then the periodic point $\infty$ is critical (the degree of the outside region is at least two), and therefore the conditions of Sullivan’s theorem are not satisfied. So, these Belyi-extenders do not necessarily have to be completely chaotic. (tbc)
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs Yeah, here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah, the algebraists all place out, so I'm assuming everyone here is an analyst and doesn't care about commutative algebra Then the year after, another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere, and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric It's got 3 "underground" floors (quotation marks because the place is on a very tall hill, so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake, it's real nice The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly And then there's one weird area called the math bunker that's trickier to access: you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building) It's hard to get a feel for which places are good at undergrad math.
Highly ranked places are known for having good researchers, but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore) In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus One of my professors said it to describe a bunch of REUs; it basically boils down to problems that some of these give their students which nobody really cares about, but which undergrads could work on and get a paper out of @TedShifrin I think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine), and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies \| x ( t ) - 0 \| < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $... "If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions $f$ on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values on its diagonal. What will that mean when it is multiplied by a given vector $(x,y)$, and how will the magnitude of that vector change?
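A quick numpy illustration of both checks (the stretching of a vector, and the operator norm), with a hypothetical matrix chosen to have the two eigenvalues $2$ and $1/2$:

```python
import numpy as np

# hypothetical diagonalised matrix with distinct eigenvalues 2 and 1/2
A = np.diag([2.0, 0.5])

v = np.array([3.0, 4.0])
stretch = np.linalg.norm(A @ v) / np.linalg.norm(v)
print(stretch)               # between 0.5 and 2, depending on the direction of v
print(np.linalg.norm(A, 2))  # operator norm = largest singular value = 2.0
```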
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than $2$, $1/2$ Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$ Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements' worth of variables is taking up more than a single line in LaTeX, and that is really bugging my desire to keep things straight. hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$ for example, writing $x = ac+bd\delta$ and $y = bc+ad$, we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$, and then you can argue with the ring properties of $\Bbb{Q}$, thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$ I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems; that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to their power set, or more generally from any set to its power set).theorem ...
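The ring-shortcut for $\otimes$-associativity suggested above is also easy to machine-check. A minimal sketch over exact rationals, for an illustrative (non-square) choice $\delta = 5$:

```python
from fractions import Fraction

DELTA = Fraction(5)  # any non-square rational works for this check

def mul(x, y):
    # (a + b*sqrt(D)) * (c + d*sqrt(D)) = (ac + bd*D) + (bc + ad)*sqrt(D)
    a, b = x
    c, d = y
    return (a * c + b * d * DELTA, b * c + a * d)

# spot-check associativity on a grid of small rational triples
vals = [Fraction(n, 2) for n in range(-2, 3)]
triples = [((a, b), (c, d), (e, f))
           for a in vals for b in vals for c in vals
           for d in vals for e in vals for f in vals]
assert all(mul(mul(x, y), z) == mul(x, mul(y, z)) for x, y, z in triples[:2000])
print("associativity holds on all sampled triples")
```

This is only a finite spot-check, of course; the actual proof is the ring argument with $x = ac+bd\delta$ and $y = bc+ad$ sketched in the chat.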
The axiom of triviality is also used extensively in computer verification languages... take Cantor's diagonalization theorem. It is obvious. (but seriously, the best tactic is overpowered...) Extensions are such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible, e.g. the surreals are the largest possible field It says on Wikipedia that any ordered field can be embedded in the surreal number system. Is this true? How is it done, or if it is unknown (or unknowable), what is the proof that an embedding exists for any ordered field? Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's incompleteness theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? "Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example: CH is independent of ZFC, meaning you can neither prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH, or which derive CH; thus if your set of axioms contains those, then you can decide the truth value of CH in that system @Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wondered whether to show that it is false by finding a finite sentence and procedure that can produce infinity, but so far I have failed Put another way, an equivalent formulation of that (possibly open) problem is: > Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object? If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't.
In fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity, however, is not good enough, as implicitly pointed out when many users who engaged with my rambles managed to find counterexamples escaping every definition of an infinite object I proposed; which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy-of-infinity book The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items. The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... Oh great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem hmm... By the fundamental theorem of algebra, every complex polynomial $P$ of degree $n$ can be expressed as $$P(x) = c\prod_{k=1}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic Thus, given a transcendental $s$, minimising $|P(s)|$ will proceed as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $$0<\left|x-\frac{p}{q}\right|<\frac{1}{q^{n}}.$$ Do these still exist if the axiom of infinity is blown up? Hmmm...
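The partial sums of such a Liouville series can be computed exactly with rational arithmetic. A small sketch for base $b=10$ (the classical Liouville constant), showing that the truncations really do increase monotonically:

```python
from fractions import Fraction

def partial_sum(M, b=10):
    # S_M = sum_{k=1}^{M} 1 / b^(k!), computed exactly as a rational number
    s, fact = Fraction(0), 1
    for k in range(1, M + 1):
        fact *= k
        s += Fraction(1, b ** fact)
    return s

sums = [partial_sum(M) for M in range(1, 6)]
assert all(x < y for x, y in zip(sums, sums[1:]))  # monotonically increasing
print(float(sums[2]))  # S_3 = 1/10 + 1/100 + 1/10^6 = 0.110001
```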
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each $M$ form a monotonically increasing sequence, which converges by the ratio test; therefore, by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework There's this theorem in Spivak's Calculus: Theorem 7. Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $$f'... and neither Rolle's nor the mean value theorem needs the axiom of choice Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else I need to finish that book to comment typo: neither Rolle's nor the mean value theorem needs the axiom of choice nor an infinite set > are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
Adding, Multiplying, and Dividing Power Series Suppose that $\displaystyle f(x) = \sum_{n=0}^\infty a_n x^n$ and that $\displaystyle g(x) = \sum_{n=0}^\infty b_n x^n$. We can get the power series for $f(x)+g(x)$, $f(x)g(x)$ and $f(x)/g(x)$ by adding, multiplying, and dividing these expressions, as if they were polynomials. Computing $f(x)/g(x)$, however, is trickier, as we have to perform long division and treat larger powers of $x$ as being less important than smaller powers. These ideas are explained in the following video. Unless you are told otherwise by your instructor, you will at most be working with the sums and differences of series, and not the more complicated products and quotients.
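Multiplying series "as if they were polynomials" means the coefficient of $x^n$ in $f(x)g(x)$ is $\sum_{i=0}^{n} a_i b_{n-i}$ (the Cauchy product). A short sketch computing the first few product coefficients:

```python
def cauchy_product(a, b):
    # coefficient of x^k in f*g is sum_{i=0}^{k} a_i * b_{k-i}
    n = min(len(a), len(b))
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]

# example: 1/(1-x) = 1 + x + x^2 + ...; squaring it gives sum (n+1) x^n
geom = [1] * 6
print(cauchy_product(geom, geom))  # [1, 2, 3, 4, 5, 6]
```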
We owe Paul Dirac two excellent mathematical jokes. I have amended them with a few lesser known variations. A. Square root of the Laplacian: we want $\Delta$ to be $D^2$ for some first order differential operator (for example, because it is easier to solve first order partial differential equations than second order PDEs). Writing it out, $$\sum_{k=1}^n \frac{\partial^2}{\partial x_k^2}=\left(\sum_{i=1}^n \gamma_i \frac{\partial}{\partial x_i}\right)\left(\sum_{j=1}^n \gamma_j \frac{\partial}{\partial x_j}\right) = \sum_{i,j}\gamma_i\gamma_j \frac{\partial^2}{\partial x_i \partial x_j},$$ and equating the coefficients, we get that this is indeed true if $$D=\sum_{i=1}^n \gamma_i \frac{\partial}{\partial x_i}\quad\text{and}\quad \gamma_i\gamma_j+\gamma_j\gamma_i=2\delta_{ij}.$$ It remains to come up with the right $\gamma_i$'s. Dirac realized how to accomplish it with $4\times 4$ matrices when $n=4$; but a neat follow-up joke is to simply define them to be the elements $\gamma_1,\ldots,\gamma_n$ of $$\mathbb{R}\langle\gamma_1,\ldots,\gamma_n\rangle/(\gamma_i\gamma_j+\gamma_j\gamma_i - 2\delta_{ij}).$$ Using symmetry considerations, it is easy to conclude that the commutator of the $n$-dimensional Laplace operator $\Delta$ and multiplication by $r^2=x_1^2+\cdots+x_n^2$ is equal to $aE+b$, where $$E=x_1\frac{\partial}{\partial x_1}+\cdots+x_n\frac{\partial}{\partial x_n}$$ is the Euler vector field. A boring way to confirm this and to determine the coefficients $a$ and $b$ is to expand $[\Delta,r^2]$ and simplify using the commutation relations between the $x$'s and $\partial$'s. A more exciting way is to act on $x_1^\lambda$, where $\lambda$ is a formal variable: $$[\Delta,r^2]x_1^{\lambda}=((\lambda+2)(\lambda+1)+2(n-1)-\lambda(\lambda-1))x_1^{\lambda}=(4\lambda+2n)x_1^{\lambda}.$$ Since $x_1^{\lambda}$ is an eigenvector of the Euler operator $E$ with eigenvalue $\lambda$, we conclude that $$[\Delta,r^2]=4E+2n.$$ B.
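The identity $[\Delta,r^2]=4E+2n$ can also be checked symbolically. A sketch with sympy for $n=3$, applying both sides to a test polynomial (the test function is my own choice; the identity holds for any smooth $f$):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)
n = len(coords)

lap = lambda g: sum(sp.diff(g, v, 2) for v in coords)     # Laplacian
euler = lambda g: sum(v * sp.diff(g, v) for v in coords)  # Euler vector field E
r2 = sum(v**2 for v in coords)

f = x**3 * y + y * z**2 + 7                               # arbitrary test polynomial
commutator = sp.expand(lap(r2 * f) - r2 * lap(f))         # [Delta, r^2] applied to f
print(sp.expand(commutator - (4 * euler(f) + 2 * n * f)))  # 0
```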
Dirac delta function: if we can write $$g(x)=\int g(y)\delta(x-y)dy$$ then instead of solving an inhomogeneous linear differential equation $Lf=g$ for each $g$, we can solve the equations $Lf=\delta(x-y)$ for each real $y$, where the linear differential operator $L$ acts on the variable $x$, and combine the answers for different $y$ weighted by $g(y)$. Clearly, there are fewer real numbers than functions, and if $L$ has constant coefficients, using translation invariance the set of right-hand sides is further reduced to just one, $\delta(x)$. In this form, the joke goes back to Laplace and Poisson. What happens if instead of the ordinary geometric series we consider a doubly infinite one? Since $$z(\cdots + z^{-n-1} + z^{-n} + \cdots + 1 + \cdots + z^n + \cdots)= \cdots + z^{-n} + z^{-n+1} + \cdots + z + \cdots + z^{n+1} + \cdots,$$ the expression in the parentheses is annihilated by multiplication by $z-1$, hence it is equal to $\delta(z-1)$. Homogenizing, we get $$\sum_{n\in\mathbb{Z}}\left(\frac{z}{w}\right)^n=\delta(z-w).$$ This identity plays an important role in conformal field theory and the theory of vertex operator algebras. Pushing infinite geometric series in a different direction, $$\cdots + z^{-n-1} + z^{-n} + \cdots + 1=-\frac{z}{1-z} \quad\text{and}\quad 1 + z + \cdots + z^n + \cdots = \frac{1}{1-z},$$ which add up to $1$. 
This time, the sum of the doubly infinite geometric series is zero! Thus the point $0\in\mathbb{Z}$ is the sum of all lattice points on the non-positive half-line and all lattice points on the non-negative half-line: $$0=[\ldots,-2,-1,0] + [0,1,2,\ldots] $$ A vast generalization is given by Brion's formula for the generating function of the lattice points in a convex lattice polytope $\Delta\subset\mathbb{R}^N$ with vertices $v\in{\mathbb{Z}}^N$ and closed inner vertex cones $C_v\subset\mathbb{R}^N$: $$\sum_{P\in \Delta\cap{\mathbb{Z}}^N} z^P = \sum_v\left(\sum_{Q\in C_v\cap{\mathbb{Z}}^N} z^Q\right),$$ where the inner sums on the right-hand side need to be interpreted as rational functions in $z_1,\ldots,z_N$. Another great joke based on infinite series is the Eilenberg swindle, but I am too exhausted by fighting the math preview to do it justice.
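Brion's formula can be sanity-checked in the simplest case (my own example, assuming sympy is available): for the segment $\Delta=[0,m]\subset\mathbb{R}$, the vertex cone at $0$ contributes $1/(1-z)$, the one at $m$ contributes $z^m/(1-z^{-1})$, and the two rational functions sum to the honest polynomial $1+z+\cdots+z^m$:

```python
# Brion's formula for the 1-dimensional lattice polytope Δ = [0, m].
import sympy as sp

z = sp.symbols('z')
m = 5

cone_at_0 = 1 / (1 - z)          # sum_{k >= 0} z^k, cone pointing right
cone_at_m = z**m / (1 - 1/z)     # sum_{k <= m} z^k, cone pointing left

brion = sp.cancel(cone_at_0 + cone_at_m)
polytope = sum(z**k for k in range(m + 1))   # 1 + z + ... + z^m
assert sp.simplify(brion - polytope) == 0
```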
A long while ago I promised to take you from the action by the modular group $\Gamma=PSL_2(\mathbb{Z})$ on the lattices at hyperdistance $n$ from the standard orthogonal lattice $L_1$ to the corresponding ‘monstrous’ Grothendieck dessin d’enfant. Speaking of dessins d’enfant, let me point you to the latest intriguing paper by Yuri I. Manin and Matilde Marcolli, arXived a few days ago, Quantum Statistical Mechanics of the Absolute Galois Group, on how to build a quantum system for the absolute Galois group from dessins d’enfant (more on this, I promise, later). Where were we? We’ve seen natural one-to-one correspondences between (a) points on the projective line over $\mathbb{Z}/n\mathbb{Z}$, (b) lattices at hyperdistance $n$ from $L_1$, and (c) coset classes of the congruence subgroup $\Gamma_0(n)$ in $\Gamma$. How to get from there to a dessin d’enfant? The short answer is: it’s all in Ravi S. Kulkarni’s paper, “An arithmetic-geometric method in the study of the subgroups of the modular group”, Amer. J. Math 113 (1991) 1053-1135. It is a complete mystery to me why Tatitscheff, He and McKay don’t mention Kulkarni’s paper in “Cusps, congruence groups and monstrous dessins”. Because all they do (and much more) is in Kulkarni. I’ve blogged about Kulkarni’s paper years ago: – In the Dedekind tessellation it was all about assigning special polygons to subgroups of finite index of $\Gamma$. – In Modular quilts and cuboid tree diagrams it went on assigning (multiple) cuboid trees to a (conjugacy class of) such finite index subgroup. – In Hyperbolic Mathieu polygons the story continued with a finite-to-one connection between special hyperbolic polygons and cuboid trees. – In Farey codes it was shown how to encode such polygons by a Farey-sequence. – In Generators of modular subgroups it was shown how to get generators of the finite index subgroups from this Farey sequence. 
The modular group is a free product \[ \Gamma = C_2 \ast C_3 = \langle s,u~|~s^2=1=u^3 \rangle \] with lifts of $s$ and $u$ to $SL_2(\mathbb{Z})$ given by the matrices \[ S=\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix},~\qquad U= \begin{bmatrix} 0 & -1 \\ 1 & -1 \end{bmatrix} \] As a result, any permutation representation of $\Gamma$ on a set $E$ can be represented by a $2$-coloured graph (with black and white vertices) and edges corresponding to the elements of the set $E$. Each white vertex has two (or one) edges connected to it and every black vertex has three (or one). These edges are the elements of $E$ permuted by $s$ (for white vertices) and $u$ (for black ones), the order of the 3-cycle determined by going counterclockwise round the vertex. Clearly, if there’s just one edge connected to a vertex, it gives a fixed point (or 1-cycle) in the corresponding permutation. The ‘monstrous dessin’ for the congruence subgroup $\Gamma_0(n)$ is the picture one gets from the permutation $\Gamma$-action on the points of $\mathbb{P}^1(\mathbb{Z}/n \mathbb{Z})$, or equivalently, on the coset classes or on the lattices at hyperdistance $n$. Kulkarni’s paper (or the blogposts above) tells you how to get at this picture starting from a fundamental domain of $\Gamma_0(n)$ acting on the upper half-plane by Moebius transformations. Sage gives a nice image of this fundamental domain via the command FareySymbol(Gamma0(n)).fundamental_domain() Here’s the image for $n=6$: The boundary points (on the halflines through $0$ and $1$ and the $4$ half-circles) need to be identified, which is indicated by matching colours. So the 2 halflines are identified, as are the two blue (and green) half-circles (in opposite direction). To get the dessin from this, let’s first look at the interior points. A white vertex is a point in the interior where two black and two white tiles meet, a black vertex corresponds to an interior point where three black and three white tiles meet. 
Points on the boundary where tiles meet are coloured red, and after identification two of these reds give one white or black vertex. Here’s the intermediate picture. The two top red points are identified giving a white vertex, as do the two reds on the blue half-circles and the two reds on the green half-circles, because after identification two black and two white tiles meet there. This then gives us the ‘monstrous’ modular dessin for $n=6$ of the Tatitscheff, He and McKay paper: Let’s try a more difficult example: $n=12$. Sage gives us the fundamental domain, giving us the intermediate picture, and spotting the correct identifications gives us the ‘monstrous’ dessin for $\Gamma_0(12)$ from the THM-paper: In general there are several of these 2-coloured graphs giving the same permutation representation, so the obtained ‘monstrous dessin’ depends on the choice of fundamental domain. You’ll have noticed that the domain for $\Gamma_0(6)$ was symmetric, whereas the one Sage provides for $\Gamma_0(12)$ is not. This is caused by Sage using the Farey-code \[ \xymatrix{ 0 \ar@{-}[r]_1 & \frac{1}{6} \ar@{-}[r]_1 & \frac{1}{5} \ar@{-}[r]_2 & \frac{1}{4} \ar@{-}[r]_3 & \frac{1}{3} \ar@{-}[r]_4 & \frac{1}{2} \ar@{-}[r]_4 & \frac{2}{3} \ar@{-}[r]_3 & \frac{3}{4} \ar@{-}[r]_2 & 1} \] One of the nice results from Kulkarni’s paper is that for any $n$ there is a symmetric Farey-code, giving a perfectly symmetric fundamental domain for $\Gamma_0(n)$. For $n=12$ this symmetric code is \[ \xymatrix{ 0 \ar@{-}[r]_1 & \frac{1}{6} \ar@{-}[r]_2 & \frac{1}{4} \ar@{-}[r]_3 & \frac{1}{3} \ar@{-}[r]_4 & \frac{1}{2} \ar@{-}[r]_4 & \frac{2}{3} \ar@{-}[r]_3 & \frac{3}{4} \ar@{-}[r]_2 & \frac{5}{6} \ar@{-}[r]_1 & 1} \] It would be nice to see whether using these symmetric Farey-codes gives other ‘monstrous dessins’ than in the THM-paper. Remains to identify the edges in the dessin with the lattices at hyperdistance $n$ from $L_1$. 
Using the tricks from the previous post it is quite easy to check that for any $n$ the monstrous dessin for $\Gamma_0(n)$ starts off with the lattices $L_{M,\frac{g}{h}}$ as below Let’s do a sample computation showing that the action of $s$ on $L_n$ gives $L_{\frac{1}{n}}$: \[ L_n.s = \begin{bmatrix} n & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & -n \\ 1 & 0 \end{bmatrix} \] and then, as last time, to determine the class of the lattice spanned by the rows of this matrix we have to compute \[ \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 0 & -n \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -n \end{bmatrix} \] which is class $L_{\frac{1}{n}}$. And similarly for the other edges.
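The sample computation above can be double-checked numerically (my own quick sketch, assuming numpy is available, nothing more):

```python
# Numerical check of the two matrix products in the sample computation.
import numpy as np

n = 7
L_n = np.array([[n, 0], [0, 1]])
S = np.array([[0, -1], [1, 0]])      # lift of s to SL_2(Z)

acted = L_n @ S                      # L_n . s
reduced = S @ acted                  # the reduction step from the post

assert (acted == np.array([[0, -n], [1, 0]])).all()
assert (reduced == np.array([[-1, 0], [0, -n]])).all()
```

The same three lines with other starting lattices check the remaining edges.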
How Many Rational Points are enclosed by a circle having centre $(e, \pi)$? Please Help Thanks! Note by Karan Shekhawat 4 years, 9 months ago What is the radius of the circle? Actually this question was given to me by my friend, and he is so cruel and doesn't give any information for the radius. If the question asks to find the number of rational points (assuming it means points whose coordinates are rational numbers) "enclosed" by the circle, i.e., the union of the set of points within the circle and on the circle, it would strongly depend upon the radius, and I feel there would not be any nice looking formula for the same. If the question demands to find the points lying "on" the circle then Ronak's argument seems quite valid. @Sudeep Salgia – yeah Thanks a ton, Now I got it! If the radius of the circle is not given then there is at most one rational point, since you arbitrarily choose a point like (3,5) and hence decide the radius of the circle, but then you will not get another rational point. Could You Please Explain More, It sounds Interesting! 
And if the radius is fixed, then? @Ronak Agarwal My argument works only for the points lying on the circle. For finding the number of points enclosed by the circle it is impossible without knowing the radius. @Ronak Agarwal – Yes I understand, my cruel friend told me the problem poorly. Instead of this he should have asked for the number of points lying on the circle, which is 1 as you explained! And thanks for replying :) @Karan Shekhawat – Is your friend enrolled in fiitjee, cause this question recently appeared in AITS PART-TEST 2 @KARAN SHEKHAWAT @Ronak Agarwal – I don't know, maybe he is! Hey, when did Fiitjee AITS start? Can I enroll in it now? I want to take the full length tests of AITS. Can you please tell me the procedure and time table of the exams? Also what rank did you get in it? I'm just curious to know! @Karan Shekhawat – Part Test-1 All India Rank-4 Part Test-2 All India Rank-34 Part Test-3 All India Rank-Yet to be known @Ronak Agarwal – wow! That's great! Can you please tell me the procedure also? @Ronak Agarwal Also it is not good to ask, but still, how much percentage-wise did you get in AITS? Please reply Ronak. @Karan Shekhawat – Check this Fiitjee AITS @KARAN SHEKHAWAT @Karan Shekhawat – AITS-1 Percentage = 63.33 % AITS-2 Percentage = 56.11 % AITS-3 Percentage = Yet to be known. @Ronak Agarwal – Thanks! Are all papers out of 360 marks? I think the difficulty level of these papers is high. @Karan Shekhawat – Yes, actually there are two papers, paper 1 and paper 2, and yes they are quite tough. @Ronak Agarwal – Congratulations!! Nice work!! @Deepanshu Gupta @megh choksi @Ronak Agarwal @Mvs Saketh @Sudeep Salgia @Karthik Kannan @jatin yadav @Krishna Sharma
This is a follow up of this question: I have the rotation matrix $$ \left( \begin{matrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33}\\ \end{matrix}\right) $$ I'm using a pre-multiplying rotation matrix (that operates on column vectors) for intrinsic rotations (i.e. I make rotations about the axes of the plane that rotates). And since the fixed frame is my reference frame --- $$ \left( \begin{matrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\\ \end{matrix}\right) $$ My rotation matrix is nothing but the column unit-vectors of the axes of the rotated frame, i.e. $$ \left( \begin{matrix} x_{1} & x_{2} & x_{3}\\ y_{1} & y_{2} & y_{3}\\ z_{1} & z_{2} & z_{3}\\ \end{matrix}\right) $$ So therefore I have the values of a11, a12, a13, a21, a22, a23, a31, a32, a33 as x1, x2, x3, y1, y2, y3, z1, z2, z3. $$ \left(\small{ \begin{matrix} \cos(b)\cos(c) & -\cos(b)\sin(c) & \sin(b)\\ \cos(a)\sin(c) + \cos(c)\sin(a)\sin(b) & \cos(a)\cos(c) - \sin(a)\sin(b)\sin(c) & -\cos(b)\sin(a)\\ \sin(a)\sin(c) - \cos(a)\cos(c)\sin(b) & \cos(c)\sin(a) + \cos(a)\sin(b)\sin(c) & \cos(a)\cos(b)\\ \end{matrix}} \right) $$ Now if I have to solve for the above angles a, b & c (pitch, yaw and roll), I basically have nine equations but three unknowns. Following are the equations ---

a11 = cos(b)∗cos(c)
a12 = −cos(b)∗sin(c)
a13 = sin(b)
a21 = cos(a)∗sin(c) + cos(c)∗sin(a)∗sin(b)
a22 = cos(a)∗cos(c) − sin(a)∗sin(b)∗sin(c)
a23 = −cos(b)∗sin(a)
a31 = sin(a)∗sin(c) − cos(a)∗cos(c)∗sin(b)
a32 = cos(c)∗sin(a) + cos(a)∗sin(b)∗sin(c)
a33 = cos(a)∗cos(b)

This is where I learnt about it. Now I am using a non-linear least squares curve fitting method to solve the above set of over-determined equations. There are two major problems that I am encountering: The final values of a, b, c change as I change the initial values in the iterative algorithm. I get different results if I start from [50 50 50] and different results with [0 0 0]. 
Secondly, I don't think that the values obtained are correct, since the angles of pitch, yaw & roll seem pretty much different in the video. I'm using the lsqcurvefit command in Matlab (click for the question I asked on stackoverflow; click for documentation). I have been on this problem of how to calculate pitch, yaw & roll for quite some time now. I think this post will give you all the details of what I have tried. I need your help to know if what I am doing is the best approach to tackle my problem. If not, please point out what is wrong in my method. If you think there are other simpler methods, please let me know about them. I'm sure there has to be a better method, since this seems like a pretty simple thing to do. Should I change my Matlab algorithm? Does anyone know any special Matlab/Mathematica toolbox that calculates the yaw, pitch, roll? Thanks!
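For what it's worth, with the parameterization quoted in the question the angles can be read off in closed form instead of fitted. A sketch (my own, assuming $\cos(b)\neq 0$, i.e. no gimbal lock): $b=\operatorname{asin}(a_{13})$, $a=\operatorname{atan2}(-a_{23},a_{33})$, $c=\operatorname{atan2}(-a_{12},a_{11})$.

```python
# Closed-form Euler angle extraction from the matrix parameterization above.
import math

def angles_from_matrix(R):
    """Recover (a, b, c); assumes cos(b) != 0 and b in (-pi/2, pi/2)."""
    b = math.asin(R[0][2])                    # a13 = sin(b)
    a = math.atan2(-R[1][2], R[2][2])         # -a23 = cb*sa, a33 = cb*ca
    c = math.atan2(-R[0][1], R[0][0])         # -a12 = cb*sc, a11 = cb*cc
    return a, b, c

def matrix_from_angles(a, b, c):
    """Build the matrix exactly as written in the question."""
    ca, sa, cb, sb, cc, sc = (math.cos(a), math.sin(a), math.cos(b),
                              math.sin(b), math.cos(c), math.sin(c))
    return [[cb * cc,                 -cb * sc,                sb],
            [ca * sc + cc * sa * sb,  ca * cc - sa * sb * sc, -cb * sa],
            [sa * sc - ca * cc * sb,  cc * sa + ca * sb * sc,  ca * cb]]

# Round-trip check with arbitrary test angles:
a0, b0, c0 = 0.3, -0.5, 1.1
R = matrix_from_angles(a0, b0, c0)
a1, b1, c1 = angles_from_matrix(R)
assert all(abs(u - v) < 1e-9 for u, v in zip((a0, b0, c0), (a1, b1, c1)))
```

Using atan2 rather than division keeps the quadrants right; no initial guess, no iteration.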
Digital communication systems neutralize many problems that beset analog systems. One of these problems is faithfully communicating the information across a channel. With analog signals, we need to make a decision as to what voltage level was transmitted. This voltage level comes from a very large range of possible voltage amplitudes. By supplanting it with a digital system, we massively reduce the problem to deducing whether the binary 'ON' (1) was sent or 'OFF' (0). However, because of the inherent noise in the receivers (thermal noise and all), and the imperfectness of the communication channels, there are chances that a '0' would be corrupted to a '1' or vice-versa. So, it becomes important to design receivers so as to minimize such chances of error. This decision is often made at the detector stage in the receiver. It should be obvious that the detector plays a large part in making the receiver 'good'. In technical parlance, what we want to do is minimize the probability of making an error. Statistically, we want to minimize the Average Probability of Error, which is given by: \(P_e = \mathbb{E}(P\{\hat{X} \neq a_i | X = a_i\}) = \displaystyle\sum_{i=1}^{|\chi|}P\{\hat{X} \neq a_i | X = a_i\} \cdot P\{X = a_i\}\) where \(|\chi|\) is the cardinality of the constellation (number of symbols in the constellation). So, we want to minimize it? Yes. How? Intuitively, by not making a mistake in detection. We could juggle some equations and write the above expression as: \(P_e = \displaystyle\sum_{i=1}^{|\chi|}(1 - P\{\hat{X} = a_i | X = a_i\}) \cdot P\{X = a_i\} = 1 - \displaystyle\sum_{i=1}^{|\chi|}P\{\hat{X} = a_i | X = a_i\} \cdot P\{X = a_i\}\) (using \(\sum_i P\{X = a_i\} = 1\)). Okay! So, what is our objective? Minimize \(P_e\). Note that the expression is a subtraction and that it is a probability. Probabilities are non-negative (\(0\leq P_e \leq 1\)). So, minimizing \(P_e\) is equivalent to maximizing the subtrahend. 
Or, maximize \(\displaystyle\sum_{i=1}^{|\chi|}P\{\hat{X} = a_i | X = a_i\} \cdot P\{X = a_i\}\). Since this term is a summation, maximizing each one of its members maximizes the whole thing. Or, \(\displaystyle\sum_{i=1}^{|\chi|}\) maximize \(P\{\hat{X} = a_i | X = a_i\} \cdot P\{X = a_i\}\). This means that the detector should make a decision such that each conditional probability of correct decision is maximized. Imagine that each arriving signal is put on an individual piece of paper which represents a complex plane like the one shown below. Now, the detector needs to decide which symbol from the codebook might have been sent by looking at what is received. This is an inference process. We're looking at an outcome and guessing what caused it. To do so, the detector has a heuristic - it divides the entire sheet of paper into as many disjoint regions as there are codes in the codebook or symbols in the constellation. These regions encompass the entire sheet. Then, to each region, it assigns a symbol value - if the incoming signal lies in the region \(R_1\), I'll say that \(a_1\) was transmitted (\(\hat{X}=a_1\)). Likewise, it associates a symbol with each region. Now, how good the mapping of those regions is determines how well the detector will perform. In an optimal detector, the regions would be such that the above expression is maximized. But how do we know what the regions should be? The above equation can be written as \(\displaystyle\sum_{i=1}^{|\chi|}P\{y \in R_i | X = a_i\} \cdot P\{X = a_i\} = \displaystyle\sum_{i=1}^{|\chi|} \int_{R_i} f_Y(y | X = a_i)\,dy \cdot P\{X = a_i\}\), where \(f_Y(y|X = a_i)\) is the conditional probability density function of the received signal. The p.d.f. of Y determines the decision regions in the optimal detector. Thus, the choice of how the regions look, or what they should be, depends on the p.d.f. of the incoming signal. 
When we consider an AWGN channel, we model the noise as Gaussian, and consequently, Y also takes a Gaussian form. If we had modeled the noise as - let's say - a uniform distribution, Y would have a different form and consequently, the regions and the mapping would change. Note that \(P\{Y = y | X = a_i\} \cdot P\{X = a_i\} = P\{X = a_i | Y = y \} \cdot P\{Y = y\}\) (by Bayes' rule). This is the a posteriori probability that \(a_i\) was transmitted, given that 'y' was received. There is another receiver, the Maximum Likelihood receiver, which selects the \(a_i\) that maximizes the likelihood of obtaining \(Y = y\). For an ML detector, we have: maximize \(P\{Y=y|X=a_i\}\). Note that if the a priori probabilities of all symbols were equal, i.e. if transmitting any of the symbols from the codebook is equally probable, then the MAP detector reduces to the ML detector. This looks like a special case but is a very powerful tool which is widely exploited.
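To make the MAP-vs-ML distinction concrete, here is a toy simulation (my own example, not from the text): binary antipodal signalling \(\{+1,-1\}\) over AWGN with unequal priors, where the MAP rule shifts the decision threshold to \(t = (\sigma^2/2)\ln(p_{-}/p_{+})\) while the ML rule keeps it at zero:

```python
# MAP vs ML detection for +/-1 signalling in Gaussian noise, unequal priors.
import math, random

random.seed(1)
sigma, p_plus = 1.0, 0.8
# MAP threshold: decide +1 iff y > t_map; here t_map < 0, favoring +1.
t_map = (sigma**2 / 2) * math.log((1 - p_plus) / p_plus)

n_trials = 100_000
err_map = err_ml = 0
for _ in range(n_trials):
    x = 1 if random.random() < p_plus else -1   # draw symbol from the prior
    y = x + random.gauss(0, sigma)              # AWGN channel
    err_map += (1 if y > t_map else -1) != x
    err_ml  += (1 if y > 0 else -1) != x        # ML ignores the prior

# The MAP detector exploits the prior and makes fewer errors:
assert err_map < err_ml
```

With \(p_{+}=0.8\) the MAP error rate drops to roughly 0.11 versus roughly 0.16 for ML; with equal priors the threshold shift vanishes and the two coincide, as stated above.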
The Annals of Probability Ann. Probab. Volume 46, Number 3 (2018), 1441-1454. A Gaussian small deviation inequality for convex functions Abstract Let $Z$ be an $n$-dimensional Gaussian vector and let $f:\mathbb{R}^{n}\to \mathbb{R}$ be a convex function. We prove that \[\mathbb{P}(f(Z)\leq \mathbb{E}f(Z)-t\sqrt{\operatorname{Var}f(Z)})\leq\exp (-ct^{2}),\] for all $t>1$ where $c>0$ is an absolute constant. As an application we derive variance-sensitive small ball probabilities for Gaussian processes. Article information Source Ann. Probab., Volume 46, Number 3 (2018), 1441-1454. Dates Received: November 2016 Revised: May 2017 First available in Project Euclid: 12 April 2018 Permanent link to this document https://projecteuclid.org/euclid.aop/1523520021 Digital Object Identifier doi:10.1214/17-AOP1206 Mathematical Reviews number (MathSciNet) MR3785592 Zentralblatt MATH identifier 06894778 Subjects Primary: 60D05: Geometric probability and stochastic geometry [See also 52A22, 53C65] Secondary: 52A21: Finite-dimensional Banach spaces (including special norms, zonoids, etc.) [See also 46Bxx] 52A23: Asymptotic theory of convex bodies [See also 46B06] Citation Paouris, Grigoris; Valettas, Petros. A Gaussian small deviation inequality for convex functions. Ann. Probab. 46 (2018), no. 3, 1441--1454. doi:10.1214/17-AOP1206. https://projecteuclid.org/euclid.aop/1523520021
tl;dr: You can teach your machine to break an arbitrary Caesar cipher by observing enough training examples, using Trust Region Policy Optimization for Policy Gradients. Imagine a world where the hammer was introduced to the public just a couple of years ago. Everyone is running around trying to apply the hammer to anything that even resembles a nail. This is the world we are living in, and the hammer is deep learning. Today I will be applying it to a task that can be much more easily solved by other means but hey, it's the Deep Learning Age! Specifically, I will teach my machine to break a simple cipher like the Caesar cipher just by looking at several (actually, a lot of) examples of English text and corresponding encoded strings. You may have heard that machines are getting pretty good at playing games, so I decided to formulate this code breaking challenge as a game. Fortunately there is the OpenAI Gym toolkit that can be used "for developing and comparing reinforcement learning algorithms". It provides some great abstractions that help us define games in terms that a computer can understand. For instance, they have a game (or environment) called "Copy-v0" with the following setup and rules: There is an input tape with some characters. You can move the cursor one step left or right along this tape. You can read symbols under the cursor and output characters one at a time to the output tape. You need to copy the input tape characters to the output tape to win. Now let's talk a bit about the hammer itself. The hottest thing on the Reinforcement Learning market right now is Policy Gradients, and specifically this flavor: Trust Region Policy Optimization. There is an amazing article from Andrej Karpathy on Policy Gradients, so I will not give an introduction here. If you are new to Reinforcement Learning you should just stop reading this post and go read that one. Seriously, it's so much better! Still here? Ok, I will tell you about TRPO then. 
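For reference, the target task itself is tiny (my own illustration; the point of the post is that the agent must learn the inverse mapping from examples rather than being told the key):

```python
# A Caesar cipher rotates each lowercase letter by a fixed key.
def caesar(text, key):
    return ''.join(
        chr((ord(ch) - ord('a') + key) % 26 + ord('a')) if ch.islower() else ch
        for ch in text
    )

encoded = caesar("attack at dawn", 3)
print(encoded)                          # dwwdfn dw gdzq
assert caesar(encoded, -3) == "attack at dawn"
```

Decoding is just encoding with the negated key; the agent, of course, never sees the key at all.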
TRPO is a technique for Policy Gradients optimization that produces much better results than vanilla gradient descent and even guarantees (theoretically, of course) that you can get an improved policy network on every iteration. With vanilla PG you start by defining a policy network that produces scores for the actions given the current state. You then simulate hundreds and thousands of games taking actions suggested by the network and note which actions produced better results. Having this data available you can then use backpropagation to update your policy network and start all over again. The only thing that TRPO adds to this is that you solve a constrained optimization problem instead of an unconstrained one: $$ \textrm{maximize } L(\theta) \textrm{ subject to } \bar{D}_{KL}(\theta_{\textrm{old}},\theta)<\delta$$ Here \(L(\theta)\) is a loss that we are trying to optimize. It is defined as $$E_{a \sim q}[\frac{\pi_\theta(a|s_n)}{q(a|s_n)} A_{\theta_{\textrm{old}}}(s_n,a)],$$ where \(\theta\) is our weights vector, \(\pi_\theta(a|s_n)\) is a probability (score) of the selected action \(a\) in state \(s_n\) according to the policy network, \(q(a|s_n)\) is a corresponding score using the policy network from the iteration before and \(A_{\theta_{\textrm{old}}}(s_n,a)\) is an advantage (more on it later). Running simple gradient descent on this is the vanilla Policy Gradients approach. TRPO approach doesn't blindly descend along the gradient but takes into account the \(\bar{D}_{KL}(\theta_{\textrm{old}},\theta)<\delta\) constraint. To make sure the constraint is satisfied we do the following. First, we approximately solve the following equation to find a search direction: $$Ax = g,$$ where A is the Fisher information matrix, \(A_{\textrm{ij}} = \frac{\partial}{\partial \theta_i}\frac{\partial}{\partial \theta_j}\bar{D}_{KL}(\theta_{\textrm{old}},\theta)\) and \(g\) is the gradient that you can get from the loss using backpropagation. 
This is done using the conjugate gradients algorithm. Once we have a search direction we can easily find a maximum step along this direction that still satisfies the constraint. One thing that I promised to get back to is the advantage. It is defined as $$A_\pi(s,a)= Q_\pi(s,a)-V_\pi(s),$$ where \(Q_\pi(s,a)\) is a state-action value function (the actual reward of taking an action in this state; it usually includes discounted rewards for all upcoming states) and \(V_\pi(s)\) is a value function (in our case it's just a separate network that we train to predict the value of the state). Bored enough already? I promise, it's not that scary in code. You can find the full implementation here: tilarids/reinforcement_learning_playground. Specifically, look at trpo_agent.py. You can reproduce the Caesar cipher breaking by running trpo_caesar.py. If you think this resembles wojzaremba's implementation a lot - you are right. I was copying some TRPO code from there and then rewriting it to make it more readable and also to make sure it follows the paper closely.
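The "approximately solve \(Ax = g\)" step is plain conjugate gradients; in TRPO, \(A\) is never formed explicitly - only Fisher-vector products are needed. A minimal numpy sketch (my own, not copied from the repo):

```python
# Conjugate gradients given only the matrix-vector map fvp: v -> A v.
import numpy as np

def conjugate_gradient(fvp, g, iters=10, tol=1e-10):
    """Approximately solve A x = g for symmetric positive-definite A."""
    x = np.zeros_like(g)
    r = g.copy()                 # residual g - A x (x = 0 initially)
    p = r.copy()
    rr = r @ r
    for _ in range(iters):
        Ap = fvp(p)
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if rr_new < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

# Example with a symmetric positive-definite stand-in for the Fisher matrix:
rng = np.random.default_rng(0)
M = rng.normal(size=(5, 5))
A = M @ M.T + 5 * np.eye(5)
g = rng.normal(size=5)
x = conjugate_gradient(lambda v: A @ v, g, iters=50)
assert np.allclose(A @ x, g, atol=1e-6)
```

In the real algorithm `fvp` computes the Hessian-vector product of the KL term by automatic differentiation, so the full Fisher matrix never has to fit in memory.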
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs. Yeah, here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass, rather than ace, the class. I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like "yeah, the algebraists all place out, so I'm assuming everyone here is an analyst and doesn't care about commutative algebra". Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere, and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric. It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake, it's real nice. The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly. And then there's one weird area called the math bunker that's trickier to access: you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building). It's hard to get a feel for which places are good at undergrad math. 
Highly ranked places are known for having good researchers, but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad. I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore). In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus. One of my professors said it to describe a bunch of REUs; basically it boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of. @TedShifrin I think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine), and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students. In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies \| x ( t ) - 0 \| < \varepsilon.$$ The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $... "If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions $f$ on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" - this took me way longer than it should have. Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied with a given vector (x,y), and how will the magnitude of that vector change? 
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than $2$ or $1/2$. Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$. Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements' worth of variables is taking up more than a single line in LaTeX, and that is really bugging my desire to keep things straight. Hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$. For example, writing $x = ac+bd\delta$ and $y = bc+ad$, we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$, and then you can argue with the ring properties of $\Bbb{Q}$, thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$. I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems; that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of them. One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ... 
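For what it's worth, the dreaded associativity computation is a one-liner for a computer algebra system; a quick check of the multiplication rule (my own, assuming sympy is available):

```python
# Associativity of (a + b√δ) ⊗ (c + d√δ) = (ac + bdδ) + (bc + ad)√δ,
# checked on fully symbolic coefficients.
import sympy as sp

a, b, c, d, e, f, delta = sp.symbols('a b c d e f delta')

def mult(p, q):
    """Multiply coefficient pairs (p0 + p1√δ) ⊗ (q0 + q1√δ)."""
    return (p[0]*q[0] + p[1]*q[1]*delta, p[1]*q[0] + p[0]*q[1])

alpha, beta, gamma = (a, b), (c, d), (e, f)
left = mult(mult(alpha, beta), gamma)     # (α ⊗ β) ⊗ γ
right = mult(alpha, mult(beta, gamma))    # α ⊗ (β ⊗ γ)
assert all(sp.expand(l - r) == 0 for l, r in zip(left, right))
```

Both sides expand to the same pair of polynomials in six variables, which is exactly the more-than-one-line-of-LaTeX computation being avoided.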
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious. (but seriously, the best tactic is overpowered...) Extension is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible, e.g. the surreals are the largest possible ordered field. It says on Wikipedia that any ordered field can be embedded in the surreal number system. Is this true? How is it done, or if it is unknown (or unknowable), what is the proof that an embedding exists for any ordered field? Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? "Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example: CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH, or that derive CH, thus if your set of axioms contains those, then you can decide the truth value of CH in that system. @Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I have wondered whether to show that it is false by finding a finite sentence and procedure that can produce infinity, but have so far failed. Put another way, an equivalent formulation of that (possibly open) problem is: > Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object? If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't.
In fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity, however, is not good enough, as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book. The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items. The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... Oh great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem, hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic. Thus given $s$ transcendental, minimising $|P(s)|$ will be given as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $$0<\left|x-\frac{p}{q}\right|<\frac{1}{q^n}.$$ Do these still exist if the axiom of infinity is blown up? Hmmm...
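The quoted knapsack problem has a standard dynamic-programming solution when the weights are integers; a minimal sketch (the function name and the toy instance are my own):

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack: best[c] = max value achievable with total weight <= c."""
    best = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Toy instance: the optimum packs the first two items (weight 5, value 7).
print(knapsack([2, 3, 4, 5], [3, 4, 5, 6], 5))  # -> 7
```

This runs in O(n * capacity) time, which is pseudo-polynomial; the decision version of the problem is still NP-complete.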
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each M form a monotonically increasing sequence, which converges by the ratio test; therefore, by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework. There's this theorem in Spivak's Calculus: Theorem 7. Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $$f'... and neither Rolle's theorem nor the mean value theorem needs the axiom of choice. Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure. Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else, I need to finish that book to comment. typo: neither Rolle's theorem nor the mean value theorem needs the axiom of choice nor an infinite set > are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
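The partial sums described above can be computed exactly at every finite stage; a small illustration for the classic base-$b$ Liouville-type series (my own sketch, not part of the original argument):

```python
from fractions import Fraction

def liouville_partial(b, M):
    """Exact partial sum sum_{k=1}^{M} 1/b^(k!) as a rational number."""
    total = Fraction(0)
    fact = 1
    for k in range(1, M + 1):
        fact *= k
        total += Fraction(1, b ** fact)
    return total

# For b = 10 the digits are 1s at factorial positions: 0.110001000...
print(float(liouville_partial(10, 3)))  # -> 0.110001
```

Each partial sum is a rational number obtained by a finite procedure; only the passage to the limit $L$ invokes anything beyond finite arithmetic.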
I’m thrilled to join everyone at the best-named math blog. I am just home from Combinatorial Link Homology Theories, Braids, and Contact Geometry at ICERM in Providence, Rhode Island. The conference was aimed at students and non-experts with a focus on introducing open problems and computational techniques. Videos of many of the talks are available at ICERM’s site. (Look under “Programs and Workshops,” then “Summer 2014”.) One of the highlights of the workshop was the ‘Computational Problem Session’ MC’d by John Baldwin with contributions from Rachel Roberts, Nathan Dunfield, Johanna Mangahas, John Etnyre, Sucharit Sarkar, and András Stipsicz. Each spoke for a few minutes about open problems with a computational bent. I’ve done my best to relate all the problems in order with references and some background. Any errors are mine. Corrections and additions are welcome! Rachel Roberts Contact structures and foliations Eliashberg and Thurston showed that a codimension-one foliation of a three-manifold can be -approximated by a contact structure (as long as it is not the product foliation on ). Vogel showed that, with a few other restrictions, any two approximating contact structures lie in the same isotopy class. In other words, there is a map from , taut, oriented foliations to contact structures modulo isotopy for any closed, oriented three-manifold. Geography: What is the image of ? Botany: What do the fibers of look like? The image of is known to be contained within the space of weakly symplectically fillable and universally tight contact structures. Etnyre showed that if one removes “taut”, then is surjective. Etnyre and Baldwin showed that doesn’t “see” universal tightness. L-spaces and foliations A priori the rank of the Heegaard Floer homology groups associated to a rational homology three-sphere Y is bounded by the first ordinary homology group: . An L-space is a rational homology three-sphere for which equality holds.
Conjecture: Y is an L-space if and only if it does not contain a taut, oriented foliation. Ozsváth and Szabó showed that L-spaces do not contain such foliations. Kazez and Roberts proved that the theorem applies to a larger class of foliations, and perhaps all foliations. The classification of L-spaces is incomplete and we are led to the following: Question: How can one prove the (non-)existence of such a foliation? Existing methods are either ad hoc or difficult (e.g. show that the fundamental group does not act non-trivially on a simply-connected (but not necessarily Hausdorff!) one-manifold). Roberts suggested that Agol and Li’s algorithm for detecting “Reebless” foliations via laminar branched surfaces may be useful here, although the algorithm is currently impractical. Nathan Dunfield What do random three-manifolds look like? First of all, how does one pick a random three-manifold? There are countably many compact three-manifolds (because there are countably many finite simplicial complexes, or because there are countably many rational surgeries on the countably many links in , or because…) so there is no uniform probability distribution on the set of compact orientable three-manifolds. To dodge this issue, we first consider random objects of bounded complexity, then study what happens as we relax the bound. (A cute, more modest example: the probability that two random integers are relatively prime is $6/\pi^2$.1) Fix a genus and write for the mapping class group of the oriented surface of genus . Pick some generators of . Let be a random word of length in the chosen generators. We can associate a unique closed, orientable three-manifold to by identifying the boundaries of two genus handlebodies via . How is your favorite invariant distributed for random 3-manifolds of genus ? How does it behave as ? Experiment! (Ditto for knots, links, and their invariants.)
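The $6/\pi^2$ aside above is easy to check empirically; a quick Monte Carlo sketch (the sample size, integer range, and seed are arbitrary choices of mine):

```python
import math
import random

def coprime_fraction(n_samples, limit, seed=0):
    """Estimate P(gcd(a, b) = 1) for a, b drawn uniformly from 1..limit."""
    rng = random.Random(seed)
    hits = sum(
        math.gcd(rng.randint(1, limit), rng.randint(1, limit)) == 1
        for _ in range(n_samples)
    )
    return hits / n_samples

est = coprime_fraction(200_000, 10**9)
print(est, 6 / math.pi**2)  # estimate should be close to 0.6079...
```

The standard error at this sample size is about 0.001, so the agreement with $6/\pi^2 \approx 0.6079$ is visible to two decimal places.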
Metaquestion: Show that your favorite conjecture about some class of three-manifolds or links holds with positive probability. For example: Challenge: Conjecture: a random three-manifold is not an L-space, has left-orderable fundamental group, admits a taut foliation, and admits a tight contact structure. These methods can also be used to prove more traditional-sounding existence theorems. Perhaps you’d like to show that there is a three-manifold of every genus satisfying some condition. It suffices to show that a random three-manifold of fixed genus satisfies the condition with positive probability! For example, Theorem: (Lubotzky-Maher-Wu, 2014): For any integers and with , there exist infinitely many closed hyperbolic three-manifolds which are integral homology spheres with Casson invariant and Heegaard genus . Johanna Mangahas What do generic mapping classes look like? Here are two sensible ways to study random elements of bounded complexity in a finitely-generated group. Fix a generating set. Look at all words of length N or less in those generators and their inverses. (word ball) Fix a generating set and the associated Cayley graph. Look at all vertices within distance N of the identity. (Cayley ball) A property of elements in a group is generic if a random element has the property with probability tending to one as N grows, so the meaning of “generic” differs with the meaning of “random.” For example, consider the group $G = \langle a, b \rangle \oplus \mathbb{Z}$ with generating set $\{(a,0), (b,0), (id,1)\}$. The property “is zero in the second coordinate” is generic for the first notion but not the second. So we are stuck/blessed with two different notions of genericity. Recall that the mapping class group of a surface is the group of orientation-preserving homeomorphisms modulo isotopy. Thurston and Nielsen showed that a mapping class falls into one of three categories: Finite order: for some . Reducible: fixes some finite set of simple closed curves.
Pseudo-Anosov: there exists a transverse pair of measured foliations which stretches by and . The first two classes are easier to define, but the third is generic. Question: Are pseudo-Anosov mapping classes generic in the second sense? The braid group on n strands can be understood as the mapping class group of the disk with n punctures. But the braid group is not just a mapping class group; it admits an invariant left-order and a Garside structure. Tetsuya Ito gave a great minicourse on both of these structures! Question’: Can one leverage these additional structures to answer genericity questions about the braid group? Fast algorithms for the Nielsen-Thurston classification Question: Is there a polynomial-time algorithm for computing the Thurston-Nielsen classification of a mapping class? Matthieu Calvez has described an algorithm to classify braids in where is the length of the candidate braid. The algorithm is not yet implementable because it relies on knowledge of a function where is the index of the braid. These numbers come from a theorem of Masur and Minsky and are thus difficult to compute. These difficulties, as well as the power of the Garside structure and other algorithmic approaches, are described in Calvez’s linked paper. Challenge: Implement Calvez’s algorithm, perhaps partially, without knowing . Mark Bell is developing Flipper which implements a classification algorithm for mapping class groups of surfaces. Question: How fast are such algorithms in practice?2 John Etnyre Contactomorphism and isotopy of unit cotangent bundles For background on all matters symplectic and contact see Etnyre’s notes. Let be a manifold of any (!) dimension. The total space of the cotangent bundle is naturally symplectic: the cotangent bundle of supports the Liouville one-form characterized by for any one-form ; the pullback is along the canonical projection . The form is symplectic on . Inside the cotangent bundle is the unit cotangent bundle .
(This is not a vector bundle!) The form restricts to a contact structure on the . Fact: If the manifolds and are diffeomorphic, then their unit cotangent bundles and are contactomorphic. Hard question: In which dimensions greater than two is the converse true? This question is attributed to Arnol’d, perhaps incorrectly. The converse is known to be true in dimensions one and to and also in the case that is the three-sphere (exercise!). Tractable (?) question: Does contactomorphism type of unit cotangent bundles distinguish lens spaces from each other? Also intriguing is the relative version of this construction. Let be a Legendrian embedded (or immersed with transverse self-intersections) submanifold of . Define the unit cosphere bundle of to be . You can think of it as the boundary of the normal bundle to . It is a Legendrian submanifold of the unit cotangent bundle . Fact: If is Legendrian isotopic to then is Legendrian isotopic to . Relative question: Under what conditions is the converse true? Etnyre noted that contact homology may be a useful tool here. Lenny Ng’s “A Topological Introduction to Knot Contact Homology” has a nice introduction to this problem and the tools to potentially solve it. Sucharit Sarkar How many Szabó spectral sequences are there, really? Ozsváth and Szabó constructed a spectral sequence from the Khovanov homology of a link to the Heegaard Floer homology of the branched double cover of over that link. (There are more adjectives in the proper statement.) This relates two homology theories which are defined very differently. Challenge: Construct an algorithm to compute the Ozsváth-Szabó spectral sequence. Sarkar suggested that bordered Heegaard Floer homology may be useful here. Alternatively, one could study another spectral sequence, combinatorially defined by Szabó, which also seems to converge to the Heegaard Floer homology of the branched double cover. Question: Is Szabó’s spectral sequence isomorphic to the Ozsváth-Szabó spectral sequence? Again, the bordered theory may be useful here. Lipshitz, Ozsváth, and D. Thurston have constructed a bordered version of the Ozsváth-Szabó spectral sequence which agrees with the original under a pairing theorem. If the answer is “yes” then Szabó’s spectral sequence should have more structure. This was the part of Sarkar’s research talk which was unfortunately scheduled after the problem session. I hope to return to it in a future post (!). Question: Can Szabó’s spectral sequence be defined over a two-variable polynomial ring? Is there an action of the dihedral group on the spectral sequence? András Stipsicz Knot Floer Smörgåsbord Link Floer homology was spawned from Heegaard Floer homology but can also be defined combinatorially via grid diagrams. Lenny Ng explained this in the second part of his minicourse. However you define it, the theory assigns to a link a bigraded -module $HFK^-(L)$. From this group one can extract the numerical concordance invariant $\tau(L)$. Defining over or one can define invariants and . Question: Are these invariants distinct from ? Harder question: Does have -torsion for some ? (From a purely algebraic perspective, a “no” to the first question suggests a “no” to this one.) Stipsicz noted that there are complexes of -modules for which the answer is yes, but those complexes are not known to arise from any link. Speaking of which, “A shot in the dark:” Characterize those modules which appear as . In another direction, Stipsicz spoke earlier about a family of smooth concordance invariants . These were constructed from link Floer homology by Ozsváth, Stipsicz, and Szabó. Earlier, Hom constructed the smooth concordance invariant . Both invariants can be used to show that the smooth concordance group contains a summand, but their fibers are not the same: Hom produced a knot which has for all t and . Is there a knot with by ?
Conversely: Stipsicz closed the session by waxing philosophical: “When I was a child we would get these problems like ‘Jane has 6 pigs and Joe has 4 pigs’ and I used to think these were stupid. But now I don’t think so. Sit down, ask, do calculations, answer. That’s somehow the method I advise. Do some calculations, or whatever.” 1. An analogous result holds for arbitrary number fields — I make no claims about the cuteness of such generalizations. ↩ 2. An old example: the simplex algorithm from linear programming runs in exponential time in the worst-case, but in
I want to know how many roots the following equation has, and how to find them all. $f(z)=\frac{\operatorname{sech}(35.0937 z)}{\left(e^{70.1873 z}-1\right) z} \left(0.0235822\,i\, z^2 \exp \left(35.0937 z+0.0000253744\,i \sqrt{-1.91278\times 10^{12} z^2+4.54224\times 10^{10}\,i}\right)-0.0117911\,i\, z^2+8.73597\times 10^{-9}\, z \sqrt{-1.91278\times 10^{12} z^2+4.54224\times 10^{10}\,i}+e^{70.1873 z} \left(-0.0117911\,i\, z^2-8.73597\times 10^{-9}\, z \sqrt{-1.91278\times 10^{12} z^2+4.54224\times 10^{10}\,i}-0.00049\right)-0.00049\right)$ From the above expression, I know there is a pole at $z=0$. Plotting the quadrant of $f$ in the region $\{0<x<0.5$, $0<y<0.5\}$ (writing $z = x + i y$), it seems that there is a root near the point $0.15+0.08i$, and other roots nearby and along the y-axis. I tried Newton's method with $z_0=0.1+0.1i$, but got an irrelevant answer $z_1=0.3625 - 0.0604i$.

f[z_] := 1/((-1. + E^(70.1873 z)) z) (-0.00049 - (0. + 0.0117911 I) z^2 + (0. + 0.0235822 I) E^(35.0937 z + (0. + 0.0000253744 I) Sqrt[(0. + 4.54224*10^10 I) - 1.91278*10^12 z^2]) z^2 + 8.73597*10^-9 z Sqrt[(0. + 4.54224*10^10 I) - 1.91278*10^12 z^2] + E^(70.1873 z) (-0.00049 - (0. + 0.0117911 I) z^2 - 8.73597*10^-9 z Sqrt[(0. + 4.54224*10^10 I) - 1.91278*10^12 z^2])) Sech[35.0937 z];
quad[z_] := Module[{q},
  u = N[ComplexExpand[Re[z]]]; v = N[ComplexExpand[Im[z]]];
  If[NumberQ[z],
    If[u == 0 || v == 0, q = 0,
      If[u*v > 0, If[u > 0, q = 1, q = 3], If[u > 0, q = 4, q = 2]]],
    q = ComplexInfinity];
  q];
ContourPlot[quad[f[x + I y]], {x, 0, 0.5}, {y, 0, 0.5}, FrameLabel -> {"Re", "Im"}]
z1 = NestWhile[(# - f[#]/f'[#]) &, 0.1 + I 0.1, Abs[f[#]] > 10^-7 &]
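One standard way to count roots in such a region is the argument principle: for a function with no zeros or poles on the contour, the winding number of $f$ around the boundary equals the number of zeros minus the number of poles inside. A Python sketch on a simple stand-in polynomial (not the original $f$, whose pole at $z=0$ sits on a corner of the plotted region, so the contour would have to be nudged away from it first):

```python
import numpy as np

def count_zeros(f, corners, n=4000):
    """Winding number of f along a rectangle boundary = #zeros - #poles inside."""
    a, b, c, d = corners  # x-range [a, b], y-range [c, d]
    t = np.linspace(0, 1, n)
    path = np.concatenate([
        a + (b - a) * t + 1j * c,    # bottom edge, left to right
        b + 1j * (c + (d - c) * t),  # right edge, bottom to top
        b - (b - a) * t + 1j * d,    # top edge, right to left
        a + 1j * (d - (d - c) * t),  # left edge, top to bottom
    ])
    phase = np.unwrap(np.angle(f(path)))
    return int(round((phase[-1] - phase[0]) / (2 * np.pi)))

# Stand-in with known zeros at 0.15+0.08i and 0.3+0.2i (inside) and -1 (outside).
g = lambda z: (z - (0.15 + 0.08j)) * (z - (0.3 + 0.2j)) * (z + 1)
print(count_zeros(g, (0.0, 0.5, 0.0, 0.5)))  # -> 2
```

For the original function, the same routine could be pointed at a numerical evaluation of `f`, provided the contour avoids $z=0$ and any branch cuts of the square root, and the sampling is fine enough that the phase never jumps by more than $\pi$ between consecutive points.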
I'm writing a toy radiative-convective atmospheric model and need to relate the heat flux convergences (either the surface sensible heat flux or the radiative flux) to changes in atmospheric layer temperatures. The first law of thermodynamics states: The change in internal energy of a closed system is equal to the amount of heat supplied to the system plus the work done on the system by the surroundings. My question is: Should gravitational potential energy be directly incorporated into this in any way, and where? When I calculate the internal energy of the layer, should that be just the thermodynamic internal energy $\rho c_v T$ or should it also include the potential energy $\rho g z$? The pressure work done on the system by its surroundings at the bottom of the layer is $p_{bot} \Delta z_{bot}$, which is clearly related to the raising and lowering of the layer. Should this work then be excluded from the RHS if the potential energy is excluded from the LHS? I essentially have the following possibilities for calculating the change in temperatures: $\int_{z_{bot}}^{z_{top}} dz \, \rho c_v \Delta T = \Delta E_{rad}$ $\int_{z_{bot}}^{z_{top}} dz \, (\rho c_v \Delta T + \Delta(\rho g z)) = \Delta E_{rad}$ $\int_{z_{bot}}^{z_{top}} dz \, \rho c_v \Delta T = \Delta E_{rad} + p_{bot} \Delta z_{bot} - p_{top} \Delta z_{top}$ $\int_{z_{bot}}^{z_{top}} dz \, (\rho c_v \Delta T + \Delta(\rho g z)) = \Delta E_{rad} + p_{bot} \Delta z_{bot} - p_{top} \Delta z_{top}$ It seems like the first and last of these are equivalent (at least assuming hydrostatic balance), but the first would use the isochoric heat capacity and the last the isobaric heat capacity. I believe the last one is the correct choice but I feel very unsure about it. The same dilemma persists when a surface heat flux is applied, or when convection changes the height of the upper atmosphere without changing any of its temperatures.
Relatedly, the full equation from a fluid-dynamics point of view would include kinetic energy, and for an RCE model that would be turbulent kinetic energy. Should this be tracked? If I exclude it, is that the same as assuming that all kinetic energy gets turned into heat by viscosity between each timestep? (These may be very basic questions to some of you, but for the life of me I can't convince myself of the right approach to take. I don't have any of my atmospheric science books with me, so if you throw in a reference, please make it to an online resource.)
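One sanity check that separates the bookkeeping options above: for an ideal-gas layer heated at constant pressure, the heat input splits exactly into an internal-energy change plus boundary work, which is the $c_p = c_v + R$ identity. A hedged numerical sketch (the dry-air constants and the layer numbers are assumptions of mine, not from the question):

```python
# Approximate dry-air constants (J kg^-1 K^-1); values are my assumptions.
R_D = 287.0         # specific gas constant for dry air
CV_D = 717.0        # isochoric specific heat
CP_D = CV_D + R_D   # isobaric specific heat, ~1004

mass = 2.0e3        # kg of air in the layer (arbitrary)
d_temp = 1.5        # K of warming (arbitrary)

heat_in = mass * CP_D * d_temp       # heat supplied at constant pressure
d_internal = mass * CV_D * d_temp    # change in thermodynamic internal energy
boundary_work = mass * R_D * d_temp  # p dV work the layer does on its surroundings

# First law at constant pressure: Q = dU + W_by_gas, term by term.
assert abs(heat_in - (d_internal + boundary_work)) < 1e-6
print(heat_in, d_internal, boundary_work)
```

In other words, a budget written with $c_v$ must carry the boundary-work terms explicitly on the right-hand side, while a budget written with $c_p$ absorbs them, which is the trade-off between the first and last options listed in the question.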
In calculus classes, the main subject of investigation was functions and their rates of change. In linear algebra, functions will again be the focus of your attention, but now functions of a very special type. In calculus, you probably encountered functions \(f(x)\), and were perhaps encouraged to think of this as a machine "\(f\)'', whose input is some real number \(x\). For each input \(x\) this machine outputs a single real number \(f(x)\). In linear algebra, the functions we study will take vectors, of some type, as both inputs and outputs. We just saw that vectors are objects that can be added or scalar multiplied---a very general notion---so the functions we are going to study will look novel at first. So things don't get too abstract, here are five questions that can be rephrased in terms of functions of vectors: Example 2: Functions of Vectors in Disguise (a) What number \(x\) solves \(10x=3\)? (b) What vector \(u\) from 3-space satisfies the cross product equation \(\begin{pmatrix}1\\ 1\\ 0\end{pmatrix} \times u = \begin{pmatrix}0\\ 1\\ 1\end{pmatrix}\)? (c) What polynomial \(p\) satisfies \(\int_{-1}^{1} p(y) dy = 0\) and \(\int_{-1}^{1} y p(y) dy=1\)? (d) What power series \(f(x)\) satisfies \(x\frac{d}{dx} f(x) -2f(x)=0\)? (e) What number \(x\) solves \(4 x^2=1\)? For part (a), the machine needed takes the input \(x\) to the output \(10x\). This is just like a function \(f(x)\) from calculus that takes in a number \(x\) and spits out the number \(f(x)=10x\). For part (b), we need something more sophisticated: a machine taking the input \(\begin{pmatrix}x\\ y\\ z\end{pmatrix}\) to the output \(\begin{pmatrix}z\\ -z\\ y-x\end{pmatrix}\). The inputs and outputs are both 3-vectors. You are probably getting the gist by now, but here is the machine needed for part (c): it takes the polynomial \(p\) to the 2-vector \(\begin{pmatrix}\int_{-1}^{1} p(y) dy\\ \int_{-1}^{1} y p(y) dy\end{pmatrix}\). Here we input a polynomial and get a 2-vector as output!
By now you may be feeling overwhelmed and thinking that absolutely any function with any kind of vector as input and any other kind of vector as output can pop up next to strain your brain! Rest assured that linear algebra involves the study of only a very simple (yet very important) class of functions of vectors; it's time to describe the essential characteristics of linear functions. Let's use the letter \(L\) for these functions and think again about vector addition and scalar multiplication. Let's suppose \(v\) and \(u\) are vectors and \(c\) is a number. Then we already know that \(u+v\) and \(cu\) are also vectors. Since \(L\) is a function of vectors, if we input \(u\) into \(L\), the output \(L(u)\) will also be some sort of vector. The same goes for \(L(v)\), \(L(u+v)\) and \(L(cu)\). Moreover, we can now also think about adding \(L(u)\) and \(L(v)\) to get yet another vector \(L(u)+L(v)\), or of multiplying \(L(u)\) by \(c\) to obtain the vector \(cL(u)\). Perhaps a picture of all this helps: The ``blob'' on the left represents all the vectors that you are allowed to input into the function \(L\), and the blob on the right denotes the corresponding outputs. Hopefully you noticed that there are two vectors apparently \(\textit{not shown}\) in the blob of outputs: $$ L(u)+L(v)\quad \&\quad cL(u)\, . $$ You might already be able to guess the values we would like these to take. If not, here's the answer; it's the key equation of the whole class, from which everything else follows: 1. Additivity: $$L(u+v)=L(u)+L(v)\, .$$ 2. Homogeneity: $$L(cu)=cL(u)\, .$$ Most functions of vectors do not obey this requirement; linear algebra is the study of those that do.
Notice that the additivity requirement says that the function \(L\) respects vector addition: \(\textit{it does not matter if you first add}\) \(u\) \(\textit{and}\) \(v\) \(\textit{and then input their sum into}\) \(L\)\(\textit{, or first input}\) \(u\) \(\textit{and}\) \(v\) \(\textit{into}\) \(L\) \(\textit{separately and then add the outputs}\). Function = Transformation = Operator The questions in cases (a) - (d) of our example can all be restated as a single equation: \[Lv = w\] where \(v\) is an unknown and \(w\) a known vector, and \(L\) is a linear transformation. To check that this is true, one needs to know the rules for adding vectors (both inputs and outputs) and then check the linearity of \(L\). Solving the equation \(Lv=w\) often amounts to solving systems of linear equations, the skill you will learn in Chapter 2. A great example is the derivative operator: Example 3: The derivative operator is linear For any two functions \(f(x)\), \(g(x)\) and any number \(c\), in calculus you probably learnt that the derivative operator satisfies \(\frac{d}{dx} (cf)=c\frac{d}{dx} f\), \(\frac{d}{dx}(f+g)=\frac{d}{dx}f+\frac{d}{dx}g\). If we view functions as vectors, with addition given by addition of functions and scalar multiplication given by multiplication of functions by a constant, then these familiar properties of derivatives are just the linearity property of linear maps. Before introducing matrices, notice that for linear maps \(L\) we will often write simply \(L u\) instead of \(L(u)\). This is because the linearity property of a linear transformation \(L\) means that \(L(u)\) can be thought of as multiplying the vector \(u\) by the linear operator \(L\). For example, the linearity of \(L\) implies that if \(u,v\) are vectors and \(c,d\) are numbers, then \[L(c u + d v) = c L u + d L v,\] which feels a lot like the regular rules of algebra for numbers. Notice though, that "\(u L\)'' makes no sense here.
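The cross-product machine from Example 2, \((x,y,z) \mapsto (z,\,-z,\,y-x)\), makes the two axioms concrete; here is a small numerical check (the explicit matrix form is my own addition):

```python
import numpy as np

def L(u):
    """The machine from Example 2: u -> (1,1,0) x u = (z, -z, y - x)."""
    return np.cross(np.array([1.0, 1.0, 0.0]), u)

rng = np.random.default_rng(0)
u, v = rng.standard_normal(3), rng.standard_normal(3)
c = 2.5

# Additivity and homogeneity hold to floating-point precision.
assert np.allclose(L(u + v), L(u) + L(v))
assert np.allclose(L(c * u), c * L(u))

# Equivalently, L is multiplication by a fixed matrix, matching "L u" notation.
A = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.0, -1.0],
              [-1.0, 1.0, 0.0]])
assert np.allclose(L(u), A @ u)
print(L(np.array([1.0, 2.0, 3.0])))
```

The matrix reformulation is exactly why writing \(Lu\) instead of \(L(u)\) is harmless: applying the linear map really is a multiplication.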
Remark A sum of multiples of vectors \(c u + dv\) is called a \(\textit{linear combination}\) of \(u\) and \(v\).
Can I be a pedant and say that if the question states that $\langle \alpha \vert A \vert \alpha \rangle = 0$ for every vector $\lvert \alpha \rangle$, that means that $A$ is everywhere defined, so there are no domain issues? Gravitational optics is very different from quantum optics, if by the latter you mean the quantum effects of interaction between light and matter. There are three crucial differences I can think of: We can always detect uniform motion with respect to a medium by a positive result to a Michelson... Hmm, it seems we cannot just superimpose gravitational waves to create standing waves. The above search is inspired by last night's dream, which took place in an alternate version of my 3rd year undergrad GR course. The lecturer talked about a weird equation in general relativity that has a huge summation symbol, and then talked about gravitational waves emitted from a body. After that lecture, I then asked the lecturer whether gravitational standing waves are possible, as I imagined the hypothetical scenario of placing a node at the end of the vertical white line. [The Cube] Regarding The Cube, I am thinking about an energy level diagram like this, where the infinitely degenerate level is the lowest energy level when the environment is also taken into account. The idea is that if the possible relaxations between energy levels are restricted so that, to relax from an excited state, the bottleneck must be passed, then we have a very high entropy, high energy system confined in a compact volume. Therefore, as energy is pumped into the system, the lack of direct relaxation pathways to the ground state plus the huge degeneracy at higher energy levels should result in a lot of possible configurations giving the same high energy, thus effectively creating an entropy trap to minimise heat loss to the surroundings. @Kaumudi.H there is also an addon that allows Office 2003 to read (but not save) files from later versions of Office, and you probably want this too.
The installer for this should also be in \Stuff (but probably isn't, if I forgot to include the SP3 installer). Hi @EmilioPisanty, it's great that you want to help me clear up confusions. I think we have a misunderstanding here. When you say "if you really want to "understand"", I thought you were referring to my questions directed to the close voter, not the question in meta. When you mention my original post, you think that it's a hopeless mess of confusion? Why? Apart from being off-topic, it seems clear to understand, doesn't it? Physics.stackexchange currently uses 2.7.1 with the config TeX-AMS_HTML-full, which is affected by a visual glitch on both the desktop and mobile versions of Safari under the latest OS: \vec{x} results in the arrow being displayed too far to the right (issue #1737). This has been fixed in 2.7.2. Thanks. I have never used the app for this site, but if you ask a question on a mobile phone, there is no homework guidance box, as there is on the full site, due to screen size limitations. I think it's a safe assumption that many students are using their phone to place their homework questions, in wh... @0ßelö7 I don't really care for the functional analytic technicalities in this case - of course this statement needs some additional assumption to hold rigorously in the infinite-dimensional case, but I'm 99% sure that that's not what the OP wants to know (and, judging from the comments and other failed attempts, the "simple" version of the statement seems to confuse enough people already :P) Why were the SI unit prefixes, i.e. \begin{align}\mathrm{giga} && 10^9 \\\mathrm{mega} && 10^6 \\\mathrm{kilo} && 10^3 \\\mathrm{milli} && 10^{-3} \\\mathrm{micro} && 10^{-6} \\\mathrm{nano} && 10^{-9}\end{align} chosen to be a multiple power of 3? Edit: Although this questio...
the major challenge is how to restrict the possible relaxation pathways so that, in order to relax back to the ground state, at least one lower rotational level has to be passed, thus creating the bottleneck shown above. If two vectors $\vec{A} =A_x\hat{i} + A_y \hat{j} + A_z \hat{k}$ and $\vec{B} =B_x\hat{i} + B_y \hat{j} + B_z \hat{k}$ have an angle $\theta$ between them, then the dot product (scalar product) of $\vec{A}$ and $\vec{B}$ is $$\vec{A}\cdot\vec{B} = |\vec{A}||\vec{B}|\cos \theta$$ $$\vec{A}\cdot\... @ACuriousMind I want to give a talk on my GR work first. That can be hand-wavey. But I also want to present my program for Sobolev spaces and elliptic regularity, which is reasonably original. But the devil is in the details there. @CooperCape I'm afraid not, you're still just asking us to check whether or not what you wrote there is correct - such questions are not a good fit for the site, since the potentially correct answer "Yes, that's right" is too short to even submit as an answer
The recurrence relation for the Bessel function of general order \(\pm\nu\), \[a_{m} = -\frac{1}{m(m\pm 2\nu)} a_{m-2},\] can now be solved by using the gamma function; it has the solutions (\(x > 0\)) \[\begin{aligned} J_{\nu}(x) &= \sum_{k=0}^{\infty}\frac{(-1)^{k}}{k!\Gamma(\nu+k+1)} \left(\frac{x}{2}\right)^{\nu+2k}, \\ J_{-\nu}(x) &= \sum_{k=0}^{\infty}\frac{(-1)^{k}}{k!\Gamma(-\nu+k+1)} \left(\frac{x}{2}\right)^{-\nu+2k}.\end{aligned}\] The general solution to Bessel’s equation of order \(\nu\) is thus \[y(x) = A J_{\nu}(x)+BJ_{-\nu}(x),\] for any non-integer value of \(\nu\). This also holds for half-integer values of \(\nu\) (no logarithmic terms appear in that case).
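As a quick numerical illustration of these series (a sketch added here, not part of the original text; the 30-term truncation and the use of Python's `math.gamma` are my own choices), one can sum the series directly:

```python
import math

def bessel_j(nu, x, terms=30):
    # Truncated series J_nu(x) = sum_k (-1)^k / (k! * Gamma(nu + k + 1)) * (x/2)^(nu + 2k).
    # Assumes nu + k + 1 never lands on a pole of the gamma function
    # (true for nu >= 0, and for non-integer nu when computing J_{-nu}).
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (math.factorial(k) * math.gamma(nu + k + 1)) * (x / 2) ** (nu + 2 * k)
    return total
```

For half-integer order the sum reproduces the elementary closed form, e.g. \(J_{1/2}(x) = \sqrt{2/(\pi x)}\,\sin x\), consistent with the remark that no logarithmic terms appear.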
Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2016-09) The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ...
3.2.1 Matrices and vectors Before we can start talking about linear systems of ODEs, we will need to talk about matrices, so let us review these briefly. A matrix is an \(m \times n \) array of numbers (\(m\) rows and \(n\) columns). For example, we denote a \( 3 \times 5\) matrix as follows \[ A = \begin {bmatrix} a_{11} & a_{12} & a_{13} & a_{14} & a_{15} \\ a_{21} & a_{22} & a_{23} & a_{24} & a_{25} \\ a_{31} & a_{32} & a_{33} & a_{34} & a_{35} \end {bmatrix} \] By a vector we will usually mean a column vector, that is an \( m \times 1 \) matrix. If we mean a row vector we will explicitly say so (a row vector is a \( 1 \times n\) matrix). We will usually denote matrices by upper case letters and vectors by lower case letters with an arrow such as \( \vec {x} \) or \( \vec {b} \). By \( \vec {0} \) we will mean the vector of all zeros. It is easy to define some operations on matrices. Note that we will want \( 1 \times 1 \) matrices to really act like numbers, so our operations will have to be compatible with this viewpoint. First, we can multiply by a scalar (a number). This means just multiplying each entry by the same number. For example, \[ 2 {\begin {bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end {bmatrix}} = \begin {bmatrix} 2 & 4 & 6 \\ 8 & 10 & 12 \end {bmatrix} \] Matrix addition is also easy. We add matrices element by element. For example, \[ \begin {bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end {bmatrix} + \begin {bmatrix} 1 & 1 & -1 \\ 0 & 2 & 4 \end {bmatrix} = \begin {bmatrix} 2 & 3 & 2 \\ 4 & 7 & 10 \end {bmatrix} \] If the sizes do not match, then addition is not defined. If we denote by 0 the matrix with all zero entries, by \( c, d \) scalars, and by \( A, B, C\) matrices, we have the following familiar rules. \[ A + 0 = A = 0 + A \] \[ A + B = B + A \] \[(A + B) + C = A + ( B + C) \] \[ c( A + B) = cA + cB \] \[ ( c + d) A = cA + dA \] 3.2.2 Matrix Multiplication Let us now define matrix multiplication.
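Before doing so, the two operations just defined can be sketched in a few lines of code (an illustrative snippet added here, using plain nested Python lists rather than any matrix library):

```python
def scalar_mult(c, A):
    # Multiply each entry of the matrix A by the scalar c.
    return [[c * entry for entry in row] for row in A]

def mat_add(A, B):
    # Add matrices element by element; A and B must be the same size.
    return [[a + b for a, b in zip(row_a, row_b)] for row_a, row_b in zip(A, B)]
```

These reproduce the two worked examples above.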
First we define the so-called dot product (or inner product) of two vectors. Usually this will be a row vector multiplied with a column vector of the same size. For the dot product we multiply each pair of entries from the first and the second vector and we sum these products. The result is a single number. For example, \[ \begin {bmatrix} a_1 & a_2 & a_3 \end {bmatrix} \cdot \begin {bmatrix} b_1 \\ b_2 \\ b_3 \end {bmatrix} = \begin {bmatrix} a_1b_1 + a_2b_2 + a_3b_3 \end {bmatrix} \] And similarly for larger (or smaller) vectors. Armed with the dot product we can define the product of matrices. First let us denote by \( \text {row}_i (A) \) the \( i^{th}\) row of \(A\) and by \( \text {column}_j (A) \) the \(j^{th} \) column of \(A\). For an \(m \times n \) matrix \(A\) and an \( n \times p \) matrix \(B\) we can define the product \(AB\). We let \(AB\) be an \(m \times p \) matrix whose \( ij^{th} \) entry is \[ \text {row}_i (A) \cdot \text {column}_j (B) \] Do note how the sizes match up. Example: \[ \begin {bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end {bmatrix} \begin {bmatrix} 1 & 0 & -1 \\ 1 & 1 & 1 \\ 1 & 0 & 0 \end {bmatrix} = \begin {bmatrix} 1 \cdot 1 + 2 \cdot 1 + 3 \cdot 1 & 1 \cdot 0 + 2 \cdot 1 + 3 \cdot 0 & 1 \cdot (-1) + 2 \cdot 1 + 3 \cdot 0 \\ 4 \cdot 1 + 5 \cdot 1 + 6 \cdot 1 & 4 \cdot 0 + 5 \cdot 1 + 6 \cdot 0 & 4 \cdot (-1) + 5 \cdot 1 + 6 \cdot 0 \end {bmatrix} = \begin {bmatrix} 6 & 2 & 1 \\ 15 & 5 & 1 \end {bmatrix} \] For multiplication we want an analog of a 1. This analog is the so-called identity matrix. The identity matrix is a square matrix with 1s on the main diagonal and zeros everywhere else. It is usually denoted by \(I\). For each size we have a different identity matrix and so sometimes we may denote the size as a subscript. For example, \(I_3\) would be the \( 3 \times 3\) identity matrix \[ I = I_3 = \begin {bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end {bmatrix} \] We have the following rules for matrix multiplication. Suppose that \( A, B, C\) are matrices of the correct sizes so that the following make sense.
Let \( \alpha\) denote a scalar (number). \[ A (BC) = (AB) C\] \[ A (B + C) = AB + AC \] \[ (B + C) A = BA + CA \] \[ \alpha (AB) = ( \alpha A )B = A ( \alpha B) \] \[ IA = A = AI \] \( AB \ne BA \) in general (it may be true by fluke sometimes). That is, matrices do not commute. For example take \( A = \begin {bmatrix} 1 & 1 \\ 1 & 1 \end {bmatrix} \) and \( B = \begin {bmatrix} 1 & 0 \\ 0 & 2 \end {bmatrix} \). \( AB = AC \) does not necessarily imply \(B = C\), even if \(A\) is not 0. \(AB = 0\) does not necessarily mean that \(A = 0\) or \(B = 0\). For example take \(A = B = \begin {bmatrix} 0 & 1 \\ 0 & 0 \end {bmatrix} \). For the last two items to hold we would need to “divide” by a matrix. This is where the matrix inverse comes in. Suppose that \(A\) and \(B\) are \( n \times n \) matrices such that \[ AB = I = BA\] Then we call \(B\) the inverse of \(A\) and we denote \(B\) by \(A^{-1}\). If the inverse of \(A\) exists, then we call \(A\) invertible. If \(A\) is not invertible we sometimes say \(A\) is singular. If \(A\) is invertible, then \(AB = AC\) does imply that \(B = C\) (in particular the inverse of \(A\) is unique). We just multiply both sides by \(A^{-1} \) to get \(A^{-1} AB = A^{-1} AC\) or \(IB = IC\) or \(B = C\). It is also not hard to see that \( {(A^{-1})}^{-1} = A\). 3.2.3 The determinant We can now talk about determinants of square matrices. We define the determinant of a \( 1 \times 1\) matrix as the value of its only entry. For a \( 2 \times 2\) matrix we define \[ \text {det} \left ( \begin {bmatrix} a & b \\ c & d \end {bmatrix} \right ) \overset {\text {def}}{=} ad - bc \] Before trying to compute the determinant for larger matrices, let us first note the meaning of the determinant. Consider an \(n \times n\) matrix as a mapping of the \( n\) dimensional euclidean space \( \mathbb {R}^n\) to \( \mathbb {R}^n\). 
In particular, a \( 2 \times 2\) matrix \(A\) is a mapping of the plane to itself, where \( \vec {x} \) gets sent to \( A \vec {x} \). Then the determinant of \(A\) is the factor by which the area of objects gets changed. If we take the unit square (square of side 1) in the plane, then \(A\) takes the square to a parallelogram of area \( \mid \text {det} (A) \mid \). The sign of \( \text {det} (A) \) denotes changing of orientation (negative if the axes got flipped). For example, let \[ A = \begin {bmatrix} 1 & 1 \\ -1 & 1 \end {bmatrix} \] Then \( \text {det} (A) = 1 + 1 = 2 \). Let us see where the square with vertices \( (0, 0), (1, 0), (0, 1)\) and \( (1, 1) \) gets sent. Clearly \( (0, 0 ) \) gets sent to \( (0, 0)\). \[ \begin {bmatrix} 1 & 1 \\ -1 & 1 \end {bmatrix} \begin {bmatrix} 1 \\ 0 \end {bmatrix} = \begin {bmatrix} 1 \\ -1 \end {bmatrix}, \begin {bmatrix} 1 & 1 \\ -1 & 1 \end {bmatrix} \begin {bmatrix} 0 \\ 1 \end {bmatrix} = \begin {bmatrix} 1 \\ 1 \end {bmatrix}, \begin {bmatrix} 1 & 1 \\ -1 & 1 \end {bmatrix} \begin {bmatrix} 1 \\ 1 \end {bmatrix} = \begin {bmatrix} 2 \\ 0 \end {bmatrix}\] So the image of the square is another square. The image square has a side of length \( \sqrt {2} \) and is therefore of area 2. If you think back to high school geometry, you may have seen a formula for computing the area of a parallelogram with vertices \( (0, 0), (a, c), (b, d)\) and \( (a + b, c + d ) \). And it is precisely \[ \mid \text {det} \left ( \begin {bmatrix} a & b \\ c & d \end {bmatrix} \right ) \mid \] The vertical lines above mean absolute value. The matrix \( \begin {bmatrix} a & b \\ c & d \end {bmatrix} \) carries the unit square to the given parallelogram. Now we can define the determinant for larger matrices. We define \( A_{ij} \) as the matrix \(A\) with the \( i^{th}\) row and the \(j^{th} \) column deleted. To compute the determinant of a matrix, pick one row, say the \(i^{th} \) row and compute. 
\[ \text {det} (A) = \sum _ {j=1}^n (-1)^{i+j} a_{ij} \text {det} (A_{ij}) \] For the first row we get \[ \text {det} (A) = a_{11} \text {det} (A_{11}) - a_{12} \text {det} (A_{12}) + a_{13} \text {det} (A_{13}) - \dots \begin {cases} +a_{1n} \text {det} (A_{1n}) & \text {if } n \text { is odd} \\ -a_{1n} \text {det} (A_{1n}) & \text {if } n \text { is even} \end {cases} \] We alternately add and subtract the determinants of the submatrices \(A_{ij}\) for a fixed \(i\) and all \(j\). For a \(3 \times 3\) matrix, picking the first row, we would get \( \text {det} (A) = a_{11} \text {det} (A_{11}) - a_{12} \text {det} (A_{12}) + a_{13} \text {det} (A_{13})\). For example, \[ \text {det} \left ( \begin {bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end {bmatrix} \right ) = 1 \cdot \text {det} \left ( \begin {bmatrix} 5 & 6 \\ 8 & 9 \end {bmatrix} \right ) - 2 \cdot \text {det} \left ( \begin {bmatrix} 4 & 6 \\ 7 & 9 \end {bmatrix} \right ) + 3 \cdot \text {det} \left ( \begin {bmatrix} 4 & 5 \\ 7 & 8 \end {bmatrix} \right ) \] \[ = 1(5 \cdot 9 - 6 \cdot 8) - 2 ( 4 \cdot 9 - 6 \cdot 7) + 3 ( 4 \cdot 8 - 5 \cdot 7 ) = 0 \] The numbers \( (-1)^{i+j} \text {det} (A_{ij}) \) are called cofactors of the matrix and this way of computing the determinant is called the cofactor expansion. It is also possible to compute the determinant by expanding along columns (picking a column instead of a row above). Note that a common notation for the determinant is a pair of vertical lines: \[ \begin {vmatrix} a & b \\ c & d \end {vmatrix} = \text {det} \left ( \begin {bmatrix} a & b \\ c & d \end {bmatrix} \right ) \] I personally find this notation confusing as vertical lines usually mean a positive quantity, while determinants can be negative. I will not use this notation in this book. One of the most important properties of determinants (in the context of this course) is the following theorem. Theorem 3.2.1. An \( n \times n\) matrix \(A\) is invertible if and only if \( \text {det} (A) \ne 0 \).
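The cofactor expansion translates directly into a short recursive routine (an illustrative sketch added here; it expands along the first row and assumes a square matrix given as nested lists):

```python
def det(A):
    # Determinant by cofactor expansion along the first row.
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor: the matrix A with row 1 and column j+1 deleted.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total
```

Note this recursion touches \(n!\) products, so it is only practical for small matrices; for large matrices one would use row reduction instead.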
In fact, there is a formula for the inverse of a \(2 \times 2 \) matrix \[ {\begin {bmatrix} a & b \\ c & d \end {bmatrix}}^ {-1} = \frac {1}{ad - bc} \begin {bmatrix} d & -b \\ -c & a \end {bmatrix} \] Notice the determinant of the matrix in the denominator of the fraction. The formula only works if the determinant is nonzero, otherwise we are dividing by zero. 3.2.4 Solving linear systems One application of matrices we will need is to solve systems of linear equations. This is best shown by example. Suppose that we have the following system of linear equations \[ 2x_1 + 2x_2 + 2x_3 = 2\] \[ x_1 + x_2 + 3x_3 = 5\] \[ x_1 + 4x_2 + x_3 = 10\] It is easier to write the system as a matrix equation. Note that the system can be written as \[ \begin {bmatrix} 2 & 2 & 2 \\ 1 & 1 & 3 \\ 1 & 4 & 1 \end {bmatrix} \begin {bmatrix} x_1 \\ x_2 \\ x_3 \end {bmatrix} = \begin {bmatrix} 2 \\ 5 \\ 10 \end {bmatrix} \] To solve the system we put the coefficient matrix (the matrix on the left hand side of the equation) together with the vector on the right-hand side to get the so-called augmented matrix \[ \left [ \begin {array}{ccc|c} 2 & 2 & 2 & 2 \\ 1 & 1 & 3 & 5 \\ 1 & 4 & 1&10 \end {array} \right ] \] We apply the following three elementary operations. Swap two rows. Add a multiple of one row to another row. Multiply a row by a nonzero number. We will keep doing these operations until we get into a state where it is easy to read off the answer, or until we get into a contradiction indicating no solution, for example if we come up with an equation such as \( 0 = 1\). Let us work through the example. First multiply the first row by \( \frac {1}{2} \) to obtain \[ \left [ \begin {array}{ccc|c} 1 & 1 & 1 & 1 \\ 1 & 1 & 3 & 5 \\ 1 & 4 & 1&10 \end {array} \right ] \] Now subtract the first row from the second and third row.
\[ \left [ \begin {array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 0 & 2 & 4 \\ 0 & 3 & 0 & 9 \end {array} \right ] \] Multiply the last row by \(\frac {1}{3} \) and the second row by \(\frac {1}{2} \). \[ \left [ \begin {array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 2 \\ 0 & 1 & 0 & 3 \end {array} \right ] \] Swap rows 2 and 3. \[ \left [ \begin {array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & 2 \end {array} \right ] \] Subtract the last row from the first, then subtract the second row from the first. \[ \left [ \begin {array}{ccc|c} 1 & 0 & 0 & -4 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & 2 \end {array} \right ] \] If we think about what equations this augmented matrix represents, we see that \( x_1 = -4, x_2 = 3 \) and \( x_3 = 2 \). We try this solution in the original system and, voilà, it works! Exercise \(\PageIndex{1}\): Check that the solution above really solves the given equations. We can write this system in matrix notation as \[ A \vec {x} = \vec {b}, \] where \(A\) is the matrix \( \begin {bmatrix} 2 & 2 & 2 \\ 1 & 1 & 3 \\ 1 & 4 & 1 \end {bmatrix} \) and \( \vec {b} \) is the vector \( \begin {bmatrix} 2 \\ 5 \\ 10 \end {bmatrix} \). The solution can also be computed via the inverse, \[ \vec {x} = A^{-1} A \vec {x} = A^{-1} \vec {b} \] One last note to make about linear systems of equations is that it is possible that the solution is not unique (or that no solution exists). It is easy to tell if a solution does not exist. If during the row reduction you come up with a row where all the entries except the last one are zero (the last entry in a row corresponds to the right hand side of the equation) the system is inconsistent and has no solution. For example if for a system of 3 equations and 3 unknowns you find a row such as \( \left [ \begin {array}{ccc|c} 0 & 0 & 0 & 1 \end {array} \right ] \) in the augmented matrix, you know the system is inconsistent. You generally try to use row operations until the following conditions are satisfied.
The first nonzero entry in each row is called the leading entry. There is only one leading entry in each column. All the entries above and below a leading entry are zero. All leading entries are 1. Such a matrix is said to be in reduced row echelon form. The variables corresponding to columns with no leading entries are said to be free variables. Free variables mean that we can pick those variables to be anything we want and then solve for the rest of the unknowns. Example \(\PageIndex{1}\): The following augmented matrix is in reduced row echelon form. \[ \left [ \begin {array}{ccc|c} 1 & 2 & 0 & 3 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \end {array} \right ] \] Suppose the variables are \( x_1, x_2\) and \(x_3\). Then \(x_2\) is the free variable, \( x_1 = 3 - 2x_2\), and \( x_3 = 1\). On the other hand if during the row reduction process you come up with the matrix \[ \left [ \begin {array}{ccc|c} 1 & 2 & 13 & 3 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 3 \end {array} \right ] \] there is no need to go further. The last row corresponds to the equation \( 0x_1 + 0x_2 + 0x_3 = 3 \), which is preposterous. Hence, no solution exists. 3.2.5 Computing the inverse If the coefficient matrix is square and there exists a unique solution \( \vec {x} \) to \( A \vec {x} = \vec {b} \) for any \( \vec {b} \), then \( A\) is invertible. In fact by multiplying both sides by \( A^{-1} \) you can see that \( \vec {x} = A^{-1} \vec {b} \). So it is useful to compute the inverse if you want to solve the equation for many different right hand sides \( \vec {b}\). The \( 2 \times 2 \) inverse can be given by a formula, but it is also not hard to compute inverses of larger matrices. While we will not have too much occasion to compute inverses for larger matrices than \( 2 \times 2\) by hand, let us touch on how to do it. Finding the inverse of \(A\) is actually just solving a bunch of linear equations. 
If we can solve \( A \vec {x}_k = \vec {e}_k\) where \( \vec {e}_k \) is the vector with all zeros except a 1 at the \( k^{th} \) position, then the inverse is the matrix with the columns \( \vec {x}_k \) for \( k = 1, \dots , n \) (exercise: why?). Therefore, to find the inverse we can write a larger \(n \times 2n \) augmented matrix \( [ A \mid I ] \), where \(I\) is the identity. We then perform row reduction. The reduced row echelon form of \( [ A \mid I ] \) will be of the form \( [ I \mid A^{-1} ] \) if and only if \(A\) is invertible. We can then just read off the inverse \( A^{-1}\).
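The \([A \mid I]\) procedure just described can be sketched as follows (an illustrative implementation added here; it assumes \(A\) is invertible, and it adds partial pivoting, a standard numerical safeguard not discussed in the text):

```python
def inverse(A):
    # Gauss-Jordan elimination on the augmented matrix [A | I]; assumes A is invertible.
    n = len(A)
    M = [[float(v) for v in row] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: bring the largest entry in this column to the pivot row.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]  # scale so the leading entry is 1
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [v - factor * p for v, p in zip(M[r], M[col])]
    return [row[n:] for row in M]  # the right half is now A^{-1}

def mat_vec(A, x):
    # Matrix-vector product, used to apply A^{-1} to a right-hand side b.
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]
```

Applied to the earlier example system, \(\vec{x} = A^{-1}\vec{b}\) recovers \(x_1 = -4\), \(x_2 = 3\), \(x_3 = 2\).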
$\Delta V_{ABCDA} = - \int_A^A \vec{E} \cdot d\vec{l}$ The requirement that the round-trip potential difference be zero means that $E_1$ and $E_2$ have to be equal. Therefore the electric field must be uniform both along the length of the wire and also across the cross-sectional area of the wire. Since the drift speed is proportional to $E$, we find that the current is indeed uniformly distributed across the cross section. This result is true only for uniform cross section and uniform material, in the steady state. The current is not uniformly distributed across the cross section in the case of high-frequency (non-steady-state) alternating currents, because time-varying currents can create non-Coulomb forces, as we will see in a later chapter on Faraday's law. Say the field is non-uniform but completely longitudinal. That would result in the integral above being non-zero, which is impossible, so the field always has to be uniform if it is longitudinal. If the wire is made of different types of material, the different parts would act as dielectrics; the material would polarize and dilute the electric field at different points, causing the electric field to be non-uniform but still longitudinal, which contradicts the point made in the paragraph above. How can I resolve this apparent contradiction?
tl;dr: You can teach your machine to break an arbitrary Caesar cipher by observing enough training examples, using Trust Region Policy Optimization for Policy Gradients. Full text: Imagine a world where the hammer was introduced to the public just a couple of years ago. Everyone is running around trying to apply the hammer to anything that even resembles a nail. This is the world we are living in, and the hammer is deep learning. Today I will be applying it to a task that can be much more easily solved by other means but hey, it's the Deep Learning Age! Specifically, I will teach my machine to break a simple cipher like the Caesar cipher just by looking at several (actually, a lot of) examples of English text and the corresponding encoded strings. You may have heard that machines are getting pretty good at playing games, so I decided to formulate this code breaking challenge as a game. Fortunately there is the OpenAI Gym toolkit that can be used "for developing and comparing reinforcement learning algorithms". It provides some great abstractions that help us define games in terms that a computer can understand. For instance, they have a game (or environment) called "Copy-v0" with the following setup and rules: There is an input tape with some characters. You can move the cursor one step left or right along this tape. You can read symbols under the cursor and output characters one at a time to the output tape. You need to copy the input tape characters to the output tape to win. Now let's talk a bit about the hammer itself. The hottest thing on the Reinforcement Learning market right now is Policy Gradients, and specifically this flavor: Trust Region Policy Optimization. There is an amazing article from Andrej Karpathy on Policy Gradients, so I will not give an introduction here. If you are new to Reinforcement Learning you should just stop reading this post and go read that one. Seriously, it's so much better! Still here? Ok, I will tell you about TRPO then.
TRPO is a technique for Policy Gradients optimization that produces much better results than vanilla gradient descent and even guarantees (theoretically, of course) that you can get an improved policy network on every iteration. With vanilla PG you start by defining a policy network that produces scores for the actions given the current state. You then simulate hundreds and thousands of games taking actions suggested by the network and note which actions produced better results. Having this data available you can then use backpropagation to update your policy network and start all over again. The only thing that TRPO adds to this is that you solve a constrained optimization problem instead of an unconstrained one: $$ \textrm{maximize } L(\theta) \textrm{ subject to } \bar{D}_{KL}(\theta_{\textrm{old}},\theta)<\delta$$ Here \(L(\theta)\) is a loss that we are trying to optimize. It is defined as $$E_{a \sim q}[\frac{\pi_\theta(a|s_n)}{q(a|s_n)} A_{\theta_{\textrm{old}}}(s_n,a)],$$ where \(\theta\) is our weights vector, \(\pi_\theta(a|s_n)\) is a probability (score) of the selected action \(a\) in state \(s_n\) according to the policy network, \(q(a|s_n)\) is a corresponding score using the policy network from the iteration before and \(A_{\theta_{\textrm{old}}}(s_n,a)\) is an advantage (more on it later). Running simple gradient descent on this is the vanilla Policy Gradients approach. TRPO approach doesn't blindly descend along the gradient but takes into account the \(\bar{D}_{KL}(\theta_{\textrm{old}},\theta)<\delta\) constraint. To make sure the constraint is satisfied we do the following. First, we approximately solve the following equation to find a search direction: $$Ax = g,$$ where A is the Fisher information matrix, \(A_{\textrm{ij}} = \frac{\partial}{\partial \theta_i}\frac{\partial}{\partial \theta_j}\bar{D}_{KL}(\theta_{\textrm{old}},\theta)\) and \(g\) is the gradient that you can get from the loss using backpropagation. 
This is done using the conjugate gradient algorithm. Once we have a search direction we can easily find a maximum step along this direction that still satisfies the constraint. One thing that I promised to get back to is the advantage. It is defined as $$A_\pi(s,a)= Q_\pi(s,a)−V_\pi(s),$$ where \(Q_\pi(s,a)\) is a state-action value function (the actual reward of taking an action in this state; it usually includes discounted rewards for all upcoming states) and \(V_\pi(s)\) is a value function (in our case it's just a separate network that we train to predict the value of the state). Bored enough already? I promise, it's not that scary in code. You can find the full implementation here: tilarids/reinforcement_learning_playground. Specifically, look at trpo_agent.py. You can reproduce the Caesar cipher breaking by running trpo_caesar.py. If you think it resembles wojzaremba's implementation a lot - you are right. I was copying some TRPO code from there and then rewriting it to make it more readable and also to make sure it follows the paper closely.
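To make the "approximately solve \(Ax = g\)" step concrete, here is a minimal conjugate-gradient sketch (my own illustration, not code taken from trpo_agent.py); in TRPO the Fisher matrix is never formed explicitly, so `Av` is supplied as a matrix-vector-product function:

```python
def conjugate_gradient(Av, g, iters=10, tol=1e-10):
    # Approximately solve A x = g given only the matrix-vector product Av(v).
    # Assumes A is symmetric positive definite (as the Fisher information matrix is).
    x = [0.0] * len(g)
    r = list(g)            # residual g - A x, for the initial guess x = 0
    p = list(r)            # search direction
    rs = sum(v * v for v in r)
    for _ in range(iters):
        Ap = Av(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

In the real algorithm `Av` would compute a Fisher-vector product via automatic differentiation; here any symmetric positive definite matrix works.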
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states: If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r , \theta)$, we have the defining relations $$r\sin \theta=x=\pm \sqrt{R^2-b^2}\, \sin\sigma, \qquad r\cos\theta=z=R\cos\sigma$$ I am having a tough time visualising what this is. Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$; is the point $z = 0$ a removable singularity, a pole, an essential singularity, or a non-isolated singularity? Since $\cos\left(\frac{1}{z}\right) = 1- \frac{1}{2z^2}+\frac{1}{4!z^4} - \dots$ $$ = (1-y), \text{ where } y=\frac{1}{2z^2}+\frac{1}{4!... I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand, it is when: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $... No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA... The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why? mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function on it. Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$.
Then are the following true: (1) If $x=y$ then $x\sim y$. (2) If $x=y$ then $y\sim x$. (3) If $x=y$ and $y=z$ then $x\sim z$. Basically, I think that all three properties follow if we can prove (1), because if $x=y$ then since $y=x$, by (1) we would have $y\sim x$, proving (2). (3) will follow similarly. This question arose from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$. I don't know whether this question is too trivial. But I have not yet seen any formal proof of the following statement: "Let $X$ be any nonempty set and $∼$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$." That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems... (comment on many many posts above) In other news: > C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000 C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999 C -3.6485676800000002 0.0734728100000000 -1.4738058999999999 C -2.9689624299999999 0.9078326800000001 -0.5942069900000000 C -2.0858929200000000 0.3286240400000000 0.3378783500000000 C -1.8445799400000003 -1.0963522200000000 0.3417561400000000 C -0.8438543100000000 -1.3752198200000001 1.3561451400000000 C -0.5670178500000000 -0.1418068400000000 2.0628359299999999 probably the weirdest bunch of data I've ever seen, with so many 000000s and 999999s But I think that to prove the implication for transitivity an inference rule and the use of MP seem to be necessary. But that would mean that for logics for which MP fails we wouldn't be able to prove the result. Also in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?
@AlessandroCodenotti A precise formulation would help in this case because I am trying to understand whether a proof of the statement which I mentioned at the outset really depends on the equality axioms or only the FOL axioms (without equality axioms). This would, in some cases, allow one to define an "equality like" relation for set theories in which we don't have the Axiom of Extensionality. Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$? The context is Taylor polynomials, so when $x\to 0$. I've seen a proof of this, but intuitively I don't understand it. @schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$. @GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course. Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdots+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1.$ I have to prove that $f(z)=z^{n}$. I tried it as: As $|f(z)|\leq 1$ for $|z|\leq 1$ we must have the coefficients $a_{0},a_{1},\dots,a_{n}$ to be zero because by triangul... @GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested when $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0? Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$.
I have a small 2D system I'm trying to model using a non-linear extension of Darcy's law for fluid flow in porous media. I'm primarily interested in the local flow velocity, not necessarily the pressure, that's later used in the real model I'm working on: The original equation known as Darcy's Law used in ground water modeling is essentially Poisson's equation, where $\vec{q}$ is the fluid velocity, $\mu$ and $\kappa$ are constants that together describe the hydraulic conductivity of the porous medium, $p$ is the local hydrostatic head and $f$ is a source: $$ \vec{q} = -\frac{\kappa}{\mu} \nabla p $$ $$ \nabla \cdot \vec{q} = f = -\frac{\kappa}{\mu} \nabla^2 p $$ The non-linear equation I want to solve instead is the following, known as the Darcy–Forchheimer law. As you can see it's essentially a polynomial instead of a linear model like Darcy's law: $$ -\frac {\partial p}{\partial x} = \frac {\mu }{\kappa} \vec{q} + \frac {\rho }{\kappa _{1}}\vec{q}^{\,2} $$ I've picked up on some authors [1] that go through the calculus and have figured out a way to represent $\vec{q}$ as a function of $p$ by calculating the inverse of the flow/pressure equation, and I see somewhat how they arrive at their methodology, but what I'm interested in is actually calculating the flux ($\vec{q}$) itself, which I use elsewhere in the model I'm developing - I don't have much of a use for $p$ besides in the initial problem setup with some simple Dirichlet boundary conditions. Somewhat related, I've also read that for some numerical methods such as DG FEM, diffusion-dominated problems can be very unstable due to the non-directional flux. In this paper [2], the authors investigate some 'equation splitting' methods where they solve one equation for the state variable, and another for the flux, essentially the first two equations I showed above.
This apparently relieves some of the instability issues caused by diffusion. Looking at these two, it looks like I have two options: Use the methodology of [1] to calculate the pressure throughout the system, and then calculate the gradient of the hydrostatic head to generate the local velocity, Or... Split my equation into two equations and solve them together. I end up having to solve two equations at once, but I need to have the velocity regardless so that doesn't seem so bad. Other than that I know nothing about this method. I don't know how to implement the equation splitting method as I've never done it before, so I have two questions: Is this even a good idea? Is splitting the equations up into double the number of equations disadvantageous for any other reason than doubling the number of state equations to solve? Is there a source that goes into the actual implementation of equation splitting for (preferably) the finite element or finite difference methods? I am thoroughly intimidated by any DG theory I come across, and I'm hoping there is an easier way to solve this problem.
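One small concrete step, independent of the discretization choice: at any single point, the Forchheimer relation is just a quadratic in the flux magnitude, so $q$ can be recovered from the local pressure gradient in closed form. A minimal sketch (the parameter values in the test are made up; I use the common sign convention $-\partial p/\partial x = (\mu/\kappa)q + (\rho/\kappa_1)q^2$):

```python
import math

def forchheimer_flux(dpdx, mu_over_kappa, rho_over_kappa1):
    # Positive root of (rho/kappa_1) q^2 + (mu/kappa) q + dp/dx = 0,
    # i.e. the Darcy-Forchheimer relation with dp/dx < 0 driving flow in +x.
    a, b, c = rho_over_kappa1, mu_over_kappa, dpdx
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
```

In the Darcy limit $\rho/\kappa_1 \to 0$ this reduces to $q = -(\kappa/\mu)\,\partial p/\partial x$, so it can serve as a pointwise post-processing step once a pressure field is available.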
Find: \[ \sqrt{\tfrac{1}{2}} \cdot \sqrt{\tfrac{1}{2} + \tfrac{1}{2}\sqrt{\tfrac{1}{2}}} \cdot \sqrt{\tfrac{1}{2} + \tfrac{1}{2}\sqrt{\tfrac{1}{2} + \tfrac{1}{2}\sqrt{\tfrac{1}{2}}}} \cdot \sqrt{\tfrac{1}{2} + \tfrac{1}{2}\sqrt{\tfrac{1}{2} + \tfrac{1}{2}\sqrt{\tfrac{1}{2} + \tfrac{1}{2}\sqrt{\tfrac{1}{2}}}}} \cdots \] Note by Kushagraa Aggarwal, 6 years, 1 month ago. Sort by: Let \(a_1 = \sqrt{\dfrac{1}{2}}\) and let \(a_{n+1} = \sqrt{\dfrac{1}{2} + \dfrac{1}{2}a_n}\) for integers \(n \ge 1\). We want the value of \(\displaystyle\prod_{n = 1}^{\infty}a_n\). Note that \(a_1 = \cos \left(\dfrac{\pi}{4}\right)\) and if \(a_n = \cos\left(\dfrac{\pi}{2^{n+1}}\right)\) then \(a_{n+1} = \sqrt{\dfrac{1}{2} + \dfrac{1}{2}\cos\left(\dfrac{\pi}{2^{n+1}}\right)} = \cos\left(\dfrac{\pi}{2^{n+2}}\right)\).
Therefore, by induction, we have $a_n = \cos\left(\dfrac{\pi}{2^{n+1}}\right)$ for all integers $n \ge 1$. Let $P_N = \displaystyle \prod_{n = 1}^{N} a_n = \prod_{n = 1}^{N}\cos\left(\dfrac{\pi}{2^{n+1}}\right)$. Using the identity $\sin \theta \cos \theta = \dfrac{1}{2}\sin 2\theta$ repeatedly, we have: $$P_N \sin\left(\dfrac{\pi}{2^{N+1}}\right) = \sin\left(\dfrac{\pi}{2^{N+1}}\right)\prod_{n = 1}^{N}\cos\left(\dfrac{\pi}{2^{n+1}}\right) = \dfrac{1}{2^N}\sin\left(\dfrac{\pi}{2}\right) = \dfrac{1}{2^N}.$$ Therefore, $P_N = \dfrac{1}{2^N}\csc\left(\dfrac{\pi}{2^{N+1}}\right)$ for all integers $N \ge 1$. For $x \approx 0$ we have $\csc x = \dfrac{1}{x} + O(x)$. Hence, $P_N = \dfrac{1}{2^N}\left[\dfrac{2^{N+1}}{\pi} + O\left(\dfrac{\pi}{2^{N+1}}\right)\right] = \dfrac{2}{\pi} + O(2^{-2N})$. As $N \to \infty$ we have $2^{-2N} \to 0$. Therefore, $\displaystyle\prod_{n = 1}^{\infty}a_n = \lim_{N \to \infty}P_N = \dfrac{2}{\pi}$. Thanks a lot! It was actually not easy to think of substituting $\frac{1}{\sqrt{2}}$ by $\cos\left(\frac{\pi}{4}\right)$. If you don't mind, can you please share the source of this problem? Thanks! @Pranav Arora – it was asked in a mock test. @Kushagraa Aggarwal – Are you talking about an IIT-JEE mock test? If so, FIITJEE? @Pranav Arora – Nope, Vidyamandir Classes. Oh yeah, $2/\pi$, Viète's formula.
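As a quick numerical sanity check of the value $2/\pi$ (a sketch, not part of the original note), the recursion $a_{n+1} = \sqrt{\tfrac12 + \tfrac12 a_n}$ can be iterated directly:

```python
import math

# Iterate a_{n+1} = sqrt(1/2 + a_n/2) starting from a_1 = sqrt(1/2)
# and accumulate the partial product P_N; the error is O(2^{-2N}),
# so 30 terms are far more than enough.
a = math.sqrt(0.5)
product = 1.0
for _ in range(30):
    product *= a
    a = math.sqrt(0.5 + 0.5 * a)

print(product)       # close to 2/pi
print(2 / math.pi)
```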
Let's say that you are pushing a wagon by applying a force $\vec{F}$ from behind, as such: Now, according to Newton's second law, the center of mass of the wagon should accelerate, because the sum of the external forces is not $\vec{0}\text{ N}$. If we assume that the coefficient of friction between the wheels and the ground is $0$, then I fully understand what's going on: $\vec{F}$ is the only applied force (or the sum of all the forces) on the particle system, which causes the CM to accelerate. However, the problem arises when I take into account the cases where the coefficient of friction isn't $0$. In these cases, the wheels will presumably begin to rotate. Since this rotation is due to friction, it seems likely that there is a frictional force (let's call it $\vec{f}$) on the wheels from the ground. However, if such a force is present, and the initial force remains unchanged, then according to Newton's second law, the acceleration of the wagon should be given by: $$\vec{a} = \frac{1}{m_{\text{wagon}}}(\vec{F} + \vec{f})$$ So if this is the case, wouldn't that make the acceleration different from the case where the coefficient of friction between the surfaces is $0$? In picture form, this is pretty much the case generating the second equation: Moreover, I don't even know whether the direction of the frictional forces in the above picture is correct (i.e. whether $k_1, k_2 > 0$), which kind of goes to show that I'm fairly confused about this whole thing. Intuitively, it feels like $k_1 < 0 \wedge k_2 < 0$, since the ground "holds the wheels back", but in that case, won't friction "slow the wagon down" rather than helping it roll? I'll summarize my questions: When $\mu \neq 0$, which forces are present on the object? If there is a frictional force, which direction does it have? How does this force affect the acceleration of the wagon's CM?
The General Curl

The Theory

The image you posted above is a trace visualization of what is known as a divergence-free vector field. To understand what that means, consider particles that move along the lines you see above (the field represents their instantaneous velocity); if the field is indeed divergence free, those particles will never collide. That's what gives those visualizations their beauty: the lines never intersect. Mathematically, a vector field $F$ is divergence free if $\nabla \cdot F = 0$. It is hard to compute a random vector field that is divergence free procedurally, so we use some math to aid us. In vector calculus, the curl of any vector field is divergence free. The curl is an operator that operates on a vector field and returns another vector field representing the infinitesimal (microscopic) rotation of the input vector field. The infinitesimal rotation of a vector field can be understood using this intuitive interpretation from Wikipedia: Suppose the vector field describes the velocity field of a fluid flow (such as a large tank of liquid or gas) and a small ball is located within the fluid or gas (the center of the ball being fixed at a certain point). If the ball has a rough surface, the fluid flowing past it will make it rotate. The rotation axis (oriented according to the right-hand rule) points in the direction of the curl of the field at the center of the ball, and the angular speed of the rotation is half the magnitude of the curl at this point. We are not interested in what the curl represents; we are only interested in the fact that the curl of any vector field is divergence free. For our artistic use, the input of the curl operator $F$ can be a simplex or perlin noise-based vector field, that is, $F = (F_x, F_y, F_z)$ where $F_x, F_y, F_z$ are different perlin or simplex noise functions.
Let us now look into the curl operator that takes $F$ as input and returns a divergence-free vector field. The curl of $F$, denoted by $\nabla \times F$, is equal to: $$\nabla \times F =\left( \frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z} \right)\mathbf{i} +\left( \frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x} \right)\mathbf{j} + \left( \frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y} \right)\mathbf{k}$$ Where $\mathbf{i}$, $\mathbf{j}$ and $\mathbf{k}$ are the standard basis vectors of rectangular space. Don't worry if you don't understand those equations; we shall understand them soon. The notation $\frac{\partial F_z}{\partial y}$ represents the partial derivative of $F_z$ with respect to the $y$ axis; similarly, $\frac{\partial F_y}{\partial z}$ represents the partial derivative of $F_y$ with respect to the $z$ axis, and so on. A partial derivative measures how much a function changes along an axis. Take a brief look at derivatives before we continue; you don't have to study them, just recognize what they represent. To compute the partial derivative, we use what is known as the central finite difference method, or symmetric derivative, which states that the partial derivative is approximately equal to: $$\frac{f(x+h) - f(x-h)}{2h}$$ Where $h$ is some arbitrarily small number and $f$ is the function in question. The same equation applies for the $y$ and $z$ axes. For instance, to compute $\frac{\partial F_z}{\partial y}$, we add $h$ (a small number) to the $y$ component of the vector, evaluate the $F_z$ noise at that vector, subtract $h$ from the $y$ component of the vector, evaluate the noise again, take the difference between the two evaluated noise values, and finally divide by $2h$.
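To make the symmetric-derivative recipe concrete, here is a minimal Python sketch of the curl computation. The three smooth trig fields stand in for the simplex/perlin noise components (an assumption made so the snippet is self-contained); the names `H`, `partial`, and `curl` are my own:

```python
import math

H = 1e-3  # step size for the symmetric (central) derivative

# Stand-in "noise" components; in the real setup these would be three
# independent perlin/simplex noise evaluations.
def Fx(p): return math.sin(p[1]) * math.cos(p[2])
def Fy(p): return math.sin(p[2]) * math.cos(p[0])
def Fz(p): return math.sin(p[0]) * math.cos(p[1])

def partial(f, p, axis):
    """Symmetric derivative of f along the given axis: (f(p+h) - f(p-h)) / 2h."""
    hi = list(p); lo = list(p)
    hi[axis] += H
    lo[axis] -= H
    return (f(hi) - f(lo)) / (2 * H)

def curl(p):
    """Curl of (Fx, Fy, Fz) at point p, term by term from the formula above."""
    return (
        partial(Fz, p, 1) - partial(Fy, p, 2),  # dFz/dy - dFy/dz
        partial(Fx, p, 2) - partial(Fz, p, 0),  # dFx/dz - dFz/dx
        partial(Fy, p, 0) - partial(Fx, p, 1),  # dFy/dx - dFx/dy
    )
```

For these particular stand-in fields the curl can also be computed by hand, which makes a handy check on the finite differences.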
To understand how the central finite difference method works, consider this intuitive interpretation: Suppose one is standing on a mountain blindfolded and is asked to report the steepness (slope, partial derivative) of the point one is standing at. One might move a step to the right and identify whether one ascended or descended and by how much; another might move a step to the left and do the same. Now consider the situation where one is at the very top of the mountain. At that point the slope is zero; one is neither ascending nor descending, but both persons will report that they are descending, because moving either to the left or to the right means descending. We conclude that to get an accurate result, we have to move a step to the right, observe, move a step to the left, observe, and then determine whether we are ascending or descending. In our case, if one moves both to the left and to the right, one will understand that one is at an even surface (at the top). So when we say $f(x+h)$ we simply mean that we observe after we move a step $h$ to the right, and $f(x-h)$ means to the left. Once we compute the curl of the vector field, we can do what is known as advection using Euler's integration. Treating the curl vector field as the instantaneous velocity of some particles in space, Euler's integration approximates the location of the particles after one second by adding the curl vector field to the original particle locations. So to get the whole trace of a particle, we evaluate the curl, add it to the location of the particle, evaluate the curl at the new location, add it to the new location, and so on.

Implementation

Simple Curl Implementation

The first step is to make a group that generates the vector field by initializing its components with different simplex noises $(F_x, F_y, F_z)$: The next step is to compute the partial derivatives.
This can be done using the formula for the symmetric derivative: Take your time to understand this node tree; it can be hard to grasp at first. Having all the partial derivatives, we can now compute the curl easily using the equation above: And by advecting some initial vectors along the curl vector field using Euler's integration and creating splines from the output points: We get something like this:

Optimized Curl Implementation

While the previous implementation works, it is very slow because it requires a lot of noise evaluations, and noise is expensive to compute. Thankfully, the noise functions in Animation Nodes v2.1 are implemented using SIMD instructions, which means they can be executed somewhat in parallel on CPUs that support Advanced Vector Extensions (AVX), which is pretty much every modern CPU. This results in a speed-up of up to 600x over the original implementation. SIMD requires the input data to occupy a contiguous memory block. So to utilize SIMD instructions and optimize this setup, we should combine all our data into a single big vector list, evaluate the noise on it, segment the output, and process it. The following implementation is hard to understand, so it is better to do it on your own; my implementation looks like this: Euler's integration loop stays the same: However, notice that we are appending a list of vectors and not a single vector, so the output will be of length $n \cdot m$, where $n$ is the number of iterations and $m$ is the number of input initial vectors. We have to do another segmentation before creating splines from those points, so the spline loop will include segmentation and spline creation: And what we get is a fully functional, fast curl trace generator: Blend file for study and practice:

Surface Curl

Now that we have an understanding of what the curl is, we can go ahead and finally answer your question.
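The Euler-integration advection loop described earlier can be sketched in plain Python (a sketch; the real setup operates on vector lists inside Animation Nodes, and the rotational velocity field here is a stand-in for the curl of a noise field):

```python
import math

def velocity(p):
    # Stand-in divergence-free field; a simple rotation about the z axis.
    # In the real setup this would be the curl of the noise vector field.
    x, y, z = p
    return (-y, x, 0.0)

def advect(start, steps, dt):
    """Euler integration: step along the velocity field, recording the trace."""
    trace = [start]
    p = start
    for _ in range(steps):
        v = velocity(p)
        p = (p[0] + dt * v[0], p[1] + dt * v[1], p[2] + dt * v[2])
        trace.append(p)
    return trace

trace = advect((1.0, 0.0, 0.0), steps=1000, dt=0.01)
```

Note the characteristic Euler drift: the particle should orbit the origin, but its radius grows slightly each step, which is exactly why the answer later recommends small step sizes or projecting points back onto the surface.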
Theory

Let us define the problem as follows: We want the output vector field of the curl operator to always be tangent to the surface of our meshes; that is, the curl should always be perpendicular to the vector field that represents our mesh surface normals. If our initial points are on the surface of the mesh and the above condition is satisfied, then we will get the result we are looking for. The above condition is satisfied only if $(\nabla \times F) \cdot \vec{N} = 0$, where $(\nabla \times F)$ is the curl and $\vec{N}$ is our normals vector field. The problem is now to compute a vector field $F$ so that the dot product between its curl and the normals vector field is equal to zero. Some analysis shows that $F = \vec{N}\, p(x,y,z)$ works, where $p$ is a perlin or simplex noise function just like we used in the general case. The curl in that case becomes: $$\begin{aligned}F &= p(x,y,z) \begin{bmatrix} N_x \\ N_y \\ N_z \end{bmatrix}\\(\nabla \times F) &=\left( \frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z}, \frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x},\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y} \right)\\&= \left( N_z \frac{\partial p}{\partial y} - N_y \frac{\partial p}{\partial z}, N_x \frac{\partial p}{\partial z} - N_z \frac{\partial p}{\partial x},N_y \frac{\partial p}{\partial x} - N_x \frac{\partial p}{\partial y} \right)\end{aligned}$$ And the proof: $$\begin{aligned}(\nabla \times F) \cdot \vec{N} =&N_z N_x \frac{\partial p}{\partial y} - N_y N_x \frac{\partial p}{\partial z} +\\&N_x N_y \frac{\partial p}{\partial z} - N_z N_y \frac{\partial p}{\partial x} +\\&N_y N_z \frac{\partial p}{\partial x} - N_x N_z \frac{\partial p}{\partial y} = 0\end{aligned}$$ Notice that if the normal is $\vec{N}=(0,0,1)$, you end up with exactly the equation that Robert Bridson proposed in his paper "Curl-Noise for Procedural Fluid Flow", which is: $$\frac{\partial
p}{\partial x}\mathbf{j}$$ So now you know where my other answer came from. Since the curl has a zero $z$ component, we shall call this special case the 2D Case.

Implementation

The implementation is much easier because we only have to differentiate a single function. You should by now know how to implement that. Here is the non-trivial implementation of the surface curl; notice how there is an extra input called normals now: Let's consider the simple example in which we use a sphere as our mesh. The normal of the sphere at any point on its surface is equal to that point normalized, so the normal is simply the point location normalized. After the curl is computed and the points advected, we can project the output onto the surface to make sure the splines always lie on the surface of the sphere, since we are merely performing a finite approximation. We won't need this step for sufficiently small step sizes for Euler's integration, which in our case is decided by the magnitude of the normal vector field or the amplitude of the noise function. To project any point onto the surface of a sphere of radius $r$ we normalize the vector and multiply by the radius, so the advection loop looks like this now: The results are these magnificent and beautiful splines: Blend file for study and practice: Since the 2D case has two zero normal components, let's implement the 2D case using an optimized setup. We simply remove anything that got multiplied by zero to get: This gives: Blend file: Ok, I have given you an example where the mesh is defined implicitly, but how do we use an actual mesh? Well, we can use a BVH tree: We approximate the normal field by the Nearest Surface Point node's Normal output, and we change the generator to be the location of the nearest surface point. Why? Because this is the projection of the point on the surface, which is a step we have to do to make sure we get accurate results, as we discussed before.
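The surface-curl formula derived above, $(\nabla \times F) = (N_z p_y - N_y p_z,\; N_x p_z - N_z p_x,\; N_y p_x - N_x p_y)$, can be sketched in a few lines of Python. The scalar field `p` stands in for the noise function, and `grad` uses the same symmetric derivative as before (all names here are my own, not Animation Nodes API):

```python
import math

H = 1e-3  # step for the symmetric derivative

def p(v):
    # Stand-in scalar "noise" field.
    return math.sin(v[0]) * math.cos(v[1]) + math.sin(v[2])

def grad(v):
    """Gradient of p via symmetric derivatives along each axis."""
    g = []
    for axis in range(3):
        hi = list(v); lo = list(v)
        hi[axis] += H
        lo[axis] -= H
        g.append((p(hi) - p(lo)) / (2 * H))
    return g

def surface_curl(v, n):
    """Curl of F = p * N using the simplified formula from the answer."""
    dpx, dpy, dpz = grad(v)
    nx, ny, nz = n
    return (nz * dpy - ny * dpz,
            nx * dpz - nz * dpx,
            ny * dpx - nx * dpy)

# On a unit sphere the normal is the normalized position itself.
v = [0.6, 0.0, 0.8]  # already unit length
c = surface_curl(v, v)
dot = sum(ci * ni for ci, ni in zip(c, v))
print(dot)  # ~0: the field is tangent to the sphere
```

The dot product vanishes by the algebraic cancellation shown in the proof, so it is zero up to floating-point noise even though the derivatives themselves are only approximate.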
Make sure to reduce the step size, because BVH trees are not as accurate as the implicit definition of our sphere. It should be noted that constructing BVH trees from objects directly won't apply modifiers, so such a node tree can be used: And finally, here is the result of both setups:

Inviscid Boundary Condition

An inviscid boundary condition requires the dot product between the curl and the surface normals around the boundaries of objects to be zero; in other words, it requires the curl to be orthogonal to the normals of the object around its boundaries. Ensuring this condition in the surface curl case is easy, while it is somewhat challenging for the general 3D case, so I shall only introduce the surface curl. Looking back at the equation for the 2D case: $$\frac{\partial p}{\partial y}\mathbf{i} -\frac{\partial p}{\partial x}\mathbf{j}$$ We notice that the curl is actually orthogonal to the gradient, the gradient vector being, in simple terms, a vector pointing in the direction of the greatest change. The gradient is computed by: $$\frac{\partial p}{\partial x}\mathbf{i} +\frac{\partial p}{\partial y}\mathbf{j}$$ Can you see the similarities? So we can implement the boundary condition by making sure the noise field has a gradient parallel to the normals of the object around its boundaries; in other words, the noise field should increase the most when moving in the direction of the surface normals. We actually know a field that satisfies this condition everywhere, that is, its gradient is parallel to the surface normals. This field is the Signed Distance Field (SDF). An SDF gives, for every point, the distance to the closest surface point, with a negative sign if the point is inside the surface and positive if outside (however, the sign is redundant in this particular application). So the condition can be implemented by modulating the noise field based on the SDF of the surface using a ramp.
As Robert implemented it, the noise field should be multiplied by the ramp: $$\begin{cases} 1 & r > 1 \\ \frac{15}{8}r - \frac{10}{8}r^3 + \frac{3}{8}r^5 & -1 \leq r \leq 1 \\ -1 & r < -1\end{cases}$$ Where $r$ is the SDF divided by a scalar defining the modulation width. Which we can implement as follows: What we did is replace the noise node with the above subprogram and add a BVH tree as an input.

Using Older Animation Nodes Versions

The only node used in this answer that does not exist prior to Animation Nodes v2.1 is the Vector Noise node, which provides us with the perlin or simplex noise functions. To make the node tree work for older versions, you simply have to replace the Vector Noise node with the built-in Blender noise functions that exist in the mathutils python module. To do so, we can use the expression node: With this code: [mathutils.noise.noise(x) for x in vectors] Aside from noise.noise, there are other functions that give you more control, which you can find in the API. File for study:
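The ramp above is a direct transcription of the piecewise polynomial; as a minimal Python version (outside Animation Nodes, with `ramp` being my own name for it):

```python
def ramp(r):
    """Smooth ramp used to modulate the noise near boundaries.

    r is the signed distance divided by the modulation width; the output
    blends smoothly from -1 (deep inside) to 1 (far outside).
    """
    if r > 1:
        return 1.0
    if r < -1:
        return -1.0
    return (15.0 / 8.0) * r - (10.0 / 8.0) * r ** 3 + (3.0 / 8.0) * r ** 5
```

Note that the polynomial piece meets the constant pieces exactly at $r = \pm 1$ (it evaluates to $\pm 1$ there), and its derivative vanishes at those points, which is what makes the blend smooth.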
Trig Substitution

By using suitable substitutions, we can convert integrals involving radicals into integrals of trig functions. No matter which substitution we use, we end up with an integral involving $\theta$ and trig functions of $\theta$. Understand that while you can always make a trig substitution, the resulting integral may or may not be possible to solve. As is true with other techniques of integration, we try it and see if it works. Here, works means that after using the technique, we can evaluate the integral. If the technique doesn't work, we try something else.

Example: Evaluate $\displaystyle\int {\sqrt{x^2+6x+10}}\, dx.$

Solution: We first complete the square to get $x^2+6x+10=(x+3)^2+1$ and then use the $u$-substitution $u=x+3$. This gives us $\displaystyle\int{\sqrt{(x+3)^2+1}}\, dx= \int \sqrt{u^2+1}\,du \overset{\fbox{$ \,\,u\,=\,\tan\theta,\\ du\,=\,\sec^2(\theta)\,d\theta$}\\}{=} \int \left(\sqrt{\tan^2(\theta)+1}\right)\sec^2(\theta)\,d\theta =\int \sec^3(\theta)\,d\theta.$ We computed this integral in a previous example, using parts twice, getting $$\int \sec^3(\theta)\,d\theta= \frac{\sec(\theta)\tan(\theta) + \ln\lvert\sec(\theta)+\tan(\theta)\rvert}{2}+C,$$ so all that's left to do is to substitute back $u$ and then $x$.
Notice that $\sec(\theta)=\sqrt{1+u^2}$, by using right triangles. Altogether, we have $$ \hspace{-3cm} \int {\sqrt{x^2+6x+10}}\, dx =\frac{\left(\sqrt{1+u^2}\right)u+ \ln\Bigl\lvert\left(\sqrt{1+u^2}\right)+u\Bigr\rvert}{2}+C$$$$\qquad\qquad =\frac{\left(\sqrt{1+(x+3)^2}\right)(x+3)+ \ln\Bigl\lvert\left(\sqrt{1+(x+3)^2}\right)+x+3\Bigr\rvert}{2}+C $$ If this had been a definite integral, you would now be able to evaluate it using your antiderivative.
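As a quick numerical check (not part of the original page), we can differentiate the antiderivative with a central difference and compare it against the integrand at a few sample points:

```python
import math

def F(x):
    # Antiderivative found above, with u = x + 3; the absolute value in
    # ln|...| can be dropped since sqrt(1+u^2) + u > 0 for all u.
    u = x + 3
    return (math.sqrt(1 + u * u) * u + math.log(math.sqrt(1 + u * u) + u)) / 2

def integrand(x):
    return math.sqrt(x * x + 6 * x + 10)

# A central-difference derivative of F should match the integrand.
h = 1e-6
for x in [-5.0, -3.0, 0.0, 2.5]:
    dF = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(dF - integrand(x)) < 1e-5
print("antiderivative checks out")
```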
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake, it's real nice The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building) It's hard to get a feel for which places are good at undergrad math.
Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore) In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of @TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $... "If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have Well, $A$ has these two distinct eigenvalues meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied with a given vector $(x,y)$, and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2 Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$ Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements' worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight. hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$ for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$ I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
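A quick machine check of the multiplication rule discussed above (a sketch; the choice $\delta = 5$ is arbitrary, and exact rationals via `fractions.Fraction` keep the arithmetic honest):

```python
from fractions import Fraction

DELTA = Fraction(5)  # any delta exercises the same algebra

def mul(p, q):
    """(a + b*sqrt(delta)) * (c + d*sqrt(delta)), as coefficient pairs (a, b)."""
    a, b = p
    c, d = q
    return (a * c + b * d * DELTA, b * c + a * d)

# Spot-check associativity on a few triples of elements of Q(sqrt(delta)).
elems = [(Fraction(1), Fraction(2)),
         (Fraction(-3, 2), Fraction(1, 3)),
         (Fraction(0), Fraction(7))]
for x in elems:
    for y in elems:
        for z in elems:
            assert mul(mul(x, y), z) == mul(x, mul(y, z))
print("associativity holds on the sample")
```

Of course a finite spot check is not a proof, but it catches transcription errors in the multiplication rule before one commits to the full symbolic computation.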
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious. (but seriously, the best tactic is overpowered...) Extension is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. O wait, there are maximal algebraic structures such that given some ordering, they are the largest possible, e.g. the surreals are the largest field possible It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable), what is the proof that an embedding exists for any ordered field? Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? "Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH or that derive CH, thus if your set of axioms contains those, then you can decide the truth value of CH in that system @Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wanted to show that is false by finding a finite sentence and procedure that can produce infinity, but so far failed Put it in another way, an equivalent formulation of that (possibly open) problem is: > Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object? If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't.
In fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity however is not good enough, as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escaped every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items. The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... Oh great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem hmm... By the fundamental theorem of algebra, every complex polynomial $P$ of degree $n$ can be expressed as: $$P(x) = c\prod_{k=1}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $$0 < \left|x - \frac{p}{q}\right| < \frac{1}{q^n}.$$ Do these still exist if the axiom of infinity is blown up? Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each $M$ form a monotonically increasing sequence, which converges by the ratio test; therefore by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework There's this theorem in Spivak's book of Calculus: Theorem 7. Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $$f'... and neither Rolle's theorem nor the mean value theorem needs the axiom of choice Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else, I need to finish that book to comment typo: neither Rolle's theorem nor the mean value theorem needs the axiom of choice nor an infinite set > are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
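The finitist partial-sum construction above is easy to make concrete: each partial sum is an exact rational, and the sequence is visibly monotone and bounded. A small sketch of this (the base $b=10$ and the function name are my own choices, not from the chat):

```python
from fractions import Fraction
from math import factorial

def liouville_partial(M, b=10):
    """Exact partial sum sum_{k=1}^M 1/b^(k!), as a rational number."""
    return sum(Fraction(1, b ** factorial(k)) for k in range(1, M + 1))

sums = [liouville_partial(M) for M in range(1, 5)]
# monotonically increasing ...
assert all(s < t for s, t in zip(sums, sums[1:]))
# ... and bounded above (here by a crude geometric bound), so a limit L exists
assert all(s < Fraction(2, 10) for s in sums)
```

Every object manipulated here is finite (a pair of integers), which is the point of the construction.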
Genetic Algorithms face a ton of criticism. Rather than being a general purpose optimizer, I believe GA is more suited to specific processes with a sensible meaning to what evolution means there. Being interested in heuristic optimization methods, one thing I notice a lot is confusion about the applicability and efficiency of the methods. Mind you, I am not expert enough to resolve the doubts, but I have an opinion which might help in 'some' situations. Ask around for methods to do derivative free optimization and you will stop keeping count after a while. Specifically in the category of metaheuristics: Cuckoo Search, Genetic Algorithm, Simulated Annealing, Tabu Search etc. But there is an issue. Give the algorithms a first glance and you will feel they are different. Give them a second glance and many of them will start to cluster in groups. Give a third glance and they are different again! Earlier, due to the reducible design, I used to consider Genetic Algorithm (GA) as a more general technique as compared to many other evolutionary methods. But then, this might not be what we mean when we want a generally applicable method. A simplistic representation embodying the general principle, like 'mutation in population' for population based heuristics, is what I was looking for instead. GA is, in essence, much more powerful if used in a specialized setting with knowledge of the problem. I will go over this proposition later. Genetic Algorithm faces a lot of criticism. Criticism from researchers is usually constructive. For example there has been a lot of debate and work on the role of crossover in GAs. But I have also seen users and learners frowning at a few of its aspects without going through what it is that GA really does. The main source of inspiration for GA is sexual reproduction. The core ideas including mate selection, gene crossover and mutation are all faithfully (to a certain level of abstraction) incorporated in it.
But there are vital differences between the role of sexual reproduction and our needs from GA, a topic covering the meaning of fitness functions and their population aggregate. Let us leave the way fitness is used in traditional GA and focus on the two major operations. The first is mutation. This is simply what every other random search algorithm does: a slight change from the current solution. Using a bit string solution representation like \(0100100\), a simple point mutation along with a fitness based selection procedure essentially is the hill climb algorithm. Change bits, evaluate the new positions and shift the population towards the better fitness. This case is stronger in situations including \(N\to\infty\) (larger population size, \(N\)), a higher rate of mutation, elite individuals etc. Mutation coupled with a population and selection procedure results in a nice global optimizer. Even more important is the ease of understanding the word "mutation" with respect to the problem representation. Use bit strings or direct real values or other symbols, mutation means what it means. The second operation is crossover. In a common representation, a crossover operation swaps sections of solution strings from two mating parents. Now, this needs a little bit of digging. What purpose could copying a snippet of gene from parents to children have? To inherit and move around some specific chunks of functional properties. A topic of debate among researchers is whether crossover actually has some theoretical advantage and meaning for the problems. Some arguments reduce crossover to a fancy form of mutation, while some arguments support its usage separately. Without going into the arguments, it's better to stick to the fundamentals of crossover and, as a user, exploit its basics. It maintains chunks of properties among the population and across generations. In GA, you can represent the solution as chromosomes in multiple ways. A common method to represent real number parameters is to use direct value encoding.
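As a hedged sketch of the mutation-equals-hill-climb point above (a toy of my own, not from the post): point mutation on a bit string plus fitness-based selection, with a degenerate population of one, is just a stochastic hill climb. Here the fitness is simply the number of ones, and all names are mine.

```python
import random

def mutate(bits, rng):
    """Point mutation: flip one randomly chosen bit."""
    i = rng.randrange(len(bits))
    return bits[:i] + (1 - bits[i],) + bits[i + 1:]

def hill_climb_ga(n_bits=20, steps=2000, seed=0):
    """(1+1)-style evolutionary hill climb: keep the mutant only if fitness improves."""
    rng = random.Random(seed)
    current = tuple(rng.randint(0, 1) for _ in range(n_bits))
    for _ in range(steps):
        child = mutate(current, rng)
        if sum(child) >= sum(current):  # selection on fitness = number of ones
            current = child
    return current

best = hill_climb_ga()
# on this easy landscape, 2000 accepted-or-rejected flips virtually always reach all ones
assert sum(best) == len(best)
```

The point is not that this is a good optimizer, but that once the population, crossover and fancy selection are stripped away, what remains is plain hill climbing.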
Let the fitness function simply be \(f(a)\). For crossing over individuals in this form, there are blending and interpolation techniques for numbers. While mating two solutions, \(a_1\) and \(a_2\), a simple \(\alpha\) blending gives \[ a_3 = \alpha a_1 + (1-\alpha) a_2 \] Now, here is what the Wikipedia page on Differential Evolution (DE) says: DE optimizes a problem by maintaining a population of candidate solutions and creating new candidate solutions by combining existing ones according to its simple formulae, and then keeping whichever candidate solution has the best score or fitness on the optimization problem at hand. Is there an identity crisis here? Look, everything works. Some variations are better and there are theoretical and empirical results backing them up. But then sometimes I feel it is unfair to throw any problem at GA assuming it is a general purpose optimizer. Back to the proposition made in the beginning of the post and the main question of the post. I really like to think the answer is something along the following lines: GA provides a framework for manipulation of a structure made from symbols to achieve a desired behaviour in decently low time. Many variations of GA stick to this idea. Genetic Programming is a nice example. Consider the following Lisp sexpr: (defun test-function (fn x) "Evaluate negative of slope of function at x" (let ((dx 0.01)) (* -1 (/ (- (funcall fn (+ x dx)) (funcall fn (- x dx))) (* 2 dx))))) The function is approximating the (-ve of) slope of another function by calculating \[ -\frac{fn(x + \Delta x) - fn(x - \Delta x)}{2 \times \Delta x} \] The tree structure of operations has usable chunks which are essential to get the final output. Consider the numerator chunk. ;; ... (- (funcall fn (+ x dx)) (funcall fn (- x dx))) ;; ... This generates an important template for good solutions if you want a quantity like slope. In GA terminology, this template, consisting of some form of fixed structure and spaces for variations, is called a schema.
This schema has a fitness associated with it, which is the average of the fitness of all solutions matching this template. For the above template, the fitness is good, since most of the variations around the template will result in some simple operation on the difference of function values. This is better than a case, say, where the template is a sum of function values at two neighbouring points. The fundamental theorem of GA says that the power of GA comes from increasing the population fraction of schemata with smaller fixed parts and better fitness over generations. The better schemata survive the operations and live on. Now here is a case. I was using the traditional value encoded form of GA for a global optimization at some point in my undergraduate life. I kept the approach somewhat similar to Matlab's. Value encoding, elite individuals, Gaussian mutation, simulated binary crossover and tournament selection. It did whatever it was meant to. But then I started to fiddle around a bit with parameters and switches (whether to use this operation, that selection criterion etc.) and realized that I was doing what can be called cargo cult tuning. Consider the following fitness surface and value encoding. Considering crossover as extrapolations along some dimension, you can see two kinds of results of mating. One better, one worse. That's fine, it happens in real crossovers too. But the point is, the user can't seem to understand and extract the power of GA here. What are the schemata getting passed on? Where are the crossover points? More importantly, what can I do to improve performance? In my opinion, it's better to use a method whose knobs (parameters etc.) correspond to the problem in hand and which really pass on the essence of the operations as some connectable effects on the problem. And then judge the algorithm, in case a judgment is to be made. Last year, Randy Olson blogged about solving a Travelling Salesman Problem (TSP) with Genetic Algorithm.
TSP solutions have chunks of continuous paths which can be locally optimal. Debugging makes sense here because the operations are intuitive. Also consider NEAT, which evolves neural network structures. Same idea. Manipulation of symbols and actual implications of crossovers (of chunks of structural trees, let's say) etc. Similar is the case with many examples which use Genetic Programming. There are cases where a genetic approach actually provides easier implementation and better results than other methods. I found this piece of criticism by Steven Skiena on the Wikipedia page of GA: [I]t is quite unnatural to model applications in terms of genetic operators like mutation and crossover on bit strings. The pseudobiology adds another level of complexity between you and your problem. The problem Skiena is addressing is important. The operators are not intuitive at all for general problems and add to the burden of the user. Well, you do get better performance than other methods in different cases. But my opinion is to respect [the full fledged] GA for its help in the context of a certain class of problems, not as a go-to general purpose optimizer (for which you would be better started by programming a simple and quick disruptive method). Your problem can have some allowed set of schemata. For example, if you have a bit string representation of 3-4 parameters, you might like a form of schemata which break the string at the junctions of the numbers for crossovers. Or in some cases you might want to group a few bits of strings and evolve them separately. You can try inspirations from speciation. You can create artificial islands. You can try some weird animal group behaviour. They are all good if your problem needs them and the effects are reflected in the evolution of solutions. Otherwise, they are another set of instances of misjudgment of the method.
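The "identity crisis" between value-encoded GA crossover and DE's recombination is easiest to see side by side. A minimal sketch (function names and constants are mine; both operators are stated in their textbook forms, not any library's API):

```python
def blend_crossover(a1, a2, alpha):
    """GA-style alpha blending of two value-encoded parents: a3 = alpha*a1 + (1-alpha)*a2."""
    return alpha * a1 + (1 - alpha) * a2

def de_donor(a, b, c, F=0.8):
    """DE-style donor vector: combine three existing solutions by a simple formula."""
    return a + F * (b - c)

# both create a new candidate as an affine combination of existing ones;
# the difference is framing (mating parents vs. difference vectors), not machinery
child = blend_crossover(2.0, 6.0, 0.25)
assert 2.0 <= child <= 6.0  # blending always stays between its parents
donor = de_donor(1.0, 3.0, 2.0)
assert abs(donor - 1.8) < 1e-12
```

Note one asymmetry the blog's point hinges on: blending can only interpolate, while the DE donor can step outside the hull of its inputs, so calling either a "general purpose" recombination hides a real modelling choice.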
Is it true that if $A \in M_n(\mathsf{k})$ is arbitrary, for some field $\mathsf{k}$, then there exist two sequences $(L_i)_{i=1}^{m}, (R_i)_{i=1}^{m} \subseteq M_n(\mathsf{k})$ such that $$ \sum_{i=1}^{m} L_i A R_i = I_n, $$ where $I_n$ is the $n \times n$ identity matrix? I think this is true, but the proof I have in mind is fairly abstract and I was wondering if there is a direct-ish way of proving this (if it is indeed true). Edit: As @Michael Burr points out, I forgot the condition that $A \neq 0$.
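For what it's worth, here is a numerical sketch of one direct construction (my own, under the question's assumption $A \neq 0$): if $a_{pq} \neq 0$, take $L_i = a_{pq}^{-1} E_{ip}$ and $R_i = E_{qi}$, where $E_{ij}$ are the matrix units. Then $E_{ip} A E_{qi} = a_{pq} E_{ii}$, so the sum over $i$ gives $I_n$ with $m = n$ terms.

```python
from fractions import Fraction

def e(n, i, j):
    """Matrix unit E_ij: 1 at row i, column j, zeros elsewhere."""
    return [[Fraction(int(r == i and c == j)) for c in range(n)] for r in range(n)]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[r][k] * Y[k][c] for k in range(n)) for c in range(n)] for r in range(n)]

def matadd(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

# a sample nonzero A over Q, with its nonzero entry a_{pq} at (p, q) = (1, 2)
n, p, q = 3, 1, 2
A = [[Fraction(0), Fraction(0), Fraction(0)],
     [Fraction(0), Fraction(0), Fraction(5)],
     [Fraction(0), Fraction(0), Fraction(0)]]

# L_i = E_{ip} / a_{pq},  R_i = E_{qi};  then sum_i L_i A R_i = I_n
total = [[Fraction(0)] * n for _ in range(n)]
for i in range(n):
    L = [[x / A[p][q] for x in row] for row in e(n, i, p)]
    R = e(n, q, i)
    total = matadd(total, matmul(matmul(L, A), R))

identity = [[Fraction(int(r == c)) for c in range(n)] for r in range(n)]
assert total == identity
```

This is just the standard "the ideal generated by a nonzero matrix is everything" argument made concrete; it works verbatim over any field.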
I'm having a bit of a struggle with the following proof. Statement: Prove that $N((0,0);1)$ is an open set in $\mathbb{R}\times\mathbb{R}$ with metric $d((x_1,x_2),(y_1,y_2))=|x_1-y_1|+|x_2-y_2|$. Attempt: Let $x\in N((0,0);1)$. We need $N(x;\epsilon)=\{(a_1,a_2):|x_1-a_1|+|x_2-a_2|<\epsilon \} \subseteq N((0,0);1)=\{(y_1,y_2):|y_1|+|y_2|<1 \}$. For this to occur, we must have $|x_1-a_1|+|x_2-a_2|<\epsilon\implies |a_1|+|a_2|<1$. Notice $|x_1-a_1|+|x_2-a_2|< |x_1|+|a_1|+|x_2|+|a_2|<\epsilon$. Pick $\epsilon=|x_1|+|x_2|+1$. This is greater than 0, for $0<|x_1|+|x_2|<1$. Then, $N(x;\epsilon)=\{(a_1,a_2):|a_1|+|a_2|<1 \}$, so $N((0,0);1)$ is open. I feel as though it's not valid for me to have chosen $\epsilon$ as I did; could someone point me in the right direction? Thanks, exam in a few hours, so any and all help is appreciated.
Here's my two cents worth. Why Lie Algebras? First I'm just going to talk about Lie algebras. These capture almost all information about the underlying group. The only information omitted is the discrete symmetries of the theory. But in quantum mechanics we usually deal with these separately, so that's fine. The Lorentz Lie Algebra It turns out that the Lie algebra of the Lorentz group is isomorphic to that of $SL(2,\mathbb{C})$. Mathematically we write this (using Fraktur font for Lie algebras) $$\mathfrak{so}(3,1)\cong \mathfrak{sl}(2,\mathbb{C})$$ This makes sense since $\mathfrak{sl}(2,\mathbb{C})$ is non-compact, just like the Lorentz group. Representing the Situation When we do quantum mechanics, we want our states to live in a vector space that forms a representation for our symmetry group. We live in a real world, so we should consider real representations of $\mathfrak{sl}(2,\mathbb{C})$. A bit of thought will convince you of the following. Fact: real representations of a Lie algebra are in one-to-one correspondence (bijection) with complex representations of its complexification. That sounds quite technical, but it's actually simple. It just says that we can have complex vector spaces for our quantum mechanical states! That is, provided we use complex coefficients for our Lie algebra $\mathfrak{sl}(2,\mathbb{C})$. When we complexify $\mathfrak{sl}(2,\mathbb{C})$ we get a direct sum of two copies of it. Mathematically we write $$\mathfrak{sl}(2,\mathbb{C})_{\mathbb{C}} = \mathfrak{sl}(2,\mathbb{C}) \oplus \mathfrak{sl}(2,\mathbb{C})$$ So Where Does $SU(2)$ Come In? So we're looking for complex representations of $\mathfrak{sl}(2,\mathbb{C}) \oplus \mathfrak{sl}(2,\mathbb{C})$. But these just come from a tensor product of two representations of $\mathfrak{sl}(2,\mathbb{C})$. 
These are usually labelled by a pair of numbers, like so $$|\psi \rangle \textrm{ lives in the } (i,j) \textrm{ representation of } \mathfrak{sl}(2,\mathbb{C}) \oplus \mathfrak{sl}(2,\mathbb{C})$$ So what are the possible representations of $\mathfrak{sl}(2,\mathbb{C})$? Here we can use our fact again. It turns out that $\mathfrak{sl}(2,\mathbb{C})$ is the complexification of $\mathfrak{su}(2)$. But we know that the real representations of $\mathfrak{su}(2)$ are the spin representations! So really the numbers $i$ and $j$ label the angular momentum and spin of particles. From this perspective you can see that spin is a consequence of special relativity! What about Compactness? This tortuous journey shows you that things aren't really as simple as Ryder makes out. You are absolutely right that $$\mathfrak{su}(2)\oplus \mathfrak{su}(2) \neq \mathfrak{so}(3,1)$$ since the LHS is compact but the RHS isn't! But my arguments above show that compactness is not a property that survives the complexification procedure. It's my "fact" above that ties everything together. Interestingly in Euclidean signature one does have that $$\mathfrak{su}(2)\oplus \mathfrak{su}(2) = \mathfrak{so}(4)$$ You may know that QFT is closely related to statistical physics via Wick rotation. So this observation demonstrates that Ryder's intuitive story is good, even if his mathematical claim is imprecise. Let me know if you need any more help!
The Cauchy-Schwarz inequality is among the most useful inequalities in all of mathematics. Suppose \(V\) is a (real) vector space and let \(\langle \cdot, \cdot \rangle\) be an inner product on \(V\). That is, for all \(\mathbf{u}, \mathbf{v}, \mathbf{w} \in V\) and \(a, b \in \mathbf{R}\), the inner product \(\langle \cdot, \cdot \rangle : V \times V \to \mathbf{R}\) satisfies the following properties: \(\langle \mathbf{u}, \mathbf{v} \rangle = \langle \mathbf{v} ,\mathbf{u} \rangle \) \( \langle a \mathbf{u} + b \mathbf{v}, \mathbf{w} \rangle = a \langle \mathbf{u}, \mathbf{w} \rangle + b \langle \mathbf{v}, \mathbf{w} \rangle\) \(\langle \mathbf{u}, \mathbf{u} \rangle \geq 0\) with \(\langle \mathbf{u}, \mathbf{u} \rangle = 0\) if and only if \(\mathbf{u} = \mathbf{0}\) Under these conditions, the Cauchy-Schwarz inequality states that \[ |\langle \mathbf{u}, \mathbf{v} \rangle| \leq |\mathbf{u}| |\mathbf{v}|. \] Here, the norm of a vector is given by \(|\mathbf{v}|^2 = \langle \mathbf{v}, \mathbf{v} \rangle\). To prove the inequality, we employ the following trick: for any \(\lambda \in \mathbf{R}\), we have \[ \langle \mathbf{u} + \lambda \mathbf{v}, \mathbf{u} + \lambda \mathbf{v} \rangle \geq 0 \] by property 3. The Cauchy-Schwarz inequality follows from choosing a suitable value for \(\lambda\). (If \(\mathbf{v} = \mathbf{0}\) the inequality is trivial, so we may assume \(\mathbf{v} \neq \mathbf{0}\) and hence \(\langle \mathbf{v}, \mathbf{v} \rangle > 0\).) Using properties 1 and 2, we can expand \[ \langle \mathbf{u} + \lambda \mathbf{v}, \mathbf{u} + \lambda \mathbf{v} \rangle = \langle \mathbf{u}, \mathbf{u} \rangle + 2 \lambda \langle \mathbf{u}, \mathbf{v} \rangle + \lambda^2 \langle \mathbf{v}, \mathbf{v} \rangle \geq 0. \] Notice that this is a quadratic in the variable \(\lambda\). Recall that the quadratic \(a \lambda^2 + b \lambda + c\) attains its minimum (for \(a > 0\)) when \(\lambda = \frac{- b}{2 a}\). Therefore, we take \[ \lambda = - \frac{\langle \mathbf{u}, \mathbf{v} \rangle}{\langle \mathbf{v}, \mathbf{v} \rangle}.
\] Plugging this into the previous expression, we obtain \[ \langle \mathbf{u}, \mathbf{u} \rangle - 2 \frac{\langle \mathbf{u}, \mathbf{v} \rangle}{\langle \mathbf{v}, \mathbf{v} \rangle} \langle \mathbf{u}, \mathbf{v} \rangle + \frac{\langle \mathbf{u}, \mathbf{v} \rangle^2}{\langle \mathbf{v}, \mathbf{v} \rangle^2} \langle \mathbf{v}, \mathbf{v} \rangle \geq 0. \] Multiplying both sides by \(\langle \mathbf{v}, \mathbf{v}\rangle\) and rearranging gives \[ \langle \mathbf{u}, \mathbf{v} \rangle^2 \leq \langle \mathbf{v}, \mathbf{v} \rangle \langle \mathbf{u}, \mathbf{u} \rangle \] whence the Cauchy-Schwarz inequality immediately follows.
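A quick numerical spot-check of the inequality never hurts. The sketch below (my own; it uses the standard dot product on \(\mathbf{R}^5\) as one concrete inner product satisfying properties 1-3) verifies \(|\langle \mathbf{u}, \mathbf{v} \rangle| \leq |\mathbf{u}||\mathbf{v}|\) on random vectors:

```python
import math
import random

def inner(u, v):
    """Standard dot product on R^n, one concrete inner product."""
    return sum(x * y for x, y in zip(u, v))

rng = random.Random(42)
for _ in range(1000):
    u = [rng.uniform(-10, 10) for _ in range(5)]
    v = [rng.uniform(-10, 10) for _ in range(5)]
    lhs = abs(inner(u, v))
    rhs = math.sqrt(inner(u, u)) * math.sqrt(inner(v, v))
    assert lhs <= rhs + 1e-9  # small tolerance for floating-point rounding
```

Equality holds exactly when \(\mathbf{u}\) and \(\mathbf{v}\) are linearly dependent, which the random vectors above essentially never are.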
3D Perspective Projection (far objects are smaller than near objects) Triangular Polygons for Rendering Simplistic Lighting Model Colored Polygon Faces (no texturing) The largest hurdle for me was getting past perspective projection. I did some research about matrix transformations and got caught up quite deeply in mathematical abstraction. It was the image below that really set my mind in motion. I decided to take a step back and use some simple high school trigonometry to solve the problem. Next I decided to work on projecting the actual points onto the plane. I started out on a piece of paper with a neat little drawing and started to realize that it is quite an easy feat. This is an extremely simple problem to solve. We know the camera position and we know the point that we wish to plot. Now it is just a matter of solving for the angle of projection and then using the concept of similar triangles to resolve the point on the screen. The camera position will be (0, 0, d) where d is the distance that you calculated earlier. I will demonstrate how to solve the y-component of the projected point first. The same formula will then be applied to the x-component. First we must solve for the angle of projection in the y-axis. $$ \tan(\theta_{\mathrm{proj}}) = \mathrm{\frac{P_{y}}{P_{z} + C_{z}}} \\ \theta_{\mathrm{proj}} = \tan^{-1}\left(\frac{P_{y}}{P_{z} + C_{z}}\right) $$ We will leave it in this form and solve for the y component of the projected point. $$ \tan(\theta_{\mathrm{proj}}) = \mathrm{\frac{Proj_{y}}{C_{z}}} $$ We can now substitute the equation for the projected angle into the equation for the y-component of the projected point. $$ \tan\left(\tan^{-1}\left(\frac{P_{y}}{P_{z} + C_{z}}\right)\right) = \mathrm{\frac{Proj_{y}}{C_{z}}} \\ \frac{P_{y}}{P_{z} + C_{z}} = \mathrm{\frac{Proj_{y}}{C_{z}}} \\ \mathrm{Proj_{y}} = \frac{C_{z}P_{y}}{P_{z} + C_{z}} $$ That's it! 
The formula is quite simple and the same can be applied to the x-component of the projected point as well. $$ \mathrm{Proj_{x}} = \frac{C_{z}P_{x}}{P_{z} + C_{z}} $$ At this point, you will be able to plot points onto your screen! I have implemented the rendering engine in C# using Mono on Linux. I am using the built-in System.Drawing namespace that is based on Cairo to render the polygons. I will be publishing another article that explains how I modelled the geometry and the concepts behind lighting. Leon Battista Alberti and Perspective Projection I have spent approximately 13 hours on this project and the results so far are fantastic looking. This entire experiment has given me great insights into the way 3D geometry is rendered onto a 2D plane. The Classic Utah Teapot :] In this article I will show you the various steps that I have taken to render this image. I will focus on the mathematics behind perspective projection that I used to arrive at the teapot rendering above. The image is not perfect, but I am happy with the results. Perspective Projection The largest stumbling block for me was rendering the geometry onto the plane that is my display. As I mentioned earlier, most explanations of perspective projection start out with camera matrix transforms. I found this very abstract. For the scope of this project I wanted to ignore the concepts of world coordinates, camera coordinates and so on. I started out with the assumption that the geometry that I wish to render is already located in front of the camera. The viewing frustum is the most basic concept of perspective projection. It defines the plane onto which your 3D geometry will be projected (labelled "near" in the image below). Viewing Frustum The viewing frustum is defined by the user's field of view. The field of view is specified in degrees, and then, in combination with the width of the viewport in pixels, we can determine the user's distance from the screen.
Field of View The field of view is a user defined parameter of the rendering engine. I will set mine to 100 degrees. Let's say that the width of the viewport for this example is 800 pixels. We can calculate the user's distance from the plane as follows:$$ \tan(\theta _{\mathrm{fov}} /2) = \mathrm{\frac{w/2}{d}} \\ \mathrm{d} = \mathrm{\frac{w/2}{\tan(\theta _{\mathrm{fov}} / 2)}} \\ \mathrm{d} = \mathrm{\frac{800/2}{\tan(50^\circ)}} \\ \mathrm{d} = 335.639 $$ Projection Sketch Perspective Projection Diagram An early test render :]
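The two formulas above (the view distance from the field of view, and the similar-triangles projection) are easy to wrap up in code. A minimal Python sketch, with function and variable names of my own choosing, using the article's numbers (100° field of view, 800 px viewport):

```python
import math

def view_distance(fov_degrees, viewport_width):
    """Distance d of the eye from the projection plane: d = (w/2) / tan(fov/2)."""
    return (viewport_width / 2) / math.tan(math.radians(fov_degrees) / 2)

def project(px, py, pz, cz):
    """Perspective-project point P onto the near plane, camera at (0, 0, cz)."""
    return (cz * px / (pz + cz), cz * py / (pz + cz))

d = view_distance(100, 800)          # ~335.64 pixels, matching the worked example
x, y = project(10.0, 5.0, 100.0, d)  # a point 100 units beyond the plane shrinks toward centre
```

Note the `math.radians` conversion: the article's \(\tan(100/2)\) is implicitly in degrees, which is what makes the 335.639 figure come out.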
Computational Science people: I originally posted this question at Math Stack Exchange and someone commented that I might get "much better" answers here: I am a novice at numerical methods and Matlab. I am attempting to evaluate the following sum of two triple integrals (it can obviously be written more simply, but you still cannot evaluate it symbolically (?)). I am having trouble getting the $\LaTeX$ to work here, so I reluctantly broke it up into pieces here: I want to find the sum of $$\frac{2}{((1/0.3) - 1)^2}\left(\int_1^{1/0.3}\int_1^{r_1}\int_0^{r_1-r_0}F_1(r_0,r_1,t)\exp\left(-\frac{(0.3)^2 t^2}{4}\right)\,dt\,dr_0\,dr_1 \right),$$ and $$\frac{2}{((1/0.3) - 1)^2}\left(\int_1^{1/0.3}\int_1^{r_1}\int_{r_1-r_0}^{r_1+r_0} F_2(r_0,r_1,t)\exp\left(-\frac{(0.3)^2 t^2}{4}\right)\,dt\,dr_0\,dr_1 \right),$$ where $$F_1(r_0,r_1,t)=\frac{t^2 r_0^3 (0.3)^3}{2r_1^3\sqrt{\pi}}$$ and $$F_2(r_0,r_1,t)=\frac{(0.3)^3\pi^{3/2}(r_0+r_1-t)^4 (t^2+2t(r_0+r_1)-3(r_1-r_0)^2)^2}{288(\frac{4}{3}\pi r_0^3)(\frac{4}{3}\pi r_1^3)}.$$ EDIT (March 2 2013): Someone responded that they got Mathematica to do the integrals symbolically. I just attempted to do this (with simplified versions of the integrals) and Mathematica could only do the outer two of the first one, and stalled on the second one. I would appreciate some help. Here is what I did: I attempted to evaluate $$\int_1^2 \int_1^{r_2} \int_0^{r_2-r_1} \frac{r_1^3 t^2 \exp(-t^2)}{r_2^3}\,dt\,dr_1\,dr_2$$ via Integrate[r1^3/r2^3*t^2*Exp[-t^2], {t, 0, r2 - r1}, {r1, 1, r2}, {r2, 1, 2}] and Mathematica returns (I had trouble with the $\LaTeX$ here because the result is long. I broke it into two equations.
if anyone knows a good way to display this please tell me): $$\int_1^2 \frac{1}{64r_2^2} e^{-1-r_2^2}(2e^{2r_2}(25+r_2(19+2r_2(1+r_2)))-$$ $$e^{1+r_2^2}(32r_2(2+r_2^2)) +\sqrt{\pi}(11+4r_2^2(9+r_2^2))\operatorname{Erf}[1-r_2])\,dr_2.$$ Then I tried to evaluate $$\int_1^2\int_1^{r_2}\int_{r_2-r_1}^{r_2+r_1} \ldots \qquad \qquad \qquad $$ $$\ldots\frac{\exp(-t^2)(r_1+r_2-t)^4(t^2+2t(r_1+r_2)-3(r_2-r_1)^2)^2}{r_1^3 r_2^3}\,dt\,dr_1\,dr_2$$ using Integrate[(r1 + r2 - t)^4*(t^2 + 2*t*(r1 + r2) - 3*(r2 - r1)^2)^2* Exp[-t^2]/r1^3/r2^3, {r2, 1, 2}, {r1, 1, r2}, {t, r2-r1, r2 + r1}] just now, and Mathematica has not returned an answer after about half an hour (but I am having computer network problems right now, and they may be to blame). [END OF MARCH 2 EDIT] I used Matlab's "triplequad" command, with no extra options. I handled the variable limits of integration by means of Heaviside functions, because I didn't know any other way to do it. Matlab gave me $0.007164820144202$. I know Matlab is good software, but I have heard that numerical triple integrals are hard to do accurately, and mathematicians are supposed to be skeptical, so I want some way to verify the accuracy of this answer. The integrals give the expected value of a certain experiment (if anyone wants, I can edit this question to describe the experiment): I implemented the experiment in Matlab using appropriately randomly generated numbers, a million times, and averaged the results. I repeated this process four times. Here are the results (I apologize if I have used the word "trial" improperly): Trial 1: $0.007133292603256$ Trial 2: $0.007120455071989$ Trial 3: $0.007062595022049$ Trial 4: $0.007154940168452$ Trial 5: $0.007215000289130$ Although each trial used a million samples, the simulation values only agree in the first significant digit. They are not close enough to each other for me to determine whether the numerical triple integral is accurate.
So can anyone tell me whether I can trust the result of "triplequad" here, and under what circumstances one can trust it in general? One suggestion I got at Math Stack Exchange was to try other software like Mathematica, Octave, Maple, and SciPy. Is this good advice? Do people actually do numerical work in Mathematica and Maple? Octave is kind of a Matlab clone, so can I assume it uses the same integration algorithms? I haven't even heard of SciPy before and would appreciate any opinions about it. UPDATE: Someone from Math Stack Exchange did it in Maple and got $0.007163085468$. That is agreement to three significant figures. That is a good sign. Also, I would appreciate suggestions on how to enter long, multi-line expressions in $\LaTeX$ in Stack Exchange. Can you use the "aligned" environment here? I tried, and I couldn't get it to work.
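One more cross-check that needs no special software at all (my own sketch, not one of the packages suggested above): a crude nested midpoint rule on the simplified first integral. It is slow and low-order, but it handles the variable limits directly (no Heaviside tricks), and comparing two grid resolutions gives a cheap sanity check on any black-box quadrature result.

```python
import math

def f(t, r1, r2):
    """Integrand of the simplified first integral: r1^3 t^2 exp(-t^2) / r2^3."""
    return r1**3 * t**2 * math.exp(-t**2) / r2**3

def midpoint_triple(n):
    """Nested midpoint rule over the region 1<r2<2, 1<r1<r2, 0<t<r2-r1."""
    total = 0.0
    h2 = 1.0 / n
    for i in range(n):
        r2 = 1.0 + (i + 0.5) * h2
        h1 = (r2 - 1.0) / n
        for j in range(n):
            r1 = 1.0 + (j + 0.5) * h1
            ht = (r2 - r1) / n
            for k in range(n):
                t = (k + 0.5) * ht
                total += f(t, r1, r2) * h2 * h1 * ht
    return total

coarse, fine = midpoint_triple(20), midpoint_triple(40)
# successive refinements should agree closely if the rule has converged,
# and the value must lie in (0, 0.1) by a crude bound on the integrand
assert abs(coarse - fine) < 1e-3
assert 0.0 < fine < 0.1
```

This checks only the simplified integral, not the original pair, but the same pattern (compare `n` against `2n`) is the standard do-it-yourself convergence test.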
This might seem like a silly question, but is it feasible to use a mathematical expression as the name of a function? For example, I'm studying the cross section of Compton scattering and there is a formula for the differential cross section, which is denoted as $\frac{d\sigma}{d\Omega}$. I'd like to express the latter as a function of energy $E$, angle $\theta$ and atomic number $Z$. If I, naively, use $\frac{d\sigma}{d\Omega}[Z\_, E\_, \theta\_] := \ldots$ I get the anticipated error: SetDelayed::write : Tag Times in dσ/dΩ[Z_,E_,θ_] is Protected Instead of the obvious workaround of using a name such as diffSection[Z_,E_,θ_] := ... is there any other way?
Beam-Energy Dependence of Directed Flow of $\Lambda$, $\bar{\Lambda}$, $K^{\pm}$, $K_s^0$, and $\phi$ in Au+Au Collisions Author: STAR Collaboration; Brandenburg, J.D.; Butterworth, J.; Eppley, G.; Geurts, F.; Roberts, J.B.; Tlusty, D. Date: 2018 Abstract: Rapidity-odd directed-flow measurements at midrapidity are presented for $\Lambda$, $\bar{\Lambda}$, $K^{\pm}$, $K_s^0$, and $\phi$ at $\sqrt{s_{NN}}=7.7$, 11.5, 14.5, 19.6, 27, 39, 62.4, and 200 GeV in Au+Au collisions recorded by the Solenoidal Tracker detector at the Relativistic Heavy Ion Collider. These measurements greatly expand the scope of data available to constrain models with differing prescriptions for the equation of state of quantum chromodynamics. Results show good sensitivity for testing a picture where flow is assumed to be imposed before hadron formation and the observed particles are assumed to form via coalescence of constituent quarks. The pattern of departure from a coalescence-inspired sum rule can be a valuable new tool for probing the collision dynamics. Citation: Physical Review Letters 120, no. 6 (2018). American Physical Society: https://doi.org/10.1103/PhysRevLett.120.062301. Type: Journal article Citable link to this page: https://hdl.handle.net/1911/105008 Rights: Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.
I plan to study elliptic functions. Can you recommend some books? What is the relationship between elliptic functions and elliptic curves? Many thanks in advance! McKean and Moll have written the nice book Elliptic Curves: Function Theory, Geometry, Arithmetic, which cleanly illustrates the connection between elliptic curves and elliptic/modular functions. If you haven't seen the book already, you should. As for elliptic functions proper, my suggested books tend to be a bit on the old side, so pardon me if I don't know the newer treatments. Anyway, I quite liked Lawden's Elliptic Functions and Applications and Akhiezer's Elements of the Theory of Elliptic Functions. An oldie but goodie is Greenhill's classic, The Applications of Elliptic Functions; the notation is a bit antiquated, but I have yet to see another book with as wide a discussion of the applications of elliptic functions to physical problems. "At one time... every young mathematician was familiar with $\mathrm{sn}\,u$, $\mathrm{cn}\,u$, and $\mathrm{dn}\,u$, and algebraic identities between these functions figured in every examination." – E.H. Neville Finally, I would be remiss if I did not mention the venerable Abramowitz and Stegun, and its successor work, the DLMF. The chapters on the Jacobi and Weierstrass elliptic functions give a nice overview of the most useful identities, and also point to other fine work on the subject. First of all, the ever-modern Course of Modern Analysis by Whittaker and Watson. For a more introductory style, I highly recommend V. Prasolov and Y. Solovyev, Elliptic Functions and Elliptic Integrals. The relation between elliptic curves and elliptic functions can be sketched as follows. An elliptic curve is topologically a torus, which can be realized by cutting a parallelogram in $\mathbb{C}$ and identifying its opposite edges.
On the other hand, it can be realized in $\mathbb{CP}^2$ by an algebraic equation of the form $$y^2=x^3+ax+b.$$ Elliptic functions provide a map between the two pictures, which is also called uniformization. Essentially, $x,y$ are given by some elementary elliptic functions of $z$ (the complex coordinate on the parallelogram). Compare this with a more familiar example: the trigonometric functions $\sin$, $\cos$ provide a uniformization of the circle, which can be defined either via an algebraic equation or in parametric form: $$x^2+y^2=1\quad \begin{array}{c}\sin,\cos\\ \Longleftrightarrow \\ \;\end{array}\quad \begin{cases}x=\cos t,\\ y=\sin t,\\ t\in[0,2\pi].\end{cases}$$ There is a classical 3-volume series by C.L. Siegel. It is well written, though the perspective is a little bit outdated. I guess (no book at hand) you can find treatments by Serge Lang in the GTM series as well. I am not sure whether Stein's book on complex analysis treats this topic.
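The trigonometric analogy above is easy to check numerically: every parameter value $t$ lands exactly on the algebraic curve $x^2+y^2=1$. A minimal sketch (my own toy illustration of the uniformization idea, not from the answer):

```python
import math

# The unit circle x^2 + y^2 = 1 and its uniformization x = cos t, y = sin t:
# every parameter value lands (up to rounding) on the algebraic curve.
def on_circle(t, tol=1e-12):
    x, y = math.cos(t), math.sin(t)
    return abs(x**2 + y**2 - 1.0) < tol

print(all(on_circle(2 * math.pi * k / 1000) for k in range(1000)))  # True
```

For the elliptic curve $y^2 = x^3 + ax + b$ the analogous parametrization is $x = \wp(z)$, $y = \wp'(z)/2$ with the Weierstrass $\wp$-function, which plays the role of $\cos$/$\sin$ here.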
You can solve this with a heavy-light decomposition of the tree. This will give you a data structure where the "rotate" operation runs in $O(\log n)$ time and the "retrieve position of leaf" operation runs in $O(\log^2 n)$ time, where $n$ is the number of nodes in the tree. Heavy-light decomposition is a big cannon, so it's possible there might be some simpler solution, but this should work. Let me explain the ideas in several steps. Closure property of affine transformations If $T$ is a rotation around some point in space, note that it can be represented as an affine transformation. Also, the set $\mathcal{T}$ of affine transformations is closed under composition. In particular, if $T_1,T_2$ are two affine transformations, let $T_2 \circ T_1$ denote the result of first applying transformation $T_1$, then applying transformation $T_2$; then $T_2 \circ T_1$ is itself an affine transformation, and its parameters can be efficiently computed from the parameters of $T_1,T_2$. So, if we have a sequence of transformations that should be applied to a point, we can concisely represent the composition of the sequence of transformations by $O(1)$ parameters (namely, the parameters of their composition). This will be useful in a moment. In particular, rather than updating the position of individual nodes, we will store information to help us recover the transformation that should be applied to any particular node. When we want to retrieve the location of a node, we first look up this transformation, then apply it to the original location of that node to obtain its current location, and output that. Warm-up: a path As a warm-up, let's consider the most extreme special case of your situation: where the tree is a path of length $n$, i.e., each node has exactly one child, and there is a single leaf. I'll assume each node in this original tree may have a position associated with it. Data structure.
We can build a data structure for this case where all operations run in $O(\log n)$ time. Build a complete binary tree (of height $\lceil \lg n \rceil$) over these $n$ nodes; to give it a name distinct from your original tree, call this the "index tree". Each node of the original tree is a leaf in the index tree. Each node $w$ of the index tree corresponds to a consecutive subpath of the original tree/path (namely, the nodes corresponding to the leaves of the index tree that are descendants of $w$ in the index tree). Now each node $v$ of the original tree has a transformation $T_v$ associated with it. Suppose a node $w$ of the index tree corresponds to the subpath of nodes $v_1,\dots,v_k$ in the original tree. Then we will label $w$ with the transformation $T_{v_k} \circ \cdots \circ T_{v_1}$. This will help us to retrieve the position of points in the original tree. Rotation operations. To support rotation operations, if we want to rotate node $v$ in the original tree and all its descendants, then we follow the path (in the index tree) from $v$ to the root of the index tree. By construction, there are only $\lg n$ such nodes to visit. We'll need to update the label on each of these nodes in the index tree. That is easy. In particular, the label $T_w$ on each node $w$ in the index tree can be computed from the labels $T_{w_1},T_{w_2}$ on its two children $w_1,w_2$ as $T_w = T_{w_2} \circ T_{w_1}$. So, to handle a rotation operation on node $v$ in the original tree, we update the label on $v$ in the index tree, then follow the path (in the index tree) from $v$ to the root of the index tree and recompute the label of each such node in the index tree. All of this can be done in $O(\log n)$ time. Retrieval operations. If we want to retrieve the location of the point associated with node $v$ in the original tree, we can do that in $O(\log n)$ time, too. Consider the path in the original tree from its root to $v$. This is a subpath of the original tree.
It turns out that it can be expressed as the disjoint union of $O(\lg n)$ subpaths, where each subpath corresponds to a node in the index tree. Let $w_1,\dots,w_k$ be those nodes in the index tree. Then the transformation that needs to be applied is $T_{w_k} \circ \cdots \circ T_{w_1}$; we apply this to the original location of $v$, and output that. This can all be done in $O(\log n)$ time. So this handles the case of a path, i.e., a tree of depth $n$. This shows that it is possible to handle imbalanced trees. But how do we handle the general case? I'll show that next. General case: an arbitrary tree To handle an arbitrary tree, we will first build a heavy-light decomposition of the tree. This expresses the edges of the tree as a union of (disjoint) heavy paths, plus some light edges, with the property that any path from the root to some leaf visits at most $O(\log n)$ light edges. We'll treat each individual heavy path as a case of the warm-up above, i.e., we'll build one index tree per heavy path, to keep track of the transformations associated with the nodes in the heavy path. Also, we'll have a path tree that stores the light edges, and we'll store the transformation associated with the head of a light edge in that node of the path tree. Rotation operations. To handle a rotation operation, we update the associated node in the path tree (if it is the head of a light edge) or do the appropriate update operation on the corresponding index tree (if it is the head of a heavy edge). This takes $O(\log n)$ time at worst. Retrieval operations. To retrieve the location of a node $v$ in the original tree, we follow the path in the path tree from the root to $v$. This will involve traversing at most $O(\log n)$ light edges. It also traverses at most $O(\log n)$ heavy paths.
In each heavy path, we might potentially traverse many vertices of the heavy path (possibly many more than $O(\log n)$ of them), but we don't need to visit all of them; since that is a consecutive subpath of the heavy path, we can quickly retrieve the transformation associated with that subpath (i.e., the composition of the transformations of the nodes in that subpath) using the index tree for that heavy path. This takes $O(\log n)$ time per heavy path, and there are at most $O(\log n)$ heavy paths to visit. Finally, we compose all of these transformations and apply the result to the original location of $v$. Naively, the running time is $O(\log n)$ time for the light edges, plus $O(\log n) \times O(\log n)$ time for the heavy paths, for a total of $O(\log^2 n)$ time. This achieves all of your goals, and gives a guaranteed worst-case running time of $O(\log n)$ for the rotate operation and $O(\log^2 n)$ for the lookup operation, no matter what shape the original tree has.
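The path warm-up can be sketched concretely. A minimal illustration (my own naming and a 2-D special case, not the answer's code): a plane point is a complex number $z$, and a rotation by angle $\theta$ about a point $c$ is the affine map $z \mapsto az+b$ with $a=e^{i\theta}$, $b=c-ac$; composing two such maps is again of this form, which is exactly the closure property used above. The "index tree" is then a segment tree whose leaves hold the per-node transforms and whose internal nodes cache compositions, giving $O(\log n)$ updates (rotations) and $O(\log n)$ prefix-composition queries (retrievals):

```python
import cmath

# A rotation by theta about point c, represented as the affine map z -> a*z + b.
IDENT = (1 + 0j, 0j)

def rot(theta, c):
    a = cmath.exp(1j * theta)
    return (a, c - a * c)

def comp(f, g):
    """(f o g)(z) = f(g(z)); g is applied first."""
    a1, b1 = f
    a2, b2 = g
    return (a1 * a2, a1 * b2 + b1)

class IndexTree:
    """Segment tree over path nodes 0..n-1.  Leaf i stores node i's transform;
    an internal node stores the composition of its children, left child first
    (the left child is closer to the path's root).  Both operations are O(log n)."""
    def __init__(self, n):
        size = 1
        while size < n:          # power-of-two size keeps the non-commutative
            size *= 2            # combines in positional order
        self.size = size
        self.t = [IDENT] * (2 * size)

    def update(self, i, f):      # a "rotation" at path node i
        i += self.size
        self.t[i] = f
        while i > 1:
            i //= 2
            self.t[i] = comp(self.t[2 * i + 1], self.t[2 * i])

    def prefix(self, k):         # compose transforms of path nodes 0..k-1
        lo, hi = self.size, self.size + k
        left, right = IDENT, IDENT
        while lo < hi:
            if lo & 1:
                left = comp(self.t[lo], left)
                lo += 1
            if hi & 1:
                hi -= 1
                right = comp(right, self.t[hi])
            lo //= 2
            hi //= 2
        return comp(right, left)

# Path of 5 nodes: rotate node 1 by 90 degrees about the origin, and node 3 by
# 180 degrees about 1+0j.  The leaf's current location is its original location
# pushed through the prefix composition of all 5 node transforms.
tree = IndexTree(5)
tree.update(1, rot(cmath.pi / 2, 0j))
tree.update(3, rot(cmath.pi, 1 + 0j))
a, b = tree.prefix(5)
z0 = 2 + 1j                      # original location of the leaf
print(a * z0 + b)                # current location (approximately 3 - 2j)
```

The general case would keep one such tree per heavy path; only the bookkeeping around light edges changes.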
Given the general (real-valued) equation of a conic section: $$ A x^2 + B xy + C y^2 + D x + E y + F = 0$$ Then what is the circular cone associated with it? Is it unique? And is there a way to derive its equation, expressed in $(A,B,C,D,E,F)$ and $(x,y,z)$? I've done some homework here and here, but wasn't able to extract a simple method to find the cone(s), given the conic section. @Jyrki's suggestion to consider Dandelin spheres is the key. It's possible (even easy) to construct a family of Dandelin spheres from a particular conic, and these give the family of cones you seek. Let's take the case of an ellipse. Viewing the curve's plane edge-on, we visually collapse the ellipse to its major axis $\overline{PQ}$. Let $F$ and $F^\prime$ be the foci. Choose any $R$ such that $\overline{RF}\perp\overline{PQ}$, and draw the circle through $F$ with center $R$. This circle is the intersection of a Dandelin sphere with the plane perpendicular to the ellipse through its major axis. Draw circles about $P$ and $Q$ through $F$ to determine points $S$ and $T$ on $\bigcirc R$. Necessarily, $\overleftrightarrow{PS}$ and $\overleftrightarrow{QT}$ are tangent to the circle. Let $C$ be the point where these lines meet. The incircle of $\triangle PQC$ is our other "Dandelin circle". From here, we see that $C$ is the apex of the cone we seek. (If the tangent lines are parallel, then $C$ is the "point at infinity" and our cone is actually a cylinder.) Those tangent lines are the intersections of the cone with the perpendicular plane. Thus, we get a family of such cones, based on the parameter-point $R$. Finding the equation of the cone family should be relatively straightforward for an ellipse in "standard position". For a general ellipse, a few coordinate transformations will be helpful. We handle parabolas and hyperbolas similarly. My try.
Attempt to find an analytical solution, following the Conic Sections article mentioned above. We have two sets of six variables: the well-known $(A,B,C,D,E,F)$ for the conic section and the unknown $(\phi,\alpha,\gamma,p,q,h)$ for the cone. We also have six equations, and three of them have been solved already in the article: $$ \tan{2\gamma} = \frac{B}{A - C} \\ \cos(\alpha) = \sqrt{\sqrt{B^2 + (A-C)^2}} \\ \cos(\phi) = \sqrt{\frac{(A+C) + \sqrt{B^2 + (A-C)^2}}{2}} $$ So we are left with three other equations - see the article - and three unknowns $(p,q,h)$: $$ D = - 2 A\,p - B\,q + \sin(2\alpha)\cos(\gamma)\,h \\ E = - B\,p - 2 C\,q + \sin(2\alpha)\sin(\gamma)\,h \\ A p^2 + B p q + C q^2 + D p + E q + F = h^2 \left[ \cos^2(\phi) - \sin^2(\alpha) \right] $$ From the first two of these we find, with $h$ as the only unknown left: $$ p = \frac{- 2 C \sin(2\alpha)\cos(\gamma)\,h + B \sin(2\alpha)\sin(\gamma)\,h - B E + 2 C D}{-B^2+4 A C}\\ q = \frac{2 A \sin(2\alpha)\sin(\gamma)\,h - 2 A E - \sin(2\alpha)\cos(\gamma)\,h B + D B}{-B^2+4 A C} $$ Substitution of $p(h)$ and $q(h)$ into the third equation gives a quadratic equation in $h$, which can be solved in principle. MAPLE does it in a page or two, but I find it neither a pleasure nor instructive to reproduce the result here. That's what I meant by "not a simple method"; hence the accepted answer. The cone is certainly not unique. For example, a circle is made by cutting a cone with a plane perpendicular to the cone's axis. But you could get the same circle by cutting a cone that has a smaller apex angle with a plane further from the apex (or whatever it is properly called). You could ask for an equation with a parameter, giving a family of cones, but that is beyond me right now.
It is known from the theory of continued fractions that if $\epsilon<1/2$, then the only $a,b\in\mathbb{Z}$ such that $$(a,b)=1\quad\text{and}\quad\left|\frac{a}{b}-x\right|<\frac{\epsilon}{b^2}$$ are the convergents of the simple continued fraction of $x$. Hence not all $x$ are in the set measured. I am interested in how many irrational numbers can be approximated by rational numbers arbitrarily well, in a certain sense. That's why I am interested in the following expression, in which $m$ denotes Lebesgue measure. $$\lim \limits_{\epsilon \to0^+} m(\{x\in [0,1]:\exists\;\text{infinitely many}\;a,b \in \mathbb{Z},\text{ s.t.}~(a,b)=1\quad\text{and}\quad\left|\frac{a}{b}-x\right|<\frac{\epsilon}{b^2}\})$$ The expression inside the limit is equal to $1$ for every $\epsilon > 0$. In fact, a much more precise statement is true. Theorem (Khinchin): Let $\phi : \mathbb{N} \to \mathbb{R}$ be a monotonically decreasing function. For almost all real numbers $\alpha$, the number of pairs of positive integers $(q, p)$ satisfying $$\left| p - q\alpha \right| < \phi(q)$$ is infinite if $\sum \phi(q)$ diverges, and finite if $\sum \phi(q)$ converges.
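The dichotomy in Khinchin's theorem is easy to illustrate numerically (my own toy check, not part of the theorem's proof): with $\alpha=\pi$ and $\phi(q)=1/q$ (divergent sum), solutions keep appearing, while with $\phi(q)=0.1/q^2$ (convergent sum) none show up in the same range.

```python
from math import pi

def count_solutions(alpha, phi, qmax):
    """Count q in [1, qmax] admitting an integer p with |p - q*alpha| < phi(q)."""
    count = 0
    for q in range(1, qmax + 1):
        dist = abs(q * alpha - round(q * alpha))  # distance from q*alpha to nearest integer
        if dist < phi(q):
            count += 1
    return count

Q = 10_000
many = count_solutions(pi, lambda q: 1.0 / q, Q)     # sum 1/q diverges
few  = count_solutions(pi, lambda q: 0.1 / q**2, Q)  # sum 0.1/q^2 converges
print(many, few)                                     # the second count is 0 here
```

That `few == 0` is no accident: by Legendre's criterion, any solution of $|p-q\pi|<0.1/q^2$ would have to be a convergent denominator of $\pi$ (here 1, 7, 106, 113), and none of those is good enough.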
Hi, can someone provide me some self-study material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks @skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things, and it was essentially a more mathematically detailed version of the first :) 2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus Same thought as you, however I think the major challenge for such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein. However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look for machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown Since the GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible that the solution strategy has some relation to the class of spacetime under consideration, and that might help heavily reduce the parameters needed to simulate them I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components.
The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system. @ooolb Even if that is really possible (I can always talk about things from a non-joking perspective), the issue is that 1) Unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have reduced probability of appearing in dreams), and 2) For 6 years, my dreams have yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams @0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after, say, training it on 1000 different PDEs Actually that makes me wonder: is the space of all coordinate choices larger than the space of all possible moves of Go? enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others Btw, since gravity is nonlinear, do we expect that a region of spacetime frame-dragged in the clockwise direction, superimposed on a spacetime frame-dragged in the anticlockwise direction, will result in a spacetime with no frame dragging? (one possible physical scenario where I can envision this occurring is when two massive rotating objects with opposite angular velocities are on course to merge) Well, I'm a beginner in the study of General Relativity, ok?
My knowledge of the subject is based on books like Schutz, Hartle, Carroll and introductory papers. About quantum mechanics I have poor knowledge yet. So, what I meant by "gravitational double-slit experiment" is: is there a gravitational analogue of the double-slit experiment for gravitational waves? @JackClerk the double-slit experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources. But if we could figure out a way to do it, then yes, GWs would interfere just like light waves. Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: imagine a huge double-slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, we will see the pattern? So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference, spacetime would have a flat geometry, and then if we put a spherical object in this region the metric will become Schwarzschild-like. Pardon, I just spent some naive-philosophy time here with these discussions. The situation was even more dire for Calculus and I managed! This is a neat strategy I have found - revision becomes more bearable when I have The h Bar open on the side. In all honesty, I actually prefer exam season! At all other times - as I have observed in this semester, at least - there is nothing exciting to do. This system of tortuous panic, followed by a reward, is obviously very satisfying.
My opinion is that I need you Kaumudi to decrease the probability of h bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago (Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers) that's true, though back in high school, regardless of code, our teacher taught us to always indent our code to allow easy reading and troubleshooting. We were also taught the 4-spacebar indentation convention @JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy I can do all tasks related to my work without leaving the text editor (of course, such text editor is emacs). The only inconvenience is that some websites don't render in an optimal way (but most of the work-related ones do) Hi to all. Does anyone know where I could write matlab code online (for free)? Apparently another one of my institution's great inspirations is to have a matlab-oriented computational physics course without having matlab on the university's PCs. Thanks. @Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in the $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction.
Use this to form a triangle and you'll get the answer with simple trig :) @Kaumudi.H Ah, it was okayish. It was mostly memory-based. Each small question was worth 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the GPA. @Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but by connecting to the university's server - which means running another environment remotely - I found an older version of matlab). But thanks again. @user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem-solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject; and it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it, probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding. If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
The result $$\zeta(2) = \frac{\pi^2}{6}$$ tends to amaze young students because of its beauty. However, although the literature contains many proofs of this result, I find that none is suitable for someone who hasn't taken some advanced classes in calculus. Moreover, as far as I know, they do not offer an intuitive explanation of why this result should be true. So my question is: What is the key intuition -- that is, the picture -- behind the result? Are there any visual or otherwise intuitive proofs (*) of the statement that can be used to convey a better (or, at least, different) understanding of it (to people who haven't taken advanced calculus yet)? (*) Note that full rigour is not compulsory for the scope of this question (though it is very much appreciated by me personally).
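A proof it is not, but the convergence itself is easy to exhibit numerically: the partial sums approach $\pi^2/6$ with a tail of roughly $1/N$, since $\sum_{n>N} 1/n^2$ lies between $1/(N+1)$ and $1/N$. A quick check:

```python
import math

# Partial sum of 1/n^2 up to N, compared with pi^2/6.
N = 100_000
partial = sum(1.0 / n**2 for n in range(1, N + 1))
target = math.pi**2 / 6
print(partial, target, target - partial)  # the gap is about 1/N, i.e. ~1e-5 here
```

This also hints at why the identity amazes: nothing about the partial sums suggests $\pi$ until you compare with the closed form.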
Main Talks A Framework for Imperfectly Observed Networks David Aldous (Berkeley, USA) Model a network as an edge-weighted graph, where the (unknown) weight $w_e$ of edge $e$ indicates the frequency of observed interactions; that is, over time $t$ we observe a Poisson($t w_e$) number of interactions across edge $e$. How should we estimate some given statistic of the underlying network? This leads to wide-ranging and challenging problems. First-Passage Statistics of Extreme Values Eli Ben-Naim (Los Alamos, USA) Theoretical concepts from nonequilibrium statistical physics such as scaling and correlations are used to analyze first-passage processes involving extreme values. The focus of this talk is statistics of the running maxima, defined as the largest variable in a sequence of random variables. In particular, the running maxima of multiple independent sequences of stochastic variables are compared. The probability that these maxima remain perfectly ordered decays algebraically with the number of random variables, and the exponent characterizing this decay is nontrivial. Exact solutions for the scaling exponents will be discussed for uncorrelated variables as well as for Brownian trajectories, which are correlated. The relevance of such statistical measures for the analysis of empirical data will be discussed as well. Scale-Free Percolation Remco van der Hofstad (Eindhoven, The Netherlands) We propose and study a random graph model on the hypercubic lattice that interpolates between models of scale-free random graphs and long-range percolation. In our model, each vertex $x$ has a weight $W_x$, where the weights of different vertices are i.i.d. random variables.
Given the weights, the edge between $x$ and $y$ is, independently of all other edges, occupied with probability $1-e^{-\lambda W_xW_y/|x-y|^{\alpha}}$, where $\lambda$ is the percolation parameter, $|x-y|$ is the Euclidean distance between $x$ and $y$, and $\alpha$ is a long-range parameter. The most interesting behavior can be observed when the random weights have a power-law distribution, i.e., when $\mathbb P(W_x>w)$ is regularly varying with exponent $1-\tau$ for some $\tau>1$. In this case, we see that the degrees are infinite a.s. when $\gamma =\alpha(\tau-1)/d < 1$, while the degrees have a power-law distribution with exponent $\gamma$ when $\gamma>1$. Our main results describe phase transitions in the positivity of the critical value and in the graph distances in the percolation cluster as $\gamma$ varies. Let $\lambda_c$ denote the critical value of the model. Then, we show that $\lambda_c=0$ when $\gamma < 2$, while $\lambda_c > 0$ when $\gamma > 2$. Further, conditionally on $0$ and $x$ being connected, the graph distance between $0$ and $x$ is of order $\log\log|x|$ when $\gamma < 2$ and at least of order $\log|x|$ when $\gamma > 2$. These results are similar to the ones in inhomogeneous random graphs, where a wealth of further results is known. We also discuss many open problems, inspired both by recent work on long-range percolation (i.e., $W_x=1$ for every $x$) and on inhomogeneous random graphs (i.e., the model on the complete graph of size $n$ where $|x-y|=n$ for every $x\neq y$). This is joint work with Mia Deijfen and Gerard Hooghiemstra. Local Graph Coloring Alexander Holroyd (Microsoft/Seattle, USA) How can we color the vertices of a graph by a local rule based on i.i.d. vertex labels? More precisely, suppose that the color of vertex $v$ is determined by examining the labels within a finite (but perhaps random and unbounded) distance $R$ of $v$, with the same rule applied at each vertex.
(The coloring is then said to be a finitary factor of the i.i.d. labels.) Focusing on $Z^d$, we investigate what can be said about the random variable $R$ if the coloring is required to be proper, i.e. if adjacent vertices must have different colors. Depending on the dimension and the number of colors, the optimal tail decay is either a power law, or a tower of exponentials. I will briefly discuss generalizations to shifts of finite type and finitely dependent processes. Based on joint work with Oded Schramm and David B. Wilson. Linear Spaces of Tilings Richard Kenyon (Brown Univ., USA) It is a well-known theorem that given a rectangle tiling of a rectangle, there is a combinatorially equivalent tiling in which the rectangles have prescribed areas. We discuss generalizations of this result to tilings with other shapes, including random tilings and their scaling limits. Shotgun Assembly of Graphs Elchanan Mossel (UPenn & Berkeley, USA) We will present some results and some open problems related to shotgun assembly of graphs for random generating models. Shotgun assembly of graphs is the problem of recovering a random graph or a randomly labelled graph from small pieces. The question of shotgun assembly presents novel problems in random graphs, percolation, and random constraint satisfaction problems. Based on joint work with Nathan Ross, with Nike Sun, and with Charles Bordenave and Uri Feige. Bounds on the Condensation Threshold in Stochastic Block Models Joe Neeman (UT Austin, USA & Hausdorff Inst., Germany) Consider a random graph model consisting of $k$ communities of equal size. Edges are added to the graph independently at random, but are somewhat more likely to connect two vertices that belong to the same community. We choose parameters so that the average degree of the graph is fixed as the number of vertices tends to infinity. This random graph model is believed to have two phase transitions as the strength of community attachment increases.
At the first transition, it becomes information-theoretically possible to detect the hidden communities from the graph structure; at the second, one can find them in polynomial time. We give upper and lower bounds on this first phase transition, which is known as the "condensation threshold" in statistical physics. Our bounds are asymptotically sharp in some limits, but not in others. This is joint work with Jess B, Cristopher Moore, and Praneeth Netrapalli. Erdös-Rényi Percolation Joel Spencer (NYU, USA) The random graph $G(n,p)$ undergoes a macroscopic change at $p = \frac 1 n$. Why does this occur, how does this occur, and how can $p$ be parametrized to "see" this change? What are the analogies to the Galton-Watson birth process, and to bond percolation in $d$-space? We give a comprehensive description, with (mostly!) proofs, of this vital phenomenon. Superstar Model and Networks from Surgery on Branching Processes Michael Steele (UPenn, USA) If you look at the graph one obtains from Twitter retweets, the largest component is almost a tree. Moreover, in many empirical cases, this tree looks like there is one vertex whose degree evolves linearly over time, while other vertices show "power law" behavior of a common type. This talk mainly considers a one-parameter model that exhibits such behavior. The construction of the model goes via surgery on multi-type branching processes. Part of this work is joint with Bhamidi and Zaman, and part addresses recent work of Jog and Loh. On the Graph Limit Approach to Random Regular Graphs Balazs Szegedy (Budapest, Hungary) Let $G=G(n,d)$ denote the random $d$-regular graph on $n$ vertices. A celebrated result by J. Friedman solves Alon's second eigenvalue conjecture, saying that if $d$ is fixed and $n$ is large then $G$ is close to being Ramanujan. Despite significant effort, much less was known about the structure of the eigenvectors of $G$.
We use a combination of graph limit theory and information theory to prove that every eigenvector of $G$ (when normalized to have length equal to the square root of $n$) has an entry distribution that is close to some Gaussian distribution in the weak topology. Our results also work in the more general setting of almost-eigenvectors. Joint work with A. Backhausz. The Phase Transition in Bounded-Size Achlioptas Processes Lutz Warnke (Cambridge, UK) Perhaps the best understood phase transition is that in the component structure of the uniform random graph process introduced by Erdös and Rényi around 1960. Since the model is so fundamental, it is very interesting to know which features of this phase transition are specific to the model, and which are 'universal', at least within some larger class of processes. Achlioptas processes, a class of variants of the Erdös-Rényi process that are easy to define but difficult to analyze, have been extensively studied from this point of view. Here, settling a number of conjectures and open problems, we show that all 'bounded-size' Achlioptas processes share the key features of the Erdös-Rényi phase transition (in particular the asymptotic behaviour of the size of the largest component above and below the critical window). We do not expect this to hold for Achlioptas processes in general. This is joint work with Oliver Riordan. Analysis of Random Processes on Regular Graphs with Large Girth Nick Wormald (Monash Univ., Australia) We introduce a general class of algorithms and analyse their application to regular graphs of large girth. The algorithms involve random processes which repeat a step whose description changes over time. In particular, we can transfer several results proved for random regular graphs into (deterministic) results about all regular graphs with sufficiently large girth. This reverses the usual direction, which is from the deterministic setting to the random one.
In particular, this approach makes it possible, for the first time, to obtain results equivalent to those established on random regular graphs by a powerful class of algorithms that contain prioritised actions. As a result, we obtain new upper or lower bounds on the size of maximum independent sets, minimum dominating sets, maximum $k$-independent sets, minimum $k$-dominating sets and maximum $k$-separated matchings in $r$-regular graphs with large girth. Joint work with Carlos Hoppen. Minicourses Information Diffusion on Random Graphs: Small Worlds, Percolation and Competition Remco van der Hofstad (Eindhoven, The Netherlands) Many phenomena in the real world can be translated in terms of networks. Examples include the World-Wide Web, social interactions, traffic and Internet, but also the interaction patterns between proteins, food webs and citation networks. Despite their diverse backgrounds, these large-scale networks have a surprising amount in common. Many of these networks are small worlds, in the sense that one requires few links to hop between pairs of vertices. The variability of the number of connections between elements also tends to be enormous, which is related to the scale-free phenomenon. The topology of the networks has a dramatic effect on how information spreads through the network. Information spread can be modelled by various percolation models, the prime example being smallest-weight routing or first passage percolation. In this mini-course, we describe a few real-world networks and some of their empirical properties. We then explain results about distances and first-passage percolation in random graphs, and their implications for competition processes on them. Joint work with Shankar Bhamidi, Mia Deijfen, Gerard Hooghiemstra. 
Friendly Frogs, Stable Marriage, and the Magic of Invariance Alexander Holroyd (Microsoft/Seattle, USA) I'll introduce a simple game in which two players take turns to move either one of two tokens between the points of a fixed set, with the proviso that the distance between the two tokens must decrease. This game and its variants are intimately tied to stable marriage, the topic of the 2012 Nobel prize in economics. Matters become particularly interesting if the points form a random countable set. Probabilistic tools such as invariance, ergodicity, deletion-tolerance and mass-transport provide elegant solutions to games that are seemingly inaccessible to other methods. However, we do not need to go far to reach a variant in which the outcome of the game is an open problem. Based on joint work with Maria Deijfen and James Martin. Random Embeddings of Planar Graphs Richard Kenyon (Brown Univ., USA) We discuss natural coordinates on the space of embeddings of a planar graph with convex faces and pinned boundary, and various probability measures on this space. Erdös-Rényi Percolation Joel Spencer (NYU, USA) The Random Graph $G(n,p)$ undergoes a macroscopic change at $p = \frac 1 n$. Why does this occur, how does it occur, and how can $p$ be parametrized to "see" this change? What are the analogies to the Galton-Watson birth process and to bond percolation in $d$-space? We give a comprehensive description, with (mostly!) proofs, of this vital phenomenon. Random Regular Graphs and Differential Equations Nick Wormald (Monash Univ., Australia) I will discuss some properties of random regular graphs and how to obtain them, in particular the "differential equation method". This lets us predict quite precisely the results of many algorithms applied to random regular graphs.
On this page you will find the documentation for the Maths library. The Rotation Library: rotation.hpp The Rotation Library provides various tools to rotate matrices that represent stiffness, compliance and localisation tensors, and vectors that represent strain and stress tensors. The Statistics Library: stats.hpp The Rotation Library: rotation.hpp Rot_strain(vec, mat) Rotates a strain vector, given a 3×3 orientation matrix. This function computes the 6×6 rotation matrix that rotates the strain vector in Voigt notation, and rotates it accordingly. vec E = randu(6); double theta = 30*pi/180.; mat DR; DR << cos(theta) << sin(theta) << 0 << endr << -1.*sin(theta) << cos(theta) << 0 << endr << 0 << 0 << 1; Rot_strain(E, DR); E is rotated according to DR. Rot_stress(vec, mat) Rotates a stress vector, given a 3×3 orientation matrix. This function computes the 6×6 rotation matrix that rotates the stress vector in Voigt notation, and rotates it accordingly. vec S = randu(6); double theta = 30*pi/180.; mat DR; DR << cos(theta) << sin(theta) << 0 << endr << -1.*sin(theta) << cos(theta) << 0 << endr << 0 << 0 << 1; Rot_stress(S, DR); S is rotated according to DR. fillQE(double, int) Fills the 6×6 strain rotation matrix from a given angle and rotation axis (1, 2 or 3). double theta = 30*pi/180.; int dir = 1; fillQE(theta, dir); QE is filled for an angle of 30° around axis 1. fillQS(double, int) Fills the 6×6 stress rotation matrix from a given angle and rotation axis (1, 2 or 3). double theta = 30*pi/180.; int dir = 1; fillQS(theta, dir); QS is filled for an angle of 30° around axis 1. rotateL(mat, double, int) Rotates the 6×6 stiffness matrix by a given angle around a rotation axis (1, 2 or 3). mat L = randu(6,6); double theta = 30*pi/180.; int dir = 1; rotateL(L, theta, dir); L is rotated by an angle of 30° around axis 1. 
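The Voigt-vector rotations above are equivalent to rotating the underlying 3×3 tensor directly. As an illustrative stand-alone sketch (plain C++, not the library's implementation; the helper names are hypothetical), rotating a strain tensor by an orientation matrix R via eps' = R·eps·Rᵀ:

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

// eps' = R * eps * R^T: the tensor form of what Rot_strain/Rot_stress
// perform on the 6x1 Voigt vectors via their 6x6 rotation matrices.
Mat3 rotate_tensor(const Mat3& eps, const Mat3& R) {
    Mat3 out{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            double s = 0.0;
            for (int k = 0; k < 3; ++k)
                for (int l = 0; l < 3; ++l)
                    s += R[i][k] * eps[k][l] * R[j][l];
            out[i][j] = s;
        }
    return out;
}

// Rotation by theta about axis 3, matching the DR built in the examples above.
Mat3 rotation_about_axis3(double theta) {
    double c = std::cos(theta), s = std::sin(theta);
    Mat3 R{};
    R[0][0] = c;  R[0][1] = s;
    R[1][0] = -s; R[1][1] = c;
    R[2][2] = 1.0;
    return R;
}
```

Rotating the uniaxial strain diag(1, 0, 0) by 90° about axis 3 moves the normal component from the 11 to the 22 position, which gives a quick sanity check for the Voigt-based routines.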
rotateL(mat, double, int) Rotates the 6×6 compliance matrix by a given angle around a rotation axis (1, 2 or 3). mat M = randu(6,6); double theta = 30*pi/180.; int dir = 1; rotateL(M, theta, dir); M is rotated by an angle of 30° around axis 1. rotateA(mat, double, int) Rotates the 6×6 strain concentration tensor by a given angle around a rotation axis (1, 2 or 3). mat A = randu(6,6); double theta = 30*pi/180.; int dir = 1; rotateA(A, theta, dir); A is rotated by an angle of 30° around axis 1. The Statistics Library: stats.hpp normal_distrib(double, const double, const double) Provides an approximation (accurate to about \(10^{-7}\)) of the normal cumulative distribution function of the variable x, with \(mean\) the mean of the distribution and \(dev\) its standard deviation: \(N=1-\frac{e^{-X_{norm}^2/2}}{\sqrt{2\pi}}(b_1k+b_2k^2+b_3k^3+b_4k^4+b_5k^5)\) with \(X_{norm}=(x-mean)/dev\), \(k=\frac{1}{1+0.2316419\,X_{norm}}\) and \(b_1=\small{0.319381530}\), \(b_2=\small{-0.356563782}\), \(b_3=\small{1.781477937}\), \(b_4=\small{-1.821255978}\), \(b_5=\small{1.330274429}\) double x = (double)rand(); const double mean = (double)rand(); const double dev = (double)rand(); double N = normal_distrib(x, mean, dev); ODF_sd(const double, const double, const double, const double, const double, const double) Provides a standard orientation distribution function of \(\theta\), defined by: \(F= a_1 \cos(\theta-m)^{2p_1} + a_2 \cos(\theta-m)^{2p_2+1} \sin(\theta)^{2p_2}\) example: const double theta = (double)rand(); const double p1 = (double)rand(); const double p2 = (double)rand(); const double m = (double)rand(); const double a1 = (double)rand(); const double a2 = (double)rand(); double F = ODF_sd(theta, m, a1, a2, p1, p2); ODF_hard(const double, const double, const double, const double) Gaussian(const double, const double, const double, const double) Provides the Gaussian distribution function of the variable X, with \(mean\) the mean of the distribution and \(sd\) its standard 
deviation. The law is given by: \(G=\frac{ampl}{sd\sqrt{2\pi}} \exp\left(-0.5 \left(\frac{X-mean}{sd}\right)^2\right)\) example: const double X = (double)rand(); const double mean = (double)rand(); const double sd = (double)rand(); const double ampl = 1; double G = Gaussian(X, mean, sd, ampl); Mult_Gaussian(const double, const int, const vec, const vec, const vec) Provides the multi-peak (summed) Gaussian distribution of the variable X, with \(mean\) the vector of means, \(sd\) the vector of standard deviations, and Npeak the number of peaks. \(MG=\Sigma^{Npeak}_{i=1}\frac{ampl(i)}{sd(i)\sqrt{2\pi}} \exp\left(-0.5 \left(\frac{X-mean(i)}{sd(i)}\right)^2\right)\) example: const double X = (double)rand(); const int Npeak = (int)rand(); const vec mean = randu(Npeak); const vec sd = randu(Npeak); const vec ampl = randu(Npeak); double MG = Mult_Gaussian(X, Npeak, mean, sd, ampl); Lorentzian(const double, const double, const double, const double) Provides the Lorentzian distribution function of the variable X, with \(mean\) the location parameter, which specifies the peak location of the distribution, and \(Width\) the full width at half maximum. The function is given by: \(L=\frac{Width \cdot Ampl}{2\pi\left[(X-mean)^2+\left(\frac{Width}{2}\right)^2\right]}\) example: const double X = (double)rand(); const double mean = (double)rand(); const double Width = (double)rand(); const double ampl = 1; double L = Lorentzian(X, mean, Width, ampl); Mult_Lorentzian(const double, const int, const vec, const vec, const vec) Provides the multi-peak (summed) Lorentzian distribution of the variable X, with \(mean\) the vector of location parameters, \(Width\) the vector of full widths at half maximum, and Npeak the number of peaks. 
The function is given by: \(ML=\Sigma^{Npeak}_{i=1}\frac{Width(i) \cdot Ampl(i)}{2\pi\left[(X-mean(i))^2+\left(\frac{Width(i)}{2}\right)^2\right]}\) example: const double X = (double)rand(); const int Npeak = (int)rand(); const vec mean = randu(Npeak); const vec Width = randu(Npeak); const vec ampl = randu(Npeak); double ML = Mult_Lorentzian(X, Npeak, mean, Width, ampl); PseudoVoigt(const double, const double, const double, const double, const double, const double) Provides the Pseudo-Voigt function \(P\) of the variable X. This function is the weighted sum of a Gaussian and a Lorentzian function which have the same position and the same area. The expression is given by: \(P= \eta L+(1-\eta)G = \eta\frac{Width_{Lor} \cdot Ampl}{2\pi\left[(X-mean)^2+\left(\frac{Width_{Lor}}{2}\right)^2\right]}+(1-\eta)\frac{ampl}{sd_{gau}\sqrt{2\pi}} \exp\left(-0.5\left(\frac{X-mean}{sd_{gau}}\right)^2\right)\) with \(\eta\) the Lorentzian factor. example: const double X = (double)rand(); const double eta = (double)rand(); const double mean = (double)rand(); const double Width_Lor = (double)rand(); const double sd_gau = (double)rand(); const double ampl = 1; double Pv = PseudoVoigt(X, eta, mean, Width_Lor, sd_gau, ampl); Mult_PseudoVoigt(const double, const int, const vec, const vec, const vec, const vec, const vec) Provides the multi-peak (summed) Pseudo-Voigt distribution of the variable X, given by: \(MP=\Sigma^{Npeak}_{i=1}\left[\eta(i) L(i)+(1-\eta(i))G(i)\right] =\Sigma^{Npeak}_{i=1}\eta(i)\frac{Width_{Lor}(i) \cdot Ampl(i)}{2\pi\left[(X-mean(i))^2+\left(\frac{Width_{Lor}(i)}{2}\right)^2\right]}+(1-\eta(i))\frac{ampl(i)}{sd_{gau}(i)\sqrt{2\pi}} \exp\left(-0.5\left(\frac{X-mean(i)}{sd_{gau}(i)}\right)^2\right)\) example: const double X = (double)rand(); const int Npeak = (int)rand(); const vec eta = randu(Npeak); const vec mean = randu(Npeak); const vec Width_Lor = randu(Npeak); const vec sd_gau = randu(Npeak); const vec ampl = randu(Npeak); double MPv = Mult_PseudoVoigt(X, Npeak, eta, mean, Width_Lor, sd_gau, ampl); Pearson7(const double, const double, const double, const double, const double) Provides the \(7^{th}\) Pearson function 
\(P_7\) of the variable X, with \(invWidth\) a normalization factor and \(shape\) the shape parameter. The expression is given by: \(P_7= max\left(1+\frac{(invWidth(X-mean))^2}{shape}\right)^{-shape}\) example: const double X = (double)rand(); const double invWidth = (double)rand(); const double mean = (double)rand(); const double shape = (double)rand(); const double max = 1; double P_7 = Pearson7(X, mean, invWidth, shape, max); Mult_Pearson7(const double, const int, const vec, const vec, const vec, const vec) Provides the multi-peak (summed) \(7^{th}\) Pearson function of the variable X, with \(invWidth\) the vector of normalization factors, \(shape\) the vector of shape parameters, and Npeak the number of peaks. The expression is given by: \(MP_7=\Sigma^{Npeak}_{i=1}max(i)\left(1+\frac{(invWidth(i)(X-mean(i)))^2}{shape(i)}\right)^{-shape(i)}\) example: const double X = (double)rand(); const int Npeak = (int)rand(); const vec invWidth = randu(Npeak); const vec mean = randu(Npeak); const vec shape = randu(Npeak); const vec max = randu(Npeak); double MP_7 = Mult_Pearson7(X, Npeak, mean, invWidth, shape, max);
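For illustration, the Gaussian, Lorentzian and Pseudo-Voigt peak shapes documented above can be sketched in plain C++. This is a hypothetical stand-alone sketch, not the library's actual implementation, and it assumes the Lorentzian's \(2\pi\) factor multiplies the whole denominator:

```cpp
#include <cmath>

const double PI = std::acos(-1.0);

// Gaussian peak: ampl/(sd*sqrt(2*pi)) * exp(-0.5*((X-mean)/sd)^2)
double gaussian(double X, double mean, double sd, double ampl) {
    double z = (X - mean) / sd;
    return ampl / (sd * std::sqrt(2.0 * PI)) * std::exp(-0.5 * z * z);
}

// Lorentzian peak: Width*Ampl / (2*pi*[(X-mean)^2 + (Width/2)^2])
double lorentzian(double X, double mean, double width, double ampl) {
    double d = X - mean;
    double half = width / 2.0;
    return width * ampl / (2.0 * PI * (d * d + half * half));
}

// Pseudo-Voigt: eta * L + (1 - eta) * G, sharing the same mean and amplitude.
double pseudo_voigt(double X, double eta, double mean,
                    double width_lor, double sd_gau, double ampl) {
    return eta * lorentzian(X, mean, width_lor, ampl)
         + (1.0 - eta) * gaussian(X, mean, sd_gau, ampl);
}
```

With ampl = 1 both single peaks integrate to 1 over the real line, so the Pseudo-Voigt mixture does too for any \(\eta \in [0,1]\).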
Search for new resonances in $W\gamma$ and $Z\gamma$ Final States in $pp$ Collisions at $\sqrt{s}=8\,\mathrm{TeV}$ with the ATLAS Detector (Elsevier, 2014-11-10) This letter presents a search for new resonances decaying to final states with a vector boson produced in association with a high transverse momentum photon, $V\gamma$, with $V= W(\rightarrow \ell \nu)$ or $Z(\rightarrow ... Fiducial and differential cross sections of Higgs boson production measured in the four-lepton decay channel in $\boldsymbol{pp}$ collisions at $\boldsymbol{\sqrt{s}}$ = 8 TeV with the ATLAS detector (Elsevier, 2014-11-10) Measurements of fiducial and differential cross sections of Higgs boson production in the ${H \rightarrow ZZ ^{*}\rightarrow 4\ell}$ decay channel are presented. The cross sections are determined within a fiducial phase ...
The plane spanned by r' and r'' is the osculating plane that a curve is "closest to lying in" at that point. This generalizes to higher dimensional hyperplanes for higher dimensional curves, right? Like, (by thinking about Taylor polynomials?) the 3d hyperplane spanned by the first three derivatives is the best fitting one. @N.Maneesh (1) says the integral of it is exactly $2\pi$, so the integral of the real part is $2\pi$. But (2) says the most the real part could be is 1, so it needs to be 1 almost everywhere on the interval. Since f is a polynomial, it must be identically 1. analysis-2 eg was half a semester of metric spaces and half a semester of multivariable calculus done badly with riemann integration (which is one of the major things one should learn in a first analysis course imo) was sandwiched half-assedly in between it was terribad one learns more analysis by going through rudin than sitting in these courses I don't think ability really matters at the end of the day, what matters is sticking to it, whatever you are doing. You spent a whole day and solved a problem that you wanted to solve; that's how one does math I believe that if you truly want to develop academically you should develop as a person as well and that implies developing yourself socially and academically. That will happen I guess if you interact outside our comfort zones such as academics etc. @Albas the notion of enjoying college life is synonymous to lowkey adolescent hedonism to be honest so i dont think those two are compatible. 
i mean the process of learning doesn't pay off in the short term, it's painful and unpleasant because you realize how pathetic you are and there's so little you actually understand and even though you understand it how slow you are to internalize it 2 its sort of depressing if you linger on it really but ofc you can learn and socialize at the same time its just that the postmodern culture equates socialization with partying Yup that's the issue isn't it, conforming people to certain norms which are mostly hollow. But again I don't have much of a problem with partying even though I don't really prefer it. If people are partying, drinking etc it's their thing. If they know what their priorities are then they know how to get themselves back. I know many quite distinguished people in science (mathematics specifically) who have done "partying" in their undergraduate/graduate studies but are no less in learning I think that it's more of a problem of how mathematics is put forward as a subject. It's said to be a subject of prodigies and geniuses. A subject if you don't have anything unique in yourself you cannot succeed. This kind of an outlook forces people to build deep insecurities in themselves about their own knowledge and they start shying away from learning anything new, experimenting anything new. Whereas it's actually a lot of hard work, a lot of failures. There aren't any moments. There are culminations. sure, but i dont think its a result of the social view of math. anyone with minimum working knowledge in math can experience this learning mechanism, where you see some technique, try to apply it to solve problems, fail innumerably many times, and then culminate in to a proper understanding of the technique the idea is that's how learning always ever works, this should not be a point of demotivation groups of order $p^r$ has a non trivial centre (had to look it up, I forgot). So, in case of $p^2$, there are only a few possibilities. 
Centre is of order $1$ (impossibility); of order $p$ and of order $p^2$. suppose, $o(Z) \neq p^2$ then $Z$ is cyclic. But, $G/Z$ cannot be non trivial cyclic. I know. The problem is that here I get a scholarship which kind of forces me to do a "summer project" each summer where I have to write up a report and give a seminar in the end. It's mostly learning stuff from a book but that's what they call it. The question though that you gave @SubhasisBiswas is a very nice exercise. I remember it troubling me a little in my group theory course. I am just studying the two at the moment and let's see where that takes me. There was this person here who did his project on the topic of the atiyah singer index theorem and its relations to the heat equations. That was kinda cool. I only learnt the Toeplitz index theorem and some basic things about Fredholm operators because I had to write a report on the works of an Indian mathematician for a non-credit course on intro to mathematics in my first sem, and I chose Patodi I didn't actually write anything about the super general Atiyah-Patodi-Singer index theorem, but just mentioned Toeplitz and Gauss-Bonnet as examples of index theorems I just understand Gauss Bonnet that too only for surfaces. I just have heard this but isn't atiyah singer index some generalisation of the Riemann Roch theorem. The guy who did his project was talking about it in his presentation. I am taking this course on algebraic curves next semester (will Riemann Roch pop up there?) I know like algebra till rings and modules. Is that going to be enough? Do I need to do some more stuff? I think they will follow Fulton or Shafarevich @BalarkaSen: Did you receive my message above where I mentioned my email? I just checked my email and saw that there has been no email from you. In any case, if you send me those images via email, please give me a ping. Here's something which is true. 
If there is a homomorphism $\varphi : G \to G'$ such that $\varphi(H) \subset H'$ so that $\varphi|_H : H \to H'$ is an isomorphism and the induced homomorphism $G/H \to G'/H'$ from $\varphi$ is an isomorphism, then $\varphi$ is an isomorphism between $G$ and $G'$. This is the "corrected" statement for the statement I was giving a counterexample of at first For yours, the correct statement is simply that if there is an isomorphism $\varphi : G \to G'$ such that $\varphi(H) \subset H'$ such that $\varphi|_H : H \to H'$ is also an isomorphism, then the induced homomorphism $G/H \to G'/H'$ is an isomorphism @SubhasisBiswas I just realized I again gave a counterexample to something different: $G \cong G'$, $G/H \cong G'/H'$ but $H \not \cong H'$. Sigh. You are asking $G/H \cong G'/H'$, $H \cong H'$ but $G \not \cong G'$. Take $G = \Bbb Z_6$, $G' = S_3$ and $H = H' = \Bbb Z_3$. Both quotients $G/H$ and $G'/H'$ are $\Bbb Z_2$. Tons of example, eg with $\Bbb Z_2 \times \Bbb Z_2$ and $\Bbb Z_4$, with the diagonally embedded $\Bbb Z_2$ in the first and the cyclic subgroup $\Bbb Z_2$ in the latter (this is even abelian) This is known as the "extension problem": Given $N$ a normal subgroup of $G$, and the quotient $G/N$, if you know the isomorphism type of $N$ and $G/N$, how well do you understand the isomorphism type of $G$? Sorry for the confusion. Basically there are tons of counterexamples to all possible variations to the problem, is the takeaway Guys, can you answer my question? Median is 50th percentile. But there are two kinds of definition of percentiles, exclusive and inclusive. Which definition of a percentile do we use when calculating median? Or can be just choose arbitrary definition and use it to calculate the median? This is what I have done so far (the prev question). @BalarkaSen, although I have missed a short part of the proof, here is what I have done far. I am trying to fully complete it (the missed part). Let $a \in G$, with $o(G)=6$. 
Then either $o(a)=3$, or $o(a)=2$. The number of elements of order $2$ in $G$ must be odd. Therefore, the number of elements of order $2$ is either $3$ or $1$. It cannot be the case where $|\{a \in G: o(a)=2\}|=1$ (I missed the proof here, will come back to it). So, let the elements be $ p, q, r , x, y, e$, with $ o(p)=o(q)=o(r)=2$ and $o(x)=o(y)=3$. Now, we ass… Suppose that $V,W$ are normed linear spaces such that $\mathcal{L}(V,W)$, the space of bounded linear operators $V\rightarrow W$ with the operator norm, is complete. Is $W$ necessarily complete? The converse of this is well-known to be true.
Basics of Graphing Polynomial Functions A polynomial function in one real variable can be represented by a graph. Learning Objectives Discuss the factors that affect the graph of a polynomial Key Takeaways Key Points The graph of the zero polynomial [latex]f(x) = 0[/latex] is the x-axis. The graph of a degree 1 polynomial (or linear function) [latex]f(x) = a_0 + a_1x[/latex], where [latex]a_1 \neq 0[/latex], is a straight line with y-intercept [latex]a_0[/latex] and slope [latex]a_1[/latex]. The graph of a degree 2 polynomial [latex]f(x) = a_0 + a_1x + a_2x^2[/latex], where [latex]a_2 \neq 0[/latex], is a parabola. The graph of any polynomial with degree 2 or greater [latex]f(x) = a_0 + a_1x + a_2x^2 +… + a_nx^n[/latex], where [latex]a_n \neq 0[/latex] and [latex]n \geq 2[/latex], is a continuous non-linear curve. Key Terms polynomial: an expression consisting of a sum of a finite number of terms, each term being the product of a constant coefficient and one or more variables raised to a non-negative integer power, such as [latex]a_n x^n + a_{n-1}x^{n-1} +… + a_0 x^0[/latex]. Importantly, because all exponents are non-negative integers, no term involves division by x. indeterminate: not accurately determined or determinable. term: any value (variable or constant) or expression separated from another term by a space or an appropriate character, in an overall expression or table. Polynomials appear in a wide variety of areas of mathematics and science. To better study and understand a polynomial, we sometimes like to draw its graph. Visible Properties of a Polynomial A typical graph of a polynomial function of degree 3 is the following: Zeros If we factorize the above function we see that [latex]y = \frac{1}{4}(x-2)(x+1)(x+4)[/latex], so the zeros of the polynomial are [latex]2, -1[/latex] and [latex]-4[/latex]. This is one thing we can read from the graph. 
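The zeros read off above are easy to check numerically. A minimal sketch in C++ (the helper name `p` is hypothetical):

```cpp
#include <cmath>

// y = (1/4)(x - 2)(x + 1)(x + 4), the degree-3 example discussed above.
double p(double x) {
    return 0.25 * (x - 2.0) * (x + 1.0) * (x + 4.0);
}
```

Evaluating p at 2, -1 and -4 returns 0, and p(0) = -2 is the constant term, i.e. the y-intercept of the graph.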
In general, we can read the real zeros of a polynomial from its graph by looking at where it meets the [latex]x[/latex]-axis. Behavior Near Infinity As [latex]\frac {x^3}{4}[/latex] tends to be much larger (in absolute value) than [latex]\frac {3x^2}{4} - \frac {3x}{2} - 2[/latex] when [latex]x[/latex] tends to positive or negative infinity, we see that [latex]y[/latex] goes, like [latex]\frac {x^3}{4}[/latex], to negative infinity when [latex]x[/latex] goes to negative infinity, and to positive infinity when [latex]x[/latex] goes to positive infinity. This is again something we can read from the graph. In general, polynomials will show the same behavior as their highest-degree term. Functions of even degree will go to positive or negative infinity (depending on the sign of the coefficient of the highest-degree term) if [latex]x[/latex] goes to infinity. Functions of odd degree will go to negative or positive infinity when [latex]x[/latex] goes to negative infinity and vice versa, again depending on the highest-degree term coefficient. How to Sketch a Graph Conversely, if we know the zeros of a polynomial, and we know how it behaves near infinity, we can already make a nice sketch of the graph. We can exactly draw the points [latex](z,0)[/latex] for each root [latex]z[/latex]. Between two zeros (and before the smallest zero, and after the greatest zero) a function will always be either positive, or negative. We know whether it is positive or negative at infinity. Every time we cross a zero of odd multiplicity (if the number of zeros equals the degree of the polynomial, all zeros have multiplicity one and thus odd multiplicity) we change sign. So in our example, we start with a negative sign until we reach [latex]x = -4[/latex], when our graph rises above the [latex]x[/latex]-axis. 
At some point it starts to descend again, until we reach [latex]x=-1[/latex] and the graph goes below the [latex]x[/latex]-axis again till [latex]x=2[/latex], where it becomes positive again. With this procedure, we can draw a reasonable sketch of our graph, by only looking at the sign of the function and drawing a smooth line with the same sign! However, we can do better. For example, the number of times a function reaches a local minimum or maximum (i.e. a point where the graph descends and then starts to ascend again, or vice versa) is finite. In particular, it is smaller than the degree of the given polynomial. So if you draw a graph, make sure you draw no more local extremum points than you should. Easy Points to Draw Another easy point to draw is the intersection with the [latex]y[/latex]-axis, as this equals the function value in the point zero, which equals the constant term of the polynomial. We also call this the [latex]y[/latex]-intercept of the function. So if we draw our smooth line, we make sure it crosses the [latex]y[/latex]-axis in the same place. In general, the more function values we compute, the more points of the graph we know, and the more accurate our graph will be. Conversely, we can easily read the constant term of the polynomial by looking at its intersection with the [latex]y[/latex]-axis if its graph is given (and indeed, we can readily read any function value if the graph is given). Examples The graph of the zero polynomial [latex]f(x)=0[/latex] is the [latex]x[/latex]-axis, since all real numbers are zeros. The graph of a degree [latex]0[/latex] polynomial [latex]f(x)=a_0[/latex], where [latex]a_0 \not = 0[/latex], is a horizontal line with [latex]y[/latex]-intercept [latex]a_0[/latex]. The graph of a degree 1 polynomial (or linear function)[latex]f(x) = a_0 + a_1x[/latex], where [latex]a_1 \not = 0[/latex], is a straight line with [latex]y[/latex]-intercept [latex]a_0[/latex] and slope [latex]a_1[/latex]. 
The graph of a degree 2 polynomial [latex]f(x) = a_0 + a_1x + a_2x^2[/latex], where [latex]a_2 \neq 0[/latex] is a parabola. The graph of a degree 3 polynomial [latex]f(x) = a_0 + a_1x + a_2x^2 + a_3x^3[/latex], where [latex]a_3 \neq 0[/latex], is a cubic curve. The graph of any polynomial with degree 2 or greater [latex]f(x) = a_0 + a_1x + a_2x^2 +… + a_nx^n[/latex], where [latex]a_n \neq 0[/latex] and [latex]n \geq 2[/latex] is a continuous non-linear curve. The graph of a non-constant (univariate) polynomial always tends to infinity when the variable increases indefinitely (in absolute value). Examples Below are some examples of graphs of functions. The Leading-Term Test Analysis of a polynomial reveals whether the function will increase or decrease as [latex]x[/latex] approaches positive and negative infinity. Learning Objectives Use the leading-term test to describe the end behavior of a polynomial graph Key Takeaways Key Points Properties of the leading term of a polynomial reveal whether the function increases or decreases continually as [latex]x[/latex] values approach positive and negative infinity. If [latex]n[/latex] is odd and [latex]a_n[/latex] is positive, the function declines to the left and inclines to the right. If [latex]n[/latex] is odd and [latex]a_n[/latex] is negative, the function inclines to the left and declines to the right. If [latex]n[/latex] is even and [latex]a_n[/latex] is positive, the function inclines both to the left and to the right. If [latex]n[/latex] is even and [latex]a_n[/latex] is negative, the function declines both to the left and to the right. Key Terms Leading term: The term in a polynomial in which the independent variable is raised to the highest power. Leading coefficient: The coefficient of the leading term. Leading Term, Leading Coefficient and Leading Test All polynomial functions of first or higher order either increase or decrease indefinitely as [latex]x[/latex] values grow larger and smaller. 
It is possible to determine the end behavior (i.e. the behavior when [latex]x[/latex] tends to infinity) of a polynomial function without using a graph. Consider the polynomial function: [latex]f(x)=a_nx^n + a_{n-1}x^{n-1}+…+a_1x+a_0[/latex] [latex]a_nx^n[/latex] is called the leading term of [latex]f(x)[/latex], while [latex]a_n \not = 0[/latex] is known as the leading coefficient. The properties of the leading term and leading coefficient indicate whether [latex]f(x)[/latex] increases or decreases continually as the [latex]x[/latex]-values approach positive and negative infinity: If [latex]n[/latex] is odd and [latex]a_n[/latex] is positive, the function declines to the left and inclines to the right. If [latex]n[/latex] is odd and [latex]a_n[/latex] is negative, the function inclines to the left and declines to the right. If [latex]n[/latex] is even and [latex]a_n[/latex] is positive, the function inclines both to the left and to the right. If [latex]n[/latex] is even and [latex]a_n[/latex] is negative, the function declines both to the left and to the right. Examples Consider the polynomial [latex]f(x) = \frac {x^3}{4} + \frac {3x^2}{4} - \frac {3x}{2} -2.[/latex] In the leading term, [latex]a_n[/latex] equals [latex]\frac {1}{4}[/latex] and [latex]n[/latex] equals [latex]3[/latex]. Because [latex]n[/latex] is odd and [latex]a[/latex] is positive, the graph declines to the left and inclines to the right. This can be seen on its graph below: [latex]g(x) = - \frac{1}{14} (x+4)(x+1)(x-1)(x-3) + \frac{1}{2}[/latex] which has [latex]-\frac {x^4}{14}[/latex] as its leading term and [latex]- \frac{1}{14}[/latex] as its leading coefficient. 
Thus [latex]g(x)[/latex]approaches negative infinity as [latex]x[/latex] approaches either positive or negative infinity; the graph declines both to the left and to the right as seen in the next figure: The Leading Test Explained Intuitively, one can see why we need to look at the leading coefficient to see how a polynomial behaves at infinity: When [latex]x[/latex] is very big (in absolute value ), then the highest degree term will be much bigger (in absolute value) than the other terms combined. For example [latex]x - 1000[/latex] differs a lot from [latex]x[/latex] when [latex]x = 0[/latex] or [latex]1000[/latex], but (relatively) not when [latex]x = 9999999999999[/latex] or [latex]-9999999999999999[/latex]. Indeed, both functions can be described as “very big and positive” in the first point and “very big and negative” in the second. In general, when we have a polynomial [latex]f(x) = a_nx^n + \ldots + a_0[/latex] and the absolute value of [latex]x[/latex] is bigger than [latex]MnK[/latex], where [latex]M[/latex] is the absolute value of the largest coefficient divided by the leading coefficient, [latex]n[/latex] is the degree of the polynomial and [latex]K[/latex] is a big number, then the absolute value of [latex]a_nx^n[/latex] will be bigger than [latex]nK[/latex] times the absolute value of any other term, and bigger than [latex]K[/latex] times the other terms combined! So when [latex]x[/latex] grows very large, [latex]f(x)[/latex] very much resembles its leading term [latex]a_n x^n.[/latex] This function grows very big as [latex]x[/latex] grows very big. Now [latex]a_nx^n[/latex] takes on the sign of [latex]a_n[/latex] if [latex]x^n[/latex] is positive, which happens if [latex]x[/latex] is positive or if [latex]n[/latex] is even, and the opposite sign of [latex]a_n[/latex] if [latex]x^n[/latex] is negative, which happens if [latex]x[/latex] is negative and [latex]n[/latex] is odd. 
(Notice that we do not care about [latex]x = 0,[/latex] since we are only interested in very large [latex]x.[/latex]) Thus, [latex]a_nx^n[/latex] (and hence [latex]f(x)[/latex], in the neighborhood of infinity) goes up as [latex]x[/latex] approaches infinity if [latex]a_n[/latex] is positive, and down if [latex]a_n[/latex] is negative. When [latex]x[/latex] approaches negative infinity and [latex]n[/latex] is odd, the opposite is true.
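The leading-term test can be checked numerically: evaluate each example polynomial at large positive and negative inputs and compare the sign of the result with the sign predicted by the leading term (a quick sketch in Python; the two polynomials are the examples from the text).

```python
# f has odd degree (3) and positive leading coefficient 1/4:
#   predicted to decline on the left and incline on the right.
def f(x):
    return x**3 / 4 + 3 * x**2 / 4 - 3 * x / 2 - 2

# g has even degree (4) and negative leading coefficient -1/14:
#   predicted to decline on both sides.
def g(x):
    return -(x + 4) * (x + 1) * (x - 1) * (x - 3) / 14 + 0.5

big = 1e6
print(f(-big) < 0 and f(big) > 0)   # True: down on the left, up on the right
print(g(-big) < 0 and g(big) < 0)   # True: down on both sides
```

The same check works for any polynomial: only the sign of the leading coefficient and the parity of the degree matter once |x| is large enough.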
I am trying to solve a system of two equations with two unknowns. In these equations I have, apart from constants: Unknown nr 1, $$D_{\perp}$$ Unknown nr 2, $$\omega_C$$ A known function of r: $$\mu(r)$$ The full system looks like: equation1: $$ D_{||}= \frac 1 {2 \omega_0}(-\alpha-1)\sqrt{(-\omega_C+\mu(r)D_{||})^2+(\mu(r)D_{\perp})^2}+ \frac{\alpha}{2\omega_0}\sqrt{(-2\omega_0-\omega_C+\mu(r)D_{||})^2+(\mu(r)D_{\perp})^2}+ \frac{1}{2\omega_0}\sqrt{(2\omega_0-\omega_C+\mu(r)D_{||})^2+(\mu(r)D_{\perp})^2} $$ and equation2: $$ 1= \mu(r) \frac{\alpha}{2\omega_0} \ln{\left[\frac{-2\omega_0-\omega_C+\mu(r)D_{||}+\sqrt{(-2\omega_0-\omega_C+\mu(r)D_{||})^2+(\mu(r)D_{\perp})^2}}{-\omega_C+\mu(r)D_{||}+\sqrt{(-\omega_C+\mu(r)D_{||})^2+(\mu(r)D_{\perp})^2}}\right]}+ \frac{\mu(r)}{2\omega_0} \ln{\left[\frac{2\omega_0-\omega_C+\mu(r)D_{||}+\sqrt{(2\omega_0-\omega_C+\mu(r)D_{||})^2+(\mu(r)D_{\perp})^2}} {-\omega_C+\mu(r)D_{||}+\sqrt{(-\omega_C+\mu(r)D_{||})^2+(\mu(r)D_{\perp})^2}}\right]} $$ So the solution of the system of equations will be $$D_{\perp}(r),\;\omega_C(r)$$. What I've tried to do is simply Solve[{equation1, equation2}, {Dorthogonal, omegaC}] but Mathematica keeps on running forever without any output. I have also tried: DorthFun[r_]:=Solve[{equation1, equation2},{Dorthogonal, omegaC}][[1,1]] omegaCFun[r_]:=Solve[{equation1, equation2},{Dorthogonal, omegaC}][[1,2]] and it just keeps on running... It doesn't return any errors. Just... eternal running. Forrest Gump Syndrome... I have also tried to solve the system putting $$\mu(r)=1$$, without any change. I have given Mathematica about 20 minutes. Should I give it more time, or does this mean that Mathematica cannot solve this? Or is there something I could do differently? Thank you for your help!
My code looks like this (note: my original version had an extra, spurious nested Sqrt in the last denominator, fixed below):

NSolve[{
  Dparallel == (-alpha - 1)/(2 omega0) Sqrt[(-omegaC + Dparallel)^2 + (Dorth)^2] +
    alpha/(2 omega0) Sqrt[(-2 omega0 - omegaC + Dparallel)^2 + (Dorth)^2] +
    1/(2 omega0) Sqrt[(2 omega0 - omegaC + Dparallel)^2 + (Dorth)^2],
  1 == alpha/(2 omega0) Log[(-2 omega0 - omegaC + Dparallel +
        Sqrt[(-2 omega0 - omegaC + Dparallel)^2 + (Dorth)^2])/(-omegaC + Dparallel +
        Sqrt[(-omegaC + Dparallel)^2 + (Dorth)^2])] +
    1/(2 omega0) Log[(2 omega0 - omegaC + Dparallel +
        Sqrt[(2 omega0 - omegaC + Dparallel)^2 + (Dorth)^2])/(-omegaC + Dparallel +
        Sqrt[(-omegaC + Dparallel)^2 + (Dorth)^2])]
}, {Dorth, omegaC}]
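For a transcendental system like this, a numerical root-finder with explicit starting values is usually far more promising than exact Solve/NSolve; in Mathematica the analogous move would be FindRoot[{eq1, eq2}, {{Dorth, 1.}, {omegaC, 0.1}}]. As a sketch of the setup (in Python rather than Mathematica; the values alpha = mu = omega0 = 1 and Dpar = 0 are placeholder assumptions, not values from the question), the system can be written as a residual function and handed to any 2-D root-finder:

```python
from math import sqrt, log

# Placeholder constants, assumptions for illustration only.
alpha, omega0, mu, Dpar = 1.0, 1.0, 1.0, 0.0

def residuals(Dorth, omegaC):
    """Return (RHS1 - LHS1, RHS2 - LHS2); a root (0, 0) solves the system."""
    a = -omegaC + mu * Dpar                # recurring sub-expressions
    b = -2 * omega0 - omegaC + mu * Dpar
    c = 2 * omega0 - omegaC + mu * Dpar
    d = mu * Dorth
    ra, rb, rc = sqrt(a*a + d*d), sqrt(b*b + d*d), sqrt(c*c + d*d)
    eq1 = ((-alpha - 1) * ra + alpha * rb + rc) / (2 * omega0) - Dpar
    eq2 = (mu * alpha * log((b + rb) / (a + ra))
           + mu * log((c + rc) / (a + ra))) / (2 * omega0) - 1.0
    return eq1, eq2

# Evaluate at a trial point; the log arguments must stay positive,
# which constrains where a solver may be started.
r1, r2 = residuals(1.0, 0.1)
print(r1, r2)
```

A function like this can be passed directly to a Newton-type solver; the key practical point is that starting guesses must keep the logarithm arguments positive.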
Suppose we have a sequence of random variables $X_1, X_2,\dots$ taking values in $\left\{ 0,1\right\}$ with $\lim _{n\rightarrow \infty} \frac{\sum_{i=1}^n X_i}{n} = 0.5$, i.e. in the long run this is a fair coin toss. Design a mechanism so that for some $n$ and $k$, the distribution of $\sum_{i=n}^{n+k-1} X_i$ has a smaller variance. It is very clear that if the variables are independent then the distribution is always fixed. Therefore we focus on dependent variables; in particular we consider creating a Markov chain in which $X_{n+k}$ depends on $\frac{\sum_{i=n}^{n+k-1} X_i}{k}$. Call this dependence function $f: [0,1]\rightarrow [0,1]$, which we take to be rotationally symmetric about $(0.5,0.5)$. In order to investigate the effectiveness of our mechanism we have to go deep into the calculations, because mean calculations alone won't work (we need the variance!). Consider $k=2$ with $f(x) = 0.9 - 0.8x$ and the four states $00,01,10,11$. Then the transition matrix is given by: $P = \begin{bmatrix} 0.1&0&0.5&0\\0.9&0&0.5&0\\0&0.5&0&0.9\\0&0.5&0&0.1\end{bmatrix}$ whereas the transition matrix assuming independence is $P = \begin{bmatrix} 0.5&0&0.5&0\\0.5&0&0.5&0\\0&0.5&0&0.5\\0&0.5&0&0.5\end{bmatrix}$ Since $P$ represents a regular Markov chain, it has eigenvalue 1 with steady state vector $\frac{5}{28}(1,1.8,1.8,1)^T$. Compared with $(0.25,0.25,0.25,0.25)^T$, this one concentrates more probability at the central states. What about higher $k$?
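The steady-state claim for $k=2$ is easy to verify numerically: the stationary vector is the eigenvector of the transition matrix for eigenvalue 1. A quick check (Python with NumPy, using the column-stochastic matrix exactly as written above):

```python
import numpy as np

# Transition matrix for k = 2, f(x) = 0.9 - 0.8x, states 00, 01, 10, 11
# (columns are "from" states, so each column sums to 1).
P = np.array([[0.1, 0.0, 0.5, 0.0],
              [0.9, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.9],
              [0.0, 0.5, 0.0, 0.1]])

w, v = np.linalg.eig(P)
k = np.argmin(np.abs(w - 1.0))   # pick the eigenvalue closest to 1
pi = np.real(v[:, k])
pi = pi / pi.sum()               # normalize to a probability vector
print(pi)                        # ~ (5/28) * (1, 1.8, 1.8, 1)
```

The result matches the stated vector $\frac{5}{28}(1, 1.8, 1.8, 1)^T$, i.e. $(5, 9, 9, 5)/28$.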
Consider the case $k=3$ with 8 states $000,001,010,011,100,101,110,111$, where $P =\begin{bmatrix} \frac{1}{10}&0&0&0&\frac{11}{30}&0&0&0\\ \frac{9}{10}&0&0&0&\frac{19}{30}&0&0&0\\ 0&\frac{11}{30}&0&0&0&\frac{19}{30}&0&0\\ 0&\frac{19}{30}&0&0&0&\frac{11}{30}&0&0\\ 0&0&\frac{11}{30}&0&0&0&\frac{19}{30}&0\\ 0&0&\frac{19}{30}&0&0&0&\frac{11}{30}&0\\ 0&0&0&\frac{19}{30}&0&0&0&\frac{9}{10}\\ 0&0&0&\frac{11}{30}&0&0&0&\frac{1}{10}\\\end{bmatrix}$ With steady state vector, according to Mathematica, $c(0.161905, 0.397403, 0.397403, 0.397403, 0.397403, 0.397403, 0.397403, 0.161905)^T$ (where $c$ is the constant making the probabilities sum to 1). Surely there is a higher tendency to stay at the centre, but it looks less centre-located than $k=2$. Of course it's very hard to determine the variance from the steady state vector of the Markov chain, as each state represents a series of random variables, but we can plot the distribution for it as follows. Here we sample 10000 times for $\sum_{i=1}^{100} X_i$. Here the black line is the standard binomial distribution, red is $k=2$, green is $k = 8$, blue is $k=25$; mathematically the line tends to the standard binomial distribution as $k \rightarrow \infty$. If we take a function that is closer to $f(x) = 0.5$ (take the distance in an inner product space, if you like) the result will, of course, be closer to the original distribution. For the above graph, red refers to $y = 0.9-0.8x$, green is $y = 0.8-0.6x$ while blue is $0.7-0.4x$, and the difference is obvious. Convexity also affects the ability of 'forcing' the RV back to the mean, using the same idea (distance between functions). The variance is naturally smaller if $f(x)$ is larger approaching $0.5^-$ and smaller from $0.5^+$. If you do some Fourier analysis you can 'force' the random variable sequence into a perfect alternating 0 1 0 1 0 1 0 1!!!
The red line refers to $f(x) = 0.5 + 0.4((1-x)^5-x^5)$, the green line is $f(x) = 0.9-0.8x$ and the blue line is $f(x) = 0.5+0.4\cos (\pi x)$. Let's demonstrate the Fourier series correlation here. For the function $f: [0,1]\rightarrow [0,1]$ where $f(x) = 1$ for $x\in [0,0.5]$ and $f(x) = 0$ for $x\in (0.5,1]$ we have $f(x) = \frac{1}{2} + \sum_{n=1}^{\infty}\frac{2\sin ((4n-2)\pi x)}{(2n-1)\pi}$. Truncating the values that exceed 1 or fall below 0, we have the following distribution by taking $1,2,3$ terms with $k=15$: We put a large graph here because it takes great effort to separate the three lines. Zooming in gives We see (once more) that the Fourier series converges very quickly despite the occurrence of Gibbs' phenomenon here. There's an interesting question: why does the distribution converge to something non-trivial instead of going to the impulse $\delta (\frac{n}{2})$, where the sequence is perfectly 1 0 1 0 ......? I'll leave this question to the reader, as the solution is quite simple. I don't intend to 'teach' something here, but creating a sequence of dependent random variables is a very practical topic and the variance can play an important role. For instance, in some round-based RPG battle you certainly don't want your game to be dominated by randomness, so it makes sense to restrict the randomness so that it's about the same for both sides over most, if not all, short periods. Faster convergence is what we want when tweaking the variance. Food for thought. 1) Why does the distribution using the Fourier series as $f(x)$ not converge to an impulse, where the sequence of distributions has no variance? You may want to change the following variables to see what happens: (a) $n$, the length of the simulated sequence; (b) $k$, the length of correlation; (c) the number of terms in the Fourier series, or the truncation; (d) numerical precision. 2) Is it possible to calculate the variance directly from the Markov chain steady state probability?
3) By inverting the idea we can't create a sequence of RVs of larger variance directly; for instance, $f(x) = 0.5-0.4\cos (\pi x)$ doesn't give a sequence of larger variance. Think about whether it is possible, then see if there are practical applications for it. 4) For the Markov chain case $k=3$, why are the steady state probabilities for all 6 states except $000,111$ equal? (That looks 'unusual'.) R code, for a given $f(x)$ passed in as g:

f = function(n, k, g) {
  s = NULL
  for (i in 1:k) {
    s = c(s, rbinom(1, 1, 0.5))       # seed the first k tosses fairly
  }
  for (i in (k+1):n) {
    prev_mean = mean(s[(i-k):(i-1)])  # mean of the previous k tosses
    s = c(s, rbinom(1, 1, g(prev_mean)))
  }
  return(sum(s))
}
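The R simulation is easy to mirror in Python; the sketch below reruns the $k=2$, $f(x)=0.9-0.8x$ experiment and compares the empirical variance of $\sum_{i=1}^{100} X_i$ against independent fair tosses (the sample sizes and seed are arbitrary choices):

```python
import random

def dependent_sum(n, k, g, rng):
    """Simulate n tosses where P(X_i = 1) = g(mean of the previous k tosses)."""
    s = [rng.randint(0, 1) for _ in range(k)]   # the first k tosses are fair
    for i in range(k, n):
        prev_mean = sum(s[i - k:i]) / k
        s.append(1 if rng.random() < g(prev_mean) else 0)
    return sum(s)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

rng = random.Random(0)
g = lambda x: 0.9 - 0.8 * x
dep = [dependent_sum(100, 2, g, rng) for _ in range(2000)]
ind = [dependent_sum(100, 2, lambda x: 0.5, rng) for _ in range(2000)]
print(variance(dep), variance(ind))  # dependent variance is much smaller
```

The independent case should hover near the binomial variance $100 \cdot 0.25 = 25$, while the negatively-correlated chain comes out far below it, which is exactly the effect the plots above illustrate.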
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs. Yeah, here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class. I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like "yeah, the algebraists all place out, so I'm assuming everyone here is an analyst and doesn't care about commutative algebra". Then the year after, another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere, and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric. It's got 3 "underground" floors (quotation marks because the place is on a very tall hill, so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake; it's real nice. The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly. And then there's one weird area called the math bunker that's trickier to access: you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building). It's hard to get a feel for which places are good at undergrad math.
Highly ranked places are known for having good researchers, but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad. I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore). In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things, and that seems to become a bonus. One of my professors said it to describe a bunch of REUs: it basically boils down to problems that some of these give their students which nobody really cares about, but which undergrads could work on and get a paper out of. @TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students. In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$ The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $... "If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions $f$ on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$". This took me way longer than it should have. Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values on its diagonal. What will that mean when it is multiplied with a given vector $(x,y)$, and how will the magnitude of that vector change?
Alternatively, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2. Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$. Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements' worth of variables is taking up more than a single line in LaTeX, and that is really bugging my desire to keep things straight. hmm... I wonder if you can argue via the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$. For example, writing $x = ac+bd\delta$, $y = bc+ad$ and $\gamma = e+f\sqrt{\delta}$, we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$, and then you can argue with the ring properties of $\Bbb{Q}$, thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$. I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems; that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of them. One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
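The associativity computation being discussed can also be delegated to a machine: encode $a+b\sqrt{\delta}$ as a pair $(a,b)$ with the multiplication rule quoted above and check $(\alpha\otimes\beta)\otimes\gamma = \alpha\otimes(\beta\otimes\gamma)$ on exact rationals (a sketch; the sample values and the choice $\delta=2$ are arbitrary):

```python
from fractions import Fraction as F

DELTA = F(2)  # delta = 2, i.e. elements a + b*sqrt(2); any rational works

def mul(p, q):
    """Multiplication rule: (a,b) x (c,d) = (ac + bd*delta, bc + ad)."""
    a, b = p
    c, d = q
    return (a * c + b * d * DELTA, b * c + a * d)

alpha = (F(1, 2), F(3))
beta = (F(-2), F(5, 7))
gamma = (F(4), F(-1, 3))
lhs = mul(mul(alpha, beta), gamma)
rhs = mul(alpha, mul(beta, gamma))
print(lhs == rhs)  # True on these samples
```

This of course only spot-checks particular values; the full proof still needs the ring properties of $\Bbb{Q}$, exactly as suggested above.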
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious. (but seriously, the best tactic is overpowered...) Extensions are such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible, e.g. the surreals are the largest ordered field possible. It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable), what is the proof that an embedding exists for any ordered field? Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? "Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example: CH is independent of ZFC, meaning you can neither prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH, or that derive CH; thus if your set of axioms contains those, then you can decide the truth value of CH in that system. @Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder whether one can show that is false by finding a finite sentence and procedure that can produce infinity, but so far I have failed. Put another way, an equivalent formulation of that (possibly open) problem is: > Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object? If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't.
In fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity, however, is not good enough, as implicitly pointed out when the many users who engaged with my rambles always managed to find counterexamples escaping every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy-of-infinity book. The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items. The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... Oh great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic. Thus given $s$ transcendental, minimising $|P(s)|$ will be given as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $$0<\left|x-\frac{p}{q}\right|<\frac{1}{q^n}.$$ Do these still exist if the axiom of infinity is blown up? Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each $M$ form a monotonically increasing sequence, which converges by the ratio test; therefore, by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework. There's this theorem in Spivak's Calculus: Theorem 7. Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $$f'... and neither Rolle's theorem nor the mean value theorem needs the axiom of choice. Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure. Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else, I need to finish that book to comment. typo: neither Rolle's theorem nor the mean value theorem needs the axiom of choice, nor an infinite set > are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
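The partial sums being described are easy to compute exactly: the sketch below builds $\sum_{k=1}^{M} b^{-k!}$ for $b=10$ (the classic Liouville-type series) with exact rational arithmetic and checks that the sequence is strictly increasing and bounded above, which is the convergence argument in miniature:

```python
from fractions import Fraction
from math import factorial

def partial_sum(M, b=10):
    """Exact partial sum of the series sum_{k=1}^M 1/b^(k!)."""
    return sum(Fraction(1, b ** factorial(k)) for k in range(1, M + 1))

sums = [partial_sum(M) for M in range(1, 5)]
assert all(s < t for s, t in zip(sums, sums[1:]))  # strictly increasing
assert all(s < Fraction(1, 5) for s in sums)       # bounded above by 1/5
print(float(sums[-1]))  # ≈ 0.110001
```

Every partial sum here is a plain rational number, so nothing in the computation appeals to a completed infinity; only the passage to the limit $L$ does.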
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states: If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r, \theta)$, we have the defining relations $$r\sin \theta=x=\pm \sqrt{R^2-b^2} \sin\sigma, \qquad r\cos\theta=z=R\cos\sigma$$ I am having a tough time visualising what this is. Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$; is the point $z = 0$ a removable singularity, a pole, an essential singularity, or a non-isolated singularity? Since $\cos(\frac{1}{z}) = 1- \frac{1}{2z^2}+\frac{1}{4!z^4} - \dots = (1-y)$, where $y=\frac{1}{2z^2}+\frac{1}{4!...$ I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $... No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA... The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why? mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function to it. Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$.
Then are the following true: (1) If $x=y$ then $x\sim y$. (2) If $x=y$ then $y\sim x$. (3) If $x=y$ and $y=z$ then $x\sim z$. Basically, I think that all three properties follow if we can prove (1), because if $x=y$ then, since $y=x$, by (1) we would have $y\sim x$, proving (2). (3) will follow similarly. This question arose from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$. I don't know whether this question is too trivial. But I have not yet seen any formal proof of the following statement: "Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$." That is definitely a new person; not going to classify as RHV yet, as other users have already put the situation under control it seems... (comment on many many posts above) In other news: > C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000 C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999 C -3.6485676800000002 0.0734728100000000 -1.4738058999999999 C -2.9689624299999999 0.9078326800000001 -0.5942069900000000 C -2.0858929200000000 0.3286240400000000 0.3378783500000000 C -1.8445799400000003 -1.0963522200000000 0.3417561400000000 C -0.8438543100000000 -1.3752198200000001 1.3561451400000000 C -0.5670178500000000 -0.1418068400000000 2.0628359299999999 probably the weirdest bunch of data I have ever seen, with so many 000000s and 999999s. But I think that to prove the implication for transitivity, the inference rule MP seems to be necessary. But that would mean that for logics in which MP fails we wouldn't be able to prove the result. Also in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?
@AlessandroCodenotti A precise formulation would help in this case, because I am trying to understand whether a proof of the statement which I mentioned at the outset really depends on the equality axioms or on the FOL axioms (without equality axioms). This would allow, in some cases, defining an "equality-like" relation for set theories in which we don't have the Axiom of Extensionality. Can someone give an intuitive explanation of why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$? The context is Taylor polynomials, so $x\to 0$. I've seen a proof of this, but intuitively I don't understand it. @schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is of the same order (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$. @GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course. Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdots+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1.$ I have to prove that $f(z)=z^{n}.$ I tried it as: As $|f(z)|\leq 1$ for $|z|\leq 1$ we must have the coefficients $a_{0},a_{1},\dots,a_{n-1}$ be zero, because by the triangul... @GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested when $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0? Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$.
In linear algebra, we frequently use the fact that a set of eigenvectors with pairwise distinct eigenvalues is linearly independent. Hoffman and Kunze prove this fact (see the proof of the second lemma on page 186) by using some elementary facts about polynomials. Here we prove the linear independence of eigenvectors using the well-ordering principle: Well-ordering principle Suppose \(S\) is a non-empty subset of the natural numbers, \(S \subseteq \mathbf{N}\). Then \(S\) has a minimal element. The well-ordering principle is equivalent to mathematical induction, and is thus a fundamental property of the natural numbers. Like induction, the well-ordering principle is frequently used to prove properties which hold for every natural number. The following is a common strategy for applying the well-ordering principle to prove that every natural number satisfies some property \(P\): Suppose \(S\) is the set of natural numbers that do not possess property \(P\). By the well-ordering principle, if \(S\) is not empty it must contain a smallest element, \(x_0\). Construct from \(x_0\) some \(x \in S\) with \(x < x_0\). The existence of \(x\) contradicts the minimality of \(x_0\), implying that \(S\) has no minimal element. Therefore, \(S\) must be empty (by the well-ordering principle), hence \(P\) holds for every natural number. We are now ready to apply the technique described above to prove the main result. Theorem Let \(V\) be a vector space over \(\mathbf{F}\) and \(T : V \to V\) a linear operator. Suppose \(v_1, v_2, \ldots, v_m\) are eigenvectors with corresponding eigenvalues \(\lambda_1, \lambda_2, \ldots, \lambda_m\) where \(\lambda_i \neq \lambda_j\) for all \(i \neq j\). Then the set \[ S = \{v_1, v_2, \ldots, v_m\} \] is linearly independent. Proof Suppose to the contrary that \(S\) is linearly dependent. That is, there exist \(a_1, a_2, \ldots, a_m \in \mathbf{F}\), not all zero, such that \[ \sum_{i = 1}^m a_i v_i = 0.
\] Let \(k\) be the smallest natural number such that there exist \(k\) indices \(I = \{i_1, i_2, \ldots, i_k\}\) and coefficients \(a_{i_1}, a_{i_2}, \ldots, a_{i_k} \neq 0\) satisfying \[ \sum_{j = 1}^k a_{i_j} v_{i_j} = 0. \] Such a minimal \(k\) exists by the well-ordering principle. (Note that \(k \geq 2\): since eigenvectors are nonzero, a single term \(a_{i_1} v_{i_1}\) with \(a_{i_1} \neq 0\) cannot vanish.) Since the \(v_i\) correspond to distinct eigenvalues, we have \[ (T - \lambda_i I) v_i = 0 \] while \[ (T - \lambda_i I) v_j \neq 0 \text{ for } i \neq j. \] Therefore, \[ (T - \lambda_{i_1} I) \sum_{j = 1}^k a_{i_j} v_{i_j} = \sum_{j = 2}^k b_{i_j} v_{i_j} = 0 \] where \(b_{i_j} = a_{i_j}(\lambda_{i_1} - \lambda_{i_j}) \neq 0\), since the eigenvalues are distinct. In particular, \(I' = \{i_2, i_3, \ldots, i_k\}\) is a set of \(k-1\) indices with nonzero coefficients whose corresponding linear combination of the \(v_i\)s is zero. This contradicts the minimality of \(k\). Therefore, no nontrivial linear combination of the vectors in \(S\) can be zero, so \(S\) is linearly independent.∎
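The theorem can be sanity-checked numerically: take an operator with distinct eigenvalues, collect one eigenvector per eigenvalue, and confirm the collection has full rank (a sketch with NumPy; the particular matrix is an arbitrary example):

```python
import numpy as np

# An operator with three distinct eigenvalues 1, 2, 3 (upper triangular,
# so the eigenvalues are exactly the diagonal entries).
T = np.array([[1.0, 5.0, -2.0],
              [0.0, 2.0,  4.0],
              [0.0, 0.0,  3.0]])

eigvals, eigvecs = np.linalg.eig(T)
assert len(set(np.round(eigvals, 8))) == 3   # eigenvalues are pairwise distinct
# Columns of eigvecs are eigenvectors; by the theorem they must be independent.
rank = np.linalg.matrix_rank(eigvecs)
print(rank)  # 3
```

Full rank (here 3) is exactly linear independence of the three eigenvectors, matching the conclusion of the proof.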
Graph

Two-place operations over a graph are employed in a number of problems in graph theory. Let $G_1(V_1,E_1)$ and $G_2(V_2,E_2)$ be graphs such that $V_1 \cap V_2 = \emptyset$ and $E_1 \cap E_2 = \emptyset$. The union of $G_1$ and $G_2$ is the graph $G = G_1 \cup G_2$ with set of vertices $V = V_1 \cup V_2$ and set of edges $E = E_1 \cup E_2$. The product of $G_1$ and $G_2$ is the graph $G = G_1 \times G_2$ whose set of vertices is the Cartesian product $V = V_1 \times V_2$, any two of the vertices $(u_1,u_2)$ and $(v_1,v_2)$ being adjacent if and only if either $u_1=v_1$ and $u_2$ is adjacent to $v_2$, or $u_2=v_2$ and $u_1$ is adjacent to $v_1$. For example, any graph is the union of its connected components; a graph known as the $n$-dimensional unit cube $Q_n$ can be recursively defined by the product operation $Q_n = K_2 \times Q_{n-1}$, where $Q_1=K_2$ is the graph consisting of a pair of vertices connected by one edge.
These operations can also be defined for intersecting graphs, in particular for subgraphs of a given graph. The addition modulo $2$ of two graphs $G_1$ and $G_2$ is defined as the graph $G$ with set of vertices $V = V_1 \cup V_2$ and set of edges $E = (E_1 \cup E_2) \setminus (E_1 \cap E_2)$. Other many-place operations on graphs are also employed. For certain classes of graphs it is possible to find simple operations, a repeated application of which makes it possible to pass from any graph in the given class to any other graph in the same class. With the aid of the operation shown in Fig. A it is possible to pass from any graph to any other graph within the class of graphs with the same set of degrees.

Figure A

The operation shown in Fig. B makes it possible to pass from any triangulation to any other triangulation within the class of planar triangulations (cf. [[Graph, planar|Graph, planar]]).
Latest revision as of 14:09, 28 May 2016

A set $V$ of vertices and a set $E$ of unordered and ordered pairs of vertices; denoted by $G(V,E)$. An unordered pair of vertices is said to be an edge, while an ordered pair is said to be an arc. A graph containing edges alone is said to be non-oriented or undirected; a graph containing arcs alone is said to be oriented or directed. A pair of vertices can be connected by two or more edges (arcs of the same direction) and such edges (arcs) are then said to be multiple. An arc (or edge) can begin and end at the same vertex, in which case it is known as a loop. (A "graph" is sometimes understood to be a graph without loops or multiple edges; in such a case a graph with multiple edges is said to be a multi-graph, whereas one containing both multiple edges and loops is said to be a pseudo-graph.) Vertices connected by an edge or a loop are said to be adjacent. Edges with a common vertex are also called adjacent. An edge (arc) and any one of its two vertices are said to be incident. One says that an edge $\{u,v\}$ connects two vertices $u$ and $v$, while an arc $(u,v)$ begins at the vertex $u$ and ends at the vertex $v$.
Each graph can be represented in Euclidean space by a set of points, corresponding to the vertices, which are connected by lines, corresponding to the edges (or the arcs) of the graph. In three-dimensional space any graph can be represented in such a way that the lines corresponding to edges (arcs) do not intersect at interior points. There are various ways of specifying a graph. Let $u_1,\dots,u_n$ be the vertices of a graph $G(V,E)$ and let $e_1,\dots,e_m$ be its edges. The adjacency matrix corresponding to $G$ is the matrix $A=(a_{i,j})$ in which the element $a_{i,j}$ equals the number of edges (arcs) which join the vertices $u_i$ and $u_j$ (go from $u_i$ to $u_j$) and $a_{i,j}=0$ if the corresponding vertices are not adjacent. In the incidence matrix $B=(b_{i,j})$ of $G$ the element $b_{i,j}=1$ if the vertex $u_i$ is incident to the edge $e_j$, and $b_{i,j}=0$ if the vertex $u_i$ and the edge $e_j$ are not incident. A graph can be specified by lists, for example, by specifying pairs of vertices connected by edges (arcs) or by specifying the set of vertices adjacent to each vertex. Two graphs $G(V,E)$ and $H(W,I)$ are called isomorphic if there is a one-to-one correspondence between the sets of vertices $V,W$ and the sets of edges $E,I$ which preserves the incidence relationship (see also Graph isomorphism). A subgraph $G'(V',E')$ of a graph $G(V,E)$ is defined as a graph with set of vertices $V'\subseteq V$ and set of edges (arcs) $E'\subseteq E$, each one of which is incident with vertices from $V'$ only. A subgraph $G'(V',E')$ is said to be generated or induced by the subset $V'\subseteq V$ if it is a graph with set of vertices $V'$ whose set of edges (arcs) $E'$ consists of all edges (arcs) of $G$ which connect vertices of $V'$. A skeleton subgraph or spanning subgraph $G'(V,E')$ contains all vertices of $G$ and some subset of its edges (arcs) $E'\subseteq E$.
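The adjacency and incidence matrices just described are easy to build in code. A minimal Python sketch (the example graph, including a multiple edge, is invented for illustration):

```python
# Undirected graph on vertices u_1..u_4, given as a list of edges
# (pairs of 0-based vertex indices); edges 1 and 2 form a multiple edge.
edges = [(0, 1), (1, 2), (1, 2), (2, 3)]
n = 4                      # number of vertices
m = len(edges)             # number of edges

# Adjacency matrix: a[i][j] = number of edges joining u_i and u_j.
a = [[0] * n for _ in range(n)]
for i, j in edges:
    a[i][j] += 1
    a[j][i] += 1

# Incidence matrix: b[i][j] = 1 iff vertex u_i is incident to edge e_j.
b = [[0] * m for _ in range(n)]
for j, (i, k) in enumerate(edges):
    b[i][j] = 1
    b[k][j] = 1

# The degree of u_i is the i-th row sum of the adjacency matrix (no loops
# here); the identity sum(d_i) = 2m stated below in the article holds.
degrees = [sum(row) for row in a]
assert sum(degrees) == 2 * m
```

The row sums of the incidence matrix give the same degree sequence, since each edge column contributes a 1 to each of its two endpoints.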
A sequence of edges $(u_0,u_1),\dots,(u_{r-1},u_r)$ is called an edge progression or walk connecting the vertices $u_0$ and $u_r$. An edge progression is called a chain or trail if all its edges are different and a simple chain or path if all its vertices are different. A closed (simple) chain is also called a (simple) cycle. A graph is said to be connected if any pair of its vertices is connected by an edge progression. A maximal connected subgraph of a graph $G$ is said to be a connected component. A disconnected graph has at least two connected components (see also Graph, connectivity of a). The length of an edge progression (chain, simple chain) is equal to the number of edges in the order in which they are traversed. The length of the shortest simple chain connecting two vertices $u_i$ and $u_j$ in a graph $G$ is said to be the distance $d(u_i,u_j)$ between $u_i$ and $u_j$. In a connected undirected graph the distance satisfies the axioms of a metric. The quantity $\max_{u_i} \max_{u_j} d(u_i,u_j)$ is called the diameter, while a vertex $u_0$ for which $\max_{u_j} d(u_0,u_j)$ assumes its minimum value is called a centre of $G$. A graph can contain more than one centre or no centre at all. The degree of a vertex $u_i$ of a graph $G$, denoted by $d_i$, is the number of edges incident with that vertex. If a (loop-free) graph $G$ has $n$ vertices and $m$ edges, then $\sum_{i=1}^n d_i = 2m$. A vertex $u_i$ is said to be isolated if $d_i=0$ and terminal or pendant if $d_i=1$. A graph all vertices of which have the same degree (equal to $k$) is said to be regular of degree $k$. A complete graph has no loops and each pair of vertices is connected by exactly one edge. Let a graph $G(V,E)$ be free from loops or multiple edges; then the complementary graph to $G$ is the graph $\bar{G}(\bar{V},\bar{E})$ in which $\bar{V}=V$ and vertices are adjacent in $\bar{G}$ only if they are not adjacent in $G$.
A graph which is complementary to a complete graph consists of isolated vertices and is known as empty. Many characteristics of a graph $G$ and of its complement $\bar{G}$ are related. In a directed graph $G$ one defines, for each vertex $u_i$, the output (or out) and the input (or in) (semi-) degree as the number of arcs issuing from and entering this vertex, respectively. A complete directed graph is known as a tournament. To each graph $G$ can be assigned a number of graphs which are derived from $G$. Thus, the edge graph $L(G)$ of $G$ is the graph whose vertices correspond to the edges of $G$ and two vertices are adjacent in $L(G)$ if and only if the corresponding edges of $G$ are adjacent. In the total graph $T(G)$ of $G$ the vertices correspond to the elements of $G$, i.e. to vertices and edges, and two vertices in $T(G)$ are adjacent if and only if the corresponding elements in $G$ are adjacent or incident. Many properties of $G$ carry over to $L(G)$ and $T(G)$. Many generalizations of the concept of a "graph" are known, including that of a hypergraph and of a network graph. With the aid of suitable operations it is possible to construct a graph from simpler graphs, to pass from a graph to simpler ones, to subdivide a graph into simpler ones, to pass from one graph to another in a given class of graphs, etc. The most common one-place operations include the removal of an edge (the vertices of the edge are preserved), the addition of an edge between two vertices of a graph, the removal of a vertex together with its incident edges (the graph obtained by removal of a vertex $v$ from a graph $G$ is often denoted by $G-v$), the addition of a vertex (which may be connected by edges with certain vertices of the graph), the contraction of an edge — identification of a pair of adjacent vertices, i.e.
removal of a pair of adjacent vertices and addition of a new vertex which is adjacent to those vertices of the graph which were adjacent to at least one of the vertices which have been removed, and subdivision of an edge — removal of an edge and addition of a new vertex which is joined by an edge to each vertex of the edge which has been removed. Two-place operations over a graph are employed in a number of problems in graph theory. Let $G_1(V_1,E_1)$ and $G_2(V_2,E_2)$ be graphs such that $V_1 \cap V_2 = \emptyset$ and $E_1 \cap E_2 = \emptyset$. The union of $G_1$ and $G_2$ is the graph $G = G_1 \cup G_2$ with set of vertices $V = V_1 \cup V_2$ and set of edges $E = E_1 \cup E_2$. The product of $G_1$ and $G_2$ is the graph $G = G_1 \times G_2$ whose set of vertices is the Cartesian product $V = V_1 \times V_2$, any two of the vertices $(u_1,u_2)$ and $(v_1,v_2)$ being adjacent if and only if either $u_1=v_1$ and $u_2$ is adjacent to $v_2$, or $u_2=v_2$ and $u_1$ is adjacent to $v_1$. For example, any graph is the union of its connected components; a graph known as the $n$-dimensional unit cube $Q_n$ can be recursively defined by the product operation $$Q_n = K_2 \times Q_{n-1}$$ where $Q_1=K_2$ is the graph consisting of a pair of vertices connected by one edge. These operations can also be defined for intersecting graphs, in particular for subgraphs of a given graph. The addition modulo $2$ of two graphs $G_1$ and $G_2$ is defined as the graph $G$ with set of vertices $V = V_1 \cup V_2$ and set of edges $E = (E_1 \cup E_2) \setminus (E_1 \cap E_2)$. Other many-place operations on graphs are also employed. For certain classes of graphs it is possible to find simple operations, a repeated application of which makes it possible to pass from any graph in the given class to any other graph in the same class. With the aid of the operation shown in Fig. A it is possible to pass from any graph to any other graph within the class of graphs with the same set of degrees.
The operation shown in Fig. B makes it possible to pass from any triangulation to any other triangulation within the class of planar triangulations (cf. Graph, planar). The description and study of certain classes of graphs also involves operations and sets of graphs making it possible to obtain any graph of a given class. Operations on graphs are also employed to construct graphs with given properties, to calculate numerical characteristics of graphs, etc. (cf. Graph, numerical characteristics of a). The concept of a "graph" is employed in defining mathematical ideas such as a control system, in certain definitions of an algorithm, of a grammar, etc. The exposition of a number of mathematical theories becomes more easily understood if geometric representations of graphs are employed, e.g. the theory of Markov chains. The concept of a "graph" is widely employed in the formulation and description of various mathematical models in economics, biology, etc.

References

[1] C. Berge, "The theory of graphs and their applications", Wiley (1962) (Translated from French)
[2] O. Ore, "Theory of graphs", Amer. Math. Soc. (1962)
[3] A.A. Zykov, "The theory of finite graphs", 1, Novosibirsk (1969) (In Russian)
[4] F. Harary, "Graph theory", Addison-Wesley (1969), Chapt. 9

Comments

There is as yet no universally accepted terminology in graph theory. In the English literature there are basically three schools of terminology: the French school typified by Berge's books [1] and [a1], the Canadian school typified by the books [a2] and [a3], and the American (especially Michigan) school typified by the books [4] and [a4]. Some of the terminology used in the present article differs from all of these.

References

[a1] C. Berge, "Graphs and hypergraphs", North-Holland (1973) (Translated from French)
[a2] J.A. Bondy, U.S.R. Murty, "Graph theory with applications", Macmillan (1976)
[a3] W.T. Tutte, "Graph theory", Addison-Wesley (1984)
[a4] M. Behzad, G. Chartrand, L.L. Foster, "Graphs and digraphs", Prindle, Weber & Schmidt (1979)
[a5] R.J. Wilson, "Introduction to graph theory", Longman (1985)

How to Cite This Entry: Graph. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Graph&oldid=38862
How can I find a closed-form expression for the following improper integral in a slick way? $$\mathcal{I}= \int_0^\infty \frac{x^{23}}{(5x^2+7^2)^{17}}\,\mathrm{d}x$$

Starting from Thomas Andrews's reformulation, repeated integration by parts gives $$\int_a^\infty{(x-a)^{11}\over x^{17}}dx={11\over16}\int_a^\infty{(x-a)^{10}\over x^{16}}dx=\cdots={11\cdot10\cdots1\over16\cdot15\cdots6}\int_a^\infty{dx\over x^6}={1\over{16\choose5}}{1\over5a^5}$$ so your answer, if I've done all the arithmetic correctly, is $${1\over2\cdot5^{12}}{1\over{16\choose5}}{1\over5\cdot7^{10}}={1\over2^5\cdot3\cdot5^{13}\cdot7^{11}\cdot13}$$

Let $u=5x^2+49$; then this integral is: $$\frac{1}{10}\int_{49}^{\infty}\frac{ \left(\frac{u-49}{5}\right)^{11}}{u^{17}}\,du=\frac{1}{2\cdot 5^{12}}\int_{49}^\infty \frac{(u-49)^{11}}{u^{17}}\,du$$ That's going to be messy, but it isn't hard. We get: $$\frac{(u-49)^{11}}{u^{17}}=\sum_{i=0}^{11}\binom{11}{i}(-49)^{i}u^{-6-i}$$ So an indefinite integral is: $$\sum_{i=0}^{11} \frac{-1}{5+i}\binom{11}{i}(-49)^i u^{-5-i}$$ which is zero at $\infty$, so we need only subtract the value at $u=49$, which gives: $$\int_{49}^\infty \frac{(u-49)^{11}}{u^{17}}\,du = \frac{1}{49^5}\sum_{i=0}^{11}\frac{(-1)^i}{5+i}\binom{11}{i}$$

This is in the form $$ J=\int_0^{\infty} \frac{x^{s-1}}{(a+bx^n)^m} \, dx, \tag{1} $$ with $a=49,b=5,n=2,m=17,s=24$. It is clear that this converges. To do (1), first change variables to $y=(b/a) x^n$, so $dy/y=n\, dx/x$, and $$ J = \frac{1}{n} \int_0^{\infty} \frac{(a/b)^{s/n}y^{s/n-1}}{a^m(1+y)^m} \, dy = \frac{1}{n}\frac{a^{s/n-m}}{b^{s/n}} \int_0^{\infty} \frac{y^{s/n-1}}{(1+y)^m} \, dy.
$$ Now, at this point you can stick the numbers in, and do $\int_0^{\infty} \frac{y^{11}}{(1+y)^{17}} \, dy$, but I'm going to do (1) in general, which is now a matter of evaluating $$ J' = \int_0^{\infty} \frac{y^{s/n-1}}{(1+y)^m} \, dy $$ There are still a number of ways to do this: contour integration is a possibility, although problematic if $s/n$ is an integer, as in this case. The easier way turns out to be to write $$ \frac{1}{(1+y)^m} = \frac{1}{(m-1)!}\int_0^{\infty} \alpha^{m-1} e^{-(1+y)\alpha} \, d\alpha. \tag{2} $$ (Of course, this generalises to non-integers by using the Gamma function.) Putting this into $J'$ and changing the order of integration gives $$ J' = \frac{1}{(m-1)!} \int_0^{\infty} \alpha^{m-1}e^{-\alpha} \left( \int_0^{\infty} y^{s/n-1} e^{-\alpha y} \, dy \right) \, d\alpha. $$ Doing the inner integral is just a matter of using (2) again: it is $$ \int_0^{\infty} y^{s/n-1} e^{-\alpha y} \, dy = \alpha^{-s/n} (s/n-1)!. $$ Then we just have to do the outer integral, which is $$ J' = \frac{(s/n-1)!}{(m-1)!} \int_0^{\infty} \alpha^{m-s/n-1}e^{-\alpha} \, d\alpha = \frac{(s/n-1)!}{(m-1)!}(m-s/n-1)!, $$ applying (2) yet again. Hence the original integral evaluates to $$ J = \frac{a^{s/n-m}}{b^{s/n}}\frac{(s/n-1)!\,(m-s/n-1)!}{n(m-1)!} = \frac{a^{s/n-m}}{b^{s/n}} \frac{1}{s} \binom{m-1}{s/n}^{-1} . $$ Sticking the numbers in gives $$ \mathcal{I} = \frac{49^{-5}}{5^{12}}\frac{11!\,4!}{2(16!)}, $$ which is easy enough to calculate.

$$\mathcal{I}= \int_0^{\infty} \frac{x^{23}}{(5x^2+49)^{17}}\,\mathrm{d}x$$ Let $y=\left(\frac{49b}{1-b}\right)^{1/2}$, which is the sum total of some 4 or 5 substitutions in one go; it is not advisable to use this directly, but rather to go with $y=x^2$, then $z=5+49/y$, then $a=1-5/z$ and then $b=1-a$. Note that the result could be evaluated at $a$, but the $(1-a)^{11}$ is tedious to expand, so I changed the form. Anyway, by the beta function there is no difference.
Now: $$\mathcal{I}= \frac{1}{2\cdot 49^5\cdot 5^{12}}\int_{0}^{1} b^{11}(1-b)^4\,\mathrm{d}b$$ Now using the beta function, or expanding, integrating and adding: $$\int_{0}^{1} b^{11}(1-b)^4\,\mathrm{d}b=\frac{11!\cdot 4!}{16!}$$ So the answer is: $$\boxed{\displaystyle\large\qquad\mathcal{I}=\frac{11!\cdot 4!}{16!\cdot 2\cdot 49^5\cdot 5^{12}}=\frac1{3012333710039062500000}\qquad}$$
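As a sanity check on the answers above, the boxed value and the first answer's prime factorization can be reproduced with exact rational arithmetic:

```python
from fractions import Fraction
from math import factorial

# Boxed result: I = 11! * 4! / (16! * 2 * 49^5 * 5^12)
I = Fraction(factorial(11) * factorial(4),
             factorial(16) * 2 * 49**5 * 5**12)

# Agrees with the first answer's factorization 1/(2^5 * 3 * 5^13 * 7^11 * 13)
assert I == Fraction(1, 2**5 * 3 * 5**13 * 7**11 * 13)
assert I.denominator == 3012333710039062500000
```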
My question concerns Definition 1.3, page 2, of Brownian Motion and Stochastic Calculus by Karatzas & Shreve. Recall that a stochastic process is a collection of random variables (r.v.) $$ X = \{ X_t, t \in [0,\infty) \}. $$ Each of these r.v. is a measurable map from $(\Omega,\mathcal{F})$ (the sample space) to $(S,\mathcal{S})$ (the state space). Then the definition of indistinguishability between two stochastic processes $X$ and $Y$ defined on the same probability space $(\Omega,\mathcal{F}, \mathbb{P})$ is as follows: $X$ and $Y$ are indistinguishable if $$\mathbb{P} (X_t = Y_t, \forall t \in [0,\infty) ) = 1.$$ I am confused by a measurability issue: why is $\bigcap_{t \geq 0} \{ X_t = Y_t \} \in \mathcal{F}$? (It is not a countable intersection.) I think it is implicitly assumed.
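A note on how this is usually resolved (my addition, not from the book): for arbitrary processes the event need not be measurable, but if both $X$ and $Y$ have right-continuous sample paths (with, say, a metric state space), the uncountable intersection collapses to a countable one over the non-negative rationals:

```latex
% Assuming X and Y have right-continuous paths:
\bigcap_{t \ge 0} \{ X_t = Y_t \}
  \;=\; \bigcap_{q \in \mathbb{Q} \cap [0,\infty)} \{ X_q = Y_q \}
  \;\in\; \mathcal{F}
```

Each set on the right involves only two random variables, so the countable intersection is measurable; without such path regularity, the measurability of the event is typically assumed, or the statement is read on a completed probability space.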
TIPS FOR SOLVING QUESTIONS RELATED TO PROFIT AND LOSS:

1. The price at which an article is purchased is called its cost price (C.P.).
2. The price at which the article is sold is called its selling price (S.P.).
3. If the cost price (C.P.) of the article is equal to the selling price (S.P.), then there is no loss or gain.
4. If the selling price (S.P.) > cost price (C.P.), then the seller is said to have a profit or gain: Gain/Profit = S.P. - C.P.
5. If the cost price (C.P.) > selling price (S.P.), then the seller is said to have a loss: Loss = C.P. - S.P.
6. $Gain\% = \left(\frac{Gain \times 100}{C.P.}\right)$
7. $Loss\% = \left(\frac{Loss \times 100}{C.P.}\right)$
8. $S.P. = \left(\frac{100+Gain\%}{100}\times C.P.\right)$
9. $S.P. = \left(\frac{100-Loss\%}{100}\times C.P.\right)$
10. $C.P. = \left(\frac{100}{100+Gain\%}\times S.P.\right)$
11. $C.P. = \left(\frac{100}{100-Loss\%}\times S.P.\right)$
12. If an article is sold at a profit/gain of 30%, then S.P. = 130% of the C.P.
13. If an article is sold at a loss of 20%, then S.P. = 80% of the C.P.
14. When a person sells two similar items, one at a gain of say x%, and the other at a loss of x%, then in this transaction the seller always incurs a loss given by $\left(\frac{x^2}{100}\right)\%$.
15. A single discount equivalent to a discount series of x% and y% given by the seller is equal to $\left(x + y - \frac{xy}{100}\right)\%$.
16. If a trader professes to sell his goods at cost price, but uses false weights, then $Gain\% = \left[\frac{\text{Error}}{\text{True value} - \text{Error}} \times 100\right]\%$.
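Rules 6, 7, 14 and 15 above are mechanical enough to script; here is a small Python sketch (the function names are my own, not part of the original tips):

```python
def gain_percent(cp, sp):
    """Rule 6: Gain% = (S.P. - C.P.) * 100 / C.P. (assumes sp > cp)."""
    return (sp - cp) * 100 / cp

def loss_percent(cp, sp):
    """Rule 7: Loss% = (C.P. - S.P.) * 100 / C.P. (assumes cp > sp)."""
    return (cp - sp) * 100 / cp

def equivalent_discount(x, y):
    """Rule 15: single discount equivalent to successive discounts x% and y%."""
    return x + y - x * y / 100

def loss_on_equal_gain_loss(x):
    """Rule 14: selling one item at +x% and another at -x% loses (x^2/100)%."""
    return x * x / 100

# Example: a 10% discount followed by a 20% discount is a single 28% discount,
# since 100 * 0.9 * 0.8 = 72.
```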
Mathematics > Probability

Title: Some properties of non-linear fractional stochastic heat equations on bounded domains

(Submitted on 4 May 2016 (v1), last revised 16 Dec 2016 (this version, v2))

Abstract: Consider the following stochastic partial differential equation, \begin{equation*} \partial_t u_t(x)= \mathcal{L}u_t(x)+ \xi\sigma (u_t(x)) \dot F(t,x), \end{equation*} where $\xi$ is a positive parameter and $\sigma$ is a globally Lipschitz continuous function. The stochastic forcing term $\dot F(t,x)$ is white in time but possibly colored in space. The operator $\mathcal{L}$ is a non-local operator. We study the behaviour of the solution with respect to the parameter $\xi$, extending the results in \cite{FoonNual} and \cite{Bin}.

Submission history
From: Erkan Nane [view email]
[v1] Wed, 4 May 2016 15:48:18 GMT (12kb)
[v2] Fri, 16 Dec 2016 04:20:21 GMT (10kb)
Phred-scaled Quality Scores

You may have noticed that a lot of the scores that are output by the GATK are in Phred scale. The Phred scale was originally used to represent base quality scores emitted by the Phred program in the early days of the Human Genome Project (see this Wikipedia article for more historical background). Now they are widely used to represent probabilities and confidence scores in other contexts of genome science.

Phred scale in context

In the context of sequencing, Phred-scaled quality scores are used to represent how confident we are in the assignment of each base call by the sequencer. In the context of variant calling, Phred-scaled quality scores can be used to represent many types of probabilities. The most commonly used in GATK is the QUAL score, or variant quality score. It is used in much the same way as the base quality score: the variant quality score is a Phred-scaled estimate of how confident we are that the variant caller correctly identified that a given genome position displays variation in at least one sample.
Phred scale in practice

In today’s sequencing output, by convention, most useable Phred-scaled base quality scores range from 2 to 40, with some variations in the range depending on the origin of the sequence data (see the FASTQ format documentation for details). However, Phred-scaled quality scores in general can range anywhere from 0 to infinity. A higher score indicates a higher probability that a particular decision is correct, while conversely, a lower score indicates a higher probability that the decision is incorrect. The Phred quality score (Q) is logarithmically related to the error probability (E). $$ Q = -10 \log_{10} E $$ So we can interpret this score as an estimate of error, where the error is e.g. the probability that the base is called incorrectly by the sequencer, but we can also interpret it as an estimate of accuracy, where the accuracy is e.g. the probability that the base was identified correctly by the sequencer. Depending on how we decide to express it, we can make the following calculations: If we want the probability of error (E), we take: $$ E = 10 ^{-\left(\frac{Q}{10}\right)} $$ And conversely, if we want to express this as the estimate of accuracy (A), we simply take $$ \begin{eqnarray} A &=& 1 - E \nonumber \\ &=& 1 - 10 ^{-\left(\frac{Q}{10}\right)} \nonumber \end{eqnarray} $$ Here is a table of how to interpret a range of Phred Quality Scores. It is largely adapted from the Wikipedia page for Phred Quality Score. For many purposes, a Phred Score of 20 or above is acceptable, because this means that whatever it qualifies is 99% accurate, with a 1% chance of error.

| Phred Quality Score | Error | Accuracy (1 - Error) |
|---|---|---|
| 10 | 1/10 = 10% | 90% |
| 20 | 1/100 = 1% | 99% |
| 30 | 1/1000 = 0.1% | 99.9% |
| 40 | 1/10000 = 0.01% | 99.99% |
| 50 | 1/100000 = 0.001% | 99.999% |
| 60 | 1/1000000 = 0.0001% | 99.9999% |

And finally, here is a graphical representation of the Phred scores showing their relationship to accuracy and error probabilities.
The red line shows the error, and the blue line shows the accuracy. Of course, as error decreases, accuracy increases symmetrically. Note: You can see that below Q20 (which is how we usually refer to a Phred score of 20), the curve is really steep, meaning that as the Phred score decreases, you lose confidence very rapidly. In contrast, above Q20, both of the graphs level out. This is why Q20 is a good cutoff score for many basic purposes.
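The error/accuracy conversions above are one-liners in code; a small Python sketch (the function names are my own):

```python
import math

def phred_to_error(q):
    """E = 10^(-Q/10): the probability that the call is wrong."""
    return 10 ** (-q / 10)

def phred_to_accuracy(q):
    """A = 1 - E: the probability that the call is correct."""
    return 1 - phred_to_error(q)

def error_to_phred(e):
    """Q = -10 * log10(E), the inverse conversion."""
    return -10 * math.log10(e)

# Reproducing the Q20 row of the table: 1% error, 99% accuracy.
q20_error = phred_to_error(20)
q20_accuracy = phred_to_accuracy(20)
```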
I have a data set which I would like to fit to the function $\quad \quad f(r) = A\cos(kr - \pi/4 + \phi_1)\times\exp(i(-2\log(r) + \phi_2))$, where $k$ is a constant I know and $(A,\phi_1,\phi_2)$ are the fitting parameters. In fact, if I split my data set into real and imaginary parts and fit these separately to the function above, using commands like

solRe = FindFit[ReU, A*Cos[k*x - π/4 + B] Cos[-2 Log[x] + C], {A, B, C}, x]
solIm = FindFit[ImU, A*Cos[k*x - π/4 + B] Sin[-2 Log[x] + C], {A, B, C}, x]

everything works well. I get the same value for B for the two fits, which is encouraging. However, the values for A and C are different. Therefore I would like to fit to the complex ansatz. I tried doing this with

soltot = FindFit[U, A*Cos[k*x - π/4 + B] Exp[I (-2 Log[x] + C)], {A, B, C}, x]

but I get errors. Does anyone have an idea how I could do such a fit?
In Maxwell's treatise, he discusses the potential of two closed curves: I have a lengthy derivation up to equation 15. In equation 16, $\dfrac{dx}{ds} \dfrac{dx}{ds'} + \dfrac{dy}{ds} \dfrac{dy}{ds'} + \dfrac{dz}{ds} \dfrac{dz}{ds'}$ is replaced by $\cos \epsilon$. That is fine. Also, since the strength ($\phi$ and $\phi'$) of each shell is unity, it disappears in equation 16. But what I don't understand is the reversing of sign in equation 16. It is written that "equation 15 gives the potential energy due to the mutual action of the two shells". It is also written that "equation 15 with its sign reversed, when the strength of each shell is unity, is called the potential of two closed curves $s$ and $s'$". Since a thin magnetic shell is equivalent to a closed circuit, the potential energy of one shell due to another should be equivalent to the potential energy of one closed circuit due to another closed circuit. How is it, then, that the sign got reversed in the case of closed curves (circuits)? My derivation:
In this post, we are going to examine how Maximum Likelihood Estimators work, how to make one of our own, and how to use it.

What is Maximum Likelihood Estimation?

It is a statistical technique that strives to maximize the probability of occurrence of a given observation sequence by choosing an appropriate model. The maximum likelihood principle states that the desired probability distribution is the one that makes the observed data 'most likely'. For example, suppose we have 2 models A and B, and a given observation sequence D. Further, let model A claim that using it, the probability of observing the sequence D is 0.7, and let B claim the probability to be 0.8. Then, by the ML Principle, we will select the model which claims the observation sequence to be more likely, that is, we will use the model B. Formally, let \(\mathbb{D}\) be an observation sequence consisting of '\(n\)' observations. We seek a probabilistic model that best explains the observation sequence. Thus, if we have a family of models specified by a model parameter vector \(\underline{\theta} = {(\theta_1, \theta_2 \ldots \theta_k)}^T\), where \(k\) is a positive integer, then we seek to find a model with parameters \(\underline{\hat{\theta}}\) which maximize the likelihood of the observation sequence \(\mathbb{D}\). Therefore, the Maximum Likelihood Objective may be stated as: \(\begin{equation} \label{eq:1} \arg\max_{\underline{\theta}} P(\mathbb{D}|\underline{\theta}) \end{equation}\) Here, \(P(\mathbb{D}| \underline{\theta})\) is the probability of occurrence of sequence \(\mathbb{D}\) given that the model \(\underline{\theta}\) is chosen as the driving model. To better understand this, let us take a few simple but more concrete examples: In a coin tossing experiment, a coin was tossed '\(n\)' times, and the following sequence(s) was/were observed. \(n=1, \mathbb{D}=\{H\}\) where H stands for Heads and T for Tails.
\(n=3, \mathbb{D}=\{H,H,T\}\) We wish to find a model that best explains these observation sequences. So, first we need to understand the aspects of the experiment. A coin has 2 faces, one is called 'Heads' while the other is called 'Tails'. Tossing the coin yields only one of these two outcomes. If you were seeing a coin for the first time, like, if you were some alien landing on earth, you would not know what outcome to expect. Because you know absolutely nothing about the coin, you conduct the experiment of actually tossing it! So, for a single coin toss, the sample space is just these two outcomes, \(\mathbb{S} = \{H,T\}\). As an aside (and as an earthling), we know that coins can have varied behaviour. By virtue of its construction, a coin can be heavier on one side (a loaded coin), in which case we say that the coin has a bias towards one particular outcome. As students of science, we also know that the coin flip and its outcome are governed by physical laws. So, we might set up an elaborate experiment to study the angle of initial hold, flip velocity, wind velocity, air pressure and other parameters and accurately determine the outcome. But that would be much ado about nothing. It is just a coin flip! Instead, we just want to know in a 'general sense of things': 'If I flip this coin, what are the chances that I get Heads?'. We neatly tuck all these things under the bias umbrella and try to give our answer based on that one parameter (for simplicity's sake). Clearly, the alien cannot answer that! He doesn't know anything about it. So, we'd like to see how his notion of bias varies. What he does know is Maximum Likelihood Estimation. So, here is how he proceeds:

Step 1: Assume a model that we believe to best encapsulate the observations

This involves setting up a probability density function. For this simple case, we believe that only the bias affects the outcome. So, let \(\theta\) be the bias in the coin, such that \(P\{H|\theta\} = \theta\).
Since the probabilities of all disjoint events in a sample space must sum to 1, we have \(P\{T|\theta\}= 1-\theta\). Now, our p.d.f. should encapsulate all the information and parameters. Let \(x\) denote our random variable, which takes a value for each outcome. For mathematical convenience, let: \(x = 1 \text{ on Heads (success)}\\ x = 0 \text{ on Tails (failure)}\) So, our p.d.f. may be given as \(f(x|\theta) = \theta^x(1-\theta)^{1-x}\) We could have taken different assignments for our random variable, but we chose this particular form because it has one simplifying property. One of the two parts \(\theta^x\) and \((1-\theta)^{1-x}\) disappears when evaluating for Heads or Tails. So, it neatly simplifies to \(P\{H|\theta\} = \theta\) for our case.

Step 2: Define a Likelihood function

Note that the p.d.f. was defined for a single observation only. For the entire observation sequence, a joint p.d.f. must be defined. However, we believe that previous outcomes do not affect future outcomes. Consequently, we can assume the independence of each observation. Then, the Likelihood function is defined as: \(\begin{equation} \mathcal{L}(\theta|\mathbb{D}) = P(\mathbb{D}|\theta) = P\{x_1,x_2,x_3\ldots,x_n | \theta\} \\ = P(x_1|\theta)P(x_2|\theta)\ldots P(x_n|\theta) = \prod_{i=1}^{n}\theta^{x_i}(1-\theta)^{(1-x_i)} \end{equation}\) Note that \(\mathcal{L}(\theta|\mathbb{D})\) is usually not a probability density function. In fact, it may not even exist in some cases! [1][2]

Step 3: Maximize the Likelihood function

One may proceed as one likes to maximize this objective function. We know that a product is easily converted to a sum by taking its logarithm. So, instead of maximizing the product, we can maximize the log likelihood function and still get the same answer [3].
The log likelihood may be written as: \(\ln P(\mathbb{D}|\theta) = \sum_{i=1}^{n}\{x_i\ln(\theta)+ (1-x_i)\ln(1-\theta)\}\) Now, taking a derivative and setting it to 0 gives us the candidates for a relative maximum or minimum, and we may determine its nature from the second derivative [4]. It is easily shown that the above expression is maximized for: \(\begin{equation} \label{eq:2} \theta = \frac{\sum_{i=1}^{n}{x_i}}{n} \end{equation}\) From our simplified analysis also, we could proceed and find the same result. For instance, in part 1, \(\mathcal{L}(\theta|\mathbb{D}) = P(H|\theta) = \theta\). Here, it will be maximized for \(\theta = 1\). If we used the above generalized formula, we'd obtain the same result. Then in part 2, \(\mathcal{L}(\theta|\mathbb{D}) = P(H,H,T|\theta) = \theta^2(1-\theta)\). This is maximized for \(\theta = 2/3\), which is in agreement with the formula in the above equation. From the alien's point of view, the maths is right. But from our earthling perspective, don't you think that the guesses are a bit too extreme? In particular, if a string of heads is observed, we are more prone to consider it an instance of good luck rather than proclaiming that the coin will always yield Heads. We always like to keep some margin for error. In particular, we would like to believe that the world is usually a fair place, and that coins also are usually fair. So, we would heavily down-weight the possibility of the coin being totally loaded. In such cases, we would like to codify our prior beliefs about the coin in our problem formulation. This is also where we start to depart from MLE towards a Bayesian approach (or MAP, maximum a posteriori estimation). This is one of the shortcomings of MLE. Take our first example again. Let us add another observation to D, and then re-estimate the probabilities with models A and B. Let A state the new probability to be 0.72, and B now have a reduced probability of 0.71. In this case, a mere change of 1 observation would make us switch models.
That seems counter-intuitive, no? However, this does not mean that MLE is not useful. It is extensively used in our mobile phones to decide what code was transmitted by the base station tower. We'll cover that sometime later. P.S.: Here's an interesting bit of trivia: You Can Load a Die, But You Can’t Bias a Coin [1] Wikipedia Talk: Likelihood function [2] What is the reason that a likelihood function is not a pdf? [3] C. M. Bishop, "Pattern Recognition and Machine Learning" [4] In Jae Myung, "Tutorial on maximum likelihood estimation"
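The coin-flip MLE derived above is small enough to sanity-check numerically. Here is a minimal sketch (function names are mine, purely illustrative): the closed-form estimate is just the sample mean, and a brute-force search over the log likelihood lands on the same value.

```python
import math

def bernoulli_mle(observations):
    """Closed-form MLE for a Bernoulli parameter: the sample mean."""
    return sum(observations) / len(observations)

def log_likelihood(theta, observations):
    """Sum of x_i ln(theta) + (1 - x_i) ln(1 - theta), as derived above."""
    return sum(x * math.log(theta) + (1 - x) * math.log(1 - theta)
               for x in observations)

# Part 2 of the example: H, H, T encoded as 1, 1, 0.
data = [1, 1, 0]
theta_hat = bernoulli_mle(data)  # 2/3, matching the derivation

# The closed-form estimate beats every nearby candidate on a grid:
grid = [i / 100 for i in range(1, 100)]
best_on_grid = max(grid, key=lambda t: log_likelihood(t, data))
```

The grid search agrees with the analytic answer to within the grid spacing, which is exactly what the derivative argument predicts.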
Predicting Returns for Ongoing Loans Introduction In a previous paper, we showed how we could predict the future return of a loan at its inception, based on historical data from the marketplace and the loan’s characteristics. Although such information is crucial to decide what to invest in on the primary market, it is also beneficial to be able to re-assess expected returns when loans are already generating payments, in particular to monitor the performance of a portfolio or to compare strategies. The fact that a loan hasn’t defaulted yet bears information that should be used to update its default likelihood and increase its expected return. The goal of the present paper is to offer a mathematical model to factor in such payments. Internal Rate of Return As seen previously, our preferred method for calculating loan performance is the Internal Rate of Return. The monthly rate of return r is the discounting rate such that the Net Present Value, i.e. the sum of the discounted future cash flows minus the original investment, equals 0. Given a loan of amount A that makes m monthly payments of p, r is such that: $$ NPV = 0 = \frac {-A} {(1+r)^0} + \frac {p_1} {(1+r)^1} + \frac {p_2} {(1+r)^2} + \ldots + \frac {p_{m}} {(1+r)^{m}} $$ or: $$ A = \sum_{i=1}^{m} \frac {p_i} {(1+r)^i} $$ The amount p paid each month is constant over time and known in advance, therefore predicting the rate of return only requires estimating the number of payments m. $$ A = p \times \frac{1-(1+r)^{-m}}{r} $$ If the borrower decides to repay the loan early, they will make an extra payment of a different amount p’. That payment is likely to happen out of schedule. We can express the time of that payment as i, a fractional number of months since the issuance date. For instance, the Lending Club loan #69001 was issued on September 25th, 2009, and was repaid early on July 5th, 2012, or i=33.31 months later. Likewise, a loan could default, yet generate some recovery payment at a later date.
We can include an optional adjustment, such as an early re-payment or a late recovery, by adding its present value to the right side of the previous equation. Therefore we have: $$ A = p \times \frac{1-(1+r)^{-m}}{r} + \frac{p'}{(1+r)^i} $$ (eq1) Nota bene: obtaining the annual rate R requires annualizing the monthly rate r with the following formula: $R = (1 + r)^{12} - 1$ Number of Payments Let $h(t)$ be the unconditional probability of a loan defaulting during the month preceding t, calculated as the number of loans defaulting during the month preceding t divided by the total number of loans considered. The probability that a loan makes the first payment is $1-h(1)$. Likewise, the probability of making a second payment is $\left(1-h(1)\right)-h(2)$. By extension, the probability P(k), calculated at issuance, that the loan keeps paying until the kth month equals 1 minus the sum of the probabilities that it defaulted before reaching k: $$ P(k) = 1 - \sum_{i=1}^{k} h(i) $$ Which means in turn that, at issuance, the expected total number of payments m made by a loan of term n is the sum of all those survival probabilities until maturity: $$ m = \sum_{i=1}^{n} \left( 1 - \sum_{j=1}^{i} h(j) \right)$$ When k payments have already been made, the risk of default before reaching k drops to 0. Therefore m becomes: $$ m = \sum_{i=1}^{k} 1 + \sum_{j=k+1}^{n} \left( 1 - \sum_{t=k+1}^{j} h(t) \right) $$ Which simplifies to: $$ m = k + \sum_{j=k+1}^{n} \left( 1 - \sum_{t=k+1}^{j} h(t) \right) $$ (eq2) A central question is the homogeneity of borrowers’ behavior across time and risk profiles. A study of 1,300,000 European consumer loans (Francesca, 2012) shows that while the probability of default is monotonically decreasing only for low-risk borrowers, the profiles of hazard rates remain somewhat similar.
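The expected-payments formula (eq2) translates almost line for line into code. A minimal sketch, assuming the hazard function is supplied as a table h(t) for t = 1..n; the flat 1% monthly hazard below is purely illustrative, not Lending Club data:

```python
def expected_payments(hazard, n, k=0):
    """Expected number of payments (eq2): k already made, term of n months.

    hazard[t] is the unconditional probability of default during month t,
    indexed from 1 to n.
    """
    m = float(k)
    cumulative = 0.0
    for j in range(k + 1, n + 1):
        cumulative += hazard[j]            # sum_{t=k+1}^{j} h(t)
        m += max(0.0, 1.0 - cumulative)    # survival probability up to month j
    return m

# Toy hazard: flat 1% monthly default probability over a 36-month term.
h = {t: 0.01 for t in range(1, 37)}
m0 = expected_payments(h, n=36)         # expectation at issuance
m12 = expected_payments(h, n=36, k=12)  # expectation after 12 payments made
```

The `max(0.0, ...)` guard is my own addition to keep a badly scaled hazard table from producing negative survival probabilities; eq2 itself has no such clamp. Note how the expectation rises once payments have been observed, which is exactly the survivorship effect the paper describes.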
Analyzing the hazard rate for all Lending Club loans by grade A-B, C-D and E-F-G also shows similar curves: (Only the shape similarity is important, not the values themselves, as including young loans makes early defaults over-represented) Predicting Default Occurrences We can estimate $h(t)$, the probability of a loan defaulting between t-1 and t, also called the hazard function, by observing the occurrence of defaults in historical data. Lending Club’s historical data show that out of 23,156 Lending Club loans old enough to have reached maturity, 2,871 have defaulted. Since we know when those defaulting loans stopped paying, we can graph the probability density function, or hazard rate, for defaulting loans with a term of 36 months. The risk of default sharply increases during the first third of the loan's life, then decreases each month until maturity: As seen in a previous paper, we can analyze the effect of the default rate on the ratio of missed payments (term minus payments made, divided by term) through a Monte-Carlo simulation. This relationship is very nearly linear, indicating that the hazard rate is simply proportional to the default rate: In other words, whatever the default probability of a loan, its hazard curve will always have the same shape. For instance, grade A and grade F loans are both 4 times less likely to default in the 34th month than in the 13th.
Therefore, knowing d, the overall probability of default of a loan, and Pdf(n,t), the generic probability distribution function of default occurring between months t-1 and t when the term is n, we have: $$ h(n,t,d) = d \times Pdf(n,t) $$ (eq3) Which means, with (eq2), that the predicted number of payments m is: $$ m = k + \sum_{j=k+1}^{n} \left( 1 - d \sum_{t=k+1}^{j} Pdf(n,t) \right) $$ (eq4) Predicting Return By combining (eq1) and (eq4), we can obtain the monthly rate of return r for a loan of amount A, term n, installment p and default probability d when it has already made k payments: $$ A = p \times \frac{1-(1+r)^{-\left( k + \sum_{j=k+1}^{n} \left( 1 - d \sum_{t=k+1}^{j} Pdf(n,t) \right) \right)}}{r} + \frac{p'}{(1+r)^i} $$ (eq5) Let us take for example the Lending Club loan #3696613, issued on 09-12-2013. The loan has a sub-grade of B2 with an interest rate of 11.14%. The loan amount is \$10,000, its term is 36 months, and the borrower pays back \$328.06 per month, minus 1% service fees. To estimate the probability of default, we take the average default rate amongst loans with the same sub-grade, which is 10.31% amongst 1,280 B2 loans past maturity.
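Once m has been computed from (eq4), extracting the monthly rate from (eq5) is a one-dimensional root-finding problem; with no early-repayment term (p’ = 0) the equation reduces to the annuity formula, and simple bisection suffices. A sketch, with illustrative function names; treating the 1% service fee as a flat haircut on the installment is my simplifying assumption:

```python
def solve_rate(A, p, m, lo=1e-9, hi=1.0, tol=1e-12):
    """Monthly IRR r solving A = p * (1 - (1+r)^-m) / r by bisection.

    Assumes the root lies in (lo, hi); m may be fractional, as eq4 produces.
    """
    def npv_gap(r):
        return p * (1.0 - (1.0 + r) ** (-m)) / r - A

    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv_gap(mid) > 0.0:   # discounted payments still exceed A
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

def annualize(r):
    """Nota bene formula: R = (1 + r)^12 - 1."""
    return (1.0 + r) ** 12 - 1.0

# Loan #3696613-style inputs, ignoring the recovery term p':
r = solve_rate(A=10_000.0, p=328.06 * 0.99, m=33.83)
R = annualize(r)
```

Plugging in m = 33.83, the expectation at zero payments made, the annualized R lands near the 6.80% shown for k = 0 in the table that follows, which suggests this reading of the fee treatment is close to the author's.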
Number of payments made    Expected number of payments    Expected Return
0     33.83     6.80%
1     33.88     6.92%
2     33.96     7.08%
3     34.06     7.28%
4     34.17     7.50%
5     34.29     7.73%
6     34.41     7.97%
7     34.53     8.20%
8     34.65     8.43%
9     34.76     8.66%
10    34.87     8.87%
11    34.98     9.07%
12    35.08     9.26%
13    35.17     9.44%
14    35.26     9.60%
15    35.34     9.75%
16    35.42     9.90%
17    35.49     10.03%
18    35.55     10.14%
19    35.61     10.25%
20    35.66     10.35%
21    35.71     10.44%
22    35.76     10.53%
23    35.80     10.60%
24    35.83     10.67%
25    35.87     10.72%
26    35.90     10.77%
27    35.92     10.82%
28    35.94     10.86%
29    35.96     10.89%
30    35.97     10.91%
31    35.98     10.93%
32    35.99     10.94%
33    35.99     10.95%
34    36.00     10.96%
35    36.00     10.96%
36    36.00     10.96%

Late Payments According to Lending Club, the probabilities of defaulting once a loan is late in payment are as follows:

Status                          Probability of default
In Grace (1 to 15 days late)    23%
Late (16 to 30 days)            49%
Late (31 to 60 days)            62%
Late (61 to 90 days)            78%
Late (91 to 120 days)           84%

These data can be interpolated with a bi-exponential function, such that the probability $h_2(l)$ that a loan won’t make any further payments when it is $l$ days late is: $$ h_2(l) = a_1 + a_2 \cdot e^{-a_3 \cdot l} + a_4 \cdot e^{-a_5 \cdot l} $$ with $a_1 = 1.0191$, $a_2 = -0.3564$, $a_3 = 0.0857$, $a_4 = -0.6653$ and $a_5 = 0.01278$. Historical data do not show what happens to loans that resume paying, and we have to consider that they return to their previous default risk levels. Since the missed payments accumulate, they are to be paid once the loan becomes current again, and late fees shall compensate for the delayed payments. In other words, the loan is ‘back to normal’.
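The bi-exponential fit is cheap to evaluate directly. A sketch using the quoted coefficients; the observation that the curve appears to track Lending Club's buckets at their midpoints (e.g. l = 8 for the 1-to-15-day bucket) is my own reading of the fit, not something the post states:

```python
import math

# Coefficients of the bi-exponential fit, as quoted above:
a1, a2, a3, a4, a5 = 1.0191, -0.3564, 0.0857, -0.6653, 0.01278

def h2(days_late):
    """Probability that a loan l days late never makes another payment."""
    l = float(days_late)
    return a1 + a2 * math.exp(-a3 * l) + a4 * math.exp(-a5 * l)

# h2(0) is approximately 0 (a current loan carries no lateness penalty),
# and the curve rises toward a1 ~ 1.02 as lateness grows.
grace_mid = h2(8)      # near the quoted 23% for the 1-to-15-day bucket
late_31_60_mid = h2(45)  # near the quoted 62% for the 31-to-60-day bucket
```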
Therefore, given (eq4), the expected number of payments m for a loan of term n, default probability d, that has made k payments already and which is l days late is: $$ m = \begin{cases} k & \text{with probability } h_2(l) \\ k + \sum_{j=k+1}^{n} \left( 1 - d \sum_{t=k+1}^{j} Pdf(n,t) \right) & \text{with probability } 1 - h_2(l) \end{cases} $$ Which simplifies to: $$ m = k + \left(1 - h_2(l) \right) \cdot \sum_{j=k+1}^{n} \left( 1 - d \sum_{t=k+1}^{j} Pdf(n,t) \right) $$ (eq6) Combining (eq6) with (eq5) gives: $$ A = p \times \frac{1-(1+r)^{-\left( k + \left(1 - h_2(l) \right) \cdot \sum_{j=k+1}^{n} \left( 1 - d \sum_{t=k+1}^{j} Pdf(n,t) \right) \right)}}{r} + \frac{p'}{(1+r)^i} $$ (eq7) For instance, if loan #3696613 has already made 9 payments and then begins to be late, we have:

Number of days late    Expected number of payments    Expected Return
0     34.76     8.66%
1     33.72     6.58%
2     32.33     3.60%
3     30.71     -0.19%
5     27.92     -7.68%
10    22.91     -24.78%
20    17.48     -49.47%
30    14.75     -64.03%
40    13.14     -72.87%
50    12.07     -78.60%
60    11.30     -82.49%
70    10.74     -85.22%
80    10.31     -87.18%
90    9.99      -88.59%

References Francesca, G., 2012. A Discrete-Time Hazard Model for Loans: Some Evidence from Italian Banking System. American Journal of Applied Sciences. Emmanuel Marot, July 3, 2014
[Click here for a PDF of this post with nicer formatting] This is my first set of notes for the UofT course ECE1229, Advanced Antenna Theory, taught by Prof. Eleftheriades, covering ch. 2 [1] content. Unlike most of the other classes I have taken, I am not attempting to take comprehensive notes for this class. The class is taught on slides that match the textbook so closely that there is little value in taking notes that just replicate the text. Instead, I am annotating my copy of the textbook with little details. My usual notes collection for the class will contain musings on details that were unclear, or in some cases, details that were provided in class but are not in the text (and are too long to pencil into my book). Poynting vector The Poynting vector was written in an unfamiliar form \begin{equation}\label{eqn:chapter2Notes:560} \boldsymbol{\mathcal{W}} = \boldsymbol{\mathcal{E}} \cross \boldsymbol{\mathcal{H}}. \end{equation} I can roll with the use of a different symbol (i.e. not \(\BS\)) for the Poynting vector, but I’m used to seeing a \( \frac{c}{4\pi} \) factor ([6] and [5]). I remembered something like that in SI units too, so was slightly confused not to see it here. Per [3] that something is a \( \mu_0 \), as in \begin{equation}\label{eqn:chapter2Notes:580} \boldsymbol{\mathcal{W}} = \inv{\mu_0} \boldsymbol{\mathcal{E}} \cross \boldsymbol{\mathcal{B}}. \end{equation} Note that the use of \( \boldsymbol{\mathcal{H}} \) instead of \( \boldsymbol{\mathcal{B}} \) is what removes the need for the \( \frac{1}{\mu_0} \) factor, since \( \boldsymbol{\mathcal{H}} = \boldsymbol{\mathcal{B}}/\mu_0 \), assuming linear media and no magnetization.
Typical far-field radiation intensity It was mentioned that \begin{equation}\label{eqn:advancedantennaL1:20} U(\theta, \phi) = \frac{r^2}{2 \eta_0} \Abs{ \BE( r, \theta, \phi) }^2 = \frac{1}{2 \eta_0} \lr{ \Abs{ E_\theta(\theta, \phi) }^2 + \Abs{ E_\phi(\theta, \phi) }^2}, \end{equation} where the intrinsic impedance of free space is \begin{equation}\label{eqn:advancedantennaL1:480} \eta_0 = \sqrt{\frac{\mu_0}{\epsilon_0}} = 377 \Omega. \end{equation} (this is also eq. 2-19 in the text.) To get an understanding of where this comes from, consider the far field radial solutions to the electric and magnetic dipole problems, which have the respective forms (from [3]) of \begin{equation}\label{eqn:chapter2Notes:740} \begin{aligned} \boldsymbol{\mathcal{E}} &= -\frac{\mu_0 p_0 \omega^2 }{4 \pi } \frac{\sin\theta}{r} \cos\lr{\omega t - k r} \thetacap \\ \boldsymbol{\mathcal{B}} &= -\frac{\mu_0 p_0 \omega^2 }{4 \pi c} \frac{\sin\theta}{r} \cos\lr{\omega t - k r} \phicap \\ \end{aligned} \end{equation} \begin{equation}\label{eqn:chapter2Notes:760} \begin{aligned} \boldsymbol{\mathcal{E}} &= \frac{\mu_0 m_0 \omega^2 }{4 \pi c} \frac{\sin\theta}{r} \cos\lr{\omega t - k r} \phicap \\ \boldsymbol{\mathcal{B}} &= -\frac{\mu_0 m_0 \omega^2 }{4 \pi c^2} \frac{\sin\theta}{r} \cos\lr{\omega t - k r} \thetacap \\ \end{aligned} \end{equation} In neither case is there a component in the direction of propagation, and in both cases (using \( \mu_0 \epsilon_0 = 1/c^2\)) \begin{equation}\label{eqn:chapter2Notes:780} \Abs{\boldsymbol{\mathcal{H}}} = \frac{\Abs{\boldsymbol{\mathcal{E}}}}{\mu_0 c} = \Abs{\boldsymbol{\mathcal{E}}} \sqrt{\frac{\epsilon_0}{\mu_0}} = \inv{\eta_0}\Abs{\boldsymbol{\mathcal{E}}} .
\end{equation} A superposition of the phasors for such dipole fields, in the far field, will have the form \begin{equation}\label{eqn:chapter2Notes:800} \begin{aligned} \BE &= \inv{r} \lr{ E_\theta(\theta, \phi) \thetacap + E_\phi(\theta, \phi) \phicap } \\ \BB &= \inv{r c} \lr{ E_\theta(\theta, \phi) \thetacap - E_\phi(\theta, \phi) \phicap }, \end{aligned} \end{equation} with a corresponding time averaged Poynting vector \begin{equation}\label{eqn:chapter2Notes:820} \begin{aligned} \BW_{\textrm{av}} &= \inv{2 \mu_0} \BE \cross \BB^\conj \\ &= \inv{2 \mu_0 c r^2} \lr{ E_\theta \thetacap + E_\phi \phicap } \cross \lr{ E_\theta^\conj \thetacap - E_\phi^\conj \phicap } \\ &= \frac{\thetacap \cross \phicap}{2 \mu_0 c r^2} \lr{ \Abs{E_\theta}^2 + \Abs{E_\phi}^2 } \\ &= \frac{\rcap}{2 \eta_0 r^2} \lr{ \Abs{E_\theta}^2 + \Abs{E_\phi}^2 }, \end{aligned} \end{equation} verifying \ref{eqn:advancedantennaL1:20} for a superposition of electric and magnetic dipole fields. This can likely be shown for more general fields too. Field plots We can plot the fields, or intensity (or log plots in dB of these). It is pointed out in [3] that when there is \( r \) dependence these plots are done by considering the values at a fixed \( r \). The field plots are conceptually the simplest, since that vector parameterizes a surface. Any such radial field with magnitude \( f(r, \theta, \phi) \) can be plotted in Mathematica in the \( \phi = 0 \) plane at \( r = r_0 \), or in 3D (respectively, but also at \( r = r_0\)) with code like that of the following listing Intensity plots can use the same code, with the only difference being the interpretation. The surface doesn’t represent the value of a vector valued radial function, but the magnitude of a scalar valued function, \( f( r_0, \theta, \phi) \), evaluated at fixed \( r_0 \). The surfaces for \( U = \sin\theta, \sin^2\theta \) in the plane are parametrically plotted in fig. 2, and for cosines in fig. 1 to compare with textbook figures. fig 1.
Cosinusoidal radiation intensities fig 2. Sinusoidal radiation intensities Visualizations of \( U = \sin^2 \theta\) and \( U = \cos^2 \theta\) can be found in fig. 3 and fig. 4 respectively. Even for such simple functions these look pretty cool. fig 3. Square sinusoidal radiation intensity fig 4. Square cosinusoidal radiation intensity dB vs dBi Note that dBi is used to indicate that the gain is with respect to an “isotropic” radiator. This is detailed more in [2]. Trig integrals Tables 1.1 and 1.2, produced with tableOfTrigIntegrals.nb, have some of the sine and cosine integrals that are pervasive in this chapter. Polarization vectors The text introduces polarization vectors \( \rhocap \), but doesn’t spell out their form. Consider a plane wave field of the form \begin{equation}\label{eqn:chapter2Notes:840} \BE = E_x e^{j \phi_x} e^{j \lr{ \omega t - k z }} \xcap + E_y e^{j \phi_y} e^{j \lr{ \omega t - k z }} \ycap. \end{equation} The \( x, y \) plane directionality of this phasor can be written \begin{equation}\label{eqn:chapter2Notes:860} \Brho = E_x e^{j \phi_x} \xcap + E_y e^{j \phi_y} \ycap, \end{equation} so that \begin{equation}\label{eqn:chapter2Notes:880} \BE = \Brho e^{j \lr{ \omega t - k z }}. \end{equation} Separating this direction and magnitude into factors \begin{equation}\label{eqn:chapter2Notes:900} \Brho = \Abs{\BE} \rhocap, \end{equation} allows the phasor to be expressed as \begin{equation}\label{eqn:chapter2Notes:920} \BE = \rhocap \Abs{\BE} e^{j \lr{ \omega t - k z }}. \end{equation} As an example, suppose that \( E_x = E_y \), and set \( \phi_x = 0 \). Then \begin{equation}\label{eqn:chapter2Notes:940} \rhocap = \xcap + \ycap e^{j \phi_y}. \end{equation} Phasor power In section 2.13 the phasor power is written as \begin{equation}\label{eqn:chapter2Notes:620} I^2 R/2, \end{equation} where \( I, R \) are the magnitudes of phasors in the circuit. I vaguely recall this relation, but had to refer back to [4] for the details.
This relation expresses average power over a period associated with the frequency of the phasor \begin{equation}\label{eqn:chapter2Notes:640} \begin{aligned} P &= \inv{T} \int_{t_0}^{t_0 + T} p(t) dt \\ &= \inv{T} \int_{t_0}^{t_0 + T} \Abs{\BV} \cos\lr{ \omega t + \phi_V } \Abs{\BI} \cos\lr{ \omega t + \phi_I} dt \\ &= \inv{T} \int_{t_0}^{t_0 + T} \inv{2} \Abs{\BV} \Abs{\BI} \lr{ \cos\lr{ \phi_V - \phi_I } + \cos\lr{ 2 \omega t + \phi_V + \phi_I} } dt \\ &= \inv{2} \Abs{\BV} \Abs{\BI} \cos\lr{ \phi_V - \phi_I }. \end{aligned} \end{equation} Introducing the impedance for this circuit element \begin{equation}\label{eqn:chapter2Notes:660} \BZ = \frac{ \Abs{\BV} e^{j\phi_V} }{ \Abs{\BI} e^{j\phi_I} } = \frac{\Abs{\BV}}{\Abs{\BI}} e^{j\lr{\phi_V - \phi_I}}, \end{equation} this average power can be written in phasor form \begin{equation}\label{eqn:chapter2Notes:680} \BP = \inv{2} \Abs{\BI}^2 \BZ, \end{equation} with \begin{equation}\label{eqn:chapter2Notes:700} P = \textrm{Re} \BP. \end{equation} Observe that we have to be careful to use the absolute value of the current phasor \( \BI \), since \( \BI^2 \) differs in phase from \( \Abs{\BI}^2 \). This explains the conjugation in the [4] definition of complex power, which had the form \begin{equation}\label{eqn:chapter2Notes:720} \BS = \BV_{\textrm{rms}} \BI^\conj_{\textrm{rms}}. \end{equation} Radar cross section examples Flat plate. \begin{equation}\label{eqn:chapter2Notes:960} \sigma_{\textrm{max}} = \frac{4 \pi \lr{L W}^2}{\lambda^2} \end{equation} fig. 6. Square geometry for RCS example. Sphere. In the optical limit the radar cross section for a sphere is fig. 7. Sphere geometry for RCS example. \begin{equation}\label{eqn:chapter2Notes:980} \sigma_{\textrm{max}} = \pi r^2 \end{equation} Note that this is smaller than the physical area \( 4 \pi r^2 \). Cylinder. fig. 8. Cylinder geometry for RCS example.
\begin{equation}\label{eqn:chapter2Notes:1000} \sigma_{\textrm{max}} = \frac{ 2 \pi r h^2}{\lambda} \end{equation} Trihedral corner reflector fig. 9. Trihedral corner reflector geometry for RCS example. \begin{equation}\label{eqn:chapter2Notes:1020} \sigma_{\textrm{max}} = \frac{ 4 \pi L^4}{3 \lambda^2} \end{equation} Scattering from a sphere vs frequency Frequency dependence of spherical scattering is sketched in fig. 10. Low frequency (or small particles): Rayleigh \begin{equation}\label{eqn:chapter2Notes:1040} \sigma = \lr{\pi r^2} 7.11 \lr{\kappa r}^4, \qquad \kappa = 2 \pi/\lambda. \end{equation} Mie scattering (resonance), \begin{equation}\label{eqn:chapter2Notes:1060} \sigma_{\textrm{max}}(A) = 4 \pi r^2 \end{equation} \begin{equation}\label{eqn:chapter2Notes:1080} \sigma_{\textrm{max}}(B) = 0.26 \pi r^2. \end{equation} Optical limit ( \(r \gg \lambda\) ) \begin{equation}\label{eqn:chapter2Notes:1100} \sigma = \pi r^2. \end{equation} fig 10. Scattering from a sphere vs frequency (from Prof. Eleftheriades’ class notes). FIXME: Do I have a derivation of this in my optics notes? Notation Time average. Both Prof. Eleftheriades and the text [1] use square brackets \( [\cdots] \) for time averages, not \( <\cdots> \). Was that an engineering convention? Prof. Eleftheriades writes \(\Omega\) as a circle floating above a face up square bracket, as in fig. 11, and \( \sigma \) like a number 6, as in fig. 12. Bold vectors are usually phasors, with (bold) calligraphic script used for the time domain fields. Example: \( \BE(x,y,z,t) = \ecap E(x,y) e^{j \lr{\omega t - k z}}, \boldsymbol{\mathcal{E}}(x, y, z, t) = \textrm{Re} \BE \). fig. 11. Prof. handwriting decoder ring: Omega fig 12. Prof. handwriting decoder ring: sigma References [1] Constantine A Balanis. Antenna theory: analysis and design. John Wiley \& Sons, 3rd edition, 2005. [2] digi.com. Antenna Gain: dBi vs. dBd Decibel Detail, 2015. URL http://www.digi.com/support/kbase/kbaseresultdetl?id=2146.
[Online; accessed 15-Jan-2015]. [3] David Jeffrey Griffiths and Reed College. Introduction to electrodynamics. Prentice hall Upper Saddle River, NJ, 3rd edition, 1999. [4] J.D. Irwin. Basic Engineering Circuit Analysis. MacMillian, 1993. [5] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975. [6] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980. ISBN 0750627689.
Introduction Naive Bayes is a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features. The Naive Bayes classifier falls under classification in the supervised learning setting, for modeling and predicting categorical variables. It is a very simple algorithm based around conditional probability and counting. It is called “naive” because of its core assumption of conditional independence (i.e. all input features are independent from one another), which rarely holds true in the real world. Applications of Naive Bayes Algorithms Real time Prediction: Naive Bayes is an eager learning classifier and it is fast. Thus, it can be used for making predictions in real time. Multi class Prediction: This algorithm is also well known for its multi-class prediction feature; we can predict the probability of multiple classes of the target variable. Text classification / Spam Filtering / Sentiment Analysis: Naive Bayes classifiers are mostly used in text classification (due to better results in multi-class problems and the independence assumption) and have a higher success rate compared to other algorithms. As a result, they are widely used in spam filtering (identifying spam e-mail) and sentiment analysis (in social media analysis, to identify positive and negative customer sentiment). Recommendation System: A Naive Bayes classifier and collaborative filtering together build a recommendation system that uses machine learning and data mining techniques to filter unseen information and predict whether a user would like a given resource or not. Some real world examples: Mark an email as spam or not spam. Classify a news article as about technology, politics, or sports. Check whether a piece of text expresses positive or negative emotions. Face recognition software.
Strengths of Naive Bayes Even though the conditional independence assumption rarely holds true, NB models actually perform surprisingly well in practice, especially given how simple they are. They are easy to implement and can scale with your dataset. Naive Bayes Pros: It is easy and fast to predict the class of a test data set. It also performs well in multi-class prediction. When the assumption of independence holds, a Naive Bayes classifier performs better compared to other models like logistic regression, and you need less training data. It performs well in the case of categorical input variables compared to numerical variable(s). For numerical variables, a normal distribution is assumed (a bell curve, which is a strong assumption). Weaknesses of Naive Bayes Due to their sheer simplicity, NB models are often beaten by models properly trained and tuned using other algorithms. Naive Bayes Cons: If a categorical variable has a category (in the test data set) which was not observed in the training data set, then the model will assign it zero probability and will be unable to make a prediction. This is often known as “zero frequency”. To solve this, we can use a smoothing technique; one of the simplest is Laplace estimation. On the other side, Naive Bayes is also known to be a bad estimator, so the probability outputs from predict_proba are not to be taken too seriously. Another limitation of Naive Bayes is the assumption of independent predictors. In real life, it is almost impossible to get a set of predictors that are completely independent. How Naive Bayes handles: Missing data: Since all the variables are considered independently, removing one from the probability calculation produces the same result as if you had trained without that variable to begin with; it’s pretty straightforward to build a Naive Bayes classifier that handles data with lots of null values.
Variance (Overfitting): Overfitting in Naive Bayes classifiers is controlled by introducing priors. MAP estimation is necessary to avoid overfitting and extreme probability values. Bias: Naive Bayes, on the other hand, doesn’t care how erroneous the result might be; its weights are dictated by the empirical conditional probabilities of the features in the training set. Minimizing an error function gives other algorithms, e.g. logistic regression, lower bias than Naive Bayes, because “bias” here means how much error there is in the model. Bayes Theorem Bayes' theorem is a famous equation that allows us to make predictions based on data. Here is the classic version of Bayes' theorem: This might be too abstract, so let us replace some of the variables to make it more concrete. In a Bayes classifier, we are interested in finding out the class (e.g. male or female, spam or ham) of an observation given the data: \begin{equation}P(class|data) = \frac{P(class)\cdot P(data|class)}{P(data)} \end{equation} where: $class$ is a particular class (admitted or not admitted), $data$ is an observation’s data, $P(class)$ is called the prior; it quantifies our uncertainty about the class, $P(data)$ is called the marginal probability, $P(data \mid class)$ is called the likelihood; the probability of the data under the given class (parameter), and $P(class \mid data)$ is called the posterior, our updated belief. Naive Bayes Consider data with $X$ features such that $X = \{x_1,\ldots, x_n\}$ and a label $y$ with $k$ classes such that $y \in \{1, \ldots, k \}$.
Given a class variable $y = k$ and a dependent feature vector $x_1$ through $x_n$, we can rewrite the above expression as follows: Using the naive independence assumption that: It is clear that: Thus the previous equation simplifies to: Since $P(x_1, \ldots, x_n)$ is constant given the input, we can use the following classification rule: Training (learning parameters from data) To estimate $P(y=k)$ and $P(x_i|y=k)$, Maximum A Posteriori (MAP) estimation can be used, where: $P(y=k)$ is simply the fraction of records with $y = k$ in the training set. $P(x_i \mid y=k)$ is the fraction of $y=k$ records that also have $x_i$ in the training set. Prediction To predict the new label $\hat{y}$ given observations of all the $x_i$ values, the following prediction rule is employed: Types of Naive Bayes The different Naive Bayes classifiers differ mainly in the assumptions they make regarding the distribution of $P(x_i\mid y=k)$: Gaussian: Used for real-valued (continuous) features. It assumes features follow a normal distribution. Multinomial: Used for discrete counts, for example in text classification problems. Bernoulli: The binomial model is useful if your feature vectors are binary (i.e. zeros and ones). One application would be text classification with a ‘bag of words’ model where the 1s & 0s mean “word occurs in the document” and “word does not occur in the document” respectively. Gaussian Naive Bayes In the case of real-valued features, we can use the Gaussian distribution; this is called Gaussian Naive Bayes. To train Gaussian Naive Bayes, for each value of k: estimate $\pi _k = P(y=k)$, and for each attribute $x_i$ estimate the class conditional mean $\mu _{ik}$ and variance $\sigma _{ik}$. Example Let us consider an example application of Gaussian Naive Bayes where, given a model trained on a portion of admission data, we want to predict whether a student with given grades will be admitted or not.
import numpy as np
import pandas as pd

data = pd.read_csv('data/admission.csv', names=["grade1", "grade2", "remark"])
data.tail()

   grade1     grade2     remark
95 83.489163  48.380286  1
96 42.261701  87.103851  1
97 99.315009  68.775409  1
98 55.340018  64.931938  1
99 74.775893  89.529813  1

Calculate Class Priors

# Number of students
n_admitted = data['remark'][data['remark'] == 1].count()
n_not_admitted = data['remark'][data['remark'] == 0].count()
total_class = data['remark'].count()

# Class probability
p_admitted = n_admitted / total_class
p_not_admitted = n_not_admitted / total_class
p_admitted

Calculate Likelihood

# Group the data by remark and calculate the mean of each feature
data_means = data.groupby('remark').mean()
# View the values
data_means

remark  grade1     grade2
0       52.032301  54.620392
1       74.718923  73.956402

# Calculate the variance of each feature
data_variance = data.groupby('remark').var()
# View the values
data_variance

remark  grade1      grade2
0       307.969146  258.617568
1       222.380256  256.397065

Now we can create all the variables we need. The code below might look complex, but all we are doing is creating a variable out of each cell in both of the tables above.
# Mean for admitted class
grade1_admit_mean = data_means['grade1'][data_means.index == 1].values[0]
grade2_admit_mean = data_means['grade2'][data_means.index == 1].values[0]

# Variance for admitted class
grade1_admit_var = data_variance['grade1'][data_variance.index == 1].values[0]
grade2_admit_var = data_variance['grade2'][data_variance.index == 1].values[0]

# Mean for not admitted class
grade1_not_admit_mean = data_means['grade1'][data_means.index == 0].values[0]
grade2_not_admit_mean = data_means['grade2'][data_means.index == 0].values[0]

# Variance for not admitted class
grade1_not_admit_var = data_variance['grade1'][data_variance.index == 0].values[0]
grade2_not_admit_var = data_variance['grade2'][data_variance.index == 0].values[0]

Finally, we need a function to calculate the probability density of each term of the likelihood:

def pdf_gaussian(x, mean, var):
    # Normal probability density with the given mean and variance
    pdf = 1 / np.sqrt(2 * np.pi * var) * np.exp(-(x - mean)**2 / (2 * var))
    return pdf

Prediction Now suppose we have a student with $grade1 = 50$ and $grade2 = 45$. Let's check whether this student would be admitted or not.

grade1 = 50
grade2 = 45

# Consider first the admitted class
posterior_admitted = p_admitted * pdf_gaussian(grade1, grade1_admit_mean, grade1_admit_var) * \
    pdf_gaussian(grade2, grade2_admit_mean, grade2_admit_var)
posterior_not_admitted = p_not_admitted * pdf_gaussian(grade1, grade1_not_admit_mean, grade1_not_admit_var) * \
    pdf_gaussian(grade2, grade2_not_admit_mean, grade2_not_admit_var)

print("Posterior probability for admitted class: {:.2e}".format(posterior_admitted))
print("Posterior probability for not-admitted class: {:.2e}".format(posterior_not_admitted))

Posterior probability for admitted class: 1.97e-05
Posterior probability for not-admitted class: 1.87e-04

Because the posterior for the not-admitted class is greater than that for the admitted class, we predict that the student is not admitted. References H. Zhang (2004).
The optimality of Naive Bayes. Proc. FLAIRS. C.D. Manning, P. Raghavan and H. Schütze (2008). Introduction to Information Retrieval. Cambridge University Press, pp. 234-265. V. Metsis, I. Androutsopoulos and G. Paliouras (2006). Spam filtering with Naive Bayes – Which Naive Bayes? 3rd Conf. on Email and Anti-Spam (CEAS).
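To close the loop on the example above, the same train-then-predict recipe can be condensed into a dependency-free sketch. The toy grades below are made up for illustration and the function names are mine; only the estimation and prediction rules mirror the post (sample variance for each feature per class, then argmax of log prior plus summed Gaussian log likelihoods).

```python
import math
from collections import defaultdict

def fit_gaussian_nb(X, y):
    """Estimate the class prior and per-feature (mean, variance) per class."""
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    model = {}
    for k, rows in by_class.items():
        n = len(rows)
        prior = n / len(X)
        stats = []
        for j in range(len(rows[0])):
            col = [r[j] for r in rows]
            mu = sum(col) / n
            var = sum((v - mu) ** 2 for v in col) / (n - 1)  # sample variance
            stats.append((mu, var))
        model[k] = (prior, stats)
    return model

def predict(model, x):
    """argmax_k of log P(y=k) + sum_i log N(x_i; mu_ik, var_ik)."""
    def log_post(k):
        prior, stats = model[k]
        s = math.log(prior)
        for xi, (mu, var) in zip(x, stats):
            s += -0.5 * math.log(2 * math.pi * var) - (xi - mu) ** 2 / (2 * var)
        return s
    return max(model, key=log_post)

# Toy data: [grade1, grade2] rows; label 1 = admitted, 0 = not admitted.
X = [[85, 80], [90, 88], [75, 92], [40, 45], [55, 50], [48, 60]]
y = [1, 1, 1, 0, 0, 0]
model = fit_gaussian_nb(X, y)
```

Working in log space avoids the underflow that multiplying many small densities would cause, which matters once there are more than a handful of features.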
How do physicists mathematically define entropy (for the second law of thermodynamics), and how is it related to statistical definitions of entropy? Even though there are many questions on this site about entropy (such as this one), none I could find was mathematically rigorous or had a complete rigorous answer. I am looking for precise answers that can be understood by mathematicians. In mathematical statistics, we have many different definitions of the entropy of (or between) probability distributions. Notable ones are: The $\alpha$-entropy $N(\alpha)$ of a distribution $\rho$ on the integers, which is defined as $$\log \sum_{n \in \mathbb{N}} \rho(n)^\alpha.$$ It can be extended to the entropy of a distribution defined on any separable metric space. The Kullback-Leibler divergence (or relative entropy) $$D_{KL}(P, Q) = \int \log\frac{dP}{dQ} \, dP.$$ Note that transformations $T$ of the sample space can only decrease relative entropy: $D_{KL}(PT^{-1}, QT^{-1}) \leq D_{KL}(P, Q)$, with equality iff $T$ is a sufficient statistic for $\{P, Q\}$, where $PT^{-1}(A) = P(T^{-1}(A))$. That's all I know about the monotonicity of entropy and the impossibility of creating information.
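For discrete distributions the Kullback-Leibler divergence is easy to compute, and the behaviour under a pushforward map $T$ can be checked numerically. The distributions and the coarse-graining map below are arbitrary toy choices:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]
q = [0.25, 0.25, 0.5]

# Push both distributions forward under T(0) = 0, T(1) = T(2) = 1,
# i.e. coarse-grain by merging the last two outcomes
pT = [p[0], p[1] + p[2]]
qT = [q[0], q[1] + q[2]]
```

Here $T$ is not a sufficient statistic for $\{p, q\}$, and the divergence of the pushforwards is strictly smaller than that of the originals.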
Abstract Let $L^{2,p}(\mathbb{R}^2)$ be the Sobolev space of real-valued functions on the plane whose Hessian belongs to $L^p$. For any finite subset $E \subset \mathbb{R}^2$ and $p>2$, let $L^{2,p}(\mathbb{R}^2)|_E$ be the space of real-valued functions on $E$, equipped with the trace seminorm. In this paper we construct a bounded linear extension operator $T : L^{2,p}(\mathbb{R}^2)|_E \rightarrow L^{2,p}(\mathbb{R}^2)$. We also provide an explicit formula that approximates the $L^{2,p}(\mathbb{R}^2)|_E$ trace seminorm.
After completing this reading you should be able to: Describe the basic steps to conduct a Monte Carlo simulation. Describe ways to reduce Monte Carlo sampling error. Explain how to use the antithetic variate technique to reduce Monte Carlo sampling error. Explain how to use control variates to reduce Monte Carlo sampling error and when they are effective. Describe the benefits of reusing sets of random number draws across Monte Carlo experiments and how to reuse them. Describe the bootstrapping method and its advantage over Monte Carlo simulation. Describe the pseudo-random number generation method and how a good simulation design alleviates the effects the choice of the seed has on the properties of the generated series. Describe situations where the bootstrapping method is ineffective. Describe the disadvantages of the simulation approach to financial problem solving.

Monte Carlo Simulations

Simulation studies are used to analyze the properties and characteristics of statistics of interest. In econometrics, the technique is particularly useful when the properties of an estimation method are not known analytically. The following are some of the areas in econometrics where simulations are applicable: Quantifying the simultaneous-equation bias induced when an endogenous variable is treated as exogenous. Determining the appropriate critical values of a Dickey-Fuller test. Determining the impact of heteroskedasticity on the size and power of an autocorrelation test. In finance, applications of simulation include: Pricing exotic options. Determining the impact of variations in the macroeconomic environment. Stress-testing risk management models. In this regard, we are going to study how a Monte Carlo simulation can be conducted. We follow these steps: First, the data are generated according to the desired data-generating process.
The errors will be drawn from some specified distribution. Next, the test statistic is computed after conducting the regression. Then the test statistic is saved (this can include any parameter of interest). These steps are repeated \(N\) times, where \(N\) denotes the number of replications; it should be as large as is feasible, since using only a few replications makes the results sensitive to odd combinations of random number draws. (A figure at this point shows, graphically, the results of a Monte Carlo simulation of the price of a stock.)

Techniques of Reducing Variance

Let \({ x }_{ i }\) denote the value of the parameter of interest for replication \(i\). If we compute the average value of this parameter and then repeat an otherwise identical study with a different set of random draws, we will almost certainly obtain a different average value of \(x\). The following equation gives the standard error estimate, \({ S }_{ X }\), used to evaluate the sampling variation in a Monte Carlo study: $$ { S }_{ X }=\sqrt { \frac { var\left( X \right) }{ N } } $$ Here \(var\left( X \right)\) denotes the variance of the estimates of the quantity of interest over the \(N\) replications. For acceptable accuracy, \(N\) may have to be set at an infeasibly high level. Alternatively, Monte Carlo sampling error can be suppressed by applying variance reduction techniques. These techniques are numerous; we will describe two of them, namely the antithetic variates technique and the control variates technique.

Antithetic Variates

The Monte Carlo study requires many replications, since adequately covering the entire probability space requires repeated sampling. In the antithetic variates method, the complement of a set of random numbers is taken and a parallel simulation is run on it.
Consider two sets of Monte Carlo simulations; the average value of the parameter of interest across them is: $$ \bar { x } =\frac { { x }_{ 1 }+{ x }_{ 2 } }{ 2 } \quad \quad \quad \quad \quad equation\quad I $$ where \({ x }_{ 1 }\) and \({ x }_{ 2 }\) denote the average parameter values for replication sets 1 and 2 respectively. The variance of \( \bar { x }\) is: $$ var\left( \bar { x } \right) =\frac { 1 }{ 4 } \left( var\left( { x }_{ 1 } \right) +var\left( { x }_{ 2 } \right) +2cov\left( { x }_{ 1 },{ x }_{ 2 } \right) \right) \quad \quad \quad equation\quad II $$ Without antithetic variates, the two sets of Monte Carlo replications are independent, so the covariance is zero: $$ var\left( \bar { x } \right) =\frac { 1 }{ 4 } \left( var\left( { x }_{ 1 } \right) +var\left( { x }_{ 2 } \right) \right) \quad \quad \quad equation\quad III $$ If antithetic variates are applied, the covariance in equation \(II\) becomes negative, reducing the Monte Carlo sampling error. Since \(corr\left( { u }_{ t },-{ u }_{ t } \right) =-1\), the first impression may be that applying antithetic variates yields a huge reduction in Monte Carlo sampling variation. However, the relevant covariance is between the simulated quantity of interest from the standard replications and that from the replications using the antithetic variates; the perfect negative correlation holds only between the random draws and their antithetic complements. Quasi-random sequences of draws are an alternative class of variance reduction techniques operating on a similar principle; they include stratified sampling, moment-matching, and low-discrepancy sequencing. In these techniques, a specific sequence of representative samples is selected from a specified probability distribution.
Subsequent replications are then used to fill the unselected gaps in the probability distribution by selecting successive samples. This yields a set of appropriately distributed draws across all the outcomes of interest.

Control Variates

In the control variates approach, a variable similar to that used in the simulation is employed, but one whose properties are known analytically before the simulation. Let \(y\) denote the variable of known properties and \(x\) the variable whose properties are being simulated. We carry out the simulation on both \(x\) and \(y\), employing the same set of random number draws for both. Denote the simulation estimate of \(x\) by \(\hat { x } \) and that of \(y\) by \(\hat { y } \). We can then form a new estimate of \(x\) as: $$ { x }^{ \ast }=y+\left( \hat { x } -\hat { y } \right) \quad \quad \quad equation\quad IV $$ Under certain conditions, the Monte Carlo sampling error of \(\hat { x }\) will exceed that of \({ x }^{ \ast }\). Taking the variance of both sides of equation \(IV\): $$ var\left( { x }^{ \ast } \right) =var\left( y+\left( \hat { x } -\hat { y } \right) \right) \quad \quad \quad equation\quad V $$ Because \(y\) is an analytically known quantity and is therefore not subject to sampling variation, \(var\left( y \right)=0\), so \(var\left( { x }^{ \ast } \right) = var\left( \hat { x } \right) + var\left( \hat { y } \right) - 2cov\left( \hat { x },\hat { y } \right)\).
For the Monte Carlo sampling variance to be lower with control variates than without them, we need: $$ var\left( { x }^{ \ast } \right) <var\left( \hat { x } \right) $$ that is: $$ var\left( \hat { y } \right) -2cov\left( \hat { x },\hat { y } \right) <0 $$ or: $$ cov\left( \hat { x } ,\hat { y } \right) >\frac { 1 }{ 2 } var\left( \hat { y } \right) \quad \quad equation\quad VI $$ Dividing both sides of inequality \(VI\) by the product of the standard deviations gives: $$ corr\left( \hat { x } ,\hat { y } \right) >\frac { 1 }{ 2 } \sqrt { \frac { var\left( \hat { y } \right) }{ var\left( \hat { x } \right) } } $$

Re-Use of Random Numbers across Experiments

The variability of the difference in the estimates across experiments declines dramatically if the same set of random draws is used across experiments. Alternatively, a long series of draws can be taken and then divided into several smaller sets to be used in different experiments. Note that re-using random numbers saves little computational time, because making the random draws usually takes only a very small proportion of the overall time needed to run the whole experiment.

Bootstrapping

Bootstrapping provides a description of the properties of empirical estimators. It entails repeated sampling with replacement from the actual sample data. Consider the estimation of some parameter \(\theta\) given a sample of data: $$ y={ y }_{ 1 },{ y }_{ 2 },\dots ,{ y }_{ T } $$ To approximate the statistical features of \({ \hat { \theta } }_{ T }\), we can study a sample of bootstrap estimators: we take \(N\) samples of size \(T\) with replacement from \(y\) and re-compute \(\hat { \theta } \) with each new sample. We then obtain a series of \(\hat { \theta } \) estimates and can consider their distribution.
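The resampling loop just described can be sketched in a few lines of pure Python. The data, the statistic (here simply the sample mean), and the replication count are illustrative choices:

```python
import random
import statistics

def bootstrap_estimates(y, n_reps, stat=statistics.mean):
    """Recompute `stat` on n_reps resamples (with replacement) of y."""
    T = len(y)
    return [stat([random.choice(y) for _ in range(T)]) for _ in range(n_reps)]

random.seed(0)
y = [random.gauss(10, 2) for _ in range(200)]   # toy sample of size T = 200

theta_star = bootstrap_estimates(y, 1000)       # N = 1000 bootstrap estimates
boot_se = statistics.stdev(theta_star)          # bootstrap standard error
```

The standard deviation of the bootstrap estimates approximates the standard error of the sample mean, here roughly \(\sigma/\sqrt{T}\).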
With bootstrapping, the researcher can make inferences without strong distributional assumptions, because the distribution employed is that of the actual data. The sample is treated as a population from which new samples can be drawn with replacement (sampling from the sample), and the test statistic of interest is computed from each new sample. Let \({ \hat { \theta } }^{ \ast }\) denote the test statistic computed from the new samples. We obtain a distribution of values of \({ \hat { \theta } }^{ \ast }\) and can compute standard errors or any other statistics of interest from that distribution.

Bootstrapping in the Context of Regression

Consider a standard regression model: $$ y=X\beta +u $$ There are two ways to bootstrap the regression model.

Resampling the Data

In this method, the entire row of data corresponding to observation \(i\) is resampled together. We follow the steps below: A sample of size \(T\) is drawn from the original data by resampling whole rows with replacement. The coefficient vector \({ \hat { \beta } }^{ \ast }\) is computed for the bootstrap sample. These steps are repeated \(N\) times to obtain a set of \(N\) coefficient vectors \({ \hat { \beta } }^{ \ast }\), all of which will differ, yielding a distribution of estimates for each of the coefficients.

Resampling from Residuals

The following steps are applied: First, the model is estimated on the actual data and the fitted values \(\hat { y } \) and residuals \(\hat { u } \) are computed. Next, a sample of size \(T\) is drawn with replacement from these residuals. We then add the fitted values to the bootstrapped residuals to generate the bootstrapped dependent variable.
The bootstrapped dependent variable is: $$ { y }^{ \ast }=\hat { y } +{ \hat { u } }^{ \ast } $$ To obtain a bootstrapped coefficient vector \({ \hat { \beta } }^{ \ast }\), the new dependent variable is regressed on the original \(X\) data. Finally, the resampling and re-estimation steps are repeated a total of \(N\) times.

Situations Where the Bootstrap Will Be Ineffective

The following are two situations where the bootstrap will not be sufficiently effective: Where there are outliers in the data, since these are likely to affect the bootstrap's conclusions. Where the data are not independent: the bootstrap, as applied here, assumes the observations are independent of one another.

Random Number Generation

The following recursion can be used as the basis for generating numbers that are uniformly distributed on (0, 1): $$ { y }_{ i+1 }=\left( a{ y }_{ i }+c \right) \quad modulo\quad m,\quad i=0,1,\dots ,T $$ Then: $$ { R }_{ i+1 }=\frac { { y }_{ i+1 } }{ m } \quad for\quad i=0,1,\dots ,T $$ where \(T\) is the number of random draws required, the initial value \({ y }_{ 0 }\) is called the seed, \(a\) is the multiplier, and \(c\) is the increment. In any simulation study, the seed \({ y }_{ 0 }\) must be specified before the random draws can be generated, and the properties of the generated series will be affected by this choice.

Demerits of Simulation in Financial Analysis

The calculations involved might be long and sophisticated. There is a likelihood of not getting precise results. In many situations, replicating the results is difficult. Simulation results are experiment-specific.
The output is only as good as the input (garbage in, garbage out). Monte Carlo simulation tends to underestimate the probability of extreme events such as a financial crisis (Monte Carlo models failed in 2007/2008).

Monte Carlo Simulation in Econometrics: Deriving a Set of Critical Values for a Dickey-Fuller Test

Consider a Dickey-Fuller test applied to some series \({ y }_{ t }\): $$ { y }_{ t }=\phi { y }_{ t-1 }+{ u }_{ t } $$ We test \({ H }_{ 0 }:\phi =1\) against \({ H }_{ 1 }:\phi <1\). The relevant test statistic is: $$ \tau =\frac { \hat { \phi } -1 }{ SE\left( \hat { \phi } \right) } $$ Under the null hypothesis of a unit root, the test statistic does not follow a standard distribution, so simulation is needed to obtain the relevant critical values.
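The whole exercise can be sketched in pure Python: simulate a random walk under the null, estimate \(\phi\) by OLS without an intercept, form \(\tau\), and read off an empirical quantile. The sample size, number of replications, and seed below are arbitrary choices:

```python
import random

def simulate_df_tau(T=100):
    """One Dickey-Fuller tau statistic simulated under H0: phi = 1."""
    # Generate a random walk (unit root) of length T
    y = [0.0]
    for _ in range(T):
        y.append(y[-1] + random.gauss(0, 1))
    # OLS of y_t on y_{t-1} with no intercept
    num = sum(y[t - 1] * y[t] for t in range(1, T + 1))
    den = sum(y[t - 1] ** 2 for t in range(1, T + 1))
    phi_hat = num / den
    # Residual variance and standard error of phi_hat
    resid = [y[t] - phi_hat * y[t - 1] for t in range(1, T + 1)]
    s2 = sum(e * e for e in resid) / (T - 1)
    se = (s2 / den) ** 0.5
    return (phi_hat - 1) / se

random.seed(42)
taus = sorted(simulate_df_tau() for _ in range(2000))
crit_5pct = taus[int(0.05 * len(taus))]   # empirical 5% critical value
```

With enough replications the empirical 5% quantile settles near the tabulated Dickey-Fuller critical value of about \(-1.95\) for this no-constant case.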
What is Charles' Law? Charles' law is one of the gas laws, and it explains the relationship between the volume and temperature of a gas. It states that when pressure is held constant, the volume of a fixed amount of dry gas is directly proportional to its absolute temperature. When two quantities are in direct proportion, a change in one produces a proportional change in the other. Charles' law is expressed by the equation: \(V\propto T\) or \(\frac{V_{1}}{T_{1}}=\frac{V_{2}}{T_{2}}\) where \(V_1\) and \(V_2\) are the initial volume and final volume respectively, and \(T_1\) and \(T_2\) are the initial and final temperatures, both in kelvin. Jacques Charles, a French scientist, discovered in 1787 that, keeping the pressure constant, the volume of a gas varies with its temperature. Later, in 1802, Joseph Gay-Lussac modified and generalized the concept as Charles' law. Gases obey Charles' law well at high temperatures and low pressures. Derivation: Charles' law states that at a constant pressure, the volume of a fixed mass of dry gas is directly proportional to its absolute temperature: \(V\propto T\) Since V and T vary directly, we can relate them using a constant k: \(\frac{V}{T}=k\) ———– (I) The value of k depends on the pressure of the gas, the amount of the gas, and the unit of volume. Let \(V_1\) and \(T_1\) be the initial volume and temperature of an ideal gas. We can write equation I as: \(\frac{V_{1}}{T_{1}}=k\) ———– (II) Let's change the temperature of the gas to \(T_2\); consequently, its volume changes to \(V_2\).
So we can write: \(\frac{V_{2}}{T_{2}}=k\) ———– (III) Equating equations (II) and (III): \(\frac{V_{1}}{T_{1}}=\frac{V_{2}}{T_{2}}=k\) Hence, we can generalize the formula and write it as: \(\frac{V_{1}}{T_{1}}=\frac{V_{2}}{T_{2}}\) or \(V_{1}T_{2}=V_{2}T_{1}\) You know that on heating a fixed mass of gas, that is, increasing its temperature, the volume also increases; similarly, on cooling, the volume of the gas decreases. For each one-degree rise in temperature, the volume of a fixed mass of gas increases by 1/273 of its volume at 0 °C. Note that the kelvin, also known as the absolute temperature scale, is preferred over Celsius when solving problems related to Charles' law. To convert a temperature to the Kelvin scale, add 273 to the temperature on the centigrade/Celsius scale. Charles' Law in Real Life: Charles' law has a wide range of applications in our daily life. Some common examples are given below: In cold weather or environments, balls and helium balloons shrink. In bright sunlight, inner tubes swell up. In colder weather, human lung capacity decreases, making it more difficult to jog or for athletes to perform on a freezing winter day. Solved Examples: Question 1: A gas occupies 400 cm³ at 0 °C. Keeping the pressure constant, what is its volume (in litres) at 80 °C? Solution: Given, V1 = 400 cm³, V2 = ?, T1 = 0 °C = 0+273 = 273 K, T2 = 80 °C = 80+273 = 353 K. Here the pressure is constant and only the temperature is changed. Using Charles' law: \(\frac{V_{1}}{T_{1}}=\frac{V_{2}}{T_{2}}\) \(\frac{400}{273}=\frac{V_{2}}{353}\) \(V_{2}=\frac{400\times 353}{273}\) \(V_{2}=517.21cm^{3}\) Since 1 cubic centimetre = 0.001 litre = 1 × 10⁻³ litre, 517.21 cubic centimetres = 517.21 × 10⁻³ ≈ 0.517 litres. Question 2: At constant pressure, a gas occupies 6 L at 100 K. What was its volume at 150 K? Solution: Given, V1 = ?, V2 = 6 L, T1 = 150 K, T2 = 100 K. Using Charles' law: \(\frac{V_{1}}{T_{1}}=\frac{V_{2}}{T_{2}}\) \(\frac{V_{1}}{150}=\frac{6}{100}\) \(V_{1}=\frac{6\times 150}{100}\) \(V_{1}=9L\) The initial volume of the gas at 150 K is 9 litres.
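Both worked examples reduce to the same one-line rearrangement of \(V_1/T_1 = V_2/T_2\), sketched here in Python (the function name is our own):

```python
def charles_final_volume(v1, t1_kelvin, t2_kelvin):
    """Charles' law at constant pressure: V1/T1 = V2/T2, temperatures in kelvin."""
    return v1 * t2_kelvin / t1_kelvin

# Question 1: 400 cm^3 at 273 K heated to 353 K
v2 = charles_final_volume(400.0, 273.0, 353.0)   # ≈ 517.2 cm^3

# Question 2: working backwards, the volume at 150 K given 6 L at 100 K
v1 = charles_final_volume(6.0, 100.0, 150.0)     # 9.0 L
```

Remember to convert Celsius temperatures to kelvin before using the function.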
I recently came across this in a textbook (NCERT class 12, chapter: wave optics, pg. 367, example 10.4(d)) of mine while studying Young's double slit experiment. It says a condition for the formation of an interference pattern is $$\frac{s}{S} < \frac{\lambda}{d}$$ where $s$ is the size of ... The accepted answer is clearly wrong. The OP's textbook refers to 's' as the "size of the source" and then gives a relation involving it. But the accepted answer conveniently assumes 's' to be the "fringe width" and proves the relation. One of the unaccepted answers is the correct one. I have flagged the answer for mod attention. This answer wastes time, because I naturally looked at it first (it being the accepted answer) only to realise it proved something entirely different and trivial. This question was considered a duplicate of a previous question titled "Height of Water 'Splashing'". However, the previous question only considers the height of the splash, whereas answers to the later question may consider a lot of different effects on the body of water, such as height ... I was trying to figure out the cross section $\frac{d\sigma}{d\Omega}$ for spinless $e^{-}\gamma\rightarrow e^{-}$ scattering. First I wrote down the terms associated with each component. Vertex: $ie(P_A+P_B)^{\mu}$. External boson: $1$. Photon: $\epsilon_{\mu}$. Multiplying these will give the inv... As I am now studying the history of the discovery of electricity, I am searching for each scientist on Google, but I am not getting good answers about some of them. So I want to ask you to suggest a good app for studying the history of these scientists. I am working on correlation in quantum systems. Consider an arbitrary finite-dimensional bipartite system $A$ with elements $A_{1}$ and $A_{2}$ and a bipartite system $B$ with elements $B_{1}$ and $B_{2}$, under an assumption which fulfilled continuity. My question is whether it would be possib... @EmilioPisanty Sup. I finished Part I of Q is for Quantum.
I'm a little confused about why a black ball turns into a misty of white and minus black, and not into white and black. Is it a little trick so the second PETE box can cancel out the contrary states? Also, I really like that the book avoids words like quantum, superposition, etc. Is this correct? "The closer you get hovering (as opposed to falling) to a black hole, the further away you see the black hole from you. You would need an impossible rope of infinite length to reach the event horizon from a hovering ship." From physics.stackexchange.com/questions/480767/… You can't make a system go to a lower state than its zero point, so you can't do work with ZPE. Similarly, to run a hydroelectric generator you not only need water, you need a height difference so you can make the water run downhill. — PM 2Ring, 3 hours ago So in Q is for Quantum there's a box called PETE that has a 50% chance of changing the color of a black or white ball. When two PETE boxes are connected, an input white ball will always come out white, and the same with a black ball. @ACuriousMind There is also a NOT box that changes the color of the ball. In the book it's described that each ball has a misty (its possible outcomes, I suppose). For example, a white ball coming into a PETE box will have output misty of WB (it can come out as white or black). But the misty of a black ball is W-B or -WB (the black ball comes out with a minus). I understand that with the minus the math works out, but what is that minus, and why? @AbhasKumarSinha intriguing/ impressive! would like to hear more! :) am very interested in using physics simulation systems for fluid dynamics vs particle dynamics experiments, alas very few in the world are thinking along the same lines right now, even as the technology improves substantially... @vzn for physics/simulation, you may use Blender, which is very accurate.
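The misty bookkeeping being asked about above can be reproduced with a few lines of arithmetic. This is only a sketch under the (assumed, but standard for this book) reading that a PETE box acts as the Hadamard transform on color amplitudes; the names below are mine:

```python
from math import sqrt

# Ball colors as amplitude pairs: (white_amplitude, black_amplitude).
WHITE = (1.0, 0.0)
BLACK = (0.0, 1.0)

def pete(state):
    """Assumed model: the PETE box acts as the Hadamard transform."""
    w, b = state
    return ((w + b) / sqrt(2), (w - b) / sqrt(2))

print(pete(WHITE))  # roughly (0.707, 0.707):  misty "WB"
print(pete(BLACK))  # roughly (0.707, -0.707): misty "W-B", the minus sign

# Two boxes in a row: the minus sign makes the cross terms cancel,
# so the ball always comes out its original color.
print(pete(pete(WHITE)))  # back to (1, 0), up to rounding
print(pete(pete(BLACK)))  # back to (0, 1), up to rounding
```

Each outcome's probability is the square of its amplitude, so one box gives 50/50, while the minus sign is exactly what lets the second box undo the first.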
If you want to experiment with lenses and optics, then you may use Mitsubishi Renderer; it is made for accurate scientific purposes. @RyanUnger physics.stackexchange.com/q/27700/50583 is about QFT for mathematicians, which overlaps in the sense that you can't really do string theory without first doing QFT. I think the canonical recommendation is indeed Deligne et al's *Quantum Fields and Strings: A Course for Mathematicians*, but I haven't read it myself @AbhasKumarSinha when you say you were there, did you work at some kind of Godot facilities/ headquarters? where? dont see something relevant on google yet on "mitsubishi renderer" do you have a link for that? @ACuriousMind thats exactly how DZA presents it. understand the idea of "not tying it to any particular physical implementation" but that kind of gets stretched thin because the point is that there are "devices from our reality" that match the description and theyre all part of the mystery/ complexity/ inscrutability of QM. actually its QM experts that dont fully grasp the idea because (on deep research) it seems possible classical components exist that fulfill the descriptions... When I say "the basics of string theory haven't changed", I basically mean the story of string theory up to (but excluding) compactifications, branes and what not. It is the latter that has rapidly evolved, not the former. @RyanUnger Yes, it's where the actual model building happens. But there's a lot of things to work out independently of that And that is what I mean by "the basics". Yes, with mirror symmetry and all that jazz, there's been a lot of things happening in string theory, but I think that's still comparatively "fresh" research where the best you'll find are some survey papers @RyanUnger trying to think of an adjective for it... nihilistic? :P ps have you seen this? think youll like it, thought of you when found it...
Kurzgesagt, Optimistic Nihilism: youtube.com/watch?v=MBRqu0YOH14 The knuckle mnemonic is a mnemonic device for remembering the number of days in the months of the Julian and Gregorian calendars. Method (one handed): One form of the mnemonic is done by counting on the knuckles of one's hand to remember the numbers of days of the months. Count knuckles as 31 days, depressions between knuckles as 30 (or 28/29) days. Start with the little finger knuckle as January, and count one finger or depression at a time towards the index finger knuckle (July), saying the months while doing so. Then return to the little finger knuckle (now August) and continue for... @vzn I dont want to go to uni nor college. I prefer to dive into the depths of life early. I'm 16 (2 more years and I graduate). I'm interested in business, physics, neuroscience, philosophy, biology, engineering and other stuff and technologies. I just have a constant hunger to widen my view of the world. @Slereah It's like the brain has a limited capacity of math skills it can store. @NovaliumCompany btw think either way is acceptable, relate to the feeling of low enthusiasm for submitting to "the higher establishment," but for many, universities are indeed "diving into the depths of life" I think you should go if you want to learn, but I'd also argue that waiting a couple years could be a sensible option. I know a number of people who went to college because they were told that it was what they should do and ended up wasting a bunch of time/money It does give you more of a sense of who actually knows what they're talking about and who doesn't, though. While there's a lot of information available these days, it isn't all good information, and it can be a very difficult thing to judge without some background knowledge Hello people, does anyone have a suggestion for some good lecture notes on what surface codes are and how they are used for quantum error correction?
I just want to have an overview as I might have the possibility of doing a master thesis on the subject. I looked around a bit and it sounds cool but "it sounds cool" doesn't sound like a good enough motivation for devoting 6 months of my life to it
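As an aside, the knuckle mnemonic quoted above transcribes directly into code. This is just a sketch of the rule as described: knuckles are 31-day months, depressions are 30-day months, with February special-cased:

```python
# True = knuckle (31 days), False = depression (30, or 28/29 for February).
# Walking Jan..Jul across one hand and restarting at August for the
# second pass gives the pattern below.
KNUCKLE = [True, False, True, False, True, False, True,   # Jan-Jul
           True, False, True, False, True]                # Aug-Dec

def days_in_month(month, leap=False):
    if month == 2:  # the one depression that isn't 30 days
        return 29 if leap else 28
    return 31 if KNUCKLE[month - 1] else 30

print([days_in_month(m) for m in range(1, 13)])
# [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
```

The double knuckle at the July/August restart is what encodes the back-to-back 31-day months.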
I'm thrilled to join everyone at the best-named math blog. I am just home from Combinatorial Link Homology Theories, Braids, and Contact Geometry at ICERM in Providence, Rhode Island. The conference was aimed at students and non-experts, with a focus on introducing open problems and computational techniques. Videos of many of the talks are available at ICERM's site. (Look under "Programs and Workshops," then "Summer 2014".) One of the highlights of the workshop was the 'Computational Problem Session' MC'd by John Baldwin, with contributions from Rachel Roberts, Nathan Dunfield, Johanna Mangahas, John Etnyre, Sucharit Sarkar, and András Stipsicz. Each spoke for a few minutes about open problems with a computational bent. I've done my best to relate all the problems in order, with references and some background. Any errors are mine. Corrections and additions are welcome! Rachel Roberts Contact structures and foliations Eliashberg and Thurston showed that a codimension-one foliation of a three-manifold can be $C^0$-approximated by a contact structure (as long as it is not the product foliation on $S^2 \times S^1$). Vogel showed that, with a few other restrictions, any two approximating contact structures lie in the same isotopy class. In other words, for any closed, oriented three-manifold there is a map $\phi$ from taut, oriented foliations to contact structures modulo isotopy. Geography: What is the image of $\phi$? Botany: What do the fibers of $\phi$ look like? The image of $\phi$ is known to be contained within the space of weakly symplectically fillable and universally tight contact structures. Etnyre showed that if one removes "taut", then $\phi$ is surjective. Etnyre and Baldwin showed that $\phi$ doesn't "see" universal tightness. L-spaces and foliations A priori, the rank of the Heegaard Floer homology group associated to a rational homology three-sphere $Y$ is bounded below by the order of its first ordinary homology group: $\operatorname{rank} \widehat{HF}(Y) \geq |H_1(Y;\mathbb{Z})|$. An L-space is a rational homology three-sphere for which equality holds.
Conjecture: $Y$ is an L-space if and only if it does not contain a taut, oriented foliation. Ozsváth and Szabó showed that L-spaces do not contain such foliations. Kazez and Roberts proved that the theorem applies to a wider class of foliations, and perhaps to all foliations. The classification of L-spaces is incomplete, and we are led to the following: Question: How can one prove the (non-)existence of such a foliation? Existing methods are either ad hoc or difficult (e.g. show that the fundamental group does not act non-trivially on a simply-connected (but not necessarily Hausdorff!) one-manifold). Roberts suggested that Agol and Li's algorithm for detecting "Reebless" foliations via laminar branched surfaces may be useful here, although the algorithm is currently impractical. Nathan Dunfield What do random three-manifolds look like? First of all, how does one pick a random three-manifold? There are countably many compact three-manifolds (because there are countably many finite simplicial complexes, or because there are countably many rational surgeries on the countably many links in $S^3$, or because…), so there is no uniform probability distribution on the set of compact orientable three-manifolds. To dodge this issue, we first consider random objects of bounded complexity, then study what happens as we relax the bound. (A cute, more modest example: the probability that two random integers are relatively prime is $6/\pi^2$.[1]) Fix a genus $g$ and write $\mathrm{Mod}(\Sigma_g)$ for the mapping class group of the oriented surface of genus $g$. Pick some generators of $\mathrm{Mod}(\Sigma_g)$. Let $w$ be a random word of length $\ell$ in the chosen generators. We can associate a unique closed, orientable three-manifold $M_w$ to $w$ by identifying the boundaries of two genus-$g$ handlebodies via $w$. How is your favorite invariant distributed for random three-manifolds of genus $g$? How does it behave as $\ell \to \infty$? Experiment! (Ditto for knots, links, and their invariants.)
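In the spirit of Dunfield's "Experiment!", the cute coprimality example above is easy to check by simulation. This is only a sketch; the function name, bound, and sample sizes are arbitrary choices of mine:

```python
import random
from math import gcd, pi

def coprime_fraction(trials=200_000, bound=10**6, seed=1):
    """Estimate the probability that two random integers in [1, bound]
    are relatively prime; it should be close to 6/pi^2 ~ 0.6079."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if gcd(rng.randint(1, bound), rng.randint(1, bound)) == 1)
    return hits / trials

print(coprime_fraction(), 6 / pi**2)  # the two values agree to ~2 decimals
```

The same "sample at bounded complexity, then grow the bound" pattern is exactly what the random-Heegaard-splitting model does for three-manifolds.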
Metaquestion: Show that your favorite conjecture about some class of three-manifolds or links holds with positive probability. For example: Challenge: Show that the following conjecture holds for a random three-manifold: it is not an L-space, has left-orderable fundamental group, admits a taut foliation, and admits a tight contact structure. These methods can also be used to prove more traditional-sounding existence theorems. Perhaps you'd like to show that there is a three-manifold of every genus satisfying some condition. It suffices to show that a random three-manifold of fixed genus satisfies the condition with positive probability! For example, Theorem (Lubotzky-Maher-Wu, 2014): For any integers $n$ and $g \geq 2$, there exist infinitely many closed hyperbolic three-manifolds which are integral homology spheres with Casson invariant $n$ and Heegaard genus $g$. Johanna Mangahas What do generic mapping classes look like? Here are two sensible ways to study random elements of bounded complexity in a finitely-generated group. Fix a generating set, and look at all words of length $N$ or less in those generators and their inverses (the word ball). Fix a generating set and the associated Cayley graph, and look at all vertices within distance $N$ of the identity (the Cayley ball). A property of elements in a group is generic if a random element has the property with probability tending to one, so the meaning of "generic" differs with the meaning of "random." For example, consider the group $G = \langle a, b \rangle \oplus \mathbb{Z}$ with generating set $\{(a,0), (b,0), (id,1)\}$. The property "is zero in the second coordinate" is generic for the first notion but not the second. So we are stuck/blessed with two different notions of genericity. Recall that the mapping class group of a surface is its group of orientation-preserving homeomorphisms modulo isotopy. Thurston and Nielsen showed that a mapping class $\phi$ falls into one of three categories: Finite order: $\phi^n = \mathrm{id}$ for some $n > 0$. Reducible: $\phi$ fixes some finite set of simple closed curves.
Pseudo-Anosov: there exists a transverse pair of measured foliations which $\phi$ stretches by $\lambda$ and $1/\lambda$ for some $\lambda > 1$. The first two classes are easier to define, but the third is generic. Question: Are pseudo-Anosov mapping classes generic in the second sense? The braid group on $n$ strands can be understood as the mapping class group of the disk with $n$ punctures. But the braid group is not just a mapping class group; it admits an invariant left-order and a Garside structure. Tetsuya Ito gave a great minicourse on both of these structures! Question': Can one leverage these additional structures to answer genericity questions about the braid group? Fast algorithms for the Nielsen-Thurston classification Question: Is there a polynomial-time algorithm for computing the Thurston-Nielsen classification of a mapping class? Matthieu Calvez has described an algorithm to classify braids in $O(\ell^2)$ time, where $\ell$ is the length of the candidate braid. The algorithm is not yet implementable because it relies on knowledge of a certain constant depending on the index $n$ of the braid. These numbers come from a theorem of Masur and Minsky and are thus difficult to compute. These difficulties, as well as the power of the Garside structure and other algorithmic approaches, are described in Calvez's linked paper. Challenge: Implement Calvez's algorithm, perhaps partially, without knowing these constants. Mark Bell is developing Flipper, which implements a classification algorithm for mapping class groups of surfaces. Question: How fast are such algorithms in practice?[2] John Etnyre Contactomorphism and isotopy of unit cotangent bundles For background on all matters symplectic and contact, see Etnyre's notes. Let $M$ be a manifold of any (!) dimension. The total space of the cotangent bundle $T^*M$ is naturally symplectic: $T^*M$ supports the Liouville one-form $\lambda$, characterized by $\alpha^*\lambda = \alpha$ for any one-form $\alpha$, where the pullback is along $\alpha$ viewed as a section of the canonical projection $\pi \colon T^*M \to M$. The form $d\lambda$ is symplectic on $T^*M$. Inside the cotangent bundle is the unit cotangent bundle $ST^*M$.
(This is not a vector bundle!) The form $\lambda$ restricts to a contact form on $ST^*M$. Fact: If the manifolds $M$ and $N$ are diffeomorphic, then their unit cotangent bundles $ST^*M$ and $ST^*N$ are contactomorphic. Hard question: In which dimensions greater than two is the converse true? This question is attributed to Arnol'd, perhaps incorrectly. The converse is known to be true in dimensions one and two, and also in the case that $M$ is the three-sphere (exercise!). Tractable (?) question: Does the contactomorphism type of unit cotangent bundles distinguish lens spaces from each other? Also intriguing is the relative version of this construction. Let $L$ be an embedded (or immersed with transverse self-intersections) submanifold of $M$. Define the unit cosphere bundle of $L$ to be $\Lambda_L = \{(x,\xi) \in ST^*M : x \in L,\ \xi|_{T_xL} = 0\}$. You can think of it as the boundary of the normal bundle to $L$. It is a Legendrian submanifold of the unit cotangent bundle $ST^*M$. Fact: If $L$ is smoothly isotopic to $L'$, then $\Lambda_L$ is Legendrian isotopic to $\Lambda_{L'}$. Relative question: Under what conditions is the converse true? Etnyre noted that contact homology may be a useful tool here. Lenny Ng's "A Topological Introduction to Knot Contact Homology" has a nice introduction to this problem and the tools to potentially solve it. Sucharit Sarkar How many Szabó spectral sequences are there, really? Ozsváth and Szabó constructed a spectral sequence from the Khovanov homology of a link to the Heegaard Floer homology of the double cover of $S^3$ branched over that link. (There are more adjectives in the proper statement.) This relates two homology theories which are defined very differently. Challenge: Construct an algorithm to compute the Ozsváth-Szabó spectral sequence. Sarkar suggested that bordered Heegaard Floer homology may be useful here. Alternatively, one could study another spectral sequence, combinatorially defined by Szabó, which also seems to converge to the Heegaard Floer homology of the branched double cover. Question: Is Szabó's spectral sequence isomorphic to the Ozsváth-Szabó spectral sequence?
Again, the bordered theory may be useful here. Lipshitz, Ozsváth, and D. Thurston have constructed a bordered version of the Ozsváth-Szabó spectral sequence which agrees with the original under a pairing theorem. If the answer is "yes", then Szabó's spectral sequence should have more structure. This was the part of Sarkar's research talk which was unfortunately scheduled after the problem session. I hope to return to it in a future post (!). Question: Can Szabó's spectral sequence be defined over a two-variable polynomial ring? Is there an action of the dihedral group on the spectral sequence? András Stipsicz Knot Floer Smörgåsbord Link Floer homology was spawned from Heegaard Floer homology but can also be defined combinatorially via grid diagrams. Lenny Ng explained this in the second part of his minicourse. However you define it, the theory assigns to a link $L$ a bigraded module $HFK^-(L)$. From this group one can extract the numerical concordance invariant $\tau(L)$. Defining the theory over $\mathbb{Z}$ or $\mathbb{Z}/p\mathbb{Z}$, one can define analogous invariants. Question: Are these invariants distinct from $\tau$? Harder question: Does $HFK^-$ have $p$-torsion for some $p$? (From a purely algebraic perspective, a "no" to the first question suggests a "no" to this one.) Stipsicz noted that there are complexes of modules for which the answer is yes, but those complexes are not known to be the complexes of any link. Speaking of which, a shot in the dark: Characterize those modules which appear as $HFK^-(L)$. In another direction, Stipsicz spoke earlier about a family of smooth concordance invariants $\Upsilon(t)$. These were constructed from link Floer homology by Ozsváth, Stipsicz, and Szabó. Earlier, Hom constructed the smooth concordance invariant $\epsilon$. Both invariants can be used to show that the smooth concordance group contains a $\mathbb{Z}^\infty$ summand, but their fibers are not the same: Hom produced a knot which has $\Upsilon(t) = 0$ for all $t$ and $\epsilon \neq 0$. Is there a knot with $\epsilon = 0$ but $\Upsilon \neq 0$?
Stipsicz closed the session by waxing philosophical: "When I was a child we would get these problems like 'Jane has 6 pigs and Joe has 4 pigs' and I used to think these were stupid. But now I don't think so. Sit down, ask, do calculations, answer. That's somehow the method I advise. Do some calculations, or whatever." 1. An analogous result holds for arbitrary number fields — I make no claims about the cuteness of such generalizations. ↩ 2. An old example: the simplex algorithm from linear programming runs in exponential time in the worst-case, but in
Methods Funct. Anal. Topology 14 (2008), no. 1, 1-9 We suggest a method to solve boundary and initial-boundary value problems for a class of nonlinear parabolic equations with the infinite-dimensional Lévy Laplacian $\Delta _L$, $$f\Bigl(U(t,x),\frac{\partial U(t,x)}{\partial t},\Delta_LU(t,x)\Bigr)=0,$$ in fundamental domains of a Hilbert space. Methods Funct. Anal. Topology 14 (2008), no. 1, 10-19 Small transversal vibrations of the Stieltjes string, i.e., an elastic thread bearing point masses, are considered for the case of one end being fixed and the other end moving with viscous friction in the direction orthogonal to the equilibrium position of the string. The inverse problem of recovering the masses, the lengths of the subintervals, and the coefficient of damping from the spectrum of vibrations of such a string and its total length is solved. Methods Funct. Anal. Topology 14 (2008), no. 1, 20-31 Perturbations of Nevanlinna-type functions which preserve the set of zeros of the function, or add new points to this set, are discussed. Generalized stochastic derivatives on a space of regular generalized functions of Meixner white noise Methods Funct. Anal. Topology 14 (2008), no. 1, 32-53 We introduce and study generalized stochastic derivatives on a Kondratiev-type space of regular generalized functions of Meixner white noise. Properties of these derivatives are quite analogous to the properties of the stochastic derivatives in Gaussian analysis. As an example, we calculate the generalized stochastic derivative of the solution of a stochastic equation with a Wick-type nonlinearity. The involutive automorphisms of $\tau$-compact operators affiliated with a type I von Neumann algebra Methods Funct. Anal. Topology 14 (2008), no.
1, 54-59 Let $M$ be a type I von Neumann algebra with center $Z$ and a faithful normal semi-finite trace $\tau$. Consider the algebra $L(M, \tau)$ of all $\tau$-measurable operators with respect to $M$, and let $S_0(M, \tau)$ be the subalgebra of $\tau$-compact operators in $L(M, \tau)$. We prove that any $Z$-linear involutive automorphism of $S_0(M, \tau)$ is inner. About nilpotent $C_0$-semigroups of operators in Hilbert spaces and criteria for similarity to the integration operator Methods Funct. Anal. Topology 14 (2008), no. 1, 60-66 In the paper, we describe a class of operators $A$ that have empty spectrum and satisfy the nilpotency property of the generated $C_0$-semigroup $U(t)=\exp\{-iAt\},\, t\geqslant 0$, and such that the operator $A^{-1}$ is similar to the integration operator on the corresponding space $L_2(0,a)$. Methods Funct. Anal. Topology 14 (2008), no. 1, 67-80 We construct two types of equilibrium dynamics of infinite particle systems in a locally compact Polish space $X$ for which certain fermion point processes are invariant. The Glauber dynamics is a birth-and-death process in $X$, while in the case of the Kawasaki dynamics interacting particles randomly hop over $X$. We establish conditions on the generators of both dynamics under which the corresponding conservative Markov processes exist. Methods Funct. Anal. Topology 14 (2008), no. 1, 81-100 The interpolation of couples of separable Hilbert spaces with a function parameter is studied. The main properties of the classical interpolation are proved. Some applications to the interpolation of isotropic Hörmander spaces over a closed manifold are given.