https://math.stackexchange.com/questions/1207705/properties-of-alternating-subgroup
# Properties of the alternating subgroup

I was wondering: is it true that if $Alt_n$ is the alternating subgroup of $Sym_n$ for $n>3$, then $Alt_{n-i}\leq Alt_n$ for all $i<n$?

• What do you mean by $\le$ here? Does that mean that $Alt_{n-i}$ is a subgroup of $Alt_n$? – MJD Mar 26 '15 at 16:20
• Yes! I do mean that. Mar 26 '15 at 16:21
• More interesting is that $S_n$ cannot be embedded into $A_{n+1}$; see here. Mar 26 '15 at 16:24

The answer is no, in a strict sense: a permutation of $\{1,2,3\}$ is not a permutation of $\{1,2,3,4\}$. However, there is a natural bijection between the set of permutations of the former and the set of permutations of the latter that fix $4$. Similarly, there is a natural bijection between $Alt_{n-i}$ and a subgroup of $Alt_n$, namely those permutations that fix $(n-i+1), (n-i+2), \ldots, n$.

• I guess that is what I was looking for then... I meant: is it true that there exists a subgroup of $Alt_{n}$ that is isomorphic to $Alt_{n-i}$ for all $i<n$? Mar 26 '15 at 16:25
• It is clear, but it should still be stated, that every permutation in $Alt_n$ in the range of the bijection is an even permutation: i.e. it really is in $Alt_n$ and not just in $Sym_n$. Mar 26 '15 at 16:30
• In fact, there are several such isomorphisms besides the "obvious" one, as you can make $n$ distinct isomorphs of $S_{n-1}$ inside $S_n$ by choosing different elements to be "fixed". Mar 26 '15 at 19:20
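The "obvious" embedding in the answer, even permutations fixing the last points, can be checked by brute force for small $n$; here is a minimal sketch (the `parity` helper and variable names are mine), counting the even permutations of $\{0,\dots,3\}$ that fix the last point:

```python
from itertools import permutations

def parity(p):
    # a permutation is even iff its inversion count is even
    inv = sum(1 for i in range(len(p))
                for j in range(i + 1, len(p)) if p[i] > p[j])
    return inv % 2

n = 4
# Alt_4: the even permutations of {0, ..., 3}
alt_n = [p for p in permutations(range(n)) if parity(p) == 0]
# the embedded copy of Alt_3: even permutations fixing the last point
embedded = [p for p in alt_n if p[n - 1] == n - 1]

print(len(alt_n))     # 12 = |Alt_4|
print(len(embedded))  # 3  = |Alt_3|
```

The three embedded elements are the identity and the two 3-cycles on the first three points, exactly the image of $Alt_3$ under the natural bijection.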
https://ai.stackexchange.com/tags/convolution/hot
# Tag Info

## Hot answers tagged convolution

**13** 3D CNNs are used when you want to extract features in 3 dimensions or establish a relationship between 3 dimensions. Essentially it's the same as 2D convolution, but the kernel movement is now 3-dimensional, capturing dependencies within all three dimensions and changing the output dimensions post-convolution. The kernel on convolution ...

**7** Short answer: theoretically, convolutional neural networks (CNNs) can perform either cross-correlation or convolution. It does not really matter which, because the kernels are learnable, so they can adapt to cross-correlation or convolution given the data, although in the typical diagrams CNNs are ...

**5** 3D convolutions should be used when you want to extract spatial features from your input in three dimensions. In computer vision, they are typically used on volumetric images, which are 3D. Some examples are classifying 3D rendered images and medical image segmentation.

**4** For a 3-channel image (RGB), each filter in a convolutional layer computes a feature map, which is essentially a single-channel image. Typically, 2D convolutional filters are used for multichannel images. This can be a single filter applied to each layer or a separate filter per layer. These filters are looking for features which are independent of the color, ...

**4** You are partially correct. In CNNs the output shape per layer is defined by the number of filters used and by how the filters are applied (dilation, stride, padding, etc.). In your example, your input is 30 x 30 x 3. Assuming a stride of 1, no padding, and no dilation on the filter, you will get a spatial shape equal to your input, that is ...

**4** About the images inside the CNN layers: I really recommend this article, since there is no one short answer to this question and it will probably be better to experiment with it. About the RGB input images: when training on RGB pictures it is not advised to split the RGB channels; you can think of it as trying to identify a fictional cat with red ears, ...

**4** What are the parameters in a convolutional layer? The (learnable) parameters of a convolutional layer are the elements of the kernels (or filters) and the biases (if you decide to have them). There are 1D, 2D and 3D convolutions. The most common are 2D convolutions, which are the ones people usually refer to, so I will mainly focus on this case. 2d ...

**4** No, nothing really prevents the weights from being different. In practice, though, they almost always end up different, because that makes the model more expressive (i.e. more powerful), so gradient descent learns to do it. If a model has $n$ features but 2 of them are the same, then the model effectively has $n-1$ features, which is a less expressive model ...

**3** Am I right in thinking that, because there are only newImageX * newImageY patterns in the 32 x 32 image, the maximum number of filters should be newImageX * newImageY, and any more would be redundant? Your assumption is wrong. If you have a $32 \times 32$ image (so consider only grayscale images), then you have $256^{32 \times 32}$ possible patterns (i....

**3** Usually, you need to ensure that your convolutions are causal, meaning that there is no information leakage from the future into the past. You could start by looking at this paper, which compares Temporal Convolutional Networks (TCN) with vanilla RNN models.

**3** The reason why you go from 16 to 3 channels is that, in a 2D convolution, filters span the entire depth of the input. Therefore, your filters would actually be $7 \times 7 \times 16$ in order to cover all channels of the input. The output of the convolution automatically has a depth equal to the number of filters (so in your case this is $...
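The output-shape reasoning that several of the answers above rely on (stride, padding, dilation) reduces to a single floor-division formula; a small sketch, with a function name of my choosing:

```python
def conv2d_output_shape(h, w, kernel, stride=1, padding=0, dilation=1):
    """Spatial output size of a square-kernel 2D convolution."""
    k_eff = dilation * (kernel - 1) + 1   # effective kernel extent
    out_h = (h + 2 * padding - k_eff) // stride + 1
    out_w = (w + 2 * padding - k_eff) // stride + 1
    return out_h, out_w

# A 30 x 30 input with a 3x3 kernel, stride 1 and no padding shrinks to 28 x 28;
# padding=1 ("same" padding for a 3x3 kernel) preserves the spatial shape.
print(conv2d_output_shape(30, 30, 3))             # (28, 28)
print(conv2d_output_shape(30, 30, 3, padding=1))  # (30, 30)
```

Note that with no padding at all the spatial shape shrinks by `kernel - 1` in each dimension; preserving the input shape requires the appropriate padding.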
**3** To show how the convolution (in the context of CNNs) can be viewed as matrix-vector multiplication, let's suppose that we want to apply a $3 \times 3$ kernel to a $4 \times 4$ input, with no padding and with unit stride. Here's an illustration of this convolutional layer (where, in blue, we have the input; in dark blue, the kernel; and, in green, the feature ...

**3** I have had similar thoughts about neural networks before. Convolution layers are layers of two-dimensional nodes effectively passing the spatial data, so why don't we use two-dimensional hidden layers to receive information out of them? I'm sure someone has used this type of implementation before. I believe the papers below are doing this. Part of the ...

**3** I'm going to post another guess to this question - it won't be a complete answer, but hopefully it'll provide some direction towards finding a more legitimate answer. The feed-forward networks suggested by Vaswani are very reminiscent of sparse autoencoders, where the input/output dimensions are much greater than the hidden dimension. If you ...

**3** I haven't seen it as you describe, and I don't think it would be much use. Pooling layers are gradually being phased out of networks, because they don't seem to be that useful anymore. With the emergence of more and more conv-only architectures, I don't see it as likely.

**3** For a standard convolution layer, the weight matrix will have a shape of (out_channels, in_channels, kernel_sizes*); in addition you will need a vector of shape [out_channels] for the biases. For your specific case, 2D, your weight matrix will have a shape of (out_channels, in_channels, kernel_size[0], kernel_size[1]). Now if we plug in the numbers: out_channels = ...

**3** I don't think that to understand convolution you need to dig into the nested code of huge libraries, since the code quickly becomes really hard to understand and convoluted (ba dum tsss!). Joking apart, in PyTorch Conv2d is a layer that applies another low-level function, conv2d, written in C++. Luckily, the guys from PyTorch wrote down the general idea ...

**3** The point is that in the expansive path you have two forms of information: the information from the contracting path, which includes all high-level features extracted from the original image, and the information from the skip connections, which copy a cropped version of the feature maps in the contracting path. Because, as we go forward through the expansive ...

**3** What happens to the size of the output feature map in the case of full convolution? It increases. The first one is valid padding: the blue square is not padded, so the green square is smaller. The third one is same padding: the blue square is padded just enough so that the green square is the same size. The fourth one is full padding: the blue square is padded as much as ...

**2** If you have an $h_i \times w_i \times d_i$ input, where $h_i, w_i$ and $d_i$ respectively refer to the height, width and depth of the input, then we usually apply $m$ kernels (or filters) of size $h_k \times w_k \times d_i$ to this input (with the appropriate stride and padding), where $m$ is usually a hyper-parameter. So, after the application of $m$ kernels, you ...

**2** They are not the same thing. Asymmetric convolutions work by taking the x and y axes of the image separately, for example performing a convolution with an $(n \times 1)$ kernel before one with a $(1 \times n)$ kernel. On the other hand, depth-wise separable convolutions separate the spatial and channel components of a 2D convolution. It will first ...

**2** 1) The math is exactly the same, so from an optimization or mathematical perspective there is no difference. 2) Here are my guesses at a possible answer. Habit: people may just call one over the other out of habit. Generality: across frameworks a 1D convolution op would work, while Dense or FC may need adjustments to work on the temporal axis. Parallel ...
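Plugging in the figures that appear in the answers above (16 input channels, $7 \times 7$ filters, 3 output channels; these numbers are taken from those examples, not new data) gives the parameter count of such a layer directly:

```python
# Weight tensor shape for a 2D conv layer: (out_channels, in_channels, kernel_h, kernel_w),
# plus one bias per output channel.
out_channels, in_channels, k_h, k_w = 3, 16, 7, 7

weights = out_channels * in_channels * k_h * k_w   # 3 * 16 * 7 * 7 = 2352
biases = out_channels                              # 3
print(weights + biases)                            # 2355 learnable parameters
```

Each of the 3 filters spans the full input depth ($7 \times 7 \times 16$), which is why the filter count alone fixes the output depth.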
**2** In most modern neural network frameworks, the update rules for training can be selectively applied to some parameters and not others. How to do that depends on the framework. Some have the concept of "freezing" a layer, preventing its parameters from being updated; Keras does this, for example. Others do the opposite and expect you to provide a ...

**2** You can use a CNN for time-series data. The Convolutional Recurrent Neural Network (RCNN) is one example. Convolutional layers basically extract features from images; they are not tied to time-series data processing. Some CNNs (such as ResNet, Highway Networks, and DenseNet) use some recurrent concepts to improve their prediction, but they all are ...

**2** Read up on Fully Convolutional Networks (FCN). There are a lot of papers on the subject; the first was "Fully Convolutional Networks for Semantic Segmentation" by Long. The idea is quite close to what you describe - preserve spatial locality in the layers. In an FCN there is no fully connected layer. Instead there is average pooling on top of the last low-resolution/high-...

**2** The short answer is no. You can't use a model trained for one task to predict on a totally different task. Even if the second task were another image classification task, the CNN would have to be fine-tuned on the new data to work. A couple of things to note: 1) CNNs are good for images due to their nature. It isn't necessary that they'd be good for any 2-...

**2** Yes, this looks a lot like overfitting. The clue is in the low and slowly decreasing training loss compared to the large increases in validation loss. One simple fix would be to stop training around epoch 50, taking the best cross-validation result to select the most general network at that point. However, anything that works to improve stable generalisation ...

**2** You can also think of a convolutional neural network (CNN) as an encoder, i.e. a neural network that learns a smaller representation of the input, which then acts as the feature vector (input) to a fully connected network (or another neural network). In fact, there are CNNs that can be thought of as auto-encoders (i.e. an encoder followed by a decoder): for ...

**2** Mathematically, the convolution is an operation that takes two functions, $f$ and $g$, and produces a third function, $h$. Concisely, we can denote the convolution operation as follows $$f \circledast g = h$$ In the context of computer vision and, in particular, image processing, the convolution is widely used to apply a so-called kernel (aka filter) to an ...

Only top-voted, non-community-wiki answers of a minimum length are eligible.
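The kernel application described in the last answer can be written out directly. A naive sketch (real libraries use far faster algorithms), computing the "valid" cross-correlation that CNN layers actually perform:

```python
import numpy as np

def cross_correlate2d(image, kernel):
    """Naive 'valid' 2D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # elementwise product of the kernel with the patch under it
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 mean filter over a 4x4 input gives a 2x2 feature map,
# matching the 3x3-kernel-on-4x4-input example above.
img = np.arange(16, dtype=float).reshape(4, 4)
k = np.ones((3, 3)) / 9.0
print(cross_correlate2d(img, k).shape)  # (2, 2)
```

A true convolution would flip the kernel first; as one of the answers notes, the distinction is immaterial for learned kernels.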
https://cstheory.stackexchange.com/questions/48034/are-process-calculi-pi-calculus-csp-shared-memory
# Are process calculi (pi calculus, CSP) "shared memory"?

In Andrews' *Foundations of Multithreaded, Parallel, and Distributed Programming*, "shared memory/variables" and "message passing" appear as opposite programming models, and CSP is classified as message passing:

> Part 2: Distributed Programming. The synchronization constructs we have examined so far are based on reading and writing shared variables. Consequently, they are most commonly used in concurrent programs that execute on hardware in which processors share memory. Distributed-memory architectures are now common. ... Chapter 7 examines message passing, in which communication channels provide a one-way path from a sending to a receiving process. Channels are FIFO queues of pending messages. They are accessed by means of two primitives: send and receive. To initiate a communication, a process sends a message to a channel; another process acquires the message by receiving from the channel. Sending a message can be asynchronous (nonblocking) or synchronous (blocking); receiving a message is invariably blocking, as that makes programming easier and more efficient. In Chapter 7, we first define asynchronous message passing primitives and then present a number of examples that show how to use them. We also describe the duality between monitors and message passing: they are equivalent and each can directly be converted to the other. Section 7.5 explains synchronous message passing. The last four sections give case studies of the CSP programming notation, the Linda primitives, the MPI library, and the Java network package.

Varela's *Programming Distributed Computing Systems: A Foundational Approach* uses "shared memory" and "distributed memory" to describe opposite concurrency models but, strangely, calls the pi calculus shared memory, because a channel is shared between multiple processes:

> 7.1.2 Shared or Distributed Memory. Memory or state in concurrency units—such as processes, actors, or join calculus atoms—can be shared, where multiple concurrency units have the capability to read it and update it, or distributed, where only one concurrency unit owns it and can read it, update it, or communicate its content to other units. The π calculus has a shared memory model, where multiple processes can read from a shared channel or write to a shared channel.

and it even goes on to give "Figure 7.1 A pictorial representation of shared memory in the π calculus."

As far as I know, the pi calculus and CSP both use channels that are shared between multiple processes, and Wikipedia (https://en.wikipedia.org/wiki/Communicating_sequential_processes) says CSP is

> a member of the family of mathematical theories of concurrency known as process algebras, or process calculi, based on message passing via channels.

(1) Do Andrews' "message passing" programming model and Varela's "distributed memory" concurrency model mean different things?

(2) Do Andrews' "shared variables/memory" programming model and Varela's "shared memory" concurrency model mean different things? In particular, does Varela's "shared memory" as applied to the pi calculus not mean the same thing as memory shared by threads of the same process, or by processes, as in OS concepts and in Andrews' book?

(3) Is any model which uses a channel for communication between two or more processes (e.g. any concurrency model in the process calculi) both

- a "shared memory" but not "distributed memory" concurrency model in Varela's sense, and
- a "message passing" but not "shared variable/memory" programming model in Andrews' sense?

Thanks.

P.S. Related questions: https://cstheory.stackexchange.com/questions/47931/what-difference-are-between-the-topics-and-perspectives-of-these-two-distribute and https://cstheory.stackexchange.com/questions/47907/do-connection-and-message-coexist-in-csp-pi-calculus

• I suggest improving your question, since it is a bit difficult to answer without access to both books. Terminology is not stable: different communities use the same terms in different ways, and different terms to refer to the same things. The $\pi$-calculus is the paradigmatic message-passing calculus. Shared memory can be seen as a special case of message passing, with memory cells being processes that exchange read/write messages. This view was first elaborated by C. Hewitt in his 1976 *Viewing Control Structures as Patterns of Passing Messages*. – Martin Berger Dec 17 '20 at 20:43
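One way to see Andrews' distinction concretely is that in message passing the only shared object is the channel itself. A minimal sketch in Python (a stand-in for illustration, not pi calculus or CSP proper), where a bounded `queue.Queue` plays the role of the channel:

```python
import threading
import queue

ch = queue.Queue(maxsize=1)  # the channel: a FIFO of pending messages

def producer():
    ch.put(41)                # "send": blocks if the bounded channel is full

def consumer(out):
    out.append(ch.get() + 1)  # "receive": blocks until a message arrives

result = []
threads = [threading.Thread(target=producer),
           threading.Thread(target=consumer, args=(result,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(result)  # [42]
```

Neither thread reads or writes the other's state directly, which is "message passing" in Andrews' sense; yet both hold a reference to `ch`, which is exactly the sense in which Varela can call the channel itself "shared".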
https://physics.stackexchange.com/questions/251522/how-do-you-find-a-schmidt-basis-and-how-can-the-schmidt-decomposition-be-used-f/251574
# How do you find a Schmidt basis, and how can the Schmidt decomposition be used for operators?

Consider a system in the state $|\Psi\rangle=\frac{1}{2}\left(|00\rangle+|01\rangle+|10\rangle+|11\rangle\right)$. This state is easily seen not to be entangled, since $$|\Psi\rangle=\frac{1}{\sqrt2}(|0\rangle+|1\rangle)\otimes\frac{1}{\sqrt2}(|0\rangle+|1\rangle).$$ But if I want to calculate the Schmidt decomposition of the state $|\Psi\rangle$ I do not obtain this result. The state $|\Psi\rangle$ can be written as $|\Psi\rangle=\mathrm{diag}(\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{1}{4})$. Now you have to calculate the singular value decomposition of this matrix, but I don't understand which basis I have to choose for $V$ in order to obtain the Schmidt representation (decomposition) of this vector. How do I know which basis I have to choose?

My second question is how I can apply the Schmidt decomposition to operators or matrices. Apparently this is possible, but I do not know how.

• For the Schmidt decomposition you first partition your space into two pieces, say A and B. The Schmidt decomposition is then the (unique) expression $|\psi \rangle = \sum_\alpha \lambda_\alpha |\psi_{A,\alpha}\rangle \otimes |\psi_{B,\alpha}\rangle$ where $\{ |\psi_{A,\alpha} \rangle \}_\alpha$ forms an orthonormal basis on subspace A (and similarly for B). So by virtue of rewriting $|\psi \rangle$ as a product state between sites one and two, you have in fact calculated that the Schmidt decomposition is given by $\lambda_0 = 1$, $\lambda_1 = 0$, and [continued] – Ruben Verresen Apr 23 '16 at 12:01
• [continuing] $|\psi_{A,0}\rangle = \frac{1}{\sqrt{2}} \left( |0\rangle + |1 \rangle \right)$, $|\psi_{A,1}\rangle = \frac{1}{\sqrt{2}} \left( |0\rangle - |1 \rangle \right)$, and similarly for the second site ('B'). – Ruben Verresen Apr 23 '16 at 12:02
• Thanks for your answer, but I don't really see how I can apply this in a more general case. In this example, it was pretty easy to guess the orthonormal bases of A and B. But how can you find these bases in a more general and more complex example? – anonymous Apr 23 '16 at 12:26

Okay, let me elaborate on my comment to show how you would calculate the Schmidt decomposition in general. This might also answer your second question.

As I said in my comment, the Schmidt decomposition requires you to subdivide your system into two parts, A and B. It is then the (unique) decomposition $|\psi \rangle = \sum_\alpha \lambda_\alpha |\psi_{A,\alpha} \rangle \otimes |\psi_{B,\alpha} \rangle$ (where the components define orthonormal bases in A and B). This can be considered the decomposition that minimally entangles the two subsystems (the entanglement being given by the Schmidt values $\lambda_\alpha$).

To calculate this decomposition, one rewrites the state as a matrix and then applies the SVD. It is in fact simple to rewrite the state as a matrix: one treats the wavefunction indices for subsystem A as the row indices, and the indices for subsystem B as the column indices. For example, your state $|\psi\rangle = \frac{1}{2}\left( |00 \rangle +|01\rangle + |10 \rangle + |11\rangle \right)$ can be written as $|\psi\rangle = \sum_{ij} A_{ij} |ij\rangle$ with $A_{00} = A_{01} = A_{10} = A_{11} = \frac{1}{2}$. We can consider this to define a matrix $$A = \frac{1}{2}\left( \begin{array}{cc} 1 & 1 \\ 1 & 1 \end{array} \right).$$

Now if you look up the definition of the SVD, it is not hard to see that it gives us exactly what we want for the Schmidt decomposition. In this case the SVD gives us $$A = \frac{1}{\sqrt{2}}\left( \begin{array}{cc} 1 & 1 \\ 1 & -1 \end{array} \right) \; \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) \; \frac{1}{\sqrt{2}}\left( \begin{array}{cc} 1 & 1 \\ 1 & -1 \end{array} \right)$$ This gives exactly the $\lambda_\alpha$ and $|\psi_{A,\alpha}\rangle$ that I wrote down in my comment on your original post.

More generally, if our state is $|\psi\rangle = \sum_{ij} A_{ij} |i_A\rangle \otimes |j_B\rangle$, then the Schmidt decomposition is given by the SVD as $$A = \left( \begin{array}{cccc} |\psi_{A,0}\rangle & \cdots & |\psi_{A,\alpha} \rangle & \cdots\end{array} \right) \; \left( \begin{array}{cccc} \lambda_0 & 0 & 0 & 0 \\ 0& \ddots & 0 & 0 \\ 0 & 0 & \lambda_\alpha & 0 \\ 0 & 0 & 0 & \ddots \end{array} \right) \; \left( \begin{array}{c} |\psi_{B,0}\rangle \\ \vdots \\ |\psi_{B,\alpha} \rangle \\ \vdots\end{array} \right)$$

• Hey, thanks for your answer. I think I understand now how to find a Schmidt decomposition, but I still don't get how I could apply this to operators. I can't find any sources on how to apply the Schmidt decomposition to operators. – anonymous Apr 24 '16 at 18:53
• Where did you read that this is possible? I was thinking that maybe you confused it with the fact that to calculate the Schmidt decomposition of a state, you calculate the SVD of a matrix, which is an operator (this is what I showed above). If you mean something else, where did you read it? – Ruben Verresen Apr 24 '16 at 20:55
• I read it here njohnston.ca/2014/06/… and here arxiv.org/ftp/arxiv/papers/1006/1006.3412.pdf – anonymous Apr 25 '16 at 5:07
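The reshape-then-SVD procedure in the answer can be checked numerically; a small sketch using `numpy.linalg.svd` (variable names are mine):

```python
import numpy as np

# |psi> = (|00> + |01> + |10> + |11>)/2, reshaped so that rows index
# subsystem A and columns index subsystem B: A_ij = 1/2 for all i, j.
A = np.full((2, 2), 0.5)

U, s, Vh = np.linalg.svd(A)
# Schmidt coefficients: [1, 0], so the state is a product state.
print(np.round(s, 10))
# Columns of U are the |psi_{A,alpha}>, rows of Vh the |psi_{B,alpha}>;
# here |psi_{A,0}> = (|0> + |1>)/sqrt(2), up to a global sign.
```

The squared Schmidt coefficients sum to 1 for a normalized state, and a single nonzero coefficient certifies the absence of entanglement.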
https://www.homebuiltairplanes.com/forums/threads/yamaha-rx-1-snowmobile-engine-conversion.33001/
# Yamaha RX-1 Snowmobile Engine Conversion

#### Marc W
##### Well-Known Member

I first saw or heard of the Yamaha sled engines being converted for aircraft at the Copperstate Fly-In last year. Teal Jenkins, the man behind Skytrax, had an Apex engine with a gearbox of his own design displayed. The sign said 160 HP at 165 lbs. I was smitten!

Recently an RX-1 engine, Skytrax adapter and Rotax "C" gearbox came up for sale. The seller started with a high price and I passed on it, as did everyone else. Weeks passed and the price dropped. A couple more weeks passed and I made a low offer and accepted a counteroffer. It was too good a price to pass up, so I bought it. Here it is sitting in front of the airplane as I figure out the engine mount and where to hang accessories. Pictured is the "C" box, flywheel, Hardy disc and the Skytrax adapter.

This is a picture of an RX-1 engine in a sled. The RX-1 is carbureted and you can see the tops of the four carbs. The other necessary parts are the thermostat housing, which is the aluminum domed object at the near end of the cylinder head. The radiator header tank is left and a little above the thermostat housing. The engine has a dry sump, so you need the oil tank, which is below the carbs in the picture with all the hoses going to it. The carbs are electrically heated and also heated by coolant.

The RX-1 was built from 2003 to 2005 and was rated at 140 HP at 10,200 RPM. It was replaced by the fuel-injected Apex from 2006 to 2008. The Apex gained 10 HP or so over the RX-1. They are basically the same engine. They both have COP electronic ignition. They also have a 1.19:1 internal gear reduction with a rubber damper in the countershaft. There are other companies that make adapters to run the Rotax gearboxes on the RX-1. I have the Skytrax adapter. There are quite a few of these engines on PPCs, autogyros and the like. There are a few on airplanes.
You can run a low-inertia prop with the rubber Hardy disc. People running heavier props can use the RK400 clutch. The RX-1 prop turns left. Teal Jenkins makes a gearbox that bolts directly to the Apex engine. His gearbox uses three gears, so it raises the thrust line a little and makes for a slightly shorter engine. Since it uses three gears, the prop will rotate to the right. So there it is, my next project!

#### TarDevil
##### Well-Known Member

I've been impressed with Steve Henry's installations. Looks like a winner.

#### cohocarl
##### Member

The conversion of the Yamaha Genesis (YG4) engines has caught my attention also... I picked up an unmolested '06 Apex with <2000 miles last year with the possibility of picking up a Skytrax adapter. Until then, it's a blast to ride. There's a good Facebook group on the Yamaha conversions.

#### Marc W
##### Well-Known Member

I finally started making a little progress on my conversion. Things were on hold until I got the exterior of the engine cleaned up some. It was ugly! It had areas of corrosion and a white coat of something that I finally realized was the remains of a clear coat. I worked on it with small stainless steel rotary brushes from Harbor Freight that work fairly well to clean up aluminum. There were too many areas I couldn't get with the brushes, and just too much area. I ordered a soda blaster on eBay. First, the seller couldn't ship because he was out of town. Then he had a death in the family, and then he couldn't find it! It turned into a soap opera! I found an open-box blaster at Harbor Freight for a good price. Bought it and discovered the tube was missing that carries the soda up out of the tank. I did some measuring and found a piece of 1/4" tube in the scrap pile. Guessed at hole sizes based on the diagram in the operator's manual and made the part. It sorta worked! Adjusted the hole sizes and it worked like a champ. I braved the 22-degree temps, put up with the fogging eyeglasses, and blasted it.
I couldn't see and I missed a few areas, but it is good enough. I finished the cleanup yesterday, and today I installed the Skytrax adapter and the Rotax flywheel. The Rotax flywheel has the same taper as the PTO shaft on the RX-1 engine. The diameter is a little different, so you have to cut the end of the PTO shaft off like so. The shaft needs to be recessed in the flywheel like so. This is a shot of the internal reduction gears in the engine: 31 teeth on the crank gear and 37 teeth on the PTO gear. You can also see the timing chain. It is a sophisticated little engine with dual overhead camshafts and five tiny valves per cylinder. Here is the adapter and flywheel mounted and the bolts torqued.

#### cheapracer
##### Well-Known Member
Log Member

Noice! Great to see a real, genuine, cost-effective alternative engine coming through the cracks. This is the real deal.

#### Marc W
##### Well-Known Member

Didn't get much done today except measuring and figuring. The engine originally came with 4 carbs. They have electric heat and coolant heat, so more wiring and plumbing, plus rebuilding and rejetting the carbs. Too much complexity for my simple mind! I am going to use the SDS EFI instead. SDS needs a trigger wheel driven at crankshaft speed. The RX-1 engine does not have any external crank-driven shaft to mount a trigger wheel on. The PTO shaft is driven at a reduced speed, so it will not work for SDS EFI. Today's project was to design an adapter to drive an external trigger wheel. I took inspiration from "Irene the Gyro" on Facebook. Irene's builder made an adapter to drive a pre-rotator off the end of the crankshaft. Below you can see the alternator housing, and through the hole in the center of the housing you can see the bolt that holds the alternator rotor on the end of the crank. The next picture shows the housing and bolt removed. Also visible are the starter drive gears, but I digress. Irene's builder bored out the center hole in the housing and installed a 35mm x 50mm oil seal there.
He didn't give details on the extension, but I assume he bolted it to the end of the crankshaft, possibly with a longer bolt. I will do the same. My extension will be lighter since I am not driving a pre-rotator, and I am going to use a 49mm seal to leave more meat in the housing. I finished my design and found a suitable piece of bar stock in the scrap pile today. It is supposed to be warmer tomorrow, so I will be in my unheated "machine shop" making chips tomorrow.

#### Marc W

##### Well-Known Member

I got the crankshaft extension done. It took way too long. I didn't have suitable tool bits, so I had to grind some new ones. Had trouble with the lathe. It wouldn't shut off, so I would have to unplug it to stop it. I tried a bunch of different things and finally discovered one of the magnetic switches was sticking when it got hot. Went in the house and found one on eBay. Went back out in the shop and discovered it works fine if I leave the switch box open. It just needs a little cooling air, and there is no shortage of that today! A pic of the almost finished part in the lathe and a pic of where it will mount. More parts to make, but the rest won't generate as many chips!

#### Marc W

##### Well-Known Member

I made more progress on the trigger wheel today. It looks like I got it a little close to the housing. I want to mount the Hall sensor between the wheel and the housing. It looks like it will be tight. I may have to modify it later. We will see. Next step is to bore the housing out for the oil seal. I have been pondering how to keep the bore concentric to the shaft. Maybe I will get lucky!

#### rv7charlie

##### Well-Known Member

[snipped] There are other companies that make adapters to run the Rotax gearboxes on the RX-1. I have the Skytrax adapter. There are quite a few of these engines on PPCs, autogyros and the like. There are a few on airplanes. You can run a low inertia prop with the rubber Hardy disc.
The RX-1 prop turns left. Teal Jenkins makes a gearbox that bolts directly to the Apex engine. His gearbox uses three gears, so it raises the thrust line a little and makes the engine a little shorter. Since it uses three gears, the prop will rotate to the right. So there it is, my next project!

Hi Marc, Can you point us to a website detailing Jenkins' 3-gear reduction? I'm not having any luck finding info on it. Thanks, Charlie

#### Marc W

##### Well-Known Member

Jenkins doesn't have a website. He does frequent the Facebook group. He will also return phone calls, so your best bet is to call him. His number is (623)734-0185.

#### rv7charlie

##### Well-Known Member

Thanks, Marc. I'll hang onto the number, but I hate to take up his time for what amounts to 'window shopping' at this point. Are there any specifics about the drive on the Facebook page? I spent almost an hour in the publicly accessible section last night looking through the photos section, but didn't see any info on the 3-gear setup. I also tried looking through some of the messages, but couldn't figure out any way to search for specific terms. No idea of weight, price, power handling, which engines it will fit, etc. Thanks, Charlie (not a fan of FB)

#### Marc W

It is machined to fit only the Yamaha Apex engine. There is a company in Norway that is testing it on a turboed engine running up to 300 HP. Steve Henry is flying a turboed engine with it. I don't remember the numbers offhand, but over 200 HP. Last year's price was $3500, but I don't know if it still is. Just remembered there is some info here: http://avidfoxflyers.com/index.php?/topic/5824-yamaha-apex-skytrax-adapters/

#### Marc W

##### Well-Known Member

Finally made some progress. I have been tearing my hair out trying to accurately locate the oil seal bore for my crankshaft extension. I started with the large threaded hole in the housing.
The extension was too large to fit through the hole, so I first set the cover up on the mill and bored the threads out. Next I had to locate the center of the bore. It took a while, but I came up with the idea of assembling the housing and extension and filling the hole around the extension with Bondo. Then I could locate off the hole in the Bondo. Worked well, except I screwed up and had to repeat the process. I set it up on the mill and did a preliminary cut, and then put it back on the engine to check the runout. The runout was excessive, so I figured I made a mistake and adjusted the setup on the mill and made another cut. Rechecked it on the engine and still no good. I bought my lathe and mill/drill from an estate. The lathe was filthy but had been lightly used, and it does good work. The mill/drill was also filthy and had clearly been hard used in a production shop. It does fine for most of what I do, but it didn't work so well to accurately locate this hole. The spindle finds a different center when it is under power than it has when rotating the spindle by hand. The lead screws have a yard of backlash and things are just a little sloppy. After 3 days I gave it up and decided I was going to have to use the lathe. It is too hard to get the chuck off and put the faceplate on, so I made a faceplate to clamp in the lathe chuck. That also allowed me to dial in the bore by adjusting the chuck jaws. Success! Here is the finished bore. I ended up with less than .001" TIR (total indicated runout) on the oil seal bore, and I have under .002" TIR on the extension. That is within the max runout usually allowed on oil seals, so I am well satisfied.

#### rv7charlie

##### Well-Known Member

It is machined to fit only the Yamaha Apex engine. There is a company in Norway that is testing it on a turboed engine running up to 300 HP. Steve Henry is flying a turboed engine with it. I don't remember the numbers offhand, but over 200 HP.
Last year's price was $3500, but I don't know if it still is. Just remembered there is some info here: http://avidfoxflyers.com/index.php?/topic/5824-yamaha-apex-skytrax-adapters/

#### Marc W

##### Well-Known Member

House projects are really interfering with my airplane habit! I needed a way to index my crankshaft extension. The alternator rotor is keyed to the shaft with a Woodruff key, so the keyway is not cut all the way to the end of the shaft. The keyway in the rotor goes to the end of the shaft, and the end of the shaft is recessed inside the rotor. I made a little L-shaped key to slip into the keyway and milled a slot in the base of the extension to match. Key installed. I have been puzzling over how to fit everything inside the cowl because that will affect the engine mount design. I have things pretty well laid out except the oil tank. I have only seen examples of this engine installed in 2-seat side-by-side aircraft. My narrow little single seater just doesn't have room to install the stock oil tank. I will either have to bulge the cowl to fit it in or make a custom tank. My next step is to get engine and airplane lined up and build the engine mount. Maybe I can get the oil tank in too.

#### AIRCAB

##### Well-Known Member

I see no harmonic isolator between engine and PSRU. What is the theory behind not using one? I am new to this engine application.

#### Marc W

##### Well-Known Member

Look again at the first picture in the first post. The part to the right, in front of the flywheel, is the rubber Hardy disc that bolts to the flywheel. The pinion gear is driven by the Hardy disc. The engine also has a rubber damper built into the internal gear reduction.
https://www.tutorialspoint.com/Trapezoidal-Rule-for-definite-integral
# Trapezoidal Rule for definite integral

Definite integrals can be solved using the trapezoidal rule. Integrating a function f(x) over the range a to b is basically finding the area below the curve from the point x = a to x = b. To find that area, we can divide it into n trapezoids, each of width h, so we can say that (b - a) = nh. As the number of trapezoids increases, the result of the area calculation becomes more accurate. To solve integrals, we will follow this formula:

∫ f(x) dx from a to b ≈ h * [ (f(a) + f(b))/2 + f(a + h) + f(a + 2h) + ... + f(a + (n-1)h) ]

Here h is the width of each interval, and n is the number of intervals. We can find h by using h = (b - a)/n.

## Input and Output

Input:
The function f(x): 1-exp(-x/2.0) and limits of the integration: 0, 1. The number of intervals: 20

Output:
The answer is: 0.21302

## Algorithm

integrateTrapezoidal(a, b, n)

Input: Lower and upper limits of integration, and the number of intervals n.
Output: The result of integration.

Begin
   h := (b - a)/n
   sum := (f(a) + f(b))/2
   for i := 1 to n-1, do
      sum := sum + f(a + ih)
   done
   return h * sum
End

## Example

```cpp
#include <iostream>
#include <cmath>
using namespace std;

float mathFunc(float x) {
   return (1 - exp(-x / 2.0));    // the function 1 - e^(-x/2)
}

float integrate(float a, float b, int n) {
   float h, sum;
   int i;
   h = (b - a) / n;    // calculate the width of each interval
   sum = (mathFunc(a) + mathFunc(b)) / 2;    // initial sum using f(a) and f(b)
   for (i = 1; i < n; i++) {
      sum += mathFunc(a + i * h);
   }
   return (h * sum);    // the result of integration
}

int main() {
   float result, lowLim, upLim;
   int interval;
   cout << "Enter Lower Limit, Upper Limit and interval: ";
   cin >> lowLim >> upLim >> interval;
   result = integrate(lowLim, upLim, interval);
   cout << "The answer is: " << result;
}
```

## Output

Enter Lower Limit, Upper Limit and interval: 0 1 20
The answer is: 0.21302

Updated on 17-Jun-2020 08:48:56
https://physics.stackexchange.com/questions/122980/lsz-reduction-theorem-derivation-in-weinberg-qft
# LSZ reduction theorem derivation in Weinberg QFT

When deriving the LSZ reduction theorem, Weinberg in his QFT book considers the n-point generalized Green functions,

$$G(q_{1},...,q_{n}) = \int d^{4}x_{1}...d^{4}x_{n}e^{-i\sum_{j =1}^{n}q_{j}x_{j}} \langle |\hat {T}\left( \hat {O}_{l}(x_{1})\hat {A}_{2}(x_{2})...\hat A_{n}(x_{n})\right) |\rangle , \quad (1)$$

where $\hat {O}_{l}(x)$ transforms under the same irreducible representation of the Lorentz group as some free field $\hat {\Psi}_{l}(x)$. By inserting the resolution of the identity

$$\sum_{i, \sigma}\int d^{3}\mathbf p | (\mathbf p , \sigma )_{i}\rangle \langle (\mathbf p , \sigma )_{i}|$$

between $\hat {O}_{l}(x_{1})$ and $\hat {A}_{2}(x_{2})$, and isolating the one-particle states from it, he "reduces" (with some hints) $(1)$ to the form

$$G(q_{1},...,q_{n}) \to f(q)\sum_{\sigma}\langle | \hat {O}_{l}(0)| (\mathbf q_{1}, \sigma )\rangle \times$$

$$\times \int d^{4}x_{2}...e^{-iq_{2}x_{2}-...}\langle (\mathbf q_{1}, \sigma ) |\hat {T}\left( \hat {A}(x_{2})...\right) | \rangle \delta (q_{1} + ... + q_{n}). \qquad (2)$$

Here $f(q)$ contains a first-order pole $\frac{1}{q^{2} - m^{2} - i\varepsilon}$ and $q = q_{1} + ... + q_{r}$. After that he states that in $(2)$ one has the equality $\hat {O}_{l}(0)| (\mathbf q_{1}, \sigma )\rangle = \frac{1}{\sqrt{(2 \pi )^{3}}}Nu^{\sigma}_{l}(\mathbf q_{1})| \rangle$.

So my question is: why does the factor $N$ appear (in comparison with the free-field-like expression $\hat {O}_{l}(0)| (\mathbf q_{1}, \sigma )\rangle = \frac{1}{\sqrt{(2 \pi )^{3}}}u^{\sigma}_{l}(\mathbf q_{1})| \rangle$)? What is its physical meaning? Is its appearance connected with the fact that $| \rangle$ doesn't refer to the "usual" vacuum? Can you also comment on this statement, please?

• (a) You should have read the entire section 10.3 before you asked. It is the subject of this section. (b) That said, what Weinberg states here is IMO not a proof, but the common belief that QFT should be interpreted in this way.
(c) You can safely think of $| \rangle$ as the vacuum, or no-particle state. (d) My impression is that Weinberg is your first QFT textbook. If so, I recommend you read another before Weinberg. (Say Srednicki?) – teika kazura Jul 5 '14 at 0:19
• @teikakazura: it is not my first QFT textbook. For example, I read Srednicki's book yesterday to understand the meaning of $N$, but it also doesn't explain why we must introduce this factor. Also, why is everything so simple for interacting fields (we just add the factor $N$ to the free-field definition)? I have also read Peskin and Schroeder's book, but there are no explanations there either. – Andrew McAddams Jul 5 '14 at 9:22
• Section 10.3 in my edition of the book covers the proof of the LSZ reduction formula. Also, Weinberg only concludes there that one should assume that one of the operators in his formula from section 10.2 transforms as a free field under an irreducible rep of the Lorentz group. The main remaining results of 10.3 were obtained from this statement alone. – Andrew McAddams Jul 5 '14 at 9:28
https://proofwiki.org/wiki/Elements_of_Abelian_Group_whose_Order_Divides_n_is_Subgroup
# Elements of Abelian Group whose Order Divides n is Subgroup

## Theorem

Let $G$ be an abelian group whose identity element is $e$.

Let $n \in \Z_{>0}$ be a (strictly) positive integer.

Let $G_n$ be the subset of $G$ defined as:

$G_n = \set {x \in G: \order x \divides n}$

where:

$\order x$ denotes the order of $x$

$\divides$ denotes divisibility.

Then $G_n$ is a subgroup of $G$.

## Proof

$\order e = 1$ and so from One Divides all Integers:

$\order e \divides n$

Thus $G_n \ne \O$.

Then:

$x \in G_n \implies \order x \divides n \implies \order {x^{-1} } \divides n$ (by Order of Group Element equals Order of Inverse) $\implies x^{-1} \in G_n$

Let $a, b \in G_n$ such that $\order a = r, \order b = s$.

Since $G$ is abelian:

$\order {a b} \divides \lcm \set {r, s}$

But $r \divides n$ and $s \divides n$ by definition of $G_n$. Therefore, by definition of lowest common multiple:

$\lcm \set {r, s} \divides n$

and hence:

$\order {a b} \divides n$

Thus we have:

$G_n \ne \O$

$x \in G_n \implies x^{-1} \in G_n$

$a, b \in G_n \implies a b \in G_n$

and the result follows by the Two-Step Subgroup Test.

$\blacksquare$
https://calculator.academy/chimney-draft-calculator/
Enter the height of the chimney, the density of the air, the temperature inside the chimney, and the temperature outside the chimney into the calculator to determine the chimney draft. ## Chimney Draft Formula The following equation is used to calculate the Chimney Draft. D = h*p*(1-To/Ti)*g • Where D is the chimney draft (pascals) • h is the height of the chimney (m) • p is the density of the air (1.292 kg/m^3) • To is the temperature outside the chimney (C) • Ti is the temperature inside the chimney (C) • g is the acceleration due to gravity (9.81 m/s^2) ## What is a Chimney Draft? Definition: A chimney draft is a measure of the pressure differential between the inside and outside of a chimney shaft. ## How to Calculate Chimney Draft? Example Problem: The following example outlines the steps and information needed to calculate Chimney Draft. First, determine the height of the chimney. For this example, the height is measured to be 15m. Next, determine the density of the air. This is known to be approximately 1.292 kg/m^3. Next, determine the temperature inside the chimney. This is measured to be 40C. Next, determine the temperature outside the chimney. In this case, this is measured to be 30C. Finally, calculate the chimney draft using the formula above: D = h*p*(1-To/Ti)*g D = 15*1.292*(1-(30/40))*9.81 D = 47.529 pascals
http://accesspediatrics.mhmedical.com/content.aspx?bookid=1303&sectionid=79660025
Chapter 142

### I. DEFINITION

Toxoplasmosis is caused by Toxoplasma gondii, an intracellular parasitic protozoan capable of causing intrauterine infection.

### II. INCIDENCE

The incidence of congenital infection is 1–10 per 10,000 live births. An estimated 400–4000 cases of congenital toxoplasmosis occur each year in the United States. Serologic surveys demonstrate that worldwide exposure to T. gondii is high (30% in the United States and 50–80% in Europe).

### III. PATHOPHYSIOLOGY

T. gondii is a coccidian parasite ubiquitous in nature. Members of the feline family are the definitive hosts. The organism exists in 3 forms: oocyst, tachyzoite, and tissue cyst (bradyzoites). Cats generally acquire the infection by feeding on infected animals such as mice or on uncooked household meats. The parasite replicates sexually in the feline intestine. Cats may begin to excrete oocysts in their stool 7–14 days after infection. During this phase, the cat can shed millions of oocysts daily for 2 weeks. After excretion, oocysts require a maturation phase (sporulation) of 24–48 hours before they become infective by the oral route. Intermediate hosts (sheep, cattle, and pigs) can have tissue cysts within organs and skeletal muscle. These cysts can remain viable for the lifetime of the host. The pregnant woman usually becomes infected by consumption of raw or undercooked meat that contains cysts or by the accidental ingestion of sporulated oocysts from soil or contaminated food. Ingestion of oocysts (or cysts) releases sporozoites that penetrate the gastrointestinal mucosa and later differentiate into tachyzoites. Tachyzoites are ovoid unicellular organisms characteristic of the acute infection. Tachyzoites spread throughout the body via the bloodstream and lymphatics. It is during this stage that vertical transmission from mother to fetus occurs. In the immunocompetent host, the tachyzoites are sequestered in tissue cysts and form bradyzoites.
Bradyzoites are indicative of the chronic stage of infection and can persist in the brain, liver, and skeletal tissue for the life of the individual. There are reports of transmission of toxoplasmosis through contaminated municipal water, blood transfusion, and organ donation, and occasionally as a result of a laboratory accident. Acute infection in the adult is often subclinical (90% of cases). If symptoms are present, they are generally nonspecific: a mononucleosis-like illness with fever, painless lymphadenopathy, fatigue, malaise, myalgia, skin rash, and splenomegaly. The vast majority of congenital toxoplasmosis cases are a result of primary maternal infection acquired during pregnancy; however, toxoplasmic reactivations can occur in immunosuppressed pregnant women and result in fetal infection. Approximately 84% of women of childbearing age in the United States are seronegative and are thereby at risk of acquiring T. gondii infection during gestation. Placental infection occurs and persists throughout pregnancy. The infection may or may not be transmitted to the fetus. The later in pregnancy that infection is acquired, the more likely transmission to the fetus becomes (first trimester, 17%; second trimester, 25%; and third trimester, 65% transmission). ...
https://www.homebuiltairplanes.com/forums/threads/direct-drive-honda-b-s-limits.32908/page-2
# Direct Drive Honda/B&S limits

Discussion in 'Firewall Forward / Props / Fuel system' started by LHH, Jan 13, 2020.

1. Jan 14, 2020

### LHH

#### Active Member

Joined: Aug 1, 2013 Messages: 41 Likes Received: 6 Location: IL

Thanks all. Will start with Billski's advice and model in SolidWorks and then model against propellers to see what happens. Honda price is almost identical to Kohler and B&S for the same HP. All are around $2k.

2. Jan 16, 2020

### Armilite

#### Well-Known Member

Joined: Sep 6, 2011 Messages: 3,191 Likes Received: 275 Location: AMES, IA USA

==============================================

The fellow using the Honda GX200 Clone in Direct Drive on a Lazer was turning them at 4400rpm. Small 7/8" PTO Shaft (the Rotax 185UL when upgraded to 7/8" PTO used 5000rpm); the Honda/Clone GX390+ Singles use a bigger 1.0" PTO Shaft. A Predator 670 V Twin uses a 1.0" PTO. The biggest I have seen used on a V Twin is 1-1/8". The Stock Flywheels are good to 5500rpm Max, and 5000rpm is the Max I would turn them for Plane use. Most of these Industrial Engines need to save Weight, so People use the Billet Aluminum Flywheels, which are good to 10,000+rpm and Save a lot of weight over the Stock Cast Iron Flywheel. Use the Billet Aluminum Rod, which is stronger than the Stock Rod. They usually take out the Balance Shaft to save 4-6 lbs of Weight and just have the Crank Balanced. The only thing you need to worry about is using the right Size Prop and Pitching it for the Engine's Max Hp. Like if you're turning it 5000rpm, you Pitch it -100rpm for 4900rpm. I personally would not use Direct Drive on these; I would use a Good Belt Drive like from ACE. I haven't seen anyone Adapt a Gear Drive to one of these Honda/Clones yet.

Kawasaki FH680D-FS08S 675cc replacement engine with a horizontal PTO shaft measuring 1-1/8 in. x 3-15/16 in.
5000rpm in Direct-Drive is going to limit you to a Max 48" x 10 (2) Blade Prop making 355.83 lbs Static Thrust. Needs 45.080 hp.

-----------------------------------------------------------------------

5000rpm with a 1.8 ACE Belt-Drive is going to give you 2,777.7rpm, Pitched for 2,677.7rpm, an Option to run a 72" x 10 (2) Blade Prop making 516.64 lbs Static Thrust. Needs 35.053 hp.

5000rpm with a 1.8 ACE Belt-Drive is going to give you 2,777.7rpm, Pitched for 2,677.7rpm, an Option to run a 68" x 12 (2) Blade Prop making 411.05 lbs Static Thrust. Needs 33.467 hp.

So as you can see, using a Belt Drive allows you to make more Thrust by running a Bigger Prop. You can play with the Numbers for whatever Engine you are thinking of using. The ACE Belt Drive for V Twins, which I consider Best for these Honda Clones, is $649 & $699. Freight to Mainland USA is US$109, so $758 & $808 shipped. For Singles: US $569, Freight to Mainland USA US$99, so $688 shipped.

Static Thrust Calc. http://godolloairport.hu/calc/strc_eng/index.htm

3. Jan 16, 2020

### Armilite

#### Well-Known Member

Joined: Sep 6, 2011 Messages: 3,191 Likes Received: 275 Location: AMES, IA USA

-----------------------------------------------------------

A Honda/Clone 460 Single Dynoed making 37.37hp@5000rpm. At 3600rpm it was making 30.68hp. Used a 34mm Carb, 11.0cr, 307 CAM, 40mm/32mm Valves, K&N Type Air Filter, Tuned Header Exhaust.

4. Jan 23, 2020

### LHH

#### Active Member

Joined: Aug 1, 2013 Messages: 41 Likes Received: 6 Location: IL

Armilite was right, a redrive is needed even with the lightest props. MOI for the flywheel was almost identical to the tiny and lightweight props on the FES system. So even the smallest props are likely too much for the engines "as is".

5. Jan 23, 2020

### Vigilant1

#### Well-Known Member, Lifetime Supporter

Joined: Jan 24, 2011 Messages: 4,409 Likes Received: 2,070 Location: US

Not necessarily.
If I'm following your reasoning, you only know that the MOI of the props is higher than the MOI of the stock flywheel. That doesn't tell you that the MOI of the prop would be "too much for the engine 'as is'." There are lots of these little industrial twins turning props in direct drive and powering airplanes without complaint, for many hundreds of hours.

6. Jan 23, 2020

### LHH

#### Active Member

Joined: Aug 1, 2013 Messages: 41 Likes Received: 6 Location: IL

Excellent point, as I would prefer direct drive, but a redrive may allow a larger prop with a little less risk. A redrive also allows the engine position to be more centered.

7. Jan 23, 2020

### pictsidhe

#### Well-Known Member

Joined: Jul 15, 2014 Messages: 7,350 Likes Received: 2,117 Location: North Carolina

If you are good at engineering redrives. I think I can do it, and it will be much harder to make reliable than direct drive.

8. Jan 24, 2020

### LHH

#### Active Member

Joined: Aug 1, 2013 Messages: 41 Likes Received: 6 Location: IL

Thank you for the offer; just emailed Ace to try and get one made to keep the engine centered due to the low shaft position.

9. Jan 24, 2020

### Vigilant1

#### Well-Known Member, Lifetime Supporter

Joined: Jan 24, 2011 Messages: 4,409 Likes Received: 2,070 Location: US

IIRC, John (at Ace) has stated that his drives do not need a snubber/tensioner, which many other designers have found to be essential to avoid torsional vibration and belt flailing. Among his stated reasons is that he keeps the two pulleys very close together. That would preclude a large offset between the bearings unless the pulleys are quite large. There's quite a bit more about the Ace units here on HBA. Since you'll be sending him almost $1000, you'll want to do that reading.

10. Jan 24, 2020

### pictsidhe

#### Well-Known Member

Joined: Jul 15, 2014 Messages: 7,350 Likes Received: 2,117 Location: North Carolina

I won't be selling redrives for several years, if ever.
I'm a little dubious about the Ace units. Without spending $1000 on one to examine and test, I can't say if they are likely to hold up or not. John is not willing to give out enough info to do a proper paper assessment.

Vigilant1 likes this.

11. Jan 24, 2020

### Victor Bravo

#### Well-Known Member

Joined: Jul 30, 2014 Messages: 6,594 Likes Received: 5,375 Location: KWHP, Los Angeles CA, USA

As has been mentioned already, a big spinner in front will greatly reduce the bad shape of the engine for a motorglider, and also give you a nice low-drag shape. Since a motorglider is likely going to have a low-drag fuselage with a reclined pilot, that spinner will be helping quite a bit with the whole airplane. Direct drive will save a lot of weight, money, and fiddling. A motorglider should have low enough drag to be able to use this.

12. Jan 25, 2020

### Vigilant1

Joined: Jan 24, 2011 Messages: 4,409 Likes Received: 2,070 Location: US

Here's a sketch that Hot Wings made showing the relative frontal area of an 810cc B&S (28HP stock, more to be had) in direct drive mode. That's a 6' 2" pilot and a 10" x 10" spinner. There's more (incl. some sketches with redrives) at this post.

From the side:

Although these engines aren't the "conventional" airplane engine shape, as the drawings show, they are quite small. If prop clearance from the ground is a significant concern in your application, TiPi's work on flipping the engine over will give you a few more inches. It also puts the jug "humps" underneath, which may clean up the cowl aerodynamics a bit. The direct drive approach saves the weight, cost, and some of the risk associated with a PSRU. For a motorglider, the direct drive approach makes a lot of sense (IMO). For a draggy trike, a PSRU would be best.

Last edited: Jan 25, 2020

13.
Jan 25, 2020 ### aeromomentum #### Well-Known Member Joined: Jan 28, 2014 Messages: 75 120 Location: Stuart, FL USA As a manufacturer of engines with PSRU's and of the PSRU itself I may be a little biased but there are some compelling reasons for turning the prop slower. Keep in mind that a faster aircraft has a lower L/D just like a faster turning prop. If you keep the prop the same length a direct drive prop turning 2700 rpm will require about 10% more power for the same thrust than a 2240 rpm prop on an engine with a PSRU. A two gear PSRU will have a loss of less than 1% so you net about a 9% reduction in required power for the same thrust with the same diameter prop. Of course a larger diameter prop will also provide more thrust for the same power and a PSRU will allow this with less loss due to the increased prop velocity. Direct drive also has it's own risks. There can always be a resonance between the prop and the engine and this can break props, cranks, etc. 14. Jan 25, 2020 ### LHH #### Active Member Joined: Aug 1, 2013 Messages: 41 6 Location: IL Spinner diameter is already 17 inches. Had not considered flipping the engine. Lots of good points to consider. Last edited: Jan 25, 2020 15. Jan 25, 2020 ### Hot Wings #### Well-Known MemberHBA Supporter Joined: Nov 14, 2009 Messages: 6,639 2,528 Location: Rocky Mountains Unless you really 'need' an inverted engine I wouldn't. There are other industrial engines besides the B+S 810 that could be adapted. I decided on the 810 mostly because it is easier to invert than the horizontal shaft engines. If I was to use an upright engine I'd have to seriously look at some of the other options. Probably a lot less work to adapt. 16. Jan 25, 2020 Joined: Jan 24, 2011 Messages: 4,409
https://sassafras13.github.io/PythonStyleGuideFunc/
# Google's Python Style Guide Part 1 - Functionality

I just recently learned that Google published a style guide for how their developers write clean code in Python [1]. I wanted to use a couple of posts to outline some of the things I learned from that style guide. I will write this post to describe some of the functional recommendations given in the style guide, and a follow-up post will detail some of the specific style requirements Google listed. Let's get started!

## Google's Python Style Guide - Functional Recommendations

The first half of Google's style guide focuses on best practices for using different functionalities within Python. I should note that there are more recommendations than I am giving here - I have selected the items that were relevant to aspects of Python that I already use or want to use more frequently. I would highly recommend glancing through the style guide yourself if you want a more complete picture of Google's recommendations. But for now, here is what I thought was important [1]:

**Use a code linter.** A code linter is a tool that looks at code and identifies possible errors, bugs or sections that are poorly written and could contain syntax errors [2]. Google recommends using a Python library like pylint to check your code before deploying it.

**Use import statements for packages and modules but not individual classes or functions.** I think this recommendation helps with namespace management - if you are only importing complete packages/modules, then we will always be able to trace specific classes or functions back to those libraries (i.e. we know that `module.class` is a class that belongs to `module`). This practice also helps prevent collisions (i.e. having multiple functions with the same name).

**Import modules by their full pathname location.** This is important for helping the code find modules correctly. For example, Google recommends writing `from doctor.who import jodie` rather than simply `import jodie`.

**Use exceptions carefully.**
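As a minimal sketch of this advice (the filename and function here are invented for illustration), a specific built-in exception class is caught rather than a bare `except:`, and a `finally` block handles cleanup:

```python
def read_config(path):
    """Read a config file, handling only the error we expect."""
    f = None
    try:
        f = open(path)
        return f.read()
    except FileNotFoundError:  # a specific built-in exception, not a bare "except:"
        return ""
    finally:
        # "finally" runs whether or not an exception occurred
        if f is not None:
            f.close()

print(read_config("missing.conf"))  # no such file, so this prints an empty string
```

Note that the `try` block holds only the code that can actually raise the expected error.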
Usually exceptions are only for breaking out of the normal flow for specific errors and special cases. Google recommends using built-in exception classes (like KeyError, ValueError, etc. [3]) whenever possible. You should avoid using a bare `except:` statement on its own because it will catch too many situations that you probably don't want to have to handle. On a similar note, try to avoid having too much code in a try-except block, and make sure to always end with `finally` so that essential actions are always completed (like closing files).

**Do not use global variables.** Global variables are variables whose scope includes an entire module or class. Python does not have a specific datatype for constants like other languages do, but you can still create them stylistically [4], for example by writing `_MY_CONSTANT = 13`. The underscore at the beginning of the variable name indicates that the variable is internal to the module or class that is using it.

**It is okay to use comprehensions and generators in simple cases, but avoid using them for more complicated situations.** Comprehensions*1 and generators*2 are really useful because they do not require for loops, and they are elegant and easy to read. They also do not require much memory. However, complicated constructions of comprehensions/generators can make your code more opaque. Generally, Google recommends using comprehensions/generators as long as they fit on one line or the individual components can be separated onto individual lines.

**Use default iterators and operators for data types that support them.** Some data types, like lists and dictionaries, support specific iterator keywords like `in` and `not in`. It's acceptable to use these iterators because they are simple, readable and efficient, but you want to make sure that you do not change a container while you are iterating over it (since lists and dictionaries are mutable objects in Python).

**Lambda functions are acceptable as one-liners.**
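For instance, a short lambda used as a sort key is a typical one-liner that stays readable (the word list here is made up):

```python
# Sort words by length; the lambda fits on one line and needs no name.
words = ["kiwi", "fig", "banana"]
print(sorted(words, key=lambda w: len(w)))  # ['fig', 'kiwi', 'banana']
```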
Lambda functions define brief functions in an expression; for example, `(lambda x: x + 1)(2)` evaluates to `2 + 1 = 3` [7]. They are convenient but hard to read and debug. They also are not explicitly named, which can be a problem. Google recommends that if your lambda function is longer than 60 to 80 characters, then you should just write a proper function instead.

**Default argument values can be useful in function definitions.** You can assign default values to specific arguments of a function. You always want to place these parameters last in the list of arguments for a given function. This is a good practice when the normal use case for a function works with the default values, but you want to give the user the ability to override those values in special circumstances. One downside to this practice is that the defaults are only evaluated once, when the module containing the function is loaded. If the argument's value is mutable, and it gets modified during runtime, then the default value for the function has been modified for all future uses of that function!*3 So the best practice to avoid this issue is to make sure that you do not use mutable objects as default values for function arguments.

**Use implicit false whenever possible.** All empty values are considered false in a Boolean context, which can really help with improving your code's readability. For example, the implicit version `if foo:` is cleaner than the explicit version `if foo != []:`. Not only is the implicit approach cleaner, it is also less error-prone. The only exception is when you are checking integers, where you want to be explicit, i.e. `if foo == 0:`. In this case you want to be clear about whether you want to know if the integer variable's value is zero, or if it is simply empty (in which case you would use `if foo is None`). Also, remember that empty sequences are falsy - you don't need to check whether they're empty using `len(sequence)`.

**Annotate code with type hints.**
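To sketch what such annotations look like in a runnable form (this function is invented for illustration; note that the built-in `list[str]` syntax needs Python 3.9+, with `typing.List` available on older versions):

```python
def repeat_item(item: str, times: int) -> list[str]:
    """Return a list containing `item` repeated `times` times."""
    return [item] * times

print(repeat_item("ab", 3))  # ['ab', 'ab', 'ab']
```

The hints themselves are not enforced at runtime; a static checker such as mypy is one common way to verify them.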
This is especially good practice for function definitions. It helps with the readability and maintainability of your code. It often looks like this: `def myFunc(a: int) -> list[int]: ...`

That is all for today's post on functional recommendations in Google's Python style guide. Next time, I will write more specifically about the stylistic recommendations that Google provides for coding in Python. Thanks for reading!

## Footnotes

*1 Comprehensions are a tool in Python that let you iterate over certain data types like lists, sets, or generators. They can make your code more elegant, and allow you to generate iterables in one line of code. The syntax for a comprehension looks like this [5]: `new_list = [expression for member in iterable]`

*2 Generator functions are useful for iterating over really large datasets. They are called "lazy iterators" because they do not store their internal state in memory. They also use the `yield` statement instead of the `return` statement. This means that they can send a value back to the code that is calling the generator function, but they don't have to exit after they have returned, as a regular function would. This allows generator functions to remember their state. In this way generators are very memory efficient but allow for iteration similar to comprehensions [6].

*3 This happened to a classmate of mine once, and he said it almost ruined a paper submission for him. This is covered in detail in [8].

## References

[1] "Google Python Style Guide." https://google.github.io/styleguide/pyguide.html

[2] Mallett, E. E. "Code Lint - What is it? What can help?" DCCoder. 20 Aug 2018. https://dccoder.com/2018/08/20/code-lint/ Visited 28 Jun 2021.

[3] "Built-in Exceptions." The Python Standard Library. https://docs.python.org/3/library/exceptions.html Visited 11 Jul 2021.

[4] Hsu, J. "Does Python Have Constants?" Better Programming on Medium. 7 Jan 2020. https://betterprogramming.pub/does-python-have-constants-3b8249dc8b7b Visited 11 Jul 2021.

[5] Timmins, J. "When to Use a List Comprehension in Python." Real Python.
https://realpython.com/list-comprehension-python/ Visited 11 Jul 2021.

[6] Stratis, K. "How to Use Generators and yield in Python." Real Python. https://realpython.com/introduction-to-python-generators/ Visited 11 Jul 2021.

[7] Burgaud, A. "How to Use Python Lambda Functions." Real Python. https://realpython.com/python-lambda/ Visited 11 Jul 2021.

[8] Reitz, K. "Common Gotchas." The Hitchhiker's Guide to Python. https://docs.python-guide.org/writing/gotchas/ Visited 11 Jul 2021.

Written on July 11, 2021
https://www.helpteaching.com/questions/Area/Grade_3
You can create printable tests and worksheets from these Grade 3 Area questions!

- Grade 3 Area (CCSS: 3.MD.C.7, 3.MD.C.7b). A swimming pool is 8 meters wide and 9 meters long. What is the area of the pool in square meters? (1) 1 (2) 17 (3) 34 (4) 72
- Which shape is divided into the same number of equal parts as the rectangle shown?
- Find the area in unit squares. (1) 8 (2) 12 (3) 16 (4) 20
- Grade 3 Area (CCSS: 3.MD.C.5, 3.MD.C.5b, MP7). Which THREE figures each have an area of 12 square units? Select the THREE correct answers. ($\square$ = 1 square unit)
- What is the area of the figure in unit squares? (1) 12 (2) 13 (3) 14 (4) 15
- Grade 3 Area (CCSS: 3.MD.C.7, 3.MD.C.7b). Which equation can be used to calculate the area (A) of a rectangle that is 5 feet wide and 7 feet long? (1) A = 5 x 7 (2) A = 5 + 7 (3) A = (2 x 5) + (2 x 7) (4) A = 5 + 7 + 5 + 7
- What is the area in unit squares? (1) 6 (2) 7 (3) 8 (4) 9
- Grade 3 Area (CCSS: 3.NF.A.1, 3.G.A.2). What fraction shows the area of the shape that is shaded? (1) $1/2$ (2) $1/3$ (3) $1/4$ (4) $1/6$
- Which has a larger area than the figure shown?
- Grade 3 Area (CCSS: 3.MD.C.5, 3.MD.C.5b). Which figure has an area of 16 square units? ($\square$ = 1 square unit)
- Grade 3 Area (CCSS: 3.MD.C.5, 3.MD.C.5b). Which figure has an area of 12 square units? ($\square$ = 1 square unit)
- Parker makes the pattern shown with square tiles. He wants to rearrange his tiles and create two patterns so that he has no tiles left over.
Which two patterns could Parker make?
- Grade 3 Area (CCSS: 3.MD.C.7, 3.MD.C.7b). Sam is setting up 3 tables for a bake sale. If each table has a length of 2 meters and a width of 2 meters, what is the total area of all three tables? (1) 5 square meters (2) 12 square meters (3) 16 square meters (4) 20 square meters
- Each part of the circle equals what fraction of the area of the whole circle? (1) $1/2$ (2) $1/3$ (3) $1/4$ (4) $1/6$
- Grade 3 Area (CCSS: 3.MD.C.7, 3.MD.C.7c). Robert needs to determine the area of his garden. The garden is 12 feet long and 9 feet wide. Which statement explains how Robert can correctly calculate the area of his garden? (1) Find the sum of 12 and 9. (2) Multiply 1 by 2, multiply 1 by 9, and find the sum of the products. (3) Multiply 1 by 9, multiply 2 by 9, and find the sum of the products. (4) Multiply 10 by 9, multiply 2 by 9, and find the sum of the products.
- Grade 3 Area (CCSS: 3.MD.C.7, 3.MD.C.7b). Sarah wants to find the area of a rectangle. She knows the length of the rectangle is 9 inches. What else does she need to know? (1) Nothing. Sarah has enough information. (2) Sarah needs to know if the rectangle is congruent. (3) Sarah needs to know the width of the rectangle. (4) Sarah needs to know who drew the rectangle.
- Which has an area that is the same as the area of the figure?
- Each part of the rectangle equals what fraction of the area of the whole rectangle? (1) $1/2$ (2) $1/3$ (3) $1/4$ (4) $1/6$
- Grade 3 Area (CCSS: 3.MD.C.5, 3.MD.C.5a). How many unit squares? (1) 11 (2) 12 (3) 13 (4) 14
- Grade 3 Area (CCSS: 3.NF.A.1, 3.G.A.2). What fraction shows the area of the shape that is shaded? (1) $1/2$ (2) $1/3$ (3) $1/4$ (4) $1/6$
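As an aside (not part of the worksheet itself): the Robert question above hinges on the distributive property, splitting 12 into 10 + 2 so the area can be computed with easier products. A quick arithmetic check:

```python
# Area of a 12 ft x 9 ft garden, computed directly and by the distributive split.
direct = 12 * 9
split = 10 * 9 + 2 * 9  # (10 + 2) x 9 = 10 x 9 + 2 x 9
print(direct, split)  # 108 108
```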
https://bookdown.org/ajkurz/recoding_Hayes_2018/truths-and-myths-about-mean-centering.html
## 9.1 Truths and myths about mean-centering

Here we load a couple necessary packages, load the data, and take a glimpse().

```r
library(tidyverse)

glimpse(glbwarm)
```

```
## Observations: 815
## Variables: 7
## $ govact   <dbl> 3.6, 5.0, 6.6, 1.0, 4.0, 7.0, 6.8, 5.6, 6.0, 2.6, 1.4, 5.6, 7.0, 3.8, 3.4, 4.2, 1.0...
## $ posemot  <dbl> 3.67, 2.00, 2.33, 5.00, 2.33, 1.00, 2.33, 4.00, 5.00, 5.00, 1.00, 4.00, 1.00, 5.67,...
## $ negemot  <dbl> 4.67, 2.33, 3.67, 5.00, 1.67, 6.00, 4.00, 5.33, 6.00, 2.00, 1.00, 4.00, 5.00, 4.67,...
## $ ideology <int> 6, 2, 1, 1, 4, 3, 4, 5, 4, 7, 6, 4, 2, 4, 5, 2, 6, 4, 2, 4, 4, 2, 6, 4, 4, 3, 4, 5,...
## $ age      <int> 61, 55, 85, 59, 22, 34, 47, 65, 50, 60, 71, 60, 71, 59, 32, 36, 69, 70, 41, 48, 38,...
## $ sex      <int> 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0,...
## $ partyid  <int> 2, 1, 1, 1, 1, 2, 1, 1, 2, 3, 2, 1, 1, 1, 1, 1, 2, 3, 1, 3, 2, 1, 3, 2, 1, 1, 1, 3,...
```

Before we fit our models, we'll go ahead and make our mean-centered predictors, negemot_c and age_c.

```r
glbwarm <- glbwarm %>% 
  mutate(negemot_c = negemot - mean(negemot),
         age_c     = age     - mean(age))
```

Now we're ready to fit Models 1 and 2. But before we do, it's worth repeating part of the text:

> Mean-centering has been recommended in a few highly regarded books on regression analysis (e.g., Aiken & West, 1991; Cohen et al., 2003), and several explanations have been offered for why mean-centering should be undertaken prior to computation of the product and model estimation. The explanation that seems to have resulted in the most misunderstanding is that $X$ and $W$ are likely to be highly correlated with $XW$ and this will produce estimation problems caused by collinearity and result in poor or "strange" estimates of regression coefficients, large standard errors, and reduced power of the statistical test of the interaction. But this is, in large part, simply a myth. (p. 304)

Let's load brms.
```r
library(brms)
```

As we'll see in just a bit, there are some important reasons for Bayesians using HMC to mean center that wouldn't pop up within the OLS paradigm. First let's fit model1 and model2.

```r
model1 <- brm(data = glbwarm, family = gaussian,
              govact ~ 1 + negemot + age + negemot:age,
              chains = 4, cores = 4)

model2 <- update(model1, newdata = glbwarm,
                 govact ~ 1 + negemot_c + age_c + negemot_c:age_c,
                 chains = 4, cores = 4)
```

As with Hayes's OLS models, our HMC models yield the same Bayesian $R^2$ distributions, within simulation error.

```r
bayes_R2(model1) %>% round(digits = 3)
```

```
##    Estimate Est.Error  Q2.5 Q97.5
## R2    0.354     0.021 0.311 0.394
```

```r
bayes_R2(model2) %>% round(digits = 3)
```

```
##    Estimate Est.Error Q2.5 Q97.5
## R2    0.354     0.021 0.31 0.395
```

Our model summaries also correspond nicely with those in Table 9.1.

```r
print(model1, digits = 3)
```

```
##  Family: gaussian 
##   Links: mu = identity; sigma = identity 
## Formula: govact ~ 1 + negemot + age + negemot:age 
##    Data: glbwarm (Number of observations: 815) 
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
##          total post-warmup samples = 4000
## 
## Population-Level Effects: 
##             Estimate Est.Error l-95% CI u-95% CI Eff.Sample  Rhat
## Intercept      4.343     0.333    3.705    5.008       1682 1.000
## negemot        0.145     0.086   -0.026    0.312       1598 1.000
## age           -0.031     0.006   -0.043   -0.019       1663 1.000
## negemot:age    0.007     0.002    0.004    0.010       1595 1.000
## 
## Family Specific Parameters: 
##       Estimate Est.Error l-95% CI u-95% CI Eff.Sample  Rhat
## sigma    1.097     0.028    1.045    1.155       2553 1.001
## 
## Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample 
## is a crude measure of effective sample size, and Rhat is the potential 
## scale reduction factor on split chains (at convergence, Rhat = 1).
```
```r
print(model2, digits = 3)
```

```
##  Family: gaussian 
##   Links: mu = identity; sigma = identity 
## Formula: govact ~ negemot_c + age_c + negemot_c:age_c 
##    Data: glbwarm (Number of observations: 815) 
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
##          total post-warmup samples = 4000
## 
## Population-Level Effects: 
##                 Estimate Est.Error l-95% CI u-95% CI Eff.Sample  Rhat
## Intercept          4.597     0.038    4.525    4.674       3777 1.000
## negemot_c          0.501     0.025    0.452    0.551       3322 1.000
## age_c             -0.005     0.002   -0.010   -0.001       4000 0.999
## negemot_c:age_c    0.007     0.002    0.004    0.010       4000 0.999
## 
## Family Specific Parameters: 
##       Estimate Est.Error l-95% CI u-95% CI Eff.Sample  Rhat
## sigma    1.097     0.028    1.044    1.154       3563 1.000
## 
## Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample 
## is a crude measure of effective sample size, and Rhat is the potential 
## scale reduction factor on split chains (at convergence, Rhat = 1).
```

However, notice the 'Eff.Sample' columns. The values for model2 were substantially larger than those for model1. 'Eff.Sample' is Bürkner's term for the number of effective samples (a.k.a. the effective sample size). Recall we've been using brms defaults, which results in 4 HMC chains, each of which contains 2000 draws (iterations), the first 1000 of which are warmup values. After we discard the warmup values, that leaves 1000 draws from each chain--4000 total. As it turns out, Markov chains, and thus HMC chains, are typically autocorrelated, which means that each draw is partially dependent on the previous draw. Ideally, the autocorrelations are near zero. That's often not the case. The bayesplot package offers a variety of diagnostic plots. Here we'll use the mcmc_acf() function to make autocorrelation plots for all model parameters. Note that when we add add_chain = T to brms::posterior_samples(), we add an index to the data that allows us to keep track of which iteration comes from which chain.
That index will come in handy for our mcmc_acf() plots. But before we get there, we'll be using an xkcd-inspired theme with help from the xkcd package for our plots in this chapter.

```r
# install.packages("xkcd", dependencies = T)
library(xkcd)
```

If you haven't used the xkcd package before, you might also need to take a few extra steps outlined here, part of which requires help from the extrafont package.

```r
library(extrafont)

download.file("http://simonsoftware.se/other/xkcd.ttf",
              dest = "xkcd.ttf", mode = "wb")
system("mkdir ~/.fonts")
system("cp xkcd.ttf ~/.fonts")
# This line of code returned an error message
# font_import(pattern = "[X/x]kcd", prompt = FALSE)
# This line from (https://stackoverflow.com/questions/49221040/error-in-font-import-while-installing-xkcd-font) fixed the problem
font_import(path = "~/.fonts", pattern = "[X/x]kcd", prompt = FALSE)
fonts()
fonttable()
if (.Platform$OS.type != "unix") {
  ## Register fonts for Windows bitmap output
  loadfonts(device = "win")
} else {
  loadfonts()
}
```

After installing, I still experienced error messages, which were alleviated after I followed these steps outlined by Remi.b. You may or may not need them. But anyways, here are our mcmc_acf() plots.

```r
library(bayesplot)

post1 <- posterior_samples(model1, add_chain = T)

mcmc_acf(post1, 
         pars = c("b_Intercept", "b_negemot", "b_age", "b_negemot:age", "sigma"),
         lags = 4) +
  theme_xkcd()

post2 <- posterior_samples(model2, add_chain = T)

mcmc_acf(post2, 
         pars = c("b_Intercept", "b_negemot_c", "b_age_c", "b_negemot_c:age_c", "sigma"),
         lags = 4) +
  theme_xkcd()
```

As it turns out, theme_xkcd() can't handle special characters like "_", so it returns rectangles instead. So it goes...

But again, high autocorrelations in the HMC chains have consequences for the effective sample size. In the Visual MCMC diagnostics using the bayesplot package vignette, Gabry wrote:

> The effective sample size is an estimate of the number of independent draws from the posterior distribution of the estimand of interest.
> Because the draws within a Markov chain are not independent if there is autocorrelation, the effective sample size, $n_{eff}$, will be smaller than the total sample size, $N$. The larger the ratio of $n_{eff}$ to $N$ the better.

The 'Eff.Sample' values were all close to 4000 with model2 and the autocorrelations were very low, too. The reverse was true for model1. The upshot is that even though we have 4000 samples for each parameter, those samples don't necessarily give us the same quality of information fully independent samples would. 'Eff.Sample' helps you determine how concerned you should be. And, as it turns out, things like centering can help increase a model's 'Eff.Sample' values.

Wading in further, we can use the neff_ratio() function to collect the $n_{eff}$ to $N$ ratio for each model parameter and then use mcmc_neff() to make a visual diagnostic. Here we do so for model1 and model2.

```r
ratios_model1 <- 
  neff_ratio(model1,
             pars = c("b_Intercept", "b_negemot", "b_age", "b_negemot:age", "sigma"))

ratios_model2 <- 
  neff_ratio(model2, 
             pars = c("b_Intercept", "b_negemot_c", "b_age_c", "b_negemot_c:age_c", "sigma"))

mcmc_neff(ratios_model1) + 
  yaxis_text(hjust = 0) +
  theme_xkcd()

mcmc_neff(ratios_model2) + 
  yaxis_text(hjust = 0) +
  theme_xkcd()
```

Although none of the $n_{eff}$ to $N$ ratios were in the shockingly low range for either model, they were substantially closer to 1 for model2.

In addition to autocorrelations and $n_{eff}$ to $N$ ratios, there is also the issue that the parameters in the model can themselves be correlated. If you like a visual approach, you can use brms::pairs() to retrieve histograms for each parameter along with scatter plots showing the shape of their correlations. Here we'll use the off_diag_args argument to customize some of the plot settings.
```r
pairs(model1,
      off_diag_args = list(size = 1/10, alpha = 1/5))

pairs(model2,
      off_diag_args = list(size = 1/10, alpha = 1/5))
```

When fitting models with HMC, centering can make a difference for the parameter correlations. If you prefer a more numeric approach, vcov() will yield the variance/covariance matrix--or correlation matrix when using correlation = T--for the parameters in a model.

```r
vcov(model1, correlation = T) %>% round(digits = 2)
```

```
##             Intercept negemot   age negemot:age
## Intercept        1.00   -0.93 -0.96        0.88
## negemot         -0.93    1.00  0.89       -0.96
## age             -0.96    0.89  1.00       -0.92
## negemot:age      0.88   -0.96 -0.92        1.00
```

```r
vcov(model2, correlation = T) %>% round(digits = 2)
```

```
##                 Intercept negemot_c age_c negemot_c:age_c
## Intercept            1.00      0.02  0.03            0.05
## negemot_c            0.02      1.00  0.05           -0.09
## age_c                0.03      0.05  1.00           -0.01
## negemot_c:age_c      0.05     -0.09 -0.01            1.00
```

And so wait, what does that even mean for a parameter to correlate with another parameter? you might ask. Fair enough. Let's compute a correlation step by step. First, posterior_samples():

```r
post <- posterior_samples(model1)

head(post)
```

```
##   b_Intercept  b_negemot       b_age b_negemot:age    sigma      lp__
## 1    4.601742 0.11579358 -0.03407338   0.007279384 1.103365 -1235.889
## 2    4.374973 0.12836145 -0.03303011   0.008040433 1.092021 -1235.508
## 3    4.468470 0.09774705 -0.03503205   0.008483724 1.120604 -1235.919
## 4    4.476113 0.10907074 -0.03525339   0.008603188 1.090927 -1236.579
## 5    4.269829 0.17040481 -0.03090699   0.007015664 1.114033 -1235.271
## 6    4.690371 0.07817528 -0.03894929   0.008895996 1.136302 -1237.102
```

Now that we've put our posterior iterations into a data object, post, we can make a scatter plot of two parameters. Here we'll choose b_negemot and the interaction coefficient, b_negemot:age. Note that because the column name b_negemot:age contains a colon, we have to wrap it in backticks.

```r
post %>% 
  ggplot(aes(x = b_negemot, y = `b_negemot:age`)) +
  geom_point(size = 1/10, alpha = 1/5) +
  labs(subtitle = "Each dot is the parameter pair\nfrom a single iteration. Looking\nacross the 4,000 total posterior\niterations, it becomes clear the\ntwo parameters are highly\nnegatively correlated.") +
  theme_xkcd()
```

And indeed, the Pearson's correlation is:

```r
cor(post$b_negemot, post$`b_negemot:age`)
```

```
## [1] -0.9571633
```

And what was that part from the vcov() output, again?

```r
vcov(model1, correlation = T)["negemot", "negemot:age"]
```

```
## [1] -0.9571633
```

Boom! That's where the correlations come from.

This entire topic of HMC diagnostics can seem baffling, especially when compared to the simplicity of OLS. If this is your first introduction, you might want to watch lectures 10 and 11 from McElreath's Statistical Rethinking Fall 2017 lecture series. Accordingly, you might check out chapter 8 of his Statistical Rethinking text and my project explaining how to reproduce the analyses in that chapter in brms.

### 9.1.1 The effect of mean-centering on multicollinearity and the standard error of $b_3$.

This can be difficult to keep track of, but what we just looked at were the correlations among model parameters. These are not the same as correlations among variables. As such, those correlations are not the same as those in Table 9.2. But we can get those, too. First we'll have to do a little more data processing to get all the necessary mean-centered variables and standardized variables.

```r
glbwarm <- glbwarm %>% 
  mutate(negemot_x_age     = negemot * age,
         negemot_c_x_age_c = negemot_c * age_c,
         negemot_z         = (negemot - mean(negemot)) / sd(negemot),
         age_z             = (age     - mean(age)    ) / sd(age)) %>% 
  mutate(negemot_z_x_age_z = negemot_z * age_z)
```

And recall that to get our sweet Bayesian correlations, we use the multivariate cbind() syntax to fit an intercepts-only model. Here we do that for all three of the Table 9.2 sections.
```r
correlations1 <- 
  brm(data = glbwarm, family = gaussian,
      cbind(negemot, age, negemot_x_age) ~ 1,
      chains = 4, cores = 4)

correlations2 <- 
  brm(data = glbwarm, family = gaussian,
      cbind(negemot_c, age_c, negemot_c_x_age_c) ~ 1,
      chains = 4, cores = 4)

correlations3 <- 
  brm(data = glbwarm, family = gaussian,
      cbind(negemot_z, age_z, negemot_z_x_age_z) ~ 1,
      chains = 4, cores = 4)
```

Their summaries:

```r
print(correlations1, digits = 3)
```

```
##  Family: MV(gaussian, gaussian, gaussian) 
##   Links: mu = identity; sigma = identity
##          mu = identity; sigma = identity
##          mu = identity; sigma = identity 
## Formula: negemot ~ 1 
##          age ~ 1 
##          negemot_x_age ~ 1 
##    Data: glbwarm (Number of observations: 815) 
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
##          total post-warmup samples = 4000
## 
## Population-Level Effects: 
##                       Estimate Est.Error l-95% CI u-95% CI Eff.Sample  Rhat
## negemot_Intercept        3.558     0.054    3.454    3.664       3395 1.000
## age_Intercept           49.513     0.573   48.377   50.620       3429 1.000
## negemotxage_Intercept  174.771     3.434  167.802  181.368       2955 1.000
## 
## Family Specific Parameters: 
##                   Estimate Est.Error l-95% CI u-95% CI Eff.Sample  Rhat
## sigma_negemot        1.529     0.038    1.458    1.606       2564 1.001
## sigma_age           16.359     0.397   15.605   17.161       3319 1.000
## sigma_negemotxage   97.422     2.363   92.860  102.135       2674 1.000
## 
## Residual Correlations: 
##                             Estimate Est.Error l-95% CI u-95% CI Eff.Sample  Rhat
## rescor(negemot,age)           -0.059     0.035   -0.128    0.010       3520 1.000
## rescor(negemot,negemotxage)    0.765     0.015    0.735    0.793       2526 1.001
## rescor(age,negemotxage)        0.548     0.024    0.499    0.594       4000 1.000
## 
## Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample 
## is a crude measure of effective sample size, and Rhat is the potential 
## scale reduction factor on split chains (at convergence, Rhat = 1).
```
```r
print(correlations2, digits = 3)
##  Family: MV(gaussian, gaussian, gaussian) 
##   Links: mu = identity; sigma = identity
##          mu = identity; sigma = identity
##          mu = identity; sigma = identity 
## Formula: negemot_c ~ 1 
##          age_c ~ 1 
##          negemot_c_x_age_c ~ 1 
##    Data: glbwarm (Number of observations: 815) 
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
##          total post-warmup samples = 4000
## 
## Population-Level Effects: 
##                         Estimate Est.Error l-95% CI u-95% CI Eff.Sample  Rhat
## negemotc_Intercept         0.000     0.053   -0.102    0.106       4000 0.999
## agec_Intercept             0.005     0.566   -1.095    1.110       4000 0.999
## negemotcxagec_Intercept   -1.422     0.859   -3.115    0.251       4000 1.000
## 
## Family Specific Parameters: 
##                     Estimate Est.Error l-95% CI u-95% CI Eff.Sample  Rhat
## sigma_negemotc         1.533     0.039    1.460    1.611       4000 0.999
## sigma_agec            16.372     0.410   15.606   17.180       4000 0.999
## sigma_negemotcxagec   24.247     0.620   23.072   25.476       4000 0.999
## 
## Residual Correlations: 
##                                 Estimate Est.Error l-95% CI u-95% CI Eff.Sample  Rhat
## rescor(negemotc,agec)             -0.056     0.035   -0.124    0.014       4000 1.000
## rescor(negemotc,negemotcxagec)     0.092     0.034    0.023    0.159       4000 1.000
## rescor(agec,negemotcxagec)        -0.015     0.036   -0.084    0.057       4000 1.000
## 
## Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample 
## is a crude measure of effective sample size, and Rhat is the potential 
## scale reduction factor on split chains (at convergence, Rhat = 1).
```
```r
print(correlations3, digits = 3)
##  Family: MV(gaussian, gaussian, gaussian) 
##   Links: mu = identity; sigma = identity
##          mu = identity; sigma = identity
##          mu = identity; sigma = identity 
## Formula: negemot_z ~ 1 
##          age_z ~ 1 
##          negemot_z_x_age_z ~ 1 
##    Data: glbwarm (Number of observations: 815) 
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
##          total post-warmup samples = 4000
## 
## Population-Level Effects: 
##                         Estimate Est.Error l-95% CI u-95% CI Eff.Sample  Rhat
## negemotz_Intercept        -0.000     0.035   -0.069    0.069       4000 0.999
## agez_Intercept            -0.000     0.035   -0.068    0.066       4000 0.999
## negemotzxagez_Intercept   -0.057     0.035   -0.124    0.012       4000 0.999
## 
## Family Specific Parameters: 
##                     Estimate Est.Error l-95% CI u-95% CI Eff.Sample  Rhat
## sigma_negemotz         1.003     0.025    0.957    1.053       4000 1.000
## sigma_agez             1.002     0.025    0.956    1.051       4000 1.000
## sigma_negemotzxagez    0.972     0.024    0.927    1.022       4000 1.000
## 
## Residual Correlations: 
##                                 Estimate Est.Error l-95% CI u-95% CI Eff.Sample  Rhat
## rescor(negemotz,agez)             -0.056     0.035   -0.124    0.013       4000 1.000
## rescor(negemotz,negemotzxagez)     0.091     0.035    0.024    0.161       4000 1.000
## rescor(agez,negemotzxagez)        -0.014     0.035   -0.083    0.053       4000 1.000
## 
## Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample 
## is a crude measure of effective sample size, and Rhat is the potential 
## scale reduction factor on split chains (at convergence, Rhat = 1).
```

A more condensed way to get that information might be with the `VarCorr()` function. Just make sure to tack `$residual__$cor` onto the end.
```r
VarCorr(correlations1)$residual__$cor %>% 
  round(digits = 3)
## , , negemot
## 
##             Estimate Est.Error   Q2.5 Q97.5
## negemot        1.000     0.000  1.000 1.000
## age           -0.059     0.035 -0.128 0.010
## negemotxage    0.765     0.015  0.735 0.793
## 
## , , age
## 
##             Estimate Est.Error   Q2.5 Q97.5
## negemot       -0.059     0.035 -0.128 0.010
## age            1.000     0.000  1.000 1.000
## negemotxage    0.548     0.024  0.499 0.594
## 
## , , negemotxage
## 
##             Estimate Est.Error  Q2.5 Q97.5
## negemot        0.765     0.015 0.735 0.793
## age            0.548     0.024 0.499 0.594
## negemotxage    1.000     0.000 1.000 1.000
```

For the sake of space, I’ll let you check that out for `correlations2` and `correlations3`. If you’re tricky with your `VarCorr()` indexing, you can also get the model-implied variances.

```r
VarCorr(correlations1)$residual__$cov[1, , "negemot"] %>% round(digits = 3)
##  Estimate Est.Error      Q2.5     Q97.5 
##     2.341     0.116     2.126     2.578
```

```r
VarCorr(correlations1)$residual__$cov[2, , "age"] %>% round(digits = 3)
##  Estimate Est.Error      Q2.5     Q97.5 
##   267.775    13.017   243.525   294.496
```

```r
VarCorr(correlations1)$residual__$cov[3, , "negemotxage"] %>% round(digits = 3)
##  Estimate Est.Error      Q2.5     Q97.5 
##  9496.592   460.872  8622.952 10431.460
```

And if you’re like totally lost with all this indexing, you might execute `VarCorr(correlations1) %>% str()` and spend a little time looking at what `VarCorr()` returns.

On page 309, Hayes explained why the OLS variance for $$b_{3}$$ is unaffected by mean centering. The story was similar for our HMC model, too:

```r
fixef(model1)["negemot:age", "Est.Error"]
## [1] 0.001609585

fixef(model2)["negemot_c:age_c", "Est.Error"]
## [1] 0.001554206
```

For more details, you might also see the 28.11. Standardizing Predictors and Outputs subsection of the Stan Modeling Language User’s Guide and Reference Manual, 2.17.0. Stan, of course, is the computational engine underneath our brms hood.

### 9.1.2 The effect of mean-centering on $$b_{1}$$, $$b_{2}$$, and their ~~standard errors~~ posterior $$SD$$s.
If you only care about posterior means, you can reproduce the results at the bottom of page 310 like this:

```r
fixef(model1)["negemot", 1] + 
  fixef(model1)["negemot:age", 1]*mean(glbwarm$age)
## [1] 0.5009198
```

But we’re proper Bayesians and like a summary of the spread in the posterior. So we’ll evoke `posterior_samples()` and the other usual steps.

```r
post <- posterior_samples(model1)

post %>% 
  transmute(our_conditional_effect_given_W_bar = 
              b_negemot + `b_negemot:age`*mean(glbwarm$age)) %>% 
  summarize(mean = mean(our_conditional_effect_given_W_bar),
            sd   = sd(our_conditional_effect_given_W_bar)) %>% 
  round(digits = 3)
##    mean    sd
## 1 0.501 0.025
```

And note how the standard error Hayes computed at the top of page 311 corresponds nicely with the posterior $$SD$$ we just computed. Hayes employed a fancy formula; we just used `sd()`.

### 9.1.3 The centering option in PROCESS.

I’m not aware of a similar function in brms. You’ll have to use your data wrangling skills.
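The mean-centering story doesn’t depend on brms or the glbwarm data at all. As a quick sketch (in Python rather than the R used throughout, with simulated stand-ins for negemot and age), forming the product term after mean-centering sharply reduces its correlation with the predictor:

```python
import random
import math

def corr(xs, ys):
    # Plain Pearson correlation, standard library only.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

random.seed(1)
n = 815  # same sample size as glbwarm
# Simulated stand-ins: negemot-like values in [1, 6], age-like values in [17, 90].
negemot = [random.uniform(1, 6) for _ in range(n)]
age     = [random.uniform(17, 90) for _ in range(n)]

mean_n = sum(negemot) / n
mean_a = sum(age) / n

raw_product      = [x * w for x, w in zip(negemot, age)]
centered_product = [(x - mean_n) * (w - mean_a) for x, w in zip(negemot, age)]

r_raw      = corr(negemot, raw_product)       # large, like the 0.765 above
r_centered = corr(negemot, centered_product)  # near zero, like the 0.092 above
print(round(r_raw, 3), round(r_centered, 3))
```

The exact values depend on the seed, but the pattern always matches the `rescor()` rows above: the raw product is strongly correlated with its parent variable, while the centered product is nearly orthogonal to it.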
https://quant.stackexchange.com/questions/15489/derivation-of-the-formulas-for-the-values-of-european-asset-or-nothing-and-cash?noredirect=1
# Derivation of the formulas for the values of European asset-or-nothing and cash-or-nothing options

The asset-or-nothing European option pays at t = T the value of the stock when at time T that value exceeds or is equal to the exercise price E, and nothing if the value of the stock is below E. So, in mathematical terms: $$V(S,T) = \left\{ \begin{array}{lr} S & \text{if}\quad S \ge E,\\ 0 & \text{if}\quad S < E. \end{array} \right.$$ The cash-or-nothing European option pays at t = T a fixed value B when at time T the value of the stock exceeds or is equal to the exercise price E, and nothing if the value of the stock is below E. So, in mathematical terms: $$V(S,T) = \left\{ \begin{array}{lr} B & \text{if}\quad S \ge E,\\ 0 & \text{if}\quad S < E. \end{array} \right.$$ We know that the formulas for these options are the following: \begin{align} &\text{Cash-or-nothing call:}\quad c_{cn}=Be^{-rT}N(d_2),\\ &\text{Cash-or-nothing put:}\quad p_{cn}=Be^{-rT}N(-d_2),\\ &\text{Asset-or-nothing call:}\quad c_{an}=Se^{-qT}N(d_1),\\ &\text{Asset-or-nothing put:}\quad p_{an}=Se^{-qT}N(-d_1).\\ \end{align} where $$d_1=\dfrac{\ln(S/E)+(r-q+\sigma^2/2)(T-t)}{\sigma\sqrt{T-t}}$$ and $$d_2=d_1-\sigma\sqrt{T-t}.$$ We also know that we are supposed to follow the derivation of Black-Scholes in order to derive these formulas, but we are having trouble understanding how it differs from the derivation of Black-Scholes itself. You can derive these formulae by tweaking the Black-Scholes derivation. If you are using the PDE method, you will use different boundary conditions. If you are using integration over the risk-neutral probability, you will use a different payoff function but the same risk-neutral density. Alternatively, you can observe that these payoffs are combinations of regular puts and calls. For example, the cash-or-nothing call is the limit of a [E, E+dE] call spread as dE tends to zero, so you can obtain it by differentiating the regular Black-Scholes call price by E. 
Then, the asset-or-nothing call = the regular call option + the cash-or-nothing call (with cash amount B = E), so you can derive that one as well. The value of a cash-or-nothing option is just the discounted expected payoff of the option. So the value of such a call should be $e^{-r (T - t)} N \mathbb{P} \left\{ S_T > K \right\}$, where $\mathbb{P} \left\{ S_T > K \right\} = \mathcal{N} \left( d_2 \right)$, and $N$ is the cash agreed to be paid. The asset-or-nothing is a bit more complicated since it is $e^{-r (T - t)} \mathbb{E} \left[ \left. S_T \right| S_T > K \right]$. The last term is the expected value of the stock price given that $S_T > K$. So you would need to use the lognormal stock price, integrate it against the pdf of the standard normal, and "complete the square". You would end up with $e^{(r - q) (T - t)} S_0 \mathcal{N} \left( d_1 \right)$ for that, and the final formula would be $e^{-q (T - t)} S_0 \mathcal{N} \left( d_1 \right)$. • We already know the formulas (stated in the question), the OP was interested in how they were derived. In fact, it's the same idea as Black-Scholes. – SmallChess Nov 11 '15 at 7:07 • About notations: the asset-or-nothing call should be $$e^{-r(T-t)}\Bbb E[S_TI(S_T>K)\mid S_t]$$ instead. – Vim Jun 9 '17 at 4:47 Write the Black-Scholes formula. For an asset-or-nothing put, set K = 0. For a cash-or-nothing put, set S = 0 and K = B, and discount with time as $e^{-rT}$. • Substituting $S_0 = 0$ or $K = 0$ is not correct since also the two terms $d_1$ and $d_2$ are functions of the strike and spot. I get what you intend to say. However, even this is in my opinion not a satisfactory answer to the question. Keep in mind that @Sertii knows the solution already and wants to know how to derive it. – LocalVolatility Dec 5 '16 at 15:17 • For both cases the $d_1$ and $d_2$ will be functions of E, not the strike price. Which can be shown during the derivation of Black-Scholes where we place the limit as E, not the strike price – user3692159 Dec 5 '16 at 15:45
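The answers above can be checked numerically. The sketch below (plain Python, standard library only; the parameter values are arbitrary) confirms that the call-spread limit, i.e. minus the strike-derivative of the vanilla Black-Scholes call, recovers the cash-or-nothing price, and that the vanilla call decomposes into the asset-or-nothing and cash-or-nothing pieces:

```python
from math import log, sqrt, exp, erf

def N(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, E, r, q, sigma, T):
    # Vanilla Black-Scholes call with continuous dividend yield q.
    d1 = (log(S / E) + (r - q + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * exp(-q * T) * N(d1) - E * exp(-r * T) * N(d2)

# Arbitrary illustrative parameters.
S, E, r, q, sigma, T, B = 100.0, 95.0, 0.03, 0.01, 0.2, 1.0, 1.0

d1 = (log(S / E) + (r - q + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)

cash_cn  = B * exp(-r * T) * N(d2)   # closed-form cash-or-nothing call
asset_an = S * exp(-q * T) * N(d1)   # closed-form asset-or-nothing call

# Cash-or-nothing as the limit of a [E, E + dE] call spread scaled by B/dE,
# i.e. -B * dC/dE by finite difference.
dE = 1e-4
spread_cn = B * (bs_call(S, E, r, q, sigma, T)
                 - bs_call(S, E + dE, r, q, sigma, T)) / dE

# Asset-or-nothing call = vanilla call + E * cash-or-nothing call (with B = 1).
decomposed_an = bs_call(S, E, r, q, sigma, T) + E * cash_cn
```

With `dE = 1e-4` the call-spread price agrees with the closed-form cash-or-nothing price to well within basis-point accuracy, and the decomposition identity holds up to floating-point rounding.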
https://castingoutnines.wordpress.com/2008/05/01/super-powers-of-2/
# (Super-)Powers of 2 Driving in to work this morning, I suddenly felt my vision go blurry to the point where I literally couldn’t see anything. Fortunately, I was able to pull off the road into the parking lot of a small office building before causing an accident. After I stopped and waited for the blurriness to subside, the first thing I saw was the mailbox for the office building, which had a street number of: 2048. Rather than wonder what the crap was wrong with my eyesight, or frantically try to decide whether to go see a doctor on the spot, instead the first thing I thought was hey: That’s $2^{11}$. Then, after making it to work with no more blurry vision attacks, I walked up to my office — the same office I have been entering and exiting since summer 2001 — and looked at the office number and saw it: room 128. Of course, I’ve never had a problem remembering my office number. But for the first time in seven years, I noticed, hey: that’s $2^{7}$. So, maybe the blurry vision attack was me suddenly gaining the superhero power of being able to recognize powers of 2 with lightning quickness. If so, I somehow don’t see the Justice League of America saving me a seat anytime soon. Filed under Geekhood, Math, Personal ### 6 responses to “(Super-)Powers of 2” 1. Corey Ouch, it sounds like you’re having an episode out of Pi. Try staring at the sun a little less. 2. Please tell me you went to see the doctor. The superhero power is useful, especially to a math guy, but the blurry vision thing will keep you from seeing them. 3. virusdoc TIA. Small stroke. Go see a doctor ASAP. If it was a TIA, the first won’t be the last. 4. virusdoc
https://eduzip.com/ask/question/in-figure-write-down-each-linear-pair-521029
Mathematics # In figure, write down each linear pair.

##### SOLUTION

As per the figure, there are 8 linear pairs of angles:

1. $\angle1$ and $\angle 2$
2. $\angle1$ and $\angle 3$
3. $\angle4$ and $\angle 2$
4. $\angle 4$ and $\angle 3$
5. $\angle5$ and $\angle 6$
6. $\angle5$ and $\angle 7$
7. $\angle8$ and $\angle 6$
8. $\angle 8$ and $\angle 7$

Subjective Medium Published on 09th 09, 2020

#### Related Questions

Q1 Subjective Medium Classify the following angle as right, straight, acute, obtuse or reflex. Asked in: Mathematics - Lines and Angles 1 Verified Answer | Published on 09th 09, 2020

Q2 Subjective Hard In the figure given alongside: if $\angle AOB=45^{o},\angle BOC=30^{o}$ and $\angle AOD=110^{o}$, find angles $COD$ and $BOD$. Asked in: Mathematics - Straight Lines 1 Verified Answer | Published on 17th 08, 2020

Q3 TRUE/FALSE Medium State whether the following statement is true or false: A line perpendicular to one of two parallel lines is perpendicular to the other. • A. False • B. True Asked in: Mathematics - Straight Lines 1 Verified Answer | Published on 17th 08, 2020

Q4 Single Correct Medium In the given figure, if $r \parallel s,p\parallel q$ and $u \parallel t$, then $c$ equals • A. $a-b$ • B. $2a+b$ • C. $b-a$ • D. $a+b$ Asked in: Mathematics - Straight Lines 1 Verified Answer | Published on 17th 08, 2020

Q5 Subjective Medium $POQ$ is a line and ray $OR$ is perpendicular to line $PQ$. $OS$ is another ray lying between rays $OP$ and $OR$. Prove that $\angle ROS=\dfrac {1}{2}(\angle QOS -\angle POS)$ Asked in: Mathematics - Lines and Angles 1 Verified Answer | Published on 09th 09, 2020
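The last related question (Q5) has a quick solution sketch, assuming the standard configuration in which ray $OS$ lies between rays $OP$ and $OR$, with $OR \perp PQ$:

```latex
% Since OR \perp PQ, \angle POR = 90^{\circ}, and since POQ is a straight
% line, the linear pair gives \angle POS + \angle QOS = 180^{\circ}.
\begin{aligned}
\angle ROS &= \angle POR - \angle POS = 90^{\circ} - \angle POS,\\
\tfrac{1}{2}\left(\angle QOS - \angle POS\right)
  &= \tfrac{1}{2}\left(180^{\circ} - \angle POS - \angle POS\right)
   = 90^{\circ} - \angle POS
   = \angle ROS.
\end{aligned}
```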
https://almostsuremath.com/2020/10/26/local-time-continuity/
# Local Time Continuity The local time of a semimartingale at a level x is a continuous increasing process, giving a measure of the amount of time that the process spends at the given level. As the definition involves stochastic integrals, it was only defined up to probability one. This can cause issues if we want to simultaneously consider local times at all levels. As x can be any real number, it can take uncountably many values and, as a union of uncountably many zero probability sets can have positive measure or, even, be unmeasurable, this is not sufficient to determine the entire local time ‘surface’ $\displaystyle (t,x)\mapsto L^x_t(\omega)$ for almost all ${\omega\in\Omega}$. This is the common issue of choosing good versions of processes. In this case, we already have a continuous version in the time index but, as yet, have not constructed a good version jointly in the time and level. This issue arose in the post on the Ito–Tanaka–Meyer formula, for which we needed to choose a version which is jointly measurable. Although that was sufficient there, joint measurability is still not enough to uniquely determine the full set of local times, up to probability one. The ideal situation is when a version exists which is jointly continuous in both time and level, in which case we should work with this choice. This is always possible for continuous local martingales. Theorem 1 Let X be a continuous local martingale. Then, the local times $\displaystyle (t,x)\mapsto L^x_t$ have a modification which is jointly continuous in x and t. Furthermore, this is almost surely ${\gamma}$-Hölder continuous w.r.t. x, for all ${\gamma < 1/2}$ and over all bounded regions for t. A proof will be given further down. Theorem 1 applies, in particular, to Brownian motion although, in this case, the continuous modification also satisfies the stronger property of joint Hölder continuity. Theorem 2 Let X be a Brownian motion with arbitrary starting value. 
Then, the jointly continuous version of the local times ${L^x_t}$ are almost surely jointly ${\gamma}$-Hölder continuous in x and t, for all ${\gamma < 1/2}$ and bounded intervals for t. Again, the proof will be given further down. In fact, theorem 2 can be used to determine the joint continuity properties for the local times of any continuous local martingale, giving an improvement over the previous result. We know that any local martingale X can be written as a time-change of a standard Brownian motion B started from ${X_0}$. Specifically, ${X_t=B_{[X]_t}}$, where ${[X]}$ is the quadratic variation. Also, local times transform in the expected way under continuous time changes. If we write ${\tilde L^x_t}$ for the local times of B which, by theorem 2, is locally Hölder continuous in x and t, then the local time of X is $\displaystyle L^x_t=\tilde L^x_{[X]_t}.$ This shows that ${L^x_t}$ has a version which is a locally ${\gamma}$-Hölder continuous function of ${[X]_t}$ and x for all ${\gamma < 1/2}$. Next, consider more general continuous semimartingales. It turns out that, now, the local times need not have a jointly continuous version. For example, if B is a Brownian motion, then ${X=\lvert B\rvert}$ is a reflected Brownian motion. Writing ${\tilde L^x_t}$ for the local times of B, then the local times of X are given by, $\displaystyle L^x_t=1_{\{x\ge0\}}\left(\tilde L^x_t+\tilde L^{-x}_t\right).$ This is jointly continuous when x is away from 0 but, as x passes through 0, then ${L^x_t}$ jumps by twice the local time of B. The best that we can hope for is that ${L^x_t}$ is jointly continuous in t and cadlag in x. 
This means that for each ${(t,x)\in{\mathbb R}_+\times{\mathbb R}}$ there exists a left-limit ${L^{x-}_t=\lim_{y\uparrow\uparrow x}L^y_t}$ such that, for sequences ${t_n\rightarrow t}$ and ${x_n\rightarrow x}$, $\displaystyle L^{x_n}_{t_n}\rightarrow\begin{cases} L^x_t,&{\rm if\ }x_n\ge x{\rm\ for\ all\ }n,\\ L^{x-}_t&{\rm if\ }x_n < x{\rm\ for\ all\ }n. \end{cases}$ Equivalently, considered as a set of continuous functions ${t\mapsto L^x_t}$, one for each x, then ${x\mapsto L^x}$ is cadlag under the topology of uniform convergence on compacts. Theorem 3 Let X be a continuous semimartingale. Then, its local times ${L^x_t}$ have a version which is jointly continuous in t and cadlag in x. Furthermore, if ${X=M+V}$ is the decomposition into a continuous local martingale and FV process V then, with probability one, $\displaystyle L^x_t-L^{x-}_t = 2\int_0^t1_{\{X=x\}}dV$ for all times t and levels x. We can further ask whether the semimartingale X needs to be continuous in order that the local times have a modification as in the theorem above. In fact, it is possible to extend to a class of non-continuous processes but, unfortunately, not to all semimartingales. This will be stated in a moment, and theorem 3 will follow from this more general result. We need to restrict to a class of semimartingales which have only a finite variation coming from the jumps, which can be expressed in a couple of different ways. Lemma 4 Let X be a semimartingale. Then, the following are equivalent, 1. ${\sum_{s\le t}\lvert\Delta X_s\rvert < \infty}$, almost surely for all times t. 2. X decomposes as the sum of a continuous local martingale and an FV process. Furthermore, in this case, X decomposes as $\displaystyle X_t = M_t + V_t + \sum_{s\le t}\Delta X_s$ (1) for a continuous local martingale M and continuous FV process V. Proof: If the first condition holds, then we can define the pure jump process ${J_t=\sum_{s\le t}\Delta X_s}$. 
So, ${X-J}$ is a continuous semimartingale and, therefore, decomposes into the sum of a continuous local martingale M and continuous FV process V. This gives decomposition (1) and, as ${V+J}$ is an FV process, also implies the second condition. Conversely, if the second condition holds, then write ${X=M+V}$ for continuous local martingale M and FV process V. As ${\Delta X=\Delta V}$, the sum ${\sum_{s\le t}\lvert\Delta X_s\rvert}$ is equal to ${\sum_{s\le t}\lvert\Delta V_s\rvert}$. As this is bounded by the variation of V, it is almost surely finite. ⬜ We will restrict to the class of processes identified by the equivalent conditions above. I am not aware of any standard terminology for referring to such semimartingales other than the following definition as used by Protter, although it is a rather unimaginative name. Definition 5 A semimartingale satisfies Hypothesis A iff the equivalent conditions of lemma 4 hold. This captures many types of processes that we would like to handle, although there are semimartingales which do not satisfy Hypothesis A. For example, it is not satisfied by Cauchy processes. Theorem 3 can now be generalised. Theorem 6 Let X be a semimartingale satisfying Hypothesis A. Then, its local times have a version which is jointly continuous in t and cadlag in x. Furthermore, if V is the process in decomposition (1) then, with probability one, the jump with respect to x is $\displaystyle L^x_t-L^{x-}_t=2\int_0^t1_{\{X=x\}}dV$ for all times t and levels x. As continuous semimartingales trivially satisfy Hypothesis A, theorem 3 is an immediate consequence of this result. #### Proof of Continuity I now give proofs of the local time continuity results above, for which the main tool will be the Kolmogorov continuity theorem. 
Other than that, the Burkholder–Davis–Gundy (BDG) inequality will play an important part in the proof that the hypotheses of Kolmogorov’s theorem are satisfied, although I will only require the simple case of the right-hand inequality for large exponents. As the first and main step, we show that certain stochastic integrals with respect to a continuous martingale have a jointly continuous modification. Lemma 7 Let X be a semimartingale decomposing as ${X=M+A}$ for a continuous martingale M and FV process A. We suppose that ${[M]_\infty}$ and the variation of A over ${{\mathbb R}_+}$ are ${L^p}$-integrable for all positive p. Then, $\displaystyle U^x_t\equiv\int_0^t 1_{\{X > x\}}dM$ (2) has a version which is jointly continuous in x and t. Furthermore, with probability one, this version is ${\gamma}$-Hölder continuous in x for all ${\gamma < 1/2}$. Proof: First note that ${U^x_t}$ is a continuous local martingale starting at zero. Although it is not essential for the proof, it simplifies things a bit to use the Ito isometry, $\displaystyle {\mathbb E}\left[(U_t^x)^2\right]={\mathbb E}\left[\int_0^t1_{\{X > x\}}d[M]\right]\le{\mathbb E}\left[[M]_\infty\right],$ showing that ${U^x_t}$ is an ${L^2}$ bounded martingale and, by martingale convergence, the limit ${U^x_\infty=\lim_{t\rightarrow\infty} U^x_t}$ exists. We will let E denote the space of continuous functions ${[0,\infty]\rightarrow{\mathbb R}}$, with the topology of uniform convergence. So, for each x, the paths ${t\mapsto U^x_t}$ can be considered as a random variable taking values in E. Using d for the supremum metric then, for any ${y > x}$ and positive ${\alpha}$, \displaystyle \begin{aligned} {\mathbb E}\left[d(U^x,U^y)^\alpha\right] &={\mathbb E}\left[\sup_{t\ge0}\left\lvert U^y_t-U^x_t\right\rvert^\alpha\right]\\ &\le C_\alpha{\mathbb E}\left[[U^y-U^x]_t^{\alpha/2}\right]. \end{aligned} (3) This used the BDG inequality, so that ${C_\alpha}$ is a fixed positive constant. 
Next, we will apply Ito’s formula to the convex function, \displaystyle \begin{aligned} &f(X)=(X\wedge y-x)_+^2+2(y-x)(X-y)_+,\\ &f^\prime(X)=2(X\wedge y - x)_+,\\ &f^{\prime\prime}(X)=2 1_{\{y\ge X > x\}}. \end{aligned} Although this is not twice continuously differentiable, Ito’s formula still applies by approximating with smooth functions, giving \displaystyle \begin{aligned} f(X_t) =&f(X_0)+\int_0^t f^\prime(X_-)dX+\frac12\int_0^t f^{\prime\prime}(X)d[M]\\ &\quad+\sum_{s\le t}(\Delta f(X_s)-f^\prime(X_{s-})\Delta X_s). \end{aligned} By convexity of ${f}$, the final summation is positive. Also, noting that $\displaystyle \frac12\int_0^t f^{\prime\prime}(X)d[M]=\int_0^t1_{\{y\ge X > x\}}d[M]=[U^y-U^x]_t,$ we obtain the inequality $\displaystyle [U^y-U^x]_t\le f(X_t)-f(X_0)-\int_0^t f^\prime(X_-)dX.$ Let V be the variation process of A. Using the fact that ${f^\prime(X)}$ is bounded by ${2(y-x)}$, this gives \displaystyle \begin{aligned} {}[U^y-U^x]_t &\le2(y-x)\left(\lvert X_t-X_0\rvert+\left\lvert\int_0^t\xi dM\right\rvert+V_t\right)\\ &\le2(y-x)\left(\lvert M_t-M_0\rvert+\left\lvert\int_0^t\xi dM\right\rvert+2V_t\right) \end{aligned} for some process ${\xi}$ bounded by 1. Raising to the power of ${\alpha/2}$ and taking expectations, $\displaystyle {\mathbb E}\left[[U^y-U^x]_t^{\alpha/2}\right]\le 6^{\alpha/2}(y-x)^{\alpha/2}{\mathbb E}\left[2C_{\alpha/2}[M]_t^{\alpha/4}+2^{\alpha/2}V^{\alpha/2}_t\right]$ Once again, the BDG inequality was used for the expectation of the first two terms in the parentheses on the right hand side. So, ${C_{\alpha/2}}$ is a positive constant. 
Combining with (3) gives, $\displaystyle {\mathbb E}\left[d(U^x,U^y)^\alpha\right] \le\tilde C_\alpha(y-x)^{\alpha/2}$ (4) for the positive constant $\displaystyle \tilde C_\alpha=C_\alpha6^{\alpha/2}{\mathbb E}\left[2C_{\alpha/2}[M]^{\alpha/4}_\infty+2^{\alpha/2}V_\infty^{\alpha/2}\right].$ Now, Kolmogorov’s continuity theorem can be applied with ${\beta=\alpha/2-1}$, so long as ${\alpha > 2}$. This guarantees the existence of a modification of ${U^x}$ which, for all ${\gamma < 1/2-1/\alpha}$, is locally ${\gamma}$-Hölder continuous. Finally, note that from the definition, ${U^x_t}$ can be chosen constant in x over the range ${x < \min_tX_t}$ and, similarly, over ${x > \max_t X_t}$. So, it is constant for x outside of the range ${[\min_tX_t,\max_tX_t]}$ and, hence, is globally ${\gamma}$-Hölder continuous. Letting ${\alpha}$ increase to infinity, this holds for all ${\gamma < 1/2}$. ⬜ Localization extends the result above to all semimartingales satisfying Hypothesis A. Lemma 8 Let X be a semimartingale satisfying Hypothesis A, and M be as in decomposition (1). Then, ${U^x_t}$, defined by (2), has a version which is jointly continuous in t and x. Furthermore, over any bounded range for t, this version is almost surely ${\gamma}$-Hölder continuous in x for all ${\gamma < 1/2}$. Proof: As it satisfies Hypothesis A, we can decompose ${X=M+A}$ for a continuous local martingale M and FV process A. Next, choose a sequence of stopping times, ${\tau_n}$, increasing to infinity and such that ${[M]^{\tau_n}}$ and the variation of the pre-stopped processes ${A^{\tau_n-}}$ are all bounded. 
For example, letting V be the variation process of A, we can take $\displaystyle \tau_n=\inf\left\{t\ge0\colon[M]_t+V_t\ge n\right\}.$ Then, we can decompose $\displaystyle X^{\tau_n-}=M^{\tau_n}+A^{\tau_n-}.$ These pre-stopped processes satisfy the conditions of lemma 7 and, hence, there exist jointly continuous versions of the processes $\displaystyle U^{n,x}_t=\int_0^t1_{\{X^{\tau_n-} > x\}}dM^{\tau_n}.$ However, if we define ${U^x_t}$ by (2) then, by optional stopping of stochastic integrals, ${U^x_t=U^{n,x}_t}$ almost surely, whenever ${\tau_n > t}$. In particular, this means that ${U^{n,x}_t=U^{m,x}_t}$ almost surely, whenever ${\tau_n > t}$ and ${\tau_m > t}$. By continuity, this holds simultaneously for all x and all ${t < \tau_m\wedge\tau_n}$, with probability one. Restricting to the event where this holds, we can therefore define the modification $\displaystyle U^x_t=U^{n,x}_t$ for all n such that ${\tau_n > t}$. For any positive time T then, almost surely, we can choose n such that ${\tau_n > T}$, in which case ${U^x_t=U^{n,x}_t}$ is ${\gamma}$-Hölder continuous in x, over ${t\le T}$. ⬜ Applying lemma 8 to the definition of the local time for a continuous local martingale immediately provides a jointly continuous modification. Proof of Theorem 1: By definition, the local times of a continuous semimartingale are given by, $\displaystyle \frac12L^x_t=(X_t-x)_+-(X_0-x)_+-\int_0^t1_{\{X > x\}}dX.$ (5) As X is a continuous local martingale, we can take ${M=X}$ in lemma 8, so that the integral above has a version which is jointly continuous in t and x, and which is ${\gamma}$-Hölder continuous in x for all ${\gamma < 1/2}$ and over all bounded intervals for t. We use this version to define the local times ${L^x_t}$. As all the terms on the right hand side of the above equality satisfy these continuity conditions, the same is true for ${L^x_t}$. ⬜ In the case of Brownian motion, the proof of lemma 7 can be extended to give joint Hölder continuity. 
Lemma 9 Let X be a standard Brownian motion with arbitrary starting value. Then,

$\displaystyle U^x_t=\int_0^t1_{\{X > x\}}dX$

has a version which is jointly continuous in x and t. Furthermore, with probability 1, this is jointly ${\gamma}$-Hölder continuous, for all ${\gamma < 1/2}$, over all finite time intervals for t.

Proof: For any ${s < t}$ and ${x\in{\mathbb R}}$, the BDG inequality gives,

\displaystyle \begin{aligned} {\mathbb E}\left[\left\lvert U^x_t-U^x_s\right\rvert^\alpha\right] &\le C_\alpha{\mathbb E}\left[\left\lvert[U^x]_t-[U^x]_s\right\rvert^{\alpha/2}\right]\\ &\le C_\alpha{\mathbb E}\left[\left(\int_s^t1_{\{X_u > x\}}du\right)^{\alpha/2}\right]\\ &\le C_\alpha(t-s)^{\alpha/2} \end{aligned}

for a positive constant ${C_\alpha}$. For any fixed time ${T\ge0}$, the stopped process ${X^T}$ satisfies the conditions of lemma 7. So, by (4), there exists a positive constant ${\tilde C}$ such that, for all ${x,y\in{\mathbb R}}$ and ${s,t\in[0,T]}$,

\displaystyle \begin{aligned} {\mathbb E}\left[\lvert U^x_t-U^y_s\rvert^\alpha\right] &\le2^\alpha{\mathbb E}\left[\lvert U^x_t-U^x_s\rvert^\alpha+\lvert U^y_s-U^x_s\rvert^\alpha\right]\\ &\le2^\alpha C_\alpha\lvert t-s\rvert^{\alpha/2}+2^\alpha\tilde C\lvert y-x\rvert^{\alpha/2}. \end{aligned}

Choosing ${\alpha > 4}$ and ${\beta=\alpha/2-2}$, the Kolmogorov continuity theorem provides a jointly continuous version of ${U^x_t}$ over ${t\le T}$, which is locally ${\gamma}$-Hölder continuous for all ${\gamma < 1/2-2/\alpha}$. Letting ${\alpha}$ go to infinity, this holds for all ${\gamma < 1/2}$. Also, as argued in the proof of lemma 7, the fact that ${U^x_t}$ is constant in x for large positive, and large negative, values of x means that it will be globally ${\gamma}$-Hölder continuous. Finally, the result follows by letting T go to infinity. ⬜

The proof of theorem 2 follows from the lemma above in a very similar way to how the proof of theorem 1 followed from lemma 8.

Proof of Theorem 2: We again express the local time using (5).
Since we know that the path of a Brownian motion is almost surely locally ${\gamma}$-Hölder continuous for all ${\gamma < 1/2}$, it follows that ${(X_t-x)_+}$ is jointly ${\gamma}$-Hölder continuous in t and x, over bounded time intervals. By lemma 9, the same is true of ${U^x_t}$ and, hence, of ${L^x_t}$. ⬜

I finally complete the proof of theorem 6, showing that Hypothesis A is sufficient for local times to have a modification that is jointly continuous in time and cadlag in the level. The idea is similar to that given above for continuous local martingales but, now, we have additional terms to account for the jumps of the process and the drift V. In particular, the drift can introduce discontinuities, explaining why we only obtain cadlag versions in x.

Proof of Theorem 6: Let us define ${f_x(X)=(X-x)_+}$. By definition, the local times are given by

$\displaystyle \frac12L^x_t=f_x(X_t)-f_x(X_0)-\int_0^t1_{\{X_- > x\}}dX-\sum_{s\le t}\left(\Delta f_x(X_s)-1_{\{X_{s-} > x\}}\Delta X_s\right).$

We let M and V be as in decomposition (1).

\displaystyle \begin{aligned} \frac12L^x_t&=A^x_t-U^x_t-B^x_t,\\ A^x_t&=f_x(X_t)-f_x(X_0)-\sum_{s\le t}\Delta f_x(X_s),\\ U^x_t&=\int_0^t1_{\{X > x\}}dM,\\ B^x_t&=\int_0^t1_{\{X > x\}}dV. \end{aligned}

Each of the terms on the right hand side can be defined pathwise, except for the stochastic integral ${U^x_t}$ which, by lemma 8, has a jointly continuous version. So, it only remains to check joint continuity for A and B. Starting with A, for each fixed x, this is a cadlag process whose jump at time t is ${\Delta f_x(X_t)-\Delta f_x(X_t)=0}$, so it is continuous. Joint continuity will follow if we can prove uniform continuity in x for t restricted to any bounded interval ${[0,T]}$. Choosing ${y > x}$, the function ${g(X)=f_x(X)-f_y(X)}$ is bounded by ${y-x}$ and has derivative bounded by 1.
Hence,

$\displaystyle A^x_t-A^y_t=g(X_t)-g(X_0)-\sum_{s\le t}\Delta g(X_s)$

gives the bound

$\displaystyle \sup_{t\le T}\lvert A^y_t-A^x_t\rvert\le\lvert y-x\rvert+\sum_{t\le T}\lvert y-x\rvert\wedge\lvert\Delta X_t\rvert.$

By Hypothesis A, ${\sum_{t\le T}\lvert\Delta X_t\rvert}$ is almost surely bounded and then, by dominated convergence, ${\sup_{t\le T}\lvert A^y_t-A^x_t\rvert}$ tends to zero as ${y-x\rightarrow0}$. So ${A^x_t}$ is almost surely uniformly continuous in x over the range ${t\le T}$, as required.

Finally, we look at ${B^x_t}$, which we consider to be defined in a pathwise sense. That is, for each fixed ${\omega\in\Omega}$ define it as the pathwise Lebesgue-Stieltjes integral with respect to the locally finite variation path ${t\mapsto V_t(\omega)}$. For each x, it is an integral with respect to a continuous FV process, so is continuous in t. To complete the proof, we need to show that it is jointly continuous in t and cadlag in x. For this, it is sufficient to show that, over each bounded time interval ${[0,T]}$, the paths ${t\mapsto B^x_t}$ are cadlag in x under uniform convergence. Fix ${x\in{\mathbb R}}$ and choose ${y > x}$. Then,

$\displaystyle B^x_t-B^y_t=\int_0^t1_{\{y\ge X > x\}}dV$

and, hence,

$\displaystyle \sup_{t\le T}\lvert B^x_t-B^y_t\rvert\le\int_0^T1_{\{y\ge X > x\}}\lvert dV\rvert.$

The integrand tends to zero as y decreases to x so, by bounded convergence, ${\lvert B^x_t-B^y_t\rvert}$ tends uniformly to zero over ${t\le T}$. This gives right-continuity in x. To show that the left limits exist, define

$\displaystyle B^{x-}_t=\int_0^t1_{\{X\ge x\}}dV.$

Then, for ${y < x}$ we similarly obtain,

$\displaystyle \sup_{t\le T}\lvert B^{x-}_t-B^y_t\rvert\le\int_0^T1_{\{x > X > y\}}\lvert dV\rvert.$

Again, as y increases to x, bounded convergence shows that ${\lvert B^{x-}_t-B^y_t\rvert}$ tends to zero uniformly over ${t\le T}$.
Hence, ${B^x_t}$ is almost surely jointly continuous in t and cadlag in x, so the same holds for ${L^x_t}$. Finally, using the expression above for ${B^{x-}_t}$ we obtain that the jump w.r.t. x is,

\displaystyle \begin{aligned} \frac12(L^x_t-L^{x-}_t) &=B^{x-}_t-B^x_t\\ &=\int_0^t1_{\{X\ge x\}}dV-\int_0^t1_{\{X > x\}}dV\\ &=\int_0^t1_{\{X=x\}}dV \end{aligned}

as required. ⬜

## 3 thoughts on “Local Time Continuity”

1. Dongzhou Huang says:

Hi Professor Lowther, Thank you a lot for your blogs. I am a Ph.D. student who wants to work on stochastic processes. Your blogs are well written and detailed. Many questions puzzling me were answered after reading your blogs. But there is still one place I cannot understand. In the last paragraph of the proof of Lemma 7, you wrote: note that from the definition, U_t^x can be chosen constant in x over the range x < min_t X_t and over x > max_t X_t. If the process X_t is bounded, I can understand. Suppose sup_t |X_t| < K almost surely; then for x > K, U_t^x = 0. But if the process X_t is not bounded, I cannot figure out how to choose U_t^x to be constant. The main challenge is that the integral int_0^t 1_{X > x} dM is not a pathwise integral. By the way, would you mind recommending some references on stochastic integration with respect to semimartingales with jumps? I am trying to learn this part of the knowledge in the next month. Again, a lot of thanks for your blogs. Best, Dongzhou

1. Hi Dongzhou, Hopefully Theorem 4 of an earlier post clears up your problems with not being pathwise integrable. However, we do not even need to use that. For any K, let T be the stopping time at which X first hits K or lower. Then, for levels x, y less than K, the integrands agree on [0,T] so, by stopping, U^x=U^y on [0,T]. In particular, U^x=U^y (almost surely) whenever min_t X_t > K.

1. Dongzhou Huang says:

Hi Professor Lowther, Thank you for your quick reply. Now I can understand. And thank you for letting me know about Theorem 4. I didn’t know it before.
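As a numerical illustration of formula (5) and theorem 2, the following Python sketch (the variable names and the simple Euler discretisation are my own, not from the post) simulates a Brownian path and evaluates the discretised Tanaka formula ½L^x_T = (X_T−x)_+ − (X_0−x)_+ − Σ 1_{X_i>x}ΔX_i. For a level x outside the path's range the local time vanishes, which is the constancy used at the end of the proof of lemma 7.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 100_000, 1.0
dt = T / n

# A discretised Brownian path started at 0.
X = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))

def local_time(X, x):
    # Discretised Tanaka formula (5):
    #   (1/2) L^x_T = (X_T - x)_+ - (X_0 - x)_+ - sum_i 1_{X_i > x} (X_{i+1} - X_i)
    plus = np.maximum(X - x, 0.0)
    U = np.dot((X[:-1] > x).astype(float), np.diff(X))
    return 2.0 * (plus[-1] - plus[0] - U)

L0 = local_time(X, 0.0)       # positive for a typical path through 0
Lfar = local_time(X, 100.0)   # level above max(X): identically zero
print(round(L0, 3), Lfar)
```

Refining the grid (larger n) makes the estimate converge to the true local time, and evaluating `local_time` on a grid of levels x gives a numerical picture of the Hölder(1/2−) continuity in x proved above.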
http://saxebelle.nl/9sydem3h/fc0508-square-root-symbol-on-keyboard
# Square root symbol on keyboard

The square root symbol (√), also called the radical, does not appear on a standard keyboard, but there are several easy ways to type it.

**Windows Alt codes.** Press and hold the Alt key, type 251 on the numeric keypad, then release Alt; the square root symbol appears. Similarly, Alt+0178 gives the superscript two (²) used for "squared", and Alt+0179 gives the superscript three (³) for "cubed". These codes require the numeric keypad on the alpha-numeric part of your keyboard, so they may not work on compact laptop keyboards.

**Mac.** The keyboard combination Option+V creates the square root symbol. (On some layouts, Option-V instead pauses as a dead key, allowing you to accent the next character with a caron ˇ.)

**Character Map.** In Windows, go to Start > All Programs > Accessories > System Tools and click Character Map. Find the square root symbol, then cut, copy and paste it into your document. Once pasted, you can adjust its size, colour, italic and bold formatting; the symbol is visible in most fonts.

**MS Word.** For typing a square root in MS Word 2016: (1) click Insert on the top menu, (2) go to Equation, and (3) under Equation Tools, on the Design tab, in the Structures group, pick the radical structure. Alternatively, open the Symbol utility via Insert > Symbols and insert √ directly.

**MS Excel.** The formula =UNICHAR(8730) gives the square root symbol (tested in Excel 2016). Alternatively, go to the Insert tab and click Symbol in the Symbols group. Keyboard shortcuts may not work well in spreadsheets, so inserting the symbol this way, then copying and pasting it for subsequent uses, is the most reliable approach.

**Phones and tablets.** The iPhone/iPad keyboard cannot input the squared symbol directly, but a third-party keyboard like Gboard gets the job done; you can also get a cube symbol by long-pressing the number 3.

**Plain text.** When no symbol is available, sqrt() is a good representation of a square root: sqrt(4) = 2, and to take the square root of 25 you would enter sqrt(25). The square root is also sometimes indicated as the "one-half power", using the fractional exponent 1/2, so that √B = a means a times a equals B. For the other arithmetic symbols, use an asterisk (*, Shift+8 on an American keyboard) for multiplication, for example 3*4, and a slash (/) for division.

Note that the functional keyboard layout, i.e. the arrangement of key-to-meaning associations, is determined in software (Windows in our case), so these shortcuts can vary between layouts.
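The Alt codes and the Excel UNICHAR formula above all boil down to entering particular character codes. A quick Python check (Python here simply stands in for any language with Unicode support) confirms the code points involved; note that Alt+251 refers to the legacy OEM code page rather than the Unicode code point:

```python
# √ is Unicode code point 8730 (U+221A), which is what Excel's
# =UNICHAR(8730) returns; ² and ³ are code points 178 and 179,
# matching the Windows Alt+0178 and Alt+0179 codes.
sqrt_sym = chr(8730)
squared = chr(178)
cubed = chr(179)

print(sqrt_sym, squared, cubed)        # √ ² ³
print(hex(ord("√")))                   # 0x221a
print("\N{SQUARE ROOT}" == sqrt_sym)   # True
```

So copying √ from any of the methods above always yields the same character, U+221A.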
https://math.stackexchange.com/questions/497960/notation-for-null-vector-with-one-entry-1
# Notation for null vector with one entry = 1

Is there a common notation for a vector which has all elements equal to 0 except for one, which is equal to 1? I was considering using a Kronecker delta, but the standard use of two subscripts, $\delta_{ij}$, seems unnecessary since it is a vector and therefore a rank 1 tensor, whereas the two indices suggest a rank 2 tensor. Any thoughts?

• Fairly common is $e_i$. – Daniel Fischer Sep 18 '13 at 22:12
• And the components of $e_i$ are in fact $\delta_{ij}$. – mrf Sep 18 '13 at 22:14
• @DanielFischer: If you submit your comment as an answer I would be glad to accept it. – okj Sep 18 '13 at 22:44

A common notation (the most common, as far as I am aware) for a vector with one component $1$ and all other components $0$ is $e_i$, where the $1$ is in the $i$-th place. This notation is common not only for vectors in $F^n$, where $F$ is a field, but also in sequence spaces ($\ell^p$ etc.) and products $F^A$, where $A$ is an uncountable index set, and subspaces thereof (like $\ell^p(A)$).
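Concretely, the components of $e_i$ are the Kronecker delta, $(e_i)_j = \delta_{ij}$. A small NumPy sketch (NumPy and the helper name are my own choices, used purely for illustration):

```python
import numpy as np

def standard_basis(n, i):
    """Return e_i in R^n: all zeros except a 1 in position i (0-indexed)."""
    e = np.zeros(n)
    e[i] = 1.0
    return e

e2 = standard_basis(4, 2)
# Components are the Kronecker delta: (e_i)_j = 1 if i == j else 0.
delta = np.array([1.0 if j == 2 else 0.0 for j in range(4)])
print(e2)                                 # [0. 0. 1. 0.]
print(np.array_equal(e2, delta))          # True
# The rows of the identity matrix are exactly e_0, ..., e_{n-1}:
print(np.array_equal(np.eye(4)[2], e2))   # True
```

This also explains why `np.eye(n)[i]` is a common idiom for $e_i$: the identity matrix has entries $\delta_{ij}$.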
http://www.paultaylor.eu/prafm/html/s47.html
Practical Foundations of Mathematics

4.7  Interpretation of the Lambda Calculus

The business of this section is to establish the connection between the symbolism of the λ-calculus (λ-abstraction, free variables, β-reduction and so on) and its semantics, involving sets, Scott domains and cartesian closed categories. On the face of it, these are very different beasts. Usually, some kind of translation is defined, which takes one system to the other. Such an approach tends to exaggerate the separation in order to make a greater triumph of the reconciliation. In this chapter, we already have a technique which brings both parties together on the same categorical platform. Then the ways in which they each express the same essential features can be compared directly.

The syntactic category was constructed in Section 4.3. Its objects are lists of typed variables and its morphisms are lists of terms, where we left the notions of type and term undefined. In Section 4.6, these meant sorts (base types) and algebraic expressions respectively. Now we shall allow the types to be expressions built up from some given sorts using the binary connective →, and the terms to be λ-expressions. We already have the notion of composition: it is given by substitution, as before, and is not something new involving λ-abstraction, which we do not yet understand. The Normal Form Theorem 4.3.9 for substitutions still holds - not to be confused with that for the λ-calculus, Fact 2.3.3. The category has specified products, given by concatenation.

As the raw syntax is now a category, we can already ask about any universal properties it might have. We begin by defining the terms to be αδ-equivalence classes, ie taking account of any algebraic laws between operation-symbols in the language, but not the βη-rules. These take the form of equations between morphisms of the category, and we shall argue from them towards cartesian closure.
The technique is a generic one, and will be applied to binary sums, dependent sums and dependent products in Sections 5.3ff, 9.3 and 9.4 respectively. It is easy to be fooled by syntactic treatments into thinking that for a type to be called [X→Y] is necessary and sufficient for it to behave as a function-type. Our development (here and in Chapter IX) is based on how this type is used (application, abstraction and the βη-rules): any type-expression or semantic object is a priori eligible for the role.

REMARK 4.7.1 Application is a binary operation-symbol: every instance of it is obtained uniquely by substitution for f and x into

ev_{X,Y} = [y := f x] : [f : [X→Y], x : X] → [y : Y].

The raw calculus   Abstraction, which is what the function-type is about, is much more interesting. In Sections 1.5 and 2.3 sequent rules were needed: λ is not an operation on terms but a meta-operation on terms-in-context. It defines a bijection (prooftree omitted), as in Definition 2.3.7, so it does the same to their interpretations.

Categorically, the effect of an operation-symbol is by postcomposition, affecting only the target of a morphism; invariance under substitution is automatic, as that works by precomposition, at the source. Abstraction, however, affects the source as well as the target, so it must be defined as a meta-operation on hom-sets, cf the indirect transformations of trees on page 1.1.2. An extra condition must be added to handle substitution; this requires a new concept, naturality, which we shall study in the next section (Example 4.8.2(h)).

So first we shall concentrate on the meaning of the new operation λ in the raw calculus, and in particular on its invariance under substitution, adding the β- and η-rules later. With or without these rules, substitution remains the notion of composition, and we shall refer to the categories composed of λ-terms and of raw λ-terms respectively.
DEFINITION 4.7.2 Let C be a category with a specified terminal object and for which each pair of objects has a specified product together with its projections. Then a raw cartesian closed structure assigns to each pair of objects X and Y

(a) an object of C, called the function-type or -space, [X→Y],

(b) a C-morphism, called application, ev_{X,Y} : [X→Y] × X → Y, and

(c) for each object Γ, a function λ_{Γ,X,Y} : C(Γ × X, Y) → C(Γ, [X→Y]) between hom-sets, called abstraction, obeying the naturality law

λ_{Γ,X,Y}(p ∘ (u × id_X)) = (λ_{Δ,X,Y}(p)) ∘ u

for each u : Γ → Δ and p : Δ × X → Y. (diagram omitted)

In defining products we didn't reserve any special treatment for base (atomic) types - nor do we here. In the semantic case they are not special anyway, but in the syntactic category an object is a list of types. We use Currying (Convention 2.3.2) to exponentiate by a list.

EXAMPLES 4.7.3 The following have raw cartesian closed structures:

(a) In Set, the specified products are cartesian ones, the function-space Y^X (Example 2.1.4) serves as [X→Y] and application provides ev_{X,Y}. The abstraction λ_{Γ,X,Y} takes the function p : Γ × X → Y of two variables to the function (of one variable s : Γ) whose value is the function x ↦ p(s, x). The naturality law is clearly valid.

(b) Any Heyting semilattice (Definition 3.6.15); the interpretation of the language is that of propositions as types (Remark 2.4.3). The rules (⇒I) and (⇒E) correspond to abstraction and evaluation, and nothing need be said about naturality.

(c) We write Cn×_{L+λ} for the category composed of raw λ-terms, given by Definition 4.3.11, in which a term is an αδ-equivalence class. The function-type [[x⃗ : X⃗] → [y⃗ : Y⃗]] in Cn×_{L+λ} is [f⃗ : F⃗], where len F⃗ = len Y⃗ and the f⃗ are new variables (defining equations omitted). The expressions in f⃗, p⃗ and y⃗ must be read as "for each j, …" (which explains the bracket conventions).
Naturality with respect to u = ẑ (weakening) and u = [z := c] is the substitution rule in Definition 2.3.7. By the Normal Form Theorem 4.3.9, any substitution u may be expressed as a composite of these two special cases, from which the naturality law above follows.

Interpretation   The raw λ-calculus extends the language of algebra by some new types [X→Y] and operation-symbols ev_{X,Y}. As yet these have no special significance, and they can be handled as if they were algebra: hence the notation Cn×_{L+λ} (in contrast to Cn→_L below).

REMARK 4.7.4 Let C be a category together with a raw cartesian closed structure, in which the base types, constants and operation-symbols of L have an assigned meaning. Then the language L+λ has a unique interpretation, and this defines a functor ⟦−⟧ : Cn×_{L+λ} → C.

PROOF: Building on Remark 4.6.5, by structural recursion,

(a) the base types are given to be certain objects;
(b) the function-types are those of the raw cartesian closed structure;
(c) from these the contexts are (specified) products;
(d) the variables and operation-symbols (including evaluation, ev) are treated as in algebra, and the laws are satisfied;
(e) the last clause of the raw cartesian closed structure says how to perform λ-abstraction;
(f) the morphisms are treated as in the direct declarative language.

By Theorem 4.6.7 this is a product-preserving functor, and by the present construction it also preserves the new structure. □

The β- and η-rules   We have constructed Cn×_{L+λ} from the syntax, so it has names for its objects and morphisms. But it is also a category, so we may compare these two modes of expression, from which we shall derive a universal property.
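The Set instance of Example 4.7.3(a) can be sketched directly in code. This is an illustrative Python sketch of my own (functions stand in for morphisms, and equality of morphisms is only spot-checked at sample points): `curry` plays the role of the abstraction λ_{Γ,X,Y}, and `ev` of the application ev_{X,Y}.

```python
def ev(f, x):
    """Application ev_{X,Y} : [X -> Y] x X -> Y in Set."""
    return f(x)

def curry(p):
    """Abstraction: send p : G x X -> Y to s |-> (x |-> p(s, x))."""
    return lambda s: (lambda x: p(s, x))

# A sample morphism p : G x X -> Y and a substitution u : D -> G.
p = lambda s, x: s * 10 + x
u = lambda t: t + 1

# beta-rule: ev(curry(p)(s), x) == p(s, x).
assert ev(curry(p)(3), 7) == p(3, 7) == 37

# Naturality law: curry(p o (u x id)) and curry(p) o u agree pointwise.
lhs = curry(lambda t, x: p(u(t), x))
rhs = lambda t: curry(p)(u(t))
assert all(lhs(t)(x) == rhs(t)(x) for t in range(5) for x in range(5))

# eta-rule: curry(ev) acts as the identity on function values, pointwise.
f = curry(p)(2)
assert all(curry(ev)(f)(x) == f(x) for x in range(5))
```

The β- and η-rules hold automatically here because functions in Set are extensional; the syntactic category only satisfies them after imposing the βη-equivalences.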
LEMMA 4.7.5 The interpretation satisfies the rules iff

$\mathrm{ev}_{X,Y}\circ\bigl(\lambda_{\Gamma,X,Y}(p)\times\mathrm{id}_X\bigr) = p$   (β) (4.1)

$\lambda_{[X\to Y],X,Y}(\mathrm{ev}_{X,Y}) = \mathrm{id}_{[X\to Y]}$   (η) (4.2)

PROOF: These are the interpretations of $[y:=fx]\circ[f:=\lambda x.p,\ x:=x] = [y:=(\lambda x.p)x] \rightsquigarrow [y:=p]$ and $[f:=\lambda x.fx] \rightsquigarrow [f:=f]$, where $x$ can be free in $p$, and $f$ is a variable. []

DEFINITION 4.7.6 A cartesian closed structure on a category is a raw cartesian closed structure which satisfies the β- and η-rules.

EXAMPLES 4.7.7 The following are cartesian closed structures:

(a) Set and any Heyting semilattice.

(b) The category of contexts and λ-terms, $\mathrm{Cn}^{\to}_{\mathcal L}$. This is again given by Definition 4.3.11, but now a term is an αβδη-equivalence class.

(c) Using domain-theoretic techniques developed by Dana Scott, it is possible to construct a space $X$ such that $X\cong X\times X\cong[X\to X]$. Then $X$ is a model of the untyped λ-calculus with surjective pairing, and $\mathrm{End}(X)$ is called a C-monoid [Koy82]. We only need to add a terminal object to get a two-object cartesian closed category $\{X,\mathbf 1\}$. (But splitting idempotents (Definition 1.3.12, Exercise 4.16) gives a richer category.)

REMARK 4.7.8 Currying or λ-abstraction gives an alternative way of handling (many-argument) operation-symbols as constants, in the same way that Hilbert's implicational logic (Remark 1.6.10) eliminated all of the rules apart from $({\Rightarrow}\mathcal E)$ in favour of axioms. [diagram omitted]

Let $X_1,\ldots,X_k\vdash r:Y$ be an operation-symbol (of type $Y$ and arity $k$). Then $\tilde r \equiv \lambda\vec x{:}\vec X.\,r(\vec x)\colon[\vec X\to Y]$ is a constant (of exponential type), from which the original operation is recovered by the β-rule. In the unary case, $r(a)$ is an operation applied to a value. In a cartesian closed category this equals $\mathrm{ev}(\tilde r,a) = \tilde r\,a$ in the sense of λ-application. The former uses composition by substitution (which we treat as the standard notion), whilst the λ-calculus provides $g\circ f = \lambda x.g(fx)$. These coincide iff the β- and η-rules hold.
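Since Set is the motivating model (Examples 4.7.3(a) and 4.7.7(a)), the naturality, β- and η-equations can be spot-checked with ordinary currying. The following Python sketch is an illustration added here, not part of the text:

```python
def curry(p):
    """Abstraction lambda_{Gamma,X,Y}: turn p : Gamma x X -> Y into Gamma -> [X -> Y]."""
    return lambda s: (lambda x: p(s, x))

def ev(f, x):
    """Application ev_{X,Y} : [X -> Y] x X -> Y."""
    return f(x)

p = lambda s, x: 10 * s + x    # p : Delta x X -> Y, on small integers
u = lambda s: s + 1            # u : Gamma -> Delta

# Naturality: lambda(p o (u x id_X)) = lambda(p) o u, pointwise.
lhs = curry(lambda s, x: p(u(s), x))
rhs = lambda s: curry(p)(u(s))
assert all(lhs(s)(x) == rhs(s)(x) for s in range(5) for x in range(5))

# beta-rule: ev o (lambda(p) x id_X) = p.
assert all(ev(curry(p)(s), x) == p(s, x) for s in range(5) for x in range(5))

# eta-rule: lambda(ev) is the identity on the function-space, pointwise.
f = lambda x: x * x
g = curry(ev)(f)
assert all(g(x) == f(x) for x in range(10))
```

Note that `curry(ev)(f)` literally unfolds to `lambda x: f(x)`, which is why the η-rule holds on the nose in Set.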
The universal property

DEFINITION 4.7.9 An object $F$, with a morphism $\mathrm{ev}\colon F\times X\to Y$, is an exponential or function-space if for every object $\Gamma\in\mathrm{ob}\,\mathcal C$ and morphism $p\colon\Gamma\times X\to Y$ there is a unique morphism $\tilde p\colon\Gamma\to F$ such that $p = \mathrm{ev}\circ(\tilde p\times\mathrm{id}_X)$. The exponential $F$ is written $[X\to Y]$ or $Y^X$.

The notation $\tilde p$ for the exponential transposition is commonly found in category theory texts, but it is clearly inadequate to name all of the morphisms of a cartesian closed category. So they frequently go without a name. All too often, proofs are left to rely on verbal transformations of unlabelled diagrams, without regard to the categorical precept that morphisms are at least as important as objects. The λ-calculus gives the general notation we need.

PROPOSITION 4.7.10 A cartesian closed structure on a category is given exactly by the choice of a product and a function-space for each pair of objects, together with the projections and evaluation.

PROOF: Suppose we have the structure of Definition 4.7.2 and Lemma 4.7.5. The two definitions of function-space share the same data and also the β-rule, so that $\lambda_{\Gamma,X,Y}(p)$ serves for $\tilde p$. [diagram omitted] If $\lambda_{\Gamma,X,Y}$ satisfies η and the naturality law, whilst $\tilde p$ obeys β, then [equations omitted] so $\tilde p$ is unique, i.e. the universal property holds. Conversely, naturality and the β-rule follow from the universal property by uniqueness (as in Proposition 4.5.13). The η-rule holds as $\mathrm{id}$ serves for $\lambda(\mathrm{ev})$, and the naturality law holds because its right hand side serves for the left. []

Example 7.2.7 gives an alternative proof.

COROLLARY 4.7.11 The exponential object $[X\to Y]$ is unique up to unique isomorphism. It defines a functor which is contravariant in the first or raised argument and covariant in the other.

PROOF: Loosely speaking, $[u\to z](f) = u;f;z$. []

THEOREM 4.7.12 The category $\mathrm{Cn}^{\to}_{\mathcal L}$ has a cartesian closed structure and a model of the λ-calculus with base types and constants from $\mathcal L$.
Any other such interpretation in a category $\mathcal C$ is given by a unique functor $[\![-]\!]\colon\mathrm{Cn}^{\to}_{\mathcal L}\to\mathcal C$ that preserves the cartesian closed structure and the model. Conversely, any such functor is an interpretation.

PROOF: As in Theorem 4.4.5 and Remark 4.6.5. Remark 4.7.4 extends the interpretation to the λ-calculus; in particular the function-types have to be preserved. By Proposition 4.7.10, these must be exponentials. []

Cartesian closed categories of domains   The category of sets and total functions is the fundamental interpretation of the typed λ-calculus, but it does not have the fixed point property (Proposition 3.3.11) needed for denotational semantics. During the 1970s and 1980s a veritable cottage-industry arose, manufacturing all kinds of domains with Scott-continuous maps, each with its own peculiar proof of cartesian closure. In fact these categories (necessarily) share the same function-space as in Dcpo: what is needed in each case is not a repetition of general theory, but the verification that the special semantic property is inherited by the function-space.

THEOREM 4.7.13 Pos has a cartesian closed structure.

PROOF: The universal property tells us what the exponential $[X\to Y]$ must be. Taking $\Gamma = \{\ast\}$, it is the set of monotone functions, whilst for $\mathrm{ev}$ to be monotone we must have $f\le g\Rightarrow\forall x.\,f(x)\le g(x)$. Now consider $\Gamma = \{\bot < \top\}$, cf Example 4.5.5(b). If $f\le g$ pointwise then there is a monotone function $\Gamma\times X\to Y$ given by $(\bot,x)\mapsto f(x)$ and $(\top,x)\mapsto g(x)$. Its exponential transpose is $\bot\mapsto f$ and $\top\mapsto g$, so $f\le g$ as elements of the function-space.

For this to give a cartesian closed structure we must verify that

(a) $X\times Y$ and $[X\to Y]$ are well defined objects;
(b) $\pi_0$, $\pi_1$ and $\mathrm{ev}$ are well defined morphisms;
(c) $\langle-,-\rangle$ and $\lambda$ take morphisms to morphisms;
(d) pairing, naturality and the β- and η-rules are satisfied.

The first three parts were proved in Propositions 3.5.1 and 3.5.5, but it is the notion of cartesian closed category which makes sense of the collection of facts in Section 3.5.
The laws in part (d) are inherited from the underlying sets and functions. []

The result for Scott-continuous functions (redefining $[X\to Y]$) is proved in the same way.

THEOREM 4.7.14 Dcpo has a cartesian closed structure.

PROOF: For similar reasons, $[X\to Y]$ must be the set of Scott-continuous functions with the pointwise order. Propositions 3.5.2 and 3.5.10 gave the details, based on a discussion of pointwise joins, and in particular Corollary 3.5.13 about joint continuity of $\mathrm{ev}$. []

Algebraic lattices, boundedly complete posets, L-domains and numerous other structures form cartesian closed categories with Scott-continuous functions as their morphisms. The issue of making $\mathrm{ev}$ preserve structure jointly in its two arguments may be resolved in a different way, as Exercise 4.51 shows.

At the end of the next section we shall show that categories themselves may be considered as domains and form a cartesian closed category. First we need to introduce the things which will be the morphisms of the exponential category; this turns out to be the abstract notion which we needed for substitution-invariance of λ. Section 7.6 returns to the relationship between syntax and semantics, bringing the term model into the picture. Function-spaces for dependent types are the subject of Section 9.4.
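For finite posets the function-space of Theorem 4.7.13 can be examined by brute force. Here is a small illustrative sketch (the encoding is mine) over the two-point poset $\{0 \le 1\}$, checking that $\mathrm{ev}$ is monotone with respect to the pointwise order on the function-space:

```python
from itertools import product

# The two-point poset {0 <= 1}; leq is the usual order on integers.
X = [0, 1]
leq = lambda a, b: a <= b

# Monotone endofunctions of X, encoded as pairs (f(0), f(1)).
monotone = [f for f in product(X, repeat=2) if leq(f[0], f[1])]

# Pointwise order on the function-space [X -> X].
fs_leq = lambda f, g: all(leq(f[x], g[x]) for x in X)

# ev is monotone: f <= g (pointwise) and x <= x' imply f(x) <= g(x').
assert all(leq(f[x], g[y])
           for f in monotone for g in monotone
           for x in X for y in X
           if fs_leq(f, g) and leq(x, y))
```

Only three of the four endofunctions survive the monotonicity filter; the non-monotone pair (1, 0) is excluded, exactly as the universal property demands.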
https://byjus.com/question-answer/a-tiny-positively-charged-particle-is-moving-head-on-towards-a-heavy-nucleus-the-distance-1/
# Question

A tiny positively charged particle is moving head-on towards a heavy nucleus. The distance of closest approach depends upon:

A. The number of protons in the nucleus. (correct)
B. The number of nucleons in the nucleus. (incorrect)
C. The mass of the incident particle. (correct)
D. The charge and velocity of the incident particle. (correct)

## Solution

The correct options are A, C and D. The distance of closest approach is given by

$$r_0 = \frac{1}{2\pi\epsilon_0}\,\frac{qZe}{mv^2}$$

where $q$, $m$ and $v$ are the charge, mass and velocity of the incident particle, and $Z$ is the atomic number of the target nucleus.
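Equating the initial kinetic energy with the Coulomb potential energy at the turning point gives the formula above. A quick numerical sketch for an alpha particle fired at a gold nucleus (the speed used here is an illustrative value, not from the question):

```python
k = 8.988e9        # Coulomb constant 1/(4*pi*eps0), N m^2 / C^2
e = 1.602e-19      # elementary charge, C

q = 2 * e          # alpha particle charge
m = 6.64e-27       # alpha particle mass, kg
Z = 79             # gold
v = 2.0e7          # assumed speed, m/s

# (1/2) m v^2 = k q (Z e) / r0   =>   r0 = 2 k q Z e / (m v^2)
r0 = 2 * k * q * Z * e / (m * v**2)
print(f"distance of closest approach: {r0:.2e} m")  # ~2.7e-14 m
```

The result scales with $qZ$ and inversely with $mv^2$, which is why options A, C and D all matter while the neutron count (option B) does not.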
https://indico.math.cnrs.fr/event/2963/
Séminaire Géométrie et groupes discrets

# Dimension drop of the harmonic measure of some hyperbolic random walks

## by Prof. Matias Carrasco (Universidad de la Republica, Montevideo)

Monday, 4 December 2017 (Europe/Paris), at IHES (Amphithéâtre Léon Motchane)

Description: We consider the simple random walk on two types of tilings of the hyperbolic plane: the first by 2π/q-angled regular polygons, and the second by the Voronoi tiling associated to a random discrete set of the hyperbolic plane, the Poisson point process. In the second case, we assume that there are on average λ points per unit area. In both cases the random walk (almost surely) escapes to infinity with positive speed, and thus converges to a point on the circle. The distribution of this limit point is called the harmonic measure of the walk. I will show that the Hausdorff dimension of the harmonic measure is strictly smaller than 1, for q sufficiently large in the Fuchsian case, and for λ sufficiently small in the Poisson case. In particular, the harmonic measure is singular with respect to the Lebesgue measure on the circle in these two cases. The proof is based on a Furstenberg-type formula for the speed together with an upper bound for the Hausdorff dimension by the ratio between the entropy and the speed of the walk. This is joint work with P. Lessa and E. Paquette.

Organised by Fanny Kassel. Contact: cecile@ihes.fr
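The entropy/speed bound invoked at the end of the abstract is usually stated as follows (notation mine; a hedged paraphrase of the standard result, not the speaker's wording): writing $h$ for the entropy of the walk and $\ell$ for its speed,

$$\dim_H \nu \;\le\; \frac{h}{\ell},$$

so establishing the strict inequality $h < \ell$ is enough to force the Hausdorff dimension of the harmonic measure $\nu$ below 1.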
https://brilliant.org/problems/quadratika-inspired-by-sandeep-sir/
# Quadratika - Inspired by Sandeep Sir!

Calculus Level 3

$\large \displaystyle \int_0^{10} \left \lfloor \left ( \frac{4|x|}{x^2+16} \right)^m \right \rfloor \, dx$

Given that 1 lies between the roots in $y$ of the equation $y^2 - my + 1 = 0$, evaluate the integral above.
https://lw2.issarice.com/posts/LrQDqHeNgzxuhx3GQ/philosophical-apologetics-book-suggests-replacing-bayes
# Philosophical apologetics book suggests replacing Bayes theorem with "Inference to the Best Explanation" (IBE)

post by jwhendy · 2011-08-30T03:31:22.033Z · score: 3 (6 votes) · LW · GW · Legacy · 34 comments

I'm about 2/3 through an apologetics book that was recommended to me, Menssen and Sullivan's The Agnostic Inquirer, and was quite surprised to run into a discussion of Bayes' theorem; I wanted some input from the LW community. The book is quite philosophical, and I admit that I am probably not following all of it. I find heady philosophy to be one of those areas where something doesn't seem quite right (as in the conclusion that someone pushes), but I can't always identify what. In any case, the primary point of the book is to attempt to replace the traditional apologetics method with a new one. The status quo has been to appeal to "natural theology," non-theological areas of discussion which attempt to bring one to the conclusion that some kind of theistic being exists, and from there establish that Christianity is the true formulation of what, exactly, this theistic being is/wants/does, etc., by examining revealed theistic truths (aka the Bible). Menssen and Sullivan suggest that revelation need not be put off so long. I don't want to get too into it, but I think this helps set the stage. Their argument is as follows:

(1) If it is not highly unlikely that a world-creator exists, then investigation of the contents of revelatory claims might well show that it is probable that a good God exists and has revealed.
(2) It is not highly unlikely that a world-creator exists.
(3) So, investigation of the content of a revelatory claim might well show it is probable that a good God exists and has revealed.
(4) So, a negative conclusion concerning the existence of a good God is not justified unless the content of a reasonable number of leading revelatory claims has been seriously considered. (p. 63)

## Issues Menssen and Sullivan have with Bayes' applicability to this arena

Then they begin trying to choose the best method for evaluating revelatory content. This is where Bayes comes in. The pages are almost all available via Google Books HERE in Section 4.2.1, beginning on page 173. They suggest the following limitations:

• Bayesian probability works well when the specific values are known (they use the example of predicting the color of a ball to be drawn out of a container). In theology, the values are not known.
• The philosophical community is divided about whether Bayesian probability is reliable, and thus everyone should be hesitant about it if the experts are hesitant.
• If one wants to evaluate the probability that this world exists and there are infinitely many possibilities, n, then no matter how small a probability one assigns to each one, the sum will be infinite. (My personal take on this is to question whether a literal infinity can exist in nature... 1/n * n is 1, but maybe I'm not understanding their exact gripe.)
• In some cases, they hold that prior probability is a useless term, as it would be "inscrutable." For example, they use Elliott Sober's example of gravity. What is its prior probability? If such a question is meaningless, they hold that "Has a good god revealed?" may be in the same category, and thus Bayesian probability breaks down when one attempts to apply it.
• There are so many components to certain questions that it would be nearly impossible or impossible to actually name them all and assign probabilities so that the computation accounted for all the bits of information required.
• If Bayes' theorem produces an answer that conflicts with answers arrived at via other means, one might simply tweak his/her Bayes values until the answer aligned with what was desired.

## Their suggested alternative, Inference to the Best Explanation (IBE)

They define IBE as follows:

1. If a hypothesis sufficiently approximates an ideal explanation of an adequate range of data, then the hypothesis is probably or approximately true.
2. h1 sufficiently approximates an ideal explanation of d, an adequate range of data.
3. So h1 is probably or approximately true.

Obviously the key lies in their definition of "ideal explanation." They cover this in great detail, but it's not all that specific. Basically, they want the explanation to be deductive (vs. inductive), grounded in "fundamental substances and properties" (basic, established truths), and to be overwhelmingly more true than contending hypotheses. You can read a bit more by reading the Google Books section following the above.

I'm interested in takes on the above limitations that I've summarized, in comparison with their suggested replacement, IBE. I'd be especially interested in hearing from those more familiar with philosophy. Menssen & Sullivan cover others' definitions of IBE prior to presenting theirs, which suggests that it's a common term in philosophy.

My gut intuition is that Bayes should produce the same answer as IBE, but should also be more reliable, since IBE is not an inference by a precisely defined method. It seems like their proposal for IBE is doing precisely what Bayes' theorem does, but they've just formalized a way to do it more sloppily, since they claim one can't know the exact numbers to use in Bayes' theorem. I can't tell what, exactly, is different.
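One standard reply to the first limitation above (that "the values are not known"): Bayes' theorem still yields usable bounds when its inputs are only known within conceded limits, because the posterior is monotone in each input. A Python sketch with invented numbers:

```python
def posterior(prior, like_h, like_not_h):
    """P(H|E) via Bayes' theorem."""
    return prior * like_h / (prior * like_h + (1 - prior) * like_not_h)

# Conceded bounds only, not point values (all numbers invented):
prior = (0.001, 0.05)    # bounds on P(H)
lh    = (0.5, 0.9)       # bounds on P(E|H)
lnh   = (0.01, 0.1)      # bounds on P(E|not H)

# The posterior is increasing in P(H) and P(E|H) and decreasing in
# P(E|not H), so extreme inputs give the extreme outputs.
lo = posterior(prior[0], lh[0], lnh[1])
hi = posterior(prior[1], lh[1], lnh[0])
print(f"P(H|E) lies between {lo:.3f} and {hi:.3f}")
```

This is essentially the bounding move Carrier makes (discussed below): get your interlocutor to concede a limit, then propagate the limit through the formula.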
You take your priors, factor in your "adequate range of data" (new evidence), and figure out whether the hypothesis, given the prior probability and adjusted for additional evidence, comes out more probable than not (making it "probably or approximately true"). Is this just funny games? Is there something to the proposed limitations of Bayes' theorem they present?

I think Richard Carrier explained how he bypasses situations when exact numbers are not known (and how often are the numbers known precisely?) in his argument entitled Why I Don't Buy the Resurrection Story. He specifies that even if you don't know the exact number, you can at least say that something wouldn't be less likely than X or more likely than Y. In this way you can use the limits of probability in your formula to still compute a useful answer, even if it's not as precise as you would like.

If anyone is interested in this in more detail, I could perhaps scan the relevant 20 or so pages on this and make them available somewhere.

comment by Jack · 2011-08-30T10:57:48.910Z · score: 7 (7 votes) · LW(p) · GW(p)

So abductive inference (IBE) is distinguished from enumerative induction. The former basically involves inverting deductive conditionals to generate hypotheses. So when asked why the grass is wet we immediately start coming up with deductive stories for how the grass might have gotten wet: "It rained, therefore the grass got wet", "The sprinkler was on, therefore the grass got wet" (note that the idea that an explanans deductively entails its explanandum is almost certainly false, but this was the dominant position in philosophy of science when abductive inference was first discussed, and a lot of the time people don't update their versions of IBE to take into account the, much better, causal model of explanation). Then the available hypotheses are compared and the best one chosen according to some set of ideal criteria.
In contrast, enumerative induction involves looking at lots of particular pieces of evidence and generalizing: "I've seen the grass be wet 100 times and 90 of those times it had rained, therefore it rained (with a 10% chance I'm wrong)". Now, the "ideal criteria" for an IBE differ from philosopher to philosopher, but everyone worth their salt will include degree of consistency with past observation, so IBE essentially subsumes enumerative induction. Usually the additional criteria are vaguer things like parsimony and generality. Now, since enumerative induction is about the frequency of observations, it is more conducive to mathematical analysis and the Bayesian method. But the things IBE adds to the picture of inference aren't things the Bayesian method has to ignore; you just have to incorporate the complexity of a hypothesis into its prior. But since objective Occam priors are the most confusing, controversial and least rigorous aspect of Bayesian epistemology, there is room to claim that somehow our incorporation of economy into our inferences requires a vaguer, more subjective approach. But that's stupid. The fact that we're bad Bayesian reasoners isn't a rebuttal to the foundational arguments of Bayesian epistemology (though those aren't fantastic either). Your inferential method still has to correspond to Bayes' rule or your beliefs are vulnerable to being Dutch booked, and your behavior can be demonstrated to be suboptimal according to your own notion of utility (assuming certain plausible axioms about agency).
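The Dutch book mentioned above can be illustrated with a toy calculation (numbers invented): an agent whose prices for a bet on A and a bet on not-A sum to more than 1 will accept a pair of bets that loses money however A turns out.

```python
from fractions import Fraction

# Incoherent prices: the agent prices a bet on A at 0.6 and a bet on
# not-A at 0.6, so the prices sum to 1.2 > 1.  Each bet pays 1 if won.
price_A    = Fraction(6, 10)
price_notA = Fraction(6, 10)
payout     = Fraction(1)

# Whichever way A turns out, exactly one of the two bets pays off,
# so the agent recovers 1 after spending 1.2: a guaranteed loss.
for A in (True, False):
    winnings = (payout if A else 0) + (payout if not A else 0)
    net = winnings - (price_A + price_notA)
    assert net == Fraction(-1, 5)   # a sure loss of 0.2 in every case
```

Exact fractions are used so the sure-loss check is exact rather than subject to floating-point rounding.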
That the authors say things like "If one wants to evaluate the probability that this world exists and there are infinitely many possibilities, n, then no matter how small a probability one assigns to each one, the sum will be infinite" suggests they are either unfamiliar with or reject an approach identifying the ideal prior with the computational complexity of the hypothesis (note that a strict enumerative inductive approach can be redeemed if facts about computational complexity are nothing more than meta-inductive facts about our universe(s)). Whether one accepts that approach or not, it plainly can't be worse than relying on evolved, instinctual or aesthetic preference when picking hypotheses -- which I assume is where they're going with this. One needn't apply an explicitly Bayesian method at all to reject God on IBE grounds. Theism plainly fails any economy criteria one would want -- and I could go on about why, but this comment needs to end.

comment by jwhendy · 2011-08-31T14:35:50.961Z · score: 0 (0 votes) · LW(p) · GW(p)

Thanks for the comment; I think it aligns well with many of the rest of the comments as well. I actually would be interested to know what you mean by "fails any economy criteria." I'm not familiar with that term.

comment by Jack · 2011-09-01T20:19:37.685Z · score: 0 (0 votes) · LW(p) · GW(p)

An explanation is usually said to be economical if it is simple, general, elegant, etc. In other words, whatever criteria you want to use in addition to "consistent with evidence" -- this is mostly (entirely?) covered in these parts by discussing the complexity of the hypothesis. I'm just using it as a catch-all term for all those sorts of criteria for a good hypothesis. To make God seem like a "good" hypothesis for something you need to pretty much invert your usual standards of inference.

comment by jwhendy · 2011-09-01T20:44:15.526Z · score: 0 (0 votes) · LW(p) · GW(p)

Duh!
In reading your response, it seems so simple, but for some reason when I read "economy" the first time, I just blanked as to what it would mean. I guess I've not been active enough around here lately. I understand now. Complexity, more parts = lower probability by definition, Occam's razor, etc. I'm thinking your point is that any phenomenon's explanation automatically decreases in economy when introducing beings and concepts vastly foreign to observation and experience. Would that be reasonably accurate?

comment by shokwave · 2011-08-30T10:40:56.547Z · score: 7 (7 votes) · LW(p) · GW(p)

Is this just funny games?

Eh, yes.

Is there something to the proposed limitations of Bayes' theorem they present?

Pretty much, no. The questions they are trying to answer, of "how does a system deal with a lack of exact numbers" and such, are approached with tools like replacing discrete probabilities with probability distributions and using maximum-ignorance or Kolmogorov-complexity-approximating priors. More broadly, be skeptical of any weakenings. Going from quantitative posterior probabilities to qualitative "probably or approximately yes" judgments is suspicious (especially because posterior probabilities are already about "probably or approximately" yes or no answers). Similarly, moving from cardinal probabilities (posteriors that are a decimal between 0 and 1 representing how likely a thing is to happen) to ordinal probabilities, as Richard Carrier (see quote below) does, is likely bad.

you can at least say that something wouldn't be less likely than X or more likely than Y

comment by jwhendy · 2011-08-31T14:21:36.636Z · score: 0 (0 votes) · LW(p) · GW(p)

Thanks for the comment. This lines up with my [basic-level] thinking on this. It struck me as similar to EY's point in Reductionism, with his friend insisting that there was a difference between predictions resulting from Newtonian calculations and those found using relativity.
In a similar vein, they seem to insist that this area isn't governed by Bayes' theorem.

Lastly, I might not have credited Carrier well enough. He does assign cardinal values to his predictions. He simply makes the point that when we don't know, we can use a "fringe" number that everyone agrees is at the low or high end. For example, he's making a case against the resurrection and needs a value for the possibility that the Centurion didn't properly verify Jesus' death. Carrier says:

As it is, we must grant at least a 0.1% chance that the centurion mistook him for dead...

All I was pointing out is that Carrier, though making a case to those who disagree with him, tries to present some reasons why a person in that day and time might mistake a living (but wounded) person for being dead when they weren't. Then he brings in a cardinal number, in essence saying, "You'll grant me that there's a 1 in 1000 chance that this guy made a mistake, right?", and then he proceeds to use the value itself, not a qualitative embodiment.

Is that any clearer re. Carrier?

comment by RichardKennaway · 2011-08-30T09:04:50.504Z · score: 6 (8 votes) · LW(p) · GW(p)

I googled "Inference to the Best Explanation", and this paper of Gilbert Harman's appears to be where the phrase was first coined, although the general idea goes back further. More recently, there's a whole book on the subject, and lukeprog's web site has an introductory article by the author of that book. There isn't a single piece of mathematics in either of the two papers, which leads me to expect little of them. The book (of which I've only seen a few pages on Amazon) does contain a chapter on Bayesian reasoning, arguing that it and IBE are "broadly compatible".
This appears to come down to the usual small-world/large-world issue: Bayes is sound mathematics (say the small-worlders) when you already have a hypothesis that gives you an explicit prior, but it must yield to something else when it comes to finding and judging hypotheses. That something else always seems to come down to magic. It may be called IBE, or model validation, or human judgement, but however many words are expended, no method of doing it is found. It's the élan vital of statistics. ETA: I found the book in my university library, but only the first edition of 1991, which is two chapters shorter and doesn't include the Bayes chapter (or any other mathematics). In the introduction (which is readable on Amazon) he remarks that IBE has been "more a slogan than an articulated philosophical theory", and that by describing inference in terms of explanation, it explains the obscure by the obscure. From a brief scan I was not sufficiently convinced that he fixes these problems to check the book out. comment by jwhendy · 2011-08-31T14:31:32.320Z · score: 0 (0 votes) · LW(p) · GW(p) Thanks for the comment. The lack of math is a problem, and I think you've said it nicely: That something else always seems to come down to magic. It may be called IBE, or model validation, or human judgement, but however many words are expended, no method of doing it is found. It's the élan vital of statistics. Reading this book, Agnostic Inquirer, is quite the headache. It's so obscure and filled with mights, maybes, possibly's, and such that I constantly have this gut feeling that I'm being led into a mental trap but am not always sure which clauses are doing it. Same for IBE. It sounds common-sensically appealing. "Hey, Bayes is awesome, but tell me how you expect to use it on something like this topic? You can't? Well of course you can't, so here's how we use IBE to do so." 
But the heuristic strikes me as simply an approximation of what Bayes would do anyway, so I was quite confused as to what they were trying to get at (other than perhaps have their way with the reader). comment by DanielLC · 2011-08-30T06:12:45.455Z · score: 5 (7 votes) · LW(p) · GW(p) Bayesian probability works well when the specific values are known (they use the example of predicting the color of a ball to be drawn out of a container). In theology, the values are not known. If you don't allow exact probabilities for things, there are decisions it becomes impossible to make, such as whether or not to take a given bet. If you try to come up with a different method of choosing, you either end up with paradoxes, or you end up behaving exactly as if you were using Bayesian statistics. If one wants to evaluate the probability that this world exists and there are infinitely many possibilities, n, then no matter how small a probability one assigns to each one, the sum will be infinite. This is only true if we assign them all the same probability. We tend to weight them by their complexity. Also, if you didn't, the more complex possibilities would tend to contain the simpler ones, which may approach a limit as the number of possibilities considered increases. comment by jwhendy · 2011-08-31T14:39:05.071Z · score: 1 (1 votes) · LW(p) · GW(p) Also, if you didn't, the more complex possibilities would tend to contain the simpler ones, which may approach a limit as the number of possibilities considered increases. Loved that point. Well said and I hadn't thought of that. ...or you end up behaving exactly as if you were using Bayesian statistics. Which is what I think they're doing here. Coming up with some new formulation that may be operating within the realm of Bayes anyway. If you try to come up with a different method of choosing, you either end up with paradoxes... I'd be interested in hearing more about this. Can you give an example of a paradox? 
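DanielLC's weighting-by-complexity point can be made concrete: assign hypothesis n the prior mass 2^(-n) and the total over infinitely many hypotheses converges to 1 rather than diverging, answering the infinite-sum objection in the post. A toy check (the indexing is purely illustrative):

```python
from fractions import Fraction

# Prior mass 2^(-n) for hypothesis n = 1, 2, 3, ...
# The partial sums approach, but never exceed, 1 -- no divergence,
# however many hypotheses are considered.
total = sum(Fraction(1, 2**n) for n in range(1, 51))
assert total == 1 - Fraction(1, 2**50)
```

Any weighting that decays fast enough in description length behaves the same way; the uniform assignment the book's authors imagine is the one choice that cannot work.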
Do you just mean that if your decision making method is not robust (when creating your own), you may end up with it telling you to both make the bet and not make the bet? comment by DanielLC · 2011-08-31T22:11:29.762Z · score: 1 (1 votes) · LW(p) · GW(p) Either you would a) neither be willing to take a bet nor take the opposite bet, b) be willing to take a combination of bets such that you'd necessarily lose, or c) use Bayesian probability. comment by jwhendy · 2011-08-31T22:32:16.748Z · score: 1 (1 votes) · LW(p) · GW(p) Thanks for the link and explanation. comment by r_claypool · 2011-08-30T04:55:34.066Z · score: 4 (4 votes) · LW(p) · GW(p) ... establish that Christianity is the true formulation of what [God] wants ... by examining revealed theistic truths (aka the Bible) This traditional line of apologetics is all very weird to me, a committed Christian of nearly 30 years. Seriously studying the scriptures is just what convinced me the Bible is not inspired by an intelligent or loving God. I don't have the knowledge (yet) to answer your questions, but I'm very interested in what others will say. comment by jwhendy · 2011-08-31T18:29:49.370Z · score: 1 (1 votes) · LW(p) · GW(p) Thanks for providing that link. Loved reading through the comments and the first set was really refreshing (What reasons do you have for accepting the supernatural?). comment by Logos01 · 2011-08-30T06:17:50.194Z · score: 0 (0 votes) · LW(p) · GW(p) Obviously the key lies in their definition of "ideal explanation." That seems non-obvious to me. It's highly problematic, sure -- but not "key". "Key" is "adequate range of data". That cannot be an objective measure. It occurs to me that Bayes' theorem has no such problem; it simply takes additional input and revises its conclusions as they come -- it makes no presumption of its conclusions necessarily being representative of absolute truth. I also, personally, take objection to: (2) It is not highly unlikely that a world-creator exists. 
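DanielLC's option (b) above, a combination of bets you'd necessarily lose (a "Dutch book"), can be made concrete with a toy calculation; the numbers below are illustrative and not from the thread.

```python
# Toy Dutch book: incoherent credences guarantee a sure loss.

# Agent's stated credences for an event and its complement.
p_rain = 0.6
p_not_rain = 0.6        # incoherent: the two credences sum to 1.2

stake = 1.0             # each bet pays `stake` if it wins

# A bookie sells the agent both bets, each priced at the agent's
# credence, which the agent considers fair.
cost = (p_rain + p_not_rain) * stake   # 1.20 paid up front

# Exactly one of the two bets pays off, whatever the weather does.
payoff = stake                          # 1.00 received

net = payoff - cost                     # -0.20 in every possible outcome
print(f"Guaranteed net: {net:.2f}")
```

Whichever way the event turns out, the agent collects 1.00 but has paid 1.20, a guaranteed loss; coherent (Bayesian) credences summing to 1 close this loophole.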
I find it is highly unlikely that "a world-creator" exists. For two reasons. 1) Our universe necessarily possesses an infinite history (Big Bang + Special Relativity says this.) 2) Any ruleset which allows for spontaneous manifestation of an agentless system is by definition less unlikely than the rulesets which allow for the spontaneous manifestation of an agent that can itself manifest rulesets. (The latter being a subset of the former, and possessed of greater 'complexity' -- an ugly term but there just isn't a better one I am familiar with; in this case I use it to mean "more pieces that could go wrong if not assembled 'precisely so'.") I can't say, as a person who is still neutral on this whole "Bayesian theory" thing (i.e., I feel no special attachment to the idea and can't say I entirely agree with the notion that our universe in no ways truly behaves probabilistically) -- I can't say that this topic as related is at all convincing. comment by prase · 2011-08-30T10:50:13.713Z · score: 4 (4 votes) · LW(p) · GW(p) 1) Our universe necessarily possesses an infinite history (Big Bang + Special Relativity says this.) Can you clarify? Big Bang is usually put a little more than 13 billion years ago; that's a lot of time, but not infinity. comment by Logos01 · 2011-08-30T13:07:22.770Z · score: 0 (0 votes) · LW(p) · GW(p) Here's a thought experiment for you: Imagine that you've decided to take a short walk to the black hole at the corner 7-11 / Circle-K / 'Kwik-E-Mart'. How long will it take you to reach the event horizon? (The answer, of course, is that you never will.) As you approach the event horizon of a quantum singularity, time is distorted until it reaches an infinitesimal rate of progression. The Big Bang states that the entire universe inflated from a single point; a singularity. The same rules thus govern in reverse: the first instants of the universe took an infinitely long period of time to progress.
It helps if you think of this as a two-dimensional graph, with the history of the universe as a line. As we approach the origin mark, the graph of history curves; the "absolute zero instant" of the Universe is, thusly, shown to be an asymptotic limit; a point that can only ever be approached but never, ever reached. comment by prase · 2011-08-30T13:49:12.497Z · score: 3 (3 votes) · LW(p) · GW(p) If you decide to really walk inside, you could be well behind the horizon before you remember to check your watch and hit the singularity not long afterwards. There are different times in general relativistic problems. There is the coordinate time, which is what one usually plots on the vertical axis of a graph. This is (with usual choice of coordinates) infinite when any object reaches the horizon, but it also lacks immediate physical meaning, since GR is invariant with respect to (almost) arbitrary coordinate changes. Then there may be times measured by individual observers. A static observer looking at an object falling into a black hole will never see the object cross the horizon; apparently it takes infinite time to reach it. But the proper time of a falling observer (the time measured by the falling observer's clocks) is finite and nothing special happens at the horizon. comment by Logos01 · 2011-08-30T13:54:17.879Z · score: 0 (0 votes) · LW(p) · GW(p) Correct, but since the entire universe was at that singularity, the distortion of time is relevant. comment by prase · 2011-08-30T14:46:34.272Z · score: 1 (1 votes) · LW(p) · GW(p) How exactly? It is the physical proper time since the Big Bang which is 13.7 billion years, isn't it? comment by Logos01 · 2011-08-30T15:05:29.129Z · score: 0 (2 votes) · LW(p) · GW(p) Yes and no. Since the first second took an infinitely long period of time to occur. comment by prase · 2011-08-30T15:32:08.697Z · score: 2 (2 votes) · LW(p) · GW(p) What does that mean?
Do you say that proper time measured along geodesics was infinite between the Big Bang and the moment denoted as "first second" by the coordinate time, or that the coordinate time difference between those events is infinite while the proper time is one second? comment by Logos01 · 2011-08-31T02:33:39.380Z · score: 0 (0 votes) · LW(p) · GW(p) The latter statement conforms to my understanding of the topic. comment by prase · 2011-08-31T07:35:00.045Z · score: 1 (1 votes) · LW(p) · GW(p) I agree. But now, how does that justify talking about infinite history? Coordinate time has no physical meaning, it's an arbitrary artifact of our description and it's possible to choose the coordinates in such a way to have the time difference finite. comment by Logos01 · 2011-08-31T11:13:26.384Z · score: 0 (0 votes) · LW(p) · GW(p) But now, how does that justify talking about infinite history? How does it not? It's a true statement: the graph of our history is infinitely long. Coordinate time has no physical meaning, I can't agree with that statement. and it's possible to choose the coordinates in such a way to have the time difference finite. That much is true, but it fails to reveal explicably the nature of why the question, "What happened before the Big Bang?" as being as meaningless as "What's further North than the North Pole?" comment by prase · 2011-08-31T11:51:18.051Z · score: 1 (1 votes) · LW(p) · GW(p) It's a true statement: the graph of our history is infinitely long. A graph of our history is not our history. Saying that our history is infinitely long because in some coordinates its beginning may have $t=-\infty$ is like saying the North Pole is infinitely far away because it is drawn there on Mercator projection maps. Anyway, it's not the graph of our history; there are many graphs and only some of them are infinitely long. Coordinate time has no physical meaning, I can't agree with that statement. It would be actually helpful if you also said why.
and it's possible to choose the coordinates in such a way to have the time difference finite. That much is true, but it fails to reveal explicably the nature of why the question, "What happened before the Big Bang?" as being as meaningless as "What's further North than the North Pole?" We aren't discussing the question "what happened before the Big Bang", but rather "how long ago the Big Bang happened". comment by Davorak · 2011-08-30T13:53:08.526Z · score: 1 (1 votes) · LW(p) · GW(p) It is currently unknown how to apply special relativity (SR) and general relativity (GR) to quantum systems, and it appears likely that they break down at this level. Thus applying SR or GR to black holes or the very beginning of the universe is unlikely to result in a perfectly accurate description of how the universe works. comment by jwhendy · 2011-08-31T14:45:10.992Z · score: 1 (1 votes) · LW(p) · GW(p) That seems non-obvious to me. It's highly problematic, sure -- but not "key". "Key" is "adequate range of data". I can see where you're coming from. I may have mistaken "adequate range of data" for simply "range of data." Thus it read more like, "I have this set of data. Which hypothesis is most closely like the 'ideal explanation' of this data." Thus, the key piece of information will be in how you define "ideal explanation." Re-reading, I think both are critical. How you define the ideal still matters a great deal, but you're absolutely right... the definition of an "adequate range" is also huge. I also don't recall them talking about this, so that may be another reason why it didn't strike me as strongly. ...and can't say I entirely agree with the notion that our universe in no ways truly behaves probabilistically Could you explain this? I thought that the fact that our universe did behave probabilistically was the whole point of Bayes' theorem.
If you have no rules of probability, why would you have need for a formula that says if you have 5 balls in a bucket and one of them is green, you will pull out a green one 20% of the time? If the universe weren't probabilistic, shouldn't that number be entirely unpredictable? comment by Logos01 · 2011-09-02T00:03:20.921Z · score: 1 (1 votes) · LW(p) · GW(p) Re-reading, I think both are critical. Critical I can agree to. "Key" is a more foundational term than "critical" in my 'gut response'. Could you explain this? I thought that the fact that our universe did behave probabilistically was the whole point of Bayes' theorem. The below might help: The frequentist definition sees probability as the long-run expected frequency of occurrence. P(A) = n/N, where n is the number of times event A occurs in N opportunities. The Bayesian view of probability is related to degree of belief. It is a measure of the plausibility of an event given incomplete knowledge. In other words, a Bayesian believes that each trial will have a set outcome that isn't 'fuzzy' even at the time the trial is initiated. The frequentist on the other hand believes that probability makes reality itself fuzzy until the trial concludes. If you had a sufficiently accurate predicting robot, to the Bayesian, it would be 'right' in one million out of one million coin flips by a robotic arm. To the frequentist, on the other hand, that sort of accuracy is impossible. Now, I believe Bayesian statistical modeling to be vastly more effective at modeling our reality. However, I don't think that belief is incompatible with a foundational belief that our universe is probabilistic rather than deterministic. comment by jwhendy · 2011-09-02T02:12:57.112Z · score: 0 (0 votes) · LW(p) · GW(p) Critical I can agree to. "Key" is a more foundational term than "critical" in my 'gut response'. I can dig.
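The frequentist definition quoted above, P(A) = n/N, together with the five-ball bucket mentioned earlier, can be illustrated with a short simulation (our own sketch, not from the thread):

```python
import random

random.seed(0)

# The thread's example: 5 balls in a bucket, exactly 1 of them green.
bucket = ["green", "red", "red", "red", "red"]

N = 100_000  # number of draws (with replacement)
n = sum(1 for _ in range(N) if random.choice(bucket) == "green")

freq = n / N      # frequentist estimate P(A) = n/N
print(freq)       # ~0.2, matching the 20% credence a Bayesian would assign
```

The long-run frequency converges on 1/5, which is also the degree of belief a Bayesian assigns before any draws; the two views differ on what that number means, not on its value here.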
If you had a sufficiently accurate predicting robot, to the Bayesian, it would be 'right' in one million out of one million coin flips by a robotic arm. To the frequentist, on the other hand, that sort of accuracy is impossible. My initial response was, "No way Bayesians really believe that." My secondary response was, "Well, if 'sufficiently accurate' means knowing the arrangement of things down to quarks, the initial position, initial angle, force applied, etc... then, sure, you'd know what the flip was going to be." If you meant the second thing, then I guess we disagree. If you meant something else, you'll probably have to clarify things. Either way, what you mean by "sufficiently accurate" might need some explaining. Thanks for the dialog. comment by Logos01 · 2011-09-02T04:31:54.865Z · score: 1 (1 votes) · LW(p) · GW(p) My initial response was, "No way Bayesians really believe that." When I was first introduced to the concept of Bayesian statistics, I had rather lengthy conversations on just this very example. Either way, what you mean by "sufficiently accurate" might need some explaining. "Sufficiently accurate" means "sufficiently accurate", in this case. sufficient: being as much as needed; accurate: without error, precise. Synthesize the two and you have "being as without error and precise as needed". Can't get more clear than that, I fear. Now, if I can read into the question you're tending to with the request -- well... let's put it this way; there is a phenomenon called stochastic resonance. We know that quantum-scale spacetime events do not have precise locations despite being discrete phenomena (wave-particle duality): this is why we don't talk about 'location' but rather 'configuration space'. Now, which portion of the configuration space will interact with which other portion in which way is an entirely probabilistic process.
To the Bayesians I've discussed the topic with at any length, this is where we go 'sideways'; they believe as you espoused: know enough points of fact and you can make inerrant predictions; what's really going to happen is set in stone before the trial is even conducted. Replay it a trillion, trillion times with the same exact original conditions and you will get the same results every single time. You just have to get the parameters EXACTLY the same. I don't believe that's a true statement. I believe that there is and does exist material randomness and pseudorandomness; and I believe further that while we as humans cannot ever truly exactly measure the world's probabilities but instead only take measurements and make estimates, those probabilities are real. comment by jwhendy · 2011-09-02T04:45:08.190Z · score: 0 (0 votes) · LW(p) · GW(p) Can't get more clear than that, I fear. Your "read into where I was tending with the request" was more like it. Sorry if I was unclear. I was more interested in what phenomenon such a machine would have at its disposal -- anything we can currently know/detect (sensors on the thumb, muscle contraction detection of some sort, etc.), only a prior history of coin flips, or all-phenomenon-that-can-ever-be-known-even-if-we-don't-currently-know-how-to-know-it? By "accurate" I was more meaning, "accurate given what input information?" Then again, perhaps your addition of "sufficiently" should have clued me in on the fact that you meant a machine that could know absolutely everything. I'll probably have to table this one as I really don't know enough about all of this to discuss further, but I do appreciate the food for thought. Very interesting stuff. I'm intuitively drawn to say that there is nothing actually random... but I am certainly not locked into that position, nor (again) do I know what I'm talking about were I to try and defend that with substantial evidence/argument. 
comment by Logos01 · 2011-09-02T05:02:03.807Z · score: 1 (1 votes) · LW(p) · GW(p) Then again, perhaps your addition of "sufficiently" should have clued me in on the fact that you meant a machine that could know absolutely everything. Funny thing. Just a few hours ago today, I was having a conversation with someone who said, "I need to remember, {Logos01}, that you use words in their literal meaning." I'm intuitively drawn to say that there is nothing actually random... It's a common intuition. I have the opposite intuition. As a layman, however, I don't know enough to get our postulates in line with one another. So I'll leave you to explore the topic yourself. comment by jwhendy · 2011-09-02T05:17:47.054Z · score: 0 (0 votes) · LW(p) · GW(p) ...you use words in their literal meaning. Indeed. Whether I should have caught on, didn't think about what you wrote or not, or perhaps am trained not to think of things precisely literally... something went awry :) To my credit (if I might), we were talking fairly hypothetical, so I don't know that it was apparent that the prediction machine mentioned would have access to all hypothetical knowledge we can conceive of. To be explicitly literal, it might have helped to just bypass to your previous comment: know enough points of fact and you can make inerrant predictions; what's really going to happen is set in stone before the trial is even conducted...I believe that there is and does exist material randomness and pseudorandomness; and I believe further that while we as humans cannot ever truly exactly measure the world's probabilities. That would have done it easier than reference to a prediction machine, for me at least. But again, I'm more of a noob, so mentioning this to a more advanced LWer might have automatically lit up the right association. So I'll leave you to explore the topic yourself. Sounds good. Thanks again for taking the time to walk through that with me!
https://www.stehovani-odvoz.cz/Basalt/2021-02-3376/
# dimensions cone m

## News & Detail

### Cone (Wikipedia, 05.07.2004)

A cone is a three-dimensional geometric shape that tapers smoothly from a flat base to a point called the apex or vertex. A cone is formed by a set of line segments, half-lines, or lines connecting a common point, the apex, to all of the points ...

### Cone cell (Wikipedia)

The second most common type responds the most to light of medium-wavelength, peaking at 530 nm, and is abbreviated M for medium, making up about a third of ...

### The Dimensions of Colour, trichromacy, opponency

The L and M cones are sensitive to all wavelengths of visible light, and while each responds to different parts of the spectrum to different degrees, their response does not in itself encode which part of the spectrum the stimulus came from. For example, an L cone responds in exactly the same way to light of wavelengths of 630 nm (red), 475 nm (cyan) and 400 nm (violet) (Figure 3.1). Only the ...

### DIN 3863 60° Cone Dimensions (Knowledge Yuyao)

Use a 30-degree gauge to measure the seat angle, as this dimension is taken from the fitting centerline. The male has a 60-degree seat. The female has a 24- or 60-degree seat. The seal takes place by contact between the 60-degree seat in the male end and the respective sealing area in the female end.

### geometry: Parametric equation for $n$-dimensional cone

Yeah, the notation is ugly (but you got it right) because I'm actually working with matrices in $\mathbb{R}^{m\times m}$ and there $d$ is a "unit matrix" $-1/\sqrt{n}I_n$, where $I_n$ is the identity matrix. I simply "reshaped" $d$ (as a matrix) to be an $m^2=n$-dimensional vector.

### Cones (PM Piping Weight calculation)

Simply select the shape from the tab below and the page will refresh. Then enter the appropriate dimensions. The weight provided is for estimating purposes only and should not be relied on for engineering calculations.

### ASTM Standard Cone Penetrometer Sizes (Vertek CPT, 31.03.2015)

Cones in the 5 cm² to 15 cm² range have been shown to produce consistent data in most soils, so corrections for different sizes are generally not needed. When using a cone outside this size range, corrections may be necessary to ensure that results are consistent with the body of CPT data: for example, very small cones tend to produce higher cone resistances than standard-size cones. If ...

### algebraic geometry: Dimension of cone of projective variety

Define the codimension of a subvariety of $\mathbf P^n$ or $\mathbf A^n$ à la Krull, and use the elementary fact from commutative algebra that the sum of the dimension and the codimension of a subvariety is the dimension of the ambient variety. Then, a similar argument as yours shows that $$\mathrm{codim}(C(X))\geq\mathrm{codim}(X)=n-\dim X.$$ Since ...

### Finding Dimensions of Cones (CK12-Foundation, 04.09.2020)

The approach to finding missing dimensions of a cone will be similar to how you found missing dimensions of a cylinder. The only difference is that the volume of a cone formula involves dividing by 3. To address this, the first step when solving is to multiply both sides of the equation by 3. Example: Older water tower designs have a cylindrical design with a cone-shaped roof on top. A ...

### BEARING (TIMKEN tapered roller bearing part number prefixes and suffixes)

| Prefix/Suffix | Cone or Cup | Explanation |
|---|---|---|
| CA | Cone | Single cone, envelope dimensions same as basic part number, different internal geometry. |
| CB | Cone | Single cone, dimensionally different from basic part number. |
| CD | Cup | Double cup with oil holes ... |

### Chapter 7: Perceiving Color (University of Washington)

- The physical dimensions of color
- The psychological dimensions of color appearance (hue, saturation, brightness)

As far as the M cones are concerned, these lights look the same. The absorption of a photon of light by a cone produces the same effect no matter what the wavelength. A given cone system will respond the same to a dim light near peak wavelength as a bright light away from the ...

### DIN 7631 60° cone end dimensions (Knowledge Yuyao, Jun 01, 2019)

DIN 7631 is "Compression couplings with spherical liner; straight compression couplings". This connection is frequently used in hydraulic systems. The male has a straight metric thread and a 60° (included angle) recessed cone. The female has a straight thread and a tapered nose seat. The seal takes place by contact between the cone of the male and ...

### algebraic geometry: Dimension of cone of projective variety (continued)

I have seen some answers on related questions on this site using the tool of fiber dimension and transcendence degree of the coordinate ring; however, I do not want an answer with these advanced techniques, as the notes I am reading do not assume the reader has this knowledge. It would be very nice to see a proof using basic constructions such as cone and projectivization defined above.

### Volume Calculator: calculate the volume of a cube, box

The volume formula for a cone is (height × π × (diameter / 2)²) / 3, where (diameter / 2) is the radius of the base (d = 2 × r), so another way to write it is (height × π × radius²) / 3, as seen in the figure below. Despite being a somewhat complex shape, you only need to know three dimensions to compute the volume of a regular cone.
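The volume formula in the last snippet can be checked in a few lines; the helper function below is our own sketch, not part of the original page.

```python
import math

def cone_volume(height: float, diameter: float) -> float:
    """Volume of a right circular cone: (height * pi * (diameter / 2)**2) / 3."""
    radius = diameter / 2
    return height * math.pi * radius ** 2 / 3

# With height 3 and diameter 2 the radius is 1, so V = 3 * pi * 1 / 3 = pi.
print(cone_volume(3.0, 2.0))  # ~3.14159
```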
### Metric Hex Nut Dimensions Sizes Chart (Amesweb)

Note: * non-preferred threads according to standard. All dimensions are given in millimeters. Supplements: metric hex nut dimensions calculator.

### Cone cell (photoreceptor) (Wikipédia, translated from French)

Humans have between 3 [1] and 4 [2] million cones per eye. They make up only 5% of all photoreceptors and are mainly concentrated in the fovea, at the center of the retina, along the extension of the optical axis. The central part of the fovea (the "foveola"), within a radius of 0.3 mm, contains only cones [3].

### Dimensions: Database of Dimensioned Drawings

Dimensions is an ongoing reference database of dimensioned drawings documenting the standard measurements and sizes of the everyday objects and spaces that make up our world. Created as a universal resource to better communicate the basic properties, systems, and logics of our built environment, Dimensions is a free platform for increasing public and professional knowledge of ...
http://math.stackexchange.com/questions/131113/showing-f-1y-is-poincare-dual-to-f-operatornamevol
# Showing $[f^{-1}(y)]$ is Poincare dual to $f^*(\operatorname{vol})$.

Let $f: N^n \to M^m$ be a smooth map between closed oriented manifolds. Then I'm trying to show that for almost all $y \in M$, the homology class $[f^{-1}(y)] \in H_{n-m}(N)$ is Poincare dual to $f^* \operatorname{vol}_M$, where $\operatorname{vol}_M$ is the volume form on $M$ (here I'm using the fact that for generic $y$, $f^{-1}(y)$ is a submanifold of dimension $n-m$). Unwinding the definitions, I need to show that if $\phi \in \Omega^{n-m}(N)$ is closed then $$\int_{f^{-1}(y)} i^* \phi = \int_N \phi \wedge f^* \operatorname{vol}_M$$ where $i: f^{-1}(y) \to N$ is the inclusion. Now this is easy to see if, for example, $N = M \times F$ and the map $f$ is just the projection, but I am having trouble proving this in general. My issue is that the left hand side appears dependent on $y$ while the right hand side doesn't. But I believe all such $f^{-1}(y)$ are homologous, so the left hand side is really independent of $y$, but I don't know how to show it's equal to the right hand side. I'm sure this is a really standard thing in differential topology but for some reason I haven't found it in my literature search.

**Answer.** General result: With the same assumptions, one can show that, if $f$ is non-singular over a cycle $C \subset M,$ then with the proper orientation, the cycle $f^{-1}(C) \subset N$ is Poincare dual to the pull-back via $f$ of the Poincare dual of $C.$ Now, all you have to show is that the Poincare dual of a (generic) point $y \in M$ is the volume form. The rest follows from Poincare duality and the pairing of closed differential forms of complementary degree in de Rham cohomology.
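As a sanity check, here is the product-case computation the question alludes to, written out. This is our own elaboration (not part of the original exchange), and it assumes $M$ is connected and $\operatorname{vol}_M$ is normalized so that $\int_M \operatorname{vol}_M = 1$; without that normalization the two sides differ by the factor $\operatorname{vol}(M)$.

```latex
% N = M x F, f = projection onto M, so f^{-1}(y) = {y} x F.
% Assumptions: M connected, \int_M vol_M = 1, and d\phi = 0.
\begin{align*}
\int_N \phi \wedge f^*\operatorname{vol}_M
  &= \int_{x \in M} \Big( \int_{\{x\}\times F} i_x^*\,\phi \Big)\,\operatorname{vol}_M
     && \text{(integrate along the fibers first)} \\
  &= c \int_M \operatorname{vol}_M = c = \int_{f^{-1}(y)} i^*\phi.
\end{align*}
```

Here $c$ denotes the fiber integral $x \mapsto \int_{\{x\}\times F} i_x^*\phi$: it is the fiber-integration push-forward of the closed form $\phi$, hence a closed $0$-form, and therefore a constant on the connected manifold $M$.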
https://chemistry.stackexchange.com/questions/85561/reusable-and-safe-endothermic-reaction?noredirect=1
# Reusable and safe endothermic reaction [closed]

I am looking for a reusable endothermic reaction that is safe to both touch and handle. It needs to be able to be stored in a jacket and light enough to do physical activity. Any suggestions?

(Closed as unclear what you're asking by Mithoron, Todd Minehardt, Tyberius, DSVA, Jannis Andreska, Nov 10 '17 at 20:23.)

There are many endothermic physical processes such as melting ice or sublimation of solid carbon dioxide $\ce{CO2}$. One should also mention the thermoelectric cooling effect (Peltier device), which is rarely effective in practice, yet still an option. A suitable endothermic chemical reaction would probably be an electrolytic dissociation of certain salts, among which dissolving ammonium nitrate $\ce{NH4NO3}$ in water is probably one of the most well-known and effective examples.
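To put a rough number on the ammonium nitrate suggestion, here is a back-of-the-envelope cooling estimate. The enthalpy of solution (about +25.7 kJ/mol) is a typical literature value, the masses are made up for illustration, and the solution's heat capacity is crudely approximated as that of water.

```python
# Rough cooling estimate for dissolving ammonium nitrate in water.
# All numbers are typical/illustrative, not measured values.

dH_sol = 25.7e3   # J/mol, enthalpy of solution of NH4NO3 (endothermic, absorbs heat)
M_an = 80.04      # g/mol, molar mass of NH4NO3
c_water = 4.18    # J/(g*K), specific heat of water (solution approximated as water)

m_salt = 40.0     # g of NH4NO3 dissolved
m_water = 100.0   # g of water

n = m_salt / M_an                          # mol of salt
q = n * dH_sol                             # J absorbed from the mixture
dT = q / ((m_water + m_salt) * c_water)    # K drop, crude lumped estimate

print(f"~{dT:.0f} K temperature drop")
```

This lands in the ballpark of commercial instant cold packs, which use the same dissolution; the estimate ignores heat leaking in from the surroundings and the real heat capacity of the salt solution.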
https://yutsumura.com/idempotent-matrices-are-diagonalizable/
# Idempotent Matrices are Diagonalizable

## Problem 429

Let $A$ be an $n\times n$ idempotent matrix, that is, $A^2=A$. Then prove that $A$ is diagonalizable.

We give three proofs of this problem. The first one proves that $\R^n$ is a direct sum of eigenspaces of $A$, hence $A$ is diagonalizable. The second proof proves the direct sum expression as in proof 1 but we use a linear transformation. The third proof discusses the minimal polynomial of $A$.

### Range=Image, Null space=Kernel

In the following proofs, we use the terminologies range and null space of a linear transformation. These are also called image and kernel of a linear transformation, respectively.

## Proof 1.

Recall that the only possible eigenvalues of an idempotent matrix are $0$ or $1$. (For a proof, see the post “Idempotent matrix and its eigenvalues“.) Let $E_0=\{\mathbf{x}\in \R^n \mid A\mathbf{x}=\mathbf{0}\} \text{ and } E_{1}=\{\mathbf{x}\in \R^n \mid A\mathbf{x}=\mathbf{x}\}$ be subspaces of $\R^n$. (Thus, if $0$ and $1$ are eigenvalues, then $E_0$ and $E_1$ are eigenspaces.) Let $r$ be the rank of $A$. Then by the rank-nullity theorem, the nullity of $A$ is $\dim(E_0)=n-r.$ The rank of $A$ is the dimension of the range $\calR(A)=\{\mathbf{y} \in \R^n \mid \mathbf{y}=A\mathbf{x} \text{ for some } \mathbf{x}\in \R^n\}.$ Let $\mathbf{y}_1, \dots, \mathbf{y}_r$ be basis vectors of $\calR(A)$. Then there exists $\mathbf{x}_i\in \R^n$ such that $\mathbf{y}_i=A\mathbf{x}_i$ for $i=1, \dots, r$. Then we have \begin{align*} A\mathbf{y}_i&=A^2\mathbf{x}_i\\ &=A\mathbf{x}_i && \text{since $A$ is idempotent}\\ &=\mathbf{y}_i. \end{align*} It follows that $\mathbf{y}_i\in E_1$. Since $\mathbf{y}_i$, $i=1,\dots, r$, form a basis of $\calR(A)$, they are linearly independent and thus we have $r\leq \dim(E_1).$ We have \begin{align*} &n=\dim(\R^n)\\ &\geq \dim(E_0)+\dim(E_1) && \text{since } E_0\cap E_1=\{\mathbf{0}\}\\ &\geq (n-r)+r=n.
\end{align*}
So in fact all inequalities are equalities, and hence
$\dim(\R^n)=\dim(E_0)+\dim(E_1).$
This implies
$\R^n=E_0 \oplus E_1.$
Thus $\R^n$ is a direct sum of eigenspaces of $A$, and hence $A$ is diagonalizable.

## Proof 2.

Let $E_0$ and $E_1$ be as in Proof 1. Consider the linear transformation $T:\R^n \to \R^n$ represented by the idempotent matrix $A$, that is, $T(\mathbf{x})=A\mathbf{x}$.

Then the null space $\calN(T)$ of the linear transformation $T$ is $E_0$ by definition. We claim that the range $\calR(T)$ is $E_1$.

If $\mathbf{x}\in \calR(T)$, then there is $\mathbf{y}\in \R^n$ such that $\mathbf{x}=T(\mathbf{y})=A\mathbf{y}$. Then we have
\begin{align*}
\mathbf{x}&=A\mathbf{y}=A^2\mathbf{y}
=A(A\mathbf{y})
=A\mathbf{x}.
\end{align*}
(The second equality follows since $A$ is idempotent.) This implies that $\mathbf{x}\in E_1$, and hence $\calR(T) \subset E_1$.

On the other hand, if $\mathbf{x}\in E_1$, then we have
$\mathbf{x}=A\mathbf{x}=T(\mathbf{x})\in \calR(T).$
Thus, we have $E_1\subset \calR(T)$. Putting these two inclusions together gives $E_1=\calR(T)$.

Since $\calN(T)\cap \calR(T)=E_0\cap E_1=\{\mathbf{0}\}$ and, by the rank-nullity theorem, $\dim \calN(T)+\dim \calR(T)=n$, we have
$\R^n=\calN(T)\oplus \calR(T)=E_0\oplus E_1.$
Thus, $\R^n$ is a direct sum of eigenspaces of $A$ and hence $A$ is diagonalizable.

## Proof 3.

Since $A$ is idempotent, we have $A^2=A$. Thus $A^2-A=O$, the zero matrix, and so $A$ satisfies the polynomial $x^2-x$.

If $x^2-x=x(x-1)$ is not the minimal polynomial of $A$, then the minimal polynomial is either $x$ or $x-1$, so $A$ must be either the zero matrix or the identity matrix. Since these matrices are diagonalizable (as they are already diagonal matrices), we consider the case when $x^2-x$ is the minimal polynomial of $A$.

Since the minimal polynomial has two distinct simple roots $0, 1$, the matrix $A$ is diagonalizable.

## Another Proof

A slightly different proof is given in the post “Idempotent (Projective) Matrices are Diagonalizable“. The proof there is a variation of Proof 2.
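To see the decomposition of Proof 1 on a concrete example, here is a small self-contained Python sketch (an editorial illustration, not part of the original post). It uses the idempotent matrix $A=\begin{pmatrix}2&-2\\1&-1\end{pmatrix}$, whose null space $E_0$ is spanned by $(1,1)$ and whose range $E_1$ is spanned by $(2,1)$, and checks that conjugating by the matrix of eigenvectors diagonalizes $A$ with the eigenvalues $0$ and $1$:

```python
def matmul(X, Y):
    # product of two 2x2 matrices
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2.0, -2.0], [1.0, -1.0]]
assert matmul(A, A) == A  # A is idempotent: A^2 = A

# v0 spans E_0 (A v0 = 0); v1 spans E_1 (A v1 = v1, v1 is the first column of A)
v0, v1 = [1.0, 1.0], [2.0, 1.0]

# P has the eigenvectors as columns; P^{-1} A P should be diag(0, 1)
P = [[v0[0], v1[0]], [v0[1], v1[1]]]
det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
Pinv = [[P[1][1] / det, -P[0][1] / det], [-P[1][0] / det, P[0][0] / det]]
D = matmul(Pinv, matmul(A, P))
```

All entries involved are small integers, so the arithmetic is exact in floating point and `D` comes out as the diagonal matrix with entries $0$ and $1$, matching $\R^2=E_0\oplus E_1$.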
2018-03-19 00:55:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9819414615631104, "perplexity": 179.13711282602705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646189.21/warc/CC-MAIN-20180319003616-20180319023616-00444.warc.gz"}
http://www.optimization-online.org/DB_HTML/2008/12/2173.html
Behavior of BFGS with an Exact Line Search on Nonsmooth Examples

Adrian S. Lewis (aslewis@orie.cornell.edu)
Michael L. Overton (overton@cs.nyu.edu)

Abstract: We investigate the behavior of the BFGS algorithm with an exact line search on nonsmooth functions. We show that it may fail on a simple polyhedral example, but that it apparently always succeeds on the Euclidean norm function, spiraling into the origin with a Q-linear rate of convergence; we prove this in the case of two variables. Dixon's theorem implies that the result for the norm holds for all methods in the Broyden class of variable metric methods; we investigate how the limiting behavior of the steplengths depends on the Broyden parameter. Numerical experiments indicate that the convergence properties for $\|x\|$ extend to $\|Ax\|$, where $A$ is an $n \times n$ nonsingular matrix, and that the rate of convergence is independent of $A$ for fixed $n$. Finally, we show that steepest descent with an exact line search converges linearly for any positively homogeneous function that is $C^2$ everywhere except at the origin, but its rate of convergence for $\|Ax\|$ depends on the condition number of $A$, in contrast to BFGS.

Keywords: BFGS, quasi-Newton, nonsmooth, exact line search, Broyden class, Q-linear convergence

Category 1: Convex and Nonsmooth Optimization (Nonsmooth Optimization)

Citation: Submitted to SIAM J. Optimization

Download: [PDF]

Entry Submitted: 12/14/2008
Entry Accepted: 12/15/2008
Entry Last Modified: 12/14/2008
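The abstract's last claim, that steepest descent with an exact line search converges linearly on $\|Ax\|$, is easy to observe numerically. The following Python sketch (illustrative only, not the authors' code; all function names are ours) uses the fact that minimizing $\|A(x+td)\|$ over $t$ also minimizes its square, a quadratic in $t$, which gives a closed-form exact line search:

```python
import math

def matvec(A, v):
    # y = A v for a 2x2 matrix A
    return [A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1]]

def f(A, x):
    # f(x) = ||A x||
    y = matvec(A, x)
    return math.hypot(y[0], y[1])

def grad(A, x):
    # gradient of ||Ax|| away from the origin: A^T (Ax) / ||Ax||
    y = matvec(A, x)
    n = math.hypot(y[0], y[1])
    return [(A[0][0]*y[0] + A[1][0]*y[1]) / n,
            (A[0][1]*y[0] + A[1][1]*y[1]) / n]

def exact_step(A, x, d):
    # minimizing ||A(x + t d)|| is the same as minimizing its square,
    # a quadratic in t, so the exact step is t* = -(Ax)^T(Ad) / ||Ad||^2
    ax, ad = matvec(A, x), matvec(A, d)
    return -(ax[0]*ad[0] + ax[1]*ad[1]) / (ad[0]**2 + ad[1]**2)

def steepest_descent(A, x, iters=50):
    vals = [f(A, x)]
    for _ in range(iters):
        if vals[-1] < 1e-12:      # stop near the (nonsmooth) minimizer
            break
        g = grad(A, x)
        d = [-g[0], -g[1]]
        t = exact_step(A, x, d)
        x = [x[0] + t*d[0], x[1] + t*d[1]]
        vals.append(f(A, x))
    return vals

A = [[1.0, 0.0], [0.0, 3.0]]          # nonsingular, condition number 3
vals = steepest_descent(A, [1.0, 1.0])
# Q-linear convergence: the ratios of successive values stay bounded below 1
ratios = [vals[k+1] / vals[k] for k in range(len(vals) - 1)]
```

Since the iterates coincide with steepest descent on the smooth quadratic $\|Ax\|^2$, the per-step ratio settles near $(\kappa^2-1)/(\kappa^2+1)=0.8$ for this $A$, illustrating the dependence on the condition number noted in the abstract.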
2018-12-19 07:58:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7409390211105347, "perplexity": 992.0629156244638}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376831715.98/warc/CC-MAIN-20181219065932-20181219091932-00328.warc.gz"}
https://search.r-project.org/CRAN/refmans/CSHShydRology/html/ch_flow_raster_trend.html
ch_flow_raster_trend {CSHShydRology} R Documentation

## Raster plot and simple trends of observed streamflows by periods

### Description

Creates a raster plot plus trend plots for day of year, which are binned by a number of days (step), and the max, min, and median annual discharge across years. The plot contains four panels based upon the binned data.

### Usage

ch_flow_raster_trend(
  DF,
  step = 5,
  missing = FALSE,
  colours = c("lightblue", "cyan", "blue", "slateblue", "darkblue", "red")
)

### Arguments

DF - dataframe of daily flow data as read by ch_read_ECDE_flows.

step - a number indicating the degree of smoothing, e.g. 1, 5, 11.

missing - if FALSE, years with missing data are excluded; if TRUE, partial years are included.

metadata - a dataframe of station metadata; default is HYDAT_list.

colours - a vector of colours used for the raster plot. The default is c("lightblue", "cyan", "blue", "slateblue", "darkblue", "red").

### Details

The four plots are:

1. The maximum, minimum, and median flow with a trend test for each period: red arrows indicate decreases, blue arrows indicate increases.
2. The scale bar for the colours used in the raster plot.
3. The raster plot with a colour for each period and each year where data exist.
4. A time series plot of the minimum, median, and maximum annual bin values.

If there is no trend (p > 0.05) the points are black. Decreasing trends are in red, increasing trends are in blue.

### Value

Returns a list containing:

stationID - station ID, e.g. 05BB001

missing - how missing values were used: FALSE = used, TRUE = removed

step - number of days in a bin

periods - number of periods in a year

period - period numbers, i.e.
1:365/step

bins - values for each period in each year

med_period - median for each period

max_period - maximum for each period

min_period - minimum for each period

tau_period - Kendall's tau for each period

prob_period - probability of tau for each period

year - years spanning the data

median_year - median bin for each year

max_year - maximum bin for each year

min_year - minimum bin for each year

tau_median_year - value of tau and probability for annual median

tau_maximum_year - value of tau and probability for annual maximum

tau_minimum_year - value of tau and probability for annual minimum

### Author(s)

Paul Whitfield

### References

Whitfield, P. H., Kraaijenbrink, P. D. A., Shook, K. R., and Pomeroy, J. W. 2021. The Spatial Extent of Hydrological and Landscape Changes across the Mountains and Prairies of Canada in the Mackenzie and Nelson River Basins Based on data from a Warm Season Time Window. Hydrology and Earth System Sciences 25: 2513-2541.

### See Also

ch_flow_raster

### Examples

data(CAN05AA008)
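The period binning described above is straightforward to prototype. As a rough single-year illustration of the per-bin statistics (in Python rather than R, and not the package's actual implementation):

```python
import math
from statistics import median

def bin_daily_flows(flows, step=5):
    """Group one year of daily flows (in day-of-year order) into bins of
    `step` days and return (max, min, median) per bin, a toy version of
    the per-period statistics described above."""
    n_bins = math.ceil(len(flows) / step)
    stats = []
    for b in range(n_bins):
        chunk = flows[b*step:(b+1)*step]
        stats.append((max(chunk), min(chunk), median(chunk)))
    return stats

# a toy year: flow rises linearly over 365 days
flows = [float(d) for d in range(1, 366)]
stats = bin_daily_flows(flows, step=5)   # ceil(365/5) = 73 periods
```

The real function additionally aggregates each period across years and runs a Kendall tau trend test per period; this sketch only shows the binning step.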
2022-12-04 08:39:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48184946179389954, "perplexity": 7606.713200137958}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710968.29/warc/CC-MAIN-20221204072040-20221204102040-00461.warc.gz"}
https://math.meta.stackexchange.com/questions/32004/what-to-do-with-duplicates-of-deleted-questions
# What to do with duplicates of deleted questions? There are 109 questions that are closed as duplicates of deleted posts. This SEDE query lists all such posts. Can we do something about these posts? One question is on its way to be Roomba'd and some others may be Roomba'd by additional downvotes. But that still leaves nearly 100 questions of this kind. I think if any of these questions are worth deleting, then they should be deleted. If not, then they should be reopened (in the best case) or closed for another relevant reason. Alternatively, the linked duplicate could be undeleted if it's worth undeleting, but I don't think that scenario is likely. • Another scenario that occurs to me: merge the deleted Question with its closed-as-duplicate (or one of them), and repoint any remaining duplicates to that newly merged Question. The thought is that the now deleted Question should have an Answer with a positive vote. While that may not be much of a guarantee of quality content, it is something to look for. Jun 26, 2020 at 19:18 • @hardmath That sounds like a much neater way out, +1 – user279515 Jun 26, 2020 at 19:19 • Or find the better of the duplicates (if better than the deleted question), to make as the new dupe target. No reason to keep a poor representative of a dupe as the dupe target. "Better" meaning a question with more effort and/or more context. Jun 26, 2020 at 19:42 • @amWhy That is an excellent suggestion, I will try to do so. +1 – user279515 Jun 27, 2020 at 9:20 But the other two Questions, closed as duplicates of that multi-part problem, are not duplicates of each other. They are closely related. The first asks about the automorphism group of simple field extension $$K=\mathbb Q(\alpha \zeta)$$, where $$\alpha$$ is the real fifth root of $$2$$ and $$\zeta$$ is a (complex) fifth root of unity. 
The question is then posed as to how many automorphisms of $$K$$ there are (presumably meaning over the base field of $$\mathbb Q$$), and a correct but not upvoted Answer is given (there is only the identity automorphism over $$\mathbb Q$$). The second Question closed-as-duplicate asks something of a follow on problem, building on the knowledge that $$K$$ has only the trivial automorphism over $$\mathbb Q$$: Can the identity on $$K$$ be extended to a non-identity automorphism of $$\mathbb C$$? Gerry Myerson gave a correct, upvoted, and Accepted Answer to this, along with a Comment that the user should really have linked these multiple Questions about $$K$$ together.
2023-03-23 15:27:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 12, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6190049052238464, "perplexity": 690.7147325461898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945168.36/warc/CC-MAIN-20230323132026-20230323162026-00026.warc.gz"}
http://math.unm.edu/news-events/events/algebra-and-geometry-seminar-9
# Algebra and Geometry Seminar

Event Type: Seminar

Speaker: C. Boyer

Event Date: Wednesday, November 15, 2017 - 3:00pm to 4:00pm

Location: SMLC 124

Audience: General Public

A. Buium

### Event Description:

Title: Understanding the Sasaki Cone

Abstract: The Sasaki cone can be thought of as a premoduli space of Sasakian structures with a fixed underlying CR structure. As such it can have dimension $1\leq k\leq n+1$, where the dimension of the Sasaki manifold is $2n+1$. We study various important functions on the Sasaki cone, the most important of which is the Einstein-Hilbert functional. I shall end with a conjecture which says that if the dimension of the Sasaki cone is greater than 1, then the manifold is covered (up to possible covering maps) by 3-dimensional spheres.

### Event Contact

Contact Name: Alexandru Buium

Contact Email: buium42@yahoo.com
2018-06-23 23:41:50
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8389525413513184, "perplexity": 1358.5575145733358}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865438.16/warc/CC-MAIN-20180623225824-20180624005824-00367.warc.gz"}
https://www.physicsforums.com/threads/can-someone-give-me-some-hints-for-this-incline-problem.862619/
# Homework Help: Can someone give me some hints for this incline problem?

Tags:

1. Mar 18, 2016

### DavidFrank

• Member warned that HW questions must be posted in the HW sections

I am sorry, I generally would show some work but I have no clue on how to start off to find angle theta. The answer is given below and I have to get it. I don't know at all how to approach this problem and unfortunately I am not too strong on my trig identities either but I will try. May someone please give me some hints?

2. Mar 18, 2016

### PeroK

Forget the complicated answer. Isn't moving horizontally a big clue?

3. Mar 18, 2016

### DavidFrank

Oh okay so Vfy = 0 and Vfx = Vfcos(0°) = Vf. Let me think about this some more, thank you.

4. Mar 18, 2016

### cnh1995

You need an equation relating θ and α. Since the ball is at maximum height, horizontal distance travelled is R/2. Do you know the relation between R, H and angle of projection (which is (θ+α) here)?

5. Mar 18, 2016

### cnh1995

That is not true. Vx is constant throughout and is equal to Vfcos(θ+α). You need the relation between θ and α.

6. Mar 18, 2016

### haruspex

I believe David is using vf to mean the final velocity.

7. Mar 18, 2016

### haruspex

You are introducing new variables without defining them. I assume you are referring to usage in a range equation, in a form standard for you. David might not know this equation, or might not recognise those variable references. I also guess PeroK is hinting at the same approach.

@DavidFrank, if cnh's post and PeroK's don't help, just apply the usual SUVAT equations for the horizontal and vertical motions. Do you know the SUVAT equations? (You should have listed them in the template.) Can you figure out which ones to use?

8. Mar 18, 2016

### DavidFrank

That is so far all I have gotten. My attempt was to find "R" which is the incline distance of this projectile problem.
I sort of ran into some issue when I tried finding Vi and substituting it into R = [2sin(θ)cos(θ+α) * v_i^2] / [g * cos^2(α)], since the R cancels out.

#### Attached Files: 004.jpg

9. Mar 18, 2016

### DavidFrank

Thank you for your response haruspex, so far I have used the position formula and the Vf^2 = Vi^2 + 2a(Δx) formulas

10. Mar 18, 2016

### haruspex

I can confirm that the answer can be obtained from two of the equations you have developed. Half way down, you have an equation in which the left hand side consists of sin(α)=. At the end, you have an equation where the left hand side consists of v0^2. Work with those.

11. Mar 19, 2016

### PeroK

I must admit I would never have thought of calculating R. The way I looked at it was that the point where it lands is $(x, y)$. I can calculate this from normal projectile motion (it's the top of the trajectory). And I know how this point relates to $\alpha$. Perhaps this isn't the quickest way, but I ignored the answer and got: $tan(\theta) = \frac{tan(\alpha)}{1+2tan^2(\alpha)}$ You could shoot for this if you like. Then, use a bit of trig manipulation to get the answer you need. It looks better to me than the answer given, not least because it is equivalent to: $tan(\alpha + \theta) = 2tan(\alpha)$ (Which I find the most pleasing answer!)

12. Mar 19, 2016

### DavidFrank

I am sorry but I am stuck. Do you want me to take a few steps back before I found "R" (the distance traveled on the incline) or should I keep on going by using Vfy^2 = Viy^2 + 2g(Yf - Yi)?
Because if I were to continue with the substitution of Vi^2 into "R" I'd get: 1 = [4 * sinθ * sinα * cos(θ+α)] / [cos^2(α) * sin^2(θ+α)]

From here I feel willing to move all of the trig functions with (θ+α) to one side then separate them using the trig identities cos(θ+α) = (cosθ)(cosα) – (sinθ)(sinα) and sin(θ+α) = (sinθ)(cosα) + (sinα)(cosθ), so that I can later group all the trig functions with θ and set them equal to all the trig functions with α, but it seems like a dead end. So can you please provide more hints on what you were saying before?

13. Mar 19, 2016

### cnh1995

Here's another way of doing it. Horizontal displacement is x = R/2 and vertical displacement is H (R is for range and H is for maximum height). If you take the ratio of the expressions for R and H, you'll see R tan(Φ) = 4H (a standard result!), where Φ is the angle of projection. In your problem, Φ = θ+α. So, R tan(θ+α) = 4H. Hence, x tan(θ+α) = 2H where x = R/2. tan(θ+α) = 2H/x. You can write the ratio H/x in terms of angle α (simple trigonometry) and do some trigonometric manipulations to get the final answer.

14. Mar 19, 2016

### DavidFrank

Looking at my teacher's notes, he did this: cos(θ+α)*cosθ - sinθ*sin(θ+α) = 0

Thank you, I will give this a shot when I return home.

15. Mar 19, 2016

### haruspex

That equation is correct and does lead to the answer, though it is a struggle. I must have come a shorter way from the two equations I mentioned in post #10. (And as others have noted, there are easier ways to approach the whole question, but my preference is to help the OP finish their chosen path first.) From the equation above, I got rid of all the cos and sin by writing t = tan θ, u = tan α. You can then get the whole equation in terms of t and u, using cos^2 = 1/(1 + tan^2). After some cancellation, you can get the whole thing into the form (....)^2 = 0, affording an immediate simplification.

As a general formula? That would not be right. The left hand side equates to cos(2θ+α).
Do you mean your teacher got the equations into this form?
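For readers wanting a numerical sanity check of the relation $\tan(\alpha+\theta)=2\tan(\alpha)$ from post #11, the following Python sketch (an editorial addition, not from the thread) confirms that a projectile launched at angle θ above an incline of angle α reaches its apex, where it moves horizontally, exactly on the incline line:

```python
import math

# pick an incline angle alpha and compute the launch angle theta (measured
# from the incline) from the relation tan(theta + alpha) = 2*tan(alpha)
alpha = math.radians(30)
phi = math.atan(2 * math.tan(alpha))   # phi = theta + alpha, from horizontal
theta = phi - alpha

# equivalent closed form for theta discussed in the thread
theta_alt = math.atan(math.tan(alpha) / (1 + 2 * math.tan(alpha)**2))

# at the apex of the trajectory (where the ball moves horizontally),
# check that the point (x_apex, y_apex) lies on the incline y = x*tan(alpha)
v, g = 10.0, 9.8                       # arbitrary launch speed and gravity
x_apex = v**2 * math.sin(phi) * math.cos(phi) / g
y_apex = v**2 * math.sin(phi)**2 / (2 * g)
```

The check works for any speed and gravity because both apex coordinates scale by the same factor v²/g; only the angles matter.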
2018-07-21 10:40:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8169559240341187, "perplexity": 1291.6570335399686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592475.84/warc/CC-MAIN-20180721090529-20180721110529-00457.warc.gz"}
https://dataspace.princeton.edu/handle/88435/dsp01rb68xf606
Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01rb68xf606

Title: Deanonymizing Bitcoin Transaction: An Investigative Study On Large-Scale Graph Clustering
Authors: Patel, Yash
Advisors: Abbe, Emmanuel; Weinberg, Matt
Department: Mathematics
Class Year: 2018

Abstract: Bitcoin has emerged from the fringes of technology to the mainstream recently. With speculation rampant, it has become more and more the subject of harsh criticism in ascertaining its use case. Unfortunately, much of Bitcoin's present use case is for transactions in online black markets. Towards that end, various studies have sought to partially deanonymize Bitcoin transactions, identifying wallets associated with major players in the space to help forensic analysis taint wallets involved with criminal activity. Relevant past studies, however, have rigidly enforced manually constructed heuristics to perform such deanonymization, paralleling an extensive union-find algorithm. We wish to extend this work by introducing many more heuristics than were previously considered, constructing a separate "heuristics graph" layered atop the transactions graph and performing a graph clustering on this heuristics graph. Towards that end, we explored the performance of various clustering algorithms on the SBM (stochastic block model) as a prototype of the heuristics graph and additionally tested graph preprocessing algorithms, specifically sparsification and coarsening, to determine the extent to which they could speed up computation while retaining reasonable accuracies. We found hierarchical spectral clustering and METIS to have the best performance by the standard purity, NMI, and F-score clustering accuracy metrics. We also found sparsification and coarsening to result in little reduction in time, with the former severely detracting from accuracies and the latter less so, suggesting the latter holds potential given implementation optimization in future studies.
METIS was subsequently employed to cluster a subset of the full graph due to major time concerns with hierarchical spectral clustering. Several wallet clusters were identified as a result, though the accuracy of this could not be determined due to the limited ground truth available. Future extensions of this work should seek to refine the hierarchical spectral clustering algorithm for its time deficiencies and extend the ground truth available.

URI: http://arks.princeton.edu/ark:/88435/dsp01rb68xf606
Type of Material: Princeton University Senior Theses
Language: en
Appears in Collections: Mathematics, 1934-2020
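Of the clustering accuracy metrics named in the abstract, purity is the simplest to state: assign each predicted cluster to its most common true class and count the fraction of points matched. A minimal Python sketch (illustrative only, not the thesis code):

```python
from collections import Counter

def purity(pred_labels, true_labels):
    """Purity of a clustering: for each predicted cluster, count the
    members of its most common true class, then divide by the total."""
    clusters = {}
    for p, t in zip(pred_labels, true_labels):
        clusters.setdefault(p, []).append(t)
    matched = sum(Counter(members).most_common(1)[0][1]
                  for members in clusters.values())
    return matched / len(true_labels)

# toy example: three predicted clusters against three true classes
pred = [0, 0, 0, 1, 1, 1, 2, 2]
true = ['a', 'a', 'b', 'b', 'b', 'c', 'c', 'c']
```

Here each cluster matches two of its members, so the purity is 6/8 = 0.75; a perfect clustering scores 1.0. Note that purity alone rewards over-splitting (singleton clusters are trivially pure), which is why it is reported alongside NMI and F-score.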
2020-09-24 06:20:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3260624408721924, "perplexity": 3385.073510250313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400213454.52/warc/CC-MAIN-20200924034208-20200924064208-00603.warc.gz"}
http://tex.stackexchange.com/questions/37170/how-can-i-produce-a-fancy-chapter-heading-like-this-one
# How can I produce a fancy chapter heading like this one? I just found some chapter headings that are really nice: But I can't exactly figure out how to extend the rule off the margin and put the figure on the page before the chapter and the caption on the next (chapter heading) page. (Some tip on the footer would be nice, too). The layout of the other pages look pretty much the same: - Welcome to TeX.sx! This is indeed a pretty layout. However, please only ask one question per post. This way, answers will be more specific and other users can find the information they're looking for more easily. I'd suggest you reduce this question to the chapter heading with the spacing and the rule, and ask another question about separating a figure and its caption. As for the footer, you might actually find a question about that on here; if not, that'll be another question :) – doncherry Dec 5 '11 at 10:40 Here's a solution: using the xparse package I defined a new command \ChapIma with one optional argument and two mandatory arguments; the optional argument will be the text used for the ToC; the first mandatory argument is the text for the document, and the third mandatory argument is the name of the file containing the corresponding image. The titlesec package was used to customize the chapter title format. I also defined another command \Caption, which behaves as the standard caption, but writes the text in the space reserved for marginal notes. This command must be invoked somewhere in the first line of text of the chapter. The caption package was used to customize the caption in the marginal notes (suppressing the label). The lettrine package was used to produce the drop cap. I used the fancyhdr package (I couldn't make titlesec's pagestyles option behave well, so I had to use fancyhdr) to redefine the plain page; I also defined the page style for other pages. 
\documentclass[twoside]{book}
\usepackage{xparse,ifthen}
\usepackage[calcwidth]{titlesec}
\usepackage{changepage}
\usepackage{graphicx}
\usepackage{caption}
\usepackage{fancyhdr}
\usepackage{marginnote}
\usepackage{lettrine}
\usepackage{lipsum}

\newlength\mylen

\DeclareDocumentCommand\ChapIma{omm}
{\let\cleardoublepage\relax
\ifthenelse{\isodd{\value{page}}}
{\mbox{}\clearpage}{\mbox{}\clearpage\mbox{}\clearpage}%
\resizebox{.9\textwidth}{.9\textheight}{\includegraphics{#3}}
\mbox{}\thispagestyle{empty}\clearpage
\IfNoValueTF{#1}{\chapter{#2}}{\chapter[#1]{#2}}
}

\DeclareDocumentCommand\Caption{om}
{\marginnote{\parbox{\marginparwidth}{%
\captionsetup[figure]{labelformat=empty}
\IfNoValueTF{#1}{\captionof{figure}{#2}}{\captionof{figure}[#1]{#2}}
}%
}%
}

\titleformat{\chapter}[display]
{\Huge\normalfont\sffamily}{}{2pc}
{\setlength\mylen{0pt}%
}
[\vspace{-20pt}%
{%
\makebox[\linewidth][r]{%
\rule{\dimexpr\titlewidth+\mylen\relax}{0.4pt}%
}%
}%
]
\titlespacing*{\chapter}{0pt}{1cm}{7cm}

\renewcommand\chaptermark[1]{\markboth{#1}{}}

\fancypagestyle{plain}{%
\fancyhf{}
\fancyfoot[OR]{\sffamily\small\MakeUppercase{\leftmark}~~\oldstylenums{\thepage}}
\renewcommand{\footrulewidth}{0pt}
\fancyfootoffset[OR]{\dimexpr\marginparsep+\marginparwidth\relax}
}

\fancyhf{}
\fancyfootoffset[OR]{\dimexpr\marginparsep+\marginparwidth\relax}
\fancyfootoffset[EL]{\dimexpr\marginparsep+\marginparwidth\relax}
\fancyfoot[OR]{\small\sffamily\MakeUppercase{\leftmark}~~\oldstylenums{\thepage}}
\fancyfoot[EL]{\small\sffamily\oldstylenums{\thepage}~~\MakeUppercase{\rightmark}}
\renewcommand{\footrulewidth}{0pt}
\pagestyle{fancy}

\renewcommand\chaptermark[1]{\markboth{#1}{}}
\renewcommand\sectionmark[1]{\markright{#1}}

\begin{document}

\tableofcontents

\ChapIma{Preface}{ctanlion}

\lettrine{T}{his} is some initial text\Caption{This is the caption for the figure; this is just some test text}
\lipsum[1-5]

\ChapIma{Introduction}{ctanlion}

\lipsum[1]

\section{Qu'ran manuscripts}

\lipsum[1-14]
\end{document}

Here's an image of four pages of the resulting document:

The CTAN lion used in the example was drawn by Duane Bibby.

- You should say \thispagestyle{empty} in the page with the figure. Also the treatment of the optional argument to \ChapIma is not optimal. – egreg Dec 5 '11 at 16:51
- Gonzalo's solution worked somehow, but I don't know why the caption, no matter what I do, just stays on the left margin. Also \thispagestyle{empty} does not seem to work. – Joseph Dec 5 '11 at 17:31
- @Joseph: my code needs some improvements: I'm working on them, but right now I cannot post them. In two hours I will update my answer with some adjustments. – Gonzalo Medina Dec 5 '11 at 17:35
- @egreg: you're right. I've made now some adjustments. – Gonzalo Medina Dec 5 '11 at 18:54
- @Joseph: please see my updated answer. – Gonzalo Medina Dec 5 '11 at 18:54

You can use the titlesec package to create custom title styles:

\documentclass{scrreprt}
\usepackage{titlesec}
\usepackage{lipsum}

\titleformat{\chapter}[display]{\Huge\sffamily}{}{3pc}{\raggedleft}[\footrule\vspace{8cm}]

\begin{document}
\chapter{Preface}
\lipsum
\end{document}

To add a custom footer, use the fancyhdr package:

\usepackage{fancyhdr}
\pagestyle{fancy}

To adjust the margin widths, use the geometry package:

\usepackage[twoside,right=5cm]{geometry}
2015-11-27 19:05:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9408859610557556, "perplexity": 1606.5210556689356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398450559.94/warc/CC-MAIN-20151124205410-00194-ip-10-71-132-137.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/282117/how-to-calculate-minimal-member-of-set-having-mean-and-standard-deviation
# How to calculate minimal member of set having mean and standard deviation?

Assume that we are given the standard deviation and mean, and we know that our data follow a normal distribution ~ N(100, 16). What is the technique to calculate the minimal value that can appear in the set?

We know that tennis players' ball speed is distributed with a mean of 100 and a standard deviation of 16. Find the minimal speed needed to be a member of the top players list (2% of all players).

- There is no minimum possible value (other than that imposed by reality: e.g., ball speed can't be less than $0$). Given a large enough data set, the minimum can be arbitrarily small. In your example it's unlikely that you'll find anyone below $3$ standard deviations below the mean. – Brian M. Scott Jan 19 '13 at 17:51

We have to give an explicit criterion for "top players list." There is no universally applicable criterion. Perhaps we can use the $99$-th percentile.

Edit: The question was changed, and now defines a top player as the $98$-th percentile. So we use that from here on.

Let $X$ be the ball speed of a randomly chosen player. We may want to find the speed $x$ such that $\Pr(X\le x)=0.98$. For any $x$, we have $$\Pr(X\le x)=\Pr\left(\frac{X-\mu}{\sigma}\le \frac{x-\mu}{\sigma}\right),$$ where $\mu$ is the mean of $X$, and $\sigma$ the standard deviation. But $\frac{X-\mu}{\sigma}$ has the standard normal distribution. So let $Z$ be standard normal. We want $$\Pr\left(Z\le \frac{x-100}{16} \right)=0.98.$$ From tables of the standard normal, we have $\Pr(Z\le 2.05)\approx 0.98$. So the cutoff point $x$ is given approximately by $$\frac{x-100}{16}\approx 2.05.$$ -

- As I read the question, $\mu=100$ and $\sigma=16$ apply to the list of top players, not to the list of all players. – Brian M. Scott Jan 19 '13 at 17:53
- OK, I see that I skipped a meaningful part of the question. So now I simply assume e.g. 2% and take the 0.98 quantile. Am I right? Also, why did you square-root the standard deviation? In the formula you gave, the standard deviation is used as is. – mickula Jan 19 '13 at 17:56
- @mickula: I usually specify a normal by giving mean and variance. But it looks as if your question says the standard deviation is $16$. I have changed the answer to reflect that. – André Nicolas Jan 19 '13 at 18:04
- Seems like it was really easy. Thanks to @BrianM.Scott, who pointed out that it is about the whole population of tennis players, not only the top ones, and of course to you, Andre – mickula Jan 19 '13 at 18:06
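The cutoff can also be checked numerically with Python's standard library, whose statistics.NormalDist class exposes the inverse CDF directly, so no table lookup is needed (a sketch; the distribution parameters are the ones stated in the question):

```python
from statistics import NormalDist

# Ball speed ~ N(mean=100, standard deviation=16).
speed = NormalDist(mu=100, sigma=16)

# Top 2% of players: find x with P(X <= x) = 0.98.
cutoff = speed.inv_cdf(0.98)
print(round(cutoff, 1))  # roughly 132.9, i.e. 100 + 16 * 2.05
```

This agrees with the table-based answer: the 0.98 quantile of the standard normal is about 2.05, giving a cutoff speed of about 133.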
http://letslearnnepal.com/class-12/physics/electricity/electrical-circuits/kirchhoffs-law/
# Kirchhoff’s Law

In ordinary conditions, Ohm’s law is used for analyzing and measuring the current and voltage of a circuit. But there are several complex cases in which Ohm’s law cannot be used to measure the current and voltage. In such circuits, Kirchhoff’s laws are used. There are two laws of Kirchhoff, which are described below:

Kirchhoff’s first law: The algebraic sum of the currents at a junction is always zero.

$$\text{i.e.,}\sum \text{I = 0}$$

Sign convention: Incoming current is taken as positive and outgoing current is taken as negative. At a junction, the sum of incoming currents is always equal to the sum of outgoing currents. So, from the figure, we can write:

I1 + I2 – I3 – I4 – I5 = 0
or, I1 + I2 = I3 + I4 + I5

Kirchhoff’s second law: In a closed loop, the sum of emfs is equal to the sum of the products of current and resistance. This law is also known as the loop law or the voltage law.

$$\text{i.e.,} \sum \text{E} = \sum \text{IR}$$

Sign convention: In the direction of the loop, emf and current have a positive sign; otherwise a negative sign is written.

Let us consider a complex circuit as shown in the figure below. Kirchhoff’s laws can be used to find the current in different parts of the circuit. Here, the direction of emf and current flow in the anticlockwise direction is taken as positive and that in the clockwise direction is taken as negative.
Now, applying Kirchhoff’s law in loop ABCFA, we get: $$\sum \text{E} = \sum \text{IR}$$ $$\text{or, } (+\text{E}_1) + (-\text{E}_2) = (+\text{I}_1)\text{R}_1 + (-\text{I}_2)\text{R}_2$$ $$\text{or, E}_1 – \text{E}_2 = \text{I}_1\text{R}_1 – \text{I}_2\text{R}_2……(i)$$ In loop FCDEF, we get: $$\sum \text{E} = \sum \text{IR}$$ $$\text{or, } (+\text{E}_2) + (-\text{E}_3) = (+\text{I}_2)\text{R}_2 + (-\text{I}_3) \text{R}_3$$ $$\text{or, E}_2 – \text{E}_3 = \text{I}_2\text{R}_2 – \text{I}_3\text{R}_3……(ii)$$ Applying Kirchhoff’s first law at junction F, we get: $$\sum \text{I} = 0$$ $$\text{or, } (+\text{I}_1) + (+\text{I}_2) + (-\text{I}_3) = 0$$ $$\text{or, I}_1 + \text{I}_2 = \text{I}_3……(iii)$$ We can find the values of I1, I2 and I3 from the above three equations.
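To make the three equations concrete, they can be solved for assumed component values (E1 = 6 V, E2 = 4 V, E3 = 2 V and R1 = 2 Ω, R2 = 3 Ω, R3 = 4 Ω below are illustrative numbers, not taken from the figure):

```python
from fractions import Fraction

# Illustrative (assumed) component values, not from the figure above.
E1, E2, E3 = Fraction(6), Fraction(4), Fraction(2)   # emfs in volts
R1, R2, R3 = Fraction(2), Fraction(3), Fraction(4)   # resistances in ohms

# Loop ABCFA:  E1 - E2 = I1*R1 - I2*R2      ... (i)
# Loop FCDEF:  E2 - E3 = I2*R2 - I3*R3      ... (ii)
# Junction F:  I1 + I2 = I3                 ... (iii)
# Substituting I3 = I1 + I2 into (ii) leaves a 2x2 linear system:
#    R1*I1 - R2*I2        = E1 - E2
#   -R3*I1 + (R2 - R3)*I2 = E2 - E3
a11, a12, b1 = R1, -R2, E1 - E2
a21, a22, b2 = -R3, R2 - R3, E2 - E3

det = a11 * a22 - a12 * a21
I1 = (b1 * a22 - a12 * b2) / det   # Cramer's rule
I2 = (a11 * b2 - b1 * a21) / det
I3 = I1 + I2

print(I1, I2, I3)  # -2/7 -6/7 -8/7
```

The negative signs simply mean the actual currents flow opposite to the directions assumed when writing the loop equations.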
https://www.zbmath.org/?q=cc%3A06A15+cc%3A81
## Found 9 Documents (Results 1–9)

### $$n$$-perfect and $$\mathbb Q$$-perfect pseudo effect algebras. (English) Zbl 1302.81034
MSC:  81P10 06A11 06A15

### Extensions of ordering sets of states from effect algebras onto their MacNeille completions. (English) Zbl 1270.06005
MSC:  06D35 06A15 81P10

### Operational Galois adjunctions. (English) Zbl 0962.18003
Coecke, Bob (ed.) et al., Current research in operational quantum logic. Algebras, categories, languages. Workshop, Free Univ. of Brussels, Belgium, June of 1998 and May of 1999. Dordrecht: Kluwer Academic Publishers. Fundam. Theor. Phys. 111, 195-218 (2000).
MSC:  18B35 81P10 06A15

### Galois correspondence related to extension of charges on $$\sigma$$-class of subset of a finite set. (English. Russian original) Zbl 0840.03049
Russ. Math. 38, No. 5, 34-38 (1994); translation from Izv. Vyssh. Uchebn. Zaved., Mat. 1994, No. 5(384), 36-40 (1994).

### Orthomodular ordered sets and orthogonal closure spaces. (English) Zbl 0559.06005
Reviewer: P. Ptak
MSC:  06A15 06C15 81P10
https://zbmath.org/?q=an:0118.18601
## A procedure for killing homotopy groups of differentiable manifolds. (English) Zbl 0118.18601

Proc. Sympos. Pure Math. 3, 39-55 (1961). topology
http://itfeature.com/heteroscedasticity/goldfeld-quandt-test-is-frequently-used-as-it-is-easy-to-apply-when-one-of-the-regressors
# Goldfeld-Quandt Test of Heteroscedasticity

The Goldfeld-Quandt test is one of two tests proposed in a 1965 paper by Stephen Goldfeld and Richard Quandt. Both a parametric and a nonparametric test are described in the paper, but the term “Goldfeld–Quandt test” is usually associated only with the parametric test.

The Goldfeld-Quandt test is frequently used as it is easy to apply when one of the regressors (or another r.v.) is considered the proportionality factor of heteroscedasticity. The Goldfeld-Quandt test is applicable for large samples. The observations must be at least twice as many as the parameters to be estimated. The test assumes normality and serially independent error terms μi. The Goldfeld–Quandt test compares the variance of error terms across discrete subgroups, so the data are divided into h subgroups. Usually the data set is divided into two parts or groups, and hence the test is sometimes called a two-group test.

The procedure for conducting the Goldfeld-Quandt test is:

1. Order the observations according to the magnitude of X (the independent variable which is the proportionality factor).
2. Select arbitrarily a certain number (c) of central observations, which we omit from the analysis (for n = 30, c = 8 central observations may be omitted). The remaining n – c observations are divided into two sub-groups of equal size, i.e. (n – c)/2; one sub-group includes the small values of X and the other sub-group includes the large values of X, as the data set is arranged according to the magnitude of X.
3. Now fit a separate regression to each of the sub-groups, and obtain the sum of squared residuals from each of them. So $\sum c_1^2$ shows the sum of squared residuals from the sub-sample of low values of X with (n – c)/2 – K df, and $\sum c_2^2$ shows the sum of squared residuals from the sub-sample of large values of X with (n – c)/2 – K df, where K is the total number of parameters.
4. Compute the ratio $F^* = \frac{RSS_2/df}{RSS_1/df}=\frac{\sum c_2^2/ ((n-c)/2-K)}{\sum c_1^2/((n-c)/2-K) }$

If the variances differ, F* will have a large value. The higher the observed value of the F* ratio, the stronger the heteroscedasticity of the μi’s.
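The steps above can be sketched in pure Python. The regression helper, the data-generating model and the choice c = 8 below are illustrative assumptions, not part of the test itself:

```python
import random

def simple_ols_rss(pairs):
    # Fit y = a + b*x by least squares and return the residual sum of squares.
    n = len(pairs)
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    b = sxy / sxx
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in pairs)

def goldfeld_quandt_F(pairs, c):
    # Step 1: order by the regressor X; step 2: omit c central observations.
    data = sorted(pairs)
    n = len(data)
    half = (n - c) // 2
    low, high = data[:half], data[n - half:]
    # Steps 3-4: separate regressions, then the ratio of RSS per df.
    k = 2                      # parameters per sub-regression (intercept, slope)
    df = half - k
    return (simple_ols_rss(high) / df) / (simple_ols_rss(low) / df)

random.seed(0)
# Error standard deviation grows with X: classic heteroscedasticity.
sample = [(x, 2 + 0.5 * x + random.gauss(0, 0.1 * x)) for x in range(1, 31)]
F = goldfeld_quandt_F(sample, c=8)
print(F)  # well above 1, signalling heteroscedasticity
```

With homoscedastic errors, F* would hover around 1; here the growing error variance inflates RSS2 relative to RSS1.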
http://mathoverflow.net/questions/109839/f-in-ws-2-then-int-bbb-rn-xi-j2s-mathscr-f-f-xi-2-d-xi
## $f \in W^{s,2}$ then $\int_{\Bbb R^n} \xi_j^{2s} | \mathscr F f( \xi) |^2 d \xi < \infty$?

If $f \in W^{s,2} (\Bbb R^n)$, then by Plancherel's theorem, I know that its Fourier transform $\mathscr F f(\xi) \in L^2 (\Bbb R^n)$. ($\scr F$ means the Fourier transform). Now I want to show that if $f \in W^{s,2} (\Bbb R^n)$, for $\xi = (\xi_1 , \cdots ,\xi_n ), \; \text{integer}\; s \geqslant 0$, $$\int_{\Bbb R^n} \xi_j^{2s} | \mathscr F f( \xi) |^2 d \xi < \infty \;\;(\forall j = 1,\cdots,n).$$ How can I prove this, or do I need some more assumptions? ($W^{s,p}$ means the general Sobolev space) If this holds then I think I can conclude that $(1+ | \xi |^2)^s (\mathscr F f)^2 \in L^1$. - Try out first what happens if you take a Schwartz function, en.wikipedia.org/wiki/Schwartz_space – András Bátkai Oct 16 at 18:30
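For what it's worth, a sketch of one standard route (assuming the derivative-based definition of $W^{s,2}$, so that $\partial_j^{s} f \in L^2$, and the symbol rule for the Fourier transform of a derivative):

```latex
% Since \mathscr{F}(\partial_j^{s} f)(\xi) = (i\xi_j)^{s}\,\mathscr{F}f(\xi),
% Plancherel's theorem applied to \partial_j^{s} f gives
\int_{\mathbb{R}^n} \xi_j^{2s}\,|\mathscr{F}f(\xi)|^2\,d\xi
  = \|\mathscr{F}(\partial_j^{s} f)\|_{L^2}^2
  = \|\partial_j^{s} f\|_{L^2}^2 < \infty .
```

Summing such bounds over the mixed derivatives of order at most $s$ then controls $\int (1+|\xi|^2)^s |\mathscr F f|^2\,d\xi$, which is the conclusion asked for at the end of the question.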
https://stackabuse.com/how-to-concatenate-string-variables-in-bash/
# How to Concatenate String Variables in Bash

### Introduction

Bash utilities make so many tasks simple, but not all tasks have a straightforward command or operator. A very common task in many scripting scenarios is concatenating string variables. Bash doesn't have a built-in function or command-line utility to concatenate two strings together. However, there are many ways you can accomplish this.

### Concatenating Strings in Bash

The following are the methods by which strings can be concatenated, ordered by level of complexity:

1. echo both variables
2. Format using printf
3. Format using awk
4. Concat string files using join

Throughout the article, we will be using the variables string1 and string2, which you can create in your shell by entering:

$ string1="Stuck"
$ string2="Together"

#### Concatenate Strings in Bash with echo

Perhaps the simplest of all the hacks is to simply echo both variables formatted next to each other:

$ echo "${string1}" "${string2}"

Notice that we have a blank between both the strings. The snippet can also be written as:

$ echo "${string1} ${string2}"

Both of the snippets will give the same output:

Stuck Together

#### Concatenate Strings in Bash with printf

Let's try concatenating the strings by printing a formatted output on the teletypewriter, i.e. the terminal. Consider the following snippet, where two string format tags (%s) are separated by an empty space that is ready to be substituted by the two variables - $string1 and $string2:

$ printf "%s %s\n" "$string1" "$string2"

The two format tags, when substituted with our string variables, output:

Stuck Together

String formatting may sound familiar if you are familiar with programming languages. If this sounds overwhelming to you, it's good to learn about what the different tags do (such as %s and %d), which could come in quite handy.
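One more option worth noting before reaching for external tools (this one is not in the list above, and assumes Bash rather than plain sh): concatenation happens implicitly inside an assignment, and Bash also offers the += append operator:

```shell
string1="Stuck"
string2="Together"

# Expansion inside an assignment concatenates directly.
joined="${string1} ${string2}"
echo "$joined"

# Bash's += operator appends to an existing variable.
greeting="Stuck"
greeting+=" Together"
echo "$greeting"
```

Both commands print "Stuck Together"; no subprocess is spawned, which makes this the cheapest approach inside loops.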
#### Concatenate Strings in Bash with awk

A slightly less known command - the awk command - invokes AWK, a scripting language specifically designed for manipulating text and extracting strings from it. It doesn't come as a surprise that a scripting language dedicated to manipulating strings also has the ability to concatenate them:

$ echo "$string1" "$string2" | awk '{printf "%s~%s\n",$1,$2}'

Let's break up the command above. We have two snippets chained to each other (hence the | chaining operator). We are sending both the variables string1 and string2 as inputs to the awk command. The syntax of the awk command is: awk '{some_operation}'. In our case, we are performing a printf operation inside. We refer to the fields as $1 and $2; these refer to the words passed in from the echo.

Just to show a variation here, we are using a tilde (~) separating the format tags. Thus, it gives the following output:

Stuck~Together

#### Concatenate Strings in Bash with join

Now that the bar has been raised, let's try one final method. Let's try storing our strings in two separate files with one column each. The file string1.txt contains the word "Stuck" written two times, one per line. Same as before, string2.txt will contain the word "Together" written two times, and the following commands create the files for us:

$ echo "1 Stuck" > "string1.txt"
$ echo "2 Stuck" >> "string1.txt"
$ echo "1 Together" > "string2.txt"
$ echo "2 Together" >> "string2.txt"

Both the files can be concatenated by using the following command:

$ join string1.txt string2.txt

For the join command to work, both the files should have a matching column.
In our case, we have the matching index column and we get the concatenated output as shown below:

1 Stuck Together
2 Stuck Together

### Conclusion

There are multiple ways to perform the concatenation of strings in bash scripting. The choice of method depends on the level of complexity and the nature of the problem. Which one will you choose?

Last Updated: September 19th, 2021

Sathiya Sarathi Gunasekaran, Author
http://hackage.haskell.org/package/GenI-0.17.4/docs/src/NLP-GenI-Automaton.html
% GenI surface realiser
% Copyright (C) 2005 Carlos Areces and Eric Kow
%
% This program is free software; you can redistribute it and/or
% modify it under the terms of the GNU General Public License
% as published by the Free Software Foundation; either version 2
% of the License, or (at your option) any later version.
%
% This program is distributed in the hope that it will be useful,
% but WITHOUT ANY WARRANTY; without even the implied warranty of
% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
% GNU General Public License for more details.
%
% You should have received a copy of the GNU General Public License
% along with this program; if not, write to the Free Software
% Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.

\chapter{Automaton}
\label{cha:Automaton}

\begin{code}
module NLP.GenI.Automaton
 ( NFA(..),
   finalSt,
   automatonPaths, automatonPathSets,
   numStates, numTransitions )
where

import qualified Data.Map as Map
import Data.Maybe (catMaybes)

import NLP.GenI.General (combinations)
\end{code}

This module provides a simple, naive implementation of nondeterministic finite automata (NFA). The transition function consists of a Map, but there are also accessor functions which help you query the automaton without worrying about how it's implemented.

\begin{enumerate}
\item The states are a list of lists, not just a simple flat list as you might expect. This allows you to optionally group your states into ``columns'' (which is something we use in the GenI polarity automaton optimisation). If you don't want columns, you can just make one big group out of all your states.
\item We model an empty transition as a transition on Nothing. All other transitions are Just something.
\item I'd love to reuse some other library out there, but Leon P. Smith's Automata library requires us to know before-hand the size of our alphabet, which is highly unacceptable for this task.
\end{enumerate}

\begin{code}
-- | Note: there are two ways to define the final states.
--   1. you may define them as a list of states in finalStList
--   2. you may define them via the isFinalSt function
--   The state list is ignored if you define 'isFinalSt'
data NFA st ab = NFA { startSt :: st
                     , isFinalSt :: Maybe (st -> Bool)
                     , finalStList :: [st]
                     , transitions :: Map.Map st (Map.Map st [Maybe ab])
                     , states :: [[st]]
                     }
\end{code}

% ----------------------------------------------------------------------
\section{Building automata}
% ----------------------------------------------------------------------

\fnlabel{finalSt} returns all the final states of an automaton

\begin{code}
finalSt :: NFA st ab -> [st]
finalSt aut =
 case isFinalSt aut of
   Nothing -> finalStList aut
   Just fn -> concatMap (filter fn) (states aut)
\end{code}

\fnlabel{lookupTrans} takes an automaton, a state \fnparam{st1} and an element \fnparam{ab} of the alphabet; and returns the states that \fnparam{st1} transitions to via \fnparam{ab}, if possible.

\begin{code}
lookupTrans :: (Ord ab, Ord st) => NFA st ab -> st -> (Maybe ab) -> [st]
lookupTrans aut st ab = Map.keys $ Map.filter (elem ab) subT
 where subT = Map.findWithDefault Map.empty st (transitions aut)
\end{code}

\begin{code}
addTrans :: (Ord ab, Ord st) => NFA st ab -> st -> Maybe ab -> st -> NFA st ab
addTrans aut st1 ab st2 =
 aut { transitions = Map.insert st1 newSubT oldT }
 where
  oldT = transitions aut
  oldSubT = Map.findWithDefault Map.empty st1 oldT
  newSubT = Map.insertWith (++) st2 [ab] oldSubT
\end{code}

% ----------------------------------------------------------------------
\section{Exploiting automata}
% ----------------------------------------------------------------------

\fnlabel{automatonPaths} returns all possible paths through an automaton. Each path is represented as a list of labels. We assume that the automaton does not have any loops in it.
Maybe it would still work if there were loops, with lazy evaluation, but I haven't had time to think this through, so only try it if you're feeling adventurous.

FIXME: we should write some unit tests and quickchecks for this

\begin{code}
automatonPaths :: (Ord st, Ord ab) => (NFA st ab) -> [[ab]]
automatonPaths aut =
 concatMap combinations $ map (filter (not.null)) $ automatonPathSets aut

-- | Not quite the set of all paths, but the sets of all transitions
--   FIXME: explain later
automatonPathSets :: (Ord st, Ord ab) => (NFA st ab) -> [[ [ab] ]]
automatonPathSets aut = helper (startSt aut)
 where
  transFor st = Map.lookup st (transitions aut)
  helper st = case transFor st of
                Nothing   -> []
                Just subT -> concat [ (next (catMaybes tr) st2) | (st2, tr) <- Map.toList subT ]
  next tr st2 = case helper st2 of
                  []  -> [[tr]]
                  res -> map (tr :) res
\end{code}

\begin{code}
numStates, numTransitions :: NFA st ab -> Int
numStates = sum . (map length) . states
numTransitions = sum . (map subTotal) . (Map.elems) . transitions
 where subTotal = sum . (map length) . (Map.elems)
\end{code}
http://www.danstotridge.com/methodologysection-rubric-stem/
# MethodologySection_Rubric_STEM

Each criterion is rated: Excellent, Good, Average, Below Average, Poor/Missing.

Materials (/20)
1. All materials needed are clearly explained in the methodology section.
2. Justification for materials is clearly explained.

Procedure (/20)
1. Steps are explained clearly. Experiment can be replicated.
2. Justification for steps is clearly explained.

Problem (/15)
1. Problems with the materials or procedures are clearly explained.

Organization (/15)
1. Section is organized following the structure on pages 58-59 of SRW.

Vocabulary (/15)
1. Accurately uses appropriate vocabulary to explain the seven sections of a methodology section.

Grammar (/15)
1. Uses grammar appropriate for a methodology section.

Additional Comments:
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=crm&paperid=697&option_lang=eng
Computer Research and Modeling, 2019, Volume 11, Issue 1, Pages 71–86 (Mi crm697)

NUMERICAL METHODS AND THE BASIS FOR THEIR APPLICATION

Weighted vector finite element method and its applications

V. A. Rukavishnikov, A. O. Mosolapov

Computing center FEB RAS, 65 Kim U Chen st., Khabarovsk, 680011, Russia

Abstract: Mathematical models of many natural processes are described by partial differential equations with singular solutions. Classical numerical methods for determination of an approximate solution to such problems are inefficient. In the present paper a boundary value problem for the vector wave equation in an $\mathrm{L}$-shaped domain is considered. The presence of a reentrant corner of size $3\pi/2$ on the boundary of the computational domain leads to a strong singularity of the solution, i.e. it does not belong to the Sobolev space $H^1$, so classical and special numerical methods have a convergence rate less than $O(h)$. Therefore in the present paper a special weighted set of vector-functions is introduced. In this set the solution of the considered boundary value problem is defined as an $R_{\nu}$-generalized one. For numerical determination of the $R_{\nu}$-generalized solution a weighted vector finite element method is constructed. The basic difference of this method is that the basis functions contain as a factor a special weight function raised to a degree depending on the properties of the solution of the initial problem. This allows one to significantly raise the convergence rate of the approximate solution to the exact one when the mesh is refined.
Moreover, introduced basis functions are solenoidal, therefore the solenoidal condition for the solution is taken into account precisely, so the spurious numerical solutions are prevented. Results of numerical experiments are presented for series of different type model problems: some of them have a solution containing only singular component and some of them have a solution containing a singular and regular components. Results of numerical experiment showed that when a finite element mesh is refined a convergence rate of the constructed weighted vector finite element method is $O(h)$, that is more than one and a half times better in comparison with special methods developed for described problem, namely singular complement method and regularization method. Another features of constructed method are algorithmic simplicity and naturalness of the solution determination that is beneficial for numerical computations. Keywords: weighted vector FEM, weighted spaces, $R_{\nu}$-generalized solution, boundary value problems with singularity. DOI: https://doi.org/10.20537/2076-7633-2019-11-1-71-86 Full text: PDF file (2111 kB) Full text: http://crm.ics.org.ru/.../2766 References: PDF file   HTML file UDC: 519.6 Revised: 19.06.2018 Accepted:27.12.2018 Citation: V. A. Rukavishnikov, A. O. 
Mosolapov, “Weigthed vector finite element method and its applications”, Computer Research and Modeling, 11:1 (2019), 71–86 Citation in format AMSBIB \Bibitem{RukMos19} \by V.~A.~Rukavishnikov, A.~O.~Mosolapov \paper Weigthed vector finite element method and its applications \jour Computer Research and Modeling \yr 2019 \vol 11 \issue 1 \pages 71--86 \mathnet{http://mi.mathnet.ru/crm697} \crossref{https://doi.org/10.20537/2076-7633-2019-11-1-71-86} • http://mi.mathnet.ru/eng/crm697 • http://mi.mathnet.ru/eng/crm/v11/i1/p71 SHARE: Citing articles on Google Scholar: Russian citations, English citations Related articles on Google Scholar: Russian articles, English articles This publication is cited in the following articles: 1. V. A. Rukavishnikov, A. V. Rukavishnikov, “Metod chislennogo resheniya odnoi statsionarnoi zadachi gidrodinamiki v konvektivnoi forme v $L$-obraznoi oblasti”, Kompyuternye issledovaniya i modelirovanie, 12:6 (2020), 1291–1306 • Number of views: This page: 171 Full text: 58 References: 23
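To make the abstract's central idea concrete, here is a schematic sketch in my own notation (not taken from the paper) of how a weighted basis typically builds the distance to the singular corner into the ansatz:

```latex
% Illustrative only: \rho(x) is the distance from x to the reentrant corner,
% \nu^* > 0 a weight exponent tied to the strength of the singularity,
% \varphi_i the underlying (vector, solenoidal) finite element basis.
\psi_i(x) = \rho(x)^{\nu^*}\,\varphi_i(x),
\qquad
u_h(x) = \sum_i c_i\,\psi_i(x)
```

The $R_{\nu}$-generalized solution is then sought in a weighted space in which a $\rho^{\nu}$-weighted norm of the solution is finite; the exponent is what allows an $O(h)$ rate despite the solution lying outside $H^1$.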
https://indico.cern.ch/event/982553/contributions/4206947/
(Re)interpreting the results of new physics searches at the LHC
15-19 February 2021, CERN, Europe/Paris timezone

Recasting searches for pp -> A(H) -> ZH(A) -> l+l- bb (and other processes) onto 2HDM parameter spaces
19 Feb 2021, 13:00, 15m, virtual (online only) (CERN)

To extract more information from new physics searches at the LHC, we examine an experimental analysis of $A$ production followed by its $ZH$ decay into $l^+l^- b\bar b$ ($l=e,\mu$). The original search, from the ATLAS Collaboration, was performed at Run 2 with 36.1 fb$^{-1}$ of luminosity. This talk presents the outcome of reinterpreting it as a $pp\to H\to ZA \to l^+l^-$ search, in the presence of the latest experimental and theoretical constraints, in the context of all standard 2-Higgs-Doublet Model (2HDM) types, so as to test the true sensitivity of the LHC to this Beyond the Standard Model (BSM) scenario at present and in the future. This talk also discusses a second reinterpretation study making use of existing results from the CMS Collaboration, specifically, searches for light BSM Higgs pairs produced via $pp\to H_{\rm SM}\to hh(AA)$ into a variety of final states. Through this, we test the LHC sensitivity to other possible new signals to investigate in the future, like $pp\to H_{SM}\to ZA\to ZZ h$, by taking advantage of the strong correlations between these processes that exist in, e.g., the 2HDM type-I.
https://www.physicsforums.com/threads/solve-the-ode-y-3x-1-x-2-y-1-1-x-2-y-0.100353/
# Solve the ODE y'' + (3x)/(1+x^2)y' + 1/(1+x^2)y = 0

1. Nov 17, 2005

I need to determine the fundamental set of solutions of $$y''+\frac{3x}{1+x^2}y'+\frac{1}{1+x^2}y=0$$ in the form of a power series centered around $$x_0=0$$. When I expanded the y's in power series, I was unable to bring the $1/(1+x^2)$ inside the infinite sum. Can anyone help?

2. Nov 17, 2005

### Physics Monkey

Try multiplying the whole equation by $$1 + x^2$$.

3. Nov 17, 2005

### saltydog

Multiply throughout by $(1+x^2)$ first to get: $$(1+x^2)y^{''}+3xy^{'}+y=0$$ So wouldn't that first term just be: $$\sum n(n-1)a_n x^{n-2}+\sum n(n-1)a_nx^n$$ right?

4. Nov 17, 2005

### BerkMath

There is much to this problem after you get past your issue; however, I will restrict my response to your question. First off, multiply everything by $1+x^2$. After substituting your power series expressions for y, y', and y'' you will be left with only two powers of x: $x^{n-2}$ and $x^n$. I imagine your question begins here, although now in a different form. You must make a change of index on the summation containing $x^{n-2}$; try $m=n-2$. After this substitution you will have an expression with a sum in $x^m$ and one in $x^n$. Since $m$ is a dummy index, let $m=n$. It seems circular, but it is correct and required. Now you will have an expression in which the only powers of x that appear are $x^n$. Grouping the coefficients and setting them equal to zero is your next step, which I will let you do on your own. It is interesting to note that in order to do this problem you are switching the "shift" of the powers of x to the coefficients $a_n$. Not only is this helpful, but it is mandatory when solving the original DE, since you must have different $a_n$'s from which to find your recurrence relations.

5. Nov 17, 2005

If I make the substitution m=n-2, at some point I'll have a term of the form $$\sum_{n=0}(n+2)(n+1)a_{n+2}x^{n+2}$$ Are you saying that at this point I can reset the index of x to n and leave $a_n$ the way it is?

6. Nov 17, 2005

### BerkMath

That is not what you will have. The coefficients look correct, but it is not $x^{n+2}$; it is $x^n$. Recall: you had $x^{n-2}$ and you let $m=n-2$, therefore $x^{(m+2)-2}=x^m$ with the coefficients you have. Then substitute back and let $m=n$.

7. Nov 17, 2005

Ok, got it, I think: $$\sum_{n=0}^{\infty}x^n[a_{n+2}(n+2)(n+1)+a_n[n(n-1)+3n+1]]$$ This implies that $$a_{n+2}(n+2)(n+1)+a_n(n+1)^2=0$$

Last edited: Nov 17, 2005

8. Nov 17, 2005

### BerkMath

Yes, now find your recurrence relation and you're done.
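As a sanity check (my own addition, not part of the original thread), the recurrence $a_{n+2} = -\frac{n+1}{n+2}\,a_n$ can be iterated in a few lines of Python. With $a_0=1$, $a_1=0$ the truncated series agrees numerically with $y=(1+x^2)^{-1/2}$, which one can verify by hand is a solution of the ODE.

```python
from fractions import Fraction

def series_coeffs(a0, a1, n_terms):
    """Power-series coefficients from the recurrence a_{n+2} = -(n+1)/(n+2) * a_n."""
    a = [Fraction(0)] * n_terms
    a[0], a[1] = Fraction(a0), Fraction(a1)
    for n in range(n_terms - 2):
        a[n + 2] = -Fraction(n + 1, n + 2) * a[n]
    return a

a = series_coeffs(1, 0, 14)
x = 0.1
partial = sum(float(a[n]) * x ** n for n in range(len(a)))
# a[:6] is 1, 0, -1/2, 0, 3/8, 0 and `partial` is close to (1 + x**2) ** -0.5
print(a[:6], partial)
```

The second fundamental solution follows the same recurrence starting from $a_0=0$, $a_1=1$.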
http://math.stackexchange.com/questions/65889/continuously-differentiable-sequence-of-functions
# Continuously differentiable sequence of functions

I wish to find the best way to prove that $\phi_n:= [\psi(x+{1\over n})-\psi(x)]n$, where $\psi$ is continuously differentiable on $(a,b)$, converges uniformly to $\psi'(x)$ on all closed subintervals of $(a,b)$. It is clear that $(\phi_n)$ converges to $\psi'$, but not so clear that $(\phi_n')$ converges uniformly on every closed subinterval. I am thinking of using the continuity property of the derivative...? If I could show this, then it would follow that $\phi_n$ converges uniformly to $\psi'(x)$ on all closed subintervals of $(a,b)$. Is there a better way to show this, though? Thanks.

- Hint: Use the mean value theorem on $\phi_n(x)$ for each $x$. – Zarrax Sep 20 '11 at 1:42

Fix a closed subinterval $X$ of $(a,b)$ and a parameter $\varepsilon > 0$. Since every continuous function on the reals is uniformly continuous on closed, bounded intervals, there exists $\delta > 0$ such that for all $x, y \in X$, $$|y-x| \lt \delta \quad \implies \quad |\psi'(y) - \psi'(x)| \lt \varepsilon. \tag{\ast}$$ Finally, pick an integer $N > \frac{1}{\delta}$. It remains to check that this choice of $N$ works. Fix any $n \geqslant N$. For any $x \in X$, by the mean value theorem applied to $\psi$, we have $$\phi_n(x) = \frac {\psi \Big(x+\frac{1}{n}\Big) - \psi(x)}{ \frac{1}{n} } = \psi'(\xi_n)$$ for some $\xi_n \in \Big(x, x+\frac{1}{n} \Big)$. In particular, $|\xi_n - x| \lt \frac{1}{n} \leqslant \frac{1}{N} \lt \delta$. Therefore, from $(\ast)$, it follows that $|\psi'(\xi_n) - \psi'(x)| \lt \varepsilon$; in turn, this implies that $|\phi_n(x) - \psi'(x)| \lt \varepsilon$. That is, for all $n \geqslant N$, we have $\sup \{ |\phi_n(x) - \psi'(x)| \ :\ x \in X \} \leqslant \varepsilon$, and we are done.
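A quick numerical illustration (my own, not from the original page): taking $\psi = \sin$ on $[0,1]$, the mean value theorem argument predicts that $\sup_x |\phi_n(x) - \psi'(x)|$ is controlled by the modulus of continuity of $\psi' = \cos$ at scale $1/n$, so the sup-error should shrink roughly like $1/n$.

```python
import math

def sup_error(n, a=0.0, b=1.0, grid=1000):
    """Grid approximation of sup over [a, b] of |phi_n(x) - psi'(x)| for psi = sin."""
    pts = [a + (b - a) * i / grid for i in range(grid + 1)]
    return max(
        abs((math.sin(x + 1.0 / n) - math.sin(x)) * n - math.cos(x))
        for x in pts
    )

for n in (10, 100, 1000):
    print(n, sup_error(n))  # the sup-error decreases roughly like 1/(2n)
```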
https://intelligencemission.com/electricity-free-utilities-free-online-electricity-bill-payment.html
During the early 19th century, the concept of perceptible or free caloric began to be referred to as “free heat” or heat set free. In 1824, for example, the French physicist Sadi Carnot, in his famous “Reflections on the Motive Power of Fire”, speaks of quantities of heat ‘absorbed or set free’ in different transformations. In 1882, the German physicist and physiologist Hermann von Helmholtz coined the phrase ‘free energy’ for the expression E − TS, in which the change in F (or G) determines the amount of energy ‘free’ for work under the given conditions, specifically constant temperature.

The song’s original score designates the duet partners as “wolf” and “mouse”, and genders are unspecified. This is why many decades of covers have had women and men switching roles, as we saw with the Lady Gaga and Joseph Gordon-Levitt version, where Gaga plays the wolf’s role. Indeed, even Miss Piggy of the Muppets played the wolf as she pursued ballet dancer Rudolf Nureyev.

You might also see this reaction written without the subscripts specifying that the thermodynamic values are for the system (not the surroundings or the universe), but it is still understood that the values for $\Delta H$ and $\Delta S$ are for the system of interest. This equation is exciting because it allows us to determine the change in Gibbs free energy using the enthalpy change, $\Delta H$, and the entropy change, $\Delta S$, of the system. We can use the sign of $\Delta G$ to figure out whether a reaction is spontaneous in the forward direction, in the backward direction, or at equilibrium. Although $\Delta G$ is temperature dependent, it’s generally okay to assume that the $\Delta H$ and $\Delta S$ values are independent of temperature as long as the reaction does not involve a phase change. That means that if we know $\Delta H$ and $\Delta S$, we can use those values to calculate $\Delta G$ at any temperature. We won’t be talking in detail about how to calculate $\Delta H$ and $\Delta S$ in this article, but there are many methods to calculate those values.

Problem-solving tip: it is important to pay extra close attention to units when calculating $\Delta G$ from $\Delta H$ and $\Delta S$! Although $\Delta H$ is usually given in kJ/mol-reaction, $\Delta S$ is most often reported in J/(mol-reaction·K). The difference is a factor of 1000! Temperature in this equation is always positive (or zero) because it has units of K. Therefore, the second term in our equation, $T\Delta S_{\text{system}}$, will always have the same sign as $\Delta S_{\text{system}}$.

Much like the general concept of energy, free energy has a few definitions suitable for different conditions. In physics, chemistry, and biology, these conditions are thermodynamic parameters (temperature T, volume V, pressure p, etc.). Scientists have come up with several ways to define free energy; the mathematical expression of the Helmholtz free energy is A = U − TS.

For those who have been following the stories of impropriety, illegality, and even sexual perversion surrounding Hillary Clinton (at times in connection with husband Bill), from Free Electricity to Filegate to Benghazi to Pizzagate to Uranium One to the private email server, and more recently with Clinton Foundation malfeasance in the spotlight surrounded by many suspicious deaths, there is a sense that Clinton must be too high up, has too much protection, or is too well-connected to ever have to face criminal charges. Certainly if one listens to former FBI investigator James Comey’s testimony on his kid-gloves handling of the investigation into Clinton’s private email server, one gets the impression that he is one of many government officials in Clinton’s back pocket.

The free energy released during the process of respiration decreases as oxygen is depleted and the microbial community shifts to the use of less favorable oxidants such as $\mathrm{Fe(OH)_3}$ and $\mathrm{SO_4^{2-}}$. Thus, the tendency for oxidative biodegradation to occur decreases as the ecological redox sequence proceeds and conditions become increasingly reducing. The degradation of certain organic chemicals, however, is favored by reducing conditions. In general, these are compounds in which the carbon is fairly oxidized; notable examples include chlorinated solvents such as perchloroethene ($\mathrm{C_2Cl_4}$, abbreviated as PCE) and trichloroethene ($\mathrm{C_2Cl_3H}$, abbreviated as TCE), and the more highly chlorinated congeners of the polychlorinated biphenyl (PCB) family. (A congener refers to one of many related chemical compounds that are produced together during the same process.)

The internet is the only reason large corps. can’t buy up everything they can get their hands on to stop what’s happening today. @Free Power E. Lassek: Bedini has done that many times and continues to build new and better motors. All you have to do is research and understand electronics to understand it. There is a lot of fraud out there, but you can get through it by research. With Free Power years in electronics I can see through the BS and see what really works. Build the SG and see for yourself the working model. An audio transformer with a Free Power:Free Power ratio has enough windings. A transistor, a diode and a resistor are all the electronics you need. A Free Energy with magnets attached, from 3″ to however big you want, is the last piece. What? Maybe Free Electricity pieces all together?

Bedini built one with a Free Power′ Free Energy and magnets for a convention, with hands-on from the audience and total explanations to the scientific community. That is not fraud. Harvey1: And why should anyone send you a working motor when you are probably quite able to build one yourself? Or maybe not? Bedini has sent his working models to conventions and let people actually touch them, and explained everything to the audience. You obviously haven’t done enough research, or don't understand electronics enough, to realize these models actually work. The SG motor generator is easily duplicated. You can find Free Power:Free Power audio transformers that work quite well for the motor if you look for them and are fortunate enough to find one, along with a transistor, a diode and a resistor and a Free Energy with magnets on it. There is a lot of fraud, but you can actually build the simplest motor with a 3″ coil of magnet wire with the ends sticking out, one side of the ends bared to the copper, a couple of paperclips to hold it up, a battery attached to the paperclips, and a magnet under it.

According to the second law of thermodynamics, for any process that occurs in a closed system, the inequality of Clausius, $\Delta S > q/T_{\text{surr}}$, applies. For a process at constant temperature and pressure without non-PV work, this inequality transforms into $\Delta G < 0$. Similarly, for a process at constant temperature and volume, $\Delta F < 0$. Thus, a negative value of the change in free energy is a necessary condition for a process to be spontaneous; this is the most useful form of the second law of thermodynamics in chemistry. In chemical equilibrium at constant T and p without electrical work, dG = 0.

From the textbook Modern Thermodynamics by Nobel Laureate and chemistry professor Ilya Prigogine we find: “As motion was explained by the Newtonian concept of force, chemists wanted a similar concept of ‘driving force’ for chemical change. Why do chemical reactions occur, and why do they stop at certain points? Chemists called the ‘force’ that caused chemical reactions affinity, but it lacked a clear definition.” In the 19th century, the French chemist Marcellin Berthelot and the Danish chemist Julius Thomsen had attempted to quantify affinity using heats of reaction. In 1875, after quantifying the heats of reaction for a large number of compounds, Berthelot proposed the principle of maximum work, in which all chemical changes occurring without intervention of outside energy tend toward the production of bodies, or of a system of bodies, which liberate heat. In addition to this, in 1780 Antoine Lavoisier and Pierre-Simon Laplace laid the foundations of thermochemistry by showing that the heat given out in a reaction is equal to the heat absorbed in the reverse reaction.

Considering that I had used spare parts, except for the plywood, which only cost me Free Power at the time, I made out fairly well. Keeping in mind that I didn’t hook up the system to a generator head, I’m not sure how much it would take to have enough torque for that to work. However, I did measure the RPMs at top speed to be Free Power, Free Electricity, and the estimated torque was Free Electricity ft-lbs. The generators I work with at my job require a peak torque of Free Electricity ft-lbs, and those are simple household generators for when the power goes out. They’re not powerful enough for every electrical item in the house to run, but it is enough for the heating system and a few lights to work. Personally, I wouldn’t recommend that drastic of a change for a long time; the people of the world just aren’t ready for it. However, I strongly believe that a simple generator unit can be developed for home use. There are those out there that would take advantage of that and charge outrageous prices for such a unit; that’s the nature of mankind’s greed.

To Nittolo and Free Electricity: You guys are absolutely hilarious. I have never laughed so hard reading a serious set of postings. You should seriously write some of this down and send it to Hollywood. They cancel shows faster than they can make them out there, and your material would be a winner!

The historically earlier Helmholtz free energy is defined as A = U − TS. Its change is equal to the amount of reversible work done on, or obtainable from, a system at constant T. Thus its appellation “work content”, and the designation A, from Arbeit, the German word for work. Since it makes no reference to any quantities involved in work (such as p and V), the Helmholtz function is completely general: its decrease is the maximum amount of work which can be done by a system at constant temperature, and it can increase at most by the amount of work done on a system isothermally. The Helmholtz free energy has a special theoretical importance, since it is proportional to the logarithm of the partition function for the canonical ensemble in statistical mechanics. (Hence its utility to physicists, and to gas-phase chemists and engineers, who do not want to ignore p dV work.)
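The units warning above (ΔH in kJ but ΔS in J, a factor of 1000) is easy to trip over in practice. Here is a small Python illustration; the numbers are textbook-style values for ammonia synthesis chosen for illustration, not taken from this page.

```python
def gibbs_free_energy(dh_kj_per_mol, ds_j_per_mol_k, temp_k):
    """Delta G = Delta H - T * Delta S, converting Delta S from J to kJ first."""
    return dh_kj_per_mol - temp_k * (ds_j_per_mol_k / 1000.0)

# Illustrative values: Delta H = -92.2 kJ/mol, Delta S = -198.7 J/(mol*K)
dg = gibbs_free_energy(-92.2, -198.7, 298.0)
print(round(dg, 2))  # -32.99 (kJ/mol): negative, so spontaneous at 298 K
```

Forgetting the division by 1000 would instead give ΔG ≈ +59000 kJ/mol, an obviously nonsensical value, which is why the units deserve the extra attention.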
https://codereview.stackexchange.com/questions/140030/replacing-values-in-json-file-with-jq
# Replacing values in json file with jq

I have a JSON file like

```json
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "RESOURCE_NAME",
      "Type": "A",
      "TTL": 60,
      "ResourceRecords": [{
        "Value": "RESOURCE_HOSTS"
      }]
    }
  }]
}
```

where I need to replace `.Changes[].ResourceRecordSet.Name` and `.Changes[].ResourceRecordSet.ResourceRecords[]` dynamically. So, I wrote a script to replace the required values with jq.

```bash
#!/bin/bash -xe

RESOURCE_NAME="example.com"
RESOURCE_HOSTS="1.1.1.1,2.2.2.2,"

cat resource-changes.json | \
  jq '.Changes[].ResourceRecordSet.Name=env.RESOURCE_NAME' | \
  jq 'del(.Changes[].ResourceRecordSet.ResourceRecords[])' \
  > resource-changes.$$.json && \
mv resource-changes.$$.json resource-changes.json

for host in $(echo "$RESOURCE_HOSTS" | tr "," "\n"); do
  cat resource-changes.json | \
    jq --arg theHost $host '.Changes[].ResourceRecordSet.ResourceRecords |= (. + [{"Value":$theHost}])' \
    > resource-changes.$$.json
  mv resource-changes.$$.json resource-changes.json
done

jq '.' resource-changes.json
```

This works fine and I get the required replacements, though the script looks quite long and complicated IMO. Also, I do not really like juggling with temporary files. I'm looking for a way to simplify the script. Appreciate any comments!

• It would be great to add a jq tag. Do not have enough power for it.. – cyrillk Aug 30 '16 at 15:56

### Accessing environment variables in jq

This doesn't really work as it is...:

```bash
RESOURCE_NAME="example.com"
# ...
cat resource-changes.json | \
  jq '.Changes[].ResourceRecordSet.Name=env.RESOURCE_NAME' | \
# ...
```

jq cannot access RESOURCE_NAME if it's not exported. The fix is simple:

```bash
export RESOURCE_NAME="example.com"
```

### Iterating over a list of values

This is a very hacky way to iterate over a list of hosts:

```bash
RESOURCE_HOSTS="1.1.1.1,2.2.2.2,"
# ...
for host in $(echo "$RESOURCE_HOSTS" | tr "," "\n"); do
```

If the host names will never contain a space, you could use a simple space-separated string:

```bash
RESOURCE_HOSTS="1.1.1.1 2.2.2.2"
# ...
for host in $RESOURCE_HOSTS; do
```

Or for maximum flexibility you could use a proper Bash array:

```bash
RESOURCE_HOSTS=(1.1.1.1 2.2.2.2)
# ...
for host in "${RESOURCE_HOSTS[@]}"; do
```

### Streamlining by eliminating the loop

The current script creates many temporary files in the process, which is very ugly. The biggest obstacle to streamlining this into a single nice pipeline is the looping logic. You could prepare the host values into a JSON list and let jq take care of the rest. Here's one way to create a JSON list from an array of hosts:

```bash
hosts=(1.1.1.1 2.2.2.2)
values="\"${hosts[0]}\""
for host in "${hosts[@]:1}"; do
  values="$values,\"$host\""
done
```

This assumes that there is at least one host. It initializes `values` with the first one, and then appends the others with a comma, adding double-quotes to make it valid JSON. With this new variable, you can chain the jq calls like this:

```bash
cat resource-changes.json | \
  jq '.Changes[].ResourceRecordSet.Name=env.RESOURCE_NAME' | \
  jq 'del(.Changes[].ResourceRecordSet.ResourceRecords[])' | \
  jq ".Changes[].ResourceRecordSet.ResourceRecords = ([$values] | map({\"Value\":.}))"
```

The cat command at the front is a bit ugly, because the filename could be a parameter of the first jq call, or replaced with an input redirection. But that would break the current nice aesthetic flow of the jq calls. This minor issue could be fixed by grouping and input redirection:

```bash
{
  jq '.Changes[].ResourceRecordSet.Name=env.RESOURCE_NAME' | \
  jq 'del(.Changes[].ResourceRecordSet.ResourceRecords[])' | \
  jq ".Changes[].ResourceRecordSet.ResourceRecords = ([$values] | map({\"Value\":.}))"
} < resource-changes.json
```

• Thank you. It's really useful! I'll apply the changes you've suggested. – cyrillk Aug 31 '16 at 10:56
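As an aside (not something either poster suggested), if jq is not a hard requirement, the same transformation is a few lines of Python. This is a hedged sketch assuming exactly the file layout shown in the question:

```python
import json

def update_record_set(doc, name, hosts):
    """Set ResourceRecordSet.Name and rebuild ResourceRecords from a host list."""
    for change in doc["Changes"]:
        rrs = change["ResourceRecordSet"]
        rrs["Name"] = name
        rrs["ResourceRecords"] = [{"Value": h} for h in hosts]
    return doc

doc = {"Changes": [{"Action": "UPSERT", "ResourceRecordSet": {
    "Name": "RESOURCE_NAME", "Type": "A", "TTL": 60,
    "ResourceRecords": [{"Value": "RESOURCE_HOSTS"}]}}]}

updated = update_record_set(doc, "example.com", ["1.1.1.1", "2.2.2.2"])
print(json.dumps(updated, indent=2))
```

In a real script you would read the file with `json.load`, transform the document, and write it back once, which avoids the temporary-file shuffle entirely.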
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=sm&paperid=2473&option_lang=eng
Mat. Sb. (N.S.), 1981, Volume 116(158), Number 3(11), Pages 359–369 (Mi msb2473)

Generators of $S^1$-bordism
O. R. Musin

Abstract: In this paper generators are found for the rings $U^{S^1}_*$ (the unitary $S^1$-bordism ring) and $U_*(S^1,\{\mathbf Z_s\})$ (the unitary bordism ring with actions of the group $S^1$ without fixed points). The generators found are $S^1$-manifolds of the form $(S^3)^k\times\mathbf CP^n/(S^1)^k$. By an obvious construction the ring $U^{S^1}_*$ allows one to establish a relation between numerical invariants of manifolds with unitary actions of $S^1$ and the set of fixed points, without using a theorem of the type of an integrality theorem. In particular, we obtain a new proof of the Atiyah–Hirzebruch formula for the generalized Todd genus of $S^1$-manifolds. Bibliography: 9 titles.

Full text: PDF file (961 kB)
English version: Mathematics of the USSR-Sbornik, 1983, 44:3, 325–334
UDC: 513.836; MSC: Primary 57R85, 57R77; Secondary 55N22, 55N25

Citation: O. R. Musin, “Generators of $S^1$-bordism”, Mat. Sb. (N.S.), 116(158):3(11) (1981), 359–369; Math. USSR-Sb., 44:3 (1983), 325–334

Citation in format AMSBIB:
\Bibitem{Mus81}
\by O.~R.~Musin
\paper Generators of $S^1$-bordism
\jour Mat. Sb. (N.S.)
\yr 1981
\vol 116(158)
\issue 3(11)
\pages 359--369
\mathnet{http://mi.mathnet.ru/msb2473}
\mathscinet{http://www.ams.org/mathscinet-getitem?mr=665688}
\zmath{https://zbmath.org/?q=an:0508.57027|0494.57015}
\transl
\jour Math. USSR-Sb.
\yr 1983
\vol 44
\issue 3
\pages 325--334
\crossref{https://doi.org/10.1070/SM1983v044n03ABEH000970}

Links: http://mi.mathnet.ru/eng/msb2473 ; http://mi.mathnet.ru/eng/msb/v158/i3/p359

This publication is cited in the following articles:
1. O. R. Musin, “Converse theorem on equivariant genera”, Russian Math. Surveys, 64:4 (2009), 753–755
2. Oleg R. Musin, “On rigid Hirzebruch genera”, Mosc. Math. J., 11:1 (2011), 139–147
3. O. R. Musin, “Circle Actions with Two Fixed Points”, Math. Notes, 100:4 (2016), 636–638
https://blog.acolyer.org/2019/04/10/dont-trust-the-locals-investigating-the-prevalence-of-persistent-client-side-cross-site-scripting-in-the-wild/
# Don’t trust the locals: investigating the prevalence of persistent client-side cross-site scripting in the wild

Does your web application make use of local storage? If so, then like many developers you may well be making the assumption that when you read from local storage, it will only contain the data that you put there. As Steffens et al. show in this paper, that’s a dangerous assumption!

The storage aspect of local storage makes possible a particularly nasty form of attack known as a persistent client-side cross-site scripting attack. Such an attack, once it has embedded itself in your browser one time (e.g. that one occasion you quickly had to jump on the coffee shop wifi), continues to work on all subsequent visits to the target site (e.g., once you’re back home on a trusted network). In an analysis of the top 5000 Alexa domains, 21% of sites that make use of data originating from storage were found to contain vulnerabilities, of which at least 70% were directly exploitable using the models described in this paper.

> Our analysis shows that more than 8% of the top 5,000 domains are potentially susceptible to a Persistent Client-Side XSS vulnerability. Moreover, considering only such domains which make use of tainted data in dangerous sinks, a staggering 21% (418/1,946) are vulnerable. Considering only the top 1,000 domains, we even found that 119 of them contained an unfiltered and unverified flow from cookies or Local Storage to an execution sink… we believe these results are lower bounds on the actual number of potentially vulnerable sites.

### Persistent client-side XSS attacks

There are two basic requirements for a storage-based XSS attack. First, there must be a vulnerable path that permits an attacker to control the data written into local storage. Secondly, the page must use data from local storage in a manner that is exploitable (e.g., no sanitisation) at a sink.
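A minimal sketch of the first requirement, a write path an attacker can influence (hypothetical code, not from the paper; `localStorage` is modeled as a plain object so the snippet runs outside a browser):

```javascript
// Hypothetical first-party code illustrating requirement one: a value the
// attacker can influence (here, a URL query parameter) is written into
// Local Storage unchecked. A stand-in object models window.localStorage
// so the sketch runs outside a browser.
const storage = {};

function rememberCampaign(store, pageUrl) {
  const ref = new URL(pageUrl).searchParams.get('ref');
  if (ref !== null) {
    store.campaignRef = ref; // attacker-controlled whenever the URL is
  }
  return store;
}

// One visit to a crafted link is enough to persist a payload:
rememberCampaign(storage, 'https://example.com/?ref=%3Cscript%3Esteal()%3C/script%3E');
console.log(storage.campaignRef); // "<script>steal()</script>"
```

Note that nothing bad has happened yet; the damage comes later, when the page trusts this stored value.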
> …when the JavaScript code causing the vulnerable flow from storage to sink is included in every page of a domain, a single injection means that regardless of which URL on the domain is visited, the attack succeeds… Moreover, a single error or misconfiguration on such a domain is sufficient to persist a payload.

There are two main routes an attacker can use to persist malicious payloads: an in-network attacker can hijack connections over HTTP, or the attacker can lure the victim to visit a website under the attacker’s control.

HTTP-based network attacks can be prevented by using HTTPS, and by sites setting the HTTP Strict Transport Security (HSTS) header with the includeSubDomains option. Unfortunately not all users and not all sites take these measures. Say a user connects to the “coffee shop wifi”…

> A security-aware user might refrain from performing any sensitive actions in such an open network, e.g., performing a login or doing online banking. However, using a Persistent Client-Side XSS, the attacker can implant a malicious payload which lies dormant and is used only later to attack a victim. One such scenario is a JavaScript-based keylogger, which is triggered upon visiting the site with infected persistent storage in a seemingly secure environment, e.g., at home.

If the attacker can instead lure a victim to a website the attacker controls (maybe via ads etc.), then for example the victim’s browser can be forced to load a vulnerable page on a third-party site containing a reflected Client-Side Cross-Site Scripting flaw. The payload sent to the vulnerable site triggers a flow which stores attacker-controlled content in local storage.

#### Exploits

Once the data is in local storage, the set of potential exploits is similar to any other scenario in which a web page contains attacker-controlled content. Except that in this case, developers seem much less aware of the need to sanitise the input, implicitly trusting the data they pull from local storage.
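A matching sketch of the second requirement, an exploitable sink (again hypothetical, with `localStorage` modeled as a plain object): the page reads a value back from storage and splices it into markup with no checking or encoding, so whatever reached storage becomes active HTML on every visit.

```javascript
// Hypothetical sink: a stored value is concatenated into markup verbatim.
// Any attacker-controlled value that previously reached storage now
// becomes part of the page on every subsequent visit.
function buildWelcomeLink(store) {
  return '<a href="/account">' + store.displayName + '</a>';
}

// A benign value behaves as expected...
console.log(buildWelcomeLink({ displayName: 'alice' }));
// <a href="/account">alice</a>

// ...but a poisoned entry breaks out of the tag and injects a script:
console.log(buildWelcomeLink({ displayName: '</a><script>steal()</script>' }));
```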
Several sites, for example, are using local storage to store code / JSON which they are `eval`-ing. That should probably raise a red flag regardless. Consider a more benign-looking example though: a value retrieved from storage that is neither checked nor encoded, letting the attacker break out of an `<a>` tag and inject any script of their choosing. Once the payload is in place in local storage, its persistence can be further enhanced by injecting into relevant pages JavaScript code to ignore invocations of Local Storage setItem or removeItem functionality.

### Do these vulnerabilities exist in the wild?

The authors crawled the Alexa top 5000 domains (up to 1000 sub-pages each, maximum depth 2, only public pages – i.e., nothing behind a login). The resulting dataset contained 12.5M documents. The crawling was done with a modified Chromium that reports invocations with tainted data (i.e., controlled by the crawling engine) to numerous sinks. A table in the paper (not reproduced here) shows the vulnerable flows found from source to sink, where the source can be an HTTP request, a flow from a cookie, or a flow from local storage.

> We observe that for HTML and JavaScript sinks, between 71% and 89% of flows from the URL are not encoded… For flows originating from cookies, the fraction of plain flows ranges from 69% to 98%. Most interestingly though, we observe that virtually all flows that originate from a Local Storage source have no encoding applied to them, indicating that this data appears to be trusted by the developers of the JavaScript applications.

From this set, a total of 906 domains have vulnerable cookie flows, and 654 domains have vulnerable Local Storage flows. For these domains, the authors then tested to see whether stored values appear on a page (i.e., there is an exploitable flow from local storage).
More than half of the domains that had a flow from Local Storage to a sink could be exploited, indicating that little care is taken in ensuring the integrity and format of such data.

Now it’s just a matter of putting the two parts together: we’re looking for sites with a vulnerable flow from an attacker-controlled source to local storage, coupled with an exploitable flow from local storage to a sink. In total 65 domains had this deadly combination. “Since our crawlers neither log in nor try to cover all available code paths, the number of sites susceptible to such Client-Side XSS flaws is likely higher.”

A case study:

> In our study, we found the single sign-on part of a major Chinese website network to be susceptible to both a persistent and a reflected Client-Side XSS flaw. While abusing the reflected XSS could have been used to exfiltrate the cookies of the user, these were protected with the HttpOnly flag. Given the fact that the same origin also made insecure use of persisted code from Local Storage, however, rather than trying to steal the cookie, we built a proof of concept that extracted credentials from the login field right before the submission of the credentials to the server.

### Defences

When using local storage for unstructured data, always use context-aware sanitisation (i.e., apply the appropriate encoding to prevent escaping) before inserting the data in the DOM. When using local storage for structured data, use JSON.parse instead of eval. (eval is more liberal in what it accepts: 27 of the domains in the study use data formats resembling JSON that can be parsed with eval, but not by JSON.parse. To which I say, fix your format!)

> The most challenging pattern in our dataset consists of scenarios in which applications use the persistence mechanisms to deliberately store HTML or JavaScript code, e.g., for client-side caching purposes. In this setting, the attacker is able to completely overwrite the contents of the corresponding storage entry with their own code.
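On the JSON.parse versus eval advice above: JSON.parse only accepts strict JSON and throws on everything else, which is exactly what you want for data of uncertain provenance. A small illustration with hypothetical values:

```javascript
// JSON.parse is the stricter, and therefore safer, parser for structured
// data pulled from storage: eval executes arbitrary code, while
// JSON.parse accepts only strict JSON and throws on anything else.
const strict = '{"user": "alice", "visits": 3}';  // valid JSON
const sloppy = "{user: 'alice', visits: 3}";      // accepted by eval when
                                                  // wrapped in parentheses,
                                                  // but not valid JSON
const hostile = "steal(document.cookie)";         // code, not data

console.log(JSON.parse(strict).visits); // 3

let rejected = 0;
for (const s of [sloppy, hostile]) {
  try {
    JSON.parse(s);
  } catch (e) {
    rejected += 1; // JSON.parse refuses both of these
  }
}
console.log(rejected); // 2
```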
In several cases we could identify that these flaws were actually introduced by third-party libraries, among them CloudFlare and Criteo. There is no general solution here, and we have to look at individual use cases. CloudFlare’s ‘Rocket Loader’ for example caches external scripts in local storage. A safer alternative would be to use service workers (see section VI.C in the paper for a sketch of the implementation).

When storing HTML-only fragments, sanitisers such as DOMPurify can robustly remove all JavaScript. For the five sites in the analysis with a mix of HTML and JavaScript stored in local storage, “none of the available defensive coding measures can be applied… hence securing these sites requires removing the insecure feature altogether.”

Several vulnerabilities we discovered were caused by third-party code. We notified those parties which were responsible for at least three vulnerable domains. As of this writing, the four largest providers have acknowledged the issues and/or deployed fixes for the flawed code.
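For the unstructured-data case, the context-aware encoding advice boils down to helpers like the following sketch (a minimal, assumption-laden example; for full HTML fragments a real sanitiser such as DOMPurify is the better tool, as noted above):

```javascript
// Minimal sketch of context-aware output encoding: HTML-encode a stored
// value before splicing it into markup, so a poisoned storage entry
// renders as inert text instead of active HTML. This covers the
// HTML-body context only; attributes, URLs, and script contexts each
// need their own encoding rules.
function encodeHTML(value) {
  const map = { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' };
  return String(value).replace(/[&<>"']/g, (ch) => map[ch]);
}

console.log(encodeHTML('<img src=x onerror="steal()">'));
// &lt;img src=x onerror=&quot;steal()&quot;&gt;
```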
https://web2.0calc.com/questions/help_11818
# Help!

Suppose that $f$ is a function and $f^{-1}$ is the inverse of $f$. If $f(3)=4$, $f(5)=1$, and $f(2)=5$, evaluate $f^{-1}\left(f^{-1}(5)+f^{-1}(4)\right)$.

Jul 15, 2018

#1

Lightning,

You are presenting high-level questions, so you are advanced enough to learn how to present them properly. I cannot be bothered interpreting this, and I have read other people make similar comments. The stuff between the dollar signs is in LaTeX. Open the LaTeX box in the ribbon, paste the LaTeX in there and press OK. Do not include the dollar signs. Practice a little and you will find it easy. If you need better instructions you can ask.

Jul 15, 2018
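For reference, the computation the question asks for (a worked solution, not given in the thread):

```latex
% f(2)=5 and f(3)=4, so inverting gives f^{-1}(5)=2 and f^{-1}(4)=3. Hence
f^{-1}\left(f^{-1}(5)+f^{-1}(4)\right) = f^{-1}(2+3) = f^{-1}(5) = 2.
```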
https://docs.cavalry.scenegroup.co/nodes/utilities/sequence/
Sequence

Generate random (or non-random), non-repeating number sequences.

Auto Index - When checked, the node will automatically output all values within the Sequence range.
Index - Manually select an index within the Sequence range.
Sequence - Set a range of values to create a sequence within.
Offset - Add/subtract a value to the sequence. e.g. a value of 1 will convert 0,1,2,3 to 1,2,3,4.
Travel - Use a value to offset the order of the sequence. e.g. a value of 1 will convert 0,1,2,3 to 3,0,1,2.
Randomize - When checked, the values set in Sequence will be randomized.
Seed - Set the seed used for Randomize.
Skip Indices - Enter values which you want to omit from the range set within Sequence. Indices can be entered as comma-separated values, with ranges given using a colon. For example, entering 1, 5, 7:9 would omit the values 1, 5, 7, 8 and 9.

Example usage:

1. Create a Text Shape.
2. Add a String Generator to the Text.
   1. Set Precision and Padding to 0.
3. Add the Text Shape to a Duplicator.
   1. Set the Duplicator's Distribution to Linear.
   2. Set the Count to 4.
4. Create a Color Array.
   1. Add 3 more colors to the Array and give each a different color.
   2. Connect colorArray.id > textShape.fillcolor.
5. Create a Sequence.
   1. Set values for the Sequence of 0 and 3.
   2. Connect sequence.id > stringGenerator.number.
   3. Connect sequence.id > colorArray.index.

If you now scrub Seed on the Sequence Atom you will get random values but you will never get the same value (or color) appearing twice.
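A rough model of how the non-random controls interact (hypothetical JavaScript, not Cavalry's implementation; the order in which Skip, Travel and Offset are applied is an assumption, since each example above uses only one control at a time):

```javascript
// Hypothetical model of the Sequence parameters, matching the examples
// in the docs: Skip Indices removes values, Travel rotates the order,
// Offset shifts every value.
function buildSequence(start, end, { offset = 0, travel = 0, skip = [] } = {}) {
  let seq = [];
  for (let v = start; v <= end; v += 1) {
    if (!skip.includes(v)) seq.push(v);
  }
  if (seq.length > 0) {
    // Travel 1 turns 0,1,2,3 into 3,0,1,2: a rotation to the right.
    const t = ((travel % seq.length) + seq.length) % seq.length;
    seq = seq.slice(seq.length - t).concat(seq.slice(0, seq.length - t));
  }
  return seq.map((v) => v + offset);
}

console.log(buildSequence(0, 3, { offset: 1 }));             // [1, 2, 3, 4]
console.log(buildSequence(0, 3, { travel: 1 }));             // [3, 0, 1, 2]
console.log(buildSequence(0, 9, { skip: [1, 5, 7, 8, 9] })); // [0, 2, 3, 4, 6]
```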
http://www.oalib.com/relative/3307231
Physics , 1993, Abstract: Properties and experimental predictions of a broad class of supergravity grand unified models possessing an $SU(5)$-type proton decay and $R$ parity are described. Models of this type can be described in terms of four parameters at the Gut scale in addition to those of the Standard Model i.e. $m_o$ (universal scalar mass), $m_{1/2}$ (universal gaugino mass), $A_o$ (cubic soft breaking parameter) and $\tan\beta=\langle H_2\rangle/\langle H_1\rangle$. Thus the 32 SUSY masses can be expressed in terms of $m_o, m_{1/2}, A_o, \tan\beta$ and the as yet unknown t-quark mass $m_t$. Gut thresholds are examined and a simple model leads to grand unification consistent with $p$-decay data when $0.114<\alpha_3 (M_z)<0.135$, in agreement with current values of $\alpha_3 (M_Z)$. Proton decay is examined for the superheavy Higgs triplet mass $M_{H_3}<10M_G$ ($M_G\simeq 1.5 \times10^{16}$~GeV) and squarks and gluinos lighter than 1 TeV. Throughout most of the parameter space chargino-neutralino scaling relations are predicted to hold: $2m_{\tilde{Z}_1}\cong m_{\tilde{W}_1}\cong m_{\tilde{Z}_2}, m_{\tilde{W}_1}\simeq(1/4)m_{\tilde{g}}$ (for $\mu>0$) or $m_{\tilde{W}_1} \simeq(1/3)m_{\tilde{g}}$ (for $\mu<0$), while $m_{\tilde{W}_2}\cong m_{\tilde{Z}_3}\cong m_{\tilde{Z}_4}>>m_{\tilde{Z}_1}$. Future proton decay experiments combined with LEP2 lead to further predictions, e.g. for the entire parameter space either proton decay should be seen at these or the $\tilde{W}_1$ seen at LEP2. Relic density constraints on the $\tilde{Z}_1$ further constrain the parameter space e.g. so that $m_t<165$~GeV, $m_h<105$~GeV, $m_{\tilde{W}_1} <100$~GeV and $m_{\tilde{Z}_1}<50$~GeV when $M_{H_3}/M_G < 6$.
(Invited talk at Les Rencontres de Physique de la Vallee D'Aoste) Physics , 1993, Abstract: A survey is given of supersymmetry and supergravity and their phenomenology. Some of the topics discussed are the basic ideas of global supersymmetry, the minimal supersymmetric Standard Model (MSSM) and its phenomenology, the basic ideas of local supersymmetry (supergravity), grand unification, supersymmetry breaking in supergravity grand unified models, radiative breaking of $SU(2) \times U(1)$, proton decay, cosmological constraints, and predictions of supergravity grand unified models. While the number of detailed derivations are necessarily limited, a sufficient number of results are given so that a reader can get a working knowledge of this field. Physics , 1992, DOI: 10.1103/PhysRevD.47.2468 Abstract: We present the results of an extensive exploration of the five-dimensional parameter space of the minimal $SU(5)$ supergravity model, including the constraints of a long enough proton lifetime ($\tau_p>1\times10^{32}\y$) and a small enough neutralino cosmological relic density ($\Omega_\chi h^2_0\le1$). We find that the combined effect of these two constraints is quite severe, although still leaving a small region of parameter space with $m_{\tilde g,\tilde q}<1\TeV$. The allowed values of the proton lifetime extend up to $\tau_p\approx1\times10^{33}\y$ and should be fully explored by the SuperKamiokande experiment. The proton lifetime cut also entails the following mass correlations and bounds: $m_h\lsim100\GeV$, $m_\chi\approx{1\over2}m_{\chi^0_2}\approx0.15\gluino$, $m_{\chi^0_2}\approx m_{\chi^+_1}$, and $m_\chi<85\,(115)\GeV$, $m_{\chi^0_2,\chi^+_1}<165\,(225)\GeV$ for $\alpha_3=0.113\,(0.120)$. Finally, the {\it combined} proton decay and cosmology constraints predict that if $m_h\gsim75\,(80)\GeV$ then $m_{\chi^+_1}\lsim90\,(110)\GeV$ for $\alpha_3=0.113\,(0.120)$. Thus, if this model is correct, at least one of these particles will likely be observed at LEPII.
Physics , 1995, DOI: 10.1103/PhysRevD.54.2374 Abstract: A detailed analysis of dark matter event rates in minimal supergravity models (MSGM) is given. It is shown analytically that the lightest neutralino the ${{\tilde {Z}_{1}}}$ is the LSP over almost all of the parameter space, and hence the natural candidate for cold dark matter (CDM). The radiative breaking of $SU(2)\times U(1)$ constraints are shown to be crucial in determining the expected event rates. Approximate analytic formulae are obtained to determine the gaugino-higgsino content of the ${{\tilde{Z}_{1}}}$ particle.From this one can deduce the behavior of the event rates as one varies the SUSY soft breaking parameters and tan $\beta$. The constraint on the event rates due to the recently measured $b\rightarrow s+\gamma$ decay is calculated. It is seen that this data eliminates most of the parameter space where $\mu$ (the Higgs mixing parameter) and $A_t$ (the t-quark cubic soft breaking parameter) have the same sign. Since the t-quark is close to its Landau pole, $A_t$ is restricted to be mostly positive, and so most of the $\mu>0$ part of the parameter space is eliminated... Physics , 1994, DOI: 10.1103/PhysRevD.50.2148 Abstract: We study the predictions of the simplest SU(5) grand unified model within the framework of minimal supergravity, including constraints from the radiative breaking of electroweak symmetry. As a consequence of the unification of the $b$-quark and $\tau$-lepton Yukawa couplings, the top quark mass is predicted to be close to its fixed point value. We delineate the regions of the supergravity parameter space allowed by constraints from the non-observation of proton decay and from the requirement that the LSP does not overclose the universe. These constraints lead to a definite pattern of sparticle masses: the feature unique to Yukawa unified models is that some of the third generation squarks are much lighter than those of the first two generations. 
Despite the fact that all sparticle masses and mixings are determined by just four SUSY parameters at the GUT scale (in addition to $m_t$), we find that the signals for sparticle production can vary substantially over the allowed parameter space. We identify six representative scenarios and study the signals from sparticle production at the LHC. We find that by studying the signal in various channels, these scenarios may be distinguished from one another, and also from usually studied "minimal models" where squarks and sleptons are taken to be degenerate. In particular, our studies allow us to infer that some third generation squarks are lighter than other squarks---a feature that could provide the first direct evidence of supergravity grand unification. Physics , 1998, Abstract: A review of baryon and lepton conservation in supersymmetric grand unified theories is given. Proton stability is discussed in the minimal SU(5) supergravity grand unification and in several non-minimal extensions such as the $SU(3)^3$, SO(10) and some string based models. Effects of dark matter on proton stability are also discussed and it is shown that the combined constraints of dark matter and proton stability constrain the sparticle spectrum. It is also shown that proton lifetime limits put severe constraints on the event rates in dark matter detectors. Future prospects for the observation of baryon and lepton number violation are also discussed. Physics , 1993, DOI: 10.1103/PhysRevLett.70.3696 Abstract: It is shown that in the physically interesting domain of the parameter space of SU(5) supergravity GUT, the Higgs and the Z poles dominate the LSP annihilation. Here the naive analysis of thermal averaging breaks down and formulae are derived which give a rigorous treatment over the poles. These results are then used to show that there exist significant domains in the parameter space where the constraints of proton stability and cosmology are simultaneously satisfied.
New upper limits on light particle masses are obtained. R. N. Mohapatra, Physics , 1993, Abstract: Prospects for SO(10) as a minimal grand unification group have recently been heightened by several considerations such as the MSW resolution of the solar neutrino puzzle, baryogenesis, possibility for understanding fermion masses etc. I review the present status of the minimal SO(10) models with special emphasis on the predictions for proton lifetime and predictions for neutrino masses for the non-supersymmetric case and discuss some preliminary results for the supersymmetric case. It was generally believed that minimal SO(10) models predict wrong mass relations between the charged fermions of the first and second generations; furthermore, while the smallness of the neutrino masses in these models arises from the see-saw mechanism, it used to be thought that detailed predictions for neutrino masses and mixings require further ad hoc assumptions. In this talk, I report some recent work with K. S. Babu, where we discovered that the minimal SO(10) model, both with and without SUSY, has in it a built-in mechanism that not only corrects the bad mass relations between the charged fermions but at the same time allows a complete prediction for the masses and mixings in the neutrino sector. We define our minimal model as the one that consists of the smallest set of Higgs multiplets that are needed for gauge symmetry breaking. Our result is based on the hypothesis that the complex {\bf 10} of Higgs bosons has only a single coupling to the fermions. This hypothesis is guaranteed in supersymmetric models and in non-SUSY models that obey a softly broken Peccei-Quinn symmetry. Physics , 2001, DOI: 10.1016/S0920-5632(01)01501-8 Abstract: A review is given of the historical developments of 1982 that led to the supergravity unified model (SUGRA) with gravity mediated breaking of supersymmetry. Further developments and applications of the model in the period 1982-85 are also discussed.
The supergravity unified model and its minimal version (mSUGRA) are currently among the leading candidates for physics beyond the Standard Model. A brief note on the developments from the present vantage point is included. Physics , 1995, DOI: 10.1103/PhysRevD.52.6623 Abstract: GUT scale threshold corrections in minimal SU(5) supergravity grand unification are discussed. It is shown that predictions may be made despite uncertainties associated with the high energy scale. A bound relating the strong coupling constant to the mass scales associated with proton decay and supersymmetry is derived, and a sensitive probe of the underlying theory is outlined. In particular, low energy measurements can in principle determine the presence of Planck scale ($1 / {{\rm M}_{\rm Pl}}$) terms.
https://labs.tib.eu/arxiv/?author=David%20DeBoer
• ### The Breakthrough Listen Search for Intelligent Life: 1.1-1.9 GHz observations of 692 Nearby Stars(1709.03491) April 20, 2018 astro-ph.EP We report on a search for engineered signals from a sample of 692 nearby stars using the Robert C. Byrd Green Bank Telescope, undertaken as part of the $Breakthrough~Listen~Initiative$ search for extraterrestrial intelligence. Observations were made over 1.1$-$1.9 GHz (L band), with three sets of five-minute observations of the 692 primary targets, interspersed with five-minute observations of secondary targets. By comparing the "ON" and "OFF" observations we are able to identify terrestrial interference and place limits on the presence of engineered signals from putative extraterrestrial civilizations inhabiting the environs of the target stars. During the analysis, eleven events passed our thresholding algorithm, but a detailed analysis of their properties indicates they are consistent with known examples of anthropogenic radio frequency interference. We conclude that, at the time of our observations, none of the observed systems host high-duty-cycle radio transmitters emitting between 1.1 and 1.9 GHz with an Equivalent Isotropic Radiated Power of $\sim10^{13}$ W, which is readily achievable by our own civilization. Our results suggest that fewer than $\sim$ 0.1$\%$ of the stellar systems within 50 pc possess the type of transmitters searched in this survey. • ### The Breakthrough Listen Search for Intelligent Life: Wide-bandwidth Digital Instrumentation for the CSIRO Parkes 64-m Telescope(1804.04571) April 12, 2018 astro-ph.IM Breakthrough Listen is a ten-year initiative to search for signatures of technologies created by extraterrestrial civilizations at radio and optical wavelengths. Here, we detail the digital data recording system deployed for Breakthrough Listen observations at the 64-m aperture CSIRO Parkes Telescope in New South Wales, Australia. 
The recording system currently implements two recording modes: a dual-polarization, 1.125~GHz bandwidth mode for single beam observations, and a 26-input, 308~MHz bandwidth mode for the 21-cm multibeam receiver. The system is also designed to support a 3~GHz single-beam mode for the forthcoming Parkes ultra-wideband feed. In this paper, we present details of the system architecture, provide an overview of hardware and software, and present initial performance results. • ### No bursts detected from FRB121102 in two 5-hour observing campaigns with the Robert C. Byrd Green Bank Telescope(1802.04446) Feb. 13, 2018 astro-ph.HE Here, we report non-detection of radio bursts from Fast Radio Burst FRB 121102 during two 5-hour observation sessions on the Robert C. Byrd 100-m Green Bank Telescope in West Virginia, USA, on December 11, 2017, and January 12, 2018. In addition, we report non-detection during an abutting 10-hour observation with the Kunming 40-m telescope in China, which commenced UTC 10:00 January 12, 2018. These are among the longest published contiguous observations of FRB 121102, and support the notion that FRB 121102 bursts are episodic. These observations were part of a simultaneous optical and radio monitoring campaign with the Caltech HIgh-speed Multi-color CamERA (CHIMERA) instrument on the Hale 5.1-m telescope. • ### A Wideband Self-Consistent Disk-Averaged Spectrum of Jupiter Near 30 GHz and Its Implications for NH$_{3}$ Saturation in the Upper Troposphere(1801.07812) Jan. 23, 2018 astro-ph.EP We present a new set of measurements obtained with the Combined Array for Research in Millimeter-wave Astronomy (CARMA) of Jupiter's microwave thermal emission near the 1.3 cm ammonia (NH$_{3}$) absorption band. We use these observations to investigate the ammonia mole fraction in the upper troposphere, near $0.3 < P < 2$ bar, based on radiative transfer modeling.
We find that the ammonia mole fraction must be $\sim2.4\times 10^{-4}$ below the NH$_{3}$ ice cloud, i.e., at $0.8 < P < 8$ bar, in agreement with results by de Pater et al. (2001, 2016a). We find the NH$_{3}$ cloud-forming region between $0.3 < P < 0.8$ bar to be globally sub-saturated by 55% on average, in accordance with the result in Gibson et al. (2005). Although our data are not very sensitive to the region above the cloud layer, we are able to set an upper limit of $2.4\times 10^{-7}$ on the mole fraction here, a factor of $\sim$10 above the saturated vapor curve. • ### Improved 21 cm Epoch Of Reionization Power Spectrum Measurements with a Hybrid Foreground Subtraction and Avoidance Technique(1801.00460) Jan. 1, 2018 astro-ph.CO, astro-ph.IM Observations of the 21 cm Epoch of Reionization (EoR) signal are dominated by Galactic and extragalactic foregrounds. The need for foreground removal has led to the development of two main techniques, often referred to as "foreground avoidance" and "foreground subtraction". Avoidance is associated with filtering foregrounds in Fourier space, while subtraction uses an explicit foreground model that is removed. Using 1088 hours of data from the 64-element PAPER array, we demonstrate that the techniques can be combined to produce a result that is an improvement over either method independently. Relative to either technique on its own, our approach is shown to decrease the amount of foreground contamination due to band-limited spectral leakage in EoR power spectrum $k$-modes of interest for all redshifts. In comparison to just filtering alone, at $k=0.1 \ h{\rm Mpc}^{-1}$ we obtain a 6% sensitivity improvement at redshift z = 8.4 (middle of observing band) and with the proper choice of window function a 12% improvement at $z = 7.4$ and 24% at $z = 10.4$ (band edges). 
We also demonstrate these effects using a smaller 3 hour sampling of data from the MWA, and find that the hybrid filtering and subtraction removal approach provides similar improvements across the band as seen in the case with PAPER-64. • ### The Breakthrough Listen Search for Intelligent Life: A Wideband Data Recorder System for the Robert C. Byrd Green Bank Telescope(1707.06024) July 20, 2017 astro-ph.IM The Breakthrough Listen Initiative is undertaking a comprehensive search for radio and optical signatures from extraterrestrial civilizations. An integral component of the project is the design and implementation of wide-bandwidth data recorder and signal processing systems. The capabilities of these systems, particularly at radio frequencies, directly determine survey speed; further, given a fixed observing time and spectral coverage, they determine sensitivity as well. Here, we detail the Breakthrough Listen wide-bandwidth data recording system deployed at the 100-m aperture Robert C. Byrd Green Bank Telescope. The system digitizes up to 6 GHz of bandwidth at 8 bits for both polarizations, storing the resultant 24 GB/s of data to disk. This system is among the highest data rate baseband recording systems in use in radio astronomy. A future system expansion will double recording capacity, to achieve a total Nyquist bandwidth of 12 GHz in two polarizations. In this paper, we present details of the system architecture, along with salient configuration and disk-write optimizations used to achieve high-throughput data capture on commodity compute servers and consumer-class hard disk drives. • ### The detection of an extremely bright fast radio burst in a phased array feed survey(1705.07581) May 23, 2017 astro-ph.HE We report the detection of an ultra-bright fast radio burst (FRB) from a modest, 3.4-day pilot survey with the Australian Square Kilometre Array Pathfinder. 
The survey was conducted in a wide-field fly's-eye configuration using the phased-array-feed technology deployed on the array to instantaneously observe an effective area of $160$ deg$^2$, and achieve an exposure totaling $13200$ deg$^2$ hr. We constrain the position of FRB 170107 to a region $8'\times8'$ in size (90% containment) and its fluence to be $58\pm6$ Jy ms. The spectrum of the burst shows a sharp cutoff above $1400$ MHz, which could be either due to scintillation or an intrinsic feature of the burst. This confirms the existence of an ultra-bright ($>20$ Jy ms) population of FRBs. • ### New Limits on Polarized Power Spectra at 126 and 164 MHz: Relevance to Epoch of Reionization Measurements(1502.05072) Feb. 7, 2017 astro-ph.CO, astro-ph.IM Polarized foreground emission is a potential contaminant of attempts to measure the fluctuation power spectrum of highly redshifted 21 cm HI emission from the epoch of reionization. Using the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER), we present limits on the observed power spectra of all four Stokes parameters in two frequency bands, centered at 126 MHz ($z=10.3$) and 164 MHz ($z=7.66$) for a three-month observing campaign of a 32-antenna deployment, for which unpolarized power spectrum results have been reported at $z=7.7$ (Parsons et al 2014) and $7.5 < z < 10.5$ (Jacobs et al 2014). The power spectra in this paper are processed in the same way as in those works, and show no definitive detection of polarized power. This non-detection appears to be largely due to the suppression of polarized power by ionospheric rotation measure fluctuations, which strongly affect Stokes Q and U. We are able to show that the net effect of polarized leakage is a negligible contribution at the levels of the limits reported in Parsons et al 2014 and Jacobs et al 2014. • ### The Breakthrough Listen Search for Intelligent Life: Target Selection of Nearby Stars and Galaxies(1701.06227) Jan.
22, 2017 astro-ph.IM We present the target selection for the Breakthrough Listen search for extraterrestrial intelligence during the first year of observations at the Green Bank Telescope, Parkes Telescope and Automated Planet Finder. On the way to observing 1,000,000 nearby stars in search of technological signals, we present three main sets of objects we plan to observe in addition to a smaller sample of exotica. We choose the 60 nearest stars, all within 5.1 pc from the Sun. Such nearby stars offer the potential to observe faint radio signals from transmitters having a power similar to those on Earth. We add a list of 1649 stars drawn from the Hipparcos catalog that span the Hertzsprung-Russell diagram, including all spectral types along the main sequence, subgiants, and giant stars. This sample offers diversity and inclusion of all stellar types, but with thoughtful limits and due attention to main sequence stars. Our targets also include 123 nearby galaxies composed of a "morphological-type-complete" sample of the nearest spirals, ellipticals, dwarf spheroidals, and irregulars. While their great distances hamper the detection of technological electromagnetic radiation, galaxies offer the opportunity to observe billions of stars simultaneously and to sample the bright end of the technological luminosity function. We will also use the Green Bank and Parkes telescopes to survey the plane and central bulge of the Milky Way. Finally, the complete target list includes several classes of exotica, including white dwarfs, brown dwarfs, black holes, neutron stars, and asteroids in our Solar System. • ### The Hydrogen Epoch of Reionization Array Dish II: Characterization of Spectral Structure with Electromagnetic Simulations and its Science Implications(1602.06277) Sept. 14, 2016 astro-ph.CO, astro-ph.IM We use time-domain electromagnetic simulations to determine the spectral characteristics of the Hydrogen Epoch of Reionization Array (HERA) antenna.
These simulations are part of a multi-faceted campaign to determine the effectiveness of the dish's design for obtaining a detection of redshifted 21 cm emission from the epoch of reionization. Our simulations show the existence of reflections between HERA's suspended feed and its parabolic dish reflector that fall below -40 dB at 150 ns and, for reasonable impedance matches, have a negligible impact on HERA's ability to constrain EoR parameters. It follows that despite the reflections they introduce, dishes are effective for increasing the sensitivity of EoR experiments at relatively low cost. We find that electromagnetic resonances in the HERA feed's cylindrical skirt, which is intended to reduce cross coupling and beam ellipticity, introduce significant power at large delays ($-40$ dB at 200 ns) which can lead to some loss of measurable Fourier modes and a modest reduction in sensitivity. Even in the presence of this structure, we find that the spectral response of the antenna is sufficiently smooth for delay filtering to contain foreground emission at line-of-sight wave numbers below $k_\parallel \lesssim 0.2$ $h$Mpc$^{-1}$, in the region where the current PAPER experiment operates. Incorporating these results into a Fisher Matrix analysis, we find that the spectral structure observed in our simulations has only a small effect on the tight constraints HERA can achieve on parameters associated with the astrophysics of reionization. • ### Effects of Antenna Beam Chromaticity on Redshifted 21 cm Power Spectrum and Implications for Hydrogen Epoch of Reionization Array(1603.08958) March 29, 2016 astro-ph.CO Unaccounted-for systematics from foregrounds and instruments can severely limit the sensitivity of current experiments from detecting redshifted 21 cm signals from the Epoch of Reionization (EoR). Upcoming experiments are faced with a challenge to deliver more collecting area per antenna element without degrading the data with systematics.
This paper and its companions show that dishes are viable for achieving this balance using the Hydrogen Epoch of Reionization Array (HERA) as an example. Here, we specifically identify spectral systematics associated with the antenna power pattern as a significant detriment to all EoR experiments, which cause the already bright foreground power to leak well beyond ideal limits and contaminate the otherwise clean EoR signal modes. A primary source of this chromaticity is reflections in the antenna-feed assembly and between structures in neighboring antennas. Using precise foreground simulations taking wide-field effects into account, we provide a framework to set cosmologically-motivated design specifications on these reflections to prevent further EoR signal degradation. We show HERA will not be impeded by such spectral systematics and demonstrate that even in a conservative scenario that does not perform removal of foregrounds, HERA will detect EoR signal in line-of-sight $k$-modes, $k_\parallel \gtrsim 0.2\,h$ Mpc$^{-1}$, with high significance. All baselines in a 19-element HERA layout are capable of detecting EoR over a substantial observing window on the sky. • ### First Spectroscopic Imaging Observations of the Sun at Low Radio Frequencies with the Murchison Widefield Array Prototype(1101.0620) Jan. 3, 2011 astro-ph.SR We present the first spectroscopic images of solar radio transients from the prototype for the Murchison Widefield Array (MWA), observed on 2010 March 27. Our observations span the instantaneous frequency band 170.9-201.6 MHz. Though our observing period is characterized as a period of 'low' to 'medium' activity, one broadband emission feature and numerous short-lived, narrowband, non-thermal emission features are evident. Our data represent a significant advance in low radio frequency solar imaging, enabling us to follow the spatial, spectral, and temporal evolution of events simultaneously and in unprecedented detail.
The rich variety of features seen here reaffirms the coronal diagnostic capability of low radio frequency emission and provides an early glimpse of the nature of radio observations that will become available as the next generation of low frequency radio interferometers come on-line over the next few years.
2021-04-14 01:55:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4581983983516693, "perplexity": 2548.9887095439685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038076454.41/warc/CC-MAIN-20210414004149-20210414034149-00094.warc.gz"}
https://eccc.weizmann.ac.il/keyword/18581/
Under the auspices of the Computational Complexity Foundation (CCF) Reports tagged with Depth Lower Bounds: TR19-120 | 11th September 2019 Or Meir Toward Better Depth Lower Bounds: Two Results on the Multiplexor Relation Revisions: 1 One of the major open problems in complexity theory is proving super-logarithmic lower bounds on the depth of circuits (i.e., $\mathbf{P}\not\subseteq\mathbf{NC}^1$). Karchmer, Raz, and Wigderson (Computational Complexity 5, 3/4) suggested to approach this problem by proving that depth complexity behaves "as expected" with respect to the composition of functions $f$ ... more >>>
2021-04-17 23:21:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7743085622787476, "perplexity": 4562.352560310707}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038464065.57/warc/CC-MAIN-20210417222733-20210418012733-00585.warc.gz"}
http://elibm.org/article/10008454
## Second-order approximate symmetries of the geodesic equations for the Reissner-Nordström metric and re-scaling of energy of a test particle

### Summary

Following the use of approximate symmetries for the Schwarzschild spacetime by A.H. Kara, F.M. Mahomed and A. Qadir (Nonlinear Dynam., to appear), we have investigated the exact and approximate symmetries of the system of geodesic equations for the Reissner-Nordström spacetime (RN). For this purpose we are forced to use second-order approximate symmetries. It is shown that in the second-order approximation, energy must be rescaled for the RN metric. The implications of this rescaling are discussed.

MSC: 83C40, 70S10

### Keywords/Phrases

Reissner-Nordström metric, geodesic equations, second-order approximate symmetries
2019-05-19 09:37:00
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8755125403404236, "perplexity": 1427.2719136516066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232254731.5/warc/CC-MAIN-20190519081519-20190519103519-00000.warc.gz"}
https://socratic.org/questions/how-do-you-simplify-3sqrt-18-sqrt-48-2sqrt-6-sqrt-80
How do you simplify (3sqrt(18))/sqrt(48) - (2sqrt(6))/sqrt(80)? 1 Answer Jun 10, 2015 $\frac{9 \sqrt{2}}{4 \sqrt{3}} - \frac{2 \sqrt{6}}{4 \sqrt{5}}$ Explanation: Okay this might be wrong as I have only briefly touched this topic but this is what I would do: $\frac{3 \sqrt{9 \times 2}}{\sqrt{16 \times 3}} - \frac{2 \sqrt{6}}{\sqrt{16 \times 5}}$ Which equals $\frac{9 \sqrt{2}}{4 \sqrt{3}} - \frac{2 \sqrt{6}}{4 \sqrt{5}}$ I hope this is right, I'm sure someone will correct me if I'm wrong.
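The answer's intermediate form can be pushed one step further by rationalizing the denominators: $\frac{9\sqrt{2}}{4\sqrt{3}} = \frac{3\sqrt{6}}{4}$ and $\frac{2\sqrt{6}}{4\sqrt{5}} = \frac{\sqrt{30}}{10}$, giving $\frac{15\sqrt{6} - 2\sqrt{30}}{20}$. A quick numeric check of all three forms (a Python sketch added for verification; not part of the original answer):

```python
import math

# Original expression: (3*sqrt(18))/sqrt(48) - (2*sqrt(6))/sqrt(80)
original = 3 * math.sqrt(18) / math.sqrt(48) - 2 * math.sqrt(6) / math.sqrt(80)

# Intermediate form from the answer: 9*sqrt(2)/(4*sqrt(3)) - 2*sqrt(6)/(4*sqrt(5))
intermediate = 9 * math.sqrt(2) / (4 * math.sqrt(3)) - 2 * math.sqrt(6) / (4 * math.sqrt(5))

# Fully rationalized form: (15*sqrt(6) - 2*sqrt(30)) / 20
simplified = (15 * math.sqrt(6) - 2 * math.sqrt(30)) / 20

assert math.isclose(original, intermediate)
assert math.isclose(original, simplified)
print(round(original, 6))
```

All three evaluate to the same number, so the intermediate step in the answer is correct; it just stops short of a rationalized denominator.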
2022-10-05 02:17:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.806914210319519, "perplexity": 344.88347807806696}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337531.3/warc/CC-MAIN-20221005011205-20221005041205-00111.warc.gz"}
http://www.physicsforums.com/showthread.php?t=487048
# Commutation of differentiation and averaging operations by plasmoid Tags: average, calculus, turbulence P: 15 I've been studying Turbulence, and there's a lot of averaging of differential equations involved. The books I've seen remark offhandedly that differentiation and averaging commute, e.g. $$\left\langle \frac{df}{dt} \right\rangle = \frac{d}{dt}\left\langle f \right\rangle$$ Here $\langle \cdot \rangle$ is temporal averaging. If the differentiation is w.r.t. a spatial coordinate it makes sense, but could someone help me with the above equation?
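For a concrete sanity check, take the sliding-window time average $\langle f\rangle(t) = \frac{1}{T}\int_{t-T}^{t} f(s)\,ds$. By the fundamental theorem of calculus, $\frac{d}{dt}\langle f\rangle(t) = \frac{f(t)-f(t-T)}{T} = \langle \dot f\rangle(t)$, so this particular averaging operation commutes with $d/dt$ exactly. A small numerical illustration (a Python sketch; the window length and test signal are arbitrary choices, not from the thread):

```python
import math

T = 0.5        # averaging window length (arbitrary choice)
h = 1e-5       # step for the central finite difference

def window_avg(g, t, n=2000):
    """Sliding average <g>(t) = (1/T) * integral of g over [t-T, t], trapezoid rule."""
    dt = T / n
    total = 0.5 * (g(t - T) + g(t))
    for i in range(1, n):
        total += g(t - T + i * dt)
    return total * dt / T

t0 = 1.3
# d/dt of the average of f = sin, by central difference
d_of_avg = (window_avg(math.sin, t0 + h) - window_avg(math.sin, t0 - h)) / (2 * h)
# average of the derivative df/dt = cos
avg_of_d = window_avg(math.cos, t0)

assert abs(d_of_avg - avg_of_d) < 1e-6   # the two agree to numerical precision
```

The subtlety in turbulence texts is that for ensemble averages, or time averages over a window independent of $t$, the average is a linear operation with $t$-independent weights, which is exactly why it commutes with $d/dt$.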
2013-12-06 12:38:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7105089426040649, "perplexity": 1675.312935881929}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163051588/warc/CC-MAIN-20131204131731-00013-ip-10-33-133-15.ec2.internal.warc.gz"}
https://www.esaral.com/q/if-show-that-69848/
If $A=\left[\begin{array}{ll}0 & 1 \\ 1 & 1\end{array}\right]$ and $B=\left[\begin{array}{cc}0 & -1 \\ 1 & 0\end{array}\right]$, show that $(A + B)(A - B) \neq A^{2} - B^{2}$

Question: Show that $(A + B)(A - B) \neq A^{2} - B^{2}$ for the matrices $A$ and $B$ above.

Solution: Given, $A=\left[\begin{array}{ll}0 & 1 \\ 1 & 1\end{array}\right]$ and $B=\left[\begin{array}{cc}0 & -1 \\ 1 & 0\end{array}\right]$

So, $(A+B)=\left[\begin{array}{cc}0+0 & 1-1 \\ 1+1 & 1+0\end{array}\right]=\left[\begin{array}{ll}0 & 0 \\ 2 & 1\end{array}\right]$

And, $(A-B)=\left[\begin{array}{ll}0-0 & 1+1 \\ 1-1 & 1-0\end{array}\right]=\left[\begin{array}{ll}0 & 2 \\ 0 & 1\end{array}\right]$

$(A+B) \cdot(A-B)=\left[\begin{array}{ll}0 & 0 \\ 2 & 1\end{array}\right]\left[\begin{array}{ll}0 & 2 \\ 0 & 1\end{array}\right]=\left[\begin{array}{ll}0+0 & 0+0 \\ 0+0 & 4+1\end{array}\right]=\left[\begin{array}{ll}0 & 0 \\ 0 & 5\end{array}\right]$ $\ldots$ (i)

Also, $A^{2}=A \cdot A=\left[\begin{array}{ll}0 & 1 \\ 1 & 1\end{array}\right] \cdot\left[\begin{array}{ll}0 & 1 \\ 1 & 1\end{array}\right]=\left[\begin{array}{ll}0+1 & 0+1 \\ 0+1 & 1+1\end{array}\right]=\left[\begin{array}{ll}1 & 1 \\ 1 & 2\end{array}\right]$

And, $B^{2}=B \cdot B=\left[\begin{array}{cc}0 & -1 \\ 1 & 0\end{array}\right]\left[\begin{array}{cc}0 & -1 \\ 1 & 0\end{array}\right]=\left[\begin{array}{cc}0-1 & 0+0 \\ 0+0 & -1+0\end{array}\right]=\left[\begin{array}{cc}-1 & 0 \\ 0 & -1\end{array}\right]$

Therefore, $A^{2}-B^{2}=\left[\begin{array}{ll}1 & 1 \\ 1 & 2\end{array}\right]-\left[\begin{array}{cc}-1 & 0 \\ 0 & -1\end{array}\right]=\left[\begin{array}{ll}2 & 1 \\ 1 & 3\end{array}\right]$ $\ldots$ (ii)

Hence, from (i) and (ii), $(A + B)(A - B) \neq A^{2} - B^{2}$
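The matrix arithmetic is easy to machine-check. The sketch below (Python, added for verification; not part of the original solution) multiplies the matrices directly and confirms both intermediate results:

```python
def matmul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def madd(X, Y, sign=1):
    """Entrywise X + sign*Y for 2x2 matrices."""
    return [[X[i][j] + sign * Y[i][j] for j in range(2)] for i in range(2)]

A = [[0, 1], [1, 1]]
B = [[0, -1], [1, 0]]

lhs = matmul(madd(A, B), madd(A, B, -1))      # (A+B)(A-B)
rhs = madd(matmul(A, A), matmul(B, B), -1)    # A^2 - B^2

assert lhs == [[0, 0], [0, 5]]   # equation (i)
assert rhs == [[2, 1], [1, 3]]   # equation (ii)
assert lhs != rhs
```

The inequality arises because $(A+B)(A-B) = A^2 - AB + BA - B^2$, and here $AB \neq BA$.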
2022-05-29 04:55:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7586116790771484, "perplexity": 5104.352679384614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663039492.94/warc/CC-MAIN-20220529041832-20220529071832-00051.warc.gz"}
https://math.stackexchange.com/questions/3235963/multiple-sensor-pointing-optimization-formulation-and-is-milp-the-correct-appro
# Multiple Sensor Pointing Optimization: Formulation and is MILP the correct approach So I am attempting to optimize the following pointing problem. I have a set of sensors (cameras) and a set of targets. Each camera can be oriented directly at one target, but based on its FOV it may see multiple targets. The goal is to cover as many targets as possible at the user-defined coverage level (e.g., 2 cameras for stereo). I initially took the approach of using MATLAB and intlinprog and will show the formulation, but seeing as I am new to MILP I am open to criticism and suggestions. Problem formulation Given: 1. S sensors 2. T targets 3. P pointing options for each of the S sensors 4. T == P 5. R required coverage level (i.e., 2 for stereo or 1 for mono) Problem Structure: The source data for the problem is a logical/binary 3D array called Vis (P x T x S) indexed (i,j,k) 1. Each layer (3rd dimension, S) represents a camera 2. Each row in a layer represents a pointing option 3. The column in a given row and layer is true if that target is seen Vis(i,j,k) = true if Camera(k) can see target (j) using pointing option (i) Constraints: 1. Each camera can be assigned only one pointing option Goal: Maximize the number of targets that have at least the required coverage level. I am going to insert an example of my code below. Some notes are that I take the matrix Vis and convert it to a 2-D array by concatenating each "layer" along the first dimension. There is a binary decision variable for each pointing option of each camera, and a binary switching variable per target used to count targets that reach the desired coverage level.
S = 156; % 156 cameras
T = 100; % 100 targets
P = T;   % pointing options (one per target)
R = 2;   % required coverage level (stereo)
% Load Vis matrix; it is (P x T x S)
% When the matrix is very scattered the solver works fine; the problem is
% that a few cameras can see a lot of targets and some can see none
% Use permute and reshape so that each 3rd-dimension layer is
% concatenated onto the bottom of the previous one
VisNew = permute(Vis,[1 3 2]);
VisNew = reshape(VisNew,[],size(Vis,2),1);
[m,n] = size(VisNew);
%% Optimization Problem Setup
% The first set of variables is the logical t/f for the pointing
% options (m). The last set of logical t/f is the coverage variables (n)
NumVars = m + n;
% All variables are integers
prob_struct.intcon = 1:NumVars;
% All variables have lb of 0 and ub of 1
prob_struct.lb = zeros(NumVars,1);
prob_struct.ub = ones(NumVars,1);
% The "maximization" (min for the function) is the sum of the "required
% coverage" switching variables, which are only true if the required
% coverage is obtained
prob_struct.f = [zeros(m,1); -1*ones(n,1)];
% The equality constraint comes from each camera being assignable to only
% one pointing option, so the option variables for each camera should sum to 1
prob_struct.Aeq = zeros(S,NumVars);
for x = 1:S
    prob_struct.Aeq(x,((x-1)*P+1:x*P)) = 1;
end
prob_struct.beq = ones(S,1);
% The inequality constraint ensures that the coverage variable is only
% switched on when there is enough coverage for that target
prob_struct.Aineq = [VisNew'.*-1 R.*eye(n)];
prob_struct.bineq = zeros(n,1);
% Define which solver to use
prob_struct.solver = 'intlinprog';
prob_struct.options = optimoptions('intlinprog');
% Solve the problem
[X,Y] = intlinprog(prob_struct);
• I didn't read your code, but if you are just looking for a comment about whether MILP is the right approach here, I would say yes, it is. This seems like a variant of the classical set covering problem, which is often solved via IP.
– LarrySnyder610 May 22 at 20:48 • I was looking more for general input on whether there is a better approach. I provided the code for more clarification on the problem setup and in case anyone wanted to look at it. I will take a look at the set covering problem. – S moran May 23 at 0:58 • To be clear, I'm not saying your problem is set covering. Just some further evidence that MILP is a reasonable approach here. – LarrySnyder610 May 23 at 3:08 • I suspect you might get more eyeballs on the question if you wrote your formulation in mathematical notation rather than in MATLAB syntax (which is harder to read for those of us who do not use MATLAB). – prubin May 23 at 19:48 • Agreed — you can take advantage of MathJax. – LarrySnyder610 May 24 at 16:49
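For intuition on the formulation, independent of MATLAB: the objective — choose one pointing option per camera so as to maximize the number of targets seen by at least R cameras — can be brute-forced on a toy instance. The sketch below is Python with made-up visibility data (3 cameras, 4 targets, 4 pointing options), purely to illustrate the objective the MILP encodes; it is not a substitute for intlinprog at realistic sizes:

```python
from itertools import product

R = 2  # required coverage level (stereo)
# Vis[k][i][j] = 1 if camera k, using pointing option i, sees target j (toy data, assumed)
Vis = [
    [[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1], [0, 0, 0, 1]],   # camera 0
    [[1, 0, 0, 1], [1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]],   # camera 1
    [[0, 1, 0, 1], [1, 0, 1, 0], [1, 1, 0, 0], [0, 0, 1, 1]],   # camera 2
]
n_targets = 4

def covered(assignment):
    """Number of targets seen by at least R cameras under the given pointing choice."""
    counts = [0] * n_targets
    for k, i in enumerate(assignment):
        for j in range(n_targets):
            counts[j] += Vis[k][i][j]
    return sum(c >= R for c in counts)

# Enumerate one pointing option per camera (the MILP's equality constraint)
best = max(product(range(4), repeat=3), key=covered)
print(f"best assignment {best} covers {covered(best)} targets")
```

The brute force enumerates the same feasible set the equality constraints define, and `covered` plays the role of the sum of the switching variables in the objective.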
2019-10-23 09:58:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5570065975189209, "perplexity": 1790.8708926378538}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987833089.90/warc/CC-MAIN-20191023094558-20191023122058-00126.warc.gz"}
https://www.transtutors.com/questions/5-22-loan-amortization-jan-sold-her-house-on-december-31-and-took-a-10-000-mortgage--1334957.htm
# 5-22 LOAN AMORTIZATION

Jan sold her house on December 31 and took a $10,000 mortgage as part of the payment. The 10-year mortgage has a 10% nominal interest rate, but it calls for semiannual payments beginning next June 30. Next year Jan must report on Schedule B of her IRS Form 1040 the amount of interest that was included in the two payments she received during the year.

a. What is the dollar amount of each payment Jan receives?

b. How much interest was included in the first payment? How much repayment of principal was included? How do these values change for the second payment?

c. How much interest must Jan report on Schedule B for the first year? Will her interest income be the same next year?

d. If the payments are constant, why does the amount of interest income change over time?

a) The payment is found with the PMT function in a spreadsheet: Step 1: Insert the PMT function. Step 2: Enter the values Rate = 10%/2, Nper = 10*2, PV = -10000. Step 3: Click OK to get the desired value. The value comes to $802.43
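The spreadsheet PMT recipe corresponds to the standard annuity formula $PMT = PV \cdot i / \left(1 - (1+i)^{-n}\right)$ with $i = 0.10/2$ and $n = 10 \times 2$. A short Python sketch (not part of the original answer) computes the payment and the first-year interest asked for in parts (b) and (c):

```python
rate = 0.10 / 2          # 5% per semiannual period
n_payments = 10 * 2      # 20 semiannual payments over 10 years
pv = 10_000.0            # mortgage principal

# Annuity payment: PMT = PV * i / (1 - (1 + i)^-n)
pmt = pv * rate / (1 - (1 + rate) ** -n_payments)

balance = pv
first_year_interest = 0.0
for _ in range(2):                   # the two payments received in year 1
    interest = balance * rate        # interest portion of this payment
    principal = pmt - interest       # remainder repays principal
    balance -= principal
    first_year_interest += interest

print(round(pmt, 2))                  # payment amount, part (a)
print(round(first_year_interest, 2))  # Schedule B interest for year 1, part (c)
```

Because the balance falls with each payment, the interest portion shrinks and the principal portion grows over time, which answers part (d).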
2018-11-17 04:32:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3640262186527252, "perplexity": 3455.3224564712236}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743282.69/warc/CC-MAIN-20181117040838-20181117062838-00253.warc.gz"}
https://www.physicsforums.com/threads/nonconservative-lagrangian-systems.176428/
# Nonconservative Lagrangian systems

1. Jul 9, 2007

### jdstokes

Hi, I'm in the process of teaching myself Lagrangian and Hamiltonian dynamics. In my readings, I've found that some velocity-dependent forces (e.g. the Lorentz force) can be derived from a classical Lagrangian whereas others such as friction cannot. Do there exist necessary and sufficient conditions to decide when velocity-dependent forces are obtainable from a Lagrangian, in the same way that velocity-independent forces can be obtained from a Lagrangian iff they are the gradient of a potential function? James

2. Jul 9, 2007

### pervect

Staff Emeritus If all the forces on a system can be derived from a potential, you can use Lagrange's equation. If you have forces that can't be derived from a potential, one has the option of re-writing Lagrange's equation (Goldstein, Classical Mechanics, pg 23-24) $$\frac{d}{dt} \left( \frac{\partial L}{\partial \dot{q}_j} \right) - \frac{\partial L}{\partial q_j} = Q_j$$ where $Q_j$ is a generalized force that can include forces not derivable from a potential. In many cases, $Q_j$ can be specified via a dissipation function $\mathcal{F}$ so that $$Q_j = -\frac{\partial \mathcal{F}}{\partial \dot{q}_j}$$ One then needs to specify L and $\mathcal{F}$ to get the equations of motion.

3. Jul 9, 2007

### jdstokes

Hi pervect, I think we need to distinguish between deriving forces from the Lagrangian and adding them directly to the RHS of the E-L equations. For the Lorentz force, we start with the Lagrangian $L = (1/2)m \dot{\mathbf{r}} \cdot \dot{\mathbf{r}} - e V(\mathbf{r},t) + e \dot{\mathbf{r}}\cdot \mathbf{A}(\mathbf{r},t)$ and then apply the standard, unmodified E-L equation $\frac{d}{dt}\frac{\partial L}{\partial \dot{\mathbf{r}}} - \frac{\partial L}{\partial \mathbf{r}} = 0$ to obtain the velocity-dependent Lorentz force $m\ddot{\mathbf{r}} = e[\mathbf{E} + \dot{\mathbf{r}}\times \mathbf{B}]$.
So we can say that these equations of motion are obtainable directly from a Lagrangian. If we consider linear frictional forces, on the other hand, we cannot obtain these directly from a Lagrangian; we must add them in later to the RHS of the E-L equations as you describe.

My question: what determines whether a velocity-dependent force is directly obtainable from a Lagrangian?

Thanks

James

4. Jul 11, 2007

### StatusX

The second-to-last line in the derivation of Lagrange's equations is:

$$\frac{d}{dt}\left(\frac{\partial T}{\partial \dot q_i} \right) - \frac{\partial T}{\partial q_i} = Q_i$$

where $Q_i$ is the generalized force. Then if there is a function $V(q_1,...,q_n,t)$ with:

$$\frac{\partial V}{\partial q_i} = - Q_i$$

then you get Lagrange's equations (note this holds in one coordinate system iff it holds in all of them). But more generally, as long as you can find a function $V(q_1,...,q_n,\dot q_1,...,\dot q_n,t)$ with:

$$Q_i = \frac{d}{dt}\left(\frac{\partial V}{\partial \dot q_i} \right) - \frac{\partial V}{\partial q_i}$$

then you can just take it as your $V$ when you define $L=T-V$ and you get the correct equations. It can be shown that the Lorentz force is of this form with:

$$V = q \phi - q \vec v \cdot \vec A$$
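As a sanity check on this, one can let a computer algebra system grind through the unmodified Euler-Lagrange equations for $L = T - e\phi + e\,\dot{\mathbf r}\cdot\mathbf A$ (the sign convention matching StatusX's $V = q\phi - q\vec v\cdot\vec A$). The sketch below uses sympy with a hypothetical uniform-field configuration, $\phi = -E_0 x$ and $\vec A = (0, B_0 x, 0)$, i.e. $\vec E = E_0\hat x$ and $\vec B = B_0\hat z$, and recovers the Lorentz force componentwise:

```python
import sympy as sp

t = sp.symbols('t')
m, e, E0, B0 = sp.symbols('m e E0 B0', positive=True)
x, y = sp.Function('x')(t), sp.Function('y')(t)
xd, yd = x.diff(t), y.diff(t)

# Uniform fields: phi = -E0*x  =>  E = E0 x_hat;  A = (0, B0*x, 0)  =>  B = B0 z_hat
phi = -E0 * x
Ax, Ay = 0, B0 * x

# Velocity-dependent-potential Lagrangian: L = T - e*phi + e*(v . A)
L = sp.Rational(1, 2) * m * (xd**2 + yd**2) - e * phi + e * (xd * Ax + yd * Ay)

def EL(L, q):
    """Left-hand side of the (unmodified) Euler-Lagrange equation for coordinate q."""
    return sp.diff(L.diff(q.diff(t)), t) - sp.diff(L, q)

eq_x = sp.expand(EL(L, x))   # m*x'' - e*E0 - e*B0*y'
eq_y = sp.expand(EL(L, y))   # m*y'' + e*B0*x'
```

Both components agree term-by-term with $m\ddot{\mathbf r} = e[\mathbf E + \dot{\mathbf r}\times\mathbf B]$ for these fields.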
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/42/3/b/a/
# Properties

Label: 42.3.b.a
Level: $42$
Weight: $3$
Character orbit: 42.b
Analytic conductor: $1.144$
Analytic rank: $0$
Dimension: $4$
CM: no
Inner twists: $2$

# Related objects

## Newspace parameters

Level: $$N$$ $$=$$ $$42 = 2 \cdot 3 \cdot 7$$
Weight: $$k$$ $$=$$ $$3$$
Character orbit: $$[\chi]$$ $$=$$ 42.b (of order $$2$$, degree $$1$$, minimal)

## Newform invariants

Self dual: no
Analytic conductor: $$1.14441711031$$
Analytic rank: $$0$$
Dimension: $$4$$
Coefficient field: $$\Q(\sqrt{-2}, \sqrt{7})$$
Defining polynomial: $$x^{4} + 8 x^{2} + 9$$
Coefficient ring: $$\Z[a_1, a_2, a_3]$$
Coefficient ring index: $$2$$
Twist minimal: yes
Sato-Tate group: $\mathrm{SU}(2)[C_{2}]$

## $q$-expansion

Coefficients of the $$q$$-expansion are expressed in terms of a basis $$1,\beta_1,\beta_2,\beta_3$$ for the coefficient ring described below. We also show the integral $$q$$-expansion of the trace form.

$$f(q)$$ $$=$$ $$q -\beta_{1} q^{2} + ( -\beta_{1} + \beta_{3} ) q^{3} -2 q^{4} + ( -2 \beta_{1} + \beta_{2} ) q^{5} + ( -2 - \beta_{2} ) q^{6} -\beta_{3} q^{7} + 2 \beta_{1} q^{8} + ( 5 - 2 \beta_{2} ) q^{9} +O(q^{10})$$

$$q -\beta_{1} q^{2} + ( -\beta_{1} + \beta_{3} ) q^{3} -2 q^{4} + ( -2 \beta_{1} + \beta_{2} ) q^{5} + ( -2 - \beta_{2} ) q^{6} -\beta_{3} q^{7} + 2 \beta_{1} q^{8} + ( 5 - 2 \beta_{2} ) q^{9} + ( -4 + 2 \beta_{3} ) q^{10} + ( 5 \beta_{1} + 2 \beta_{2} ) q^{11} + ( 2 \beta_{1} - 2 \beta_{3} ) q^{12} + ( 10 - 4 \beta_{3} ) q^{13} + \beta_{2} q^{14} + ( -4 + 7 \beta_{1} - 2 \beta_{2} + 2 \beta_{3} ) q^{15} + 4 q^{16} + ( 2 \beta_{1} + 5 \beta_{2} ) q^{17} + ( -5 \beta_{1} - 4 \beta_{3} ) q^{18} -16 q^{19} + ( 4 \beta_{1} - 2 \beta_{2} ) q^{20} + ( -7 + \beta_{2} ) q^{21} + ( 10 + 4 \beta_{3} ) q^{22} + ( -\beta_{1} - 10 \beta_{2} ) q^{23} + ( 4 + 2 \beta_{2} ) q^{24} + ( 3 + 8 \beta_{3} ) q^{25} + ( -10 \beta_{1} + 4 \beta_{2} ) q^{26} + ( -19 \beta_{1} + \beta_{3} ) q^{27} + 2 \beta_{3} q^{28} + ( -20 \beta_{1} - 2 \beta_{2} ) q^{29} + ( 14 + 4 \beta_{1} - 2
\beta_{2} - 4 \beta_{3} ) q^{30} + ( -32 - 10 \beta_{3} ) q^{31} -4 \beta_{1} q^{32} + ( 10 + 14 \beta_{1} + 5 \beta_{2} + 4 \beta_{3} ) q^{33} + ( 4 + 10 \beta_{3} ) q^{34} + ( -7 \beta_{1} + 2 \beta_{2} ) q^{35} + ( -10 + 4 \beta_{2} ) q^{36} + 20 q^{37} + 16 \beta_{1} q^{38} + ( -28 - 10 \beta_{1} + 4 \beta_{2} + 10 \beta_{3} ) q^{39} + ( 8 - 4 \beta_{3} ) q^{40} + ( 30 \beta_{1} - 9 \beta_{2} ) q^{41} + ( 7 \beta_{1} + 2 \beta_{3} ) q^{42} + ( 20 - 12 \beta_{3} ) q^{43} + ( -10 \beta_{1} - 4 \beta_{2} ) q^{44} + ( 28 - 10 \beta_{1} + 5 \beta_{2} - 8 \beta_{3} ) q^{45} + ( -2 - 20 \beta_{3} ) q^{46} + 6 \beta_{1} q^{47} + ( -4 \beta_{1} + 4 \beta_{3} ) q^{48} + 7 q^{49} + ( -3 \beta_{1} - 8 \beta_{2} ) q^{50} + ( 4 + 35 \beta_{1} + 2 \beta_{2} + 10 \beta_{3} ) q^{51} + ( -20 + 8 \beta_{3} ) q^{52} + 36 \beta_{1} q^{53} + ( -38 - \beta_{2} ) q^{54} + ( -8 - 2 \beta_{3} ) q^{55} -2 \beta_{2} q^{56} + ( 16 \beta_{1} - 16 \beta_{3} ) q^{57} + ( -40 - 4 \beta_{3} ) q^{58} + ( -20 \beta_{1} - 8 \beta_{2} ) q^{59} + ( 8 - 14 \beta_{1} + 4 \beta_{2} - 4 \beta_{3} ) q^{60} + ( -14 + 20 \beta_{3} ) q^{61} + ( 32 \beta_{1} + 10 \beta_{2} ) q^{62} + ( 14 \beta_{1} - 5 \beta_{3} ) q^{63} -8 q^{64} + ( -48 \beta_{1} + 18 \beta_{2} ) q^{65} + ( 28 - 10 \beta_{1} - 4 \beta_{2} + 10 \beta_{3} ) q^{66} + ( 60 + 4 \beta_{3} ) q^{67} + ( -4 \beta_{1} - 10 \beta_{2} ) q^{68} + ( -2 - 70 \beta_{1} - \beta_{2} - 20 \beta_{3} ) q^{69} + ( -14 + 4 \beta_{3} ) q^{70} + ( -25 \beta_{1} + 14 \beta_{2} ) q^{71} + ( 10 \beta_{1} + 8 \beta_{3} ) q^{72} + ( 30 + 16 \beta_{3} ) q^{73} -20 \beta_{1} q^{74} + ( 56 - 3 \beta_{1} - 8 \beta_{2} + 3 \beta_{3} ) q^{75} + 32 q^{76} + ( -14 \beta_{1} - 5 \beta_{2} ) q^{77} + ( -20 + 28 \beta_{1} - 10 \beta_{2} + 8 \beta_{3} ) q^{78} + ( -32 + 20 \beta_{3} ) q^{79} + ( -8 \beta_{1} + 4 \beta_{2} ) q^{80} + ( -31 - 20 \beta_{2} ) q^{81} + ( 60 - 18 \beta_{3} ) q^{82} + ( 50 \beta_{1} + 20 \beta_{2} ) q^{83} + ( 14 - 2 \beta_{2} ) q^{84} + ( -62 + 16 
\beta_{3} ) q^{85} + ( -20 \beta_{1} + 12 \beta_{2} ) q^{86} + ( -40 - 14 \beta_{1} - 20 \beta_{2} - 4 \beta_{3} ) q^{87} + ( -20 - 8 \beta_{3} ) q^{88} + ( -30 \beta_{1} - 3 \beta_{2} ) q^{89} + ( -20 - 28 \beta_{1} + 8 \beta_{2} + 10 \beta_{3} ) q^{90} + ( 28 - 10 \beta_{3} ) q^{91} + ( 2 \beta_{1} + 20 \beta_{2} ) q^{92} + ( -70 + 32 \beta_{1} + 10 \beta_{2} - 32 \beta_{3} ) q^{93} + 12 q^{94} + ( 32 \beta_{1} - 16 \beta_{2} ) q^{95} + ( -8 - 4 \beta_{2} ) q^{96} + ( -90 - 8 \beta_{3} ) q^{97} -7 \beta_{1} q^{98} + ( 56 + 25 \beta_{1} + 10 \beta_{2} + 20 \beta_{3} ) q^{99} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$4q - 8q^{4} - 8q^{6} + 20q^{9} + O(q^{10})$$ $$4q - 8q^{4} - 8q^{6} + 20q^{9} - 16q^{10} + 40q^{13} - 16q^{15} + 16q^{16} - 64q^{19} - 28q^{21} + 40q^{22} + 16q^{24} + 12q^{25} + 56q^{30} - 128q^{31} + 40q^{33} + 16q^{34} - 40q^{36} + 80q^{37} - 112q^{39} + 32q^{40} + 80q^{43} + 112q^{45} - 8q^{46} + 28q^{49} + 16q^{51} - 80q^{52} - 152q^{54} - 32q^{55} - 160q^{58} + 32q^{60} - 56q^{61} - 32q^{64} + 112q^{66} + 240q^{67} - 8q^{69} - 56q^{70} + 120q^{73} + 224q^{75} + 128q^{76} - 80q^{78} - 128q^{79} - 124q^{81} + 240q^{82} + 56q^{84} - 248q^{85} - 160q^{87} - 80q^{88} - 80q^{90} + 112q^{91} - 280q^{93} + 48q^{94} - 32q^{96} - 360q^{97} + 224q^{99} + O(q^{100})$$ Basis of coefficient ring in terms of a root $$\nu$$ of $$x^{4} + 8 x^{2} + 9$$: $$\beta_{0}$$ $$=$$ $$1$$ $$\beta_{1}$$ $$=$$ $$($$$$\nu^{3} + 5 \nu$$$$)/3$$ $$\beta_{2}$$ $$=$$ $$($$$$\nu^{3} + 11 \nu$$$$)/3$$ $$\beta_{3}$$ $$=$$ $$\nu^{2} + 4$$ $$1$$ $$=$$ $$\beta_0$$ $$\nu$$ $$=$$ $$($$$$\beta_{2} - \beta_{1}$$$$)/2$$ $$\nu^{2}$$ $$=$$ $$\beta_{3} - 4$$ $$\nu^{3}$$ $$=$$ $$($$$$-5 \beta_{2} + 11 \beta_{1}$$$$)/2$$ ## Character values We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/42\mathbb{Z}\right)^\times$$. 
$$n$$ $$29$$ $$31$$ $$\chi(n)$$ $$-1$$ $$1$$ ## Embeddings For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below. For more information on an embedded modular form you can click on its label. Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$ 29.1 − 2.57794i 1.16372i 2.57794i − 1.16372i 1.41421i −2.64575 1.41421i −2.00000 6.57008i −2.00000 + 3.74166i 2.64575 2.82843i 5.00000 + 7.48331i −9.29150 29.2 1.41421i 2.64575 1.41421i −2.00000 0.913230i −2.00000 3.74166i −2.64575 2.82843i 5.00000 7.48331i 1.29150 29.3 1.41421i −2.64575 + 1.41421i −2.00000 6.57008i −2.00000 3.74166i 2.64575 2.82843i 5.00000 7.48331i −9.29150 29.4 1.41421i 2.64575 + 1.41421i −2.00000 0.913230i −2.00000 + 3.74166i −2.64575 2.82843i 5.00000 + 7.48331i 1.29150 $$n$$: e.g. 2-40 or 990-1000 Significant digits: Format: Complex embeddings Normalized embeddings Satake parameters Satake angles ## Inner twists Char Parity Ord Mult Type 1.a even 1 1 trivial 3.b odd 2 1 inner ## Twists By twisting character orbit Char Parity Ord Mult Type Twist Min Dim 1.a even 1 1 trivial 42.3.b.a 4 3.b odd 2 1 inner 42.3.b.a 4 4.b odd 2 1 336.3.d.b 4 5.b even 2 1 1050.3.e.a 4 5.c odd 4 2 1050.3.c.a 8 7.b odd 2 1 294.3.b.h 4 7.c even 3 2 294.3.h.d 8 7.d odd 6 2 294.3.h.g 8 8.b even 2 1 1344.3.d.c 4 8.d odd 2 1 1344.3.d.e 4 9.c even 3 2 1134.3.q.a 8 9.d odd 6 2 1134.3.q.a 8 12.b even 2 1 336.3.d.b 4 15.d odd 2 1 1050.3.e.a 4 15.e even 4 2 1050.3.c.a 8 21.c even 2 1 294.3.b.h 4 21.g even 6 2 294.3.h.g 8 21.h odd 6 2 294.3.h.d 8 24.f even 2 1 1344.3.d.e 4 24.h odd 2 1 1344.3.d.c 4 By twisted newform orbit Twist Min Dim Char Parity Ord Mult Type 42.3.b.a 4 1.a even 1 1 trivial 42.3.b.a 4 3.b odd 2 1 inner 294.3.b.h 4 7.b odd 2 1 294.3.b.h 4 21.c even 2 1 294.3.h.d 8 7.c even 3 2 294.3.h.d 8 21.h odd 6 2 294.3.h.g 8 7.d odd 6 2 294.3.h.g 8 21.g even 6 2 336.3.d.b 4 4.b odd 2 1 336.3.d.b 4 12.b even 2 1 1050.3.c.a 8 5.c 
odd 4 2 1050.3.c.a 8 15.e even 4 2 1050.3.e.a 4 5.b even 2 1 1050.3.e.a 4 15.d odd 2 1 1134.3.q.a 8 9.c even 3 2 1134.3.q.a 8 9.d odd 6 2 1344.3.d.c 4 8.b even 2 1 1344.3.d.c 4 24.h odd 2 1 1344.3.d.e 4 8.d odd 2 1 1344.3.d.e 4 24.f even 2 1 ## Hecke kernels This newform subspace is the entire newspace $$S_{3}^{\mathrm{new}}(42, [\chi])$$. ## Hecke characteristic polynomials $p$ $F_p(T)$ $2$ $$( 1 + 2 T^{2} )^{2}$$ $3$ $$1 - 10 T^{2} + 81 T^{4}$$ $5$ $$1 - 56 T^{2} + 1586 T^{4} - 35000 T^{6} + 390625 T^{8}$$ $7$ $$( 1 - 7 T^{2} )^{2}$$ $11$ $$1 - 272 T^{2} + 36578 T^{4} - 3982352 T^{6} + 214358881 T^{8}$$ $13$ $$( 1 - 20 T + 326 T^{2} - 3380 T^{3} + 28561 T^{4} )^{2}$$ $17$ $$1 - 440 T^{2} + 204242 T^{4} - 36749240 T^{6} + 6975757441 T^{8}$$ $19$ $$( 1 + 16 T + 361 T^{2} )^{4}$$ $23$ $$1 + 688 T^{2} + 666818 T^{4} + 192530608 T^{6} + 78310985281 T^{8}$$ $29$ $$1 - 1652 T^{2} + 1917638 T^{4} - 1168428212 T^{6} + 500246412961 T^{8}$$ $31$ $$( 1 + 64 T + 2246 T^{2} + 61504 T^{3} + 923521 T^{4} )^{2}$$ $37$ $$( 1 - 20 T + 1369 T^{2} )^{4}$$ $41$ $$1 - 856 T^{2} - 2330094 T^{4} - 2418851416 T^{6} + 7984925229121 T^{8}$$ $43$ $$( 1 - 40 T + 3090 T^{2} - 73960 T^{3} + 3418801 T^{4} )^{2}$$ $47$ $$( 1 - 4346 T^{2} + 4879681 T^{4} )^{2}$$ $53$ $$( 1 - 3026 T^{2} + 7890481 T^{4} )^{2}$$ $59$ $$1 - 10532 T^{2} + 49098278 T^{4} - 127620046052 T^{6} + 146830437604321 T^{8}$$ $61$ $$( 1 + 28 T + 4838 T^{2} + 104188 T^{3} + 13845841 T^{4} )^{2}$$ $67$ $$( 1 - 120 T + 12466 T^{2} - 538680 T^{3} + 20151121 T^{4} )^{2}$$ $71$ $$1 - 12176 T^{2} + 74167106 T^{4} - 309412627856 T^{6} + 645753531245761 T^{8}$$ $73$ $$( 1 - 60 T + 9766 T^{2} - 319740 T^{3} + 28398241 T^{4} )^{2}$$ $79$ $$( 1 + 64 T + 10706 T^{2} + 399424 T^{3} + 38950081 T^{4} )^{2}$$ $83$ $$1 - 6356 T^{2} - 6983674 T^{4} - 301645088276 T^{6} + 2252292232139041 T^{8}$$ $89$ $$1 - 27832 T^{2} + 318232338 T^{4} - 1746242051512 T^{6} + 3936588805702081 T^{8}$$ $97$ $$( 1 + 180 T + 26470 T^{2} + 1693620 T^{3} + 88529281 
T^{4} )^{2}$$
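As an aside, the change of basis between $1,\beta_1,\beta_2,\beta_3$ and powers of $\nu$ listed above is easy to verify symbolically: the two sets of relations are mutually inverse as polynomial identities in $\nu$ (no reduction modulo $x^4+8x^2+9$ is even needed). A quick sketch using sympy, not part of the LMFDB data itself:

```python
import sympy as sp

nu = sp.symbols('nu')  # a root of x^4 + 8*x^2 + 9

# Basis of the coefficient ring in terms of nu, as listed on the page
b1 = (nu**3 + 5*nu) / 3
b2 = (nu**3 + 11*nu) / 3
b3 = nu**2 + 4

# Inverse relations claimed on the page
assert sp.simplify((b2 - b1) / 2 - nu) == 0            # nu   = (beta2 - beta1)/2
assert sp.simplify(b3 - 4 - nu**2) == 0                # nu^2 = beta3 - 4
assert sp.simplify((-5*b2 + 11*b1) / 2 - nu**3) == 0   # nu^3 = (-5*beta2 + 11*beta1)/2
```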
http://math.stackexchange.com/questions/11903/f-left-sum-x-i-right-leq-sum-fx-i-where-x-i-gt-0-for-what-functions
# $f\left(\sum X_i\right) \leq \sum f(X_i)$, where $X_i\gt 0$; for what functions is this true?

In a previous post, the following inequality has been proven:

$${\left( {\sum\limits_{i = 1}^n {{W_i}} } \right)^a} \le \sum\limits_{i = 1}^n {{W_i}^a}$$

where $W_i\gt 0$, $0\lt a\lt 1$. (More precisely, the inequality is strict for $n \ge 2$; it holds for $0\lt a\le 1$, while for $a\ge 1$ it reverses.)

I am trying to see if one can generalize it to something like

$$f\left( \sum\limits_{i = 1}^n W_i \right) \le \sum\limits_{i = 1}^n {f({W_i})}$$

where $W_i\gt 0$. Under what circumstances and for what functions can this be true? It has been proven for the power function $f(x)=kx^a$ where $k\gt 0$ and $0\lt a\le 1$. Are there any other cases? Is there any basic inequality to prove this?

-

The condition

$$f\left( {\sum\limits_{i = 1}^n {{W_i}} } \right) \le \sum\limits_{i = 1}^n {f({W_i})}$$

where $W_i \gt 0$ is equivalent (by induction on $n$) to

$$f\left( {\sum\limits_{i = 1}^2 {{W_i}} } \right) \le \sum\limits_{i = 1}^2 {f({W_i})}$$

where $W_i \gt 0$. These are subadditive functions. See Subadditivity.

- I know von Neumann entropy is subadditive, but that's a function of a different kind, defined on a different kind of space. But I think the Shannon entropy has the same property. – Raskolnikov Nov 26 '10 at 12:07
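A quick numerical illustration of the power-function case (plain Python; the exponents $0.5$ and $2.0$ are just arbitrary representatives of $a\in(0,1)$ and $a>1$): concavity of $x^a$ with $f(0)\ge 0$ gives subadditivity, and for $a>1$ the inequality reverses.

```python
import random

random.seed(1)

def f(x, a):
    return x ** a

for _ in range(1000):
    w = [random.uniform(0.01, 100.0) for _ in range(5)]
    s = sum(w)
    # 0 < a < 1: f is subadditive, so f(sum) <= sum of f
    assert f(s, 0.5) <= sum(f(x, 0.5) for x in w) + 1e-12
    # a > 1: the inequality reverses (f is superadditive on positives)
    assert f(s, 2.0) >= sum(f(x, 2.0) for x in w) - 1e-12
```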
https://www.physicsforums.com/threads/find-the-position-of-equlibrium.901316/
# Homework Help: Find the position of equilibrium

1. Jan 23, 2017

### doktorwho

1. The problem statement, all variables and given/known data
From the diagram below, find the position of the man ($x$) if the system is in balance. The total length is $L$ and the man is a distance $x$ from one end.

2. Relevant equations

3. The attempt at a solution
I know that the system must be in balance if all the torques and all the forces equate to zero. That said, I tried this: I name the left tension force $T_1$ and the right $T_2$, and I take the right point as the reference point.
$\sum M=TL-mg\frac{L}{2}-Mg(L-x)=0$
$\sum F=T+F-(m+M)g=0$
These equations I know, but I still can't express $x$ as I have too many unknowns. Could I eliminate something here?

Attached: Capture.JPG

2. Jan 23, 2017

### Orodruin

Staff Emeritus
Hint: Look at the force equation for the lower pulley and pick a smart reference point for your torque computation.

3. Jan 23, 2017

### doktorwho

That force should equal the tension on the right, right? I picked a reference point to the right so it can be lost. Can you provide another hint so I can see what you mean?

4. Jan 23, 2017

### TomHart

I think there is something wrong with your sum of forces equation.

5. Jan 23, 2017

### Orodruin

Staff Emeritus
Which force? There are three forces acting on that pulley.
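For what it's worth, once a pulley relation is fixed the remaining algebra is mechanical. The sketch below (sympy) takes the thread's torque equation together with the *assumed* relation $F = 2T$ for the lower pulley — one possible reading of the hints above, not something established in the thread, and TomHart's objection to the force equation may well change it — and solves for $x$:

```python
import sympy as sp

T, x, L, m, M, g = sp.symbols('T x L m M g', positive=True)

# Torque about the right end (as written in the thread), and the force balance
# with the *assumed* pulley relation F = 2*T already substituted in.
torque = sp.Eq(T * L - m * g * L / 2 - M * g * (L - x), 0)
force = sp.Eq(T + 2 * T - (m + M) * g, 0)

sol = sp.solve([torque, force], [T, x], dict=True)[0]
# Under these assumptions: T = (m + M)*g/3 and x = L*(4*M + m)/(6*M)
```

The closed form for $x$ is only as good as the assumed $F = 2T$; with a different pulley relation the same two-line solve applies.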
https://www.tutorialspoint.com/sides-ab-and-ac-and-median-ad-of-a-triangle-abc-are-respectively-proportional-to-sides-pq-and-pr-and-median-pm-of-another-triangle-pqr-show-that
# Sides AB and AC and median AD of a triangle ABC are respectively proportional to sides PQ and PR and median PM of another triangle PQR. Show that $∆ABC \sim ∆PQR$.

Given: Two triangles $ΔABC$ and $ΔPQR$ in which the sides $AB, AC$ and median $AD$ of $ΔABC$ are proportional to the sides $PQ, PR$ and median $PM$ of $ΔPQR$:

$\frac{AB}{PQ} = \frac{AC}{PR} = \frac{AD}{PM}$

To do: We have to prove that $ΔABC \sim ΔPQR$.

Solution:

Let $\frac{AB}{PQ} = \frac{AC}{PR} = \frac{AD}{PM} = k$.

Since $AD$ and $PM$ are medians, $D$ is the mid-point of $BC$ and $M$ is the mid-point of $QR$. By Apollonius' theorem (the median-length formula),

$4AD^2 = 2AB^2 + 2AC^2 - BC^2$ and $4PM^2 = 2PQ^2 + 2PR^2 - QR^2$

Therefore,

$BC^2 = 2AB^2 + 2AC^2 - 4AD^2 = k^2(2PQ^2 + 2PR^2 - 4PM^2) = k^2 QR^2$

so $\frac{BC}{QR} = k$, and hence $\frac{BD}{QM} = k$ as well (halves of proportional segments).

In $ΔABD$ and $ΔPQM$:

$\frac{AB}{PQ} = \frac{BD}{QM} = \frac{AD}{PM}$

$ΔABD \sim ΔPQM$ [SSS similarity criterion]

Therefore, $\angle ABD = \angle PQM$ [corresponding angles of similar triangles are equal], that is, $\angle ABC = \angle PQR$.

In $ΔABC$ and $ΔPQR$:

$\frac{AB}{PQ} = \frac{BC}{QR}$........(i)

$\angle ABC = \angle PQR$........(ii)

From (i) and (ii), $ΔABC \sim ΔPQR$ [by SAS similarity criterion].

Hence proved.

Updated on 10-Oct-2022 13:21:19
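Apollonius' theorem, $4AD^2 = 2AB^2 + 2AC^2 - BC^2$, links a triangle's sides to its median, and both it and the proportionality statement can be sanity-checked numerically (plain Python; the triangle and scale factor are arbitrary choices, purely for illustration):

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# An arbitrary triangle ABC and a similar triangle PQR scaled by k
A, B, C = (0.0, 0.0), (7.0, 1.0), (2.0, 5.0)
k = 2.5
P, Q, R = tuple(tuple(k * c for c in v) for v in (A, B, C))

D = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)  # midpoint of BC
M = ((Q[0] + R[0]) / 2, (Q[1] + R[1]) / 2)  # midpoint of QR

AB, AC, BC, AD = dist(A, B), dist(A, C), dist(B, C), dist(A, D)
PQ, PR, QR, PM = dist(P, Q), dist(P, R), dist(Q, R), dist(P, M)

# Apollonius' theorem: 4*AD^2 = 2*AB^2 + 2*AC^2 - BC^2
assert math.isclose(4 * AD**2, 2 * AB**2 + 2 * AC**2 - BC**2)
# Sides and medians are all in the same ratio 1/k
for num, den in ((AB, PQ), (AC, PR), (BC, QR), (AD, PM)):
    assert math.isclose(num / den, 1 / k)
```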
https://deepai.org/publication/secure-distributed-dynamic-state-estimation-in-wide-area-smart-grids
# Secure Distributed Dynamic State Estimation in Wide-Area Smart Grids

Smart grid is a large complex network with a myriad of vulnerabilities, usually operated in adversarial settings and regulated based on estimated system states. In this study, we propose a novel highly secure distributed dynamic state estimation mechanism for wide-area smart grids, composed of geographically separated subregions, each supervised by a local (control) center. We firstly propose a distributed state estimator assuming regular system operation, that achieves near-optimal performance based on the local Kalman filters and with the exchange of necessary information between local centers. To enhance the security, we further propose to (i) protect the network database and the network communication channels against attacks and data manipulations via a blockchain (BC)-based system design, where the BC operates on the peer-to-peer network of local centers, (ii) locally detect the measurement anomalies in real-time to eliminate their effects on the state estimation process, and (iii) detect misbehaving (hacked/faulty) local centers in real-time via a distributed trust management scheme over the network. We provide theoretical guarantees regarding the false alarm rates of the proposed detection schemes, where the false alarms can be easily controlled. Numerical studies illustrate that the proposed mechanism offers reliable state estimation under regular system operation, timely and accurate detection of anomalies, and good state recovery performance in case of anomalies.
## I Introduction

The next-generation electrical power grid, i.e., the smart grid, is vulnerable to a variety of cyber/physical system faults and hostile cyber threats [1, 2, 3, 4]. In particular, random anomalies such as node and topology failures might occur all over the network. Moreover, malicious attackers can deliberately manipulate the network operation and tamper with the network data, from sources such as smart meters, control centers, network database, and network communication channels (see Fig. 1). False data injection (FDI), jamming, and denial of service (DoS) attacks are well-known attack types [2, 5, 6, 7, 8].
Moreover, Internet of Things (IoT) botnets can be used in attacks against critical infrastructures such as the smart grid [9, 10, 11].

In the smart grid, online state estimates are utilized to make timely decisions in critical tasks such as load-frequency control and economic dispatch [12]. Hence, a fundamental task in the grid is reliable state estimation based on online measurements. On the other hand, the main objective of the adversaries is to damage/mislead the state estimation mechanism in order to cause wrong/manipulated decisions, resulting in power blackouts or manipulated electricity prices [13]. Additionally, random system faults may degrade the state estimation performance. Our objective in this study is to design a highly secure and resilient state estimation mechanism for wide-area (multi-area) smart grids that provides reliable state estimates in a fully-distributed manner, even in the case of cyber-attacks and other network anomalies.

### I-A Background and Related Work

#### I-A1 Secure Dynamic State Estimation

Feasibility of dynamic modeling and efficiency of dynamic state estimation have been widely discussed, and various dynamic models have been proposed for power grids [14, 15, 16, 17]. The general consensus is that dynamic modeling better captures the time-varying characteristics of the power grid and that dynamic state estimators are more effective at tracking the system state than the conventional static least squares (LS) estimators. Moreover, the state forecasting capability achieved with dynamic estimators is quite useful in real-time operation and security of the grid [15, 5]. In [14], a quasi-static state model is proposed where the power system state is periodic over a day. In the simplest case, the system state is perturbed with an additive white Gaussian noise (AWGN) process with a small variance and hence the state varies in a small dynamic range.
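The quasi-static model of [14] is essentially a random walk with small steps, $x_{k+1} = x_k + w_k$, tracked with a standard Kalman filter. A minimal sketch in numpy (the state dimension, measurement matrix $H$, and noise levels are illustrative choices, not taken from [14]; the measurement noise realization is set to zero here so convergence is easy to see):

```python
import numpy as np

n, steps = 2, 200
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 3 measurements of a 2-dim state
Q = 1e-4 * np.eye(n)      # small process noise covariance: quasi-static state
R = 1e-2 * np.eye(3)      # measurement noise covariance

x_true = np.array([1.0, -0.5])   # held fixed for this illustration
x_est, P = np.zeros(n), np.eye(n)

for _ in range(steps):
    # Predict: state model x_{k+1} = x_k + w_k, so the transition matrix is I
    P = P + Q
    # Update with the measurement y = H x (noise-free realization here)
    y = H @ x_true
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (y - H @ x_est)
    P = (np.eye(n) - K @ H) @ P

# The filtered estimate locks onto the (static) true state
```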
In [17], a linear exponential smoothing model is proposed, where the effects of past measurements on the present state estimates are reduced over time. In the literature, various techniques have been proposed to make the dynamic state estimation mechanism secure/robust against outliers, heavy-tailed noise processes, model uncertainties including unknown noise statistics, rank-deficient observation models, and cyber-attacks, etc., [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]. For instance, robust statistics-based approaches [18, 19, 20, 21] aim to suppress the effects of outliers by assigning less weights to more significant outliers. As the outliers are still incorporated into the state estimation process, their effects are not completely eliminated, and due to the recursive nature of dynamic state estimators, errors are accumulated over time and the corresponding state estimator breaks down at some point, i.e., fails to keep track of the system operating point. The estimator also breaks down in case of gross outliers. Furthermore, this approach is based on the solution of an iterative weighted LS problem, repeated at each time, that might be prohibitive for real-time processing. To deal with outliers, another method is modeling the system noise as a heavy-tailed, e.g., Student’s t or Laplace, distribution [22, 23, 24]. This method can handle (nominal) outliers observed during the regular system operation, however, it is expected to be ineffective against attacks and faults that behave significantly different from nominal outliers. Further, some studies are based on the assumption that attacks are sparse over the network or with bounded magnitudes [25, 26, 27, 28, 29], which significantly limits the robustness of the corresponding state estimators. This is because imposing restrictions on the capabilities or strategies of attackers make the corresponding state estimation mechanism robust against only a certain subset of attacks. 
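The weighting idea behind the robust statistics-based approaches above is easy to see in a toy setting: iteratively reweighted least squares with Huber weights estimates a location parameter while assigning small weights to large residuals. This is only a sketch of the general idea, not the estimators of [18]-[21]:

```python
import numpy as np

def huber_location(y, delta=1.345, iters=50):
    """IRLS location estimate with Huber weights w(r) = min(1, delta/|r|)."""
    mu = np.median(y)                      # robust starting point
    for _ in range(iters):
        r = y - mu
        w = np.minimum(1.0, delta / np.maximum(np.abs(r), 1e-12))
        mu = np.sum(w * y) / np.sum(w)     # weighted LS step
    return mu

# Clean samples around 10, plus one gross outlier
y = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.05, 9.95, 500.0])
mu_robust = huber_location(y)   # stays near 10
mu_mean = y.mean()              # dragged far away by the outlier
```

Note that the outlier still carries a small positive weight, which is exactly the error-accumulation issue discussed above: persistent outliers keep leaking into the estimate.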
Yet another approach is to completely ignore the outliers in the state estimation process [30, 31, 32, 33]. In the literature, this approach is usually based on the sample-by-sample classification of observations as either outlier or non-outlier. Although it might be useful to detect and eliminate the effects of gross outliers, the corresponding state estimator is expected to fail if small/moderate-level (difficult-to-detect) outliers are observed in a persistent manner.

Distributed dynamic state estimation has also been studied extensively; see, e.g., [34] for a review of distributed Kalman filtering techniques. In particular, two main architectures are considered: hierarchical and fully-distributed. In the former, there exists a central controller coordinating multiple local controllers, while in the latter, no central controller exists. In the hierarchical schemes, the information filter, an algebraic equivalent of the centralized Kalman filter, can be useful to fuse the information processed across the local controllers in a simple manner [35, 36]. The fully-distributed schemes usually require an iterative consensus mechanism to reduce the disagreement of the local controllers on the estimates of common state variables [34, 37]. In this work, we propose a near-optimal fully-distributed dynamic state estimation mechanism where the local controllers exchange the necessary information only once at each measurement sampling interval.

#### I-A2 Secure Distributed System Design

Traditionally, through the supervisory control and data acquisition (SCADA) system, the power grid is controlled in a centralized manner. In particular, the system-wide data are collected, stored, and processed at a single node. Considering the increasing speed and size of the measurement data, collecting and processing such a huge volume of data in real-time at a single center seems practically infeasible in modern power grids [38].
Moreover, the traditional implementation is based on the assumption that the centralized node is completely trustable. In practice, however, the centralized node can be the weakest point of the network in terms of security. This is because by hacking only the centralized node, adversaries can arbitrarily modify the control decisions and the network database. On the contrary, hacking a distributed system is usually more difficult for attackers, especially for the smart grid, which is distributed over a geographically wide region. Therefore, distributing the computation and the trust over the network can be useful to achieve a more feasible and a more secure grid.

Blockchain (BC) is an emerging secure distributed database technology, operating on a peer-to-peer (P2P) network with the following key components [39, 40, 41]: (i) a distributed ledger (a chronologically ordered sequence of blocks) that is shared and synchronized over the network, where each block is cryptographically linked to the previous blocks and the ledger is resistant/immune to modifications, (ii) advanced cryptography that enables secure data exchanges and secure data storage, and (iii) a mutual consensus mechanism that enables collective verification/validation of the integrity of exchanged and stored data, and thereby distributes trust over the network instead of relying on a single node/entity. BC technology was first used in financial applications [42, 43], but due to its security-by-design and its distributed nature without needing any trusted third party, it has been applied to many fields such as vehicular networks, supply chain management, cognitive radio, and insurance [44, 45, 46].

The smart grid critically relies on a database and network communication channels, both of which are quite vulnerable to attacks and manipulations. As a countermeasure, an effective approach is the detection and then mitigation of such threats.
On the other hand, a significantly better approach is the prevention of the threats as much as possible. In this direction, the BC technology has a great potential due to its advanced data protection/attack prevention capabilities. Hence, we aim to integrate some salient features of the BC technology to the smart grid in order to improve the resilience of the system, particularly to protect the system database and the communication channels. Research on the integration of BC to smart grids has mainly focused on secure energy transactions/trade so far [47, 48, 49, 4], and only a few studies have examined the integration of BC to make the power supply system more secure, see [50] and [51], where in both studies, the grid is protected by securely storing all the system-wide measurements at every smart meter. However, this seems infeasible in many aspects, e.g., smart meters are small-size devices with limited memory, power, and processing capabilities [48] and hence not suitable to perform advanced computations such as encryption/decryption, to store the distributed ledger, and to constantly communicate with all other meters in the network. Moreover, since the measurements are collected mainly to estimate the system state, the BC-based system can be designed to protect the state estimation mechanism in a more direct way, rather than to protect the entire history of raw measurement data, that then enables secure state estimation. Although BC can be useful to secure the network database and the communication channels, the online meter measurements are still vulnerable to attacks and faults. We would like to design a state estimation mechanism that is secure against all types of anomalies. Towards this goal, as a complement to the BC-based data protection, robust bad data detection and mitigation, i.e., state recovery, schemes need to be integrated into the state estimation mechanism. 
We have recently proposed in [7] a robust dynamic state estimation scheme for the smart grid, in case the attack models are known (with some unknown parameters). In practice, unknown attacks/anomalies may occur in the smart grid as it has many vulnerabilities and attackers might have arbitrary strategies. Hence, in general, anomaly/attack models need to be assumed unknown and the state estimation mechanism should be designed accordingly. Furthermore, in a BC-based distributed system, there is no centralized trusted node to check and recover a node that is faulty or hacked by a malicious entity. Hence, a distributed trust management mechanism needs to be employed over the network to evaluate the trustability of each node against the possibility of misbehaving nodes. ### I-B Contributions In this study, we propose a novel BC-based resilient system design to achieve secure distributed dynamic state estimation in wide-area smart grids. Our aim is to reduce the risks at each part of this highly complex network, specifically, the network database, smart meters, local control centers, and network communication channels (see Fig. 1) without modeling anomalies. Firstly, assuming regular system operation (no anomaly), we propose a fully-distributed dynamic state estimation scheme that achieves near-optimal performance thanks to the local Kalman filters and with the exchange of necessary information between local centers. Then, to improve the resilience of the proposed mechanism, we propose to (i) use salient features of the emerging BC technology to secure both the network database and the network communication channels against attacks and manipulations, (ii) embed online anomaly detection schemes into the state estimation mechanism to make it secure against measurement anomalies, and (iii) detect and eliminate the effects of the misbehaving nodes in real-time via a novel distributed trust management mechanism over the network. 
Then, we achieve a highly secure distributed mechanism that is able to deliver a reliable state estimation performance under adversarial settings. Moreover, we provide theoretical guarantees regarding the false alarm rates of the proposed online detection schemes, where the false alarms can be easily controlled by the system designer.

### I-C Organization and Notations

The remainder of the paper is organized as follows. Sec. II presents the system model. Sec. III describes the BC-based secure system design. Sec. IV discusses the proposed distributed state estimation mechanism under regular (non-anomalous) network operation. Sec. V explains the proposed online anomaly detection scheme against measurement anomalies and the corresponding state recovery scheme. Sec. VI discusses the proposed distributed trust management scheme against misbehaving nodes. Sec. VII then summarizes the proposed mechanism. Sec. VIII illustrates the advantages of the proposed mechanism over a simulation setup. Finally, Sec. IX concludes the paper.

Notations: Boldface letters denote vectors and matrices. ℝ denotes the set of real numbers. N(μ, Σ) denotes the Gaussian probability density function (pdf) with mean μ and covariance matrix Σ. I denotes an identity matrix. P and E denote the probability and the expectation operators, respectively. 1{·} denotes the indicator function. log denotes the natural logarithm and e denotes Euler's number. |·| denotes the cardinality of a set. ∅ denotes an empty set. A ∖ B denotes the set of elements belonging to A but not belonging to B. inf, sup, and max denote the infimum, supremum, and maximum operators, respectively. Finally, (·)^T denotes the transpose operator.

## II System Model

We consider a smart power grid with N buses and K smart meters, usually with K > N to have the necessary measurement redundancy against system noise [52]. The system state at time t, x_t = [x_{1,t}, …, x_{N,t}]^T, represents the voltage phase angles of the buses, where a bus is chosen as the reference.
The measurement taken at meter k at time t is denoted by y_{k,t} and the measurement vector is denoted by y_t = [y_{1,t}, …, y_{K,t}]^T. Based on the widely used approximate DC model [52, 14], we model the grid as a discrete-time linear dynamic system as follows:

x_t = A x_{t−1} + v_t,  (1)
y_t = H x_t + w_t,  (2)

where A is the state transition matrix, H is the measurement matrix determined by the network topology, v_t is the process noise vector, and w_t is the measurement noise vector. We assume that v_t and w_t are independent AWGN processes, where v_t ∼ N(0, σ_v² I_N) and w_t ∼ N(0, σ_w² I_K). The wide-area smart grid is composed of geographically separated subregions (see Fig. 5). Each subregion contains a set of smart meters, supervised by a local (control) center. Since the meters are distributed over the network, each local center partially observes the measurement vector y_t. Assuming that the grid is composed of L subregions and the subset of meters in the ℓth subregion is denoted by R_ℓ, the measurement vector y_t is decoupled into L sub-vectors y_t^ℓ, ℓ = 1, …, L, where y_t^ℓ denotes (with an abuse of notation) the measurement vector of the ℓth local center at time t and K_ℓ = |R_ℓ| is the number of meters in the ℓth subregion. Since each meter belongs to only one subregion, for any two subregions i and j, R_i and R_j do not overlap, and we have K = Σ_ℓ K_ℓ. The smart grid is an interconnected system, where there exist tie-lines between neighboring subregions (see Fig. 5), which leads to some common (shared) state variables between neighboring local centers. Hence, denoting the state vector of the ℓth local center at time t by x_t^ℓ and its size by N_ℓ, for any two neighboring local centers i and j, x_t^i and x_t^j might overlap. This implies Σ_ℓ N_ℓ ≥ N. In general, if the state transition matrix A is non-diagonal, additional state variables might be shared between neighboring or non-neighboring local centers due to dependencies between state variables over time through the state transition matrix. In this study, for the simplicity of the presentation, we assume A is diagonal, as in, e.g., [14, 31]. Under this assumption, we next determine the state vector of a local center, say the ℓth one.
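To make the dynamic model in (1)–(2) concrete, the following minimal Python sketch generates one step of states and measurements. The dimensions, function name, and noise levels are illustrative choices of ours, not part of the paper's specification:

```python
import random

def simulate_step(A, H, x_prev, sigma_v, sigma_w, rng):
    """One step of the DC-model dynamics (illustrative sketch)."""
    n = len(x_prev)
    # State transition: x_t = A x_{t-1} + v_t, as in eq. (1).
    x = [sum(A[i][j] * x_prev[j] for j in range(n)) + rng.gauss(0, sigma_v)
         for i in range(n)]
    # Measurement: y_t = H x_t + w_t, as in eq. (2); H has K > N rows
    # in practice, giving measurement redundancy.
    y = [sum(H[k][j] * x[j] for j in range(n)) + rng.gauss(0, sigma_w)
         for k in range(len(H))]
    return x, y
```

With the noise standard deviations set to zero, the sketch reduces to the deterministic model, which makes its behavior easy to check.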
For the case of non-diagonal A, please see [5]. Note that in the non-diagonal case, the proposed system design directly extends, where the only difference is that the size of the local state vectors might be larger. Let h_k^T be the kth row of the measurement matrix H, i.e., H = [h_1, …, h_K]^T. Then, using (2), each measurement can be written as follows:

y_{k,t} = h_k^T x_t + w_{k,t}.  (3)

Based on (3), we can argue that y_{k,t} depends on, equivalently bears information about, the following state variables:

X_{y_k} ≜ {x_{n,t} | h_{k,n} ≠ 0, n = 1, …, N}.

Then, the local state vector x_t^ℓ consists of the union of all such state variables for all the meters in the subregion R_ℓ:

x_t^ℓ = ⋃_{k ∈ R_ℓ} X_{y_k}.

For each local center ℓ, we then have the following local state transition model:

x_t^ℓ = A^ℓ x_{t−1}^ℓ + v_t^ℓ,  (4)

and the following local measurement model:

y_t^ℓ = H^ℓ x_t^ℓ + w_t^ℓ,  (5)

where the local state transition matrix A^ℓ and the local measurement matrix H^ℓ can be easily obtained from A and H, respectively. Moreover, the local process noise vector v_t^ℓ is a sub-vector of v_t corresponding to x_t^ℓ. Similarly, the local measurement noise vector w_t^ℓ is a sub-vector of w_t corresponding to y_t^ℓ.

## III Blockchain-Based Secure System Design

### III-A Overview of the Proposed System

We consider a distributed P2P network of local centers where each node (local center) can communicate with all other nodes (see Fig. 1). We aim to design a system in which the nodes collaborate with each other to perform the state estimation task in a safe and reliable manner. For a reliable distributed dynamic state estimation, particularly the Kalman filter, we need safe updates and hence the following three items must be secure/reliable at each time t:

• state estimates of the previous time t−1,
• meter measurements acquired at the current time t, and
• the nodes functioning in the state estimation process, i.e., the local centers.
In other words, at each time, we need to make sure that the previous state estimates are not modified, the online meter measurements are not anomalous, and the nodes are working according to the predesigned network rules. Furthermore, in case of an anomaly over the network, the state estimates can be recovered using the previous reliable state estimates, and hence we need to also protect the previous state estimates against tampering. Considering these requirements, our proposed system is composed of the following three main components:

• BC-based data protection/attack prevention: BC enhances the security of the grid by reducing the risk of manipulations at the network database and the network communication channels. In particular, to protect the previous state estimates against tampering and make them widely available and accessible over the network against the possibility of node failures and hacking, we record them in a shared transparent distributed ledger that is resistant to alterations. Moreover, we secure the inter-node data exchanges via cryptography against attacks and manipulations.
• Secure state estimation against measurement anomalies: Each local center quickly and reliably detects local measurement anomalies and then employs a state recovery mechanism.
• Distributed trust management: All nodes collectively (via voting-consensus) evaluate the trustability of each node, specifically whether the local state estimates provided by a node exhibit an anomalous pattern over time.

The following subsection first explains how we use the BC technology to enhance the security of the state estimation mechanism.

### III-B Blockchain Mechanism

The BC operates on the P2P network of local centers. Since each node is pre-specified and pre-authenticated, we have a permissioned (private) BC mechanism [48, 50].
In BC-based systems, the duties of each node and the interactions between nodes are determined via a smart contract, which is a software code that specifies the predefined rules of network operation. In our proposed mechanism, each node collects and analyzes the meter measurements in its subregion, estimates its local state vector, exchanges information with other nodes, performs encryption/decryption, participates in voting-consensus procedures, and stores the distributed ledger in its memory. The details regarding the duties of the nodes will become clearer in the subsequent sections. Next, we explain the proposed BC mechanism in more detail.

#### III-B1 Data Exchanges

In all inter-node data exchanges, we use asymmetric encryption based on the public key infrastructure (see Fig. 2). In this mechanism, each node owns a public-private key pair that forms the digital identity of the node. The public key is available at every node. On the other hand, the private key is available only at its owner. Moreover, a secure hash algorithm (SHA), e.g., SHA-, SHA-, etc. [53, 54], is used in the data encryption process. Particularly, in every data exchange, the sender node first processes its message via the SHA and obtains the message digest. It then encrypts the message digest via its private key using a signature algorithm, e.g., the Elliptic Curve Digital Signature Algorithm [40, 47], and obtains the digital signature. Finally, it transmits the data package consisting of the message and the corresponding digital signature. The receiver node then decrypts the received digital signature via the public key of the sender node and obtains a message digest. Moreover, it processes the received message via the SHA and obtains another message digest. The integrity of the received message is verified only if these two message digests exactly match.
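The digest-matching part of this verification procedure can be sketched in Python using the standard hashlib module. This sketch deliberately omits the signing/decryption of the digest with the sender's key pair; it only illustrates how any tampering with the message is caught by recomputing the digest. SHA-256 and all names here are our illustrative choices:

```python
import hashlib

def make_package(message: bytes) -> dict:
    # Sender side: compute the message digest via a SHA (here SHA-256).
    # In the full scheme, this digest would additionally be encrypted
    # with the sender's private key to form the digital signature.
    return {"message": message,
            "digest": hashlib.sha256(message).hexdigest()}

def verify_package(package: dict) -> bool:
    # Receiver side: recompute the digest from the received message and
    # compare; any modification of the message yields a mismatch.
    return hashlib.sha256(package["message"]).hexdigest() == package["digest"]
```

A tampered message fails verification because finding a second message with the same digest is computationally intractable, which is exactly the property the data-exchange procedure relies on.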
In this procedure, the SHA provides security since it is computationally intractable to obtain the same message digest from two different messages [53, 54]. The SHA is a one-way function that outputs a fixed-length message digest for an arbitrary-size input message. Let S(·) denote the SHA and let the output of S(·) be b bits. Then, given a message m, the time complexity of finding m′ ≠ m such that S(m′) = S(m) is O(2^b) via brute-force search. This property implies that over a data exchange, if a malicious adversary aims to replace the actual message m with a fake message m′ without being noticed (S(m′) = S(m) so that the receiver verifies the integrity of the message), and moreover if the adversary has the computational power of querying q possible fake messages, then the probability of a successful fake message is q/2^b. Here we assume that the adversary knows the SHA so that it can check whether S(m′) = S(m) while trying different fake messages m′. The probability of success is negligible in practical settings where b is sufficiently large. For example, even if an adversary can query a huge number q of fake messages, the probability of success q/2^b remains practically negligible for a single data package as long as q ≪ 2^b. Furthermore, since the received digital signature can only be decrypted via the public key of the sender node, the receiver can verify the identity of the sender. Assuming (reasonably) that the private keys are kept secret and the digital signatures are b_s bits, the time complexity of generating a successful fake signature is O(2^{b_s}) via brute-force search [40]. Then, if an adversary does not know the public key of the sender (so that it cannot check whether a fake signature is decrypted via the public key), the probability of generating a successful fake signature for a chosen fake message is 2^{−b_s}. On the other hand, obtaining public keys might be easier than private keys because the public keys are distributed over the network and all public keys can be accessed by hacking only one node.
Then, if an adversary knows the public key of the sender, it can try different fake signatures for a chosen fake message and check whether the fake message is verified. In this case, if the adversary has the computational power of querying a number of possible fake signatures, the probability of success is that number divided by the total number of possible signatures. Again, choosing the signature length sufficiently high makes the success probability practically negligible. Notice that, however, if the private key of the sender is stolen, then the fake messages cannot be noticed at the receiver. Over a data exchange, in case either the integrity of the received data package or the identity of the sender cannot be validated, the received message is ignored and a retransmission can take place. Thereby, thanks to the asymmetric encryption procedure, the inter-node data exchanges are secured against attacks that manipulate either the message or the identity of the sender, such as man-in-the-middle and IP spoofing attacks.

#### III-B2 Distributed Ledger and Consensus Mechanism

The ledger is a chronologically ordered sequence of blocks, stored at every node and synchronized over the network. In BC-based systems, the block content is application-specific. In our case (see Fig. 3), at each measurement sampling interval, a new block is produced that includes (i) the state estimates of the current time and (ii) a header consisting of the discrete timestamp, the hash value of the previous block that cryptographically links the current block to the previous block, and a random number called the nonce, which is the solution to a puzzle problem. Particularly, the nonce is determined, via brute-force search, such that the hash value of the current block satisfies a certain condition [40, 50]. We next explain the process of producing a new block and the corresponding update in the distributed ledger.
At each time t, after computing the local state estimates, each node broadcasts a data package, containing its local state estimates as the message, to the entire network. Then, as explained above, every node checks the validity of each received data package. A broadcasted data package is verified over the network only if the majority of the nodes validate it. After all the broadcasted data packages are verified, all the local state estimates of the current time are recorded into a new block at each node. In this process, to alleviate inconsistencies in the distributed ledger, we need to make sure that the ledger is synchronously updated over the network. Moreover, we seek a mutual consensus over the network for the update of the ledger. Using the common Proof-of-Work consensus mechanism [40], some nodes act as “miners”, where the miners compete with each other to solve the puzzle and obtain the nonce value. Public BC-based systems such as Bitcoin [42] provide an incentive to miners, where the miner that solves the puzzle first gets a financial reward. In our case, however, this procedure is completely autonomous in that, at each time, among all the local centers, a few nodes are randomly assigned as the miners. Then, the miner solving the puzzle first broadcasts the nonce value to the entire network. Each node then checks the puzzle solution. If the majority of nodes verify the solution, then the new block is produced and simultaneously connected to the existing ledger at every node. Here, the random assignment of the miners provides a higher level of security to the BC mechanism compared to permanently assigning the miner nodes, which would then be open targets for adversaries. As explained before, for a reliable dynamic state estimation, we require that the most recent state estimates are secure.
Moreover, in case of an anomaly over the network, e.g., a failure or an attack, we can recover the system state from the recent reliable state estimates (the details are presented in Sec. V and Sec. VI). For these reasons, in the distributed ledger, we propose to store a finite number of blocks that contain the recent state estimates in order to protect them against all kinds of manipulations. Let the number of blocks in the ledger be B. Then, at each time, while a new block is connected to the existing ledger, the oldest block is pruned (see Fig. 3). This, in fact, solves the problem of monotonically increasing storage costs in conventional BCs as well. We will explain how to choose B in the subsequent sections. Further, in our case, at each time interval, only one new block is generated and we have a main chain of blocks without any forks, unlike the well-known public BCs such as Bitcoin [42] and Ethereum [43], in which the ledger keeps a record of the transactions between nodes and it is possible to observe multiple transactions at the same time. Finally, we assume that the update of the ledger is completed within one measurement sampling interval. The distributed ledger is resistant to modifications due to the following reasons: (i) since each block is cryptographically linked to the previous block, to modify a single block without being noticed, all the subsequent blocks must be modified accordingly, and (ii) since updating the ledger requires mutual consensus over the network, in order to modify the ledger, malicious entities need to control the majority of the nodes in the network. This also implies that the security improvements introduced with the BC-based system design are valid only if the majority of the nodes in the network are reliable. We expect that this condition is easily satisfied in large-scale smart grids with many nodes distributed over a geographically wide region, for which hacking the majority of the nodes is practically quite difficult.
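The block structure described above — a hash-linked chain with a nonce puzzle and a fixed number of retained blocks — can be sketched as follows. The difficulty condition (a fixed number of leading zeros), the ledger size, and all names are illustrative assumptions of ours, not the paper's parameters:

```python
import hashlib
import json

DIFFICULTY = 2   # illustrative puzzle condition: leading hex zeros
MAX_BLOCKS = 5   # illustrative finite number of blocks kept in the ledger

def block_hash(block):
    # Canonical serialization so every node hashes the block identically.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def mine_block(prev_hash, timestamp, estimates):
    # Brute-force search for a nonce such that the block hash
    # satisfies the puzzle condition.
    nonce = 0
    while True:
        block = {"prev_hash": prev_hash, "t": timestamp,
                 "estimates": estimates, "nonce": nonce}
        if block_hash(block).startswith("0" * DIFFICULTY):
            return block
        nonce += 1

def append_block(ledger, block):
    # Connect the new block; prune the oldest one so the ledger
    # keeps only a finite number of recent blocks.
    ledger.append(block)
    if len(ledger) > MAX_BLOCKS:
        ledger.pop(0)
```

Because each block stores the hash of its predecessor, modifying any retained block would invalidate every subsequent link, which is the tamper-resistance property the text relies on.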
From an attacker’s perspective, letting the minimum number of nodes to be hacked in order to control the majority (and hence to be able to arbitrarily modify the ledger) be denoted by m, the best strategy would be attacking the least costly set (in terms of the required efforts and resources for hacking) among the possible sets of m nodes. Furthermore, an attacker may replace a block with a fake block whose hash matches that of the original only with a practically negligible probability (similar to the analysis in Sec. III-B1); moreover, the fake block can still be noticed by comparing the modified ledger with the other copies of the ledger in the network. Hence, for a successful fake block, the attacker still needs to control the majority of the nodes.

## IV Distributed State Estimation Assuming Regular System Operation

In this section, assuming that the grid operation always fits the nominal system model (see (1) and (2)), we aim to perform optimal state estimation in a fully-distributed manner, where each node only estimates its local state vector x_t^ℓ. For an optimal state estimation, the ℓth node needs to use all the measurements that bear information about x_t^ℓ, either fully or partially. The ℓth node has access to only the local measurements y_t^ℓ, which are clearly informative about x_t^ℓ (see (5)). On the other hand, due to the shared state variables, some measurements collected at the other nodes may also be informative about x_t^ℓ. Let

C_ℓ ≜ {j | x_t^j ∩ x_t^ℓ ≠ ∅, j ∈ {1, …, L} ∖ {ℓ}}

be the set of nodes that share at least one state variable with the ℓth node. Let j ∈ C_ℓ and let y_t^{ℓ,j} be the sub-vector of y_t^j that is informative about x_t^ℓ, i.e.,

y_t^{ℓ,j} ≜ {y_{k,t} | k ∈ R_j, X_{y_k} ∩ x_t^ℓ ≠ ∅}.

Moreover, let x̄_t^{ℓ,j} denote the sub-vector of x_t^j consisting of the state variables on which y_t^{ℓ,j} depends but that do not belong to x_t^ℓ. Then, we can decompose y_t^{ℓ,j} as

y_t^{ℓ,j} = H^{ℓ,j} x_t^ℓ + H̄^{ℓ,j} x̄_t^{ℓ,j} + w_t^{ℓ,j},  (6)

where the matrices H^{ℓ,j} and H̄^{ℓ,j} are easily determined such that the equality in (6) is satisfied for all t. Moreover, w_t^{ℓ,j} is the sub-vector of w_t^j corresponding to y_t^{ℓ,j}. In (6), the term H̄^{ℓ,j} x̄_t^{ℓ,j} is clearly non-informative and irrelevant to the state estimator of the ℓth node. On the other hand, the jth node estimates x_t^j (and hence x̄_t^{ℓ,j}) at each time t.
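The decomposition in (6) suggests how a node can prepare measurements for a neighbor: subtract the estimated contribution of the non-shared states so that the remainder depends only on the receiving node's local state vector. A minimal Python sketch of that subtraction (names and dimensions are ours, for illustration only):

```python
def process_measurements(y, H_bar, x_bar_hat):
    # Remove the estimated contribution H_bar @ x_bar_hat of the
    # non-shared state variables from each measurement, so the
    # remainder is (up to estimation error and noise) a function
    # of the receiving node's local states only.
    return [y_k - sum(h * x for h, x in zip(row, x_bar_hat))
            for y_k, row in zip(y, H_bar)]
```

In the full scheme, the residual estimation error left by this subtraction is folded into an effective noise term, whose covariance the receiving node must account for.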
Then, denoting the estimate of x̄_t^{ℓ,j} by x̂̄_t^{ℓ,j}, based on (6), we can write

ỹ_t^{ℓ,j} ≜ y_t^{ℓ,j} − H̄^{ℓ,j} x̂̄_t^{ℓ,j}
 = H^{ℓ,j} x_t^ℓ + H̄^{ℓ,j} (x̄_t^{ℓ,j} − x̂̄_t^{ℓ,j}) + w_t^{ℓ,j}
 = H^{ℓ,j} x_t^ℓ + w̃_t^{ℓ,j},  (7)

where

w̃_t^{ℓ,j} ≜ H̄^{ℓ,j} (x̄_t^{ℓ,j} − x̂̄_t^{ℓ,j}) + w_t^{ℓ,j}.  (8)

We propose that the jth node subtracts H̄^{ℓ,j} x̂̄_t^{ℓ,j} from y_t^{ℓ,j} to compute ỹ_t^{ℓ,j} and then transmits it to the ℓth node at each time t in order to facilitate the local state estimation at the ℓth node. Henceforth, we call ỹ_t^{ℓ,j} the processed measurements (at the jth node for the ℓth node). Notice that for each node ℓ, (4) defines the local state transition model. Moreover, the local measurements (see (5)) and the processed measurements to be received from the other nodes together form the overall measurement vector of the ℓth node (for the local state estimation task). Let the overall measurement vector of the ℓth node be denoted by ỹ_t^ℓ and, as an example, let C_ℓ = {i, j}. Then, ỹ_t^ℓ is simply obtained as follows:

ỹ_t^ℓ ≜ [y_t^ℓ; ỹ_t^{ℓ,i}; ỹ_t^{ℓ,j}].

For the ℓth node, we then have the following linear state-space equations:

x_t^ℓ = A^ℓ x_{t−1}^ℓ + v_t^ℓ,
ỹ_t^ℓ = H̃^ℓ x_t^ℓ + w̃_t^ℓ,  (9)

where H̃^ℓ is determined based on H^ℓ, H^{ℓ,i}, and H^{ℓ,j}, and w̃_t^ℓ is the noise vector corresponding to ỹ_t^ℓ (see (5) and (7)):

H̃^ℓ ≜ [H^ℓ; H^{ℓ,i}; H^{ℓ,j}],  w̃_t^ℓ ≜ [w_t^ℓ; w̃_t^{ℓ,i}; w̃_t^{ℓ,j}].  (10)

The Kalman filter is an iterative real-time estimator consisting of prediction and measurement update steps at each iteration. For the linear system given in (9), the following equations describe the Kalman filter iteration at time t, where x̂_{t|t′}^ℓ denotes the state estimate of the ℓth node at time t based on the measurements up to time t′ (t′ = t−1 for prediction and t′ = t for measurement update):

Prediction:
x̂_{t|t−1}^ℓ = A^ℓ x̂_{t−1|t−1}^ℓ,
P_{t|t−1}^ℓ = A^ℓ P_{t−1|t−1}^ℓ A^{ℓT} + σ_v² I_{N_ℓ},  (11)

Measurement Update:
G_t^ℓ = P_{t|t−1}^ℓ H̃^{ℓT} (H̃^ℓ P_{t|t−1}^ℓ H̃^{ℓT} + R_t^ℓ)^{−1},
x̂_{t|t}^ℓ = x̂_{t|t−1}^ℓ + G_t^ℓ (ỹ_t^ℓ − H̃^ℓ x̂_{t|t−1}^ℓ),
P_{t|t}^ℓ = P_{t|t−1}^ℓ − G_t^ℓ H̃^ℓ P_{t|t−1}^ℓ,  (12)

where P_{t|t−1}^ℓ and P_{t|t}^ℓ denote the estimates of the state covariance matrix of the ℓth node at time t based on the measurements up to t−1 and t, respectively. Moreover, G_t^ℓ is the Kalman gain matrix of the ℓth node at time t and R_t^ℓ denotes the covariance matrix of w̃_t^ℓ. We now go back to the process of obtaining ỹ_t^{ℓ,j} at the jth node.
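A scalar (single-state, single-measurement) instance of the prediction and measurement update steps in (11)–(12) can be sketched in Python as follows. This is purely illustrative: the actual filter is multivariate, and the scalar variances here stand in for the covariance matrices:

```python
def kalman_step(x_prev, P_prev, y, a, h, sigma_v2, sigma_w2):
    """One scalar Kalman iteration (illustrative sketch of (11)-(12))."""
    # Prediction (eq. (11)): propagate the estimate and its variance.
    x_pred = a * x_prev
    P_pred = a * P_prev * a + sigma_v2
    # Measurement update (eq. (12)): gain, innovation correction,
    # and variance reduction.
    G = P_pred * h / (h * P_pred * h + sigma_w2)
    x_new = x_pred + G * (y - h * x_pred)
    P_new = P_pred - G * h * P_pred
    return x_new, P_new
```

With near-noiseless measurements the update pulls the estimate almost entirely toward the observation and drives the posterior variance toward zero, which is the expected limiting behavior.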
We see through (7) that this process contains estimation errors due to the term H̄^{ℓ,j} (x̄_t^{ℓ,j} − x̂̄_t^{ℓ,j}). Our aim is to statistically characterize the estimation errors in order to compute the statistics of w̃_t^{ℓ,j} (see (8)), which is, in fact, required for an optimal state estimation at the ℓth node. Note that the jth node estimates its local state vector x_t^j (and hence x̄_t^{ℓ,j}) via its local Kalman filter. We propose that

x̂̄_t^{ℓ,j} = x̂̄_{t|t−1}^{ℓ,j},

where x̂̄_{t|t−1}^{ℓ,j} is the sub-vector of x̂_{t|t−1}^j corresponding to x̄_t^{ℓ,j}. Then, the following lemma states the distribution of w̃_t^{ℓ,j}.

Lemma 1:

w̃_t^{ℓ,j} ∼ N(0, Δ_t^{ℓ,j}),  (13)

where

Δ_t^{ℓ,j} ≜ H̄^{ℓ,j} P̄_{t|t−1}^{ℓ,j} H̄^{ℓ,jT} + σ_w² I_{K_{ℓ,j}},  (14)

where P̄_{t|t−1}^{ℓ,j} is the estimate of the state covariance matrix of x̄_t^{ℓ,j}, which can be obtained from P_{t|t−1}^j, and K_{ℓ,j} denotes the size of the vector y_t^{ℓ,j}.

###### Proof.

See Appendix -A. ∎

Through (4) and (5), we know that v_t^ℓ and w_t^ℓ are independent zero-mean multivariate Gaussian noise vectors. Furthermore, from Lemma 1, we know that the noise term w̃_t^{ℓ,j} for the processed measurements (see (7)) is also zero-mean multivariate Gaussian. That implies w̃_t^ℓ is a zero-mean multivariate Gaussian vector (see (10)). Then, since all the noise terms are Gaussian for the linear system in (9), the local Kalman filter given in (11)–(12) is the optimal state estimator for the ℓth node in minimizing the mean squared state estimation error [55]. Based on Lemma 1, we can write (see (10))

w̃_t^ℓ ∼ N(0, R_t^ℓ),  (15)

where the diagonal blocks of R_t^ℓ are σ_w² I_{K_ℓ}, Δ_t^{ℓ,i}, and Δ_t^{ℓ,j}, and the off-diagonal blocks consist of the cross-correlations between the noise terms in (10). Here, we observe that the computation of R_t^ℓ requires the correlation between w̃_t^{ℓ,i} and w̃_t^{ℓ,j}. Using (8), we can write

E[w̃_t^{ℓ,i} w̃_t^{ℓ,jT}] = H̄^{ℓ,i} E[(x̄_t^{ℓ,i} − x̂̄_t^{ℓ,i})(x̄_t^{ℓ,j} − x̂̄_t^{ℓ,j})^T] H̄^{ℓ,jT},

which requires the correlation between the state estimation errors for x̄_t^{ℓ,i} and x̄_t^{ℓ,j}. However, since the state variables in x̄_t^{ℓ,i} and x̄_t^{ℓ,j} may belong to different local centers, and the correlations for such state variables are not computed over the network, we approximate the correlation terms involving the state variables in x̄_t^{ℓ,i} and x̄_t^{ℓ,j} as zero. This is the only point where we make an approximation and hence lose the optimality.
Through simulations, we observe that this approximation only slightly increases the state estimation error compared to the optimal centralized Kalman filter. Hence, in the rest of the design and analysis (in Sec. V and Sec. VI), we assume that the proposed distributed dynamic state estimator achieves near-optimal performance. Finally, we note that the main contributions of the proposed distributed dynamic state estimator are in the processing of the measurements that are acquired at the other local centers and only partially relevant to the local state vector x_t^ℓ.

Remark 1: Each node ℓ needs the knowledge of R_t^ℓ for the measurement update step of its local Kalman filter, where computing R_t^ℓ requires Δ_t^{ℓ,j}, j ∈ C_ℓ (see (15)). We observe through (14) that Δ_t^{ℓ,j} depends on H̄^{ℓ,j} and P̄_{t|t−1}^{ℓ,j}. Here, H̄^{ℓ,j} is determined based on the network topology, which is available at every node. On the other hand, P̄_{t|t−1}^{ℓ,j} is extracted from the state covariance matrix of the jth node, P_{t|t−1}^j, which is primarily computed at the jth node. Nevertheless, we see through (11) and (12) that the (iterative) computation of the state covariance matrices does not depend on online meter measurements and hence it can be performed offline at each node in the network. Moreover, in the proposed distributed trust management scheme (see Sec. VI), each node needs to compute the state covariance matrices of all other nodes. Hence, the proposed state estimation mechanism does not introduce further computational complexity beyond the trust management scheme.

## V Secure State Estimation against Measurement Anomalies

The state estimator proposed in the previous section is based on the assumption that the network operation always fits the nominal system model. However, in practice, various kinds of anomaly might appear all over the network, e.g., measurement anomalies due to cyber-attacks or network faults. We would like to achieve secure state estimation against measurement anomalies.
Towards this goal, we propose to quickly and reliably detect them and then to eliminate their effects as much as possible. Considering that attackers can be advanced, strategic, or adaptive to the system and detector dynamics, it is hard to model all attack types [7, 8]. Moreover, considering the complex cyber-physical nature of the smart grid, it is also difficult to model all types of network faults. Hence, in this study, we do not focus on particular anomaly types; rather, we assume that the anomaly type is totally unknown. On the other hand, once we detect an anomaly, this prevents us from recovering the useful part of the anomalous measurements (if any). Our anomaly mitigation strategy is then to reject/neglect the anomalous measurements in the state estimation process and to predict the system state, until the system is recovered back to the regular operating conditions, based on (i) the previous reliable state estimates, securely recorded in the distributed ledger, and (ii) the nominal system model. As the meters are distributed over the network, each node analyzes only its local measurements. We next explain the proposed measurement anomaly detection scheme at the ℓth node and then the corresponding state recovery over the network.

### V-A Real-time Detection of Local Measurement Anomalies

During the regular system operation, it is possible to observe infrequent outliers, and the Kalman filter is known to be effective in compensating for (suppressing) small errors due to such infrequent outliers [56]. Hence, we are particularly interested in long-term anomalies where there exists a temporal relation between anomalous measurements. Our aim is to detect such anomalies in a timely and reliable manner using the measurements that become available sequentially over time.
In this problem, although we can statistically characterize the nominal measurements sufficiently accurately based on the nominal system model and the online state estimates under regular system operation, the measurements can take various unknown statistical forms in case of an anomaly. Hence, we follow a solution strategy in which we derive and monitor (over time) a univariate statistic that is informative about a possible deviation of the online measurements from their nominal model, as detailed next. At each time t, the ℓth node locally observes y_t^ℓ (see (5)). Moreover, based on the local Kalman filter, we have

x_t^ℓ − x̂_{t|t−1}^ℓ ∼ N(0, P_{t|t−1}^ℓ).  (16)

Using (5) and (16), we can write

y_t^ℓ − H^ℓ x̂_{t|t−1}^ℓ = H^ℓ (x_t^ℓ − x̂_{t|t−1}^ℓ) + w_t^ℓ ∼ N(0, Σ_t^ℓ),  (17)

where Σ_t^ℓ ≜ H^ℓ P_{t|t−1}^ℓ H^{ℓT} + σ_w² I_{K_ℓ}. Then, based on (17),

χ_t^ℓ ≜ (y_t^ℓ − H^ℓ x̂_{t|t−1}^ℓ)^T (Σ_t^ℓ)^{−1} (y_t^ℓ − H^ℓ x̂_{t|t−1}^ℓ)  (18)

is a chi-squared random variable with K_ℓ degrees of freedom. Notice that χ_t^ℓ has a time-invariant distribution under regular system operation. Let F_{K_ℓ} be the cumulative distribution function (cdf) of the chi-squared random variable with K_ℓ degrees of freedom. If the right tail probability corresponding to χ_t^ℓ satisfies

p_t^ℓ ≜ 1 − F_{K_ℓ}(χ_t^ℓ) < α,  (19)

then the corresponding local measurement vector y_t^ℓ is considered an outlier at the significance level α. In case of an anomaly, we expect that the chi-squared statistic takes higher values compared to its nominal values and hence we expect to observe more frequent outliers. Then, we can model an anomaly as persistent outliers, as in [57] and [58]. Based on (19), for an outlier y_t^ℓ, we have

s_t^ℓ ≜ log(α / p_t^ℓ) > 0,  (20)

and similarly, for a non-outlier y_t^ℓ, we have s_t^ℓ ≤ 0. Hence, we can consider s_t^ℓ as a (positive/negative) statistical evidence for an anomaly at time t.
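The tail probability in (19) and the evidence term in (20) can be computed directly. The sketch below uses the closed-form right-tail probability of the chi-squared distribution, which holds for an even number of degrees of freedom (an assumption we make here for simplicity; a library routine would cover the general case):

```python
import math

def chi2_sf(x, K):
    # Right tail probability P(chi^2_K > x); closed form valid for even K:
    # exp(-x/2) * sum_{k=0}^{K/2 - 1} (x/2)^k / k!
    assert K % 2 == 0, "closed form used here requires even K"
    return math.exp(-x / 2) * sum((x / 2) ** k / math.factorial(k)
                                  for k in range(K // 2))

def evidence(chi_stat, K, alpha):
    # s_t = log(alpha / p_t) as in eq. (20): positive when the
    # measurement is an outlier (p_t < alpha), negative otherwise.
    p = chi2_sf(chi_stat, K)
    return math.log(alpha / p)
```

A large chi-squared statistic yields a small tail probability and hence positive evidence, matching the sign convention in (20).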
Then, similar to the accumulation of log-likelihood ratios in the well-known cumulative sum (CUSUM) test, we can accumulate the $s^\ell_t$'s over time and declare a measurement anomaly only if there is strong/reliable evidence supporting it, which results in the following CUSUM-like test [57]:

$$\Gamma_\ell=\inf\{t: g^\ell_t\ge h\},\qquad g^\ell_t=\max\{0,\,g^\ell_{t-1}+s^\ell_t\}, \qquad (21)$$

where $\Gamma_\ell$ denotes the stopping time at which an anomaly is detected at the $\ell$th node and $g^\ell_0=0$. Let $\tau_\ell$ be the unknown change-point at which an anomaly happens at the local measurements of the $\ell$th node and continues thereafter. The CUSUM test always keeps an estimate of the change-point and updates it as the measurements become available over time [59, Sec. 2.2]. Let $\hat{\tau}_\ell$ be the change-point estimate of the proposed test. Initializing at $\hat{\tau}_\ell=0$, whenever the decision statistic reaches zero, we make the following update: $\hat{\tau}_\ell\leftarrow t$. In other words, $\hat{\tau}_\ell$ is the latest time-instant at which the decision statistic reaches zero. The final change-point estimate is determined when an anomaly is declared at the stopping time $\Gamma_\ell$. Hence, we have

$$\hat{\tau}_\ell\triangleq\max\{t: g^\ell_t=0,\ t<\Gamma_\ell\}.$$

The change-point estimate will be useful for state recovery (see Sec. V-B).

For the CUSUM-like test in (21), to achieve a lower false alarm rate (equivalently, a larger average false alarm period), the significance level $\alpha$ is chosen smaller and/or the test threshold $h$ is chosen higher, which, on the other hand, leads to larger detection delays (see (20) and (21)). Let $\mathrm{E}_\infty[\Gamma_\ell]$ be the average false alarm period, i.e., the average stopping time when no change happens at all ($\tau_\ell=\infty$). The following corollary (to Theorem 2 of [57]) describes how to choose $\alpha$ and $h$ to obtain a desired lower bound on the average false alarm period.

Corollary 1: For a chosen $\alpha$ and

$$h\ge\frac{\log(L)}{1-W(\alpha\log(\alpha))/\log(\alpha)}, \qquad (22)$$

we have $\mathrm{E}_\infty[\Gamma_\ell]\ge L$, where $W(\cdot)$ denotes the Lambert-W function¹.

¹There exists a built-in MATLAB function lambertw.

###### Proof.

See Appendix -B.
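The recursion in (21), together with the running change-point estimate, takes only a few lines; the following Python sketch is illustrative (the names are ours, not the paper's):

```python
def cusum_detect(evidences, h):
    """CUSUM-like test of (21): accumulate the evidences s_t and stop the
    first time the decision statistic g_t reaches the threshold h.

    Returns (stopping_time, change_point_estimate), both 1-based, or
    (None, None) if the threshold is never crossed.
    """
    g, tau_hat = 0.0, 0          # g_0 = 0; tau_hat = last time g hit zero
    for t, s in enumerate(evidences, start=1):
        g = max(0.0, g + s)
        if g == 0.0:
            tau_hat = t
        if g >= h:
            return t, tau_hat
    return None, None
```

For example, `cusum_detect([-1, -1, 2, 2, 2], 5)` stops at time 5 with change-point estimate 2, since the statistic last sat at zero after the second (negative-evidence) sample.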
∎

### V-B State Recovery

Once the proposed CUSUM-like detection scheme in (21) declares an anomaly, our purpose is to recover the current (and future) state estimates. Since the local measurements observed after the (unknown) change-point $\tau_\ell$ are not reliable, we estimate the change-point and recover the state estimates from the latest reliable state estimates at the change-point estimate $\hat{\tau}_\ell$.

The smart grid is a highly interconnected network; in the proposed mechanism, the local state estimation is performed using the local measurements as well as the processed measurements received from some other nodes. Hence, if an anomaly happens at a node, the whole network is affected by the anomaly to some extent. Then, whenever a measurement anomaly is detected at a node, say the $\ell$th one at time $\Gamma_\ell$, the $\ell$th node immediately broadcasts $\hat{\tau}_\ell$ to the entire network. Then every node $j$ makes the following state recovery:

$$\hat{\mathbf{x}}^j_{t|t}={\mathbf{A}^j}^{(t-\hat{\tau}_\ell)}\,\hat{\mathbf{x}}^j_{\hat{\tau}_\ell|\hat{\tau}_\ell}, \qquad (23)$$

which essentially corresponds to the case where we replace all the measurements during the anomaly interval with the corresponding pseudo measurements, making the measurement innovation signal zero (see (11) and (12)).

The regular network operation requires the participation of every node in the state estimation process. Hence, whenever a measurement anomaly is detected at the $\ell$th node, we propose to raise an alarm flag, calling for further investigation at the $\ell$th subregion and the neighboring subregions, considering the possibility that the processed measurements received from the neighboring nodes may also lead to an anomaly in the local state estimation process. The investigation should be performed considering also the possibility of false alarms. After the investigation process, and possibly the recovery of the system, the predesigned regular network operation is restarted.
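The rollforward in (23) is simply a repeated application of the nominal state transition matrix to the last reliable estimate; a minimal numpy sketch (with hypothetical names) is:

```python
import numpy as np

def recover_state(A, x_tau, t, tau_hat):
    """Model-based prediction of (23): roll the reliable estimate at the
    change-point estimate forward, x_hat_{t|t} = A^(t - tau_hat) x_hat_tau."""
    return np.linalg.matrix_power(A, t - tau_hat) @ x_tau
```

In the proposed mechanism, `x_tau` would be fetched from the distributed ledger rather than from local memory.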
Our main purpose here is to decrease the state estimation errors due to anomalies and hence to provide more reliable state estimates during the anomaly mitigation/system recovery period. The identification of anomalies (types and causes) and the development of the corresponding mitigation/recovery strategies, which are needed to achieve a completely autonomous network operation, are beyond the scope of the current work.

Remark 2: For the state recovery, the distributed ledger consisting of the recent blocks needs to include the state estimates of time $\hat{\tau}_\ell$, where $\hat{\tau}_\ell$ is not known ahead of time. However, since we expect quick detection, we do not expect to observe a $\hat{\tau}_\ell$ that is far away from the stopping time $\Gamma_\ell$. Hence, in practice the number of recent blocks kept in the ledger can be chosen reasonably small. In the case where the ledger does not contain the state estimates of time $\hat{\tau}_\ell$, we can recover the state estimates from the oldest state estimates available in the ledger, considering that they are more reliable compared to the other alternatives.

## VI Distributed Trust Management

In BC-based distributed networks, malicious adversaries may obtain illegitimate access to the system, e.g., via stealing the digital identity of some nodes, malware propagation, etc. [60, 11], and additionally some nodes may become faulty during the system operation. Moreover, as the network is fully distributed, there is no centralized trusted node to check whether all nodes are safe and trustable, i.e., whether the nodes are functioning according to the predesigned network rules. Therefore, against the possibility of misbehaving nodes, we need a distributed trust management mechanism over the network, in which all nodes collectively verify the trustability of each node. Recall that every node knows (i) the nominal system model and the network configuration, and (ii) a finite history of recent state estimates of all the nodes, stored in the shared distributed ledger (see Sec. III-B2).
Using only (i) and (ii), each node votes on the trustability of all other nodes. Then, at each time, the trustability of each node is decided via majority voting. We explain below how the $\ell$th node is evaluated by the other regular (non-misbehaving) nodes.

Suppose that at an unknown time $\eta_\ell$, an unexpected event happens at the $\ell$th node: the node gets faulty or an attacker hacks and takes control of the node. Then, we can no longer expect the behavior of the $\ell$th node to fit its pre-defined regular operation. Furthermore, similar to the measurement anomalies, it is quite difficult to model the (anomalous) behavior of the $\ell$th node after time $\eta_\ell$. Our objective is to detect misbehaving nodes as quickly as possible, in order to timely mitigate the corresponding effects on the state estimation process.

For the evaluation of the $\ell$th node, we propose that each node decides whether the state estimates provided by the $\ell$th node exhibit an anomalous pattern over time. In this direction, we next derive the nominal evolution (over time) model of the local state estimates of the $\ell$th node. Then, similar to Sec. V-A, we derive a univariate statistic that is informative about a possible deviation of the local state estimates of the $\ell$th node from the nominal evolution model, and we monitor this statistic over time.

Based on the local Kalman filter iteration of the $\ell$th node at time $t$ (see (11) and (12)), we can write

$$\hat{\mathbf{x}}^\ell_{t|t}=\hat{\mathbf{x}}^\ell_{t|t-1}+\mathbf{G}^\ell_t(\tilde{\mathbf{y}}^\ell_t-\tilde{\mathbf{H}}^\ell\hat{\mathbf{x}}^\ell_{t|t-1})=\mathbf{A}^\ell\hat{\mathbf{x}}^\ell_{t-1|t-1}+\mathbf{G}^\ell_t\big(\tilde{\mathbf{H}}^\ell(\mathbf{x}^\ell_t-\hat{\mathbf{x}}^\ell_{t|t-1})+\tilde{\mathbf{w}}^\ell_t\big) \qquad (24)$$

$$\sim\mathcal{N}\big(\mathbf{A}^\ell\hat{\mathbf{x}}^\ell_{t-1|t-1},\boldsymbol{\Psi}^\ell_t\big), \qquad (25)$$

where $\boldsymbol{\Psi}^\ell_t$ is as defined in (26). Here, (24) is obtained using (9) and (11). Moreover, (25) is obtained using (16) and by approximating as zero the correlation terms involving state variables belonging to different local centers, as in Sec. IV. Notice that (25) statistically characterizes the local state estimates at time $t$, given the local state estimates at time $t-1$, under regular system operation.
Based on (25), we can write $\hat{\mathbf{x}}^\ell_{t|t}-\mathbf{A}^\ell\hat{\mathbf{x}}^\ell_{t-1|t-1}\sim\mathcal{N}(\mathbf{0},\boldsymbol{\Psi}^\ell_t)$, which implies that

$$\pi^\ell_t\triangleq(\hat{\mathbf{x}}^\ell_{t|t}-\mathbf{A}^\ell\hat{\mathbf{x}}^\ell_{t-1|t-1})^{\mathrm{T}}{\boldsymbol{\Psi}^\ell_t}^{-1}(\hat{\mathbf{x}}^\ell_{t|t}-\mathbf{A}^\ell\hat{\mathbf{x}}^\ell_{t-1|t-1}) \qquad (27)$$

is a chi-squared random variable with $N_\ell$ degrees of freedom under regular system operation. If node $\ell$ is misbehaving, we expect the state estimates provided by it to deviate from the nominal evolution model given in (25), which makes $\pi^\ell_t$ larger than its nominal values. Then, similar to the detection of measurement anomalies in Sec. V-A, the following CUSUM-like detection scheme can be employed at a regular node $j$ to decide on the trustability of the $\ell$th node:

$$\Gamma_{\ell_j}=\inf\{t: g_{\ell_j,t}\ge h_\ell\},\quad g_{\ell_j,t}=\max\{0,\,g_{\ell_j,t-1}+s_{\ell_j,t}\},\quad s_{\ell_j,t}=\log(\alpha_\ell/p_{\ell_j,t}),\quad p_{\ell_j,t}=1-F_{N_\ell}(\pi^\ell_t), \qquad (28)$$

where $g_{\ell_j,t}$ is the decision statistic at time $t$, $g_{\ell_j,0}=0$, $h_\ell$ is the decision threshold, $s_{\ell_j,t}$ is the statistical evidence at time $t$, $\alpha_\ell$ is the significance level, $p_{\ell_j,t}$ is the right tail probability corresponding to $\pi^\ell_t$, and $F_{N_\ell}$ denotes the cdf of the chi-squared random variable with $N_\ell$ degrees of freedom. A regular node $j$ then evaluates the $\ell$th node as trustable until time $\Gamma_{\ell_j}$, and as misbehaving after $\Gamma_{\ell_j}$. Furthermore, as before, the unknown change-point is estimated by the $j$th node as the latest time-instant at which the decision statistic reaches zero before time $\Gamma_{\ell_j}$:

$$\hat{\eta}_{\ell_j}\triangleq\max\{t: g_{\ell_j,t}=0,\ t<\Gamma_{\ell_j}\}.$$

The overall decision on the trustability of the $\ell$th node is made by the majority of the nodes. Let the vote of the $j$th node on the trustability of the $\ell$th node at time $t$ be denoted by the binary variable $d_{\ell_j,t}$, where $d_{\ell_j,t}=0$ or $d_{\ell_j,t}=1$ if the $j$th node evaluates the $\ell$th node as trustable or misbehaving, respectively. Notice that if the $j$th node is regular, then it votes based on the test in (28), so that $d_{\ell_j,t}=1$ only for $t\ge\Gamma_{\ell_j}$. The time at which the $\ell$th node is declared misbehaving over the network is determined as follows:

$$\Gamma_\ell^{\mathrm{net}}\triangleq\inf\Big\{t:\sum_{j\in\{1,\dots,L\}\setminus\{\ell\}}d_{\ell_j,t}>\frac{L-1}{2}\Big\}. \qquad (29)$$

Notice that this decision mechanism works as intended unless the majority of the nodes are misbehaving.
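The majority-voting rule of (29) can be sketched as follows (a toy illustration with our own names; in the actual scheme each binary vote is produced by the per-node test in (28)):

```python
def declared_misbehaving(votes):
    """Network decision of (29): votes is the list of binary votes d from
    the L-1 evaluating nodes (1 = misbehaving); a strict majority of the
    voters is required to declare the evaluated node misbehaving."""
    return sum(votes) > len(votes) / 2
```

With three voters, two flags suffice; with four voters, a 2–2 split does not, so a single compromised voter cannot force a declaration on its own.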
In other words, as long as the majority of the nodes regularly employ the proposed detection scheme in (28) and vote accordingly, the trustability of the $\ell$th node is evaluated reliably over the network, for every node $\ell$. Under the nominal system operation, all nodes are regular, and hence the detector (for the $\ell$th node) in (28) is identical at all the nodes in the network. Then, the false alarm rate of (29) is equal to the false alarm rate of (28). Hence, the proposed trust management scheme achieves the same false alarm guarantees through Corollary 1 (after replacing the parameters $\alpha$ and $h$ in the corollary with $\alpha_\ell$ and $h_\ell$, respectively).

If (29) declares the $\ell$th node as misbehaving, an alarm flag is raised, calling for an investigation at the $\ell$th node and the neighboring nodes. Moreover, due to the inter-node data exchanges, a node misbehavior affects all the nodes in the network to some extent. Then, as before, until the system is recovered back to the regular operating conditions, the local states of each node $j$ can be predicted based on the nominal system model and the latest reliable estimates at time $\hat{\eta}_{\ell_j}$ as follows:

$$\hat{\mathbf{x}}^j_{t|t}={\mathbf{A}^j}^{(t-\hat{\eta}_{\ell_j})}\,\hat{\mathbf{x}}^j_{\hat{\eta}_{\ell_j}|\hat{\eta}_{\ell_j}},$$

where $\hat{\mathbf{x}}^j_{\hat{\eta}_{\ell_j}|\hat{\eta}_{\ell_j}}$ can be obtained from the distributed ledger.

The proposed trust management scheme requires that each node $j$ computes $\pi^\ell_t$ and $\boldsymbol{\Psi}^\ell_t$ for every other node $\ell$ (see (26)–(29)). Since the nominal system model is known by each node, and the local state estimates provided by the other nodes are available (via the distributed ledger) to each node, node $j$ already has, at each time $t$ and for every node $\ell$, the model matrices and state estimates appearing in (27). On the other hand, the Kalman gain and the estimated state covariance matrices of the other nodes are not directly available to the $j$th node. Fortunately, the Kalman gain matrix and the estimate of the state covariance matrix can be computed offline, without requiring online meter measurements. Moreover, $\boldsymbol{\Psi}^\ell_t$ is computed based on the estimates of the state covariance matrices (see also Remark 1). Hence, at each node $j$, we propose to compute these matrices (iteratively through (11) and (12)) for every other node $\ell$ in the network.
## VII Summary of the Proposed Mechanism

We summarize the proposed procedure at the $\ell$th node in Fig. 4, where the procedure is identical at every node. Since the proposed mechanism requires an investigation after the detection of a measurement anomaly at any node or a misbehavior of any node, the overall stopping time of the network is given by

$$\Gamma^{\mathrm{net}}\triangleq\inf\{\Gamma_\ell,\ \Gamma_\ell^{\mathrm{net}},\ \ell=1,\dots,L\}. \qquad (30)$$

If several detection mechanisms simultaneously give alarms, we can recover the state estimates based on the oldest among the corresponding change-point estimates. Moreover, if the state estimates corresponding to the (oldest) change-point estimate are not included in the distributed ledger consisting of the recent blocks, then we can choose the oldest state estimates available in the ledger as the state recovery point (the state recovery point is denoted in Fig. 4). In case an anomaly is declared over the network, after an investigation, and possibly the recovery of the system, the proposed mechanism can be restarted. Finally, the number of blocks in the distributed ledger can be chosen based on the maximum expected detection delay and the corresponding change-point estimate over an offline simulation.

## VIII Simulation Results

In this section, we evaluate the performance of the proposed mechanism via simple case studies over an IEEE-14 bus power system, which consists of several subregions, 14 buses, and a set of meters (see Fig. 5). One bus is chosen as the reference bus, the state transition matrix is chosen to be an identity matrix, and the measurement matrix is determined based on the network topology. The noise variances are fixed, and the initial state variables (voltage phase angles) are determined via the DC optimal power flow algorithm for case-14 in MATPOWER [61]. For the proposed detection schemes, the significance level and the test threshold are chosen via (22) to guarantee the target lower bound on the average false alarm period, and the resulting average false alarm period of the network (see (30)) is then obtained via a Monte Carlo simulation.
Moreover, the number of blocks in the distributed ledger is chosen accordingly. In the following, we present simulation results, first for a measurement anomaly case and then for a node misbehavior case.

### VIII-A Case 1: Measurement Anomalies

As an example of measurement anomalies, we consider FDI attacks launched at time $\tau_\ell$ against the measurements in subregion $\ell$:

$$\mathbf{y}^\ell_t=\mathbf{H}^\ell\mathbf{x}^\ell_t+\mathbf{a}^\ell_t+\mathbf{w}^\ell_t,\quad t\ge\tau_\ell, \qquad (31)$$

where $\mathbf{a}^\ell_t$ denotes the injected false data at time $t$. We assume that subregions 1 and 2 are under FDI attack after time $\tau_\ell$, with the entries of the injected false data drawn as uniform random variables. Firstly, we present in Fig. 6 the sum of the mean squared state estimation errors over all local centers, both for the pre-attack period, i.e., $t<\tau_\ell$, and for the attacking period. We present the performance of the proposed distributed secure state estimation mechanism, the centralized Kalman filter, and a robust centralized Kalman filter that rejects gross outliers and replaces the corresponding measurements with pseudo measurements, similar to [31]. In particular, the robust Kalman filter computes a chi-squared statistic at each time $t$ using all the measurements, similar to (18), and computes the corresponding p-value (the right tail probability) based on the chi-squared distribution with the appropriate degrees of freedom. Then, if the p-value is less than the chosen significance level for outliers, the corresponding measurements are replaced with pseudo measurements.
https://byjus.com/physics/reflection-of-light/
# Reflection of Light

Have you ever thought about why we can see our image in a plane mirror? It's because of the phenomenon known as reflection. Light waves, sound waves, and water waves can all undergo reflection. In this session, let us learn about the reflection of light and the types of reflection in detail.

## What is Reflection of Light?

When a ray of light approaches a smooth polished surface and bounces back, this is called the reflection of light. The incident light ray that lands on the surface is reflected off the surface, and the ray that bounces back is called the reflected ray. A perpendicular drawn to the reflecting surface is called the normal. The figure below shows the reflection of an incident beam on a plane mirror; here, the angle of incidence and the angle of reflection are measured with respect to the normal.

### Laws of Reflection

The laws of reflection determine the reflection of incident light rays on reflecting surfaces, like mirrors, smooth metal surfaces and clear water. Consider a plane mirror as shown in the figure above. The laws of reflection state that

• The incident ray, the reflected ray and the normal all lie in the same plane
• The angle of incidence = the angle of reflection

## Types of Reflection of Light

Different types of reflection of light are briefly discussed below:

• Regular reflection, also known as specular reflection
• Diffused reflection
• Multiple reflection

### Regular/Specular Reflection

Specular reflection refers to a clear and sharp reflection, like the ones you get in a mirror. A mirror is made of glass coated with a uniform layer of a highly reflective material. This reflective surface reflects almost all the light incident on it uniformly, and there is not much variation in the angles of reflection between various points.
This means that haziness and blurring are almost entirely eliminated.

[Figure: Regular/specular reflection]

### Diffused Reflection

Reflective surfaces other than mirrors generally have a very rough finish. This may be due to wear and tear, such as scratches and dents, or dirt on the surface; sometimes even the material the surface is made of matters. All this leads to a loss of both the brightness and the quality of the reflection. In the case of such rough surfaces, the angle of reflection, when compared between points, is completely haphazard: rays incident at slightly different points on the surface are reflected in completely different directions. This type of reflection is called diffused reflection, and it is what enables us to see non-shiny objects.

[Figure: Diffused reflection]

### Multiple Reflection

A single image is formed when an object is placed in front of a mirror. What happens if we use two mirrors? Since reflective surfaces such as mirrors are very good at preserving the intensity of light in a reflection, a single light source can be reflected multiple times. These multiple reflections are possible until the intensity of light becomes too low for us to see, which means we can have almost infinitely many reflections. We also see an image in every individual reflection: each image is the result of an image of an image.

The number of images we see depends on the angle between the two mirrors. As we decrease the angle between the mirrors, the number of images increases, and when the angle becomes zero, i.e., when the mirrors become parallel, the number of images becomes infinite. This effect can be easily observed when your barber uses a second, smaller mirror to show you the back of your head: not only do you see the back of your head, but you also see innumerable images of yourself.
The number of images of an object placed between two mirrors, as a function of the angle between the mirrors, is given by a simple formula:

$$\begin{array}{l}Number\; of \; images = \frac{360^{\circ}}{angle\; between\; mirrors}-1\end{array}$$

## Frequently Asked Questions – FAQs

### What is meant by reflection of light?

When a light ray approaches a smooth polished surface and the light ray bounces back, it is known as the reflection of light.

### What is interference?

Interference is the phenomenon in which two waves superpose to form a resultant wave of lower, higher or the same amplitude.

### State the laws of reflection.

• The incident ray, the reflected ray and the normal all lie in the same plane.
• The angle of incidence is equal to the angle of reflection.

### What are the types of reflection of light?

• Regular reflection/specular reflection
• Diffused reflection
• Multiple reflection

### Which type of reflection results in a clear and sharp reflection?

Specular or regular reflection produces a clear and sharp reflection.
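The two-mirror image-count formula can be checked with a short snippet (an illustrative sketch; it assumes the angle divides 360° exactly):

```python
def number_of_images(angle_deg):
    """Number of images formed by two plane mirrors inclined at angle_deg
    degrees, using N = 360/angle - 1."""
    return 360 // angle_deg - 1
```

So mirrors at 60° give 5 images, and the count grows as the angle shrinks, consistent with the discussion above.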
https://mca2021.dm.uba.ar/en/tools/view-abstract?code=2590
## View abstract

### Session S21 - Galois representations and automorphic forms

Tuesday, July 13, 14:30 ~ 15:10 UTC-3

## Drinfeld's lemma for $F$-isocrystals

### Kiran S. Kedlaya

#### University of California San Diego, USA

Drinfeld's lemma on the fundamental groups of schemes in characteristic $p>0$ plays a fundamental role in the construction of the "automorphic to Galois" Langlands correspondence in positive characteristic, as in the work of V. Lafforgue. We describe a corresponding statement in which the roles of lisse etale sheaves and constructible sheaves are instead played by overconvergent $F$-isocrystals and arithmetic $\mathcal{D}$-modules, which is needed in order to transpose Lafforgue's argument to $p$-adic coefficients.

Joint work with Daxin Xu (Morningside Center of Mathematics).
https://amca01.wordpress.com/category/computation/
## Neusis constructions (2): trisections

This post, containing a nice slew of methods of trisecting a general angle using neusis methods, can be found at Numbers and Shapes.

## Neusis constructions (1)

You can find this post, introducing geometric constructions which allow the use of a straight-edge with two marks, at my new site Numbers and Shapes.

## Solving a cubic by folding

This post, which is on my new site here: http://www.numbersandshapes.net/?p=2520, shows how a cubic equation can be solved by origami. This is not a new result by any means, but it's hard to find a simple proof of how the construction works.

## An alternative to partial fractions

The full post is available at my new site: http://www.numbersandshapes.net/?p=2495

Enjoy!

## Meeting Julia

In my last post I mentioned the new language Julia. It deserves more than a single paragraph, so I thought I'd walk through a problem and tackle it with the language. The problem is a stock-standard one: investigating the fitting of an SIR model of disease spread to an outbreak of influenza at an English boarding school. Of the 763 pupils, only one was infected on the first day, and the number of infected pupils changed over 14 days as:

| Day      | 1 | 2 | 3  | 4  | 5   | 6   | 7   | 8   | 9   | 10  | 11 | 12 | 13 | 14 |
|----------|---|---|----|----|-----|-----|-----|-----|-----|-----|----|----|----|----|
| Infected | 3 | 6 | 25 | 73 | 222 | 294 | 258 | 237 | 191 | 125 | 69 | 27 | 11 | 4  |

We will try to find parameters $\beta$ and $\gamma$ so that when the equations

$\displaystyle{\frac{dS}{dt}=-\beta IS}$

$\displaystyle{\frac{dI}{dt}=\beta IS-\gamma I}$

$\displaystyle{\frac{dR}{dt}=\gamma I}$

are solved, the values of $I$ fit the data. Note that the sum of all the right-hand sides is zero; this indicates that $S+I+R$ is constant, which for this model is the population. (Other models allow for births, deaths and other population changes.)

Finding the parameters can be done in several steps:

1. Set up a function for the system of differential equations.

2.
From that function, create a second function which is the sum of squared differences between the data and the computed values from the equations.

3. Minimize that sum.

The first function can be created in Julia (following the example here) as:

```julia
function SIR(t,x,p)
    S = x[1]
    I = x[2]
    R = x[3]
    beta = p[1]
    gamma = p[2]
    [-beta*S*I, beta*S*I-gamma*I, gamma*I]
end
```

The default behaviour of functions in Julia is to return the line before the "end" statement; thus in a function such as this one, there is no need for a "return" statement (although one exists).

To solve a system of equations we need access to the ODE package, which I'll assume you've installed with Pkg.add(). This can be loaded into a running Julia session either with

```julia
using ODE
```

which adds all the ODE functions into the top namespace, or as

```julia
import ODE
```

which adds all the functions into the ODE namespace. So to call the Dormand-Prince method, you would use ode45 if all methods were in the top namespace, or ODE.ode45 in the second case. So, for example, we could attempt to solve the equations as:

```julia
p = [0.01, 0.1]
t, x = ODE.ode45((t,x)->SIR(t,x,p),[0:0.1:14],[762.,1.,0.]);
```

To plot the functions, we'll use the plotting library Winston (Winston, Julia from 1984 – geddit?), which is fairly basic, but enough for our simple needs:

```julia
import Winston
wn = Winston
wn.plot(t,x[:,1],"b",t,x[:,2],"g",t,x[:,3],"r",[1:14],data',"k*")
```

The green curve corresponds to $I$, the number of infected individuals at a given time, and the asterisks to the data. Clearly these parameters do not provide a very good fit.

The next step is to create a sum of squares function to minimize. To do this, we will solve the ODE system with the ode4 method, which uses a fixed step size.
This means we can ensure that there are computed values for $I$ at all the integer points:

```julia
function ss(b::Vector)
    data = [3 6 25 73 222 294 258 237 191 125 69 27 11 4];
    t,x = ODE.ode4((t,x)->SIR(t,x,b),[0:0.05:14],[762.,1.,0.]);
    sum((data-x[21:20:281,2]').^2)
end
```

(Note the fully qualified ODE.ode4 call, since the package was loaded with import.) Now this function can be optimized, using a method from the Optim package:

```julia
import Optim
Optim.optimize(ss,[0.001,1],method=:cg)
```

The "method" variable here defaults to the Nelder-Mead method, but for this particular function I found that the conjugate-gradient method gave the best results. This produces quite a bit of information, of which the salient bits are:

```
* Minimum: .0021806376138654117 .44517197444683443
* Value of Function at Minimum: 4507.078964
```

So, let's try these parameters and see what happens:

```julia
p = [.0021806376138654117, .44517197444683443];
t, x = ODE.ode45((t,x)->SIR(t,x,p),[0:0.1:14],[762.,1.,0.]);
wn.plot(t,x[:,1],"b",t,x[:,2],"g",t,x[:,3],"r",[1:14],data',"k*")
```

And here's the result:

As you can see, the fit is remarkably good.

As far as problems go, this is not particularly hard – conceptually speaking it's probably at an undergraduate level. On the other hand, it's not completely trivial, and it does seem to me to give the computational environment a bit of a work-out. And Julia solves it with ease.

## The best Matlab alternative (3)

Over two years ago I wrote The best Matlab alternative, with a follow-up a bit later, which seems to have engendered a fair amount of discussion. Well, things have moved on in the computational world, and a user is now spoiled for choice.

Octave has just had version 3.8 released; this version comes with a GUI (which is not enabled by default). However, the GUI is a very fine thing indeed:

As you can see, it has all the bells and whistles you may want: command window and history, variable browser, file browser, editor, and a documentation reader. The GUI will become standard as of version 4.0.
I think that Octave has come along in leaps and bounds over the past few years, and as a drop-in Matlab replacement (if that's what you want) I don't think it can be bettered.

Scilab is at version 5.4.1, and is a mature product. I have come across a few comments in newsgroups to the effect that Scilab is better than Octave for some advanced numerical work. However, I'm not sure if these are just opinions or are backed up with solid data and timings. I used to like Scilab a great deal, but now I don't see that it offers anything over Octave other than Xcos (a dynamic systems modeller, similar to Simulink) and a few niche extras, such as solutions of boundary value problems.

Python was a language I hadn't considered in my first post, but with the addition of NumPy and SciPy (plus numerous add-on packages) it becomes a definite contender for serious number-crunching. The web is full of stories about people who have dumped Matlab for Python. And of course with Python you get a lovely programming language, and extensions such as Cython, which gives you C-type speed. To get an idea of the packages, see here. Python has an interactive shell, IPython, which adds enormously to Python's ease of use.

Julia is the "new kid on the block": a language which has been designed from the ground up for efficiency and speed, as well as convenience and ease of use. It is supposed to have the power of Matlab, the speed of C, and the elegance of Python. It is still in the early stages (the most recent version is 0.3.0), but it shows enormous promise, and has garnered an impressive list of users and developers. And there is already a small – but growing – list of add-on packages. The Julia web pages show some timings which seem to indicate that Julia is faster (sometimes by several orders of magnitude) than its contenders. (Well, they would, wouldn't they? They would hardly show timings which were slower than other environments.)
My particular interest is image processing, in which Octave does very well (its image package is quite mature now), followed by Python (with its Mahotas and scikit-image packages). Scilab has its SIP toolbox, but I spent several days trying to install it and gave up. There is an Images package for Julia, but it doesn’t seem on a first glance to have the breadth of the others.

I ran a single test myself: to create a 1000 x 1000 random matrix, invert it, and find the trace of the product of the original and its inverse.

In Octave:

tic; A = rand(1000,1000); Ai = inv(A); trace(A*Ai); toc

Scilab:

tic; A = rand(1000,1000); Ai = inv(A); trace(A*Ai); toc

Python (importing numpy as np):

%timeit A = np.matrix(np.random.rand(1000,1000)); Ai = A.I; np.trace(A*Ai)

Julia:

tic(); A = rand(1000,1000); Ai = inv(A); trace(A*Ai); toc()

Note that Octave, Scilab and Julia all have similar syntax; Python is slightly different because functions don’t all exist in the top namespace. This means, for example, you can use a function called “rand” from different packages, with different functionality, easily.

And here are the results (running the code three times each):

Octave: 1.08677; 1.04735; 1.10616 seconds
Scilab: 3.502; 3.425; 3.663 seconds
Python: 2.33; 2.28; 2.39 seconds
Julia: 0.47034; 0.35776; 0.36403 seconds

This is hardly definitive, but it does show how fast Julia can be, and in relation how slow Scilab can be.

These four environments are by no means the end. R is extensively used, and has over 5000 add-on packages at the CRAN repository. However, I’ve never used R myself. I believe it has branched out from its statistical foundations, and is now more of a general use environment, but still with the best statistical functionality in the business. Then there’s the Perl Data Language, apparently used by astrophysicists, and the GNU Data Language, about which I know nothing. See the Wikipedia page for a list of others.
(Note that I’ve not discussed Freemat this time; I don’t believe it’s a serious contender. I had a look through some of its source code, and most of its files seem to be cut-down Matlab/Octave files without any error checking. Anyway, there seems to be no need to use niche software when Octave is so mature and so well-supported by its developers and users.) My recommendations? Use Octave, unless you have some niche requirements, in which case you may find Scilab more suitable. If you’re prepared to sacrifice maturity for speed, give Julia a go. If your wish is a mature programming language as your base, then Python. ## A very long-running program In the interests of random number generation, I’ve been experimenting with the iteration $x\to g^x\pmod{p}$ for a prime $p$ and a primitive root $g\mod p$.  Now, it turns out that some primes have a primitive root which generates all non-zero residues, and others don’t.  For example, consider the prime $p=19$, which has primitive roots $2, 3, 10, 13,14,15$.  Starting with $x=1$, we find the iterations produce: $\displaystyle{\begin{array}{ll}2:&2, 4, 16, 5, 13, 3, 8, 9, 18, 1\\ 3:& 3,8,6,7,2,9,18,1\\ 10:&10,9,18,1\\ 13:&13,15,8,16,9,18,1\\ 14:&14,9,18,1\\ 15:&15,8, 5, 2, 16, 6, 11, 3, 12, 7, 13, 10, 4, 9, 18, 1 \end{array}}$ We see that no primitive root generates all non-zero residues. However, for $p=23$, we find that the primitive root 20 does generate all non-zero residues: $20,18,2, 9, 5, 10, 8, 6, 16, 13, 14, 4, 12, 3, 19, 17, 7, 21, 15, 11, 22, 1$ Let’s call a prime such as 23, an “exponentially complete” prime. You can see a list of the first few such primes at http://oeis.org/A214876 (which has been contributed by me). My question of the moment is: “Is $2^{31}-1$ exponentially complete?” The prime $2^{31}-1$ is one beloved by random number theorists, because modulo arithmetic can be performed very efficiently using bit shifts. 
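As a sketch of why (this is my own Python illustration, not the author's code): since $2^{31}\equiv 1\pmod{2^{31}-1}$, a number's high bits can simply be added onto its low 31 bits without changing its residue.

```python
P = (1 << 31) - 1  # the Mersenne prime 2**31 - 1

def mod_p(z):
    """Reduce z < 2**62 modulo 2**31 - 1 using only shifts, masks and adds.
    Works because 2**31 = 1 (mod P), so the high bits fold onto the low bits."""
    z = (z & P) + (z >> 31)   # first fold: result now fits in 32 bits
    z = (z & P) + (z >> 31)   # second fold: result is at most P + 1
    return z if z < P else z - P
```

Two folds plus one conditional subtraction suffice for anything up to a product of two 31-bit values.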
In C, for example, $z \mod p$ can be computed with (z&p)+(z>>31). However, I can’t think of any method of finding this out except by testing each primitive root, to determine the length of its cycle. Even using GNU C, and the PARI library, and some speed-ups (for example, replacing modular exponentiation with just one multiplication from pre-computed arrays), my desktop computer has been grinding away for two days now, and is up to primitive root 1678; that is, the program has taken 2 days to test 380 primitive roots. Since there are $\phi(2^{31}-2)=534600000$ primitive roots, at this rate it will take something like $534600000/380\times 2 = 2813684.21$ days, or about $7703.66$ years. That’s a very long time.
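For small primes, though, the brute-force search is quick. Here is a sketch in Python (my own translation of the procedure described above — the actual search used C and the PARI library):

```python
def orbit(g, p):
    """Iterate x -> g**x (mod p) starting from x = 1, stopping when 1 reappears."""
    x, seq = 1, []
    for _ in range(p):          # bound the loop in case 1 is never reached
        x = pow(g, x, p)
        seq.append(x)
        if x == 1:
            break
    return seq

def prime_factors(n):
    """Distinct prime factors of n, by trial division."""
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

def is_exponentially_complete(p):
    """True if some primitive root of p generates all non-zero residues
    under the iteration x -> g**x (mod p)."""
    qs = prime_factors(p - 1)
    for g in range(2, p):
        # g is a primitive root iff g**((p-1)/q) != 1 for every prime q | p-1
        if all(pow(g, (p - 1) // q, p) != 1 for q in qs):
            if set(orbit(g, p)) == set(range(1, p)):
                return True
    return False
```

On the examples above, `orbit(2, 19)` reproduces the listed cycle for the primitive root 2, and the function confirms that 23 is exponentially complete while 19 is not.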
“Six Sigma Projects and Personal Experiences”, book edited by Abdurrahman Coskun, ISBN 978-953-307-370-5, Published: July 14, 2011 under CC BY-NC-SA 3.0 license. © The Author(s).

# Lean Six Sigma

By Vivekananthamoorthy N and Sankar S

DOI: 10.5772/17288

## Overview

Figure 1. Bill Smith coins the term Six Sigma at Motorola.
Figure 2. Reducing variation in a process using Six Sigma
Figure 3. Normal Distribution
Figure 4. Lean vs Six Sigma
Figure 5. SIPOC Diagram
Figure 6. Cause and Effect Diagram
Figure 7. Tools used in Root cause analysis
Figure 8. Affinity Diagram
Figure 9. Pareto diagram
Figure 10. Control Charts
Figure 11. FMEA

# Lean Six Sigma

N Vivekananthamoorthy and S Sankar

## 1. Introduction

Due to increased globalization, constant technological advances and other competitive pressures, organizations have to accelerate the pace of change to adapt to new situations. This climate introduces opportunities and threats, and organizations have to innovate and strive for operational excellence. Six Sigma is the most popular quality and process improvement methodology, striving for the elimination of defects in processes; its origin is traced back to the pioneering and innovative work done at Motorola and its adoption by many companies including GE, Ford, General Motors and Xerox. The primary objective of Six Sigma is to reduce variations, in products and processes, to achieve quality levels of less than 3.4 defects per million opportunities (DPMO). The important point to be noted is that reducing defects involves measurement in terms of millions of opportunities instead of thousands.
Six Sigma is a culmination of several decades of quality improvement efforts pursued by organizations the world over, due to pioneering work done by the quality gurus Shewhart, Deming, Juran, Crosby, Ishikawa, Taguchi and others. Dr. W. Edwards Deming, who is considered by many to be the “Father of the modern quality movement”, was instrumental in transforming post-war Japan into an economic giant by helping Japanese companies systematically introduce quality improvement measures. Dr. Deming advocated popular quality improvement methods such as Total Quality Management (TQM) and the Plan-Do-Check-Act methodology, along with his 14 points and the elimination of the 7 deadly diseases, and he helped organizations to achieve operational excellence with much customer focus. Later, many US companies gained much from Japanese experiences and ideas on quality improvement concepts. The Six Sigma concepts and tools used can be traced back to sound mathematical and management principles of Gauss, Taylor, Gilbreth and Ford, for contributions like sigma and the normal (Gaussian) distribution, Taylor’s scientific management, Gilbreth’s time and motion study, and Ford’s mass production of cars using the assembly line system. Six Sigma coupled with lean principles is called ‘Lean Six Sigma’, which takes Six Sigma implementation one step further: it increases speed by identifying and removing non-value-adding steps in a process, using lean tools based on the Toyota Production System (TPS). Execution of a Lean Six Sigma project uses a structured method of approaching problem solving, normally described by the acronym DMAIC, which stands for Define, Measure, Analyze, Improve and Control. Many organizations have achieved phenomenal success by implementing Lean Six Sigma.
Lean and Six Sigma are conceptually sound, technically proven methodologies, and are here to stay and deliver breakthrough results for a long time to come. Motorola celebrated 20 years of Six Sigma in the year 2007, and as per Sue Reynard in an article in iSixSigma Magazine, “Motorola is a company of inventions, and Six Sigma, which was invented at Motorola, is a defect reduction methodology that aims for near perfection. It changed the manufacturing game at Motorola, but it didn’t stop there. As Six Sigma has evolved during the ensuing 20 years, it has been adopted worldwide and has transformed the way business is done”. This chapter highlights some of the important aspects of Lean Six Sigma and the tools used to implement it in organizations to improve their bottom line by controlling variations in processes, reducing defects to near zero level and adopting lean principles. The chapter is organized around the following broad topics: the history of Six Sigma, the need for Six Sigma, sigma levels and motivation for Six Sigma, lean thinking, Lean Six Sigma, the DMAIC methodology, Six Sigma and lean tools, and case studies on Lean Six Sigma implementations. Six Sigma tools are available as free open source templates which can be downloaded from the URLs given in the references at the end of the chapter.

## 2. What is six sigma?

Six Sigma is a quality improvement methodology invented at Motorola in the 1980s; it is a highly disciplined process improvement method that directs organizations to focus on developing and delivering near-perfect products and services. Six Sigma is a statistical term that measures how far a given process deviates from perfection. The central idea behind Six Sigma is that if we can measure how many “defects” exist in a process, we can systematically figure out how to eliminate them and get close to “zero defects”.
In the year 1985, Bill Smith, a Motorola engineer, coined the term ‘Six Sigma’, and explained that 3.4 defects per million opportunities is the optimum level to balance quality and cost. It was a real breakthrough in the quality improvement process: defects are measured against millions of opportunities instead of thousands, which was the basis in those days. Leading companies are applying this bottom-line-enhancing strategy to every function in their organizations. In the mid 1990s, Larry Bossidy of Allied Signal and Jack Welch of GE saw the potential in Six Sigma and applied it in their organizations, which resulted in significant cost savings in the following years. GE reports stated that Six Sigma had delivered $300 million to its bottom line in 1997, $750 million in 1998, and $2 billion in 1999.

### 2.1. History of six sigma

The immediate origin of Six Sigma can be traced to its early roots at Motorola (Fig. 1), and specifically to Bill Smith (1929 - 1993). Bill Smith was an employee of Motorola, and a Vice President and Quality Manager of the Land based Mobile Product Sector, when he approached then chairman and CEO Bob Galvin in 1986 with his theory of the latent defect. The core principle of the latent defect theory is that variation in manufacturing processes is the main culprit for defects, and eliminating variation will help eliminate defects, which will in turn eliminate the wastes associated with defects, saving money and increasing customer satisfaction. Variation is measured in terms of sigma values or thresholds. The threshold determined by Smith and agreed to by Motorola is 3.4 defects per million opportunities (3.4 DPMO), which is derived from sigma shifts from specifications.

### Figure 1. Bill Smith coins the term Six Sigma at Motorola.

Motorola adopted the concepts and went on to win the first ever Malcolm Baldrige National Quality Award in 1988, just two years after Bill Smith’s introduction of Six Sigma.

## 3.
Describing the six sigma concept

Six Sigma is a method for improving quality by removing defects and their causes in business process activities. The method concentrates on those outputs which are important to customers and translates these customer needs into measurable requirements, the so-called CTQs (Critical To Quality). An indicator for the CTQs is identified and a robust measurement system is established to obtain clean and precise data relating to the process. Once this is in place, one can compare actual process behaviour to the customer-derived specification and describe this in a statistical distribution (using mean, standard deviation [σ] or other indicators, depending on the type of distribution).

### 3.1. Inputs and output

The objective of the Six Sigma concept is to gain knowledge about the transfer function of the process - the understanding of the relationship between the independent input variables (Xs) and the dependent output variable (Y). If the process is modelled as a mathematical equation, where Y is a function of X, i.e. Y = f(X1, X2, …, Xn), then the output variable (Y) can be controlled by steering the input variables (Xs). The Six Sigma drive for defect reduction, process improvement and customer satisfaction is based on the “statistical thinking” paradigm:

• All work occurs in a system of interconnected processes.
• All processes have inherent variation.
• Data analysis is used to understand the variation and to drive process improvement decisions.

### 3.2. Variation

Six Sigma is all about reducing the variation of a process. The more standard deviations (σ) – an indicator of the variation of the process – that fit between the mean of the distribution and the specification limits (as imposed by the customer), the more capable the process is. A Six Sigma process means that 6 standard deviations fit on each side of the mean, between the mean and the specification limits.
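As a minimal sketch (my own illustration; the function name is invented), the number of standard deviations that fit between the mean and the nearer specification limit can be computed directly:

```python
def process_sigma(mean, sd, lsl, usl):
    """Number of standard deviations between the process mean and the
    nearer specification limit; 6.0 corresponds to a Six Sigma process."""
    return min(usl - mean, mean - lsl) / sd

# a process centred at 10 with sd = 1 and specification limits 4 and 16
print(process_sigma(10, 1, 4, 16))  # -> 6.0
```

Halving the variation (sd) doubles the number of sigmas that fit, which is why reducing variation is the central lever.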
6 sigma equates in percentage terms to 99.9997% accuracy, or to 3.4 defects per million opportunities. Fig. 2 illustrates how Six Sigma quality is achieved by reducing variations in a process.

### Figure 2. Reducing variation in a process using Six Sigma

### 3.3. Normal curve and sigma

Six Sigma concepts can be better understood and explained using the mathematical term sigma and the normal distribution. Sigma is a Greek symbol represented by "σ". The bell-shaped curve shown in Fig. 3 is called the "normal distribution" in statistical terms. In real life, a lot of frequency distributions follow the normal distribution, as in the case of delivery times in the pizza business. Natural variations cause such a distribution or deviation. One of the characteristics of this distribution is that 68% of the area (i.e. the data points) falls between -1σ and +1σ on either side of the mean. Similarly, 2σ on either side covers approximately 95.5% of the area, and 3σ on either side of the mean covers almost 99.7% of the area. A more peaked curve (e.g. more and more deliveries made on target) indicates lower variation, or a more mature and capable process, whereas a flatter bell curve indicates higher variation, or a less mature or capable process. To summarize, the sigma performance levels – one to six sigma – are arrived at in the following way.

### Figure 3. Normal Distribution

If the target is reached:

68% of the time, they are operating at +/- 1 sigma
95.5% of the time, they are operating at +/- 2 sigma
99.73% of the time, they are operating at +/- 3 sigma
Six Sigma: 3.4 ppm, i.e. a yield of 99.99966% (a defect rate of 100% - 99.99966% = 0.00034%)

### 3.4. Six sigma and TQM

Six Sigma is not just a statistical approach to measure variance; it is a process and culture to achieve excellence. Following its success, particularly in Japan, TQM became popular in organizations, preaching quality as fitness for purpose and striving for zero defects with customer focus.
Even though TQM was the management tool of the 1980s, by the 1990s it was regarded as a failure and written off as a concept that promised much but failed to deliver. Research by Turner (1993) has shown that any quality initiative needs to be reinvented at regular intervals to keep the enthusiasm level high. Against this background, Six Sigma emerged to replace the ‘overworked’ TQM philosophy. The key success factors differentiating Six Sigma from TQM are:

1. Six Sigma emphasizes statistical science and measurement.
2. Six Sigma was implemented with structured training plans at different levels (Champion, Master Black Belt, Black Belt, and Green Belt).
3. A project-focused approach with a single set of problem-solving techniques (DMAIC).
4. The effects of Six Sigma implementation are quantified in tangible savings (as opposed to TQM, where the benefits cannot be measured). Quantification of tangible savings is a major selling point for Six Sigma.

### 3.5. Sigma quality level

The sigma quality level is a measure used to indicate how often defects are likely to occur. Sigma is a mathematical term and it is the key measure of variability; it emphasizes the need to control both the average and the variability of a process. Table 1 shows the different sigma levels and the associated defects per million opportunities. For example, sigma level 1 tolerates 690,000 defects per million opportunities with 31% yield, while sigma level 6 allows only 3.4 defects per million opportunities with 99.99966% yield.

Sigma performance levels - one to six sigma:

| Sigma Level | Defects Per Million Opportunities | Percentage Yield |
| --- | --- | --- |
| 1 | 690,000 | 31 |
| 2 | 308,537 | 69 |
| 3 | 66,807 | 93.3 |
| 4 | 6,210 | 99.38 |
| 5 | 233 | 99.977 |
| 6 | 3.4 | 99.99966 |

### Table 1. Sigma performance levels

Before starting a Six Sigma project, the important thing to be done first is to establish the need for Six Sigma. It is natural for organizational processes to operate at around the 3 to 4 sigma level.
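The DPMO figures in Table 1 can be reproduced from the normal distribution. The sketch below (my own illustration) assumes the conventional 1.5-sigma long-term shift, which is what makes six sigma correspond to 3.4 DPMO rather than the roughly 0.002 DPMO a centred six-sigma process would give:

```python
from math import erf, erfc, sqrt

def within(k):
    """Short-term fraction of a normal distribution within +/- k sigma
    (within(1) ~ 68%, within(2) ~ 95.5%, within(3) ~ 99.73%)."""
    return erf(k / sqrt(2))

def dpmo(sigma_level, shift=1.5):
    """Long-term defects per million opportunities: the one-sided normal
    tail beyond (sigma_level - shift) standard deviations."""
    z = sigma_level - shift
    return 1e6 * 0.5 * erfc(z / sqrt(2))

for level in range(1, 7):
    print(level, round(dpmo(level)))
```

The printed values match Table 1 to within the rounding used there (e.g. 66,807 at level 3, 6,210 at level 4, and 3.4 at level 6).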
In this section, the defect levels for some example scenarios are compared, one set operating at the 3 to 4 sigma level and the other at the six sigma level. The comparisons in Table 2 show that the defect levels at 3 to 4 sigma are too high to be tolerated, and organizations have to strive to achieve the six sigma level as an obvious move. This section elaborates the need for Six Sigma with examples.

## 4. Why six sigma?

### 4.1. Is 99.9% yield good enough for an organization?

With 99.9% yield, we say the organization operates at the 4 to 5 sigma level. Taking into account some real world examples, with 99.9% yield we come across the following scenarios, which are surely unacceptable from the customer’s point of view:

• Unsafe drinking water almost 15 minutes each day
• 5400 arterial bypass failures each year
• Visas issued to 50 dangerous persons each year

By moving to the six sigma level with 99.9997% yield, significant improvements take place, resulting in very high quality with almost nil defects and very good customer satisfaction, as shown below:

• Unsafe drinking water only a few seconds a day
• 18 arterial bypass failures
• No visas issued to dangerous persons

The following real world examples explain the importance of and need for achieving six sigma level quality.

Comparison of performance improvement with 99.9% and 99.9997% acceptance:

| Scenario | 99.9% acceptance (sigma level: 4 to 5) | 99.9997% acceptance (sigma level: 6) |
| --- | --- | --- |
| Arterial bypass failures in a year | 5,400 | 18 |
| Commercial aircraft take-offs aborted each year | 31,536 | 107 |
| Train wrecks a year | 180 | < 1 |
| Visas issued to dangerous persons | 50 | none |

### Table 2. Comparison of performance improvement at different sigma levels

## 5. Lean

### 5.1.
Lean thinking

Lean thinking was another quality and productivity improvement methodology, introduced in the Toyota Production System (TPS), based on the concept of eliminating waste in processes; it resulted in productivity gains and improvement of speed and flow in the value stream. The principle of lean can be stated as a relentless pursuit of the perfect process through waste elimination in the value stream. Lean identifies three different kinds of waste, using Japanese terminology from the Toyota Production System where lean originated: muda (waste of time and materials), mura (unevenness/variation), and muri (the overburdening of workers or systems). Every employee in a lean manufacturing environment is expected to think critically about his or her job and make suggestions to eliminate waste, and to participate in kaizen, a process of continuous improvement involving brainstorming sessions to fix problems.

### 5.2. Lean in a nutshell

Lean is a business transformation methodology derived from the Toyota Production System (TPS). Within the lean methodology, there is a relentless focus on increasing customer value by reducing the cycle time of product or service delivery through the elimination of all forms of muda (a Japanese term for waste) and mura (a Japanese term for unevenness in the workflow).

### 5.3. Six sigma in a nutshell

Six Sigma was a concept developed in 1985 by Bill Smith of Motorola, who is known as “the Father of Six Sigma.” This concept contributed directly to Motorola’s winning of the U.S. Malcolm Baldrige National Quality Award in 1988. Six Sigma is a business transformation methodology that maximizes profits and delivers value to customers by focusing on the reduction of variation and elimination of defects by using various statistical, data-based tools and techniques.

### 5.4.
Six sigma vs lean

Both methodologies focus on business processes and process metrics while striving to increase customer satisfaction by providing quality, on-time products and services. Lean takes a more holistic view. It uses tools such as value-stream mapping, balancing of workflow, or kanban pull signaling systems to trigger work, streamline and improve the efficiency of processes, and increase the speed of delivery. Six Sigma takes a more data-based and analytical approach by using tools to deliver error-free products and services, such as the following examples:

• Voice Of the Customer (VOC)
• Measurement Systems Analysis (MSA)
• Statistical hypothesis testing
• Design of Experiments (DoE)
• Failure Modes and Effects Analysis (FMEA)

Six Sigma uses an iterative five-phase method to improve existing processes. This method is known as Define, Measure, Analyze, Improve, Control (DMAIC), and normally underpins Lean Six Sigma (LSS).

### Figure 4. Lean vs Six Sigma

Over the last 10 to 15 years, an increased need for accelerating the rate of improvement for existing processes, products, and services has led to a combination of these two approaches. As shown in Fig. 4, Lean Six Sigma combines the speed and efficiency of Lean with the effectiveness of Six Sigma to deliver a much faster transformation of the business.

## 6. Lean six sigma

Lean Six Sigma is the combination of Lean and Six Sigma. The fusion of Lean and Six Sigma is required because:

• Lean cannot bring a process under statistical control, and
• Six Sigma alone cannot dramatically improve process speed or reduce invested capital.

Lean Six Sigma is a disciplined, rigorous, data-driven, results-oriented approach to process improvement. It combines two industry-recognized methodologies evolved at Motorola, GE, Toyota, and Xerox, to name a few.
By integrating the tools and processes of Lean and Six Sigma, we create a powerful engine for improving quality, efficiency, and speed in every aspect of business. Cindy Jutras, Vice President, Research Fellow and Group Director of Enterprise Applications at Aberdeen Group, says, “Lean and Six Sigma are initiatives that were born from the pursuit of operational excellence within manufacturing companies. While Lean serves to eliminate waste, Six Sigma reduces process variability in striving for perfection. When combined, the result is a methodology that serves to improve processes, eliminate product or process defects and to reduce cycle times and accelerate processes”. Embedding a rigorous methodology like Lean Six Sigma into organizational culture is not a short journey; it is a deep commitment not only to near-term results but also to long-term, continuous, even breakthrough results.

## 7. Six sigma DMAIC methodology

Motorola developed a five-phase approach called the ‘DMAIC model’ to achieve the highest level in Six Sigma, i.e., 3.4 defects per million. The five phases are:

Define process goals in terms of key critical parameters (i.e. critical to quality or critical to production) on the basis of customer requirements or Voice Of Customer (VOC)
Measure the current process performance in the context of the goals
Analyze the current scenario in terms of causes of variations and defects
Improve the process by systematically reducing variation and eliminating defects
Control future performance of the process

Table 3 lists the important deliverables and tools used in each step of the DMAIC model. The subsequent sections briefly describe the process involved in each phase.

### 7.1. Define

In the Define phase of the project, the focus is on defining the current state by writing the problem statement, which specifies what the team wants to improve, illustrating the need for the project and its potential benefit.
The things determined in this phase include the scope of the project and the project charter.

#### 7.1.1. Project charter

The problem statement and goal statement are part of the project charter. The following deliverables should be part of the project charter:

• Business case (financial impact)
• Problem statement
• Project scope (boundaries)
• Goal statement
• Role of team members
• Milestones/deliverables (end products of the project)
• Resources required

| Strategic Steps | Deliverables | Tools used |
| --- | --- | --- |
| Define | Project Charter or Statement of Work (SoW) | Gantt Chart/Time Line, Flow Chart/Process Map, Quality Function Deployment (QFD) |
| Measure | Baseline figures | SIPOC (Suppliers, Inputs, Process, Outputs, and Customers) or IPO (Input-Process-Output) diagram |
| Analyze | Identified root causes | Cause-and-Effect Diagram, 5-Why, Scatter Diagram, Regression, ANOVA |
| Improve | Selected root causes and countermeasures, Improvement implementation plan | Affinity Diagram, Hypothesis Testing, DoE, Failure Mode Effect Analysis (FMEA) |
| Control | Control plan, Charts & monitors, Standard Operating Procedures (SOP), Corrective actions | Control Charts, Poka-Yokes, Standardization, Documentation, Final Report, Presentation |

### Table 3. DMAIC Methodology

The metrics to be used are developed in this phase. The basic metrics are cycle time, cost, value, and labor. Some of the methods used for identifying the metrics are the Pareto diagram, SIPOC, voice of the customer, affinity diagram, and critical-to-quality tree. SIPOC stands for Suppliers, Inputs, Process, Outputs, and Customers. This approach helps us to identify characteristics that are key to the process, which in turn facilitates identifying appropriate metrics to be used to effect improvement. To create a SIPOC diagram:

• Identify key process activities
• Identify outputs of the process and known customers
• Identify inputs to the process and likely suppliers

Fig. 5 shows an example SIPOC diagram of a husband making his wife a cup of tea.
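The three identification steps can be captured in a simple record. A sketch (the tea-making entries below are my own invented placeholders, since Fig. 5 itself is not reproduced in the text):

```python
# A hypothetical SIPOC record for the tea-making example; the entries are
# illustrative guesses, not taken from Fig. 5.
sipoc = {
    "Suppliers": ["Husband"],
    "Inputs": ["Water", "Tea bag", "Milk", "Cup"],
    "Process": ["Boil water", "Brew tea", "Add milk", "Serve"],
    "Outputs": ["Cup of tea"],
    "Customers": ["Wife"],
}
```

Filling in the five headings in this order (process first, then outputs/customers, then inputs/suppliers) mirrors the steps listed above.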
A SIPOC diagram is a tool that is used to gather a snapshot view of process information. SIPOC diagrams are very useful at the start of a project to provide information to the project team before work commences. An IPO (Input-Process-Output) diagram is a visual representation of a process or activity, as shown in Table 4. It lists input variables and output characteristics. It is useful in defining a process and recognizing the input variables and responses or outputs. It helps us to understand what inputs are needed to achieve each specific output.

| Input | Process | Output |
| --- | --- | --- |
| Centigrade | Prompt for centigrade value; compute fahrenheit value | Fahrenheit |

### Table 4. An IPO diagram

### Figure 5. SIPOC Diagram

### 7.2. Measure

Measure is the second step of the Six Sigma methodology. A baseline measure is taken using actual data. This measure becomes the origin from which the team can gauge improvement. It is within the Measure phase that a project begins to take shape, and much of the hands-on activity is performed. The goal of the Measure phase is to establish a clear understanding of the current state of the process you want to improve. For example, a medical practitioner prescribes various tests (blood test, ECG, etc.) for a patient admitted to a hospital; the laboratory test reports reflect the current state of health of the patient. Similarly, in this phase a Six Sigma practitioner determines the current state of health of the system under consideration. The deliverables in this phase are a refined process map and a refined project charter. Some of the tools used in the Measure phase are:

• Flow charts
• Fishbone diagrams
• Descriptive statistics
• Scatter diagrams
• Stem and leaf plots
• Histograms

These metrics will establish the baseline of the current state. The outcome of applying these tools, in the form of charts, graphs or plots, helps the Six Sigma practitioner to understand how the data is distributed.
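A sketch of the simplest such baseline (my own illustration): the sample mean and standard deviation computed from measured process data, from which later comparisons are made.

```python
from math import sqrt

def baseline(data):
    """Sample mean and (n - 1)-denominator standard deviation:
    a minimal Measure-phase baseline for a process."""
    n = len(data)
    mean = sum(data) / n
    sd = sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    return mean, sd

mean, sd = baseline([2, 4, 4, 4, 5, 5, 7, 9])
```

The improvement achieved later in the project is gauged against figures like these.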
He or she is able to know what the data are doing. The distribution associated with data from a process speaks volumes. The data distribution can be categorized into:

• Normal distribution
• Weibull
• Poisson
• Hypergeometric
• Chi-square

The data can be continuous or discrete.

### 7.3. Analyze

In this step, the team identifies several possible causes (X’s) of variation or defects that are affecting the outputs (Y’s) of the process. One of the most frequently used tools in the Analyze phase is the cause and effect diagram. The cause and effect diagram is a technique to graphically identify and organize many possible causes of a problem (effect). It helps identify the most likely root causes of a problem. This tool can help focus problem solving and reduce subjective decision making. Fig. 6 illustrates a cause and effect diagram which helps to find possible causes for software not being reliable. The root cause is the number one team deliverable coming out of the Analyze step. Causes can be validated using new or existing data and applicable statistical tools such as scatter plots, hypothesis testing, ANOVA, regression or Design of Experiments. Some of the tools used in root cause analysis are shown in Fig. 7.

### Figure 6. Cause and Effect Diagram

### Figure 7. Tools used in Root cause analysis

### 7.4. Improve

In this step, the team brainstorms to come up with countermeasures and lasting process improvements that address the validated root causes. The most preferred tool used in this phase is the affinity diagram. Having measured our data and performed some analysis to know where our process stands, it is time to improve it. One of the important methods used for improvement of a process is Design of Experiments (DoE).

#### 7.4.1. Affinity diagram

A pool of ideas, generated from a brainstorming session, needs to be analyzed and prioritized before it can be implemented.
A smaller set of ideas is easy to sift through and evaluate without applying any formal technique; affinity diagramming is an effective technique for handling a large number of ideas. It is typically used when:

1. A large data set is to be traversed, such as ideas generated from brainstorming that must be sieved for prioritization.
2. There is complexity due to diverse views and opinions.
3. Group involvement and consensus are needed.

The process of affinity diagramming requires the team to categorize the ideas based on their subject knowledge, making it easy to sift and prioritize them. Fig. 8 shows an example affinity diagram with prioritized ideas categorized under different headings.

#### 7.4.2. Design of experiments (DoE)

With DoE, you look at multiple levels of multiple factors simultaneously and decide what levels of each factor will optimize your output. DoE is:

• A statistics-based approach to designed experiments
• A methodology to achieve predictive knowledge of a complex, multi-variable process with the fewest trials possible
• An optimization of the experimental process itself

### 7.5. Control

In this step, our process has been measured, our data analyzed, and our process improved. The improvement we have made must now be sustained: we need to build an appropriate level of control so that the process does not drift into an undesirable state. One of the important tools that can be used to achieve this objective is Statistical Process Control (SPC). The purpose of SPC is to provide the practitioner with real-time feedback indicating whether a process is under control or not. There are also lean tools such as the 5S's, the Kaizen blitz, kanban and poka-yoke.

### Figure 8.
Affinity Diagram

| Six Sigma Tools | Advanced Tools |
| --- | --- |
| Pareto Analysis | Failure Mode Effect Analysis (FMEA) |
| Flow Process Chart | Design of Experiments (DoE) |
| Upper Control Limit (UCL) / Lower Control Limit (LCL) Control Chart | Design For Six Sigma (DFSS) |
| Cause and Effect Diagram | |
| Input-Process-Output Diagrams | |
| Brainstorming | |
| Scatter Diagram | |
| Histogram | |
| The Seven Wastes | |
| The Five Ss | |

### Table 5. Six Sigma Tools

## 8. Six sigma and lean tools

Table 5 summarizes some of the important Six Sigma tools for easy reference. Pareto analysis, control charts and Failure Mode Effect Analysis are explained in detail with examples.

### 8.1. Pareto Analysis

Pareto Analysis is a statistical technique in decision making used to select the limited number of tasks that produce a significant overall effect. It uses the Pareto Principle (also known as the 80/20 rule): the idea that a large majority of problems (80%) are produced by a few key causes (20%). This is also known as the vital few and the trivial many. The 80/20 rule can be applied to almost anything:

• 80% of customer complaints arise from 20% of your products or services.
• 80% of delays in schedule arise from 20% of the possible causes of the delays.
• 20% of your products or services account for 80% of your profit.
• 20% of your sales force produces 80% of your company revenues.
• 20% of a system's defects cause 80% of its problems.

### Figure 9. Pareto diagram

The Pareto Principle has many applications in quality control. It is the basis for the Pareto diagram, one of the key tools used in total quality control and Six Sigma. Seven steps to identifying the important causes using Pareto Analysis:

1. Form a table listing the causes and their frequency as a percentage.
2. Arrange the rows in decreasing order of importance of the causes, i.e. the most important cause first.
3. Add a cumulative percentage column to the table.
4. Plot the causes on the x-axis and the cumulative percentage on the y-axis.
5.
Join the above points to form a curve.
6. Plot (on the same graph) a bar graph with the causes on the x-axis and percent frequency on the y-axis.
7. Draw a line at 80% on the y-axis parallel to the x-axis, then drop a line from its point of intersection with the curve down to the x-axis. This point on the x-axis separates the important causes (to the left) from the less important causes (to the right).

### 8.2. Control charts

A control chart is a statistical tool used to distinguish between variation in a process resulting from common causes and variation resulting from special causes. It presents a graphic display of process stability or instability over time, as shown in Fig. 10. Every process has variation. Some variation may be the result of causes which are not normally present in the process: this is special cause variation. Some variation is simply the result of numerous, ever-present small differences in the process: this is common cause variation. Control charts differentiate between these two types of variation.

One goal of using a control chart is to achieve and maintain process stability: a state in which a process has displayed a certain degree of consistency in the past and is expected to continue to do so in the future. This consistency is characterized by a stream of data falling within control limits based on plus or minus 3 standard deviations (3 sigma) from the centerline. A stable process is one that is consistent over time with respect to the center and the spread of the data.

Control charts help you monitor the behavior of your process to determine whether it is stable. Like run charts, they display data in the time sequence in which they occurred; however, control charts are more efficient than run charts in assessing and achieving process stability. Your team will benefit from using a control chart when you want to monitor process variation over time, and when you need to:

1. Differentiate between special cause and common cause variation.
2.
Assess the effectiveness of changes made to improve a process.
3. Communicate how a process performed during a specific period.

### Figure 10. Control Charts

### 8.3. Failure mode and effects analysis (FMEA)

Failure Mode and Effects Analysis (FMEA) is a model used to prioritize potential defects based on their severity, expected frequency, and likelihood of detection. An FMEA can be performed on a design or a process, and is used to prompt actions that improve design or process robustness. The FMEA highlights weaknesses in the current design or process from the customer's point of view, and is an excellent vehicle for prioritizing and organizing continuous improvement efforts on the areas which offer the greatest return.

After the potential failure modes are identified, the next step is to assign a value on a 1-10 scale for the severity, probability of occurrence, and probability of detection of each. The three numbers for each failure mode are then multiplied together to yield a Risk Priority Number (RPN). The RPN becomes a priority value for ranking the failure modes, with the highest number demanding the most urgent improvement activity. Error-proofing, or poka-yoke, actions are often an effective response to high RPNs. The following is an example of a simplified FMEA for a seat-belt installation process at an automobile assembly plant.

### Figure 11. FMEA

As can be seen, three potential failure modes have been identified. Failure mode number two has an RPN of 144, and is therefore the highest priority for process improvement. FMEAs are often completed as part of a new product launch process. Minimum RPN targets may be established to ensure a given level of process capability before shipping product to customers. In that event, it is wise to establish guidelines for assessing the values for severity, occurrence, and detection, to make the RPN as objective as possible.

## 9.
Case studies on lean six sigma

Having seen the Six Sigma methodology and Lean Six Sigma tools in detail, it is appropriate to look into some case studies on Six Sigma implementation. We present two case studies on Six Sigma implementation by two leading companies in this section. These studies reinforce Lean and Six Sigma concepts and demonstrate the tools the companies used to implement them. The importance of achieving operational excellence by reducing defects and variation in processes, and by eliminating non-value-adding steps, can be inferred from these case studies. One more case study, on the Mumbai dabbawalas, is presented at the end of the chapter to demonstrate that Six Sigma is a tool not only for corporates: ordinary people are capable of achieving Six Sigma levels of service in the execution of their daily tasks by fulfilling their customers' needs.

### 9.1. Honeywell aerospace electronics system, singapore – implementing six sigma quality

Honeywell is a US$ 254 billion diversified technology and manufacturing leader, serving customers worldwide with aerospace products and services. One of its business units, Aerospace Electronics System in Singapore, uses Six Sigma as a best practice to improve processes in most of its operations. The organisation, which has 150 employees, was set up in Singapore in 1983. It manufactures high-quality avionics and navigation equipment and systems. Its principal customers include Cessna, Bell Helicopters, Raytheon, Learjet, Mooney Aircraft, Piper Aircraft, FedEx and Singapore Aerospace. Six Sigma Plus is Honeywell's overall strategy to accelerate improvement in all processes, products and services, and to reduce the cost of poor quality by eliminating waste and reducing defects and variations. Six Sigma is already understood worldwide as a measure of excellence.
The "Plus" is derived from Honeywell's Quality Value assessment process and the strategic Six Sigma tools of the former AlliedSignal. The strategy requires that the organisation approach every improvement project with the same logical DMAIC method:

• Define the customer-critical parameters
• Measure how the process performs
• Analyse causes of problems
• Improve the process to reduce defects and variations
• Control the process to ensure continued, improved performance

#### 9.1.1. Implementing six sigma plus

The tools and skills that help in the implementation of the DMAIC method include:

• Process mapping, which helps to identify the order of events in producing a product or service and compares the "ideal" work flow to what actually happens.
• Failure mode and effect analysis, which helps to identify likely process failures and minimise their frequency.
• Measurement system evaluation, which helps in the assessment of measurement instruments to enable the better separation of important process variations from measurement "noise".
• Statistical tests, which assist in separating the significant effects of variables from random variation.
• Design of experiments, which is used to identify and confirm cause and effect relationships.
• Control plans, which allow for the monitoring and controlling of processes to maintain the gains that have been made.
• Quality function deployment, which is a tool for defining what is important to customers; it enables better anticipation and understanding of customer needs.
• Activity-based management, to look at product and process costs in a comprehensive and realistic way by examining the activities that create the costs in the first place, hence allowing for better subsequent management.
• Enterprise resource planning, which uses special computer software to integrate, accelerate and sustain seamless process improvements throughout an organisation.
• Lean enterprise, with skills to enhance the understanding of the actions essential to achieving customer satisfaction. These skills simplify and improve work flow, help eliminate unnecessary tasks and reduce waste throughout a process.

#### 9.1.2. Impact of six sigma plus

In the past, generic and low-end competencies such as the manufacture of printed circuit boards were outsourced. With Six Sigma Plus, core competencies were redefined and control plans established. Presently, Aerospace Electronics System, Singapore focuses on core competencies that are unique to itself, such as final assembly and test, and final alignment. This helped to stabilise the workforce for the organisation, which once experienced high turnover in its front-end and low-skill jobs.

Waste has also been reduced from key business processes. For example, inspection, which is considered non-value-added, has been eliminated. Instead, Reliance on Operators' Inspection (ROI) is practised, and this has helped to increase the value added per employee. In the past, all Honeywell Singapore's products were 100% inspected by a team from the US. Currently, the Federal Aviation Administration (FAA) certifies its products for manufacturing in Singapore, and 100% of its products are shipped direct to stock in Kansas, US, saving $1 million in inspection cost. In addition, audits by the FAA involve only observations, and not all processes need to be audited. This is achieved by ensuring that the necessary quality procedures are built into the process.

Six Sigma Plus in Honeywell has led to the following results:

• Increased Rolled Throughput Yield (RTY)
• Reduced variations in all processes
• Reduced cost of poor quality (COPQ)
• Deployment of skilled resources as change agents

#### 9.1.3. Key learning points

Some of the key learning points are:

• Strong management commitment and support
• A well-structured approach and deployment process
• A team-based approach
• Sharing Six Sigma Plus knowledge

### 9.2.
Lean six sigma in higher education: applying proven methodologies to improve quality, remove waste, and quantify opportunities in colleges and universities

This case study highlights the experiences of Xerox Corporation in implementing Six Sigma in higher education. It starts with a discussion of the importance of Lean principles and then elaborates on Six Sigma implementation strategies.

#### 9.2.1. Lean flow today

While Lean Flow began as a manufacturing model, today's definition has been extended to include the process of creating an "optimized flow" anywhere in an organization. The only requirement is that this "flow" challenge current business practices to create a faster, cheaper, less variable and less error-prone process. Lean Flow experts have found that the greatest success can be achieved by methodically seeking out inefficiencies and replacing them with "leaner", more streamlined processes. Sources of waste commonly plaguing most business processes include:

• Waste of worker movement (unneeded steps)
• Waste of making defective products
• Waste of over-production
• Waste in transportation
• Waste of processing
• Waste of time (idle)
• Waste of stock on hand

#### 9.2.2. Putting lean flow to work

Implementing Lean Flow requires having the right data and knowing how to use it. There are a number of different approaches taken by organizations, but fundamentally, Lean Flow is achieved by:

• Analyzing the steps of a process and determining which steps add value and which do not.
• Calculating the costs associated with removing non-value-added steps and comparing those costs versus the expected benefits.
• Determining the resources required to support value-added steps while eliminating non-value-added steps.
• Taking action.

#### 9.2.3. Six sigma today

While the concept of Six Sigma began in the manufacturing arena decades ago, the idea that organizations can improve quality levels and work "defect-free" is currently being adopted by higher education institutions of all types and sizes. So what is today's definition of Six Sigma? It depends on whom you ask. In his book Six Sigma: SPC and TQM in Manufacturing and Services, Geoff Tennant explains that "Six Sigma is many things… a vision; a philosophy; a symbol; a metric; a goal; a methodology." Naturally, as Six Sigma permeates today's complex, sophisticated higher education landscape, the methodology is "tweaked" to satisfy the unique needs of individual schools. But no matter how it is deployed, there is an overall framework that drives Six Sigma toward improving performance. Common Six Sigma traits include:

• A process of improving quality by gathering data, understanding and controlling variation, and improving the predictability of a school's business processes.
• A formalized Define, Measure, Analyze, Improve, Control (DMAIC) process that is the blueprint for Six Sigma improvements.
• A strong emphasis on value: Six Sigma projects focus on high-return areas where the greatest benefits can be gained.
• Internal cultural change, beginning with support from administrators and champions.

Lean Six Sigma is the application of lean techniques to increase speed and reduce waste, while employing Six Sigma processes to improve quality and focus on the Voice of the Customer. Lean Six Sigma means doing things right the first time, only doing the things that generate value, and doing it all quickly and efficiently. Xerox Global Services imaging and repository services leverage the Lean Six Sigma-based DMAIC approach:

Define: The Define phase of the DMAIC process is often skipped or short-changed, but it is vital to the overall success of any Lean Six Sigma project. This is the phase where the current state, problem statement and desired future state are determined and documented via the Project Charter. Xerox asks questions like: What problem are we trying to solve? What are the expected results if we solve the problem?
How will we know if the problem is solved? How will success be measured? In most cases where imaging and repository services are involved, the problem relates to document management and access. Schools look to improve the ways documents are created, stored, accessed and shared, so that they may accelerate and enhance work processes, share information more conveniently, and collaborate more effectively. As the project progresses and more information is collected in later phases, the problem statement developed in the Define phase is refined.

Measure: The Measure phase is where Xerox gathers quantitative and qualitative data to get a clear view of the current state. This serves as a baseline for evaluating potential solutions and typically involves interviews with process owners, mapping of key business processes, and gathering data relating to current performance (time, volume, frequency, impact, etc.).

Analyze: In the Analyze phase, Xerox studies the information gathered in the Measure phase, pinpoints bottlenecks, and identifies improvement opportunities where non-value-added tasks can be removed. A business case is conducted, which takes into account not only hard costs but also the intangible benefits that can be gained, such as user productivity and satisfaction, to determine whether the improvement is cost-effective and worthwhile. Finally, the Analyze phase is when technological recommendations are provided.

Improve: The Improve phase is when the recommended solutions are implemented. A project plan is developed and put into action, beginning with a pilot program and culminating in full-scale, enterprise-wide deployment. Where appropriate, new technology is implemented, workflows are streamlined, paper-based processes are eliminated, and consulting services are initiated. Key success factors during this phase are acceptance by end users and enterprise-wide change without any degradation of current productivity levels.
Control: Once a solution is implemented, the next step is to put the necessary "controls" in place to ensure that improvements are maintained long-term. This involves monitoring (and in many cases publicizing) the key process metrics, to promote continuous improvement and to guard against regression. In many cases, Xerox will revisit the implementation after 3-6 months to review key metrics and evaluate whether the initial progress has been sustained. A common practice is to put key metrics, including hard cost savings and the achievement of pre-defined Service Level Agreements, in full view "on the dashboard", to provide continuous feedback to the organization and so that decision-makers can assess the project's level of success as it moves forward.

### 9.3. Dabbawalas and six sigma

A Six Sigma practitioner need not be a formally educated individual. One interesting case study often quoted for Six Sigma application is that of the dabbawalas of Mumbai, India. Dabbawalas (also known as tiffinwallahs) are employed in a service industry in Mumbai; their primary job is collecting freshly cooked food in lunch boxes from the residences of office workers (mostly in the suburbs), delivering it to their respective workplaces, and returning the empty boxes to the customers' residences, using various modes of transport. Around 5,000 dabbawalas in Mumbai transport around 200,000 lunch boxes every day. The reliability of their service meets the Six Sigma standard, according to a 2002 study by Forbes magazine: it was found that they make less than one mistake in every 6 million deliveries. The tiffin boxes are correctly delivered to their respective destinations because the dabbawalas use a unique identifying coding scheme inscribed on the top of each box.

## 10. Conclusion

Six Sigma was a concept developed in 1985 by Bill Smith of Motorola.
Six Sigma is a business transformation methodology that maximizes profits and delivers value to customers by focusing on the reduction of variation and the elimination of defects, using various statistical, data-based tools and techniques. Lean is a business transformation methodology, derived from the Toyota Production System (TPS), which focuses on increasing customer value by reducing the cycle time of product or service delivery through the elimination of all forms of waste and unevenness in the workflow. Lean Six Sigma is a disciplined methodology: a rigorous, data-driven, results-oriented approach to process improvement. It combines two industry-recognized methodologies evolved at Motorola, GE, Toyota and Xerox, to name a few. By integrating the tools and processes of Lean and Six Sigma, we create a powerful engine for improving quality, efficiency and speed in every aspect of business.

Lean and Six Sigma are initiatives that were born from the pursuit of operational excellence within manufacturing companies. While Lean serves to eliminate waste, Six Sigma reduces process variability in striving for perfection. When combined, the result is a methodology that serves to improve processes, eliminate product or process defects, reduce cycle times and accelerate processes. Lean and Six Sigma are conceptually sound, technically proven methodologies that are here to stay and will deliver breakthrough results for a long time to come.

This chapter discussed the history of Six Sigma and Lean thinking and the important steps in implementing Lean Six Sigma, such as the DMAIC methodology. Some of the important Six Sigma and Lean tools were discussed with examples that will be of help to a Six Sigma practitioner. Three case studies were presented which share experiences of how Six Sigma implementation helped organisations improve their bottom line by removing variation in processes, eliminating defects and reducing cycle time.
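As a closing illustration, the defect-rate arithmetic behind the dabbawala case study above can be written out. The function below simply restates the standard defects-per-million-opportunities (DPMO) calculation, where Six Sigma quality corresponds to at most 3.4 DPMO:

```python
def dpmo(defects: int, opportunities: int) -> float:
    """Defects per million opportunities."""
    return defects / opportunities * 1_000_000

# "Less than one mistake in every 6 million deliveries":
rate = dpmo(1, 6_000_000)
print(f"{rate:.2f} DPMO")  # about 0.17, well inside the 3.4 DPMO Six Sigma target
```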
## Acknowledgements

We have presented two case studies on Six Sigma implementation, by Honeywell International Inc. and by Xerox Global Services; we sincerely acknowledge their pioneering work on quality improvement measures for improving the bottom line of their operations. Some of the illustrations and charts related to Six Sigma and Lean tools presented here are taken from internet resources available online, and the authors acknowledge and thank the contributors.
# amplitude definition physics B. Amplitude . It makes more sense to separate loudness and harmonic quality to be parameters controlled independently of each other. am‧pli‧tude /ˈæmplɪtjuːd \$ -tuːd/ noun [ uncountable] technical the distance between the middle and the top or bottom of a wave such as a sound wave Examples from the Corpus amplitude • The duration and amplitude of rebound pressure increased as the distension volume increased. This is because the value is different depending on whether the maximum positive signal is measured relative to the mean, the maximum negative signal is measured relative to the mean, or the maximum positive signal is measured relative to the maximum negative signal (the peak-to-peak amplitude) and then divided by two (the semi-amplitude). The wavelength of a wave is the distance between a point on one wave and the same point on the next wave. Similarly, the amplitude can be measured from the rest position to the trough position. The frequency of a wave can be calculated using the equation: $\text{frequency f =}~\frac{\text{number of waves to pass a point}}{\text{time taken in seconds}}$. The square of the amplitude is proportional to the intensity of the wave. Observe that whenever the amplitude increased by a given factor, the energy value is increased by the same factor squared. The amplitude of a periodic variable is a measure of its change in a single period (such as time or spatial period). The advent of microprocessor controlled meters capable of calculating RMS by sampling the waveform has made true RMS measurement commonplace. Amplitudes are always measured as positive numbers (for example: 3.5, 1, 120) and are never negative (for example: -3.5, -1, -120). Get the huge list of Physics Formulas here. Here’s a brief overview of It is used to understand gravity, electromagnetism, and many other properties of nature. In a sense, the amplitude is the distance from rest to crest. 
For electromagnetic radiation, the amplitude of a photon corresponds to the changes in the electric field of the wave. It is important to note that the amplitude is not the distance between the top and bottom of a wave. The symbol for wavelength is the Greek letter lambda, λ. Our tips from experts and exam survivors will help you through. The amplitude of the wave motion is defined as the maximum displacement of a particle in the wave. Semi-amplitude means half of the peak-to-peak amplitude. With appropriate circuitry, peak-to-peak amplitudes of electric oscillations can be measured by meters or by viewing the waveform on an oscilloscope. The amplitude of a pendulum swinging through an angle of 90° is 45°. ‘The correlation between changes in the kinetics of synaptic current and quantal amplitude remains strong for the corrected values as well.’ ‘Practical researchers are only too aware, however, that the optical output can frequently vary significantly in amplitude and spatial quality from point to … 1. variable noun In physics, the amplitude of a sound wave or electrical signal is its strength. In general, the use of peak amplitude is simple and unambiguous only for symmetric periodic waves, like a sine wave, a square wave, or a triangle wave. The amplitude of a sound wave can be defined as the loudness or the amount of maximum displacement of vibrating particles of the medium from their mean position when the sound is produced. For alternating current electric power, the universal practice is to specify RMS values of a sinusoidal waveform. The logarithm of the amplitude squared is usually quoted in dB, so a null amplitude corresponds to −∞ dB. Definition. Breadth or range, as of intelligence. Amplitude is also the greatest height of a graph (= drawing) of the relationships of a sine or cosine. 
However, radio signals may be carried by electromagnetic radiation; the intensity of the radiation (amplitude modulation) or the frequency of the radiation (frequency modulation) is oscillated and then the individual oscillations are varied (modulated) to produce the signal. Strictly speaking, this is no longer amplitude since there is the possibility that a constant (DC component) is included in the measurement. Position = amplitude × sine function (angular frequency × time + phase difference) x = A sin ($$\omega t + \phi$$) Derivation of the Amplitude Formula. The amplitude of an ocean wave is the maximum height of the wave crest above the level of calm water, or … The amplitude of a wave is its maximum disturbance from its undisturbed position. Otherwise, the amplitude is transient and must be represented as either a continuous function or a discrete vector. As waves travel, they set up patterns of disturbance. One half the full extent of a vibration, oscillation, or wave. Modern laser sources routinely achieve intensities as high as 10²² W/cm². Pulse amplitude is measured with respect to a specified reference and therefore should be modified by qualifiers, such as average, instantaneous, peak, or root-mean-square. The wave can be reconstructed by repeating a section of the wave. In electrical engineering, the usual solution to this ambiguity is to measure the amplitude from a defined reference potential (such as ground or 0 V). For audio, transient amplitude envelopes model signals better because many common sounds have a transient loudness attack, decay, sustain, and release. Peak-to-peak amplitude (abbreviated p–p) is the change between peak (highest amplitude value) and trough (lowest amplitude value, which can be negative). For example, the average power transmitted by an acoustic or electromagnetic wave or by an electrical signal is proportional to the square of the RMS amplitude (and not, in general, to the square of the peak amplitude).[6]. 
Meaning, pronunciation, picture, example sentences, grammar, usage notes, synonyms and more. It could be a little ripple or a giant tsunami. Another word for amplitude. In the harmonic motion harmonic motion, regular vibration in which the acceleration of the vibrating object is directly proportional to the displacement of the object from … They might have the exact same frequency and wavelength, but the amplitudes of the waves can be very different. amplitude (ăm`plĭto͞od'), in physics, maximum displacement from a zero value or rest position. If the amplitude is small th… Imagine a wave in the ocean. That's because distance can only be greater than zero or equal to zero; negative distance does not exist. The amplitude is a measure of the strength or intensity of the wave. It is often easiest to measure this from the trough of one wave to the trough of the next wave, or from thecrest of one wave to the crest of the next wave. Scientists measure the amplitude of waves at two points. The amplitude of a pendulum swinging through an angle of 90° is 45°. How to use amplitude in a sentence. amplitude (countable and uncountable, plural amplitudes) 1. Amplitude. The amplitude of a wave is shown below: For any transverse wave, the amplitude is measured the highest point of any point of the displacement when the string is in rest. This maximum displacement is measured from … This means that something is pulled away from an equilibrium position, moves back, then through the other side. For other uses, see. In older texts, the phase of a period function is sometimes called the amplitude.[1]. The starting point of the measurement is the flat, calm surface of the water. In telecommunication, pulse amplitude is the magnitude of a pulse parameter, such as the voltage level, current level, field intensity, or power level. 
The amplitude of a pendulum is thus one-half the distance the bob traverses in moving from one side to the other. Pulse amplitude also applies to the amplitude of frequency- and phase-modulated waveform envelopes.[7]

Waves are measured according to amplitude, wavelength and frequency. The amplitude is the distance between the crest or trough and the mean position of the wave; the amplitude or peak amplitude of a wave is a measure of how big its oscillation is. There are various definitions of amplitude (see below), all of which are functions of the magnitude of the differences between the variable's extreme values. In a modulated signal, frequency, cycle and wavelength remain constant, while the height of the waveform varies with the power of the wave. A transverse wave of a given amplitude can be generated, for example, by plucking a string.

Some common voltmeters are calibrated for RMS amplitude, but respond to the average value of a rectified waveform.

Frequency examples: most people cannot hear a high-pitched sound above 20 kHz; radio stations broadcast radio waves with frequencies of about 100 MHz; most wireless computer networks operate at 2.4 GHz.

To align the key harmonic features of sounds, harmonic amplitude envelopes are frame-by-frame normalized to become amplitude proportion envelopes, where at each time frame all the harmonic amplitudes add to 100% (or 1).[8]
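The frame-by-frame normalization just described can be sketched in a few lines of Python. The three-harmonic, four-frame amplitude data below are made-up illustrative values, not from the text:

```python
# Frame-by-frame normalization of harmonic amplitude envelopes into
# "amplitude proportion" envelopes: at each time frame the harmonic
# amplitudes are rescaled so they sum to 1 (100%).
frames = [
    [0.8, 0.4, 0.2],   # harmonic amplitudes at frame 0 (illustrative)
    [0.5, 0.5, 0.0],
    [0.0, 0.0, 0.0],   # a silent frame
    [0.2, 0.1, 0.1],
]

def normalize_frames(frames):
    out = []
    for frame in frames:
        total = sum(frame)
        # Guard against silent frames, where all harmonics are zero.
        out.append([a / total for a in frame] if total else list(frame))
    return out

proportions = normalize_frames(frames)
for frame in proportions:
    print([round(p, 3) for p in frame])
```

After normalization, the relative timbre (the proportions) is separated from the overall loudness, which can then be controlled by a single envelope.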
The RMS calibration of such average-responding meters is only correct for a sine wave input, since the ratio between peak, average and RMS values depends on the waveform; if the wave shape being measured is greatly different from a sine wave, the relationship between RMS and average value changes. Root mean square (RMS) amplitude is used especially in electrical engineering: the RMS is defined as the square root of the mean over time of the square of the vertical distance of the graph from the rest state.[5]

Amplitude is the highest (maximum) displacement of a particle that vibrates and propagates along the positive and negative axis; it is a measure of how big the wave is. Amplitudes are always measured as positive numbers (for example: 3.5, 1, 120) and are never negative (for example: −3.5, −1, −120); that is because distance can only be greater than or equal to zero. This remains a common way of specifying amplitude, but sometimes other measures of amplitude are more appropriate. The peak-to-peak value is used, for example, when choosing rectifiers for power supplies, or when estimating the maximum voltage that insulation must withstand.

A general sine wave (y-displacement versus time) is y = A sin(ωt), where A is the amplitude (since sine is restricted to the range −1 to 1) and 2π/ω turns out to be the period. It can be noted from Figure 1 that the wave repeats.

The frequency of a wave is the number of waves produced by a source each second; it is also the number of waves that pass a certain point each second.

With waveforms containing many overtones, complex transient timbres can be achieved by assigning each overtone to its own distinct transient amplitude envelope. Unfortunately, varying the overall amplitude has the effect of modulating the loudness of the sound as well: if the amplitude of a sound wave is large, the loudness of the sound will be greater.
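The claim that an average-responding, sine-calibrated meter only reads correctly for sine waves can be verified numerically. A sketch in Python; the amplitude and sample count are illustrative:

```python
import math

# For a sine wave the RMS is A/sqrt(2).  An "average-responding" meter
# rectifies and averages the signal, then scales by the sine form factor
# pi/(2*sqrt(2)) ~ 1.1107, so it reads correctly for sines only.
A, N = 1.0, 100000
sine = [A * math.sin(2 * math.pi * i / N) for i in range(N)]
square = [A if s >= 0 else -A for s in sine]

def rms_of(samples):
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def avg_responding_reading(samples):
    # Rectify, average, then apply the sine-wave calibration factor.
    return (sum(abs(x) for x in samples) / len(samples)) * math.pi / (2 * math.sqrt(2))

rms = rms_of(sine)                           # ~0.7071 = A/sqrt(2)
reading = avg_responding_reading(sine)       # ~0.7071: correct for a sine
rms_sq = rms_of(square)                      # 1.0: true RMS of the square wave
reading_sq = avg_responding_reading(square)  # ~1.1107: roughly 11% too high
print(round(rms, 4), round(reading, 4), round(rms_sq, 4), round(reading_sq, 4))
```

The square-wave case shows why the waveform-dependent ratio between peak, average and RMS values matters: the same meter that is exact for a sine overstates a square wave by about 11%.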
The amplitude of a pendulum's swing is equal to one-half the length of the vibration path. More generally, the amplitude is a nonnegative scalar measure of a wave's magnitude of oscillation, that is, the magnitude of the maximum disturbance in the medium during one wave cycle: the measure of something's size, especially in terms of width or breadth. Amplitude of a wave is shown in Figure 1.

The amplitude of sound waves and audio signals (which relates to the volume) conventionally refers to the amplitude of the air pressure in the wave, but sometimes the amplitude of the displacement (movements of the air or of the diaphragm of a speaker) is described. In Sound Recognition, max amplitude normalization can be used to help align the key harmonic features of two alike sounds, allowing similar timbres to be recognized independently of loudness.[8]

In quantum mechanics, except for a numerical factor in front, the amplitude to go from one point to another a distance r away is $$e^{ipr/\hbar}/r$$, where p is the momentum, which is related to the energy E by the relativistic equation $$p^{2}c^{2} = E^{2} - (mc^{2})^{2}$$ or the nonrelativistic equation $$p^{2}/2m = E_{\text{kinetic}}$$. Equation (3.7) says in effect that the particle has wavelike properties, the amplitude propagating as a wave with a wave number equal to the momentum divided by $$\hbar$$.

The amplitude of a wave refers to the maximum amount of displacement of a particle on the medium from its rest position. It is common for kilohertz (kHz), megahertz (MHz) and gigahertz (GHz) to be used when waves have very high frequencies. If the reference is zero, the peak amplitude is the maximum absolute value of the signal; if the reference is a mean value (DC component), the peak amplitude is the maximum absolute value of the difference from that reference.

"Amplitude is the height, force or power of the wave" (the CWNA definition of amplitude, v106). The amplitude of a sound wave is the measure of the height of the wave.
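The kilohertz-to-gigahertz frequencies mentioned here correspond to very different wavelengths. Using the standard relation v = fλ, a short Python sketch; the wave speeds are textbook approximations (sound in air, light in vacuum) assumed for illustration, not values given in the text:

```python
# Wavelength from frequency via v = f * lambda.
SPEED_OF_SOUND = 343.0   # m/s, assumed (air at about 20 C)
SPEED_OF_LIGHT = 3.0e8   # m/s, assumed (vacuum)

def wavelength(speed, frequency):
    return speed / frequency

print(wavelength(SPEED_OF_SOUND, 20e3))   # 20 kHz hearing limit -> ~0.017 m
print(wavelength(SPEED_OF_LIGHT, 100e6))  # 100 MHz FM radio     -> 3.0 m
print(wavelength(SPEED_OF_LIGHT, 2.4e9))  # 2.4 GHz Wi-Fi        -> 0.125 m
```

The spread from centimetres to metres is why the same "high frequency" label covers such different physical wave sizes.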
This way, the main loudness-controlling envelope can be cleanly controlled. Other parameters can be assigned steady-state or transient amplitude envelopes: high/low frequency or amplitude modulation, Gaussian noise, overtones, etc.

The amplitude of a periodic variable is a measure of its change in a single period (such as a time or spatial period). In quantum physics, the scattering amplitude is the probability amplitude of the outgoing spherical wave relative to the incoming plane wave in a stationary-state scattering process.

All waves involve an oscillation of some kind. When looking at a sound wave, for example, the amplitude will measure the loudness of the sound; loudness is related to amplitude and intensity and is one of the most salient qualities of a sound, although in general sounds can be recognized independently of amplitude. A more general representation of the wave equation is more complex, but the role of amplitude remains analogous to this simple case.

In astronomy, the semi-amplitude is the most widely used measure of orbital wobble, and the measurement of small radial-velocity semi-amplitudes of nearby stars is important in the search for exoplanets (see Doppler spectroscopy).[4] Some scientists[3] use amplitude or peak amplitude to mean semi-amplitude.

In audio system measurements, telecommunications and other fields where the measurand is a signal that swings above and below a reference value but is not sinusoidal, peak amplitude is often used. Because the energy of a wave grows as the square of its amplitude, changing the amplitude from 1 unit to 2 units represents a 2-fold increase in the amplitude and is accompanied by a 4-fold (2²) increase in the energy; thus 2 units of energy become 4 times bigger: 8 units.
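The 1-unit-to-2-units example can be made concrete with a tiny calculation. A sketch in Python; the proportionality constant k is arbitrary and chosen only so that an amplitude of 1 unit carries 2 units of energy, matching the example in the text:

```python
# Energy carried by a wave scales with the square of its amplitude:
# E = k * A**2, where k is an arbitrary proportionality constant.
k = 2.0

def energy(amplitude):
    return k * amplitude ** 2

print(energy(1.0))  # 2.0 units of energy at amplitude 1
print(energy(2.0))  # 8.0 units at amplitude 2: doubling amplitude quadruples energy
```

The ratio energy(2)/energy(1) is 4 regardless of the choice of k, which is the point of the example: the 4-fold increase depends only on the squared-amplitude law.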
The amplitude of an ocean wave is the maximum height of the wave crest above the level of calm water, or the maximum depth of the wave trough below that level. Amplitude is the maximum displacement of points on a wave, which you can think of as the degree or intensity of change; what you are actually seeing when you compare a small ripple with a large wave are waves with different amplitudes. We call the amount of movement from equilibrium the displacement, and the displacement y is the amplitude of the wave. The higher the power, or amplitude, the higher the waveform peaks. Loudness is directly proportional to the amplitude of the sound. In dictionary terms, amplitude is the extent or range of a quality, property, process, or phenomenon.

A steady-state amplitude remains constant during time and is thus represented by a scalar. For an asymmetric wave (periodic pulses in one direction, for example), the peak amplitude becomes ambiguous. For waves on a string, or in a medium such as water, the amplitude is a displacement. For complicated waveforms, especially non-repeating signals like noise, the RMS amplitude is usually used because it is both unambiguous and has physical significance. True RMS-responding meters were used in radio-frequency measurements, where instruments measured the heating effect in a resistor to measure a current; many digital voltmeters and all moving-coil meters, by contrast, are in the average-responding category. When measuring wavelength, it does not matter where you measure it, as long as it is the same point on each wave.
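The ambiguity of peak amplitude for an asymmetric or offset signal, and the robustness of the peak-to-peak measure, can be demonstrated numerically. A sketch in Python; the offset-sine values are illustrative, not from the text:

```python
import math

# A sine riding on a DC offset: x(t) = offset + A*sin(wt).
# Peak amplitude depends on the reference chosen (zero vs. the mean),
# while peak-to-peak does not.
A, offset = 1.0, 0.5
N = 100000
xs = [offset + A * math.sin(2 * math.pi * i / N) for i in range(N)]

peak_from_zero = max(abs(x) for x in xs)         # 1.5: includes the DC component
mean = sum(xs) / N                               # ~0.5, the DC level
peak_from_mean = max(abs(x - mean) for x in xs)  # ~1.0: the oscillation amplitude
peak_to_peak = max(xs) - min(xs)                 # ~2.0, independent of reference
print(round(peak_from_zero, 3), round(peak_from_mean, 3), round(peak_to_peak, 3))
```

This is the numerical face of the earlier point: measuring from a defined reference (here, the mean) removes the DC ambiguity, and peak-to-peak sidesteps it entirely.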
For alternating-current electric power, the universal practice is to specify the RMS of the AC waveform (with no DC component).[2]

Amplitude, in physics, is the maximum displacement or distance moved by a point on a vibrating body or wave, measured from its equilibrium position; it is important to note that the amplitude is not the distance between the top and bottom of a wave. The units of the amplitude depend on the type of wave, but are always the same units as the oscillating variable. (Remember: kilo k = 10³, mega M = 10⁶, giga G = 10⁹.)

For symmetric periodic waves, like sine waves, square waves or triangle waves, peak amplitude and semi-amplitude are the same. The energy of the wave varies in direct proportion to the square of the amplitude of the wave. Peak-to-peak is a straightforward measurement on an oscilloscope, the peaks of the waveform being easily identified and measured against the graticule.

In a stationary-state scattering process, the wave is described by the wavefunction

$$\psi(\mathbf{r}) = e^{ikz} + f(\theta)\,\frac{e^{ikr}}{r},$$

in which the first term is the incoming plane wave and the second term is the outgoing spherical wave with scattering amplitude f(θ).

References: "Additive Sound Synthesizer Project with CODE!"; "Sound Sampling, Analysis, and Recognition"; "I wrote a Sound Recognition Application"; Wikipedia, "Amplitude", https://en.wikipedia.org/w/index.php?title=Amplitude&oldid=992835230.
The symbol for wavelength is the Greek letter lambda, λ. The amplitude of a pendulum swinging through an angle of 90° is 45°. The amplitude squared is usually quoted in dB, so a null amplitude corresponds to −∞ dB. Modern meters capable of calculating RMS by sampling the waveform have made true RMS measurement commonplace.
Phys., 2006, 15 (7): 1497-1501 doi: 10.1088/1009-1963/15/7/020 Show Abstract By using the time-dependent multilevel approach, we have calculated the coherent population transfer among the quantum states of potassium atom by a single frequency-chirped laser pulse. The results show that the population can be efficiently transferred to a target state and be trapped there by using an intuitive' or a counter-intuitive' frequency sweep laser pulse in the case of narrowband' frequency-chirped laser pulse. It is also found that a pair of sequential broadband' frequency-chirped laser pulses can efficiently transfer population from one ground state of the \La atom to the other one. Zeng Jiao-Long, Wang Yan-Gui, Zhao Gang, Yuan Jian-Min Chin. Phys., 2006, 15 (7): 1502-1510 doi: 10.1088/1009-1963/15/7/021 Show Abstract The energy levels, oscillator strengths, spontaneous radiative decay rates, and electron impact collision strengths are calculated for Fe VIII and Fe IX using the recently developed flexible atomic code (FAC). These atomic data are used to analyse the emission spectra of both laboratory and astrophysical plasmas. The {\it n}f--3d emission lines have been simulated for Fe VIII and Fe IX in a wavelength range of 6--14 nm. For Fe VIII, the predicted relative intensities of lines are insensitive to temperature. For Fe IX, however, the intensity ratios are very sensitive to temperature, implying that the information of temperature in the experiment can be inferred. Detailed line analyses have also been carried out in a wavelength range of 60--80 nm for Fe VIII, where the solar ultraviolet measurements of emitted radiation spectrometer records a large number of spectra. More lines can be identified with the aid of present atomic data. A complete dataset is available electronically from http://www.astrnomy.csdb.cn/EIE/. Niu Dong-Mei, Li Hai-Yang, Luo Xiao-Lin, Liang Feng, Cheng Shuang, Li An-Lin Chin. 
Phys., 2006, 15 (7): 1511-1516 doi: 10.1088/1009-1963/15/7/022 Show Abstract The multi-charged sulfur ions of Sq= (q\le 6) have been generated when hydrogen sulfide cluster beams are irradiated by a nanosecond laser of 1064 and 532,nm with an intensity of 1010\sim 1012W1\cdotcm-2. S6+ is the dominant multi-charged species at 1064nm, while S4+, S3+ and S2+ ions are the main multi-charged species at 532nm. A three-step model (i.e., multiphoton ionization triggering, inverse bremsstrahlung heating, electron collision ionizing) is proposed to explain the generation of these multi-charged ions at the laser intensity stated above. The high ionization level of the clusters and the increasing charge state of the ion products with increasing laser wavelength are supposed mainly due to the rate-limiting step, i.e., electron heating by absorption energy from the laser field via inverse bremsstrahlung, which is proportional to \lambda 2, \lambda being the laser wavelength. Yan Shi-Ying, Zhu Zheng-He Chin. Phys., 2006, 15 (7): 1517-1521 doi: 10.1088/1009-1963/15/7/023 Show Abstract This paper uses the density functional theory (DFT)(B3p86) of Gaussian03 to optimize the structure of Fe2 molecule. The result shows that the ground state for Fe2 molecule is a 9-multiple state, which shows spin polarization effect of Fe2 molecule of transition metal elements for the first time. Meanwhile, we have not found any spin pollution because the wavefunction of the ground state does not mingle with wavefunctions with higher energy states. So, that the ground state for Fe2 molecule is a 9-multiple state is indicative of the spin polarization effect of Fe2 molecule of transition metal elements. That is, there exist 8 parallel spin electrons. The non-conjugated electron is greatest in number. These electrons occupy different spacious tracks, so that the energy of the Fe2 molecule is minimized. 
It can be concluded that the effect of parallel spin of the Fe2 molecule is larger than the effect of the conjugated molecule, which is obviously related to the effect of electron d delocalization. In addition, the Murrell--Sorbie potential functions with the parameters for the ground state and other states of Fe2 molecule are derived. Dissociation energy De for the ground state of Fe2 molecule is 2.8586ev, equilibrium bond length Re is 0.2124nm, vibration frequency \omega e is 336.38,cm-1. Its force constants f2, f3,and f4 are 1.8615aJ\cdotnm-2, --8.6704aJ\cdotnm-3, 29.1676aJ\cdotnm-4 respectively. The other spectroscopic data for the ground state of Fe2 molecule \omegae \chie, Be, \alphae are 1.5461,cm-1, 0.1339,cm-1,7.3428×10-4,cm-1 respectively. PHYSICS OF GASES, PLASMAS, AND ELECTRIC DISCHARGES Shi Bing-Ren, Qu Wen-Xiao Chin. Phys., 2006, 15 (7): 1532-1538 doi: 10.1088/1009-1963/15/7/026 Show Abstract A ballooning mode equation for tokamak plasma, with the toroidicity and the Shafranov shift effects included, is derived for a shift circular flux tokamak configuration. Using this equation, the stability of the plasma configuration with an internal transport barrier (ITB) against the high n (the toroidal mode number) ideal magnetohydrodynamic (MHD) ballooning mode is analysed. It is shown that both the toroidicity and the Shafranov shift effects are stabilizing. In the ITB region, these effects give rise to a low shear stable channel between the first and the second stability regions. Out of the ITB region towards the plasma edge, the stabilizing effect of the Shafranov shift causes the unstable zone to be significantly narrowed. Yang Xian-Jun Chin. 
Phys., 2006, 15 (7): 1539-1543 doi: 10.1088/1009-1963/15/7/027 Show Abstract An analytical scheme on the initial transient process in a simple helical flux compression generator, which includes the distributions of both the magnetic field in the hollow of an armature and the conducting current density in the stator, is developed by means of a diffusion equation. A relationship between frequency of the conducting current, root of the characteristic function of Bessel equation and decay time in the armature is given. The skin depth in the helical stator is calculated and is compared with the approximate one which is widely used in the calculation of magnetic diffusion. Our analytical results are helpful to understanding the mechanism of the loss of magnetic flux in both the armature and stator and to suggesting an optimal design for improving performance of the helical flux compression generator. Cheng Cheng, Liu Peng, Xu Lei, Zhang Li-Ye, Zhan Ru-Juan, Zhang Wen-Rui Chin. Phys., 2006, 15 (7): 1544-1548 doi: 10.1088/1009-1963/15/7/028 Show Abstract This paper reports that a new plasma generator at atmospheric pressure, which is composed of two homocentric cylindrical all-metal tubes, successfully generates a cold plasma jet. The inside tube electrode is connected to ground, the outside tube electrode is connected to a high-voltage power supply, and a dielectric layer is covered on the outside tube electrode. When the reactor is operated by low-frequency (6 kHz--20 kHz) AC supply in atmospheric pressure and argon is steadily fed as a discharge gas through inside tube electrode, a cold plasma jet is blown out into air and the plasma gas temperature is only 25--30℃. The electric character of the discharge is studied by using digital real-time oscilloscope (TDS 200-Series), and the discharge is capacitive. 
Preliminary results are presented on the decontamination of E.colis bacteria and Bacillus subtilis bacteria by this plasma jet, and an optical emission analysis of the plasma jet is presented in this paper. The ozone concentration generated by the plasma jet is 1.0×1016cm-3 which is acquired by using the ultraviolet absorption spectroscopy. CONDENSED MATTER: STRUCTURAL, MECHANICAL, AND THERMAL PROPERTIES Meng Qing-Ge, Li Jian-Guo, Zhou Jian-Kun Chin. Phys., 2006, 15 (7): 1549-1557 doi: 10.1088/1009-1963/15/7/029 Show Abstract Pr-based bulk metallic amorphous (BMA) rods (Pr60Ni30Al10) and Al-based amorphous ribbons (Al87Ni10Pr3) have been prepared by using copper mould casting and single roller melt-spun techniques, respectively. Thermal parameters deduced from differential scanning calorimeter (DSC) indicate that the glass-forming ability (GFA) of Pr\xj{60}Ni\xj{30}Al\xj{10} BMA rod is far higher than that of Al87Ni10Pr3 ribbon. A comparative study about the differences in structure between the two kinds of glass-forming alloys, superheated viscosity and crystallization are also made. Compared with the amorphous alloy Al87Ni10Pr3, the BMA alloy Pr60Ni30Al10 shows high thermal stability and large viscosity, small diffusivity at the same superheated temperatures. The results of x-Ray diffraction (XRD) and transmission electron microscope (TEM) show the pronounced difference in structure between the two amorphous alloys. Together with crystallization results, the main structure compositions of the amorphous samples are confirmed. It seems that the higher the GFA, the more topological type clusters in the Pr--Ni--Al amorphous alloys, the GFAs of the present glass-forming alloys are closely related to their structures. Yu Jie, Bai Xin, Zhang Zhao-Xiang, Zhang Geng-Min, Guo Deng-Zhu, Xue Zeng-Quan Chin. 
Phys., 2006, 15 (7): 1558-1562 doi: 10.1088/1009-1963/15/7/030 Show Abstract The low-energy electron point source (LEEPS) microscope, which creates enlarged projection images with low-energy field emission electron beams, can be used to observe the projection image of nano-scale samples and to characterize the coherence of the field emission beam. In this paper we report the design and test operation performance of a home-made LEEPS microscope. Multi-walled carbon nanotubes (MWCNTs) synthesized by the CVD method were observed by LEEPS microscope using a conventional tungsten tip, and projection images with the magnification of up to 104 was obtained. The resolution of the acquired images is \sim10 nm. A higher resolution and a larger magnification can be expected when the AC magnetic field inside the equipment is shielded and the vibration of the instrument reduced. Lan You-Zhao, Wu Dong-Sheng, Huang Shu-Ping, Shen Juan, Li Fei-Fei, Cheng Wen-Dan Chin. Phys., 2006, 15 (7): 1563-1569 doi: 10.1088/1009-1963/15/7/031 Show Abstract In this paper, we investigate the length dependence of linear and nonlinear optical properties of finite-length BN nanotubes. The recently predicted smallest BN(5,0) nanotube with configuration stabilization is selected as an example. The energy gap and optical gap show the obvious length dependence with the increase of nanotube length. When the length reaches about 24 {\AA}, the energy gap will saturate at about 3.2 eV, which agrees well with the corrected quasi-particle energy gap. The third-order polarizabilities increase with the increase of tube length. Two-photon allowed excited states have significant contributions to the third-order polarizabilities of BN(5,0) nanotube. Li Zhi-Peng, Liu Yun-Cai Chin. Phys., 2006, 15 (7): 1570-1576 doi: 10.1088/1009-1963/15/7/032 Show Abstract We introduce a velocity-difference-separation model that modifies the previous models in the literature. 
The improvement of this new model over the previous ones lies in that it not only theoretically retains many strong points of the previous ones, but also performs more realistically than others in the dynamical evolution of congestion. Furthermore, the proposed model is investigated with analytic and numerical methods, with the finding that it can demonstrate some complex physical features observed in real traffic such as the existence of three phases: free flow, synchronized flow, and wide moving jam; sudden flow drop in flow-density plane; and traffic hysteresis in transition between the free and the synchronized flow. Qian Feng, Huang Hong-Bin, Qi Guan-Xiao, Shen Cai-Kang Chin. Phys., 2006, 15 (7): 1577-1579 doi: 10.1088/1009-1963/15/7/033 Show Abstract Based on Bogoliubov's truncated Hamiltonian HB for a weakly interacting Bose system, and adding a U(1) symmetry breaking term \$\sqrt{V}(\lambda a0+\lambda*a0+) to HB, we show by using the coherent state theory and the mean-field approximation rather than the c-number approximations, that the Bose--Einstein condensation(BEC) occurs if and only if the U(1) symmetry of the system is spontaneously broken. The real ground state energy and the justification of the Bogoliubov c-number substitution are given by solving the Schr\"{o}dinger eigenvalue equation and using the self-consistent condition. Yang Kun, Wang Chun-Lei, Li Ji-Chao, Zhang Chao, Wu Qing-Zao, Zhang Yan-Fei, Yin Na, Liu Xue-Yan Chin. Phys., 2006, 15 (7): 1580-1584 doi: 10.1088/1009-1963/15/7/034 Show Abstract In this paper, the structure of cubic CaTiO3 (001) surfaces with CaO and TiO2 terminations has been studied from density functional calculations. It has been found that the Ca atom has the largest relaxation for both kinds of terminations, and the rumpling of the CaO-terminated surface is much larger than that of TiO2-terminated surface. Also we have found that the metal atom relaxes much more prominently than the O atom does in each layer. 
The CaO-terminated surface is slightly more energetically favourable than the TiO2-terminated surface from the analysis of the calculated surface energy. CONDENSED MATTER: ELECTRONIC STRUCTURE, ELECTRICAL, MAGNETIC, AND OPTICAL PROPERTIES Ouyang Chu-Ying, Xiong Zhi-Hua, Ouyang Qi-Zhen, Liu Guo-Dong, Ye Zhi-Qing, Lei Min-Sheng Chin. Phys., 2006, 15 (7): 1585-1590 doi: 10.1088/1009-1963/15/7/035 Show Abstract The electronic and optical properties of zincblende ZnX(X=S, Se, Te) and ZnX:Co are studied from density functional theory (DFT) based first principles calculations. The local crystal structure changes around the Co atoms in the lattice are studied after Co atoms are doped. It is shown that the Co-doped materials have smaller lattice constant (about 0.6%--0.9%). This is mainly due to the shortened Co--X bond length. The (partial) density of states (DOS) is calculated and differences between the pure and doped materials are studied. Results show that for the Co-doped materials, the valence bands are moving upward due to the existence of Co 3d electron states while the conductance bands are moving downward due to the reduced lattice constants. This results in the narrowed band gap of the doped materials. The complex dielectric indices and the absorption coefficients are calculated to examine the influences of the Co atoms on the optical properties. Results show that for the Co-doped materials, the absorption peaks in the high wavelength region are not as sharp and distinct as the undoped materials, and the absorption ranges are extended to even higher wavelength region. Sun Mei, Liu Rong-Juan, Li Zhi-Yuan, Cheng Bing-Ying, Zhang Dao-Zhong, Yang Hai-Fang, Jin Ai-Zi Chin. Phys., 2006, 15 (7): 1591-1594 doi: 10.1088/1009-1963/15/7/036 Show Abstract The extraordinary light transmission through a 200-nm thick gold film when passing through different subwavelength hole arrays is observed experimentally. 
The sample is fabricated by electron beam lithography and reactive ion etching system. A comparison between light transmissions shows that the hole shape changing from rectangular to diamond strongly affects the transmission intensity although both structures possess the same lattice constant of 600,nm. Moreover, the position of the transmission maximum undergoes a spectral red-shift of about 63,nm. Numerical simulations by using a transfer matrix method reproduce the observed transmission spectrum quite well. Hu Jing-Guo, R L Stamps Chin. Phys., 2006, 15 (7): 1595-1601 doi: 10.1088/1009-1963/15/7/037 Show Abstract The rotational anisotropies in the exchange bias structures of ferromagnetism/antiferromagnetism 1/antiferromagnetism 2 are studied in this paper. Based on the model, in which the antiferromagnetism is treated with an Ising mean field theory and the rotational anisotropy is assumed to be related to the field created by the moment induced on the antiferromagnetic layer next to the ferromagnetic layer, we can explain why in experiments for ferromagnetism (FM)/antiferromagntism 1 (AFM1)/antiferromagnetism 2 (AFM2) systems the thickness-dependent rotational anisotropy value is non-monotonic, i.e. it reaches a minimum for this system at a specific thickness of the first antiferromagnetic layer and exhibits oscillatory behaviour. In addition, we find that the temperature-dependent rotational anisotropy value is in good agreement with the experimental result. Huang Zhi-Gao, Chen Zhi-Gao, Jiang Li-Qin, Ye Qing-Ying, Wu Qing-Yun Chin. Phys., 2006, 15 (7): 1602-1610 doi: 10.1088/1009-1963/15/7/038 Show Abstract Based on Monte Carlo method, the oscillatory behaviour of the average magnetic moment as a function of the cluster sizes and the temperature dependences of magnetic moment with different sizes have been studied. 
It is found that the oscillations superimposed on the decreasing moment are associated with not only the geometrical structure effects but also the thermal fluctuation. The hystereses and thermal coercivities for free clusters with zero and finite uniaxial anisotropies have been calculated. The simulated thermal dependence of the coercivity is consistent with the experimental result, but does not fit the Tα law in the whole temperature range. It is evident that an easy magnetization direction and an anisotropy resulting from the spin configurations exist in the free clusters with the pure exchange interaction, which is also proved by the natural angle and energy distribution of clusters. A systematic theoretical analysis is also made to establish the relationship between natural angle and coercivity. Zhao Ming-Lei, Yi Xiu-Jie, Wang Chun-Lei, Wang Jin-Feng, Zhang Jia-Liang Chin. Phys., 2006, 15 (7): 1611-1614 doi: 10.1088/1009-1963/15/7/039 Show Abstract This paper investigates the dielectric properties of (Na0.5K0.5Bi)0.5TiO3 crystal at intermediate frequencies (1kHz \le f \le 1MHz) in the temperature range of 30--560℃. A pronounced high-temperature diffuse dielectric anomaly has been observed. This dielectric anomaly is shown to arise from a Debye-like dielectric dispersion that slows down following an Arrhenius law. The activation energy Er obtained in the fitting process is about 0.69eV. It proposes that the dielectric peak measured at low frequency above 400℃ is not related to the phase transition but to a space-charge relaxation. He Shou-Jie, Ai Xi-Cheng, Dong Li-Fang, Chen De-Ying, Wang Qi, Li Xue-Chen, Zhang Jian-Ping, Wang Long Chin. Phys., 2006, 15 (7): 1615-1620 doi: 10.1088/1009-1963/15/7/040
# Thyr: A Volumetric Ray-Marching Tool for Simulating Microwave Emission

Christopher M. J. Osborne, Paulo J. A. Simões
SUPA, School of Physics and Astronomy, University of Glasgow, Glasgow, G12 8QQ, UK
Centro de Rádio Astronomia e Astrofísica Mackenzie, CRAAM, Universidade Presbiteriana Mackenzie, São Paulo, SP 01302-907, Brazil
Escola de Engenharia, Universidade Presbiteriana Mackenzie, São Paulo, SP 01302-907, Brazil
E-mail: c.osborne.1@research.gla.ac.uk
Accepted XXX. Received YYY; in original form ZZZ

###### Abstract

Gyrosynchrotron radiation is produced by solar flares, and can be used to infer properties of the accelerated electrons and magnetic field of the flaring region. This microwave emission is highly dependent on many local plasma parameters, and the viewing angle. To correctly interpret observations, detailed simulations of the emission are required. Additionally, gyrosynchrotron emission from the chromosphere has been largely ignored in modelling efforts, and recent studies have shown the importance of thermal emission at millimetric wavelengths. Thyr is a new tool for modelling microwave emission from three-dimensional flaring loops with spatially varying atmosphere and increased resolution in the lower corona and chromosphere. Thyr is modular and open-source, consisting of separate components to compute the thermal and non-thermal microwave emission coefficients and perform three-dimensional radiative transfer (in local thermodynamic equilibrium). The radiative transfer integral is computed by a novel ray-marching technique to efficiently compute the contribution of many volume elements. This technique can also be employed on a variety of astrophysics problems. Herein we present a review of the theory of gyrosynchrotron radiation, and two simulations of identical flare loops in low- and high-resolution performed with Thyr, with a spectral imaging analysis of differing regions.
The high-resolution simulation presents a spectral hardening at higher frequencies. This hardening originates around the top of the chromosphere due to the strong convergence of the magnetic field, and is not present in previous models due to insufficient resolution. This hardening could be observed with a coordinated flare observation from active radio observatories.

## 1 Introduction and Background

Strong increases in microwave emission are observed during solar flares. Gyrosynchrotron emission, originating from mildly relativistic non-thermal electrons spiraling around magnetic field lines, is responsible for the majority of the emission in the 1-200 GHz range (Dulk, 1985; Bastian, 1998). Recent studies have also shown the importance of free-free emission from thermal electrons at these frequencies (e.g. Heinzel & Avrett, 2012). Observations of the microwave emission produced during a solar flare can be used to characterise the properties of the accelerated electrons and magnetic field of the flaring region. Current observations of solar flares are typically carried out by radio telescopes without imaging capabilities (Sun-as-a-star) and by the interferometer Nobeyama Radio Heliograph (NoRH, Nakajima et al., 1994), capable of producing images at 17 and 34 GHz with moderate spatial resolution (10″ and 5″, respectively) and with a temporal resolution down to 1 s. Such observations are capable of resolving the radio flare sources, associated with footpoints (e.g. Tzatzakis et al., 2008), flaring loops (e.g. Kundu et al., 2001), or ribbons (e.g. Simões et al., 2013). These features are commonly associated with the geometry of magnetic field loops, filled with electrons accelerated during the energy release phase of flares.
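The abstract describes evaluating the radiative transfer integral with a ray-marching technique. The core update of such a scheme can be sketched as follows. This is an illustrative sketch under stated assumptions, not Thyr's actual implementation: the per-step emission (j) and absorption (k) coefficients are assumed to have been sampled along the ray already, and each step uses the standard LTE source function S = j/k.

```python
import math

def ray_march(emission, absorption, ds):
    """Accumulate specific intensity along a ray under LTE.

    emission, absorption : sequences of emission (j) and absorption (k)
        coefficients sampled along the ray, ordered from the far side of
        the volume towards the observer.
    ds : path length of each step.
    """
    intensity = 0.0
    for j, k in zip(emission, absorption):
        tau = k * ds                           # optical depth of this step
        attenuation = math.exp(-tau)
        source = j / k if k > 0.0 else 0.0     # LTE source function S = j/k
        # Attenuate the light accumulated behind this cell, then add the
        # cell's own (self-absorbed) contribution.
        intensity = intensity * attenuation + source * (1.0 - attenuation)
    return intensity
```

In the optically thick limit the marched intensity saturates at the local source function, while in the optically thin limit it reduces to the sum of j·ds along the ray, which is the behaviour a volumetric gyrosynchrotron code needs to capture.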
The first microwave spectral imaging analysis of a flare, with high spectral resolution, was performed by Wang et al. (1994), using the Owens Valley Radio Observatory (OVSA, Lim et al., 1994) 1–18 GHz data, and showed that the microwave spectral peak occurs at lower frequencies ( GHz) for the looptop sources and at higher frequencies ( GHz) in the footpoints, following the strength of the magnetic field in a flaring loop. Another example of a microwave imaging spectroscopic analysis was presented by Wang et al. (1995), also using OVSA data. A similar study was recently performed by Gary et al. (2018) with the Extended Owens Valley Solar Array (EOVSA, Kuroda et al., 2018; Wang et al., 2015) observations of the SOL2017-09-10 X8.2 event and yielded results consistent with those of Wang et al. (1994) and Wang et al. (1995), and simulation results of Simões & Costa (2006). Nindos et al. (2000) used one-dimensional modelling to reproduce the spatially resolved emission of a flare loop at 5 and 15 GHz, showing that the 15 GHz emission was produced only in the footpoints and the 5 GHz emission was restricted to the upper loop. To fit the observations it was necessary to use a much stronger magnetic field in the photosphere than at the looptop (870 G in the footpoints and 280 G in the looptop). Analysis of similar events by Lee & Gary (2000) provided evidence for magnetic trapping that would restrict the 5 GHz emission to the upper loop while the 15 GHz emission comes from higher energy electrons that are able to escape mirroring effects. The complexity of solar microwave simulations has increased significantly in recent years, primarily building on the three-dimensional modelling tool GS Simulator (now GX Simulator) (Kuznetsov et al., 2011; Nita et al., 2015).
GX Simulator computes solar radio emission from a three-dimensional model, with in-built tools for magnetic field extrapolation from photospheric magnetograms, and a chromospheric approximation computed using the semi-empirical atmospheres of Fontenla et al. (2009), based on the photospheric intensity and magnetogram values (Fleishman et al., 2015). This tool has been used to forward fit and reconcile observations of radio and hard X-ray emission with a unified electron population (Kuroda et al., 2018), and to investigate the plasma heating mechanism during a "cold" flare event using Linear Force Free Field extrapolation on magnetogram data to reconstruct the observed two-loop geometry and explain the heating delay by electron trapping in the looptop (Fleishman et al., 2016). Gordovskyy et al. (2017) used GX Simulator to investigate the polarised microwave emission from relaxing twisted coronal loops based on time-dependent magnetohydrodynamic simulations. These loops were found to produce gyrosynchrotron emission with short-term gradients of circular polarisation (cross-loop polarisation gradients), that depend strongly on viewing angle, and would primarily be visible on a loop observed on the solar limb, clearly showing the three-dimensional nature of the problem. With the arrival of new and upgraded solar observatories, such as the Atacama Large Millimetric-submillimetric Array (ALMA, Wedemeyer et al., 2016) and EOVSA, that are providing higher spatial and spectral resolution, it is essential to have detailed predictions and models for these observations. While solar observations with ALMA have started (White et al., 2017; Iwai et al., 2017; Brajša et al., 2017), the Sun is now at its period of minimum activity, and no flares have been detected with ALMA yet. However, ALMA's capability for flare studies has been proven with the detection of a super-flare on Proxima Centauri (MacGregor et al., 2018).
Regular solar observations in the millimetric range started in 1999 with the Solar Submillimeter Telescope (SST, operating at 212 and 405 GHz; Kaufmann et al., 2008). Since then a number of solar flares have been detected, with the typical decreasing, non-thermal spectrum towards higher frequencies (Giménez de Castro et al., 2009; Giménez de Castro et al., 2013), evidence for thermal (free-free) emission (especially at high frequencies towards the infrared range) (Heinzel & Avrett, 2012; Simões et al., 2017; Trottet et al., 2012, 2015), and with an increasing spectrum component at millimetric wavelengths (sub-THz) (e.g. Kaufmann et al., 2004), which was also observed with the Köln Observatory for Submillimeter and Millimeter Astronomy (KOSMA, Lüthi et al., 2004). This sub-THz component still remains without a satisfactory physical explanation (Krucker et al., 2013). Previous observations have shown that flaring ribbons and footpoints can reach temperatures in the range of 1-10 MK (Hudson et al., 1994; Mrozek & Tomczak, 2004; Graham et al., 2013; Simões et al., 2015). The chromospheric plasma is normally opaque to radio emission (in the GHz range), and it would therefore be irrelevant if non-thermal electrons penetrate this layer. However, if this plasma is heated to these greater temperatures, then the free-free opacity drops and the plasma becomes optically thin to any gyrosynchrotron emission potentially produced in the chromosphere, as discussed in Fletcher & Simões (2013).
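The drop in free-free opacity with heating described above follows from the approximate scaling of the free-free absorption coefficient with density, temperature and frequency. A minimal sketch, assuming the approximate expression of Dulk (1985) for an unmagnetised thermal plasma in its high-temperature (T > 2×10^5 K) branch; the function name and parameter choices are illustrative:

```python
import math

def free_free_absorption(n_e, T, nu):
    """Approximate free-free absorption coefficient (cm^-1), cf. Dulk (1985),
    high-temperature branch (T > 2e5 K), illustrating the
    kappa ~ n_e^2 T^-3/2 nu^-2 scaling.

    n_e : thermal electron density in cm^-3
    T   : temperature in K
    nu  : frequency in Hz
    """
    coulomb_log = 24.5 + math.log(T) - math.log(nu)
    return 9.78e-3 * n_e**2 / (nu**2 * T**1.5) * coulomb_log
```

The T^-3/2 dependence dominates: heating a chromospheric layer from ~10^4 K to flare-ribbon temperatures of several MK reduces its free-free opacity by orders of magnitude, exposing any gyrosynchrotron emission produced there.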
Thyr’s novel feature is its ability to manually refine the simulation resolution in the chromosphere while maintaining a complete microwave radiation treatment, allowing it to resolve the much smaller scales over which the atmospheric parameters evolve, and to account for both the free-free and gyrosynchrotron emission from this thin layer. Our model also supports arbitrarily varying atmospheric parameters, magnetic field configurations, and electron pitch-angle distributions. Thyr builds on several generations of gyrosynchrotron modelling tools, including, but not limited to, Klein & Trottet (1984), Alissandrakis & Preka-Papadema (1984), Holman (2003), and Simões & Costa (2006, 2010). Costa et al. (2013) and Cuambe et al. (2018) have both constructed large databases of three-dimensional gyrosynchrotron simulations spanning a large parameter space, to develop an understanding of solar bursts and to attempt to infer flare parameters from observations, respectively. GX Simulator (Nita et al., 2015) can produce three-dimensional simulations of gyrosynchrotron emission from a dipole loop and has been used to investigate the effects of varying the electron pitch-angle distribution (Kuznetsov et al., 2011), but focuses primarily on coronal microwave emission. These tools, including Thyr, build on the analytic expressions describing gyrosynchrotron emission formulated by Ramaty (1969). In this paper we describe the functioning of the Thyr tool, validate its output against simulations presented in Klein & Trottet (1984), and present new simulations demonstrating the importance of modelling the lower atmosphere at high resolution to capture the fine structures of the chromosphere. The source code and examples are available on Github at http://github.com/Goobley/Thyr2 (Osborne, 2018).

## 2 Theory

### 2.1 Gyrosynchrotron Emission

Gyrosynchrotron emission depends on a large number of parameters of the emitting region.
These are primarily (Ramaty, 1969; Stähli et al., 1989)

• Magnetic field strength $B$: Directly determines the electron gyrofrequency.

• Plasma density $n_p$: Describes the density of electrons in the thermal plasma and can have a large effect on measured emission due to free-free emission and absorption effects. Dense magnetised plasmas may also strongly suppress the gyrosynchrotron emission, an effect known as Razin suppression (Ramaty, 1969).

• Non-thermal electron density $n_e$: This parameter is largely responsible for the strength of the emission, as a higher electron density will produce more radiation. It also affects the importance of gyrosynchrotron self-absorption, via radiative transfer.

• Non-thermal electron distribution $g(\gamma, \phi)$: This function describes the distribution of non-thermal electron energies (here in terms of the relativistic Lorentz factor $\gamma$) and pitch-angles $\phi$ – the angle between the electron’s velocity vector and the magnetic field. It is often assumed that the electron energies follow a single power-law distribution determined by their spectral index $\delta$, $f(E) \propto E^{-\delta}$, and minimum and maximum cut-off energy values, $E_{\min}$ and $E_{\max}$, respectively. The distribution of pitch-angle has a strong effect on the angles at which radiation is observed, due to the strong beaming effect of the radiation from relativistic electrons. Thyr supports multi-power-law energy distributions and arbitrary distributions of pitch-angle.

• Viewing angle $\theta$: The angle between the wave vector and the magnetic field line has a strong effect on the observed radiation due to the polarisation of the radiation and the beaming of emission from relativistic particles.

As gyrosynchrotron emission is produced by electrons spiraling around the magnetic field, it is composed of two circularly polarised modes. We designate the mode extraordinary when the circular polarisation is right-handed and the electric field vector rotates in the same direction as the electrons. In the opposite case the mode is called ordinary, or left-handed.
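As an aside, the single power-law energy distribution described above can be made concrete with a minimal Python sketch (hypothetical function and parameter names, not Thyr's code; the 10 keV and 5 MeV cut-offs are illustrative defaults, not the paper's values). The distribution is isotropic in pitch-angle and normalised so its integral over $\gamma$ and $\cos\phi$ (with the $2\pi$ azimuthal factor) is unity:

```python
import numpy as np

def isotropic_power_law(gamma, delta, e_min_kev=10.0, e_max_kev=5000.0):
    """Isotropic single power-law electron distribution g(gamma),
    normalised so that 2*pi * int dgamma int d(cos phi) g = 1.
    Energies are kinetic: E = (gamma - 1) m_e c^2, with m_e c^2 = 511 keV."""
    mc2 = 511.0  # electron rest energy in keV
    g_min = 1.0 + e_min_kev / mc2
    g_max = 1.0 + e_max_kev / mc2
    # Normalise the energy part: int_{g_min}^{g_max} A*(gamma-1)^-delta dgamma = 1
    a = (delta - 1.0) / ((g_min - 1.0)**(1 - delta) - (g_max - 1.0)**(1 - delta))
    energy_part = np.where((gamma >= g_min) & (gamma <= g_max),
                           a * (gamma - 1.0)**(-delta), 0.0)
    # Isotropic pitch-angle part: uniform in cos(phi) and azimuth -> 1/(4*pi)
    return energy_part / (4.0 * np.pi)
```

A harder (larger $\delta$) spectrum concentrates the distribution towards the low-energy cut-off, which is why the choice of $E_{\min}$ matters so much for the emitted flux.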
Following the convention of Klein & Trottet (1984) we use “$+$” to refer to the extraordinary mode (right-handed polarisation for positive $\cos\theta$) in equations, and “$-$” to refer to the ordinary mode (left-handed). The following treatment is based on that of Klein & Trottet (1984), developed from Ramaty (1969) and the corrections of Trulsen & Fejer (1970). The refractive index of the plasma is given by the Appleton-Hartree equation (e.g. Stix, 1962)

$$n_\pm(\nu,\theta)^2 = 1 - \frac{2X(1-X)}{2(1-X) - Y^2\sin^2\theta \pm \left[Y^4\sin^4\theta + 4Y^2(1-X)^2\cos^2\theta\right]^{1/2}} \tag{1}$$

where

$$X := \frac{\nu_p^2}{\nu^2}, \quad Y := \frac{\nu_B}{\nu}, \quad \nu_p := \text{electron plasma frequency} = e\sqrt{\frac{n_p}{\pi m_e}}, \quad \nu_B := \text{electron gyrofrequency} = \frac{eB}{2\pi m_e c},$$

with $e$ the electron charge and $m_e$ the electron mass. The polarisation coefficient is then

$$a_{\theta\pm}(\nu,\theta) = \frac{2(1-X)\cos\theta}{-Y\sin^2\theta \pm \left[Y^2\sin^4\theta + 4(1-X)^2\cos^2\theta\right]^{1/2}} \tag{2}$$

and is used to compute the spectral intensity of a single electron (Trulsen & Fejer, 1970)

$$\epsilon_\pm(\nu,\theta,\gamma,\phi) = \frac{2\pi e^2}{c}\,\frac{\nu^2 n_\pm^2}{1+a_{\theta\pm}^2} \sum_{s=-\infty}^{\infty} \left[-\beta\sin\phi\, J'_s(x_s) + a_{\theta\pm}\,\frac{\cos\theta - n_\pm\beta\cos\phi}{n_\pm\sin\theta}\, J_s(x_s)\right]^2 \delta\!\left[\left(1 - n_\pm\beta\cos\phi\cos\theta\right)\nu - \frac{s\nu_B}{\gamma}\right] \tag{3}$$

where

$$c := \text{speed of light}, \quad \gamma := (1-\beta^2)^{-1/2}, \quad \beta := \frac{v}{c}, \quad x_s := \frac{\gamma\nu}{\nu_B}\, n_\pm \beta \sin\phi \sin\theta,$$

$J_s$ is the Bessel function of the first kind of order $s$, $J'_s$ its derivative with respect to $x_s$, and $\delta$ the Dirac delta function. The electron distribution is described by the function $g(\gamma,\phi)$ such that

$$2\pi \int_1^\infty d\gamma \int_{-1}^{1} d(\cos\phi)\, g(\gamma,\phi) = 1. \tag{4}$$

This gives rise to the expression of the emission and absorption coefficients, $j_\pm$ and $\kappa_\pm$ respectively, for a homogeneous ensemble of electrons with density $n_e$ and energy and pitch-angle distribution $g(\gamma,\phi)$

$$j_\pm(\nu,\theta) = 2\pi n_e \int_1^\infty d\gamma \int_{-1}^{1} d(\cos\phi)\, g(\gamma,\phi)\, \epsilon_\pm(\nu,\theta,\gamma,\phi), \tag{5}$$

$$\kappa_\pm(\nu,\theta) = \frac{2\pi n_e}{m_e \nu^2 n_\pm} \int_1^\infty d\gamma \int_{-1}^{1} d(\cos\phi)\, \epsilon_\pm(\nu,\theta,\gamma,\phi)\, h_\pm(\nu,\theta,\gamma,\phi) \tag{6}$$

where

$$h_\pm(\nu,\theta,\gamma,\phi) = -\beta\gamma^2 \frac{\partial}{\partial\gamma}\!\left(\frac{g(\gamma,\phi)}{\beta\gamma^2}\right) + \frac{n_\pm\beta\cos\theta - \cos\phi}{\beta^2\gamma\sin\phi}\, \frac{\partial}{\partial\phi} g(\gamma,\phi). \tag{7}$$

The presence of the delta function in $\epsilon_\pm$ allows the integral over $\phi$ to be solved analytically for each harmonic $s$. Following Ramaty (1969) we define

$$G_\pm(\nu,\theta,\gamma,s) = \left[-\beta\sin\phi_s\, J'_s(x_s) + a_{\theta\pm}\left(\frac{\cos\theta}{n_\pm\sin\theta} - \frac{\beta\cos\phi_s}{\sin\theta}\right) J_s(x_s)\right]^2 \frac{g(\gamma,\phi_s)\,\beta}{\left(1+a_{\theta\pm}^2\right) 2\pi\cos\theta}\,\frac{\nu}{\nu_B}, \tag{8}$$

$$H_\pm(\nu,\theta,\gamma,s) = G_\pm(\nu,\theta,\gamma,s)\, h_\pm(\nu,\theta,\gamma,\phi_s) \tag{9}$$

where

$$\cos\phi_s := \frac{1 - s\nu_B/(\gamma\nu)}{n_\pm\beta\cos\theta}, \qquad x_s := \frac{s\, n_\pm\beta\sin\theta\sin\phi_s}{1 - n_\pm\beta\cos\theta\cos\phi_s}.$$
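As a numerical aside, the magnetoionic quantities of eqs (1)-(2) are straightforward to evaluate. The Python sketch below (hypothetical function names, not Thyr's implementation, which is in C++/Lua) computes $X$, $Y$, the squared refractive indices, and the polarisation coefficients in Gaussian units:

```python
import numpy as np

# Physical constants in Gaussian (cgs) units
E_CHARGE = 4.803e-10   # electron charge, esu
M_E = 9.109e-28        # electron mass, g
C_LIGHT = 2.998e10     # speed of light, cm/s

def magnetoionic_params(nu, n_p, b_gauss):
    """X = (nu_p/nu)^2 and Y = nu_B/nu from the definitions under eq. (1)."""
    nu_p = E_CHARGE * np.sqrt(n_p / (np.pi * M_E))             # plasma frequency
    nu_b = E_CHARGE * b_gauss / (2.0 * np.pi * M_E * C_LIGHT)  # gyrofrequency
    return (nu_p / nu) ** 2, nu_b / nu

def refractive_index_sq(x, y, theta):
    """Squared refractive indices of eq. (1); returns the (upper, lower)
    sign solutions of the Appleton-Hartree equation."""
    s2, c2 = np.sin(theta) ** 2, np.cos(theta) ** 2
    root = np.sqrt(y**4 * s2**2 + 4.0 * y**2 * (1.0 - x)**2 * c2)
    num = 2.0 * x * (1.0 - x)
    return (1.0 - num / (2.0 * (1.0 - x) - y**2 * s2 + root),
            1.0 - num / (2.0 * (1.0 - x) - y**2 * s2 - root))

def polarisation_coefficient(x, y, theta):
    """a_{theta,pm} of eq. (2); returns the (upper, lower) sign solutions."""
    s2 = np.sin(theta) ** 2
    root = np.sqrt(y**2 * s2**2 + 4.0 * (1.0 - x)**2 * np.cos(theta)**2)
    num = 2.0 * (1.0 - x) * np.cos(theta)
    return (num / (-y * s2 + root), num / (-y * s2 - root))
```

In the unmagnetised limit ($Y \to 0$) both solutions of eq. (1) reduce to the familiar $n^2 = 1 - X$, and the two polarisation coefficients always carry opposite signs, reflecting the opposite senses of circular polarisation of the two modes.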
Solving the integral over $\phi$ analytically and swapping the resultant integral over $\gamma$ with the summation over $s$, following the approach of Holman (2003) applied to equations (5) and (6), yields

$$j_\pm(\nu,\theta) = \frac{e^3 B n_e}{m_e c^2} \sum_{s=s_{\min}}^{s_{\max}} \int_{\gamma_{\min}}^{\gamma_{\max}} G_\pm(\nu,\theta,\gamma,s)\, d\gamma, \tag{10}$$

$$\kappa_\pm(\nu,\theta) = \frac{4\pi e^2 n_e}{B} \sum_{s=s_{\min}}^{s_{\max}} \int_{\gamma_{\min}}^{\gamma_{\max}} H_\pm(\nu,\theta,\gamma,s)\, d\gamma, \tag{11}$$

where

$$s_{\min} := \left\lfloor \frac{\gamma\nu}{\nu_B}\left(1 - n_\pm\beta\cos\theta\right) \right\rfloor + 1, \qquad s_{\max} := \left\lfloor \frac{\gamma\nu}{\nu_B}\left(1 + n_\pm\beta\cos\theta\right) \right\rfloor,$$

$$\gamma_{\min,\max} := \frac{\dfrac{s\nu_B}{\nu} \pm n_\pm\cos\theta\left[\left(\dfrac{s\nu_B}{\nu}\right)^2 + n_\pm^2\cos^2\theta - 1\right]^{1/2}}{1 - n_\pm^2\cos^2\theta},$$

and $\lfloor \cdot \rfloor$ denotes the floor function. In practice, we do not always compute the summation up to $s_{\max}$, as $j_\pm$ and $\kappa_\pm$ may have converged for a smaller $s$; this is determined from the relative change of $j_\pm$ and $\kappa_\pm$ across successive iterations of the summation. Additionally, ordinary-mode emission may only be produced when $\nu > \nu_p$ and $n_-^2 > 0$. Similarly, extraordinary-mode emission requires

$$\nu > \left(\nu_p^2 + \tfrac{1}{4}\nu_B^2\right)^{1/2} + \tfrac{1}{2}\nu_B \tag{12}$$

and $n_+^2 > 0$. In our implementation the integrals over $\gamma$ are computed using a Gauss-Legendre method. Additionally, significant speed-ups were obtained by using a look-up table for the common range of Bessel functions encountered during the simulation. This is very efficient because the computation of the Bessel function is by far the dominant computational cost in the gyrosynchrotron coefficients, and similar regions of the Bessel function will be accessed sequentially, allowing the CPU’s caching and pre-fetching mechanisms to work efficiently. If a Bessel function outside the tabulated range is requested, and the approximation is valid, then the expression from Wild & Hill (1971) is used. The expressions (8) and (9) do not hold for $\theta = 90°$; whilst it is possible to derive additional expressions for this case, there is little reason to, as they would apply only to this singular viewing angle, and also for the reasons discussed in Sec. 2.3.

### 2.2 Thermal Emission

In addition to the gyrosynchrotron emission coefficients, we compute the thermal free-free emission and absorption coefficients, as well as the H$^-$ opacity, known to be relevant for submillimetric emission (Heinzel & Avrett, 2012; Simões et al., 2017).
For the free-free opacity we follow Dulk (1985), with the correction from Wedemeyer et al. (2016)

$$\kappa_{\nu,\mathrm{ff}} = 9.78\times10^{-3}\, \frac{n_e}{\nu^2 T^{3/2}} \sum_i Z_i^2 n_i \times \begin{cases} 17.9 + \ln T^{3/2} - \ln\nu, & (T < 2\times10^5\ \mathrm{K}) \\ 24.5 + \ln T - \ln\nu, & (T > 2\times10^5\ \mathrm{K}) \end{cases} \tag{13}$$

where $n_e$ is the thermal electron number density, $n_i$ is the number density of ion $i$ with charge $Z_i$, $\nu$ is the frequency, and $T$ is the temperature of the plasma. Herein we assume a uniform hydrogen plasma such that

$$\sum_i Z_i^2 n_i = n_p = n_{\mathrm{HII}} \tag{14}$$

where $n_p$ is the proton density and $n_{\mathrm{HII}}$ is the ionised hydrogen density. Now, by Dulk (1985) (and Kirchhoff’s law) we have the free-free emission coefficient

$$j_{\nu,\mathrm{ff}} = \frac{2 k_B T \nu^2 \kappa_{\nu,\mathrm{ff}}}{c^2} \tag{15}$$

where $k_B$ is Boltzmann’s constant. We follow the treatment of Kurucz (1970) in computing the H$^-$ opacity

$$\kappa_{\nu,\mathrm{H}^-} = n_e n_{\mathrm{H}}\, \nu \left(A_1 + (A_2 - A_3/T)/\nu\right)\left(1 - e^{-h\nu/k_B T}\right) \tag{16}$$

where $n_{\mathrm{H}}$ is the neutral hydrogen density and $A_1$, $A_2$, and $A_3$ are constants given by Kurucz (1970). Finally, we have the thermal emission and absorption coefficients

$$j_{\nu,\mathrm{therm}} = j_{\nu,\mathrm{ff}} + \kappa_{\nu,\mathrm{H}^-} B_\nu(T), \qquad \kappa_{\nu,\mathrm{therm}} = \kappa_{\nu,\mathrm{ff}} + \kappa_{\nu,\mathrm{H}^-}, \tag{17}$$

where $B_\nu(T)$ is the Planck function. At solar gyrosynchrotron frequencies free-free opacity always dominates over H$^-$ emissivity, and therefore the effects of H$^-$ emissivity were ignored during the numerical simulations.

### 2.3 Radiative Transfer

To compute the emission maps generated by these processes we compute radiative transfer under the assumption of local thermodynamic equilibrium. The monochromatic observed intensity is then given by the radiative transfer equation

$$\frac{dI_\nu}{d\tau_\nu} = -I_\nu(\tau_\nu) + S_\nu(\tau_\nu), \tag{18}$$

where $\tau_\nu$ is the monochromatic optical length of the plasma at frequency $\nu$ along the photon’s path between emission and absorption, and $S_\nu = j_\nu/\kappa_\nu$ is the monochromatic source function. Using the ray-marching algorithm of Section 3.2 these emission and absorption parameters are combined, from back to front along the observer’s line of sight, so as to produce a brightness temperature map for each radio mode and the thermal emission ($T_{b,+}$, $T_{b,-}$, and $T_{b,\mathrm{therm}}$ respectively).
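The thermal coefficients of eqs (13) and (15) are simple enough to sketch directly. The following Python fragment (hypothetical function names, not Thyr's code) evaluates the Dulk (1985) free-free opacity for a hydrogen plasma and the corresponding Kirchhoff emissivity in the Rayleigh-Jeans limit:

```python
import numpy as np

def kappa_ff(nu, temp, n_e, n_i):
    """Thermal free-free absorption coefficient (cm^-1) after eq. (13),
    for a pure hydrogen plasma where sum_i Z_i^2 n_i = n_p = n_i."""
    pref = 9.78e-3 * n_e * n_i / (nu**2 * temp**1.5)
    low = 17.9 + np.log(temp**1.5) - np.log(nu)    # T < 2e5 K branch
    high = 24.5 + np.log(temp) - np.log(nu)        # T > 2e5 K branch
    return pref * np.where(temp < 2.0e5, low, high)

def j_ff(nu, temp, kappa):
    """Free-free emissivity from Kirchhoff's law in the Rayleigh-Jeans
    limit, eq. (15): j = 2 k_B T nu^2 kappa / c^2 (cgs units)."""
    k_b = 1.381e-16   # erg/K
    c = 2.998e10      # cm/s
    return 2.0 * k_b * temp * nu**2 * kappa / c**2
```

The $\nu^{-2}$ dependence (up to the logarithmic Gaunt-like factor) is what makes the chromosphere progressively more transparent at higher frequencies, the effect exploited in the high-resolution footpoint simulations later in the paper.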
The brightness temperature is computed for each pixel of each map as per the Rayleigh-Jeans approximation

$$T_b = \frac{c^2}{2 k_B \nu^2} I_\nu \tag{19}$$

where

$$I_\nu = \int_0^H j_\nu\, e^{-\tau_\nu(h)}\, dh, \tag{20}$$

$$\tau_\nu(h) = \int_0^h \kappa_\nu\, dh', \tag{21}$$

and the optical length travelled by a photon is related to the depth $h$ along the observer’s line of sight by (21), where $\kappa_\nu$ is the sum of the absorption coefficients from the different processes affecting the frequency $\nu$. In these integrals, $H$ is the photon emission point along the observer’s line of sight, and 0 is the observer’s location. A total emission map is then computed from $T_b = T_{b,+} + T_{b,-} + T_{b,\mathrm{therm}}$.

### 2.4 Circular Polarisation Degree

When computing the total emission it is sufficient to simply add the flux across the emitting modes; however, for investigating polarisation it is necessary to include propagation effects and the radiative transfer of all four Stokes parameters. Following Simões & Costa (2006) and Cohen (1960), from magnetoionic theory in solar conditions there are two regimes for the propagation of circular polarisation (Stokes V): the quasi-longitudinal and quasi-transversal approximations. Using the C7 semi-empirical atmosphere (Avrett & Loeser, 2008) and the dipole model discussed in Sec. 3.1 with a looptop magnetic field strength of 100 G, the quasi-longitudinal approximation holds for viewing angles $\theta$ outside a narrow range about $90°$ at the layer of unit optical depth from free-free absorption. For cells where $\theta$ falls within this range we simply adjust the viewing angle to the closer edge of the range to retain the validity of the quasi-longitudinal approximation. Unless a very highly anisotropic pitch-angle distribution is used that is peaked within this range, there are no significant differences in the intensities of the two modes due to the application of this approximation. From the optical depth calculation we find that 10 GHz radiation does not penetrate through the transition region, 45 GHz radiation just enters the chromosphere, and 200 GHz radiation reaches 400 km into the chromosphere.
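The back-to-front combination of emission and absorption described by eqs (18)-(21) reduces, for piecewise-homogeneous steps, to the closed-form slab solution applied repeatedly along the ray. A minimal Python sketch (hypothetical names; Thyr's actual implementation is in Lua/C++):

```python
import numpy as np

def ray_integrate(j, k, ds):
    """Back-to-front integration of the transfer equation along one ray,
    treating each step of length ds as a homogeneous slab:
    I' = I*exp(-k*ds) + (j/k)*(1 - exp(-k*ds)).
    j, k are sequences ordered from the far end towards the observer."""
    intensity = 0.0
    for jj, kk in zip(j, k):
        if kk > 0.0:
            att = np.exp(-kk * ds)
            intensity = intensity * att + (jj / kk) * (1.0 - att)
        else:
            intensity += jj * ds  # optically thin step
    return intensity

def brightness_temperature(i_nu, nu):
    """Rayleigh-Jeans brightness temperature, eq. (19), cgs units."""
    k_b = 1.381e-16   # erg/K
    c = 2.998e10      # cm/s
    return c**2 * i_nu / (2.0 * k_b * nu**2)
```

In the optically thick limit the accumulated intensity tends to the source function, so a thermal slab with Rayleigh-Jeans source function at temperature $T$ yields $T_b \to T$, a useful sanity check for any implementation.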
In regions deeper in the chromosphere, for viewing angles close to perpendicular, the quasi-longitudinal approximation may fail, and one should be wary of interpreting the polarisation results from this region. The calculation of the total intensity is, however, unaffected by the use of this approximation. Following the creation of emission maps we can compute the circular polarisation degree of the radiation. We follow the convention of Klein & Trottet (1984) and define, for the Stokes $V$ and $I$ in the frame of the wave,

$$\frac{V}{I} = \mathrm{sign}(\cos\theta)\, \frac{T_{b,+} - T_{b,-}}{T_{b,+} + T_{b,-} + T_{b,\mathrm{therm}}}. \tag{22}$$

In the reference frame of an observer the sense of circular polarisation is reversed, such that $(V/I)_{\mathrm{obs}} = -V/I$. In Thyr the magnetic viewing angle term used for calculating the polarisation degree is simply computed as an average along the ray, and so caution should be taken with the interpretation of polarised radiation from overlapping sources with opposing magnetic field orientation. The viewing angles discussed in the previous sections are all treated without this approximation.

## 3 The Thyr Model

### 3.1 Magnetic field geometry

We adopt the same analytic dipole model as Kuznetsov et al. (2011, see Appendix) to describe the magnetic field geometry. This describes both the magnetic field at each point in the volume and the region contained within the dipole. The analytic dipole model is described in terms of the magnetic field at the centre of the looptop, the radius of the flux tube at the looptop, the height, and the submerged depth of the dipole. The magnetic dipole element at the base of the loop can also be inclined, and the ray-marching method allows the loop to be visualised from any angle, which can be specified in terms of location on the Sun. A solar location is specified by four angles: solar latitude and longitude, tilt away from the local normal to the surface about an axis connecting the footpoints, and rotation about the local normal.
### 3.2 Ray-marching

The standard approach for computing the emission map of a three-dimensional volume is ray tracing; however, this becomes significantly more computationally demanding as the number of discrete volume elements (voxels, assumed quasi-homogeneous three-dimensional cubic volumes) increases, due to the number of ray-voxel intersection tests that need to be performed to determine whether a certain voxel interacts with a ray. This method can be optimised by using voxel-culling structures, such as octrees, which describe layers of nested voxels, allowing many of the tests against the most refined voxels to be avoided. These optimisations are quite complex to implement and, given the simplicity of this problem, unnecessary. Ray-marching instead assumes that the emission and absorption properties of the volume can be sampled continuously. Instead of computing the radiative transfer integral along a ray between voxel intersections whilst assuming homogeneous plasma parameters, when ray-marching the integral is computed without a grid, by moving a short step along the ray and sampling the local emission and absorption coefficients, whilst assuming homogeneity over this short step. If the steps taken along the ray are sufficiently short, this method tends towards the true value of the integral. Under the basic description of ray-marching given above, it is assumed that the emission and absorption coefficients of the plasma are defined continuously and can be sampled at any point. Due to the high cost of computing these coefficients for gyrosynchrotron radiation (Sec. 2.1), it is not feasible to recompute them at every step along the ray. The plasma emission and absorption coefficients for the two gyrosynchrotron modes and the thermal radiation are therefore computed on a discrete three-dimensional Cartesian grid and stored in a volume texture (a three-dimensional cuboidal array) before the ray-marching procedure.
The coefficients are also multisampled, i.e. computed for multiple points within each voxel and then averaged, to better capture the average plasma conditions than simply the conditions at the central point. There are multiple methods for determining when the ray-marching step size needs to be decreased or can be increased. When the absorption and emission coefficients are continuously defined, metrics based on the local gradient can be used. In Thyr we choose to perform the refinement manually. The bounds of the volume texture containing the plasma coefficients define an axis-aligned bounding box (AABB) for the object stored in the texture. Then, using the simple but highly optimised “slab” algorithm (Kay & Kajiya, 1986) to compute the intersection of rays with this AABB, we have the start and end points between which to ray-march. This “slab” algorithm has been further optimised by relying on strict IEEE754 floating-point behaviour. The manually specified regions of refinement are converted into AABBs contained within the initial AABB. The plasma coefficients are computed on a finer grid (with a user-specified refinement factor) within these sub-domains. If a ray intersects one of these higher-resolution regions (determined by the intersection of the ray with the sub-domain’s AABB), then the step size is decreased and the plasma coefficients are sampled from the finer grid when computing the radiative transfer integral. Thyr uses three volume textures to store the coefficients for the dipole volume. The primary texture represents the entirety of the volume at the (lower) coronal resolution, whilst the other two are heavily refined on the footpoints of the dipole, encompassing the photosphere, chromosphere, and the transition region, as the atmospheric parameters and magnetic field change much more rapidly in this region.
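For reference, the "slab" ray/AABB test mentioned above can be sketched in a few lines (Python sketch with hypothetical names; Thyr's optimised C++/Lua version differs in detail). Passing the reciprocal ray direction exploits IEEE754 semantics: division by a zero direction component yields ±infinity and the comparisons still behave correctly for axis-parallel rays, provided the ray origin does not lie exactly on a slab plane (where 0 × ∞ would produce NaN):

```python
def slab_intersect(origin, inv_dir, box_min, box_max):
    """Ray/AABB intersection via the slab method (Kay & Kajiya 1986).
    inv_dir is the component-wise reciprocal of the ray direction.
    Returns (t_near, t_far) along the ray, or None on a miss."""
    t_near, t_far = -float("inf"), float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1 = (lo - o) * inv            # entry/exit parameters for this slab
        t2 = (hi - o) * inv
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    if t_near > t_far or t_far < 0.0:  # slabs' intervals do not overlap
        return None
    return t_near, t_far
```

The same routine serves both the outer scene AABB (to find the marching interval) and the refined sub-domain AABBs (to decide where to shrink the step).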
Whilst a dipole model is chosen for our example, any volumetric function or data (such as the results of a magnetogram extrapolation) can be used to fill the parameters of the texture, and the ray-marching approach remains unchanged. The model is currently based around the concept of a single AABB containing the entire scene at low resolution and a number of refined regions contained strictly within this AABB. The information from the low-resolution AABB is then ignored in the locations where a refined region is present. These AABBs serve as the boundaries of a two-level three-dimensional Cartesian grid hierarchy, with a uniform grid within each level. Supporting multiple low-resolution AABBs would be a relatively trivial change, but the convention of a single low-resolution AABB allows any geometry to be traced by Thyr with no changes to the ray-marching code. It would equally be simple to extend the code to support a full multi-level grid hierarchy, with further refined regions within the refined regions, if extreme resolution were required in some locations. Fig. 1 shows (in two dimensions) the difference between ray-tracing and ray-marching. The cost of finding the emission and absorption parameters for a point on the ray is constant if these parameters are described by a simple expression or discretised onto a known Cartesian grid, as they can be computed or looked up wherever the blue point happens to be. When applying a traditional, simple ray-tracing technique without additional voxel-culling optimisations, the ray-voxel intersection test must be performed against every voxel in the domain to determine the locations of the green points; this computation scales linearly with the number of voxels. Therefore, using the ray-marching approach, it is computationally tractable to sample the volume with a step size significantly smaller than the voxel side length.
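The two-level marching scheme described above can be condensed into a short sketch (Python, hypothetical names and a caller-supplied `sample` callback; not Thyr's code). The step shrinks by the refinement factor whenever the current point falls inside a refined AABB, and each step is accumulated with the homogeneous-slab transfer solution:

```python
import numpy as np

def ray_march(sample, start, end, coarse_step, refined_boxes, refine_factor=6):
    """March from start to end accumulating intensity. sample(p) returns
    (j, k) at point p; refined_boxes is a list of (lo, hi) corner pairs
    inside which the step is divided by refine_factor."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    direction = end - start
    length = np.linalg.norm(direction)
    direction /= length
    intensity, t = 0.0, 0.0
    while t < length:
        p = start + t * direction
        inside = any(np.all(p >= lo) and np.all(p <= hi) for lo, hi in refined_boxes)
        step = coarse_step / refine_factor if inside else coarse_step
        step = min(step, length - t)                 # don't overshoot the exit
        j, k = sample(p)
        if k > 0.0:                                  # homogeneous-slab update
            att = np.exp(-k * step)
            intensity = intensity * att + (j / k) * (1.0 - att)
        else:                                        # optically thin step
            intensity += j * step
        t += step
    return intensity
```

For an optically thin uniform medium the result reduces to emissivity times path length, independent of how the step size varies along the ray, which makes a convenient regression test for the grid-transition logic.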
The integral then amounts to performing several elementary mathematical operations for each step, to compute the closed-form radiative transfer integral from a homogeneous slab, and looking up one value in a large array. Due to the cost of calculating the many ray-voxel intersections, for the cases used in Thyr ray-marching is typically 1.5 orders of magnitude faster than ray-tracing, even though the radiative transfer integral is computed at significantly more points, including a higher-resolution region (orange box in Fig. 1). An assumption implicit to the ray-marching approach is that it is acceptable not to sample voxels for which the space between the intersections is very small (e.g. a ray just cutting through close to a corner), as their effect is insignificant. In practice, with a sufficiently small step size (Thyr uses 0.1× the local grid side length for the plasma coefficients) and reasonably smoothly varying emission and absorption coefficients, this is not a problem, as the contribution from such a region is not significant. The multisampling of the coefficients helps improve the smoothness of the coefficients between voxels and ensures that the parameters chosen accurately reflect the (assumed homogeneous) plasma contained within each voxel.

## 4 Verification against Klein & Trottet (1984)

To verify Thyr’s correctness we use a problem presented in Klein & Trottet (1984). In this test case the gyrosynchrotron emission from a two-dimensional loop is simulated. The loop is observed from two different viewing angles. The specific intensity (sfu/sterad, where 1 sfu = 1 solar flux unit = 10⁴ Jy = 10⁻²² W m⁻² Hz⁻¹ = 10⁻¹⁹ erg s⁻¹ cm⁻² Hz⁻¹) is then computed for a telescope beam scanning across the loop. In Thyr the rays along the observer’s line of sight are assumed parallel, and do not diverge from a point like the telescope in Klein & Trottet (1984).
The scan coordinate is then the displacement in Thyr’s imaging plane, in units of arcminutes on the surface of the Sun viewed from Earth. Using Klein and Trottet’s parameters for the loop with a 3 G coronal magnetic field strength, at both viewing angles, we have reproduced the emission at 320 MHz in Fig. 2. The results are very similar in shape and intensity. The slight differences in intensity can be accounted for by modifying the looptop magnetic field strength by less than 5%. Klein & Trottet (1984) define the looptop magnetic field strength at the outer boundary of the looptop, whereas in Thyr it is defined at the centre. This value had to be manually tuned to obtain a magnetic field strength similar to that used by Klein & Trottet (1984) in the slice taken from the three-dimensional simulation performed in Thyr. The spatial offset of Thyr’s results in the rotated case is due to Thyr performing the rotation but also translating the dipole so as to keep it centred in the field of view.

## 5 Example 3D Simulations

To demonstrate Thyr’s capabilities we performed a set of flare simulations using a simple dipole model for the structure of the emitting volume and the magnetic field, a power-law non-thermal electron distribution, and the other atmospheric parameters specified by a semi-empirical model. From these simulations we obtain brightness maps, spectra, and polarisation maps. Complete spectral data is available for every pixel and region of these maps, thus allowing us to perform a localised spectroscopic analysis (Section 5.5). We perform these simulations both with and without resolving the chromosphere in detail, to show its role in the emission at higher frequencies. These simulations can serve as examples of how to set up and run the code, in addition to basic post-processing of results.

### 5.1 Semi-empirical atmosphere and parameters

In our example simulations we use the C7 quiet-Sun semi-empirical atmosphere of Avrett & Loeser (2008).
This atmosphere is extrapolated to coronal parameters by extrapolating linearly, in log-log space, the parameters from the top of the C7 model (at 47 Mm) until 150 Mm, where the atmosphere is then set constant. Our atmosphere is plotted in Fig. 3. A non-thermal electron distribution is embedded in the semi-empirical atmosphere, with a density defined as cm⁻³ in the corona, and cm⁻³ at the top of the chromosphere. This increase of the non-thermal density is caused by the decrease of the emitting volume due to the convergence of the dipole magnetic field. As can be seen in Fig. 3, we have made the approximation of treating this increase as a step, rather than scaling directly with the cross-sectional area of the dipole. This decision was made to ensure a simple example, using a dipole magnetic field, that produces chromospheric radio emission, and to provide an electron number flux of s⁻¹ entering the chromosphere, which is consistent with observations (e.g. Simões & Kontar, 2013). These electrons follow a power-law distribution in energy, with a minimum cut-off energy of keV, a maximum cut-off of MeV, and a spectral index of . We assume that these electrons have an isotropic pitch-angle distribution, but the code supports arbitrary pitch-angle distributions. As we descend into the chromosphere, the non-thermal electron distribution is attenuated by balancing collisional losses against the column density traversed by the electrons. We use the approximate relationship that the critical stopping column density is $N_s \approx 10^{17} E^2$, where $E$ is the initial electron energy in keV and $N_s$ is in cm⁻² (Tandberg-Hanssen & Emslie, 1988). The lower energy bound of the power law is then proportionately increased, as electrons below the stopping energy are cut off. Finally, for these models we use a dipole with a height of approximately cm () and a centre-to-centre footpoint separation of cm (). The radius at the looptop (as defined in the Kuznetsov et al. (2011) model) is cm. The looptop field is set to 100 G, yielding a strength of the order of 2000 G at the photosphere.
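The cut-off attenuation just described amounts to inverting the stopping relation for each depth. A minimal Python sketch (hypothetical function names; the $10^{17}$ coefficient is the commonly quoted approximate value for the collisional stopping column, stated as an assumption here rather than taken from Thyr's source):

```python
def stopping_energy_kev(column_density):
    """Minimum initial electron energy (keV) that penetrates a given
    column density (cm^-2), inverting the approximate stopping relation
    N_s ~ 1e17 * E^2 (cf. Tandberg-Hanssen & Emslie 1988). The 1e17
    coefficient is the commonly quoted approximation."""
    return (column_density / 1.0e17) ** 0.5

def attenuated_cutoff(e_min_kev, column_density):
    """Raise the low-energy cut-off of the power law with depth:
    electrons below the local stopping energy are removed."""
    return max(e_min_kev, stopping_energy_kev(column_density))
```

Deep in the chromosphere the cut-off therefore climbs well above the injected minimum energy, which is the mechanism that hardens the footpoint spectra discussed in Section 5.5.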
The dipole here has a latitude of 30°, a longitude of 70°, and a rotation about the local normal of . The simulations presented here are performed both with and without refined resolution in the chromosphere. In the low-resolution simulation the voxels have a side length of 1800 km. In the high-resolution simulation the voxels of the upper corona have a 450 km side length and the refined voxels in the lower atmosphere have a 75 km side length (refined six times). Thyr’s ray-marching algorithm automatically transitions between the low- and high-resolution regions with no ill effects due to the boundary. An additional high-resolution simulation was also computed with voxels of 100 km side length, and no significant spectral difference was found between the two, so it was concluded that the 75 km resolution was sufficient to capture the details of the problem.

### 5.2 Emission Maps

Fig. 4 shows the total brightness maps at 35 GHz for the same loop simulated at different resolutions. The emission from the high-resolution loop at 2 GHz is shown in Fig. 5. This shows the complex nature of gyrosynchrotron emission at frequencies at which the plasma is optically thick. The high-frequency emission maps (Fig. 4) have the most intense emission from the footpoints, associated with the strongest magnetic field values. In the low-frequency emission maps (Fig. 5) the stripes are caused by the harmonic structure of the gyrosynchrotron emission at a single frequency for a spatially varying field. The effects of insufficient resolution within the simulation can be seen by comparing the simulations in Fig. 4. The low-resolution simulation of Fig. 4a produces a lower brightness temperature than the high-resolution model of Fig. 4, due to the small region from which this high brightness originates.

### 5.3 Spectra

Integrating over the brightness temperature maps, the flux density spectra of these two simulations can be computed.
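This spatial integration is a straightforward sum over pixels, sketched below in Python (hypothetical function names, not Thyr's post-processing code) using the Rayleigh-Jeans conversion from brightness temperature to specific intensity:

```python
import numpy as np

def flux_density_sfu(tb_map, nu, pixel_solid_angle):
    """Spatially integrated flux density from a brightness temperature
    map: F = sum_ij I_ij * Omega_pixel, with I_ij = 2 k_B nu^2 T_b / c^2
    (cgs units). Returned in sfu (1 sfu = 1e-19 erg s^-1 cm^-2 Hz^-1)."""
    k_b = 1.381e-16   # erg/K
    c = 2.998e10      # cm/s
    i_map = 2.0 * k_b * nu**2 * np.asarray(tb_map) / c**2
    return np.sum(i_map) * pixel_solid_angle / 1.0e-19
```

Evaluating this per frequency, and per sub-region of the map, is all that the localised spectroscopic analysis of Section 5.5 requires.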
In practical terms, this calculation is simply the sum of the flux density of each pixel (column $i$ and row $j$), obtained from their specific intensity $I_{ij}$ over the pixel solid angle $\Omega_{\mathrm{pixel}}$:

$$F_\nu = \sum_{i=1}^{N_x} \sum_{j=1}^{N_y} I_{ij}\, \Omega_{\mathrm{pixel}}. \tag{23}$$

The specific intensities of the pixels are directly found from their brightness temperature values via the Rayleigh-Jeans law:

$$I_{ij} = \frac{2 k_B \nu^2}{c^2}\, T_{b,ij}. \tag{24}$$

The resulting spectra are shown in Fig. 6, in sfu. Both of these plots show the characteristic “inverted-v” shape of gyrosynchrotron emission, and the fact that the ordinary mode dominates at low frequencies and the extraordinary mode dominates at higher frequencies (Ramaty, 1969). The thermal emission is plotted, but is insignificant in the cases we are investigating here. The difference in the magnitude of the thermal emission is due to a region that is small and bright in the high-resolution case being spread across a much larger voxel in the low-resolution case. Although the spectrum of gyrosynchrotron radiation from a uniform source would present harmonics, they are not visible in the spatially-integrated spectra due to the overlap of harmonic structures from a spatially-varying field and atmospheric parameters, as has been previously discussed by Klein & Trottet (1984) and Simões & Costa (2006). The spectra for the low- and high-resolution cases behave similarly at low frequencies, but differences emerge at higher frequencies. In the high-resolution simulation there is a hardening of the spectrum of the ordinary and extraordinary modes of the gyrosynchrotron radiation at high frequencies. As these non-thermal modes dominate over the thermal emission, the hardening in the total spectrum must be non-thermal in origin, as we will discuss in Section 7.

### 5.4 Polarisation Degree

Fig. 7 shows a map of the polarisation degree for the high-resolution model at 17 GHz. The polarisation degree is given by the ratio of the Stokes parameters $V/I$. The expected opposite polarisation in each footpoint is present.
The asymmetry in peak polarisation degree between the two footpoints is purely an effect of viewing angle – the loop is fully symmetric. Klein & Trottet (1984) and Simões & Costa (2006) have previously shown the importance of the viewing angle on the observed polarisation, and our results are in agreement. With a three-dimensional system there is a variety of situations that can produce significant differences between the intensity of footpoints and the location of the change in polarisation direction. Polarisation data is an important component of radio observation, one that could be used to shed light on the magnetic field geometry (Simões & Costa, 2006), and an essential tool for diagnosing the electron energy distribution (Kuznetsov et al., 2011). The structure of the polarised radiation shown in Fig. 7 is quite simple, with a single transition between negative and positive polarisation. At lower frequencies, where a large portion of the volume is optically thick and the complex “gyrostripe” patterns (see Fig. 5) are visible, the polarisation patterns are also far more complex and follow these stripes. In the simulations presented here a simple dipole magnetic field model is used. As the polarisation degree is strongly related to the direction of the magnetic field, a more complex magnetic model can yield very different polarisation patterns (e.g. Gordovskyy et al., 2017). Thyr is capable of using an arbitrary magnetic field geometry and thus can be used to investigate cases with complex polarisation patterns, including fine structure if the resolution is increased in the region of interest.

### 5.5 Imaging Spectroscopy

In Fig. 5 we have marked a looptop and a footpoint region in teal and red, respectively. The central pixel is marked in teal for the looptop region, and multiple pixels are marked and numbered inside the footpoint region. The simulated spectral flux from the marked pixels and integrated over the regions is plotted in Fig. 8, where the colours in Fig.
8a indicate the total emission from the pixel of the same colour (not separated by emission mode). In the footpoint region no extraordinary-mode emission is seen below 2 GHz, as the extraordinary mode is unable to propagate due to the ratio of the plasma frequency to the gyrofrequency being too low (see (12) and Ramaty (1969)). It can be clearly seen that the peak emission occurs at a significantly lower frequency in the looptop than in the footpoint. Additionally, the peak flux emitted from the footpoint region is much larger than that from the looptop. The high-frequency emission from the looptop drops off as expected for a homogeneous gyrosynchrotron source. This is not so for the footpoint region. Fig. 8b clearly shows that the footpoint is the origin of this hardening, which is logical as this effect is only seen in simulations with a heavily refined lower atmosphere. By investigating the spectra of the marked pixels within the footpoint region, we can see that the peak emission frequency continues to increase significantly with depth in the atmosphere, and that pixels 3, 4, and 5 lie within the compact region of the atmosphere from which the high-frequency hardening originates. The results presented here were simulated with an isotropic electron pitch-angle distribution. As described in Section 5.1, the lower-energy electrons are prevented from reaching the lower atmosphere, as they are removed from the distribution above a critical column density. If a loss-cone style distribution of pitch angles were also applied, the low-energy end of the distribution could drop off faster in the atmosphere, further enhancing this high-frequency hardening in the footpoints (Simões & Costa, 2010; Kuznetsov et al., 2011). Arbitrary pitch-angle distributions are supported by the gyrosynchrotron code in Thyr and this effect could therefore be easily investigated.
## 6 Code Description Thyr requires a modern C++14-compliant compiler, Torch7 (built on LuaJIT), and Python 3 with the matplotlib package. The core calculation of the gyrosynchrotron components is performed in C++, whilst all setup and ray-marching are performed in Lua. Plotting is done through our Plyght tool, also available on GitHub, which allows any language with a socket API or foreign function interface to easily produce two-dimensional plots using matplotlib (Hunter, 2007). The code also supports exporting all data to the widely supported comma-separated values (CSV) format to allow for post-processing and plotting in any language. ## 7 Discussion The Thyr tool, its initial validation, and example simulations have now been presented. The high-resolution simulation presented here is computed at much higher resolution than is available with modern radio observatories. This fine structure remains nonetheless important due to the contribution of the radiation produced in these regions to the total beam power. This can be seen by looking at the image that an observatory such as Nobeyama would produce from the different models. These convolved maps are shown in Fig. 9. The maps shown in these figures appear very similar, but the peak brightness temperature is different, and the magnitude of this difference only increases at higher frequencies, as demonstrated by the spectra (e.g. Fig. 6). Using the spectral imaging techniques of Section 5.5 we have identified the location from which the hardening of the high-frequency emission originates. Points 3, 4 and 5 from Fig. 5 lie within a thin (300 km) layer in the upper chromosphere and transition region. This corresponds to a layer within the region of 1.8–2.15 Mm in Fig. 3. The field chosen for this simulation was a dipole, as this represents the simplest analytical case, and a realistic flaring active region can be expected to have stronger magnetic convergence. 
In this simulation the average field along the line of sight for each of these pixels is 1.3 kG, and they lie within a region of strong magnetic convergence. This produces the total hardening at high frequencies due to the superposition of the small-scale hardenings produced by emission from small regions with high magnetic field strength in the core of this thin slab. Given its characteristic spectral signature, there is no need to spatially resolve this thin layer. If the emission from the footpoints cannot be resolved separately, then a similar spectrum could also be obtained from a loop with different magnetic field strengths in the footpoints (e.g. with a significantly rotated magnetic moment). To verify whether the emission behaves in this way, increased spectral resolution at high frequencies is required. This can be achieved with today’s technology using simultaneous observations from the Radio Solar Telescope Network (RSTN, operating up to 15.4 GHz, Guidice, 1979), the Nobeyama Radio Polarimeters (NoRP, Nakajima et al., 1985), the Nobeyama Radio Heliograph (NoRH, Nakajima et al., 1994), POlarization Emission of Millimeter Activity at the Sun (POEMAS, operating at 45 and 90 GHz, Valio et al., 2013), ALMA (currently operating at 100 and 230 GHz, Wedemeyer et al., 2016), and the Submillimeter Solar Telescope (SST, operating at 212 and 405 GHz, Kaufmann et al., 2008). The combination of these instruments could provide valuable information on the shape of the spectrum at higher frequencies and hence an estimate of the structure of the magnetic field in the atmosphere. ## 8 Conclusions We have presented the three-dimensional solar flare radio emission simulation tool Thyr, its verification against Klein & Trottet (1984), and example simulations that demonstrate the importance of resolving the lower atmosphere at a much higher resolution than used in previous models. 
This tool is now available for use under the MIT license and can be downloaded at http://github.com/Goobley/Thyr2 (Osborne, 2018). Thyr has the ability to simulate user-specified regions with increased accuracy, and we use this to increase the resolution in the lower corona, chromosphere and photosphere. By performing simulations with a high-resolution lower atmosphere we have shown that a non-thermal hardening of the spectrum should be expected at higher frequencies from a dipole loop. Using a combination of RSTN, POEMAS, and ALMA, the existence of such a hardening could be investigated. Recent studies have also argued for the importance of thermal free-free emission at higher frequencies (Simões et al., 2017; Heinzel & Avrett, 2012). Whilst this emission is not important in the model we have selected here, Thyr could well be used to investigate the parameters for which it becomes significant. The C7 atmosphere was chosen here due to its ubiquity and high resolution – there is little reason to perform a high-resolution simulation with a low-resolution atmosphere! It is simple to investigate the influence of other atmospheres using Thyr, and the files present in the GitHub repository can serve as a base. For example, Thyr can accept snapshots of the flaring atmosphere generated by radiative hydrodynamic simulations, such as RADYN (Carlsson & Stein, 1995; Allred et al., 2015) and Flarix (Varady et al., 2010; Heinzel et al., 2017). Thyr can also serve as a skeleton for future local thermodynamic equilibrium radiative transfer codes, as it is simple to replace the geometry and/or use different emission and absorption coefficients. Three-dimensional modelling is especially useful in cases where the emission has a high angular dependence, as is the case with gyrosynchrotron emission here. We therefore hope that this code can be modified and be of use to the wider astronomical community. 
## Acknowledgements The authors would like to thank Lyndsay Fletcher for helpful comments and suggestions. C.M.J.O. is grateful for the financial support of the Royal Society of Edinburgh Cormack Undergraduate Vacation Research Scholarship (2016), and the Royal Astronomical Society Undergraduate Summer Bursary (2015) under which this project took shape. C.M.J.O. also acknowledges financial support from STFC Doctoral Training Grant ST/R504750/1. P.J.A.S. acknowledges support from the University of Glasgow’s Lord Kelvin Adam Smith Leadership Fellowship. This work builds upon the results obtained from projects funded by FAPESP grants 03/03406-6, 04/14248-5, 08/09339-2 and 2009/18386-7. We acknowledge the use of colour-blind safe and print-friendly colour tables by Paul Tol (https://personal.sron.nl/~pault/). The authors would also like to thank the reviewer for detailed comments and suggestions for improvement. ## References • Alissandrakis & Preka-Papadema (1984) Alissandrakis C. E., Preka-Papadema P., 1984, Astron. Astrophys., 139, 507 • Allred et al. (2015) Allred J. C., Kowalski A. F., Carlsson M., 2015, Astrophys. J., 809, 104 • Avrett & Loeser (2008) Avrett E. H., Loeser R., 2008, Astrophys. J. Suppl. Ser., 175, 229 • Bastian (1998) Bastian T. S., 1998, Annu. Rev. Astron. Astrophys., 36, 293 • Brajša et al. (2017) Brajša R., et al., 2017, Astron. Astrophys., 17 • Carlsson & Stein (1995) Carlsson M., Stein R. F., 1995, Astrophys. J., 440, L29 • Cohen (1960) Cohen M. H., 1960, Astrophys. J., 131, 664 • Costa et al. (2013) Costa J. E. R., Simões P. J. A., Pinto T. S. N., Melnikov V. F., 2013, Publ. Astron. Soc. Japan, 65, 1 • Cuambe et al. (2018) Cuambe V. A., Costa J. E., Simões P. J., 2018, Mon. Not. R. Astron. Soc., 477, 1512 • Dulk (1985) Dulk G. A., 1985, Annu. Rev. Astron. Astrophys., 23, 169 • Fleishman et al. 
(2015) Fleishman G., Loukitcheva M., Nita G., 2015, in Iono D., Tatematsu K., Wootten A., Testi L., eds, Astronomical Society of the Pacific Conference Series Vol. 499, Revolution in Astronomy with ALMA: The Third Year. p. 351 (arXiv:1506.08395) • Fleishman et al. (2016) Fleishman G. D., Pal’shin V. D., Meshalkina N., Lysenko A. L., Kashapova L. K., Altyntsev A. T., 2016, ApJ, 822, 71 • Fletcher & Simões (2013) Fletcher L., Simões P. J. A., 2013, NRSO Rep., p. 6 • Fontenla et al. (2009) Fontenla J. M., Curdt W., Haberreiter M., Harder J., Tian H., 2009, ApJ, 707, 482 • Gary et al. (2018) Gary D. E., et al., 2018, Astrophys. J. • Giménez de Castro et al. (2009) Giménez de Castro C. G., Trottet G., Silva-Valio A., Krucker S., Costa J. E. R., Kaufmann P., Correia E., Levato H., 2009, Astron. Astrophys., 507, 433 • Giménez de Castro et al. (2013) Giménez de Castro C. G., Cristiani G. D., Simões P. J., Mandrini C. H., Correia E., Kaufmann P., 2013, Sol. Phys., 284, 541 • Gordovskyy et al. (2017) Gordovskyy M., Browning P., Kontar E., 2017, arXiv Prepr., 116, 1 • Graham et al. (2013) Graham D. R., Hannah I. G., Fletcher L., Milligan R. O., 2013, Astrophys. J., 767, 83 • Guidice (1979) Guidice D. A., 1979, Bull. Am. Astron. Soc., 11, 311 • Heinzel & Avrett (2012) Heinzel P., Avrett E. H., 2012, Sol. Flare Magn. Fields Plasmas, pp 31–44 • Heinzel et al. (2017) Heinzel P., Kleint L., Kasparova J., Krucker S., 2017, Astrophys. J., 48 • Holman (2003) Holman G. D., 2003, Astrophys. J., 586, 606 • Hudson et al. (1994) Hudson H. S., Strong K. T., Dennis B. R., Zarro D., Inda M., Kosugi T., Sakao T., 1994, Astrophys. J., 422, L25 • Hunter (2007) Hunter J. D., 2007, Comput. Sci. Eng., 9, 90 • Iwai et al. (2017) Iwai K., Loukitcheva M., Shimojo M., Solanki S. K., White S. M., 2017, Astrophys. J. Lett., 841, L20 • Kaufmann et al. (2004) Kaufmann P., et al., 2004, Astrophys. J., 603, L121 • Kaufmann et al. 
(2008) Kaufmann P., et al., 2008, Soc. Photo-Optical Instrum. Eng. Conf. Ser., 7012, 70120L • Kay & Kajiya (1986) Kay T. L., Kajiya J. T., 1986, ACM SIGGRAPH Comput. Graph., 20, 269 • Klein & Trottet (1984) Klein K. L., Trottet G., 1984, Astron. Astrophys., 141, 67 • Krucker et al. (2013) Krucker S., et al., 2013, Astron. Astrophys. Rev., 21 • Kundu et al. (2001) Kundu M. R., Nindos A., White S. M., Grechnev V. V., 2001, Astrophys. J., 20, 880 • Kuroda et al. (2018) Kuroda N., Gary D. E., Wang H., Fleishman G. D., Nita G. M., Jing J., 2018, Astrophys. J., 852, 32 • Kurucz (1970) Kurucz R. L., 1970, SAO Spec. Rep. 309, p. 286 • Kuznetsov et al. (2011) Kuznetsov A. A., Nita G. M., Fleishman G. D., 2011, Astrophys. J., 87, 1 • Lee & Gary (2000) Lee J., Gary D. E., 2000, Astrophys. J., 543, 457 • Lim et al. (1994) Lim J., Gary D., Hurford G. J., Lemen J. R., 1994, Astrophys. J., 430, 425 • Lüthi et al. (2004) Lüthi T., Magun A., Miller M., 2004, Astron. Astrophys., 415, 1123 • MacGregor et al. (2018) MacGregor M. A., Weinberger A. J., Wilner D. J., Kowalski A. F., Cranmer S. R., 2018, Astrophys. J. Lett., 855, L2 • Mrozek & Tomczak (2004) Mrozek T., Tomczak M., 2004, Astron. Astrophys., 415, 377 • Nakajima et al. (1985) Nakajima H., et al., 1985, Publ. Astron. Soc. Japan, 37, 163 • Nakajima et al. (1994) Nakajima H., et al., 1994, IEEE Proc., 82, 705 • Nindos et al. (2000) Nindos A., White S. M., Kundu M. R., Gary D. E., 2000, Astrophys. J., 533, 1053 • Nita et al. (2015) Nita G. M., Fleishman G. D., Kuznetsov A. A., Kontar E. P., Gary D. E., 2015, Astrophys. J., 799 • Osborne (2018) Osborne C. M. J., 2018, Goobley/Thyr2, doi:10.5281/zenodo.1483653, https://doi.org/10.5281/zenodo.1483653 • Ramaty (1969) Ramaty R., 1969, Astrophys. J., 158 • Simões & Costa (2006) Simões P. J. A., Costa J. E. R., 2006, Astron. Astrophys., 453, 729 • Simões & Costa (2010) Simões P. J. A., Costa J. E. R., 2010, Sol. Phys., 266, 109 • Simões & Kontar (2013) Simões P. J. A., Kontar E. 
P., 2013, Astron. Astrophys., 551, 10 • Simões et al. (2013) Simões P. J., Fletcher L., Hudson H. S., Russell A. J., 2013, Astrophys. J., 777, 1 • Simões et al. (2015) Simões P. J. A., Hudson H. S., Fletcher L., 2015, Sol. Phys., 290, 3625 • Simões et al. (2017) Simões P. J. A., Kerr G. S., Fletcher L., Hudson H. S., Guillermo Giménez De Castro C., Penn M., 2017, Astron. Astrophys., 605 • Stähli et al. (1989) Stähli M., Gary D. E., Hurford G. J., 1989, Sol. Phys., 120, 351 • Stix (1962) Stix T. H., 1962, The Theory of Plasma Waves • Tandberg-Hanssen & Emslie (1988) Tandberg-Hanssen E., Emslie A. G., 1988, The Physics of Solar Flares • Trottet et al. (2012) Trottet G., Raulin J. P., Giménez De Castro G., Lüthi T., Caspi A., Mandrini C. H., Luoni M. L., Kaufmann P., 2012, Energy Storage Release through Sol. Act. Cycle Model. Meet Radio Obs., 9781461444, 33 • Trottet et al. (2015) Trottet G., et al., 2015, Sol. Phys., 290, 2809 • Trulsen & Fejer (1970) Trulsen J., Fejer J. A., 1970, J. Plasma Phys., 4, 825 • Tzatzakis et al. (2008) Tzatzakis V., Nindos A., Alissandrakis C. E., 2008, Sol. Phys., 253, 79 • Valio et al. (2013) Valio A., Kaufmann P., Giménez de Castro C. G., Raulin J. P., Fernandes L. O., Marun A., 2013, Sol. Phys., 283, 651 • Varady et al. (2010) Varady M., Kašparová J., Moravec Z., Heinzel P., Karlický M., 2010, IEEE Trans. Plasma Sci., 38, 2249 • Wang et al. (1994) Wang H., Gary D. E., Lim J., Schwartz R. A., 1994, Astrophys. J., 433, 379 • Wang et al. (1995) Wang H., Gary D. E., Zirin H., Schwartz R. A., Sakao T., Kosugi T., Shibata K., 1995, Astrophys. J., 453, 505 • Wang et al. (2015) Wang Z., Gary D. E., Fleishman G. D., White S. M., 2015, Astrophys. J., 805, 1 • Wedemeyer et al. (2016) Wedemeyer S., et al., 2016, Space Sci. Rev., 200 • White et al. (2017) White S. M., et al., 2017, Sol. Phys., 292 • Wild & Hill (1971) Wild J. P., Hill E. R., 1971, Aust. J. Phys., 24, 43
# SUM FUNCTION

I use the SUM function to sum numbers in the column above, and I sum many columns in different areas of the worksheet. I want to sum all of these sums together: E1 to E5 are summed in E6, and E10 to E20 are summed in E21. How can I sum E6 and E21 together?

# Re: SUM FUNCTION

Use the SUM function to sum numbers in a range. You can use a simple formula to sum numbers in a range (a group of cells), but the SUM function is easier to use when you’re working with more than a few numbers.

=SUM(E1:E5) goes in E6
=SUM(E10:E20) goes in E21
=SUM(E6,E21) goes in whichever cell you wish
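For comparison, here is the same subtotal-then-grand-total pattern written out in Python, with made-up values standing in for the worksheet cells:

```python
# Two blocks of numbers with their own subtotals, then a grand total
# over the subtotal cells -- mirroring the worksheet layout above.
# The values are illustrative.
e1_e5 = [1, 2, 3, 4, 5]        # cells E1:E5
e10_e20 = list(range(10, 21))  # cells E10:E20

e6 = sum(e1_e5)                # =SUM(E1:E5)
e21 = sum(e10_e20)             # =SUM(E10:E20)
grand_total = e6 + e21         # =SUM(E6,E21)
```

Note that whether Excel expects `=SUM(E6,E21)` or `=SUM(E6;E21)` depends on the list-separator setting of your regional format.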
typos on homework 1 In the first version of Problem 2(a), I had written $F[x] \otimes F[y] \cong F[x,y]$. The current formulation says $F[x] \otimes F[x] \cong F[x,y]$. These statements are equivalent (can you see why?), but I want you to prove the current formulation. Problem 2(c) should say: Prove that $M_{m \times m}(F) \otimes M_{n \times n}(F) \cong M_{mn \times mn}(F)$  as $F$-algebras. Problem 3 should say: “A nonzero element $x$ is grouplike if and only if $x \in G$.” In Problem 5(c), the algebra is called $\wedge(V)$. 1. Really Basic Undergrad Question, and I’ve been putting off puttering with it because there’s other stuff to be done: If $f$ is injective, and $f = g \circ h$, then obviously $f^{-1} = (g \circ h)^{-1}$. However, can we conclude that both $g$ and $h$ are invertible? In the first version of problem 2.a. we can state that $\mathbb{F}[x] \otimes \mathbb{F}[x] \cong \mathbb{F}[x] \otimes \mathbb{F}[y]$ because $\mathbb{F}[x] \cong \mathbb{F}[y]$ as $\mathbb{F}$-algebras, right?
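One way to see the equivalence mentioned in Problem 2(a) (a sketch, not part of the assignment) is that both statements come from the same map on generators:

```latex
% F-algebra isomorphism: send the first tensor factor to polynomials in x
% and the second tensor factor to polynomials in y.
\varphi \colon F[x] \otimes_F F[x] \xrightarrow{\ \sim\ } F[x,y],
\qquad p(x) \otimes q(x) \longmapsto p(x)\, q(y).
```

Since $F[x] \cong F[y]$ as $F$-algebras via $x \mapsto y$, the statements $F[x] \otimes F[x] \cong F[x,y]$ and $F[x] \otimes F[y] \cong F[x,y]$ are interchangeable.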
The Joint Athens-Atlanta Number Theory Seminar meets once a semester, usually on a Tuesday, with two talks back to back, at 4:00 and at 5:15. Participants then go to dinner together. ### Fall 2019 Tuesday September 24, 2019, at UGA (in Boyd 304) First talk at 4:00 by Jennifer Balakrishnan (Boston University) A tale of three curves We will describe variants of the Chabauty-Coleman method and quadratic Chabauty to determine rational points on curves. In so doing, we will highlight some recent examples where the techniques have been used: this includes a problem of Diophantus originally solved by Wetherell and the problem of the "cursed curve", the split Cartan modular curve of level 13. This is joint work with Netan Dogra, Steffen Mueller, Jan Tuitman, and Jan Vonk. Second talk at 5:15 by Dimitris Koukoulopoulos (University of Montreal) On the Duffin-Schaeffer conjecture Let $S$ be a sequence of integers. We wish to understand how well we can approximate a "typical" real number using reduced fractions whose denominator lies in $S$. To this end, we associate to each $q \in S$ an acceptable error $\delta_q > 0$. When is it true that almost all real numbers (in the Lebesgue sense) admit an infinite number of reduced rational approximations $a/q$, $q \in S$, within distance $\delta_q$? In 1941, Duffin and Schaeffer proposed a simple criterion to decide whether this is the case: they conjectured that the answer to the above question is affirmative precisely when the series $\sum_{q \in S} \phi(q)\delta_q$ diverges, where $\phi(q)$ denotes Euler's totient function. Otherwise, the set of "approximable" real numbers has null measure. In this talk, I will present recent joint work with James Maynard that settles the conjecture of Duffin and Schaeffer. ### Spring 2019 Tuesday April 23, 2019, at Georgia Tech (in Skiles 311, 3 floors above the regular seminar room) First talk at 4:00 by Ananth Shankar (MIT) Exceptional splitting of abelian surfaces over global function fields. 
Let $A$ denote a non-constant ordinary abelian surface over a global function field (of characteristic $p > 2$) with good reduction everywhere. Suppose that $A$ does not have real multiplication by any real quadratic field with discriminant a multiple of $p$. Then we prove that there are infinitely many places modulo which $A$ is isogenous to the product of two elliptic curves. If time permits, I will also talk about applications of our results to the $p$-adic monodromy of such abelian surfaces. This is joint work with Davesh Maulik and Yunqing Tang. Second talk at 5:15 by Jordan Ellenberg (University of Wisconsin) What is the tropical Ceresa class and what should it be? This is a highly preliminary talk about joint work with Daniel Corey and Wanlin Li. The Ceresa cycle is an algebraic cycle canonically attached to a curve C, which appears in an intriguing variety of contexts; its height can sometimes be interpreted as a special value, the vanishing of its cycle class is related to the Galois action on the nilpotent fundamental group, it vanishes on hyperelliptic curves, etc. In practice it is not easy to compute, and we do not in fact know an explicit non-hyperelliptic curve whose Ceresa class vanishes. We will discuss a definition of the Ceresa class for a tropical curve, explain how to compute it in certain simple cases, and describe progress towards understanding whether it is possible for the Ceresa class of a non-hyperelliptic tropical curve to vanish. (The answer is: "sort of".) The tropical Ceresa class sits at the interface of algebraic geometry, graph theory (because a tropical curve is more or less a metric graph), and topology, for we can also frame the tropical Ceresa class as an entity governed by the mapping class group, and in particular by the question of when a product of commuting Dehn twists can commute with a hyperelliptic involution in the quotient of the mapping class group by the Johnson kernel. 
### Fall 2018 Tuesday October 23, 2018, at Emory, in W301 in the Math and Science Center (the usual building) First talk at 4:00 by Bianca Viray (University of Washington) On the level of modular curves that give rise to sporadic j-invariants Merel's Uniform Boundedness Theorem states that the torsion on an elliptic curve over a number field $k$ can be bounded by a constant that depends only on the degree $[k:\mathbb{Q}]$. This theorem can be rephrased as saying that for any positive integer $d$, the infinite tower of modular curves $\{X_1(n)\}_n$ has only finitely many closed points of degree at most $d$. Work of Frey and Abramovich from around the same time combines to give an independent proof of a weaker result, that for any positive integer $d$, there are only finitely many positive integers $n$ such that $X_1(n)$ has infinitely many degree $d$ points. In this talk, we study the complementary part of Merel's theorem, that is, the points $x$ on $X_1(n)$ where there are only finitely many points of degree at most $\deg(x)$. We show that these so-called sporadic points map down to sporadic points on $X_1(d)$, where $d$ is a bounded divisor of $n$. This is joint work with A. Bourdon, Ö. Ejder, Y. Liu, and F. Odumodu. Second talk at 5:15 by Larry Rolen (Vanderbilt) Locally harmonic Maass forms and central L-values In this talk, we will discuss a relatively new modular-type object known as a locally harmonic Maass form. We will discuss recent joint work with Ehlen, Guerzhoy, and Kane with applications to the theory of $L$-functions. In particular, we find finite formulas for certain twisted central $L$-values of a family of elliptic curves in terms of finite sums over canonical binary quadratic forms. Applications to the congruent number problem will be given. 
### Spring 2018 Tuesday February 20, 2018, at UGA, in Boyd Room 304 First talk at 4:00 by David Harbater (University of Pennsylvania) Local-global principles for zero-cycles over semi-global fields Classical local-global principles are given over global fields. This talk will discuss such principles over semi-global fields, which are function fields of curves defined over a complete discretely valued field. Paralleling a result that Y. Liang proved over number fields, we prove a local-global principle for zero-cycles on varieties over semi-global fields. This builds on earlier work about local-global principles for rational points. (Joint work with J.-L. Colliot-Thélène, J. Hartmann, D. Krashen, R. Parimala, J. Suresh.) Second talk at 5:15 by Jacob Tsimerman (U. Toronto) Cohen-Lenstra heuristics in the Presence of Roots of Unity The class group is a natural abelian group one can associate to a number field, and it is natural to ask how it varies in families. Cohen and Lenstra famously proposed a model for families of quadratic fields based on random matrices of large rank, and this was later generalized by Cohen-Martinet to general number fields. However, their model was observed by Malle to have issues when the base field contains roots of unity. We explain that in this setting there are naturally defined additional invariants on the class group, and based on this we propose a refined model in the number field setting rooted in random matrix theory. Our conjecture is based on keeping track not only of the underlying group structure, but also certain natural pairings one can define in the presence of roots of unity. Specifically, if the base field contains roots of unity, we keep track of the class group $G$ together with a naturally defined homomorphism $G^*[n] \to G$ from the $n$-torsion of the Pontryagin dual of $G$ to $G$. Using methods of Ellenberg-Venkatesh-Westerland, we can prove some of our conjecture in the function field setting. 
### Fall 2017 Monday October 30, 2017, at Georgia Tech. First talk at 4:00 by Bjorn Poonen (MIT) Gonality and the strong uniform boundedness conjecture for periodic points The function field case of the strong uniform boundedness conjecture for torsion points on elliptic curves reduces to showing that classical modular curves have gonality tending to infinity. We prove an analogue for periodic points of polynomials under iteration by studying the geometry of analogous curves called dynatomic curves. This is joint work with John R. Doyle. Second talk at 5:15 by Spencer Bloch (U. Chicago) Periods, motivic Gamma functions, and Hodge structures Golyshev and Zagier found an interesting new source of periods associated to (possibly inhomogeneous) solutions generated by the Frobenius method for Picard-Fuchs equations in the neighborhood of singular points with maximum unipotent monodromy. I will explain how this works, and how one can associate "motivic Gamma functions" and generalized Beilinson-style variations of mixed Hodge structure to these solutions. This is joint work with M. Vlasenko. ### Spring 2017 Tuesday April 18, 2017, at Emory. First talk at 4:00 by Rachel Pries (Colorado State) Galois action on homology of Fermat curves We prove a result about the Galois module structure of the Fermat curve using commutative algebra, number theory, and algebraic topology. Specifically, we extend work of Anderson about the action of the absolute Galois group of a cyclotomic field on a relative homology group of the Fermat curve. By finding explicit formulae for this action, we determine the maps between several Galois cohomology groups which arise in connection with obstructions for rational points on the generalized Jacobian. Heisenberg extensions play a key role in the result. This is joint work with R. Davis, V. Stojanoska, and K. Wickelgren. Second talk at 5:15 by Gopal Prasad (U. 
Michigan) Weakly commensurable Zariski-dense subgroups of semi-simple groups and isospectral locally symmetric spaces I will discuss the notion of weak commensurability of Zariski-dense subgroups of semi-simple groups. This notion was introduced in my joint work with Andrei Rapinchuk (Publ. Math. IHES 109 (2009), 113-184), where we determined when two Zariski-dense S-arithmetic subgroups of absolutely almost simple algebraic groups over a field of characteristic zero can be weakly commensurable. These results enabled us to prove that in many situations isospectral locally symmetric spaces of simple real algebraic groups are necessarily commensurable. This settled Mark Kac's famous question "Can one hear the shape of a drum?" for these spaces. The arguments use algebraic and transcendental number theory. ### Fall 2016 Tuesday October 18, 2016, at UGA. First talk at 4:00 by Florian Pop (University of Pennsylvania) Local section conjectures and Artin-Schreier theorems After a short introduction to (Grothendieck's) section conjecture (SC), I will explain how the classical Artin-Schreier Theorem and its p-adic analog imply the birational local SC; further, I will mention briefly local-global aspects of the birational SC. Finally, I will give an effective "minimalistic" p-adic Artin-Schreier Theorem, which is similar in flavor to the classical Artin-Schreier Theorem. Second talk at 5:15 by Ben Bakker (UGA) Recovering elliptic curves from their p-torsion Given an elliptic curve $E$ over the rationals $\mathbb{Q}$, its $p$-torsion $E[p]$ gives a 2-dimensional representation of the Galois group $G_{\mathbb{Q}}$ over $\mathbb{F}_p$. The Frey-Mazur conjecture asserts that for $p > 17$, this representation is essentially a complete invariant: $E$ is determined up to isogeny by $E[p]$. In joint work with J. Tsimerman, we prove the analog of the Frey-Mazur conjecture over characteristic 0 function fields. 
The proof uses the hyperbolic repulsion of special subvarieties in a modular surface to show that families of elliptic curves with many isogenous fibers have large volume. We will also explain how these ideas relate to other uniformity conjectures about the size of monodromy representations. ### Spring 2016 Thursday April 14, 2016, at Georgia Tech. First talk at 4:00 by Melanie Matchett-Wood (University of Wisconsin) Nonabelian Cohen-Lenstra Heuristics and Function Field Theorems The Cohen-Lenstra Heuristics conjecturally give the distribution of class groups of imaginary quadratic fields. Since, by class field theory, the class group is the Galois group of the maximal unramified abelian extension, we can consider the Galois group of the maximal unramified extension as a non-abelian generalization of the class group. We will explain non-abelian analogs of the Cohen-Lenstra heuristics due to Boston, Bush, and Hajir and joint work with Boston proving cases of the non-abelian conjectures in the function field analog. Second talk at 5:15 by Zhiwei Yun (Stanford University) Intersection numbers and higher derivatives of L-functions for function fields In joint work with Wei Zhang, we prove a higher derivative analogue of the Waldspurger formula and the Gross-Zagier formula in the function field setting under the assumption that the relevant objects are everywhere unramified. Our formula relates the self-intersection number of certain cycles on the moduli of Shtukas for GL(2) to higher derivatives of automorphic L-functions for GL(2). ### Fall 2015 Tuesday November 17, 2015, at Emory. First talk at 4:00 by John Duncan (Emory) K3 Surfaces, Mock Modular Forms and the Conway Group In their famous “Monstrous Moonshine” paper of 1979, Conway-Norton also described an association of modular functions to the automorphism group of the Leech lattice (a.k.a. Conway’s group). 
In analogy with the monstrous case, there is a distinguished vertex operator superalgebra that realizes these functions explicitly. More recently, it has come to light that this Conway moonshine module may be used to compute equivariant enumerative invariants of K3 surfaces. Conjecturally, all such invariants can be computed in this way. The construction attaches explicitly computable mock modular forms to automorphisms of K3 surfaces. Second talk at 5:15 by Alexei Skorobogatov (Imperial College) Local-to-global principle for rational points on conic and quadric bundles over curves One expects the Brauer-Manin obstruction to control rational points on 1-parameter families of conics and quadrics over a number field when the base curve has genus 0. Results in this direction have recently been obtained as a consequence of progress in analytic number theory. On the other hand, it is easy to construct a family of 2-dimensional quadrics over a curve with just one rational point over Q, which is a counterexample to the Hasse principle not detected by the étale Brauer-Manin obstruction. Conic bundles with similar properties exist over real quadratic fields, though most certainly not over Q. ### Spring 2015 Thursday April 9, 2015, at UGA, Room 304 in Boyd Graduate Studies Building. 
First talk at 4:00 by Dick Gross (Harvard) Quadric hypersurfaces, defined by homogeneous equations of degree 2, are the simplest projective varieties other than linear subspaces. In this talk I will review the theory of quadratic forms over a general field, and discuss the smooth intersection of two quadric hypersurfaces in projective space. The Fano scheme of maximal linear subspaces contained in this intersection is either finite or is a principal homogeneous space for the Jacobian of a hyperelliptic curve. This gives an important tool for the arithmetic study of these curves. Second talk at 5:15 by Ted Chinburg (University of Pennsylvania) When is an error term not really an error term? The classical case of Iwasawa theory has to do with how quickly the p-parts of the ideal class groups of number fields grow in certain towers of number fields. I will discuss the connection of these growth rates to Chern classes, and how the "error terms" in various formulas have to do with higher Chern classes. I will then describe a result linking second Chern classes in Iwasawa theory over imaginary quadratic fields to pairs of p-adic L-functions. This is joint work with F. Bleher, R. Greenberg, M. Kakde, G. Pappas, R. Sharifi and M. Taylor. ### Fall 2014 Tuesday, November 4, 2014, at Georgia Tech. First talk at 4:00 by Arul Shankar (Harvard University) Geometry-of-numbers methods over number fields We discuss the necessary modifications required to apply Bhargava's geometry-of-numbers methods to representations over number fields. As an application, we derive upper bounds on the average rank of elliptic curves over any number field. This is joint work with Manjul Bhargava and Xiaoheng Jerry Wang. 
Second talk at 5:15 by Wei Zhang (Columbia University) Kolyvagin's conjecture on Heegner points We recall a conjecture of Kolyvagin on Heegner points for elliptic curves of arbitrary analytic rank, and present some recent results on this conjecture for elliptic curves satisfying some technical conditions. ### Spring 2014 Tuesday, February 25, 2014, at Emory. First talk at 4:00 by Paul Pollack (UGA) Solved and unsolved problems in elementary number theory This will be a survey of certain easy-to-understand problems in elementary number theory about which "not enough" is known. We will start with a discussion of the infinitude of primes, then discuss the ancient concept of perfect numbers (and related notions), and then branch off into other realms as the spirit of Paul Erdös leads us. Second talk at 5:15 by James Maynard (Université de Montréal) Bounded gaps between primes It is believed that there should be infinitely many pairs of primes which differ by 2; this is the famous twin prime conjecture. More generally, it is believed that for every positive integer $m$ there should be infinitely many sets of $m$ primes, with each set contained in an interval of size roughly $m\log{m}$. We will introduce a refinement of the `GPY sieve method' for studying these problems. This refinement will allow us to show (amongst other things) that $\liminf_n(p_{n+m}-p_n)<\infty$ for any integer $m$, and so there are infinitely many bounded length intervals containing $m$ primes. ### Fall 2013 Tuesday, November 5, 2013, at UGA, Room 303 in Boyd Graduate Studies Building. First talk at 4:00 by Joe Rabinoff (Georgia Tech) Lifting covers of metrized complexes to covers of curves Let K be a complete and algebraically closed non-Archimedean field and let X be a smooth K-curve.  Its Berkovich analytification X^an deformation retracts onto a metric graph Gamma, called a skeleton of X^an.  
The collection of different skeleta of X are in natural bijective correspondence with the collection of semistable models of X over the valuation ring R of K.  We prove that, given a finite morphism of curves f: Y -> X, there exists a skeleton Gamma_X of X^an whose inverse image is a skeleton Gamma_Y of Y^an.  This can be seen as a "skeletal" simultaneous semistable reduction theorem, which can in fact be used to give new, more general proofs of foundational results of Liu, Liu-Lorenzini, and Coleman on simultaneous semistable reductions.  We then consider the following problem: given X, a skeleton Gamma_X, and a finite harmonic morphism of metric graphs Gamma' -> Gamma_X, can we find a curve Y and a finite morphism f: Y -> X such that f^{-1}(Gamma_X) is a skeleton of Y^an and is isomorphic to Gamma'?  In general the answer is no: one must enrich the skeleton with the structure of a metrized complex of curves.  In this context the answer is yes, and moreover the map Gamma' -> Gamma_X can be used to calculate the finitely many isomorphism classes of Y -> X, as well as their automorphisms.  We give an application to component groups of Jacobians, answering a question of Ribet. Second talk at 5:15 by Kirsten Wickelgren (Georgia Tech) Splitting varieties for triple Massey products in Galois cohomology The Brauer-Severi variety a x^2 + b y^2 = z^2 has a rational point if and only if the cup product of cohomology classes associated to a and b vanish. The cup product is the order-2 Massey product. Higher Massey products give further structure to Galois cohomology, and more generally, they measure information carried in a differential graded algebra which can be lost on passing to the associated cohomology ring. For example, the cohomology of the Borromean rings is isomorphic to that of three unlinked circles, but non-trivial Massey products of elements of H^1 detect the more complicated structure of the Borromean rings. 
Analogues of this example exist in Galois cohomology due to work of Morishita, Vogel, and others. This talk will first introduce Massey products and some relationships with non-abelian cohomology. We will then show that b x^2 = (y_1^2 - ay_2^2 + c y_3^2 - ac y_4^2)^2 - c(2 y_1 y_3 - 2 a y_2 y_4)^2 is a splitting variety for the triple Massey product <a,b,c>, and that this variety satisfies the Hasse principle. The method could produce splitting varieties for higher order Massey products. It follows that all triple Massey products over global fields vanish when they are defined. More generally, one can show this vanishing over any field of characteristic different from 2; Jan Minac and Nguyen Duy Tan, and independently Suresh Venapally, found an explicit rational point on X(a,b,c). Minac and Tan have other nice results in this direction.  This is joint work with Michael Hopkins. ### Spring 2013 Tuesday, April 16, 2013, at Georgia Tech First talk at 4:00 by Dick Gross (Harvard) The arithmetic of hyperelliptic curves Hyperelliptic curves over Q have equations of the form y^2 = F(x), where F(x) is a polynomial with rational coefficients which has simple roots over the complex numbers. When the degree of F(x) is at least 5, the genus of the hyperelliptic curve is at least 2 and Faltings has proved that there are only finitely many rational solutions. In this talk, I will describe methods which Manjul Bhargava and I have developed to quantify this result, on average. Second talk at 5:15 by Jordan Ellenberg (Wisconsin) Arithmetic statistics over function fields What is the probability that a random integer is squarefree? Prime? How many number fields of degree d are there with discriminant at most X? What does the class group of a random quadratic field look like? These questions, and many more like them, are part of the very active subject of arithmetic statistics. 
Many aspects of the subject are well-understood, but many more remain the subject of conjectures, by Cohen-Lenstra, Malle, Bhargava, Batyrev-Manin, and others. In this talk, I explain what arithmetic statistics looks like when we start from the field Fq(x) of rational functions over a finite field instead of the field Q of rational numbers. The analogy between function fields and number fields has been a rich source of insights throughout the modern history of number theory. In this setting, the analogy reveals a surprising relationship between conjectures in number theory and conjectures in topology about stable cohomology of moduli spaces, especially spaces related to Artin's braid group. I will discuss some recent work in this area, in which new theorems about the topology of moduli spaces lead to proofs of arithmetic conjectures over function fields, and to new, topologically motivated questions about counting arithmetic objects. ### Fall 2012 Thursday, October 25, 2012, at Emory. First talk at 4:00 by Karl Rubin (UCI) Ranks of elliptic curves I will discuss some recent conjectures and results on the distribution of Mordell-Weil ranks and Selmer ranks of elliptic curves. After some general background, I will specialize to families of quadratic twists, and describe some recent results in detail. Second talk at 5:15 by Jayce Getz (Duke) An approach to nonsolvable base change for GL(2) Motivated by Langlands' beyond endoscopy idea, the speaker will present a conjectural trace identity that is essentially equivalent to base change and descent of automorphic representations of GL(2) along a nonsolvable extension of fields. ### Spring 2012 Tuesday, April 10, 2012, at UGA, Room 323 in Boyd Graduate Studies Building First talk at 4:00 by Max Lieblich (Univ. of Washington) Finiteness of K3 surfaces and the Tate conjecture Fix a finite field k. It is well known that there are only finitely many smooth projective curves of a given genus over k. 
It turns out that there are also only a finite number of abelian varieties of a given dimension over k. What about other classes of varieties? I will review the history of these results and describe joint work with Maulik and Snowden that links the finiteness of K3 surfaces over k to the Tate conjecture for K3 surfaces over k. The key is a link between certain lattices in the l-adic cohomology of K3 surfaces and derived categories of sheaves on certain algebraic stacks. I will not assume you know anything about any of this. Second talk at 5:15 by Frank Calegari (Northwestern Univ.) Even Galois Representations What Galois representations "come" from algebraic geometry? The Fontaine-Mazur conjecture gives a very precise conjectural answer to this question. A simplified version of this conjecture in the case of two dimensional representations says that "all nice representations come from modular forms". Yet, by construction, all representations coming from modular forms are "odd", that is, complex conjugation acts by a 2x2 matrix of determinant -1. What happened to all the even Galois representations? ### Fall 2011 Wednesday, November 2, 2011, at Georgia Tech in Skiles room 005 (ground floor). First talk at 4:00 by Jared Weinstein (IAS) Maximal varieties over finite fields This is joint work with Mitya Boyarchenko.  We construct a special hypersurface X over a finite field, which has the property of "maximality", meaning that it has the maximum number of rational points relative to its topology.  Our variety is derived from a certain unipotent algebraic group, in an analogous manner as Deligne-Lusztig varieties are derived from reductive algebraic groups. As a consequence, the cohomology of X can be shown to realize a piece of the local Langlands correspondence for certain wild Weil parameters of low conductor. Second talk at 5:15 by David Brown (Emory) Random Dieudonne modules and the Cohen-Lenstra conjectures. 
Knowledge of the distribution of class groups is elusive -- it is not even known if there are infinitely many number fields with trivial class group. Cohen and Lenstra noticed a strange pattern -- experimentally, the group $\mathbb{Z}/(9)$ appears more often than $\mathbb{Z}/(3) \times \mathbb{Z}/(3)$ as the 3-part of the class group of a real quadratic field $\mathbb{Q}(\sqrt{d})$ -- and refined this observation into concise conjectures on the manner in which class groups behave randomly. Their heuristic says roughly that $p$-parts of class groups behave like random finite abelian $p$-groups, rather than like random numbers; in particular, when counting one should weight by the size of the automorphism group, which explains why $\mathbb{Z}/(3) \times \mathbb{Z}/(3)$ appears much less often than $\mathbb{Z}/(9)$ (in addition to many other experimental observations). While proof of the Cohen-Lenstra conjectures remains inaccessible, the function field analogue -- e.g., the distribution of class groups of quadratic extensions of $\mathbb{F}_p(t)$ -- is more tractable. Friedman and Washington modeled the $\ell$-power part (with $\ell \neq p$) of such class groups as random matrices and derived heuristics which agree with experiment. Later, Achter refined these heuristics, and many cases have been proved (Achter, Ellenberg and Venkatesh). When $\ell = p$, the $\ell$-power torsion of abelian varieties, and thus the random matrix model, goes haywire. I will explain the correct linear algebraic model -- Dieudonné modules. Our main result is an analogue of the Cohen-Lenstra/Friedman-Washington heuristics -- a theorem about the distributions of class numbers of Dieudonné modules (and other invariants particular to $\ell = p$). Finally, I'll present experimental evidence which mostly agrees with our heuristics and explain the connection with rational points on varieties. ### Spring 2011 Tuesday, February 1, 2011, at Emory First talk at 4:00 by K. 
Soundararajan (Stanford) Moments of zeta and L-functions An important theme in number theory is to understand the values taken by the Riemann zeta-function and related L-functions. While much progress has been made, many of the basic questions remain unanswered. I will discuss what is known about this question, explaining in particular the work of Selberg, random matrix theory and the moment conjectures of Keating and Snaith, and recent progress towards estimating the moments of zeta and L-functions. Second talk at 5:15 by Matthew Baker (Georgia Institute of Technology) Complex dynamics and adelic potential theory I will discuss the following theorem: for any fixed complex numbers a and b, the set of complex numbers c for which both a and b have finite orbit under iteration of the map z --> z^2 + c is infinite if and only if a^2 = b^2. I will explain the motivation for this result and give an outline of the proof. The main arithmetic ingredient in the proof is an adelic equidistribution theorem for preperiodic points over product formula fields, with non-archimedean Berkovich spaces playing an essential role. This is joint work with Laura DeMarco, relying on earlier joint work with Robert Rumely. ### Fall 2010 Tuesday, September 21, 2010, at UGA First talk at 4:00 by Ken Ono (Emory) Mock modular periods and L-values Recent works have shed light on the enigmatic mock theta functions of Ramanujan. These strange power series are now known to be pieces of special "harmonic" Maass forms. The speaker will discuss recent joint work in the subject with regard to special values of L-functions. This will include the study of values and derivatives of elliptic curve L-functions, as well as general critical values of modular L-functions. In addition, the speaker will derive new Eichler-Shimura isomorphisms, and will derive new relations among the "even" periods of modular L-functions. This is joint work with Jan Bruinier, Kathrin Bringmann, Zach Kent, and Pavel Guerzhoy. 
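The iteration z --> z^2 + c in Baker's abstract above is easy to experiment with numerically. A toy sketch follows (the escape radius and iteration cap are ad hoc choices for a quick experiment; note that boundedness of an orbit is only a necessary condition for the orbit being finite, i.e. preperiodic):

```python
def orbit_escapes(a, c, max_iter=200, radius=4.0):
    """Iterate z -> z^2 + c starting at a; report whether |z| ever exceeds radius."""
    z = complex(a)
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > radius:
            return True
    return False

# With c = 0 the orbit of 0.5 shrinks toward 0 (bounded), while the orbit of 2 blows up.
print(orbit_escapes(0.5, 0), orbit_escapes(2, 0))  # -> False True
```

Scanning over many values of c with two starting points a and b gives a quick empirical feel for how rarely both orbits stay bounded simultaneously when a^2 differs from b^2.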
Second talk at 5:15 by Armand Brumer (Fordham) Abelian Surfaces and Siegel Paramodular Forms This expository talk will survey recent progress on modularity of abelian surfaces. After a brief review of the history, I'll describe work of Cris Poor and David Yuen on the modular side and Ken Kramer and me on the arithmetic side. ### Spring 2010 Tuesday, April 13, 2010 First talk at 4:00 by Venapally Suresh (Emory) Degree three cohomology of function fields of surfaces Let k be a global field or a local field. Class field theory says that every central division algebra over k is cyclic. Let l be a prime not equal to the characteristic of k. If k contains a primitive l-th root of unity, then this leads to the fact that every element in H^2(k, µ_l) is a symbol. A natural question is a higher dimensional analogue of this result: Let F be a function field in one variable over k which contains a primitive l-th root of unity. Is every element in H^3(F, µ_l) a symbol? In this talk we answer this question in the affirmative for k a p-adic field or a global field of positive characteristic. The main tool is a certain local-global principle for elements of H^3(F, µ_l) in terms of symbols in H^2(F, µ_l). We also show that this local-global principle is equivalent to the vanishing of certain unramified cohomology groups of 3-folds over finite fields. Second talk at 5:15 by Antoine Chambert-Loir (IAS and University of Rennes) Some applications of potential theory to number theoretical problems on analytic curves Slides available at http://perso.univ-rennes1.fr/antoine.chambert-loir/publications/pdf/atlanta2010.pdf ### Fall 2009 Tuesday, October 20, 2009 First talk at 4:00 by Doug Ulmer (GA Tech). Constructing elliptic curves of high rank over function fields There are now several constructions of elliptic curves of high rank over function fields, most involving high-tech things like L-functions, cohomology, and the Tate or BSD conjectures. 
I'll review some of this and then give a very down-to-earth, low-tech construction of elliptic curves of high rank over the rational function field Fp(t). Second talk at 5:15 by Jonathan Hanke (UGA). Using Mass formulas to Enumerate Definite Quadratic Forms of Class Number One This talk will describe some recent results using exact mass formulas to determine all definite quadratic forms of small class number in n>=3 variables, particularly those of class number one. The mass of a quadratic form connects the class number (i.e. number of classes in the genus) of a quadratic form with the volume of its adelic stabilizer, and is explicitly computable in terms of special values of zeta functions. Comparing this with known results about the sizes of automorphism groups, one can make precise statements about the growth of the class number, and in principle determine those quadratic forms of small class number. We will describe some known results about masses and class numbers (over number fields), then present some new computational work over the rational numbers, and perhaps over some totally real number fields.
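As a concrete instance of the mass bookkeeping in Hanke's abstract: the mass of a genus is the sum of 1/|Aut| over its classes. The E8 root lattice is alone in its genus, and its automorphism group, the Weyl group W(E8), has order equal to the product of the degrees 2, 8, 12, 14, 18, 20, 24, 30 of its fundamental invariants (these E8 facts are standard, not taken from the abstract itself). A quick exact-arithmetic check:

```python
from fractions import Fraction
from math import prod

def genus_mass(aut_orders):
    """Minkowski-Siegel mass of a genus: the sum of 1/|Aut| over its classes."""
    return sum(Fraction(1, n) for n in aut_orders)

# |W(E8)| is the product of the degrees of the fundamental invariants of E8.
w_e8 = prod([2, 8, 12, 14, 18, 20, 24, 30])

# E8 is the unique class in its genus, so the mass is just 1/|W(E8)|.
print(w_e8, genus_mass([w_e8]))  # -> 696729600 1/696729600
```

Determining class-number-one forms amounts to the converse direction: when a computed mass is small enough, comparing it with bounds on automorphism group sizes pins down the possible genera.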
# Lower and upper pricing of financial assets

• Mathematics Subject Classification: G10, G12, G13.

Table 1. Moment transforms of bilateral gamma parameter estimates

| Quantile | d | v | s | k |
|---|---|---|---|---|
| 1 | -0.0038 | 0.0070 | -0.9097 | 3.0281 |
| 5 | -0.0016 | 0.0084 | -0.6409 | 3.2262 |
| 10 | -0.0007 | 0.0093 | -0.4789 | 3.4830 |
| 25 | 0.0004 | 0.0115 | -0.1229 | 3.9540 |
| 50 | 0.0012 | 0.0150 | 0.0418 | 4.5340 |
| 75 | 0.0017 | 0.0209 | 0.1493 | 5.1621 |
| 90 | 0.0023 | 0.0305 | 0.2635 | 5.8350 |
| 95 | 0.0027 | 0.0408 | 0.3535 | 6.3046 |
| 99 | 0.0038 | 0.0707 | 0.5982 | 7.4617 |

Table 2. Lower and upper return quantiles

| Quantile | Lower | Upper |
|---|---|---|
| 1 | -0.0540 | -0.0479 |
| 5 | -0.0267 | -0.0227 |
| 10 | -0.0167 | -0.0146 |
| 25 | -0.0053 | -0.0056 |
| 50 | 0.0020 | 0.0010 |
| 75 | 0.0083 | 0.0070 |
| 90 | 0.0167 | 0.0151 |
| 95 | 0.0237 | 0.0223 |
| 99 | 0.0458 | 0.0480 |

Table 3. SPY states

| State | b_p | c_p | b_n | c_n |
|---|---|---|---|---|
| 1 | 0.0054 | 1.2937 | 0.0097 | 0.6195 |
| 2 | 0.0057 | 1.2603 | 0.0059 | 1.0112 |
| 3 | 0.0053 | 1.3110 | 0.0077 | 0.7423 |
| 4 | 0.0034 | 2.3575 | 0.0049 | 1.3701 |
| 5 | 0.0061 | 1.1068 | 0.0085 | 0.6450 |
| 6 | 0.0042 | 1.7842 | 0.0059 | 1.0625 |
| 7 | 0.0058 | 1.2971 | 0.0077 | 0.8604 |
| 8 | 0.0011 | 12.049 | 0.0028 | 4.4096 |

Table 4. Measure distortion parameters for the myopic case

| Year | b | c |
|---|---|---|
| 2017 | 8.1711 | 863.3857 |
| t-stat | (3.62) | (3.12) |
| 2018 | 1.6311 | 78.1643 |
| t-stat | (16.31) | (13.13) |
| 2019 | 1.8594 | 85.7347 |
| t-stat | (16.36) | (13.25) |

Table 5. Measure distortion parameters for the Markov modulated case

| Year | b | c |
|---|---|---|
| 2017 | 0.008962 | 0.475662 |
| t-stat | (30.04) | (12.79) |
| 2018 | 0.039580 | 0.500025 |
| t-stat | (109.82) | (47.43) |
| 2019 | 0.014905 | 0.469397 |
| t-stat | (48.38) | (20.64) |

Table 6. Maximal risk charges

| Year | Modulated | No Modulation |
|---|---|---|
| 2017 | 0.01884 | 0.00946 |
| 2018 | 0.07916 | 0.02087 |
| 2019 | 0.03175 | 0.02169 |
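The bilateral gamma parameters of Table 3 can be turned into a quick return simulation under the usual convention that a bilateral gamma variable is the difference of two independent gamma variables. A minimal sketch using the state-1 SPY parameters (reading b as the scale and c as the shape of each gamma component is my assumption here, not something this excerpt states):

```python
import random
import statistics

def bilateral_gamma_samples(b_p, c_p, b_n, c_n, n, seed=0):
    """Draw n bilateral gamma samples as a difference of two independent gammas:
    Gamma(shape=c_p, scale=b_p) - Gamma(shape=c_n, scale=b_n)."""
    rng = random.Random(seed)
    return [rng.gammavariate(c_p, b_p) - rng.gammavariate(c_n, b_n)
            for _ in range(n)]

# State-1 SPY parameters from Table 3.
samples = bilateral_gamma_samples(b_p=0.0054, c_p=1.2937,
                                  b_n=0.0097, c_n=0.6195, n=100_000)
mean = statistics.fmean(samples)
# The theoretical mean is c_p*b_p - c_n*b_n, roughly 0.001 per period.
print(round(mean, 4))
```

Sample quantiles of such simulated returns can then be compared against the lower and upper return quantiles reported in Table 2.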
## Saturday, November 28, 2020 ... //

### A childish topological TOE of mine and what made it unwise

When I achieved some real, important enough realizations about physics, like matrix string theory, I was happy but I slept well. Things suddenly fit together and after a few minutes, one may get pretty much certain that the nontrivial new statements are right. They pass a lot of consistency checks. One may try to deduce many consequences and calculate some other things. But once you know that it works, you may leave it for tomorrow, the day after tomorrow, or someone else. Some fun should be left for the other people, shouldn't it? It was the ultimately wrong ideas that have prepared several sleepless nights for me when I was a kid (or a teenager). And that may be how I learned that losing your sleep doesn't necessarily increase the depth of the theory.

Let's discuss the theory. Sometime in 1987 when I was 14 or so, I decided that I had just found a theory of everything (of all elementary particles). Be ready, it was a theory of a nearly Leo Vyukesque "strawberry TOE" caliber.

OK, I decided that there were just several independent elementary particles and everything was made out of them. Photons and gravitons (let alone Higgs bosons) weren't in my list but I needed electrons, protons (to create nuclei), and neutrinos. My elementary particles of matter were actually electrons, protons, and neutrinos. Neutrons were bound states created out of the three, $$n\approx p e^- \bar \nu$$. No attention was paid to heavier generations. I realized that there should be antiparticles for each; but their representation wasn't too clear. The protons were elementary; those who should have promoted quarks and QCD by the mid-to-late 1980s hadn't done a good enough job among the Czechoslovak kids. 
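To be fair, the composite $$n\approx p e^- \bar \nu$$ at least passes the additive quantum-number bookkeeping that real beta decay obeys. Here is a toy check (electric charge, baryon number and lepton number only; spin, which doesn't add linearly, is deliberately left out):

```python
# Additive quantum numbers: (electric charge, baryon number, lepton number).
particles = {
    "proton":       (+1, 1,  0),
    "electron":     (-1, 0, +1),
    "antineutrino": ( 0, 0, -1),
    "neutron":      ( 0, 1,  0),
}

def total(names):
    """Sum the additive quantum numbers of a list of particles."""
    vals = [particles[n] for n in names]
    return tuple(sum(v[i] for v in vals) for i in range(3))

# The putative bound state p + e- + antineutrino carries exactly the neutron's numbers.
assert total(["proton", "electron", "antineutrino"]) == particles["neutron"]
print("charge, baryon and lepton numbers all match")
```

That this bookkeeping works is, of course, just the statement that beta decay conserves these numbers; it is far from making the neutron an actual bound state of the three.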
But I was obsessed with topology change and wormholes, like those in general relativity (which I understood reasonably well; but I knew no real quantum mechanics at that time), and I decided that those were enough to explain everything. You should get some empathy for the "background of the time". I did understand Einstein's equations but I hated the addition of "matter" as a dull (or perhaps pointlike) stuff to these equations. The conversion of everything to pure geometry would have been so much better! A year later, I saw that Einstein was focusing on this paradigm, too. Geometry (some variation of a smooth 4D spacetime geometry) could be enough to describe everything in the Universe. Suddenly, a theory looked extremely attractive. There were just three particles of matter and all of them were wormholes of different types. How can you create a wormhole, a topologically nontrivial 3D space? Remove 2 solid 3D regions from the 3D space, $$\RR^3$$, and identify their boundaries (the two copies may be mirror images of each other if needed). So if you penetrate into the removed interior of one of the solid bodies, you appear outside the other solid body. Simple. With these assumptions, the only adjustable characteristic of the wormhole is the topology of the 3D body that you remove. It is pretty much equivalent to the topology of its boundary and such 2D surfaces are classified by their genus $$g$$. The simplest example is a sphere. The simplest wormhole is created by removing two balls from $$\RR^3$$, followed by the identification of the two spherical $$S^2$$ boundaries. The next wormhole uses two copies of the "solid torus" followed by the identification of their boundaries $$T^2$$. If an observer walks into the first solid torus by crossing the first $$T^2$$ boundary, he reappears outside the second solid torus, on the corresponding place. Finally, you may have a solid body with two holes whose surface is a $$g=2$$ surface. 
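For the record, the classification I was implicitly using is standard surface topology: closed orientable surfaces are labeled by the genus through the Euler characteristic (a textbook fact, not anything special to my theory):

```latex
\chi(\Sigma_g) = 2 - 2g:\qquad
\chi(S^2) = 2\ (g=0),\quad
\chi(T^2) = 0\ (g=1),\quad
\chi(\Sigma_2) = -2\ (g=2).
```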
For this $$g=2$$ topology, you may choose the precise shape roughly in this way so that the boundary as well as the 3D region inside it has a $$\ZZ_3$$ symmetry. It's nice that even though the genus is just two, the object may behave as an object with "three wings". I loved that observation because the number 3 seemed relevant: some crazy people were writing that there were 3 "quarks" inside a proton. I decided it was not even wrong and the real reason why some people claimed to have the experimental evidence for "3 quarks inside the proton" was that they were seeing this $$\ZZ_3$$-symmetric shape of the wormhole. Imagine how much redundant rubbish all the QCD people were trying to add to physics. In my world view, the likes of Gell-Mann, Zweig, let alone Gross and Wilczek had to be real losers who were dumb as doorknobs (and I only knew proper door handles up to 1997 when I touched my first doorknob in the U.S., a cultural shock). You can continue to higher genera but I didn't need to study it. Clearly, there would be some extra particles with higher genera waiting to be discovered. I could do well with the three simplest wormholes and those explained the matter we needed, including electrons, protons, and neutrons (recall that the latter were just some bound states of electrons, protons, and neutrinos). Electrons were the connected pairs of $$g=0$$ spheres; neutrinos were the tori; protons were the $$g=2$$ shapes with the $$\ZZ_3$$ symmetry. Aside from the "3-quark misinterpreted experimental fairy-tale", my theory also explained why neutrinos looked like they were spinning. The torus $$T^2$$ with $$g=1$$ may be spinning around an axis without changing its shape, right? So both sentences include the "spin" in some way so a consistency check was found. All matter was composed of pure geometry, namely these $$g=0,1,2$$ wormholes. Yes, I find it crazy how I could have been excited about this utter stupidity for hours and perhaps for one passionate sleepless night. 
Why? Because nothing really works about this theory. The "patterns explained or confirmed" by my theory are so vague and superficial that they should be counted as no confirmation at all. Even more importantly, there are many straightforward observations of matter that instantly show that the theory was wrong. First, there should be antimatter. If elementary particles such as the electron are "pure geometry", the positrons and other particles of antimatter must also be pure geometry! But this would seem to imply that they have to be identical. There is no natural "new geometry" that may be created out of my wormholes by a reflection. So the electron and positron would have to be perfectly identical. On top of that, they couldn't really annihilate with each other (or be pair-created). A spacetime with two wormholes – two pairs of the removed balls whose spherical surfaces are identified – is more topologically complicated than the empty space, $$\RR^3$$. I didn't have a proper explanation for the bosonic elementary particles. Or for the actual reasons why people thought that quarks existed, like the deep inelastic scattering. I obviously wouldn't have understood the phrase "deep inelastic scattering" at that time. I would have misunderstood many other things like that. Add the problems with the absence of the elementary particles with higher genera. Or topological variations of the spacetime that cannot be obtained by the removal of 2 copies and their identification. The list of problems with my theory would be as long as a textbook on particle physics because literally "almost everything" that particle physics has learned is incompatible with my theory! ;-) So the excitement of mine – and the sleepless night – was a result of very poor standards. I didn't need too much to become excited! 
And I didn't have much respect for the body of wisdom – either direct experimental data or theoretical ideas or principles that have been extracted or deduced from diverse experiments – because if I had had respect for them, I would have tried to learn what was known in much more detail. And these "details" should have been explained. They would be much better-defined details than the vague observation that "the number 3 is seen somewhere around a proton". In this perspective, the theory looks really stupid. But you should understand that it was one of my first attempts to uncover a "deeper structure inside elementary particles" which should have been embedded in "something like classical general relativity". The main reason why I wrote this silly story is that there may be many armchair physicists whose thinking is very similar to my reasoning in 1987 (I did abandon the theory during the following day or days). They offer some juicy, strawberry-like ideas that taste yummy to them and that must have the potential to explain a lot, these people believe. But in the end, the belief is based on virtually nothing – except for the creators' overgrown egos. Why? It's because a lot is known from experiment. Lots of the numbers have been measured. And the billions of different parts of experiments have been organized into phenomenological theories that are "pretty much proven experimentally as well" although they predict much more than the isolated experiments. The phenomenological theories may interpolate between the experiments with particular particles at particular energies; and they may extrapolate them to other energies and other collections of particles, too. 
In quantum field theory, what specifies the model – or the body of all results from allowed experiments – is the particle spectrum along with the cross sections (or decay rates) describing the probability of any transmutation of initial particles into final particles with certain initial and final momenta. These are the numbers – functions of momenta – that your theory should actually calculate, or at least constrain the form of all these functions. If your theory doesn't do this thing at all, it is describing approximately 0% of the relevant experimental data. And that was the case of my wormhole theory of elementary particles, too. On the contrary, if you can calculate the cross sections for any arrangements of particle species and their momenta, your theory calculates "everything" because all processes in the Universe may be reduced to the elementary processes of this kind. (Green's functions or S-matrices are not the only ways to parameterize "everything in physics", however.) The previous simpler theory, a theory with pointlike electrons, neutrinos, and others (quarks are better than hadrons), was really better than my wormhole theory. The addition of the wormholes didn't really increase the explanatory power of physics. I was adding junk but what I got from the theory was less than what you could get from a theory of point-like particles (at least assuming my particular realization of the wormholes and their dynamics). Note that strings are different. You may deduce quantum field theory from string theory at low energies; that also guarantees the UV finiteness of the loop diagrams; it allows you to genuinely unify different particle species into one object because all species are just vibration modes of the same string; it allows you to do many other things. But my wormhole theory didn't really do anything that string theory does. It contradicted almost everything that it should have agreed with. 
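Since the paragraph above pivots on "cross sections as functions of momenta", it may help to display the standard textbook relation behind that phrase (this equation is general knowledge, not something taken from the post itself): for a $$2\to 2$$ collision of massless particles, the differential cross section in the center-of-mass frame is obtained from the Lorentz-invariant amplitude $$\mathcal M$$ as

```latex
% Differential cross section for 2 -> 2 scattering of massless particles,
% center-of-mass frame; s = (total CM energy)^2, M = invariant amplitude.
\frac{d\sigma}{d\Omega} \;=\; \frac{\left|\mathcal{M}(s,\theta)\right|^{2}}{64\pi^{2}\, s}
```

A theory that determines $$\mathcal M$$ for every choice of particle species and momenta determines, in this sense, "everything".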
There was no overlap of my theory's predictions with those of quantum field theories (which I hadn't understood by that time). If you want to have a chance to move the search for a theory of everything forward, you simply need to pay attention to what is known – from experimenters and from theorists or at least phenomenologists (whose wisdom is just very cleverly processed meat from the experiments). You can't get anywhere if you ignore everything or almost everything. Your belief that you're smarter than everyone else and that's why you may ignore everything that is known is a lame justification for ignoring that wisdom. It only proves your overgrown ego and arrogance, not your intellectual superiority. Don't get me wrong. It's often right and extremely important to ignore many things, even most things. It's often very important to focus on the "right subset of insights" that are enough to deduce some new principles of physics – or some new structure inside matter. But the set of observations that you generously pay attention to simply cannot be too small. If it is too small, you are almost certainly naive and your theory is almost certainly childish. The claim that some structure inside the seemingly point-like elementary particles is physically justified is a huge claim. And as far as proper physics knows as of late 2020, the constructions derived from string theory offer the only "shapes" of elementary particles that are at least as justified as the point-like character of the elementary particles (and indeed, strings are more justified than points by now). Everything else that has been tried – efforts to replace all the point-like elementary particles by something else – may be demonstrated to contradict the experimental facts as soon as your theory is defined sufficiently accurately to be actually capable of predicting "something like the cross sections". 
Everyone who claims something else is really clueless about modern physics – he or she hasn't subjected his or her alternative theory to the sufficient tests that are mandatory in physics. He hasn't even started to seriously work on the subject, otherwise he would know that his theory doesn't work at all. You can't really ignore the overwhelming bulk of the known experimental and theoretical physics if you want to help to move the frontier of physics further. Proper physics can't ever become a shouting match between dudes with overgrown egos who ignore each other. Proper physics does depend on quite some attention paid to the experimental facts (often rather subtle patterns in these facts) and phenomenological laws or even deep theoretical principles derived from the experimental data. And once you accept this verdict and tame your ego correspondingly, you will be led to the insight that the picture with the point-like elementary particles (and the conventional quantum field theory) is damn accurate and useful and the only viable alternatives (those that are derived from one formulation of string/M-theory or another) actually end up being extremely similar to the point-like elementary particles in very many respects. There may be a way to describe all the matter as "some kind of wormholes", some configurations that are locally (either in our 3D space or in another space) equivalent to the empty space with nothing in it (not even strings). But you need to fix all the problems discussed above – you need to allow the topology change that is involved in annihilation; you need to interpret the higher-genus objects and make them harmless – and equally importantly, you need to figure out the details of the dynamics that actually allow you to calculate the quantitative predictions similar to the cross sections. In the end, a theory of everything must be a theory of... everything. 
If your theory ignores and fails to explain or predict (almost) everything, it's a theory of (almost) nothing instead!
# Taylor Series

In mathematics, a Taylor series is a representation of a function as an infinite sum of terms that are calculated from the values of the function's derivatives at a single point. Those derivative values are called the Taylor coefficients of $f$, and the resulting power series is called the Taylor series of the function $f$. Given a function $f$ that has all its higher-order derivatives, the series centered at $a$ has the terms $\frac{f^{(n)}(a)}{n!}(x-a)^n$; when $a = 0$ it is called a Maclaurin series. Apparently the subject started with a discussion in Child's Coffeehouse where Brook Taylor (1685–1731) got the idea for the now famous series.

A Taylor series approximates the chosen function: its partial sums illustrate the first steps in the process of approximating complicated functions with polynomials. Using this process we can approximate trigonometric, exponential, logarithmic, and other non-polynomial functions as closely as we like, for suitable values of $x$; Taylor series are used to convert such functions into infinite sums that are easier to analyze. For instance, to find the power series expansion of $\sin x$ with Taylor's formula, we have to compute the derivatives of $\sin(x)$, and the Taylor series of $\ln(x)$ is obtained the same way. The error of a truncated expansion is usually recorded in big-O notation, e.g. $O(x^2)$ or, in general, $O(x^C)$ for some constant $C$; the notation gets more complex when logarithmic functions are involved, which are denoted by $O(\log(x))$. Partial sums can even be combined with the Alternating Series Estimation Theorem to approximate a definite integral to within a desired accuracy.

A function that is locally determined by its Taylor series, that is, equal to it on part of its domain, is called (real) analytic. Convergence questions are often settled with the ratio test, whose quantity $L$ is, in words, the limit of the absolute ratios of consecutive terms. Computer algebra systems perform Taylor series expansion of symbolic expressions and functions, and online Taylor series calculators compute the expansion of a function at a point up to a given power. The sections below point out and attempt to illustrate some of the many applications of Taylor's series expansion.
## Taylor's theorem

Taylor's theorem expresses a function in the form of the sum of infinitely many terms. If a function $f(x)$ has continuous derivatives up to $(n+1)$-th order at $x = a$, then it can be expanded in the following way:

$f(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x-a)^n + R_n,$

where $R_n$ is a remainder term; the statement of Taylor's theorem with integral remainder makes the size of $R_n$ precise. When $a = 0$, the series is called a Maclaurin series. This is a remarkable result: if you know the value of a well-behaved function and the values of all of its derivatives at the single point $x = a$, then you know the function at all points. The simplest case is the tangent line: the tangent line to the graph of $y = f(x)$ at the point $x = a$ is the line going through $(a, f(a))$ that has slope $f'(a)$, and it is exactly the degree-one Taylor polynomial. (Related results were already obtained by Bernoulli in 1694.)

It helps to distinguish the related notions. A power series is the general algebraic object; a geometric series is a special type of power series whose coefficients are all equal to 1; and a Taylor series arises when a particular infinitely differentiable function is equated to a power series and the coefficients are computed from its derivatives. So far you may have met series whose terms were constants, for example the geometric series; here we consider series whose terms are functions. Taylor and Maclaurin series are like polynomials, except that there are infinitely many terms. A little examination using derivatives brings the following conclusion: if $f$ has a power series representation at $a$, that representation must be the Taylor series, so the Taylor series is unique. For a general power series, it is usually not possible to express the sum in closed form in terms of familiar functions.

Commonly used Taylor series include some well-known formulas for $e^x$, $\cos(x)$, and $\sin(x)$ around $x = 0$. Computer algebra systems compute such expansions directly; for example, Maple's `taylor(expression, x = a, n)` returns the Taylor series expansion of an expression about the point $x = a$ up to order $n$.
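The page repeatedly promises "commonly used Taylor series" such as the expansion of $e^x$ around $x = 0$, and elsewhere mentions building a Taylor series "with Python and for loops". As a minimal, self-contained sketch (the function name `exp_maclaurin` is mine, not from any source quoted above), the Maclaurin partial sums for $e^x$ can be accumulated with a plain for loop:

```python
import math

def exp_maclaurin(x, n_terms):
    """Partial sum of the Maclaurin series e**x = sum_n x**n / n!,
    built with a plain for loop (no math.exp)."""
    total = 0.0
    term = 1.0                    # n = 0 term: x**0 / 0! = 1
    for n in range(n_terms):
        total += term
        term *= x / (n + 1)       # turn x**n/n! into x**(n+1)/(n+1)!
    return total

# The more terms we keep, the better the approximation of e = e**1.
approx = exp_maclaurin(1.0, 15)   # close to math.e
```

Updating the running `term` instead of recomputing `x**n / math.factorial(n)` each pass avoids large intermediate factorials and keeps the loop O(1) per term.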
## Convergence and examples

In many cases of practical importance, Taylor's series converges to $f(x)$ on some interval with center at $a$, and within the series' interval of convergence the series represents the function; Taylor series can be used to represent any function, as long as it is an analytic function. Taylor series are limits of Taylor polynomials: as you increase the degree of the Taylor polynomial of a function, the approximation of the function by its Taylor polynomial becomes more and more accurate. One of the most important uses of infinite series is the potential for using an initial portion of the series for $f$ to approximate $f$; Taylor's series can be used for approximating a function of $x$ close to $x = a$ as a series in powers of $x$ or of $(x - a)$. A useful exercise is to sketch the linear and quadratic approximations of a function at a few chosen points.

Maclaurin series coefficients $a_k$ can be calculated using the formula $a_k = \frac{f^{(k)}(0)}{k!}$ (which comes from the definition of a Taylor series), where $f$ is the given function, for instance $\sin(x)$. Standard examples include Taylor series expansions of exponential and logarithmic functions and of their combinations with trigonometric, inverse trigonometric, hyperbolic, and inverse hyperbolic functions. Inside the interval of convergence a series may be manipulated term by term: deriving term by term the Taylor series of $f$ about a point gives the Taylor series of $f'$ about the same point, valid for any $x$ in the series' interval of convergence.

The same expansion works for one complex variable: under certain conditions, $f(z) = f(a) + \frac{f'(a)(z-a)}{1!} + \frac{f''(a)(z-a)^2}{2!} + \cdots$. In the Wolfram Language, the $n$-th coefficient of a Maclaurin series can be computed with `SeriesCoefficient[f, {x, 0, n}]`. The series is named after Brook Taylor, an English mathematician, and a classic programming exercise is to write code for sine using the Taylor formula $\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$. Behind all of this, Taylor's theorem states that any function $f(x)$ satisfying certain conditions can be expressed as a Taylor series; for instance, if every $f^{(n)}(0)$ ($n = 1, 2, 3, \ldots$) is finite and $|x| < 1$, then the term $\frac{f^{(n)}(0)}{n!}x^n$ becomes less and less significant in contrast to the terms for small $n$.
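One of the fragments above is a coding question about sine via the formula $\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$. The original question concerned C++; this is a sketch of the same formula in Python for consistency with the rest of the page (the name `sin_taylor` is mine), using a running term so no factorial is ever computed explicitly:

```python
import math

def sin_taylor(x, n_terms=10):
    """Approximate sin(x) with the series x - x**3/3! + x**5/5! - ..."""
    total = 0.0
    term = x                                  # first term: x
    for n in range(n_terms):
        total += term
        # next term: multiply by -x**2 / ((2n+2)(2n+3))
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total

value = sin_taylor(1.0)   # close to math.sin(1.0)
```

For small $|x|$ a handful of terms already matches `math.sin` to machine precision; for large $|x|$ one should first reduce the argument modulo $2\pi$, since the raw alternating sum loses accuracy to cancellation.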
A polynomial has a finite number of terms; a series has infinitely many terms (except possibly if all but finitely many of them are 0). The Taylor series for $e^x$ about $x = 0$ is $1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots$, that is, it has infinitely many terms. Maclaurin series are named after the Scottish mathematician Colin Maclaurin; in the West, the subject was formulated by the Scottish mathematician James Gregory and formally introduced by the English mathematician Brook Taylor in 1715.

Assume that we have a function $f$ for which we can easily compute its value $f(a)$ at some point $a$, along with its derivatives there; truncating the Taylor series then approximates $f$ nearby, and the formula for the remainder tells how good the approximation is. For example, the degree-3 Taylor polynomial of $\sqrt{1+x}$ about $0$, evaluated at $x = 1$, gives $1 + \frac{1}{2} - \frac{1}{8} + \frac{1}{16} = 1.4375$. Now how close is that to the actual square root of 2? Looking at the formula for the remainder answers exactly this question. When a series is re-centered or composed with another function, the convergence interval has to be adjusted accordingly. Taylor's formula can also be used to prove that a given series represents a function everywhere, e.g. that the standard series represents $\cosh x$ for all $x \in \mathbb{R}$.

Taylor series also exist in several variables: if a function $f : \mathbb{R}^2 \to \mathbb{R}$ is sufficiently smooth near some point $(\bar{x}, \bar{y})$, then it has an $m$-th order Taylor series expansion which converges to the function as $m \to \infty$. In the Wolfram Language, the Maclaurin series of a function up to order $n$ may be found using `Series[f, {x, 0, n}]`. Concrete examples in the physical sciences and various engineering fields illustrate the applications, and Taylor series with numerical derivatives give a method for the numerical solution of ODE initial value problems.
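A fragment above asks how close the Taylor approximation 1.4375 is to the actual square root of 2. That number is exactly the degree-3 Taylor polynomial of $\sqrt{1+x}$ about $0$ evaluated at $x = 1$, namely $1 + \frac12 - \frac18 + \frac1{16}$. The sketch below (the function name is mine) reproduces it and measures the true remainder:

```python
import math

def sqrt1px_taylor(x, degree):
    """Degree-`degree` Taylor polynomial of sqrt(1 + x) about 0.
    The coefficients are the generalized binomial numbers C(1/2, n)."""
    coeff = 1.0                        # C(1/2, 0)
    total = 0.0
    for n in range(degree + 1):
        total += coeff * x ** n
        coeff *= (0.5 - n) / (n + 1)   # C(1/2, n+1) from C(1/2, n)
    return total

p3 = sqrt1px_taylor(1.0, 3)            # 1 + 1/2 - 1/8 + 1/16 = 1.4375
remainder = abs(p3 - math.sqrt(2))     # the actual truncation error
```

The measured error is roughly 0.023, which is the quantity that the remainder formula bounds; note that $x = 1$ sits on the boundary of this series' interval of convergence, so convergence there is slow.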
## Applications

Taylor series are useful in the real world for evaluating non-polynomial functions, like rational functions, trig functions, or exponential functions, and, run in reverse, for using a known expansion to find the sum of a series. They also power numerical methods. A method of arbitrary degree, based on Taylor series for multi-variable functions, has been developed for solving a system of homogeneous equations $f(x) = 0$ in $\mathbb{R}^N$: the solution is approximated using a first-order Taylor series expansion, the linearized system is solved for corrections, and one repeats solving the system of linearized equations for corrections until the corrections become small. (In several variables, note that the Hessian matrix of a function can be obtained as the Jacobian matrix of its gradient vector.) In finance, expanding a bond's price in a Taylor series illustrates just where the duration concept fits in: with cash flows $CF_t$ and yield $i$, the duration is

$D(i) = \frac{\sum_t t \, CF_t/(1+i)^t}{\sum_t CF_t/(1+i)^t},$

and note that the denominator is the price of the cash flow.

Computing a series by hand follows a simple recipe. Step 1: compute the successive derivatives of the function. Step 2: evaluate the function and its derivatives at $x = a$. Step 3: fill in the right-hand side of the Taylor series expression. Step 4: write the result using a summation. As an exercise, find the Taylor series expansions for the function $f(x) = x^3 - 3x$ at $x = 0$, $x = 1$, and $x = 2$; for a non-polynomial function, plot $f$ and its Taylor polynomial $T_n$ on the same graph over a given interval for $n = 4$ and $n = 6$.

A word of caution: a function may not be equal to its Taylor series, even if the series converges at every point. Finally, Taylor series have a close cousin: the Taylor series extracts the "polynomial DNA" of a function while the Fourier series/transform extracts its "circular DNA"; both see functions as built from smaller parts (polynomials or exponential paths).

The phrase "Taylor series" also collides with unrelated uses of the name that often appear alongside it: Taylor Guitars is a leading manufacturer of acoustic and electric guitars, which can also be built to order via a custom guitar program, and in naval architecture the Taylor standard series refers to resistance charts based upon model tests of a series of ships derived by altering the proportions of a single parent form, used to study the effects of these alterations on resistance to the ship's motion and to predict the powering requirements for new ships.
Big O Notation gets more complex when logarithmic functions are involved which are denoted by O(log(x)). Both see functions as built from smaller parts (polynomials or exponential paths). Interact on desktop, mobile and cloud with the free Wolfram Player or other Wolfram Language products. And by knowing these basic rules and formulas, we can learn to use them in generating other functions as well as how to apply them to Taylor Series that are not centered at zero. Basically I'm ignoring the included math library in python and hard coding it myself. Function: sumcontract (expr) Combines all sums of an addition that have upper and lower bounds that differ by constants. You can construct the series on the right provided that f is infinitely differentiable on an interval containing c. A function that is equal to its Taylor series in an open interval or a disc in the complex plane) is known as an analytic function. If , the expansion is known as a Maclaurin series. My issue is that I'm fairly new to programming and not sure how to go about coding a series (Taylor series). We like old Brook and Colin, they made calculus class just a little bit easier—at least when it comes to series. In this post, we will review how to create a Taylor Series with Python and for loops. Further generalizations. Taylor Series for Functions of one Variable $f(x)=f(a)+f'(a)(x-a)+\frac{f''(a)(x-a)^2}{2!}+\cdots+\frac{f^{(n-1)}(a)(x-a)^{n-1}}{(n-1)!}+R_n$ where $R_n$, the. This function was plotted above to illustrate the fact that some elementary functions cannot be approximated by Taylor polynomials in neighborhoods of the center of expansion which are too large. Self-destructive, pigheaded, and over-fond of the bottle, Jack Taylor (Iain Glen, Game of Thrones, Downton Abbey) is a forty-something ex-cop trying to earn a living as a private detective in his native Galway. 
In this section we will discuss how to find the Taylor/Maclaurin series for a function; this will work for a much wider variety of functions than the method discussed in the previous section, at the expense of some often unpleasant work. A Taylor series is an expansion of a function into an infinite sum of terms, with increasing exponents of a variable, like x, x^2, x^3, and so on; in other words, a clever way to approximate any function as a polynomial with an infinite number of terms. A power series is an algebraic structure; a geometric series is a special type of power series whose coefficients are all equal to 1; and a Taylor series arises when a particular infinitely differentiable function is equated to a power series and the coefficients are determined from its derivatives. Deriving term-by-term the Taylor series of a function about a point gives the Taylor series of its derivative about the same point. Recall that the Maclaurin series for any polynomial is just the polynomial itself, so a polynomial's Taylor series about zero is identical to the polynomial, with only finitely many non-zero terms (even for a 5th-degree polynomial).

One of the most important uses of infinite series is the potential for using an initial portion of the series for f to approximate f. The main purpose of series is to write a given complicated quantity as an infinite sum of simple terms, for example the series for e^x, sin x and cos x. A remarkable result: if you know the value of a well-behaved function f and the values of all of its derivatives at the single point x = a, then you know f(x) at all points. Limit processes are the basis of calculus, and truncation errors are those that result from using an approximation in place of an exact mathematical procedure.

Note that cos(x) is an even function in the sense that cos(-x) = cos(x), and this is reflected in its power series expansion, which involves only even powers of x; here the Taylor series is for the mathematical cosine function, whose argument is in radians. Deriving the Maclaurin series for tan x is a very simple process: it is more of an exercise in differentiating using the chain rule to find the derivatives. How do you find the Taylor series for ln(x) about the value x = 1?
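The exercise posed earlier (expanding f(x) = x^3 - 3x about x = 0, 1 and 2) can be checked numerically: since f is a cubic, four coefficients capture it exactly. The helper names below are my own:

```python
def taylor_coeffs_cubic(a):
    """Coefficients of (x-a)**k, k = 0..3, for f(x) = x**3 - 3*x,
    using the exact derivatives f' = 3x**2 - 3, f'' = 6x, f''' = 6."""
    return [a**3 - 3*a,   # f(a)
            3*a**2 - 3,   # f'(a)
            6*a / 2,      # f''(a)/2!
            6 / 6]        # f'''(a)/3!

def taylor_eval(coeffs, a, x):
    """Evaluate sum of coeffs[k] * (x - a)**k."""
    return sum(c * (x - a)**k for k, c in enumerate(coeffs))

# About a = 0 the series is just the polynomial itself: -3*x + x**3.
print(taylor_coeffs_cubic(0))
```

Evaluating the expansions about a = 1 or a = 2 reproduces f(x) exactly at every x, consistent with the fact that a polynomial equals its own Taylor series.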
https://plainmath.net/1873/what-is-enterprise-data-modeling
# What is enterprise data modeling?

Willie

Step 1
A data model is basically a summarized, analytical model that organizes the elements of data and standardizes them in a meaningful way.

Enterprise data model
An EDM (enterprise data model) is a kind of data model that summarizes and displays a meaningful view of all data consumed in the respective organization. An EDM is generally used as an architectural framework for developing, designing and maintaining databases, and it was one of the first data models to provide a solution for designing, developing and maintaining a database. An EDM is especially suited to situations where data-integration demands are high, as in operational data stores and warehouses. It is commonly presented in graphical format. It also helps with database components such as entity relationships, XML schemas and data dictionaries. The application of an EDM is tied to the challenges of data stewardship and data governance within an organization.
https://www.researching.cn/articles/OJb0eba24022798624
High Power Laser Science and Engineering • Vol. 10, Issue 2, 02000e10 (2022)

Jie Guo1, Zichen Gao1,2, Di Sun1,2, Xiao Du1,2, Yongxi Gao1,2, and Xiaoyan Liang1,*

Author Affiliations
• 1 State Key Laboratory of High Field Laser Physics, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China
• 2 Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China

Citation: Jie Guo, Zichen Gao, Di Sun, Xiao Du, Yongxi Gao, Xiaoyan Liang. An efficient high-power femtosecond laser based on periodic-layered-Kerr media nonlinear compression and a Yb:YAG regenerative amplifier[J]. High Power Laser Science and Engineering, 2022, 10(2): 02000e10

Abstract
We demonstrate an efficient ultrafast source with 195 fs pulse duration, 54 W average power at 200 kHz repetition rate and near diffraction-limited beam quality. The compact setup incorporates a thin-disk Yb:YAG regenerative amplifier (RA) and a subsequent nonlinear pulse compression stage with periodic-layered Kerr media (PLKM), which is one of the multiple-thin-solid-plate schemes based on nonlinear resonator theory. By virtue of the formation of a quasi-stationary spatial soliton in PLKM, the near diffraction-limited beam quality of the RA remained almost undisturbed after post-compression. The nonlinear pulse compression module is simple and efficient with a transmission of 96%. To the best of our knowledge, for pulse energy over 200 μJ, this is the highest output power reported for the multiple-thin-solid-plate scheme. This source manifests an economical combination to mitigate the bandwidth limitations of Yb-based high-power chirped pulse amplifiers.

1 Introduction

The combination of high quantum and Stokes efficiency of Yb-based lasers with state-of-the-art fiber, Innoslab and thin-disk architectures facilitates efficient and power-scalable femtosecond lasers[1–5].
However, the Yb-doped gain matrices do not exhibit sufficiently large gain bandwidth to provide high gain for pulses shorter than 300 fs. Due to gain narrowing, workhorse Yb-doped crystals, such as Yb:YAG and Yb:tungstate, typically generate pulses in the range from 400 fs to 1 ps pulse duration after chirped pulse amplification. To generate even shorter pulses, other strategies have proved viable, such as the adoption of broader gain bandwidth disordered media[6,7], a combination of gain media with a slightly shifted gain spectrum[8–10] or spectral coherent synthesis[11,12]. Nonetheless, the disordered media display poor thermal conductivity while the other two schemes suffer from complexity. Nonlinear spectral broadening by self-phase modulation (SPM) during propagation in regenerative and multi-pass amplifiers is an impressive alternative, which, however, demands elaborate control[13–16]. In addition to the above techniques implemented during the construction of ultrafast lasers, another route to achieve shorter pulse durations is through the well-known temporal compression after compressor approach (CafCA) or post-compression technique[17–19]. The basic process incorporates nonlinear spectral broadening and chirp removal with dispersive elements. No laser system alteration is needed and a high efficiency can be guaranteed if implemented properly. Among the various techniques based on the optical Kerr effect or photoionization for spectral broadening, the gas-filled multi-pass cell spectral broadening (MPCSB) and the multiple-thin-solid-plate (MTSP) schemes are now widely adopted and have achieved impressive results[20–23]. The MPCSB method is characterized by almost unperturbed beam quality and spectral homogeneity across the beam profile, but suffers from pointing fluctuations and delay[24]. The MTSP technique avoids the catastrophic collapse caused by the self-focusing effect in a bulk medium with strategically arranged thin plates.
It is more compact, flexible and economical, applicable for peak power far beyond the medium’s self-focusing critical power and for pulse energies up to the millijoule level with robustness and reproducible performance[25]. Early MTSP configurations were usually organized empirically, in which considerable losses caused by conical emission usually arose[26]. The conical emission also results in temporal or spatial (or even both) quality degradation. In some cases requirements for specially designed chirped mirrors or pulse shapers were indispensable[20,26]. As a relatively systematical study, the concept of quasi-stationary spatial solitons generation in periodic-layered Kerr media (PLKM) stands out as a practical strategy[27,28]. As the term periodic indicates, the distance between neighboring plates is the same in this configuration. Specifically, a layer of solid thin plate with thickness of l and a layer of free space with a length of L were taken as a period. The repetitive propagation of an intense beam in PLKM was regarded as a resonator with intensity-dependent non-spherical mirrors. Thus, the Fresnel–Kirchhoff diffraction (FKD) integral was introduced in this theory to identify the self-consistent stationary modes. The PLKM arrangements improved the spatial quality and supported nonlinear light–matter interaction during a rather long distance even under tight focusing conditions[28]. The integration of a PLKM device and Yb-based high-power chirped pulse amplifiers (CPAs) will favor an ultrafast source with high efficiency and great beam quality. In this contribution, we present a compact and efficient ultrashort laser source, which comprised a Yb:YAG regenerative amplifier (RA) and a subsequent close-to-lossless PLKM-based nonlinear pulse compression stage. The nonlinear pulse compression stage featured a transmission of 96%, excellent beam quality and spectral homogeneity across the beam profile. Compared with the setup described in Ref. 
[26], there is no need to filter out the conical emission or apply custom-tailored chirped mirrors in our work. The absence of conical emission intrinsically ensured the high efficiency and excellent beam quality simultaneously. To the best of our knowledge, for pulse energy over 200 μJ, this is the highest output power reported for the MTSP scheme. This configuration successfully compensated for the gain bandwidth limitation of Yb:YAG RA. The final output pulse duration was 195 fs with average power of 54 W at a 200 kHz repetition rate, while the pulse duration directly from the grating-based compressor of the CPA system was 534 fs. These results underline the benefits of this combination. The demonstrated source is promising for further power scaling and compression to sub-50 fs to drive high-field physical processes and bright secondary radiation at high average power.

2 Experimental setup

A schematic of the experimental setup is illustrated in Figure 1. The Yb:YAG thin-disk RA was similar to that in our previous work[29]. Different from the system in Ref. [29], the Pockels cell was a double-BBO (beta-barium borate) type with a total length of 40 mm and a clear aperture of 3.6 mm × 3.6 mm (4 mm × 4 mm total area), and the pump spot size on the disk was 3 mm. Limited by the high-voltage pulse width of the Pockels cell, the roundtrip number of the RA was set as 15. Besides, the grating compressor is based on a single transmission grating and some folding mirrors. When seeded with 100 μJ pulse energy, an output energy of 330 μJ at 200 kHz was generated at the pump power of 150 W. The compressed pulse energy was then reduced to 280 μJ and the beam quality factor was characterized as M2 = 1.19 × 1.24 (Ophir BeamSquared), as shown in Figure 2. The output spectrum bandwidth was about 3.6 nm (full width at half maximum, FWHM), revealing a strong gain-narrowing phenomenon.
The pulse duration was measured to be 534 fs with pedestals assuming a Lorentz pulse shape (shown in Figure 4(b) of the following section), larger than the bandwidth-limited one (factor ~1.2). This is attributed to the nonlinear effects of the front end, which is also not bandwidth-limited with 325 fs pulse duration at 8 nm bandwidth. In contrast, the nonlinearity accumulated in the RA is negligible (about 0.11 rad).

Figure 1. (a) Schematic of the ultrafast source: frontend; mode matching optics and isolators; regenerative amplifier; grating compressor; PLKM nonlinear compression stage. L1, L2, lenses of the telescope for mode matching; FI, Faraday isolator; TFP, thin-film polarizer; M1–M11, highly reflective mirrors; HWP, half-wave plate; QWP, quarter-wave plate; PC, Pockels cell; HRM, horizontal roof mirror; VRM, vertical roof mirror; TG, transmission grating; CM1–CM4, chirped mirrors. (b) Detailed PLKM configuration.
Figure 2. Output beam quality and far-field beam profile after the grating compressor.
Figure 3. (a) Spectra measured at the output of the RA and after the PLKM. (b) Spectra measured after different numbers of sapphire thin plates.
Figure 4. Measured and fitted intensity autocorrelation traces of (a) the final output pulse and (b) the output of the grating compressor.

The PLKM setup for nonlinear spectral broadening consisted of six periods. Each period includes a layer of sapphire with a fixed nominal thickness of 1 mm and a subsequent layer of free space with a length of 40.8 mm. The PLKM device was designed as follows. Firstly, the nonlinear phase on each solid thin plate was set as 1 rad and the effective beam radius on each plate was decided. The value was chosen to limit the nonlinear phase per plate and fully exploit the plates (six pieces) available in the experiment.
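The 1 rad design target per plate can be cross-checked with a back-of-the-envelope estimate. The Kerr index of sapphire used below (~3×10⁻¹⁶ cm²/W) is an assumed textbook-order value, not a number taken from the paper:

```python
import math

# Intensity needed so that b = (2*pi/lam) * n2 * l * I0 reaches 1 rad per plate.
lam = 1030e-9   # central wavelength (m)
n2 = 3e-20      # assumed Kerr index of sapphire (m^2/W), ~3e-16 cm^2/W
l = 1e-3        # plate thickness (m)
b = 1.0         # target nonlinear phase (rad)

I0 = b * lam / (2 * math.pi * n2 * l)  # required intensity (W/m^2)
print(I0 / 1e16)                       # in TW/cm^2
```

With these assumptions the required intensity comes out around 0.5 TW/cm², the same order as the 0.84 TW/cm² peak intensity reported on the first plate (the effective intensity averaged over the beam profile is lower than the peak).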
Then, the FKD integral, including the nonlinear phase induced by SPM, was numerically solved by means of Fox–Li iteration, resulting in the determination of stationary modes[28,30]. When the normalized amplitude of the incident optical field is assumed as U1, the amplitude after propagating through a unit of the nonlinear resonator and right before the next period can be calculated according to Ref. [28]:
\begin{align*}{U}_2\left(\rho \right)&= -2\pi j{e}^{j\pi {\rho}^2}{\int}_0^{\infty }{U}_1\left({\rho}^{\prime}\right){e}^{jb{\left|{U}_1\left({\rho}^{\prime}\right)\right|}^2}\cdot {e}^{j\pi {\rho^{\prime}}^2}\\[3pt] &\qquad\cdot {J}_0\left(2\pi {\rho}^{\prime}\rho \right){\rho}^{\prime }\,\mathrm{d}{\rho}^{\prime},\end{align*}
where $\rho$ and ${\rho}^{\prime }$ are the radial coordinates rescaled by $\sqrt{\lambda L}$ and ${J}_0$ is the zeroth-order Bessel function. In this equation, b is the nonlinear phase given by $b=\frac{2\pi }{\lambda }{n}_2l{I}_0$, in which ${n}_2$ is the nonlinear refractive index and ${I}_0$ is the field intensity. After convergence, the Fresnel-number-like radius squared parameter $\omega^2/(\lambda L)$ was found to be 0.78 when 86.5% of the energy was contained, in which ω is the beam radius and λ is the central wavelength of 1030 nm. The sapphire plates were placed at the Brewster angle to suppress the reflection loss to less than 0.5%. The first sapphire plate was placed at the beam focus and the peak intensity on the surface of the first plate was 0.84 TW/cm2. A set of dispersive mirrors (supporting a bandwidth of 40 nm) with a total group delay dispersion (GDD) of −15,370 fs2 compensated for the residual spectral phase.

3 Nonlinear compression results and discussion

The transmitted average power behind the nonlinear compression stage was 54 W, corresponding to an efficiency of 96%. This high efficiency is attributed to the lossless nature of PLKM and the dispersive mirror compressor.
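The Fox–Li iteration described above can be illustrated with a deliberately simplified toy model: a 1-D Cartesian split-step loop (thin Kerr phase screen plus paraxial free-space step), not the radially symmetric FKD integral actually used in the paper. All grid choices and the Gaussian seed are arbitrary assumptions:

```python
import numpy as np

lam, L, b = 1030e-9, 40.8e-3, 1.0   # wavelength, period length, peak Kerr phase (rad)
N, width = 1024, 8e-3               # grid points and window size (m)
x = np.linspace(-width / 2, width / 2, N, endpoint=False)
fx = np.fft.fftfreq(N, d=width / N)                # spatial frequencies (1/m)
H = np.exp(-1j * np.pi * lam * L * fx**2)          # paraxial free-space transfer function

U = np.exp(-(x / 0.3e-3) ** 2).astype(complex)     # Gaussian seed, w0 = 0.3 mm
U /= np.linalg.norm(U)

for _ in range(100):                # one pass = one PLKM period
    I = np.abs(U) ** 2
    U = U * np.exp(1j * b * I / I.max())   # thin Kerr phase screen, peak phase b
    U = np.fft.ifft(np.fft.fft(U) * H)     # propagate the free-space gap L
    U /= np.linalg.norm(U)          # Fox-Li style renormalization each round trip
```

After many passes the renormalized field settles toward a self-consistent profile, which is the spirit of identifying the quasi-stationary mode; quantitative results require the radial (Hankel-transform) geometry and parameters of Ref. [28].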
Figure 3(a) shows the power spectra measured with an integrating sphere and a multimode fiber at the output of the RA and after the PLKM stage, respectively. Spectra measured after different numbers of sapphire thin plates are illustrated in Figure 3(b). The broadening of the input spectrum is slightly asymmetrical. In addition, the amplitude of spectral broadening at the short-wavelength part is a little larger than at the long-wavelength part, indicating the emergence of a weak self-steepening effect. The oscillations with large amplitude near the center wavelength were caused by the temporal pedestals of the input pulse[17], which stem from the nonlinear effects of the front end; in addition, the spectra after the plates were modulated. The influence of this modulation on the pulse stability was also examined: the fluctuation of pulse energy was 1% root mean square (RMS) and the variation of pulse duration was within 5%. This spectral modulation will be reduced with an optimized front end in the near future. The broadened spectrum with six plates spans from 1020 to 1040 nm (−20 dB), supporting a 140 fs transform-limited pulse. The autocorrelation trace characterized at the final output is shown in Figure 4(a), indicating a pulse duration of 195 fs assuming a Gaussian pulse shape; the trace is, moreover, quite clean and free of pedestals, unlike that of the input pulse (Figure 4(b)). It is presumed that the SPM of the input nonlinearly chirped pulse produced third-order dispersion (TOD) with the opposite sign to the initial TOD[31]. These results verified that the nonlinear compression stage realized contrast improvement and spectral broadening simultaneously. The transform-limited pulse duration was not achieved, presumably due to the residual high-order spectral dispersion, which remained uncompensated by the dispersive mirrors. The residual high-order spectral dispersion originated from both the front end and the self-steepening effect of the nonlinear compression stage[31,32].
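A quick sanity check on the headline numbers, ignoring pulse-shape factors (which the paper accounts for more carefully):

```python
t_in, t_out = 534e-15, 195e-15   # pulse durations before/after post-compression
transmission = 0.96              # PLKM stage throughput
avg_power, rep_rate = 54.0, 200e3

enhancement = (t_in / t_out) * transmission   # rough peak-power gain
energy_uJ = avg_power / rep_rate * 1e6        # pulse energy in microjoules

print(round(enhancement, 1))  # 2.6
print(round(energy_uJ))       # 270
```

The ratio matches the "2.6 times" peak-power enhancement quoted in the summary, and 54 W at 200 kHz corresponds to 270 μJ per pulse, consistent with the claim of "pulse energy over 200 μJ".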
The spatial quality of the nonlinear compression stage was also examined. The beam quality factor was measured to be M2 = 1.40 × 1.38, which is shown in Figure 5(a). In addition, the homogeneity of spectral broadening was also characterized by measuring the spectra across the transverse beam profile of the final compressed beam, at the location with a beam diameter of about 12 mm. A multimode fiber was used to scan across the profile and spectra were recorded with an optical spectrum analyzer (Figure 5(b)). It is generally expected that nonlinear spectral broadening by free-space propagation through a nonlinear medium will be inhomogeneous across the beam profile. The output beam quality and spectral homogeneity confirmed the validity of the basic scheme based on PLKM theory, that is, by restricting the nonlinear phase accumulation of each plate and regarding propagation through each period as roundtrips in a nonlinear resonator. The operation parameters are fundamentally determined beforehand, making it easy to implement in practice. In our experiment, only slight adjustment of distances between plates was needed. No conical emission was observed during the experiment; thus, great beam quality and high transmission efficiency were guaranteed. A small amount of remaining spatial chirp and uncompensated high-order dispersion were also shown in our results, which will be further optimized by an improved setup with modified front end and thinner plates in the nonlinear compression stage.

Figure 5. (a) The final output beam quality and far-field beam profile. (b) Spectra across the beam profile.

4 Summary

In summary, we have demonstrated a laser system delivering an average power of 54 W and pulses with 195 fs duration at a repetition rate of 200 kHz.
The nonlinear compression stage helped to achieve a pulse duration well below the conventional bandwidth-limited value, enhancing the peak power of the Yb:YAG high-power CPA system by 2.6 times, with great beam quality and spectral homogeneity across the beam profile. These results confirmed the potential of the PLKM-based technique for the efficient and economical compression of high-average-power laser amplifiers at high repetition rates. Such a source presents a favorable and robust tool for novel manufacturing mechanisms and time-resolved spectroscopy experiments[33–36].

References
[1] H. Stark, J. Buldt, M. Mueller, A. Klenke, J. Limpert. Opt. Lett., 46, 969 (2021).
[2] C. Roecker, A. Loescher, F. Bienert, P. Villeval, D. Lupinski, D. Bauer, A. Killi, T. Graf, M. A. Ahmed. Opt. Lett., 45, 5522 (2020).
[3] T. Nubbemeyer, M. Kaumanns, M. Ueffing, M. Gorjan, A. Alismail, H. Fattahi, J. Brons, O. Pronin, H. G. Barros, Z. Major, T. Metzger, D. Sutter, F. Krausz. Opt. Lett., 42, 1381 (2017).
[4] P. Russbueldt, D. Hoffmann, M. Hofer, J. Lohring, J. Luttmann, A. Meissner, J. Weitenberg, M. Traub, T. Sartorius, D. Esser, R. Wester, P. Loosen, R. Poprawe. IEEE J. Sel. Top. Quantum Electron., 21, 3100117 (2015).
[5] P. Russbueldt, T. Mans, J. Weitenberg, H. D. Hoffmann, R. Poprawe. Opt. Lett., 35, 4169 (2010).
[6] E. Caracciolo, A. Guandalini, F. Pirzio, M. Kemnitzer, F. Kienle, A. Agnesi, J. A. der Au. Proc. SPIE, 100821F (2017).
[7] E. Kaksis, G. Almasi, J. A. Fulop, A. Pugzlys, A. Baltuska, G. Andriukaitis. Opt. Express, 24, 28916 (2016).
[8] U. Buenting, H. Sayinc, D. Wandt, U. Morgner, D. Kracht. Opt. Express, 17, 8046 (2009).
[9] G. H. Kim, J. H. Yang, D. S. Lee, A. V. Kulik, E. G. Sall, S. A. Chizhov, U. Kang, V. E. Yashin. J. Opt. Technol., 80, 142 (2013).
[10] A. Buettner, U. Buenting, D. Wandt, J. Neumann, D. Kracht. Opt. Express, 18, 21973 (2010).
[11] N. B. Chichkov, U. Buenting, D. Wandt, U. Morgner, J. Neumann, D. Kracht. Opt. Express, 17, 24075 (2009).
[12] F. Guichard, M. Hanna, L. Lombard, Y. Zaouter, C. Honninger, F. Morin, F. Druon, E. Mottay, P. Georges. Opt. Lett., 38, 5430 (2013).
[13] M. Ueffing, R. Lange, T. Pleyer, V. Pervak, T. Metzger, D. Sutter, Z. Major, T. Nubbemeyer, F. Krausz. Opt. Lett., 41, 3840 (2016).
[14] B. Dannecker, J.-P. Negel, A. Loescher, P. Oldorf, S. Reichel, R. Peters, T. Graf, M. A. Ahmed. Opt. Commun., 429, 180 (2018).
[15] J. Pouysegur, M. Delaigue, C. Honninger, P. Georges, F. Druon, E. Mottay. Opt. Express, 22, 9414 (2014).
[16] J. Pouysegur, M. Delaigue, Y. Zaouter, C. Honninger, E. Mottay, A. Jaffres, P. Loiseau, B. Viana, P. Georges, F. Druon. Opt. Lett., 38, 5180 (2013).
[17] T. Nagy, P. Simon, L. Veisz. Adv. Phys. X, 6, 1845795 (2021).
[18] P. Balla, A. B. Wahid, I. Sytcevich, C. Guo, A.-L. Viotti, L. Silletti, A. Cartella, S. Alisauskas, H. Tavakol, U. Grosse-Wortmann, A. Schoenberg, M. Seidel, A. Trabattoni, B. Manschwetus, T. Lang, F. Calegari, A. Couairon, A. L'Huillier, C. L. Arnold, I. Hartl, C. M. Heyl. Opt. Lett., 45, 2572 (2020).
[19] E. A. Khazanov, S. Y. Mironov, G. Mourou. Phys. Uspekhi, 62, 1096 (2019).
[20] C. H. Lu, Y. J. Tsou, H. Y. Chen, B. H. Chen, Y. C. Cheng, S. D. Yang, M. C. Chen, C. C. Hsu, A. H. Kung. Optica, 1, 400 (2014).
[21] P. Russbueldt, J. Weitenberg, J. Schulte, R. Meyer, C. Meinhardt, H. D. Hoffmann, R. Poprawe. Opt. Lett., 44, 5222 (2019).
[22] J. Schulte, T. Sartorius, J. Weitenberg, A. Vernaleken, P. Russbueldt. Opt. Lett., 41, 4511 (2016).
[23] C. L. Tsai, F. Meyer, A. Omar, Y. Wang, A. Y. Liang, C. H. Lu, M. Hoffmann, S. D. Yang, C. J. Saraceno. Opt. Lett., 44, 4115 (2019).
[24] J. Weitenberg, A. Vernaleken, J. Schulte, A. Ozawa, T. Sartorius, V. Pervak, H.-D. Hoffmann, T. Udem, P. Russbueldt, T. W. Haensch. Opt. Express, 25, 20502 (2017).
[25] P. He, Y. Liu, K. Zhao, H. Teng, X. He, P. Huang, H. Huang, S. Zhong, Y. Jiang, S. Fang, X. Hou, Z. Wei. Opt. Lett., 42, 474 (2017).
[26] C. H. Lu, W. H. Wu, S. H. Kuo, J. Y. Guo, M. C. Chen, S. D. Yang, A. H. Kung. Opt. Express, 27, 15638 (2019).
[27] S. N. Vlasov, V. A. Petrishchev, V. I. Talanov. Appl. Opt., 9, 1486 (1970).
[28] S. Zhang, Z. Fu, B. Zhu, G. Fan, Y. Chen, S. Wang, Y. Liu, A. Baltuska, C. Jin, C. Tian, Z. Tao. Light Sci. Appl., 10, 53 (2021).
[29] D. Sun, J. Guo, W. Wang, X. Du, Y. Gao, Z. Gao, X. Liang. IEEE Photonics J., 13, 3900110 (2021).
[30] A. G. Fox, T. Li. IEEE J. Quantum Electron., 2, 774 (1966).
[31] A. Suda, T. Takeda. Appl. Sci. Basel, 2, 549 (2012).
[32] M. Trippenbach, Y. B. Band. Phys. Rev. A, 57, 4791 (1998).
[33] R. R. Gattass, E. Mazur. Nat. Photonics, 2, 219 (2008).
[34] D. Kiselev, L. Woeste, J. P. Wolf. Appl. Phys. B, 100, 515 (2010).
[35] R.-T. Liu, X.-P. Zhai, Z.-Y. Zhu, B. Sun, D.-W. Liu, B. Ma, Z.-Q. Zhang, C.-L. Sun, B.-L. Zhu, X.-D. Zhang, Q. Wang, H.-L. Zhang. J. Phys. Chem. Lett., 10, 6572 (2019).
[36] K. E. Knowles, M. D. Koch, J. L. Shelton. J. Mater. Chem. C, 6, 11853 (2018).
2022-12-03 14:45:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3383164405822754, "perplexity": 8640.267964451505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710933.89/warc/CC-MAIN-20221203143925-20221203173925-00241.warc.gz"}
http://electronics.stackexchange.com/tags/op-amp/hot
# Tag Info 11 In outline, it depends on the signal source, i.e. the type of microphone. There are some very low noise vacuum tubes. There are some low noise IC amplifiers, but not many. There are also discrete semiconductors, both bi-polar and JFET, and these are often the best choice for an input stage, possibly using an IC for the later gain and output stages. ... 10 If you ignore the input capacitor and look at the input impedance into your op-amp circuit from the left of the 200k resistor, the input impedance is the 200k resistor. This is because the op-amp is configured as a virtual earth amplifier. In other words there is 200k loading your capacitor and the 3 dB high-pass cut-off point is when Xc = 200k ohms (in ... 5 You are confusing 'inverting' with 'negative feedback'. Open loop simulate this circuit – Schematic created using CircuitLab Figure 1: op-amp with open-loop inverting mode. In Figure 1 the op-amp will amplify the difference between its inputs by the open loop gain. Let's say the open loop gain is 1,000,000 and we apply +1 mV at the '-' input. ... 5 Try using negative feedback and not positive feedback. The circuit you have shown is nonsense unless you are trying to make a comparator with rather a lot of hysteresis. When using a simulator it might theoretically "settle" on what seems to be an improbable scenario - if the output is -1V and the input is +1V then the voltage at the non-inverting input is ... 5 Odds are that Q1 is smoked. You forgot to add a current-limiting base resistor to limit the current. You should probably add a reverse diode on the base (after the resistor) to protect the transistor. The diode is recommended because you are feeding the base with an alternating voltage that swings above and below zero volts. When it swings negative the ... 5 Offset voltage is an unavoidable error in the construction of an opamp. Nobody "places" it anywhere. Manufacturers go to great lengths to reduce it, but of course, cannot make it zero.
It also makes no sense to ask whether the offset voltage appears on the positive or negative input. The offset is between the two inputs. An opamp ideally does: ... 4 The bottom distortion is trivial to explain - you appear to be running the op-amp between +8V and ground, you have not created a virtual ground at +4V, and so the signal is naturally clipping where the op-amp has no power to drive it. For a simple approach, add a second battery to provide a -8V rail, if using 8V batteries, and power the negative power pin ... 3 I think that you can get much better than what you are achieving. You say that if you raise the volume the sound becomes acceptable but is distorted... Let's take a step back. You are feeding your speaker with a square wave. That is pretty much the definition of a distorted signal. If you want to have a cleaner, less distorted signal on the output you ... 3 The drawing shows the LED power supply as 5 volts - that is too low to light four LEDs in series. The forward voltage across an LED depends on the colour and chemistry - common red LEDs are about 1.9 volts, and other colours are higher, up to 3.2 volts for blue and white. The supply voltage must be greater than the sum of the LED voltages, plus a volt or ... 3 Input bias can flow both into or out of an op-amp's inputs, and in certain op-amp topologies may do either. The consequence is that you need to design your circuit accounting for the current, i.e. if your source impedance is 100kΩ and you've got 1pA of input bias current, a non-inverting buffer will have a voltage error of (100kΩ * 1pA) = 100nV. As you can ... 3 You are basically right, except for the choice of 741 to use over the 0-5 V range. The 741 requires some headroom against both supplies. For 0-5 V operation, a CMOS "rail to rail" opamp is a much better choice. One of the Microchip MCPxxxx series is likely a good fit. The circuit you show (other than, again, poor choice of opamp) is valid and is common ...
3 why they tell to roll off the open loop gain Saying it this way is just a convenience but the practicality is that there will be negative feedback applied that both shapes the amplifier gain to what you want AND corrects for instability. First of all, familiarize yourself with the important features of the open-loop gain. This is for an OPA192: - ... 3 Your input voltage is an AC waveform which oscillates above and below the zero volt / ground line. It should peak at about +0.75 V and -0.75 V. Your op-amp has its negative terminal connected to ground. The lowest voltage it can possibly output is 0 V and most will only get within a volt or two of that. The fix Power your op-amp from a dual supply, +8 V ... 3 'I have checked both data sheets and haven't found anything different' Well you haven't checked very carefully, have you? They seem to be very similar. There is at least one difference: the bias current of the 'A' version is higher than the plain version. Given that the plain data sheet is dated 2012, and the A version is 2016, I suspect that what has ... 3 The coupling capacitor C1 of 10 nF is too small. You made a high-pass filter with -3 dB frequency of 1/(2pi*RC) = 80 Hz, where R is the 200 k resistor and C is the 10 nF capacitor. If you make C1 100 nF or more, the problem will be solved. 2 Survey says... simulate this circuit – Schematic created using CircuitLab 2 Since the jumpers have to be set manually, a simple solution might be to use a second jumper that indicates the selected gain to the microcontroller. So the user has to set two jumpers to identical positions. Or just use a rotary switch with two poles and three positions (e.g.: SS-10-23NPE). A completely different approach would be to implement a ... 2 The TL081 is not the best part to use if you're trying to gain up your signal. The voltage offset is in the mV range. Your 'new' gain stage has a gain of (1+(50k+49.2k)/20) = 4963.
If you gain the Vos=3mV of the TL081 from the bandpass filter by 4963 you get 14.8V, and that is beyond the rails. I would expect this circuit to rail out, but you may have a ... 2 The TL08x series of opamps have a typical input voltage offset error on the order of 3 mV (datasheet here). This offset is indistinguishable from a real input, and it gets amplified the same way. I assume you're talking about U1 in your diagram, which is configured for a gain of about 5000×. This means that the output error could be on the order of 15 ... 2 If the simulator is using a linear model to solve the problem (for example, using the AC simulation mode) it can get easily fooled. The circuit will be basically this: and the simulator will basically write down $$v_o = A_{vol} \left( v_o\frac{R}{R_1+R} - v_i\right)$$ ...and find a solution, independently of the fact that the system is unstable. ... 2 The purpose of this matching is to minimize DC offset due to bias current. If the op-amp has low bias current and the application is not sensitive to offset, you don't necessarily need to match the DC resistance. You can calculate the offset using the bias/leakage current figure from the op-amp datasheet. In this case, if you want to change the impedance ... 2 To start, eliminate V2, C1, R2 and R5. Then the junction of R3 and R4 will have a Thevenin equivalent voltage of 9 volts, and a resistance of 5k. Adding R5 provides a hysteresis of $$\Delta v = 18\times \frac{6\,\text{k}}{1.006\,\text{M}} = 0.107\text{ volts},$$ i.e. switching thresholds of $9 \pm 0.053$ volts; this will also provide a hysteresis at the R2/R3 junction of \Delta v = 18\times ... 2 Common mode voltage (DC and noise on whatever it turns out to be) is likely to be your biggest problem. A shunt is a very low impedance source and is easily filtered if you don't need fast response. You don't need a twisted pair in that case, though it won't hurt. If you filter the shunt voltage well at the ADC end and use a differential amplifier with a ...
2 As has been indicated, this is not a good circuit, but I'll try until I get tired. Let's start with function. Apparently you want to do the following: 1) Run each battery through a difference amplifier to produce a (more or less) 3.7 volt level. 2) compare each level to 3.3 volts with a comparator 3) if any cell in a 4-cell pack is low, turn on an LED ... 2 We're not here to give you the solution for homework, we're here to help you solve problems, or more importantly clarify confusing steps in the work put into solving a problem. So instead of answering a question of two problems, I'll show you an approach for these kinds of problems. Circuit Analysis of Op-Amp using KCL We're analyzing here for small signal ... 1 A non-inverting amplifier always has a voltage gain of at least one: the formula is Vout/Vin = (1 + Rf/Rz). But you can attenuate the signal before the op-amp sees it, and have a gain as little as one (for example Rf = 0, Rz = open). The op-amp will have a closed-loop output impedance much less than 2.5K, so your requirement will be satisfied. 1 Option 4: Use a voltage converter IC to generate a higher and a lower voltage from your 5V. For example I use this little circuit to power OpAmps and comparators from 5V without any problems: This generates a voltage of approximately 9.5V at the VA+ terminal and -4.7V at the VA- terminal from just a 5V supply. If you use this to power opamps and use ... 1 The cheapest solutions, assuming you only have one supply, are to redesign the circuit so it will work from a single supply or generate the negative supply. All monolithic op-amps that I know of will actually work on a single supply; very few actually have a ground pin, so they don't know the difference between +/-5V and a single 10V supply. They do know ... 1 The LTSpice Yahoo group is the place to go. You can download models of many common devices.
Also keep in mind that while models of real op-amps are pretty good, they are far from perfect, and may not simulate all the parasitics in a real circuit. If you are very concerned about parasitics, there's little you can do other than building the circuit and ... 1 The closed loop output resistance should be very low, less than 1 ohm. 75 ohms is more-or-less what you'd expect for AC output impedance. The output resistance is only low for currents that the op-amp can supply. It will go into limiting mode if the output current exceeds a couple tens of mA roughly. It also cannot drive close to the rails with a heavy ...
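Two numeric claims in the answers above are easy to sanity-check: the 80 Hz high-pass corner from the coupling-capacitor answer, and the TL081 offset-times-gain estimate (the answer quotes a gain of 4963; with the resistor values exactly as written it evaluates to 4961, which doesn't change the conclusion). A short Python sketch, using only values quoted above:

```python
import math

def highpass_cutoff(r_ohms, c_farads):
    """-3 dB corner of a first-order RC high-pass filter: f = 1/(2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# 200 k with the original 10 nF vs the suggested 100 nF coupling capacitor
print(round(highpass_cutoff(200e3, 10e-9), 1))   # ~79.6 Hz, the "80 Hz" quoted
print(round(highpass_cutoff(200e3, 100e-9), 1))  # ~8.0 Hz, safely below the audio band

# TL081 answer: a ~3 mV input offset through a gain of 1 + (50k + 49.2k)/20
gain = 1 + (50e3 + 49.2e3) / 20
print(round(3e-3 * gain, 1))  # ~14.9 V of output offset, beyond the supply rails
```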
2016-02-13 12:54:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5532104969024658, "perplexity": 1137.6057271478383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701166650.78/warc/CC-MAIN-20160205193926-00261-ip-10-236-182-209.ec2.internal.warc.gz"}
https://blog.mathematics21.org/2018/06/30/conjecture-proved-2/
I have proved the conjecture that $S^{\ast}(\mu)\circ S^{\ast}(\mu)=S^{\ast}(\mu)$ for every endoreloid $\mu$. The easy proof is currently available in this file.
2019-12-15 10:33:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.917900025844574, "perplexity": 310.42324269677505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541307813.73/warc/CC-MAIN-20191215094447-20191215122447-00322.warc.gz"}
https://physics.stackexchange.com/questions/394742/why-does-unitarity-require-the-higgs-to-exist/395457
# Why does unitarity require the Higgs to exist? A standard argument that the Higgs boson must exist is that without it, amplitudes in the Standard Model at the TeV scale violate unitarity. This is explained in section 21.2 of Peskin and Schroeder and also described in many sets of lecture notes. However, when I look at these arguments, I don't see any apparent unitarity violation at all. Usually, they show that tree-level amplitudes become large at the TeV scale, so that the tree-level amplitude alone would violate unitarity bounds. But that's no big deal; who cares if the first term in a Taylor series is large, if the whole series sums to something reasonable? Some sources instead say that unitarity is not being violated but "tree-level" unitarity is, but it's unclear to me why that's an important principle that must be preserved. The standard argument goes something like this: 1. The tree-level amplitude alone without a Higgs boson violates unitarity. 2. If you believe that your theory should be valid for all choices of your couplings (or at least for a neighborhood around zero) then different orders of perturbation theory should be independent and unitarity bounds should hold at every order. 3. Hence, the violation of the unitarity bound at tree level implies either a) you need to include extra degrees of freedom to restore unitarity (e.g. a Higgs boson) or b) perturbation theory is not valid for your theory, and you have strongly-coupled interactions. Note that this does not imply a Higgs boson, per se. If you take route (b) you can argue that the spontaneous symmetry breaking happens through strongly-coupled interactions in the UV, as the Technicolor family of models does. The apparent unitarity violation is just a sign that you are reaching scales where the strong interactions become important and you need to include them in your theory. • Thanks for the answer! I guess step (2) is exactly what I'm stuck on. Can you spell it out in extreme detail?
– knzhou Mar 23 '18 at 22:32 • 1. If you believe the theory does not need a special value of the coupling constant to be well-defined, you can make the loop corrections arbitrarily smaller than the tree-level answer by considering a small coupling constant. This is a generic statement regarding tree-level truncations in QFT. 2. Besides, assuming your theory is well defined at TeV energies, you can consider an effective field theory for those scales, i.e. the coupling is "renormalized" at that scale. This would mean that loop corrections are negligible, and the tree amplitude captures the full answer. – Siva Mar 24 '18 at 6:55 • @Siva Sorry, I still don't follow. I agree that for tiny coupling constant we should have tree-level unitarity. And indeed we do! The smaller you make the coupling, the further away you push tree-level unitarity violation; it doesn't happen at all in the limit $g \to 0$. I don't see how that says anything about $g$ finite. – knzhou Mar 24 '18 at 10:09 • @Siva Similarly I don't follow argument (2). The point is that we know the full theory is unitary. Basically, if I already know a Taylor series sums to $1$, so what if I can redefine the expansion variable so the first term is $2$? Even if it intuitively looks like the rest of the terms will be small, I already know that they sum to $-1$. – knzhou Mar 24 '18 at 10:12 • @knzhou Imagine that the amplitude is a polynomial in the coupling constant. The only way for a polynomial to be zero for all values of its variable is for all of its coefficients to be zero. Likewise, the only way for the unitarity-violating amplitude to be zero for all values of the coupling is for the contributions at every order to be zero. The amplitude is not exactly like a polynomial, but the argument is pretty much the same. – Luke Pritchett Mar 24 '18 at 13:14 User Luke Pritchett has already given a good answer.
For completeness, I want to mention that there is an alternative way to think about this, one that I learnt very recently and that I found to be fascinating. I cannot help but recommend the book Quantum Gauge Theories: A True Ghost Story, by G. Scharf. It is short, concise and to the point. I read it a couple of days ago, and I loved every page of it. In its first chapter, the book introduces free fields. Here, the author argues that the unphysical (longitudinal) polarisation of a spin $j=1$ field is in fact a gradient: $A_\mu=A_\mu^\mathrm{physical}+\partial_\mu \Lambda$. This determines the gauge transformation for free fields to be $A_\mu\to A_\mu+\partial_\mu \lambda$. So far, so good: this is just standard gauge theory. The key point is that, as the author shows, the gauge invariance of free fields is in fact restrictive enough to determine the gauge transformation of interacting fields. For example, the author does not introduce the (ad-hoc) postulate that gauge fields are to transform according to a Lie algebra: this is in fact a conclusion rather than an axiom. Furthermore, the author does not introduce the (ad-hoc) Higgs mechanism, but rather derives it from the gauge invariance for free fields. All in all, in this book there are (almost) no unjustified ingredients: no covariant derivatives, no Lie groups, no spontaneous symmetry breaking, etc. The only working principle is the gauge invariance of free fields, $A_\mu\to A_\mu+\partial_\mu \lambda$, which is perfectly well-motivated. Everything else is derived as a consequence of this simple principle. Finally, and concerning OP's main question, the author argues that the theory is unitary if and only if it is gauge invariant, so this constitutes a proof that unitarity requires the Higgs field to exist.
If this is not enough for me to convince the reader to read the book, let me mention that the author does not introduce negative norm states (which is also a rather unconvincing aspect of gauge theories), but he doesn't introduce non-covariant (Coulomb, axial) gauges either. Moreover, the author explains from first principles how General Relativity emerges from a spin $j=2$ field, using only the gauge transformation for free fields (which is, as before, completely natural from the point of view of unphysical polarisations). Finally, the book follows the Epstein-Glaser formulation of QFT, so there are no divergences or counter-terms anywhere. Needless to say, it is impossible for me to explain how this works in practice: doing so would require me to rewrite the whole book here. Let me nevertheless quote a paragraph from the introduction that I hope will pique the reader's interest. In Chapter 4 the same method is applied to massive gauge fields. These are the incoming and outgoing free fields which appear in the expansion of the $S$-matrix (corresponding to the $W^\pm$- and $Z$-bosons in the electroweak theory). We have no generation of mass by spontaneous symmetry breaking; instead, perturbative gauge invariance does the job. It forces us to introduce an unphysical (Goldstone-like) and physical (Higgs) scalar fields and determines their coupling. For example, the so-called Higgs potential need not be put in by hand but follows naturally from third-order gauge invariance. • Yes, causal perturbation theory (the subject of Scharf's book) is very nice, and derives much of the older perturbative renormalization stuff on a sound basis. I have written a short introduction to causal perturbation theory, to motivate reading more. See physicsforums.com/insights/causal-perturbation-theory – Arnold Neumaier Mar 28 '18 at 17:35
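A side note on the order-by-order argument from the comments under the first answer: an amplitude that is polynomial in the coupling $g$ and vanishes at more distinct couplings than its degree must have every coefficient equal to zero. A toy numerical illustration of just that linear-algebra fact (this is not an actual amplitude computation):

```python
import numpy as np

# If a degree-3 polynomial a0 + a1*g + a2*g^2 + a3*g^3 vanishes at 4 distinct
# couplings, the Vandermonde system forces a0 = a1 = a2 = a3 = 0,
# i.e. the contribution at every order in the expansion vanishes separately.
gs = np.array([0.1, 0.2, 0.3, 0.4])
V = np.vander(gs, 4, increasing=True)   # rows: [1, g, g^2, g^3]
coeffs = np.linalg.solve(V, np.zeros(4))
print(coeffs)  # all four coefficients come out (numerically) zero
```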
2021-03-01 01:30:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8089364767074585, "perplexity": 259.490217430438}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361808.18/warc/CC-MAIN-20210228235852-20210301025852-00301.warc.gz"}
https://brilliant.org/practice/extrema-level-3-challenges/
Extrema

How can you maximize your happiness under a budget? When does a function reach its minimum value? When does a curve change direction? The calculus of extrema explains these "extreme" situations.

Level 3

A circle rests in the interior of the parabola with equation $$y=x^2$$ so that it is tangent to the parabola at two points. How much higher is the center of the circle than the points of tangency? Source: Mandelbrot #2

Let $$S$$ be the maximum possible area of a right triangle that can be drawn in a semi-circle of radius $$1$$, where one of the legs (and not the hypotenuse) of the triangle must lie on the diameter of the semicircle. If $$S = \dfrac{a\sqrt{b}}{c},$$ where $$a,c$$ are positive coprime integers and $$b$$ is a positive square-free integer, find $$a + b + c.$$

How many real values of $$x$$ satisfy the equation $\large {x}^{2}-{2}^{x}=0?$

Suppose that two people, A and B, walk along the parabola $$y=x^2$$ in such a way that the line segment $$L$$ between them is always perpendicular to the line tangent to the parabola at A's position $$(a,a^2)$$ with $$a > 0$$. If B's position is $$(b,b^2)$$, what value of $$b$$ minimizes $$L$$?

What is the least perimeter of an isosceles triangle in which a circle of radius $$\sqrt{3}$$ can be inscribed? Note: The picture shown is a rough one.
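For the problem asking how many real values of $$x$$ satisfy $$x^2 - 2^x = 0$$, a brute-force sign-change scan (my own numerical check, not part of the original problem set) confirms the count:

```python
import numpy as np

# f(x) = x^2 - 2^x changes sign once at each of its real roots.
f = lambda x: x**2 - 2.0**x

# Offset the grid slightly so no sample lands exactly on a root (x = 2 and x = 4).
xs = np.linspace(-2.0, 5.0, 70001) + 1e-7
vals = f(xs)
crossings = int(np.sum(vals[:-1] * vals[1:] < 0))
print(crossings)  # 3 real solutions: x ≈ -0.7666, x = 2, and x = 4
```

Outside the scanned window there are no further roots: for large $$x$$ the exponential dominates and $$f$$ stays negative, while for very negative $$x$$ the quadratic dominates and $$f$$ stays positive.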
2017-01-21 17:45:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.619223952293396, "perplexity": 161.10256595467416}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00015-ip-10-171-10-70.ec2.internal.warc.gz"}
https://questions.examside.com/past-years/jee/question/the-resistance-of-the-series-combination-of-two-resistances-2004-marks-4-wr8hjdc3se066k06.htm
### JEE Mains Previous Years Questions with Solutions

### 1. AIEEE 2004
The resistance of the series combination of two resistances is $S.$ When they are joined in parallel the total resistance is $P.$ If $S = nP$ then the minimum possible value of $n$ is
A $2$ B $3$ C $4$ D $1$
## Explanation
$S = {R_1} + {R_2}$ and $P = {{{R_1}{R_2}} \over {{R_1} + {R_2}}}$
$S = nP \Rightarrow {R_1} + {R_2} = {{n\left( {{R_1}{R_2}} \right)} \over {\left( {{R_1} + {R_2}} \right)}}$
$\Rightarrow {\left( {{R_1} + {R_2}} \right)^2} = n{R_1}{R_2}$
$\Rightarrow n = {{R_1^2 + R_2^2 + 2{R_1}{R_2}} \over {{R_1}{R_2}}}$
$n = {{{R_1}} \over {{R_2}}} + {{{R_2}} \over {{R_1}}} + 2$
Since the arithmetic mean is at least the geometric mean, ${{{R_1}} \over {{R_2}}} + {{{R_2}} \over {{R_1}}} \ge 2$, so the minimum value of $n$ is $4$ (attained when ${R_1} = {R_2}$).

### 2. AIEEE 2003
A $220$ volt, $1000$ watt bulb is connected across a $110$ volt mains supply. The power consumed will be
A $750$ watt B $500$ watt C $250$ watt D $1000$ watt
## Explanation
We know that $R = {{V_{rated}^2} \over {{P_{rated}}}} = {{{{\left( {220} \right)}^2}} \over {1000}}$
When this bulb is connected to the $110$ volt mains supply we get
$P = {{{V^2}} \over R} = {{{{\left( {110} \right)}^2} \times 1000} \over {{{\left( {220} \right)}^2}}} = {{1000} \over 4} = 250W$

### 3. AIEEE 2003
A $3$ volt battery with negligible internal resistance is connected in a circuit as shown in the figure. The current ${\rm I}$, in the circuit will be
A $1$ $A$ B $1.5$ $A$ C $2$ $A$ D $1/3$ $A$
## Explanation
${R_p} = {{3 \times 6} \over {3 + 6}} = {{18} \over 9} = 2\Omega$
$\therefore$ $V = IR \Rightarrow I = {V \over R} = {3 \over 2} = 1.5A$

### 4. AIEEE 2003
The negative $Zn$ pole of a Daniell cell, sending a constant current through a circuit, decreases in mass by $0.13g$ in $30$ minutes.
If the electrochemical equivalent of $Zn$ and $Cu$ are $32.5$ and $31.5$ respectively, the increase in the mass of the positive $Cu$ pole in this time is
A $0.180$ $g$ B $0.141$ $g$ C $0.126$ $g$ D $0.242$ $g$
## Explanation
According to Faraday's first law of electrolysis, $m = Z \times q$.
For the same $q$, $m \propto Z$.
$\therefore$ ${{{m_{Cu}}} \over {{m_{Zn}}}} = {{{Z_{Cu}}} \over {{Z_{Zn}}}}$
$\Rightarrow {m_{Cu}} = {{{Z_{Cu}}} \over {{Z_{Zn}}}} \times {m_{Zn}} = {{31.5} \over {32.5}} \times 0.13 = 0.126\,g$
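The arithmetic in these explanations reduces to one-liners; a quick Python check using only the values given in the problems:

```python
# AIEEE 2004: n = S/P = (R1 + R2)^2 / (R1 * R2); AM >= GM gives a minimum of 4 at R1 = R2
n_min = min((r1 + r2) ** 2 / (r1 * r2) for r1 in range(1, 101) for r2 in range(1, 101))
print(n_min)  # 4.0

# AIEEE 2003: a 220 V / 1000 W bulb on 110 V mains: P = V^2 / R with R = 220^2 / 1000
r_bulb = 220**2 / 1000
print(round(110**2 / r_bulb, 6))  # 250.0 W

# AIEEE 2003 (Daniell cell): Faraday's first law m = Z*q, so m_Cu = (Z_Cu / Z_Zn) * m_Zn
m_cu = (31.5 / 32.5) * 0.13
print(round(m_cu, 3))  # 0.126 g
```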
2022-01-23 15:02:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.801742672920227, "perplexity": 2379.3364560339496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304287.0/warc/CC-MAIN-20220123141754-20220123171754-00529.warc.gz"}
https://www.physicsforums.com/threads/when-can-you-not-apply-the-alternating-series-test.679840/
# Homework Help: When can you not apply the alternating series test?

1. Mar 20, 2013 ### hahaha158
1. The problem statement, all variables and given/known data
I have a series Ʃ(1 to infinity) ((-1)^n*n^n)/n!
2. Relevant equations
3. The attempt at a solution
Apparently you cannot use the alternating series test for this question. Why is this? It has the (-1)^n, what else is needed to allow you to use the alternating series test?

2. Mar 20, 2013 ### Dick
To apply the alternating series test you also need that the absolute value of the terms is decreasing. Yours aren't. E.g. (-1)^n*n doesn't converge.

3. Mar 20, 2013 ### hahaha158
Would that not fall under one of the rules for alternating series (lim An as n->∞ must equal 0 for the series to be convergent)? How can you determine a series to be divergent from this part of the test if you are not even able to apply it?

4. Mar 20, 2013 ### Dick
If $a_n$ as a sequence does not converge to zero then the series $\Sigma a_n$ does not converge. Try that test.

5. Mar 20, 2013 ### hahaha158
Is there any way to do it only using the alternating series test? From what I understand, this is the alternating series test: Consider Ʃ(n=1 to ∞) (-1)^(n-1)bn where bn>=0. Suppose I) b(n+1) <= bn eventually II) lim as n->∞ bn = 0. Then Ʃ(n=1 to ∞) (-1)^(n-1)bn converges. It seems like what you are saying is that you cannot test for divergence using the alternating series test, only convergence? The reason I am asking this is because of this http://imgur.com/kTd1Lbt It has a choice where it can diverge and a choice where you cannot apply the alternating series test. So is it true that for an alternating series you can test for convergence, but if it does not follow the 2 conditions above, then you must take bn (or an) and apply it to a different test to see if it diverges or converges?

6. Mar 20, 2013 ### Dick
Yes, the alternating series test can only be applied to test for convergence.
Failure of the alternating series test doesn't prove anything. Try another test. As I said, if $a_n$ doesn't converge to zero the series does NOT converge. That's a good first test for divergence.

7. Mar 20, 2013 ### hahaha158
Ok, thanks for clarifying.
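The divergence test suggested in the thread settles this particular series numerically: the unsigned term bn = n^n/n! grows without bound, since the ratio of consecutive terms is (1 + 1/n)^n, which approaches e ≈ 2.718. A quick check:

```python
import math

# b_n = n^n / n!, the unsigned term of the series from post #1
def b(n):
    return n**n / math.factorial(n)

for n in (1, 5, 10, 15):
    print(n, b(n))  # the terms grow rapidly instead of shrinking to 0

# consecutive ratio b(n+1)/b(n) = (1 + 1/n)^n -> e, so the growth is roughly geometric
print(b(20) / b(19))  # close to e ≈ 2.718
```

Since bn does not tend to 0, the series Ʃ (-1)^n n^n/n! diverges by the nth-term test, exactly as Dick's replies indicate.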
2018-07-22 07:34:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6723358035087585, "perplexity": 752.9959019862042}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593051.79/warc/CC-MAIN-20180722061341-20180722081341-00404.warc.gz"}
https://mathematica.stackexchange.com/questions/188560/how-does-this-transformeddistribution-compare-to-the-normal-distribution
# How does this TransformedDistribution compare to the Normal distribution?

I am working with the following TransformedDistribution:

    Dist = TransformedDistribution[2 v S1 + 2 v S2 - v c,
      {S1 \[Distributed] BinomialDistribution[1/2 (c + t), p],
       S2 \[Distributed] BinomialDistribution[1/2 (c - t), 1 - p]}]

This random variable has neat expressions for the mean, variance, skewness, and kurtosis:

    (-1 + 2 p) t v
    -4 c (-1 + p) p v^2
    ((1 - 2 p) t v)/(c Sqrt[-c (-1 + p) p v^2])
    3 - 6/c + 1/(c p - c p^2)

What I am trying to do is compare this random variable, when $t=0$, $c = \frac{1}{v^2}$, $v$ approaches $0$, and $c$ thus approaches infinity, with one drawn from the Normal distribution. Under these restrictions, $\mu = 0$, $\sigma^2 = 4 (1 - p) p$, the skewness is $0$, and the kurtosis tends to $3$. Thus, it appears we are dealing with the Normal distribution, but that is not necessarily the case (see: link).

• The distribution is never exactly normal. So what distance metric would characterize an important difference, and what size of difference would be important? I ask because "how close is close" is a subject matter issue rather than a statistical issue. – JimB Dec 29 '18 at 22:49
• @JimB That is a good question. Suppose I wanted to just inspect visually. For example, if I also set p = 0.5 then the variance becomes 1 and a comparison to the Standard Normal distribution would seem appropriate. Could a comparison be set up using Manipulate? – user120911 Dec 29 '18 at 22:53
• That's the "I'll know it when I see it" approach. It's used all the time. (But it's not necessarily a consistent approach.) My point is that there's no universally accepted rule. – JimB Dec 29 '18 at 22:58
• The answer from @user120911 below shows that there is an equivalence in distribution in the limit. Do you also need to know "how close" you are for small values of $v$? – JimB Dec 30 '18 at 18:54
• @JimB That is interesting, but unless you already have that worked out, I would not request it.
– user120911 Dec 30 '18 at 19:21

    Dist = FullSimplify[TransformedDistribution[2 v S1 + 2 v S2 - v c,
      {S1 \[Distributed] BinomialDistribution[1/2 (c + t), p],
       S2 \[Distributed] BinomialDistribution[1/2 (c - t), 1 - p]}]]

The moment generating function of Dist is

    MomentGeneratingFunction[Dist, x]

    E^(-c v x) (-E^(2 v x) (-1 + p) + p)^((c - t)/2) (1 + (-1 + E^(2 v x)) p)^((c + t)/2)

Now let us consider the situation where c = 1/v^2, v -> 0, and t = 0. Under these restrictions, the moment generating function of Dist is

    Limit[With[{c = 1/v^2, t = 0},
      E^(-c v x) (-E^(2 v x) (-1 + p) + p)^((c - t)/2) (1 + (-1 + E^(2 v x)) p)^((c + t)/2)], v -> 0]

which simplifies to

    E^(-2 (-1 + p) p x^2)

In comparison, if we examine the Normal distribution with variance given by -4 (-1 + p) p (i.e., the variance of Dist under the stated restrictions), then we notice the moment generating function is

    MomentGeneratingFunction[NormalDistribution[0, Sqrt[-4 (-1 + p) p]], x]

    E^(-2 (-1 + p) p x^2)

which we notice is equal to the above.

• Did you obtain the mgf from Mathematica commands? If so, would you consider adding those to your answer? – JimB Dec 30 '18 at 18:50
• Excellent! Thank you. – JimB Dec 30 '18 at 20:41
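The limiting identity can also be checked numerically. The sketch below (Python, written for this summary; the thread itself works in Mathematica) evaluates the log of the moment generating function at a small v, with t = 0 and c = 1/v², and compares it against the log-MGF of the matching normal distribution, 2p(1−p)x²:

```python
from math import expm1, log1p

def log_mgf(x, p, v):
    """log MGF of Dist at t = 0, c = 1/v^2, computed stably with log1p/expm1.

    The MGF factors as ((1-p)e^u + p)^(c/2) * ((1-p) + p e^u)^(c/2) * e^(-x/v)
    with u = 2 v x, so we work entirely in logs to avoid overflow.
    """
    u = 2.0 * v * x
    c = 1.0 / v**2
    return (c / 2.0) * (log1p((1.0 - p) * expm1(u)) + log1p(p * expm1(u))) - x / v

p, x, v = 0.3, 0.7, 1e-4
approx = log_mgf(x, p, v)
normal = 2.0 * p * (1.0 - p) * x**2   # log MGF of N(0, 4p(1-p)) at x

assert abs(approx - normal) < 1e-6
```

The agreement at v = 10⁻⁴ is already far tighter than the tolerance, consistent with the exact limit computed symbolically above.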
https://math.stackexchange.com/questions/2271887/how-to-solve-the-matrix-minimization-for-bfgs-update-in-quasi-newton-optimizatio
# How to solve the matrix minimization for BFGS update in Quasi-Newton optimization

I am interested in deriving the unique solution to the following quasi-Newton minimization problem
$$\min_{H}\| H-H_k\|\quad\text{subject to}\quad H=H^\top,\quad Hy_k =s_k.$$
The norm is the weighted Frobenius norm $$\|A\|_W \equiv \|W^{1/2}A W^{1/2}\|_F,$$ where the weight matrix $W$ is any positive definite matrix that satisfies $Ws_k=y_k$, and $\|\cdot\|_F$ is defined by $\|C\|^2_F= \sum_{i,j}^n c^2_{ij}$. The quantity $H$ is the inverse Hessian, which is symmetric, positive definite, and satisfies the secant equation above, $Hy_k=s_k$. We can assume that $W=G^{-1}_k$, where the average Hessian $G_k$ is given by
$$G_k= \int_0^1 \nabla^2 f(x_k+\tau \alpha_k p_k)\,d\tau.$$
The unique solution is given by
$$H_{k+1} = (I-\rho_k s_k y^\top_k)H_k (I-\rho_k y_k s^\top_k)+ \rho_k s_k s^\top_k,$$
where $\rho_k = (y^\top_k s_k)^{-1}$ and $I$ is the identity matrix. Note that this is an iterative scheme, where $k$ represents the current iteration and $H_{k+1}$ is an approximation to the inverse Hessian. The notation I am using is directly from the book Nocedal & Wright - Numerical Optimization.

I am not able to find a full derivation of this anywhere; everything written above is all that Nocedal & Wright have in regards to this topic. For a reference, this is in Chapter 6 - Quasi-Newton Methods of their newest/2nd edition. All the links I have tried to google and other books also have no full derivation.

Thanks

• What exactly is $H_{k+1}$? Is that an iterative scheme to find the minimum? May 8 '17 at 18:07
• @AlexR. An iterative scheme to approximate the inverse Hessian, an example of a quasi-Newton method. It can be applied to finding the minimum, yes. $H_{k+1}$ is the updated approximation to the inverse Hessian.
May 8 '17 at 18:08
• A related (but currently unanswered) question about the DFP update formula: math.stackexchange.com/questions/2091867/… May 10 '17 at 3:39

I'll outline the major steps. See the explanations and answers to the comments below.

1. Introduce some notation
$$\hat H=W^{1/2}HW^{1/2},\quad \hat H_k=W^{1/2}H_kW^{1/2},\quad \hat s_k=W^{1/2}s_k,\quad \hat y_k=W^{-1/2}y_k.\tag{1}$$
Then the problem becomes
$$\min\|\hat H_k-\hat H\|_F\quad\text{subject to }\hat H\hat y_k=\hat y_k\ (=\hat s_k).$$

2. To use the fact that $\hat y_k$ is an eigenvector of $\hat H$, let us introduce the new orthonormal basis
$$U=[u\ |\ u_\bot]$$
where $u$ is the normalized $\hat y_k$, i.e.
$$u=\frac{\hat y_k}{\|\hat y_k\|},\tag{2}$$
and $u_\bot$ is any ON-complement to $u$. Since the Frobenius norm is unitarily invariant (as it is the sum of squares of the singular values) we have
$$\|\hat H_k-\hat H\|_F=\|U^T\hat H_kU-U^T\hat HU\|_F= \left\|\begin{bmatrix}\color{blue}* & \color{blue}*\\\color{blue}* & \color{red}*\end{bmatrix}-\begin{bmatrix}\color{blue}1 & \color{blue}0\\\color{blue}0 & \color{red}*\end{bmatrix}\right\|_F.$$
The blue part cannot be affected by the optimization, and to minimize the Frobenius norm it is clear that we should make the red part become zero; that is, the optimal solution satisfies
$$\color{red}{u_\bot^T\hat Hu_\bot}=\color{red}{u_\bot^T\hat H_ku_\bot}.$$
It gives the optimal solution
$$\hat H=U\begin{bmatrix}\color{blue}1 & \color{blue}0\\\color{blue}0 & \color{red}{u_\bot^T\hat H_ku_\bot}\end{bmatrix}U^T.$$

3. To write it more explicitly,
$$\hat H=\begin{bmatrix}u & u_\bot\end{bmatrix}\begin{bmatrix}1 & 0\\0 & u_\bot^T\hat H_ku_\bot\end{bmatrix}\begin{bmatrix}u^T \\ u_\bot^T\end{bmatrix}=uu^T+u_\bot u_\bot^T\hat H_ku_\bot u_\bot^T= uu^T+(I-uu^T)\hat H_k(I-uu^T)$$
where we used the following representation of the projection operator onto the complement of $u$:
$$u_\bot u_\bot^T=I-uu^T.$$

4.
Changing variables back to the original ones via (1), (2) is straightforward.

## Explanations:

Step 1. The convenience of $\hat H$ and $\hat H_k$ comes directly from the problem
$$\min\|\underbrace{W^{1/2}H_kW^{1/2}}_{\hat H_k}-\underbrace{W^{1/2}HW^{1/2}}_{\hat H}\|_F.$$
Then we have to rewrite the data too:
\begin{align} Hy_k=s_k\quad&\Leftrightarrow\quad \underbrace{\color{blue}{W^{1/2}}H\color{red}{W^{1/2}}}_{\hat H}\underbrace{\color{red}{W^{-1/2}}y_k}_{\hat y_k}=\underbrace{\color{blue}{W^{1/2}}s_k}_{\hat s_k},\\ Ws_k=y_k\quad&\Leftrightarrow\quad \underbrace{\color{red}{W^{-1/2}}Ws_k}_{\hat s_k}=\underbrace{\color{red}{W^{-1/2}}y_k}_{\hat y_k}. \end{align}
Thus, $\hat H\hat y_k=\hat y_k$. It is also equal to $\hat s_k$.

Step 2. Since $\hat Hu=u$, we know that $u^T\hat Hu=u^Tu=1$ and $u_\bot^T\hat Hu=u_\bot^Tu=0$, so we can represent the optimizing variable as
$$U^T\hat HU=\begin{bmatrix}u^T\\ u_\bot^T\end{bmatrix}\hat H\begin{bmatrix}u & u_\bot\end{bmatrix}= \begin{bmatrix}u^T\hat Hu & u^T\hat Hu_\bot\\u_\bot^T\hat Hu & u_\bot^T\hat Hu_\bot\end{bmatrix}= \begin{bmatrix}1 & 0\\0 & u_\bot^T\hat Hu_\bot\end{bmatrix}.$$
It gives the following:
\begin{align} U^T\hat H_kU-U^T\hat HU&=\begin{bmatrix}u^T\\ u_\bot^T\end{bmatrix}\hat H_k\begin{bmatrix}u & u_\bot\end{bmatrix}-\begin{bmatrix}u^T\\ u_\bot^T\end{bmatrix}\hat H\begin{bmatrix}u & u_\bot\end{bmatrix}=\\ &=\begin{bmatrix}\color{blue}{u^T\hat H_ku} & \color{blue}{u^T\hat H_ku_\bot}\\\color{blue}{u_\bot^T\hat H_ku} & \color{red}{u_\bot^T\hat H_ku_\bot}\end{bmatrix}-\begin{bmatrix}\color{blue}{1} & \color{blue}{0}\\\color{blue}{0} & \color{red}{u_\bot^T\hat Hu_\bot}\end{bmatrix}. \end{align}
This particular structure of the optimizing variable was the whole idea behind switching to the new basis. Because $\hat H$ has no freedom to vary in the blue part, we cannot change the corresponding blue part of $\hat H_k$, so it is fixed for all possible $\hat H$.
The red part, though, can be changed as we wish, and the smallest Frobenius norm
\begin{align} \|U^T(\hat H_k-\hat H)U\|_F^2&= \left\|\begin{bmatrix}\color{blue}{u^T\hat H_ku-1} & \color{blue}{u^T\hat H_ku_\bot}\\\color{blue}{u_\bot^T\hat H_ku} & \color{red}{u_\bot^T\hat H_ku_\bot-u_\bot^T\hat Hu_\bot}\end{bmatrix}\right\|_F^2=\\ &=\color{blue}{(u^T\hat H_ku-1)^2+\|u^T\hat H_ku_\bot\|_F^2+\|u_\bot^T\hat H_ku\|_F^2}+\color{red}{\|u_\bot^T\hat H_ku_\bot-u_\bot^T\hat Hu_\bot\|_F^2} \end{align}
is obtained when the red part is zero.

Step 3. The matrix $U$ is orthogonal, hence
$$I=UU^T=\begin{bmatrix}u & u_\bot\end{bmatrix}\begin{bmatrix}u^T \\ u_\bot^T\end{bmatrix}=uu^T+u_\bot u_\bot^T\quad\Leftrightarrow\quad u_\bot u_\bot^T=I-uu^T.$$

• This is very helpful, thanks a lot. I will go through all the details now and be back shortly to ask the questions I have. I can see I may have a few. May 8 '17 at 22:06
• Sounds like a plan, thanks! May 8 '17 at 22:08
• Also, your second equation in the explanations of step 1: it should read $$W^{1/2}H W^{1/2}=\hat H,$$ as opposed to $\hat H_k$ as you have written. I think you just made a typo, as you use $\hat H \hat y_k = \hat s_k$ right after. Just wanted to let you know. This solution is GREAT. Thanks again!! May 10 '17 at 2:50
• @Integrals The second one is a typo, thank you. The first one: the quantity $u^T\hat H_ku-1$ is a number, so $$\|u^T\hat H_ku-1\|_F^2=(u^T\hat H_ku-1)^2.$$ – A.Γ. May 10 '17 at 3:29
• @JeffFaraci Remember $\hat y_k=\hat s_k$ and $u=\frac{\hat y_k}{\|\hat y_k\|}$, hence, $uu^T=\frac{\hat y_k\hat y_k^T}{\|\hat y_k\|^2}$. The denominator is $$\hat y_k^T\hat y_k=\hat y_k^T\hat s_k=y_k^Ts_k.$$ – A.Γ. May 13 '17 at 9:45
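The claimed update can also be sanity-checked numerically. The sketch below (Python/NumPy, not part of the original answer) builds the update from a random symmetric H_k and a random pair (s_k, y_k), then verifies that the result is symmetric and satisfies the secant equation H_{k+1} y_k = s_k:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# random symmetric H_k and a curvature pair (s, y)
A = rng.standard_normal((n, n))
Hk = A + A.T
s = rng.standard_normal(n)
y = rng.standard_normal(n)
if s @ y < 0:          # enforce the usual curvature condition s^T y > 0
    y = -y

rho = 1.0 / (y @ s)
I = np.eye(n)

# BFGS inverse-Hessian update: H1 = (I - rho s y^T) Hk (I - rho y s^T) + rho s s^T
H1 = (I - rho * np.outer(s, y)) @ Hk @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)

assert np.allclose(H1, H1.T)     # symmetry is preserved
assert np.allclose(H1 @ y, s)    # secant equation H_{k+1} y_k = s_k holds
```

The secant equation holds identically here because (I − ρ y sᵀ) y = 0, so the middle factor annihilates y and only the rank-one term ρ s (sᵀy) = s survives.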
https://www.physicsforums.com/threads/bragg-angles-thermal-expansion.529820/
# Bragg Angles + Thermal Expansion

1. Sep 13, 2011

### exzacklyright

1. The problem statement, all variables and given/known data

A crystal is heated from 0 to 100 degrees Celsius; its lattice parameter 'a' increases by 0.17% due to thermal expansion. If we observe an x-ray reflection at a Bragg angle theta = 19.3 degrees at 0 degrees Celsius, by how much will theta change when the sample is heated to 100 degrees C? (You don't need to know the wavelength or d to solve this.)

2. Relevant equations

2d sin(theta) = n$\lambda$

3. The attempt at a solution

Tried using 1 for d and just seeing how much it'd change, but it didn't come out right.

2. Sep 13, 2011

### Spinnor

Should it be d_1*sin(theta_1) = d_2*sin(theta_2)?

3. Sep 13, 2011

### exzacklyright

Well, all we're given is d_1 and theta_1. We don't even know d_2 or theta_2.
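Following Spinnor's hint, n and λ are unchanged by heating, so the Bragg condition fixes d·sin θ: d₁ sin θ₁ = d₂ sin θ₂ with d₂ = 1.0017 d₁ (d scales with the lattice parameter). A quick sketch of the arithmetic (Python; this was not posted in the thread):

```python
from math import asin, degrees, radians, sin

theta1 = 19.3                      # Bragg angle at 0 °C, in degrees
expansion = 1.0017                 # lattice parameter grows by 0.17 %

# n*lambda = 2 d sin(theta) with n, lambda fixed  =>  d1 sin(theta1) = d2 sin(theta2)
theta2 = degrees(asin(sin(radians(theta1)) / expansion))
delta = theta2 - theta1            # the Bragg angle decreases as d grows

assert delta < 0
assert abs(delta + 0.034) < 0.005  # roughly -0.034 degrees
```

The same number follows from the linearized relation Δθ ≈ −(Δd/d) tan θ = −0.0017 · tan 19.3° ≈ −6·10⁻⁴ rad.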
https://www.law.cornell.edu/cfr/text/49/571.304
# 49 CFR § 571.304 - Standard No. 304; Compressed natural gas fuel container integrity.

§ 571.304 Standard No. 304; Compressed natural gas fuel container integrity.

S1. Scope. This standard specifies requirements for the integrity of compressed natural gas (CNG) motor vehicle fuel containers.

S2. Purpose. The purpose of this standard is to reduce deaths and injuries occurring from fires that result from fuel leakage during and after motor vehicle crashes.

S3. Application. This standard applies to each passenger car, multipurpose passenger vehicle, truck, and bus that uses CNG as a motor fuel and to each container designed to store CNG as motor fuel on-board any motor vehicle.

S4. Definitions.

Brazing means a group of welding processes wherein coalescence is produced by heating to a suitable temperature above 800 °F and by using a nonferrous filler metal having a melting point below that of the base metals. The filler metal is distributed between the closely fitted surfaces of the joint by capillary attraction.

Burst pressure means the highest internal pressure reached in a CNG fuel container during a burst test at a temperature of 21 °C (70 °F).

CNG fuel container means a container designed to store CNG as motor fuel on-board a motor vehicle.

Fill pressure means the internal pressure of a CNG fuel container attained at the time of filling. Fill pressure varies according to the gas temperature in the container, which is dependent on the charging parameters and the ambient conditions.

Full wrapped means applying the reinforcement of a filament or resin system over the entire liner, including the domes.

Hoop wrapped means winding of filament in a substantially circumferential pattern over the cylindrical portion of the liner so that the filament does not transmit any significant stresses in a direction parallel to the cylinder longitudinal axis.

Hydrostatic pressure means the internal pressure to which a CNG fuel container is taken during testing set forth in S5.4.1.
Liner means the inner gas tight container or gas cylinder to which the overwrap is applied.

Service pressure means the internal settled pressure of a CNG fuel container at a uniform gas temperature of 21 °C (70 °F) and full gas content. It is the pressure for which the container has been constructed under normal conditions.

S5 Container and material requirements.

S5.1 Container designations. Container designations are as follows:

S5.1.1 Type 1 - Non-composite metallic container means a metal container.

S5.1.2 Type 2 - Composite metallic hoop wrapped container means a metal liner reinforced with resin impregnated continuous filament that is “hoop wrapped.”

S5.1.3 Type 3 - Composite metallic full wrapped container means a metal liner reinforced with resin impregnated continuous filament that is “full wrapped.”

S5.1.4 Type 4 - Composite non-metallic full wrapped container means resin impregnated continuous filament with a non-metallic liner “full wrapped.”

S6 General requirements.

S6.1 Each passenger car, multipurpose passenger vehicle, truck, and bus that uses CNG as a motor fuel shall be equipped with a CNG fuel container that meets the requirements of S7 through S7.4.

S6.2 Each CNG fuel container manufactured on or after March 27, 1995 shall meet the requirements of S7 through S7.4.

S7 Test requirements. Each CNG fuel container shall meet the applicable requirements of S7 through S7.4.

S7.1 Pressure cycling test at ambient temperature. Each CNG fuel container shall not leak when tested in accordance with S8.1.

S7.2 Hydrostatic burst test.

S7.2.1 Each Type 1 CNG fuel container shall not leak when subjected to burst pressure and tested in accordance with S8.2. Burst pressure shall not be less than 2.25 times the service pressure for non-welded containers and shall not be less than 3.5 times the service pressure for welded containers.
S7.2.2 Each Type 2, Type 3, or Type 4 CNG fuel container shall not leak when subjected to burst pressure and tested in accordance with S8.2. Burst pressure shall be not less than 2.25 times the service pressure.

S7.3 Bonfire test. Each CNG fuel container shall be equipped with a pressure relief device. Each CNG fuel container shall completely vent its contents through a pressure relief device or shall not burst while retaining its entire contents when tested in accordance with S8.3.

S7.4 Labeling. Each CNG fuel container shall be permanently labeled with the information specified in paragraphs (a) through (h) of this section. Any label affixed to the container in compliance with this section shall remain in place and be legible for the manufacturer's recommended service life of the container. The information shall be in English and in letters and numbers that are at least 6.35 mm (1/4 inch) high.

(a) The statement: “If there is a question about the proper use, installation, or maintenance of this container, contact____________________,” inserting the CNG fuel container manufacturer's name, address, and telephone number.

(b) The statement: “Manufactured in ____________,” inserting the month and year of manufacture of the CNG fuel container.

(c) The statement: “Service pressure ____________ kPa, (____________ psig).”

(d) The symbol DOT, constituting a certification by the CNG container manufacturer that the container complies with all requirements of this standard.

(e) The container designation (e.g., Type 1, 2, 3, 4).
(f) The statement: “CNG Only.”

(g) The statement: “This container should be visually inspected for damage and deterioration after a motor vehicle accident or fire, and either (a) at least every 12 months when installed on a vehicle with a GVWR greater than 4,536 kg, or (b) at least every 36 months or 36,000 miles, whichever comes first, when installed on a vehicle with a GVWR less than or equal to 4,536 kg.”

(h) The statement: “Do Not Use After ____________” inserting the month and year that mark the end of the manufacturer's recommended service life for the container.

S8 Test conditions: fuel container integrity.

S8.1 Pressure cycling test. The requirements of S7.1 shall be met under the conditions of S8.1.1 through S8.1.4.

S8.1.1 Hydrostatically pressurize the CNG container to the service pressure, then to not more than 10 percent of the service pressure, for 13,000 cycles.

S8.1.2 After being pressurized as specified in S8.1.1, hydrostatically pressurize the CNG container to 125 percent of the service pressure, then to not more than 10 percent of the service pressure, for 5,000 cycles.

S8.1.3 The cycling rate for S8.1.1 and S8.1.2 shall be any value up to and including 10 cycles per minute.

S8.1.4 The cycling is conducted at ambient temperature.

S8.2 Hydrostatic burst test. The requirements of S7.2 shall be met under the conditions of S8.2.1 through S8.2.2.

S8.2.1 Hydrostatically pressurize the CNG fuel container, as follows: The pressure is increased up to the minimum prescribed burst pressure determined in S7.2.1 or S7.2.2, and held constant at the minimum burst pressure for 10 seconds.

S8.2.2 The pressurization rate throughout the test shall be any value up to and including 1,379 kPa (200 psi) per second.

S8.3 Bonfire test. The requirements of S7.3 shall be met under the conditions of S8.3.1 through S8.3.7.

S8.3.1 Fill the CNG fuel container with compressed natural gas and test it at:

(a) 100 percent of service pressure; and

(b) 25 percent of service pressure.
S8.3.2 Container positioning.

(a) Position the CNG fuel container in accordance with paragraphs (b) and (c) of S8.3.2.

(b) Position the CNG fuel container so that its longitudinal axis is horizontal and its bottom is 100 mm (4 inches) above the fire source.

(c) (1) Position a CNG fuel container that is 1.65 meters (65 inches) in length or less and is fitted with one pressure relief device so that the center of the container is over the center of the fire source.

(2) Position a CNG fuel container that is greater than 1.65 meters (65 inches) in length and is fitted with one pressure relief device at one end of the container so that the center of the fire source is 0.825 meters (32.5 inches) from the other end of the container, measured horizontally along a line parallel to the longitudinal axis of the container.

(3) Position a CNG fuel container that is fitted with pressure relief devices at more than one location along its length so that the portion of container over the center of the fire source is the portion midway between the two pressure relief devices that are separated by the greatest distance, measured horizontally along a line parallel to the longitudinal axis of the container.

(4) Test a CNG fuel container that is greater than 1.65 meters (65 inches) in length, is protected by thermal insulation, and does not have pressure relief devices, twice at 100 percent of service pressure. In one test, position the center of the container over the center of the fire source. In another test, position one end of the container so that the fire source is centered 0.825 meters (32.5 inches) from one end of the container, measured horizontally along a line parallel to the longitudinal axis of the container.

S8.3.3 Number and placement of thermocouples. To monitor flame temperature, place three thermocouples so that they are suspended 25 mm (one inch) below the bottom of the CNG fuel container.
Position thermocouples so that they are equally spaced over the length of the fire source or length of the container, whichever is shorter.

S8.3.4 Shielding.

(a) Use shielding to prevent the flame from directly contacting the CNG fuel container valves, fittings, or pressure relief devices.

(b) To provide the shielding, use steel with 0.6 mm (.025 in) minimum nominal thickness.

(c) Position the shielding so that it does not directly contact the CNG fuel container valves, fittings, or pressure relief devices.

S8.3.5 Fire source. Use a uniform fire source that is 1.65 meters long (65 inches). Beginning five minutes after the fire is ignited, maintain an average flame temperature of not less than 430 degrees Celsius (800 degrees Fahrenheit), as determined by the average of the two thermocouples recording the highest temperatures over a 60 second interval:

$$\frac{1}{2}\left[\left(\frac{T_{\text{High 1}}+T_{\text{High 2}}}{2}\right)_{\text{@ time 30 sec}}+\left(\frac{T_{\text{High 1}}+T_{\text{High 2}}}{2}\right)_{\text{@ time 60 sec}}\right]\ge 430\ {}^\circ\mathrm{C}$$

If the pressure relief device releases before the end of the fifth minute after ignition, then the minimum temperature requirement does not apply.

S8.3.6 Recording data. Record time, temperature, and pressure readings at 30 second intervals, beginning when the fire is ignited and continuing until the pressure release device releases.

S8.3.7 Duration of exposure to fire source. The CNG fuel container is exposed to the fire source for 20 minutes after ignition or until the pressure release device releases, whichever period is shorter.

S8.3.8 Number of tests per container. A single CNG fuel container is not subjected to more than one bonfire test.

S8.3.9 Wind velocity. The average ambient wind velocity at the CNG fuel container during the period specified in S8.3.6 of this standard is not to exceed 2.24 meters/second (5 mph).
S8.3.10 The average wind velocity at the container is any velocity up to and including 2.24 meters/second (5 mph).

[59 FR 49021, Sept. 26, 1994; 59 FR 66776, Dec. 28, 1994; 60 FR 37843, July 24, 1995; 60 FR 57948, Nov. 24, 1995; 61 FR 19204, May 1, 1996; 61 FR 47089, Sept. 6, 1996; 63 FR 66765, Dec. 3, 1998; 65 FR 51772, Aug. 25, 2000; 65 FR 64626, Oct. 30, 2000; 87 FR 7964, Feb. 11, 2022]
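The acceptance criterion in S8.3.5 is plain arithmetic: average the two hottest thermocouple readings at the 30- and 60-second marks of the interval and compare against 430 °C. A hypothetical sketch of that check (this code is an illustration written for this summary, not part of the regulation):

```python
def flame_temp_ok(readings_30s, readings_60s, minimum_c=430.0):
    """S8.3.5 check: average of the two highest thermocouple readings
    (in degrees C) at the 30 s and 60 s marks must reach the minimum."""
    def top_two_mean(readings):
        hi = sorted(readings, reverse=True)[:2]
        return sum(hi) / 2.0
    avg = (top_two_mean(readings_30s) + top_two_mean(readings_60s)) / 2.0
    return avg >= minimum_c

# passes: the two hottest of the three thermocouples average 445 C
assert flame_temp_ok([450.0, 440.0, 410.0], [460.0, 430.0, 400.0])
# fails: the averages come to only 417.5 C
assert not flame_temp_ok([420.0, 415.0, 400.0], [425.0, 410.0, 405.0])
```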
https://homepage.univie.ac.at/Franz.Vesely/simsp/dx/node32.html
# 7.2 NPH Molecular Dynamics

Andersen (1980) introduced an additional ("synthetic") energy that was coupled to changes of the system volume. Putting $r_i = V^{1/3}\rho_i$ and $p_V = M\dot V$ (with a generalized mass $M$ which may be visualized as the mass of some piston) he wrote the Hamiltonian as

$$H = \sum_i \frac{\pi_i^2}{2mV^{2/3}} + U(V^{1/3}\rho) + P_{ex}V + \frac{p_V^2}{2M}\tag{7.10}$$

Introducing scaled position vectors $\rho_i \equiv r_i/V^{1/3}$ he derived the equations of motion

$$\ddot\rho_i = \frac{F_i}{mV^{1/3}} - \frac{2}{3}\,\frac{\dot V}{V}\,\dot\rho_i\tag{7.11}$$

$$M\ddot V = P(t) - P_{ex}\tag{7.12}$$

where $P(t)$ is the instantaneous (virial) pressure and $P_{ex}$ the externally applied pressure. These equations of motion conserve the enthalpy, as appropriate in an isobaric ensemble.

Parrinello and Rahman (1980) extended the NPH method of Andersen to allow for non-isotropic stretching and shrinking of box sides. Important applications are structural phase transitions in solids. Scaling of the position vectors now follows the equation $r_i = h\,s_i$ instead of $r_i = V^{1/3}\rho_i$. The matrix $h$ describes the anisotropic transformation of the basic cell. The cell volume is

$$V = \det h\tag{7.13}$$

The additional terms in the Hamiltonian are

$$\frac{W}{2}\,\mathrm{Tr}\,(\dot h^\top \dot h) + P_{ex}V\tag{7.14}$$

and the equations of motion are

$$\ddot s_i = \frac{1}{m}\,h^{-1}F_i - G^{-1}\dot G\,\dot s_i\tag{7.15}$$

$$W\ddot h = \left(\Pi - P_{ex}I\right) V\,(h^\top)^{-1}\tag{7.16}$$

where $G \equiv h^\top h$ is a metric tensor, $W$ is a virtual mass (of dimension mass), and the pressure/stress tensor is defined as

$$\Pi = \frac{1}{V}\left[\sum_i m\,\dot r_i \dot r_i^\top + \sum_i\sum_{j>i} r_{ij}F_{ij}^\top\right]\tag{7.17}$$

Morriss and Evans (1983) devised another type of NPH dynamics. They suggested constraining the pressure not by an inert piston but by a generalized constraint force in the spirit of Gaussian dynamics. The same idea may be carried over to constrain the temperature in addition to the pressure. In this way one arrives at an isothermal-isobaric (NPT) molecular dynamics procedure. (See Allen-Tildesley, Ch. 7.)

F. J. Vesely / University of Vienna
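The Parrinello-Rahman bookkeeping above can be sketched in a few lines. The sketch below (Python/NumPy, added here as an illustration; the cell matrix values are made up) shows the cell volume V = det h, the metric tensor G = hᵀh, and the scaling r = h s:

```python
import numpy as np

# hypothetical triclinic cell: columns of h are the cell edge vectors
h = np.array([[3.0, 0.5, 0.2],
              [0.0, 2.5, 0.3],
              [0.0, 0.0, 4.0]])

V = np.linalg.det(h)              # cell volume (7.13): V = det h
G = h.T @ h                       # metric tensor G = h^T h

s = np.array([0.25, 0.5, 0.75])   # scaled (fractional) coordinates
r = h @ s                         # real coordinates: r = h s

# distances in scaled coordinates go through the metric: |h s|^2 = s^T G s,
# which is why G (and its time derivative) appears in (7.15)
assert np.isclose(V, 3.0 * 2.5 * 4.0)   # upper-triangular h: product of diagonal
assert np.allclose(G, G.T)              # the metric tensor is symmetric
assert np.isclose(r @ r, s @ G @ s)
```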
http://www.physicsforums.com/showthread.php?p=2719035
# Empty set axiom (proofs in ZF)

by Jerbearrrrrr

Tags: axiom, proofs

P: 127

In ZF (or at least the formulation I'm studying), the empty set axiom
$$(\exists \phi)(\forall z)(\neg (z\in \phi))$$
is a consequence of the other axioms, namely the axiom of infinity (and separation, though googling says that's not necessary), which is
$$(\exists x) (p)$$
where p is some formula to guarantee the infinite set is what we want. Let's assume the axiom of separation (subset making) too.

Here are my questions:

When they say that the Empty Set Axiom is a theorem in ZF, does that mean we can prove it using first-order logical axioms and deductions (axioms like p => (q => p))? Does anyone know of such a proof, if so?

The empty set is included in the 'p' of the Axiom of Infinity - though I'm guessing the Axiom of Infinity doesn't actually require it (immediately) to be empty.

Assuming the Axiom of Separation,
$$(\forall x)(\exists y) (\forall z)(z\in y \Leftrightarrow z\in x \wedge q(z))$$
can we just pick q(z) = "z not in x" and let x be the set discussed in the Axiom of Infinity? If so, how do we formalize this?

Thanks. And sorry if it's in the wrong forum.

Sci Advisor
HW Helper
P: 3,680

Here's a proof from the axiom of separation (and a fragment of predicate calculus: predicate calculus, the rule of generalization, the axiom of specialization, and the axiom of quantified implication): http://us.metamath.org/mpegif/axnul.html

P: 311

I think because the above proof ultimately rests on the definition (shown as ex-df) $\exists x\phi=_{df}\neg\forall x\neg\phi$, it assumes that the axioms apply to a nonempty domain. That is, the existence of some set is implicitly assumed. With that implicit assumption, the axiom of infinity (which asserts the existence of a specific set) is not needed to prove the existence of the empty set.

P: 127

## Empty set axiom (proofs in ZF)

Thanks, CRG. Two thumbs fresh. In Propositional Logic, you can just about survive writing things out in full and doing every step.
I notice that in Predicate Logic, we have to apply a number of 'shortcuts', all proveable but best left proven on the back of an envelope, before the proof looks reasonable. HW Helper P: 3,680 Quote by Martin Rattigan I think because the above proof ultimately rests on the definition (shown as ex-df) $\exists x\phi=_{df}\neg\forall x\neg\phi$ it assumes that the axioms apply to a nonempty domain. Not so much. Yes, if you can show that some set exists, then you can show that the null set exists -- but ex-def doesn't show that. The axiom of specialization is where the actual existence comes in here, via 19.20i and 19.22i. P: 311 Quote by CRGreathouse Not so much. Yes, if you can show that some set exists, then you can show that the null set exists -- but ex-def doesn't show that. I should have worded my post more carefully. EDIT: *** Excised suggested alternative that was no better than the original. As you say ex-def itself doesn't have much relevance. *** The point I wanted to make is that if we don't make the assumption in the predicate calculus that the domain of sets is nonempty, the proof doesn't actually prove something that we can interpret as existence for the empty set. In informal proofs based (loosely) on ZFC, it's usually thought necessary to use an axiom of ZFC stating the existence of some set, e.g. the axiom of infinity, so that we can use the restricted axiom of comprehension to prove the existence of the empty set (as OP was suggesting). Here that is an artifact of the predicate calculus in use. What the proof can can be interpreted as proving is that if any sets exist then the empty set exists, rather than that the empty set exists. Quote by CRGreathouse The axiom of specialization is where the actual existence comes in here, via 19.20i and 19.22i. An interesting point. Statements 1 to 7 all contain free variables and so would be true for an empty domain with the usual interpretation of such statements as the universal closure. 
Only the last statement would be false, so it would seem the "actual existence" in this proof comes from using MP on an implication with a free variable in the antecedent but not in the consequent.

**JSuarez** (P: 403):

> The point I wanted to make is that if we don't make the assumption in the predicate calculus that the domain of sets is nonempty, the proof doesn't actually prove something that we can interpret as existence for the empty set.

In the classical predicate calculus, either pure or with non-logical axioms (such as the ones in ZFC), it is always assumed that the domain of interpretation is nonempty. If we drop this assumption we get something called "free logic", that has significant differences from the classical calculus; it's studied mainly by philosophers, but you may see a good review here: http://plato.stanford.edu/entries/logic-free/

The fact that the domain is nonempty does not imply set existence: any formal interpretation presupposes an amount of Set Theory, but that is at a level (called the metalanguage) above the theory (in this case ZFC) that we are discussing. Set existence must be proven within the theory, as a consequence of the axioms, not from the properties that we assume the metalanguage has. You may see it like this: suppose we didn't know anything about set theory, and an alien gives us ZFC's axioms, without any interpretation. We don't have a clue about the objects that the axioms are intended to refer to, but we may cook up some weird interpretation where they are all true but doesn't mention sets at all.

> An interesting point. Statements 1 to 7 all contain free variables and so would be true for an empty domain with the usual interpretation of such statements as the universal closure. Only the last statement would be false, so it would seem the "actual existence" in this proof comes from using MP on an implication with a free variable in the antecedent but not in the consequent.

The author himself states, in the second paragraph, that the proof doesn't obtain in free logics, because of ax-4, and what this one says is one of the things that fail in these logics; see the link above.

**Hurkyl** (Emeritus, PF Gold, P: 16,101):

> Quote by JSuarez: If we drop this assumption we get something called "free logic", that has significant differences from the classical calculus; it's studied mainly by philosophers, but you may see a good review here:

That's not true -- e.g. if you care anything about the category viewpoint on things, free logic is much preferred. E.g. excluding the empty model spoils the otherwise excellent algebraic properties of any variety of universal algebras without constant symbols.

> Quote by JSuarez: You may see it like this: suppose we didn't know anything about set theory, and an alien gives us ZFC's axioms, without any interpretation. We don't have a clue about the objects that the axioms are intended to refer to, but we may cook up some weird interpretation where they are all true but doesn't mention sets at all.

This depends on whether you read the word "set" as referring to some a priori intuitive notion, or referring to the type of the variables in the formal theory.

**JSuarez** (P: 403):

> That's not true

Well, it's certainly not false: free logics are defined as the ones where the ground terms and quantified variables may not denote anything, or denote outside the universe of discourse, and a particular case is where the domain is empty. But it's not clear, in your reply, what part of my statement you consider false; when I say that free logics are mainly studied by philosophers (or anyone who studies philosophical logic), I didn't intend to diminish it, but it's definitely not a part of the mainstream of logic, and the question of whether it's a fruitful line of research is yet to be determined (and the same can be said of the categorical/algebraic approach; no offense intended).

> This depends on whether you read the word "set" as referring to some a priori intuitive notion, or referring to the type of the variables in the formal theory.

This depends on whether you take "set", "class", "number" or whatever to belong to the metalanguage or the object language, as I said in the paragraph immediately before the one you quote.

**Hurkyl** (Emeritus, PF Gold, P: 16,101):

> Quote by JSuarez: But it's not clear, in your reply, what part of my statement you consider false; when I say that free logics are mainly studied by philosophers (or anyone who studies philosophical logic), I didn't intend to diminish it, but it's definitely not a part of the mainstream of logic, and the question of whether it's a fruitful line of research is yet to be determined (and the same can be said of the categorical/algebraic approach; no offense intended).

This was the part I was responding to. I will assert that, if nothing else, the convention of excluding empty models is sufficiently unimportant that authors don't find it worth mentioning when they adopt the opposite convention.

**JSuarez** (P: 403):

> I will assert that, if nothing else, the convention of excluding empty models is sufficiently unimportant that authors don't find it worth mentioning when they adopt the opposite convention.

I disagree: the restriction to nonempty domains (restriction, not convention, because it has significant implications regarding the philosophical question of the ontological commitments of logic) is stated explicitly in most introductory textbooks; in more advanced contexts, it's not explicitly mentioned because it's assumed that the intended readers are aware of it.
**Hurkyl** (Emeritus, PF Gold, P: 16,101):

(I hope Jerbearrrrrr doesn't mind our continued discussion)

> Quote by JSuarez: (restriction, not convention, because it has significant implications regarding the philosophical question of the ontological commitments of logic)

I wasn't so bold as to consider matters of great philosophical import; I was merely concerned with my mathematical applications: my groups can act on empty sets; my sheaves are sometimes supported on proper subsets; when I count linearly ordered sets, I get a nicer formula if there is one ordering on zero elements; in my varieties of universal algebras, there is a free algebra on zero elements, and the intersection of two sub-algebras is again a sub-algebra.

The point of view of your parenthetical seems... odd to me, although I can't quite articulate it. I think it's as if you are letting technicalities dictate your philosophical opinion, rather than adopting a neutral foundation, then laying your philosophical opinion on top of it. (Also, I don't see why you consider convention and restriction exclusive -- I would have called "restricting to nonempty domains" a convention.)

**Martin Rattigan** (P: 311):

The discussion has obviously moved on since the post I'm replying to, so this is probably not so relevant, but I'll post it anyway.

> Quote by JSuarez: In the classical predicate calculus, either pure or with non-logical axioms (such as the ones in ZFC), it is always assumed that the domain of interpretation is nonempty.

Agreed. Also with the metamath scheme. Taking the domain of interpretation as sets comes rather close to assuming what is to be proved in this case. At any rate, the use of the ZFC axioms in the proof is minimal. (As they're stated, that's unavoidable.)

> Quote by JSuarez: The author himself states, in the second paragraph, that the proof doesn't obtain in free logics, because of ax-4, and what this one says is one of the things that fail in these logics;

I failed to read that before posting, but I assume the author included it to make essentially the same point.

> Quote by JSuarez: The fact that the domain is nonempty does not imply set existence: any formal interpretation presupposes an amount of Set Theory, ...

I don't understand this. I wasn't considering a formal interpretation of the language, certainly not to take a set used for such as a confirmation of the existence of one of the sets handled by the theory. In the given system it is immediate to prove $\exists x(x=x)$ (without using the ZFC group of axioms). Here $x$ is referred to as a set variable. If you interpret the system so that the set variables refer to sets (as you presumably would for present purposes), would you not understand that formula to imply the existence of a set? If not, would you interpret the result proved, $\exists x\forall y\,\neg y\in x$, to imply the existence of an empty set with the same interpretation?

> Quote by JSuarez: Set existence must be proven within the theory, as a consequence of the axioms, not from the properties that we assume the metalanguage has.

If you're looking for a proof of set existence, in this case empty set existence, from the ZF axioms, it is obvious that you need to construct a proof from the axioms. Whether that is sufficient to convince you of the existence of the empty set would depend on whether you believe the axioms with your interpretation of the terms. If you were sceptical of the existence of any set, you would probably be disinclined to accept the metamath predicate calculus with the interpretation of the set variables (strictly the variables referred to by the set variables) as sets, because of the axiom of existence. On the other hand, you could take the view that should any set exist, the axioms and proof are convincing with the aforesaid interpretation. You could take this as a "proof", in the normal English sense, that if any set exists the empty set exists, but obviously this would not be a formal proof or result.

> Quote by JSuarez: If we drop this assumption we get something called "free logic", that has significant differences from the classical calculus; it's studied mainly by philosophers, but you may see a good review here: http://plato.stanford.edu/entries/logic-free/

Thanks for the reference. It looks very interesting. My mathematical studies seem to have been continually interrupted by Kings of France rearing their bald heads, starting with a long discussion at school concerning the continuity of the real square root function at the argument 0 according to the $\epsilon,\delta$ definition. Maybe this will help.

**JSuarez** (P: 403):

(Apologies to Jerbearrrrr, the OP, for a discussion that may not interest him, and to Hurkyl and Martin Rattigan for the late response: the last two days were impossible)

> I wasn't so bold as to consider matters of great philosophical import; I was merely concerned with my mathematical applications: my groups can act on empty sets; my sheaves are sometimes supported on proper subsets; when I count linearly ordered sets, I get a nicer formula if there is one ordering on zero elements; in my varieties of universal algebras, there is a free algebra on zero elements, and the intersection of two sub-algebras is again a sub-algebra.

I don't disagree with you in these examples, and can even throw a couple more: the natural numbers are much more naturally (bad choice of words) defined in terms of sets if we start from 0; many combinatorial counting problems admit simpler solutions if we also start from zero. But classical first-order logic doesn't work on empty domains.

> The point of view of your parenthetical seems... odd to me, although I can't quite articulate it. I think it's as if you are letting technicalities dictate your philosophical opinion, rather than adopting a neutral foundation, then laying your philosophical opinion on top of it.

My point of view is roughly this: I don't believe that there are neutral foundations; every choice we make is a philosophical one. Now, we can choose to leave our philosophical assumptions unexamined, or accept the ones that resist criticism best; I try to follow the latter. Regarding logic, my position is a naturalistic one: I follow Quine in accepting that classical first-order logic is the most effective and correct system of logic that we have today (I go a little further by also accepting a few fragments of second-order and modal logic); given this, the restriction to nonempty domains is more than a technicality: it's a commitment demanded by logic.

> Also, I don't see why you consider convention and restriction exclusive -- I would have called "restricting to nonempty domains" a convention

A restriction is a choice imposed by external conditions; in this case, it's the position that there is a preferred and correct logic. A convention is a choice between alternatives that are somewhat neutral: you can't argue forcefully that one should be preferred to another, so you choose the more, well, convenient one (again, bad choice of words). Given what I stated above, I don't consider that restriction a convention.

> Taking the domain of interpretation as sets comes rather close to assuming what is to be proved in this case.

No, it's not. The "sets" in the interpretation are in the metalanguage and the ones in ZFC in the object language; these are distinct languages on distinct levels. The metalanguage is "above" the object one and provides a semantics for it, but we cannot identify the two.
I can give you a more explicit example of this distinction: one of the more studied subsystems of classical logic is the intuitionistic fragment, which rejects some classical equivalences and rules of inference. Nevertheless, it's a complete system relative to Kripke models, but the more common completeness proofs are formulated in classical logic, using arguments that are not intuitionistically valid. This is not a contradiction, because the proofs are carried out in the (classical) metalanguage and their correctness is relative to it.

> Originally Posted by JSuarez: The fact that the domain is nonempty does not imply set existence: any formal interpretation presupposes an amount of Set Theory, ...
>
> I don't understand this. I wasn't considering a formal interpretation of the language, certainly not to take a set used for such as a confirmation of the existence of one of the sets handled by the theory.

As above.

> In the given system it is immediate to prove $\exists x(x=x)$

This is, in fact, a logically valid sentence (in classical logic, not in free logics): you don't need ZFC to prove it.

> would you not understand that formula to imply the existence of a set?

Sorry, I wasn't clear; when you say "set variable" you are already assuming an interpretation and, given that this is a logically valid sentence and the objects of your domain are sets, then yes. When I wrote that a nonempty domain doesn't imply set existence, it was regarding the metalanguage/object-language distinction above.

> $\exists x\forall y\,\neg y\in x$

This sentence, taken as an axiom, posits the existence of an empty set. You may ask why this is necessary if the other sentence above already implies the "existence" of a set; the reason is that we may not want to interpret ZFC, that is, we take it as a formal system with no semantics (ZFC here is the union of the "proper" ZFC axioms, plus all logically valid sentences).

> My mathematical studies seem to have been continually interrupted by Kings of France rearing their bald heads...

Where is Russell when we need him?
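For readers who want the informal route the OP sketched written out, here is one way to do it; the instance $q(z) :\equiv z \notin x$ is the OP's own suggestion, and step 1 is exactly where the nonempty-domain assumption discussed in the thread enters:

```latex
% Deriving (\exists y)(\forall z)\,\neg(z \in y) from separation plus
% the existence of at least one set.
\begin{enumerate}
  \item $(\exists x)(x = x)$ -- logically valid in the classical predicate
        calculus (nonempty domain); alternatively, take the set asserted
        to exist by the axiom of infinity.
  \item Fix such an $x$. Separation with $q(z) :\equiv z \notin x$ yields
        $(\exists y)(\forall z)\bigl(z \in y \leftrightarrow
        (z \in x \wedge z \notin x)\bigr)$.
  \item Fix such a $y$. For every $z$, the condition
        $z \in x \wedge z \notin x$ is a contradiction, so $\neg(z \in y)$.
  \item Generalize: $(\forall z)\,\neg(z \in y)$, and therefore
        $(\exists y)(\forall z)\,\neg(z \in y)$.
\end{enumerate}
```

Note that the contradictory predicate $z \neq z$ would work just as well in step 2; any choice making the right-hand side unsatisfiable gives the empty set.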
https://blog.economic-cybernetics.info/
# Recent Posts

### Statistical Analysis of Facebook Network of Friends

"The Future of Economics Uses the Science of Real-Life Social Networks" - PAUL OMEROD

The goal of this project is not to make a report, literature review or synthesis, but rather to get some hands-on experience in working with graphs and network data, based on some classical and original (own) datasets and problems. It will involve both some theoretical understanding and programming. The outcome would be to get comfortable with this type of data and maybe build the ground for some future research.

### King County Homes Challenge. Exploratory Data Analysis

The King County home-price prediction challenge is an excellent dataset for trying out and experimenting with various regression models. As we'll see in the following post on Moscow flats, the modeler deals with similar challenges: skewed data and outliers, highly correlated variables (predictors), heteroskedasticity and a geographical correlation structure. Ignoring any one of these may lead to underperforming models, so in this post we're going to carefully explore the dataset, which should inform which modeling strategy to choose.

### Understanding the Carathéodory Extension Theorem

In order to build adequate models of economic and other complex phenomena, we have to take into account their inherent stochastic nature. Data is just the appearance, an external manifestation of some latent processes (seen as random mechanisms). Even though we won't know the exact outcome for sure, we can model general regularities and relationships as a result of the large scale of the phenomena. For more ideas, see (Ruxanda 2011).

### CAPM and Eugene Fama's Devastating Critique

I was impressed by the down-to-earth debate between Eugene Fama and Richard Thaler. Their discussion was very insightful for making sense of what's going on with the Efficient Market Hypothesis, CAPM, the Fama and French 3-factor model, Markowitz, and where the field is moving.
This will be my last blog post on economics for a while, so expect lots of Machine Learning and Statistics topics next. This is a continuation that is supposed to add some missing pieces to the analysis done in Part I and Part II.

### Tiny Steps in Prospect Theory and Investment Decisions Part II

Last time we went through a rigorous process of eliciting prior beliefs about 5 stocks, exploratory data analysis and quite advanced descriptive stats. The last part of the assignment has the goal of drawing connections to behavioral-economics principles. A lesson learned for now is that there are many pitfalls even in the most innocent-looking questions.

Part IV. Portfolio Construction by Simulation

Before we dig in, I would like to suggest the following reading: "Please no, not another bias" by Jason Collins.

# Projects

Explorations, learning, research and lots of fun!

#### Dissertation: Bayesian Inference and Forecasting with applications in microeconometrics and business

A Bayesian hierarchical model for TV attribution, a hidden Markov model for panel customer data, forecasting demand for new products and short time series

#### Probabilistic Dynamic Modeling

Dynamic linear and nonlinear models. Stochastic filtering. Institute of Mathematics of the Romanian Academy research group

#### Time Series Clustering

Bizovi Mihai and Jumanazar Gurbanov on Dynamic Time Warping. An exploration of R implementations

#### Thesis: Stochastic Modeling and Bayesian Inference

A synthesis of three perspectives: stochastic modeling, machine learning and Bayesian inference. Gaussian processes and mixture models

#### New Economic Thinking for a Knowledge-Based Society

Bizovi Mihai. A critique of neoclassical economics and an exploration of models based on complexity science. With the support of dr. eng. Florin Munteanu.
http://www.researchgate.net/publication/1956797_Solving_Sparse_Symmetric_Diagonally-Dominant_Linear_Systems_in_Time_O_(m1.31)
Article

# Solving Sparse, Symmetric, Diagonally-Dominant Linear Systems in Time $O(m^{1.31})$

11/2003; Source: arXiv

ABSTRACT We present a linear-system solver that, given an $n$-by-$n$ symmetric positive semi-definite, diagonally dominant matrix $A$ with $m$ non-zero entries and an $n$-vector $b$, produces a vector $\tilde{x}$ within relative distance $\epsilon$ of the solution to $A x = b$ in time $O(m^{1.31} \log (n \kappa_{f}(A)/\epsilon)^{O(1)})$, where $\kappa_{f}(A)$ is the log of the ratio of the largest to smallest non-zero eigenvalue of $A$. In particular, $\log(\kappa_{f}(A)) = O(b \log n)$, where $b$ is the logarithm of the ratio of the largest to smallest non-zero entry of $A$. If the graph of $A$ has genus $m^{2\theta}$ or does not have a $K_{m^{\theta}}$ minor, then the exponent of $m$ can be improved to the minimum of $1 + 5\theta$ and $(9/8)(1+\theta)$. The key contribution of our work is an extension of Vaidya's techniques for constructing and analyzing combinatorial preconditioners.
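The paper's near-linear-time algorithm is not reproduced here, but as a rough illustration of the problem class, the sketch below solves a small symmetric, diagonally dominant tridiagonal system with plain conjugate gradients in pure Python. All names and numerical values are illustrative, and this baseline has none of the combinatorial preconditioning the paper is actually about:

```python
# A minimal conjugate-gradient solver for a symmetric tridiagonal system,
# stored as its diagonal and off-diagonal. NOT the paper's algorithm --
# just an unpreconditioned baseline on the same matrix class.

def matvec(diag, off, x):
    """y = A x for a symmetric tridiagonal A given by diag (len n) and off (len n-1)."""
    n = len(diag)
    y = [diag[i] * x[i] for i in range(n)]
    for i in range(n - 1):
        y[i] += off[i] * x[i + 1]
        y[i + 1] += off[i] * x[i]
    return y

def cg(diag, off, b, tol=1e-12, max_iter=1000):
    """Conjugate gradients starting from x = 0; tol bounds the squared residual."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x, with x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(diag, off, p)
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# Diagonally dominant (and positive definite): |2.1| >= |-1| + |-1| on every row.
n = 50
diag = [2.1] * n
off = [-1.0] * (n - 1)
b = [1.0] * n
x = cg(diag, off, b)
residual = matvec(diag, off, x)
err = max(abs(residual[i] - b[i]) for i in range(n))
print("max residual component:", err)
```

The iteration count of CG grows with the square root of the condition number; a Vaidya-style combinatorial preconditioner, the paper's contribution, is designed to keep that count small for this matrix class.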
https://math.stackexchange.com/questions/3860466/particle-on-a-slope-forces-vector-geometry-based-proof-formalisation
# Particle on a slope - forces: vector/geometry-based proof formalisation

I think this is better posted here than in Physics.SE because it is purely a geometry/vector problem. A particle (purple dot in the diagram) with mass $$m$$ kg rests in equilibrium because the coefficient of friction satisfies $$\mu > \tan\alpha$$, where $$0^\circ<\alpha<45^\circ$$ and $$0<\mu<1$$. I was wondering whether a force $$\underline{P}$$ at a positive clockwise angle $$\theta^\circ > 0$$ with the vertical could result in an acceleration of the particle down the slope. I think the answer is no.

$$\underline{R}$$ is the reaction force on the particle from the slope. As $$P$$ increases from $$0$$ N, the particle remains in equilibrium until the resultant force $$\underline{Res_{lim}}$$ on the particle down the slope, $$\underline{Res_{lim}} = m \underline{g} + \underline{R_{lim}} + \underline{P_{lim}},$$ is equal to the limiting friction $$\underline{F_{lim}}$$. I think we can then show that for $$\underline{P} > \underline{P_{lim}}$$ (and assuming the particle accelerates down the slope), the resultant force $$\underline{Res} = m \underline{g} + \underline{R} + \underline{P}$$ is strictly less than the friction force $$\underline{F}$$, using the facts that $$\underline{F} = \mu \underline{R}$$, $$\underline{F_{lim}} = \mu \underline{R_{lim}}$$, and that $$\theta$$ is a positive clockwise angle to the vertical. Then the resultant force would be up the slope, which would contradict the assumption that the particle accelerates down the slope, answering my original question.

I'm interested in how to formalise my proof with simple geometry and vectors. I've had a go and it shouldn't be that difficult, but I haven't managed to formalise it. My idea for a proof would be: $$\underline{P}$$ removes a greater proportion of $$\underline{Res_{lim}}$$ than it removes of $$\underline{R_{lim}}$$, but like I say, I find formalising this difficult. $$\underline{Res_{lim}} = \mu \underline{R_{lim}}$$. Now what?
• To make this a pure vector problem we need a mathematical understanding of the frictional force vector, which I'm not sure has been established here. In particular, if we let $\underline{P}=0$, then the magnitude of $m \underline{g}+ \underline{R}+ \underline{P}$ is greater than that of the vector $\underline{Res_{lim}}$, that is, greater than the limiting frictional force, which would cause the particle to accelerate down the slope in the absence of $\underline{P}$ (unless I misread what you mean by a "limiting frictional force"). Oct 11 '20 at 15:52

• Let me put this another way that might be clearer. The diagram shows a force $m\underline g$. Decompose this force into components perpendicular to the slope and parallel to it. The parallel component then is greater in magnitude than the vector $\underline{Res_{lim}}$ shown in the diagram. Oct 11 '20 at 16:40

• There is a constant $0<\mu<1$ such that $F = \mu R$, where $\underline{F}$ acts parallel to the slope and $\underline{R}$ acts perpendicular to the slope. I'm not sure the perpendicularity of $\underline{R}$ and $\underline{F}$ is necessary to solve the problem. Also, you say that $P=0 \implies |m \underline{g} + \underline{R} + \underline{P}|>Res_{lim}$, but as the diagram shows, this is clearly false: they are equal. Limiting friction $F_{lim}= \mu R_{lim}$ is the magnitude of friction when the box is on the verge of moving, and $F=\mu R$ is also assumed true when the box is moving. Oct 11 '20 at 16:41

• I didn't change my comment, I was still editing it to correct mistakes. I'm a slow typist, it seems. If the box is stationary and not in limiting equilibrium, $F < \mu R$. Friction by definition opposes the direction of motion, or the motion that the object has a tendency to move in, i.e. the direction the object would move in if there were no friction. In response to your second comment, when $P = 0$, $Res \neq Res_{lim}$, if that's what you were saying. When $P=0$, $Res > Res_{lim}$.
So I'm not sure what you're getting at. Yes, when $P=0$ we get the right-angled triangle. Your point is what? Oct 11 '20 at 16:54

• Yes, I was already aware of some (though not all) of what you wrote in your answer, and was going to add on some of the information in your answer to my own when I could be bothered at a later date, but it looks like you've done my work (and more) for me, so thanks. Plus, your diagrams are much nicer than mine. Oct 11 '20 at 21:17

This may be overly detailed in light of the answer already posted, but I had already made the figures so I suppose I might as well try to explain them. I will interpret the figure in the question as showing a particle in contact with a surface that is immovable and cannot be deformed (so the particle cannot move to any point "under" the inclined line) and rough (so that there is friction).

It may help to work out everything in a coordinate system in which one axis is parallel to the surface and the other is perpendicular to it. Then every force vector decomposes neatly into components parallel to the coordinate axes as illustrated in the figure below:

Now for a little excursion into the physics of the situation in order to define things so that we can eventually get back to the mathematics: We will suppose that $$\vec S$$ in this figure represents the vector sum of all forces on the particle except for the forces exerted by the surface, that the component of $$\vec S$$ perpendicular to the surface is $$\vec S_\perp$$ and that the component of $$\vec S$$ parallel to the surface is $$\vec S_\parallel.$$ If the direction of $$\vec S_\perp$$ is "into" the surface (as it is in the figure) then the surface exerts a normal force $$\vec R$$ equal and opposite to $$\vec S_\perp$$ and a frictional force $$\vec F$$ opposite to the direction of $$\vec S_\parallel.$$ If the particle is initially at rest relative to the surface and $$\lvert\vec S_\parallel\rvert < \mu_s\lvert\vec R\rvert,$$ where $$\mu_s$$ (the static
coefficient of friction) is a constant determined by the quality of the surfaces in contact, $$\vec F$$ is equal in magnitude to $$\vec S_\parallel$$ and the particle does not move. Otherwise $$\vec F$$ is smaller than $$\vec S_\parallel$$ and the particle slides in the direction of $$\vec S_\parallel.$$ But if $$\vec S_\perp$$ is zero or is "away from" the surface then the surface does not interfere with the motion of the particle, which is simply accelerated in the direction of $$\vec S.$$ I think that ends the excursion into physics. Now if we let $$S_\perp = \lvert\vec S_\perp\rvert$$ and $$S_\parallel = \lvert\vec S_\parallel\rvert$$, and if we take the directions of the coordinate system so that $$S_\perp$$ is positive when the direction of $$\vec S_\perp$$ is "into" the surface, then $$S_\perp = \lvert\vec S\rvert \cos\theta$$ and $$S_\parallel = \lvert\vec S\rvert \sin\theta$$ where $$\theta$$ is the angle between $$\vec S$$ and the direction of $$\vec S_\perp.$$ So the condition $$\lvert\vec S_\parallel\rvert < \mu_s\lvert\vec R\rvert$$ becomes $$\lvert\vec S\rvert \lvert\sin\theta\rvert < \mu_s \lvert\vec S\rvert \lvert \cos\theta \rvert,$$ and under the conditions that $$\lvert\vec S\rvert \neq 0$$ and $$\cos\theta \neq 0,$$ this is equivalent to $$\lvert\tan\theta \rvert < \mu_s.$$ So this means that for $$\lvert\vec S\rvert > 0,$$ the equilibrium conditions are obtained when the direction of $$\vec S$$ is within the angle $$\alpha_0$$ of the normal vector "into" the surface, where $$\tan\alpha_0 = \mu_s$$, as shown below. For an inclined plane in the presence of gravity, we get a force (the weight of the particle) along the "physically vertical line" in the figure below, whose angle from the normal vector is $$\alpha,$$ the same as the angle of inclination of the plane. 
In order for the particle to be at rest when influenced only by its weight and the forces exerted by the surface, the angle of the "physically vertical line" from the normal to the surface must be $$\alpha_0$$ or less.

If we now add a force $$\vec P$$ directed to the right of the "physically vertical line", we get a sum force $$\vec S$$ which in the case shown in the figure is in a "no motion" direction. Other possible directions and magnitudes of $$\vec P$$ could produce $$\vec S$$ in a "slides rightward" direction or in a "flies away rightward" direction (actually rightward of the "physically vertical line", not just rightward of the normal vector), but never in a "slides leftward" direction. To make this a more rigorous solution, translate these graphical concepts into inequalities involving the angles and magnitudes of the vectors.

• Actually I think I get all of this. It's quite nice and informative. I think my answer answers the question I posed better (to give a proof). But your departure into physics was a creative way of looking at the problem. It reminds me of this problem: when a particle is on a slope, what's the range of angles from the perpendicular to the slope that an *infinite*/very large force could take such that the particle remains in equilibrium? And then you think about it, and the mass of the particle and the direction of the slope don't matter: just the angle the infinite force makes to the perpendicular! Oct 11 '20 at 21:38

• You might want to wait a bit to see if anyone else chimes in, but other than that I think you're ready to accept your own answer. (No sense in leaving this question in the "unanswered" state for too long!) Oct 11 '20 at 22:50

• I agree but I can't accept my own answer until 2 days' time, apparently, lol. Oct 11 '20 at 22:51

• I guess they really want to enforce the "wait a bit" part. :-) In two days, then! Oct 11 '20 at 22:53

• Thanks for your help on the question!
Oct 12 '20 at 9:07

As David K pointed out in the comments, the solution is simple. When $$P = 0$$ (and $$\mu < \tan\alpha$$), the normal reaction is $$R = mg\cos\alpha$$, so the component of weight down the slope satisfies $$mg\sin\alpha > Res_{lim} = F_{lim} = \max\{F\} = \mu mg\cos\alpha \implies$$ positive acceleration down the slope when $$P=0$$, contradicting static equilibrium when $$P=0$$.

More interesting is when $$\mu = \tan \alpha$$, so that when $$P=0$$, the object starts at rest in static equilibrium. Again, we look at the right-angled triangle formed by $$\underline{R_{lim}}, \ \underline{Res_{lim}}$$ and $$m\underline{g}$$ when $$P=0$$ and compare it to the right-angled triangle formed by $$\underline{\Delta R}, \ \underline{\Delta Res}$$ and $$\underline{P}$$.

$$F = \mu R \quad \text{and} \quad R = R_{lim} - \Delta R, \ \therefore F = \mu(R_{lim} - \Delta R).$$

$$\Delta F = F_{lim} - F = \mu R_{lim} - \mu (R_{lim} - \Delta R) = \mu \Delta R = \Delta R\tan \alpha \implies \frac{\Delta F}{\Delta R} = \tan \alpha.$$

However, for the right-angled triangle formed by $$\underline{\Delta R}, \ \underline{\Delta Res}$$ and $$\underline{P}$$, we have $$\frac{\Delta F}{\Delta R} = \tan(\alpha - \theta) \neq \tan \alpha$$, because $$\theta > 0 ^\circ.$$

So no: if a particle is at rest in static equilibrium on an inclined plane, then if $$P$$ is pointing in the same quadrant as up the slope (and also if $$\theta = 0$$, i.e. $$P$$ is vertical), the particle cannot accelerate down the slope.
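As a numerical sanity check on this conclusion (not part of the original thread), here is a short Python sketch. The conventions are my assumptions: $$\mu = \tan\alpha$$ (limiting equilibrium at $$P=0$$), and $$\theta$$ measures the tilt of $$\underline{P}$$ from the upward vertical toward the up-slope direction, so $$\underline{P}$$ makes angle $$\alpha + \theta$$ with the outward normal:

```python
import math

def slides_down(alpha_deg, theta_deg, P_over_mg, tol=1e-12):
    """True if the particle would accelerate down the slope.

    alpha_deg: slope angle; we take mu = tan(alpha), so the particle
               is in limiting equilibrium when P = 0 (assumption).
    theta_deg: tilt of P from the upward vertical toward up-slope
               (assumed convention), so P makes angle alpha + theta
               with the outward normal.
    P_over_mg: magnitude of P as a fraction of the weight m*g.
    """
    a = math.radians(alpha_deg)
    phi = a + math.radians(theta_deg)  # angle of P from outward normal
    mu = math.tan(a)
    P_n = P_over_mg * math.cos(phi)    # component away from the surface
    P_s = P_over_mg * math.sin(phi)    # component up the slope
    R = math.cos(a) - P_n              # normal reaction, per unit weight
    if R <= 0:
        return False                   # particle leaves the surface instead
    driving = math.sin(a) - P_s        # net down-slope force, per unit weight
    return driving > mu * R + tol      # does it beat limiting friction?

# No tilt theta >= 0 and no magnitude of P produces down-slope sliding:
assert not any(slides_down(30, th, p)
               for th in range(0, 91, 5)
               for p in (0.1, 0.5, 1.0, 2.0))
```

The small tolerance guards the boundary case $$\theta = 0$$, where the friction limit is met exactly in floating point and the particle remains in (limiting) equilibrium rather than sliding.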