url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
|---|---|---|---|
https://www.taylorfrancis.com/books/9780429097218/chapters/10.1201/b15059-10
|
chapter 7
2-Dimensional Data Sets
Pages 24
Some data sets fit much more naturally into a multidimensional structure than into related lists or 1-dimensional data sets. The benefit of organizing appropriate data into a multidimensional structure is that, by taking advantage of relationships between the elements, analysts can apply an additional suite of techniques. These techniques will sometimes reveal features in the multidimensional data set that wouldn’t otherwise have been apparent. Multidimensional data sets should not be intimidating – in fact, people use several multidimensional data sets in their everyday lives. For example, many tables fall into this category of structure, such as a table of weather data in which the horizontal headings are the day of the week, the vertical headings are the time of day, and the table elements are temperature or the chance of precipitation.
|
2019-09-15 20:32:22
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8363590836524963, "perplexity": 866.5089958799824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572289.5/warc/CC-MAIN-20190915195146-20190915221146-00253.warc.gz"}
|
http://math.stackexchange.com/questions/69196/a-plane-geometry-problem
|
# A plane Geometry Problem
The triangle $ABC$ has $CA=CB$, circumcenter $O$ and incenter $I$. The point $D$ on $BC$ is such that $DO$ is perpendicular $BI$. Show that $DI$ is parallel to $AC$.
-
Is this homework? Do you have any work to show? – Henning Makholm Oct 2 '11 at 5:29
It's not a homework I have been asked by friend long back. I couldn't solve it. So I posted it here. – Ramana Venkata Oct 2 '11 at 8:22
Hint: Since $CA=CB$, the three points $I$, $O$ and $C$ are on the same line bisecting $\angle ACB$.
Proof. Let $E$ be the intersection of $BI$ and $OD$; set $\alpha=\angle ABI$. Since $\angle EIO=90^{\circ}-\alpha$, $\angle EOI=\alpha$. $\angle IBD$ is also $\alpha$, so the four points $I$, $B$, $D$, $O$ lie on the same circle. It follows that $\angle OID=\angle OBD=\angle OCD=90^{\circ}-2\alpha$. This and $\angle ACO=90^{\circ}-2\alpha$ show that $AC$ is parallel to $DI$.
Added: The above proof is for when $O$ is between $I$ and $C$. When $I$ is between $O$ and $C$, it is necessary to change the wording a bit, but a similar argument shows that the four points $I$, $B$, $D$, $O$ lie on the same circle, from which we have $AC // DI$.
|
2015-08-28 06:11:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8858621716499329, "perplexity": 82.82844000424333}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644060413.1/warc/CC-MAIN-20150827025420-00115-ip-10-171-96-226.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/control-system-question.594881/
|
# Control system question
1. Apr 9, 2012
### erezb84
1. The problem statement, all variables and given/known data
I have the following matrix:
A=[0 -1; 0 -1]
and i need to calculate: exp(At) in several ways, 2 of them are using the reverse Laplace transform and using: I + Ʃ(A^kt^k)/k!
i have tried to start the series but i am getting an expression that i can't say which series it is,
and when i try with Laplace i get the one of the matrix expressions is a step function..
i will apreaciate the help...
thanks!
2. Apr 9, 2012
### RoshanBBQ
What are you getting for
$$(sI-A)^{-1}$$
3. Apr 9, 2012
### erezb84
this is what i get. but i can't find the reverse transform...
File size:
29 KB
Views:
63
4. Apr 9, 2012
### RoshanBBQ
You are dividing by the wrong thing. Divide by
$$s(s+1)$$
$$s(s+1)-1$$
5. Apr 9, 2012
### erezb84
but
$$s(s+1)-1$$
is the deteminante..
in oreder to reverse 2*2 matrix i do this:
[a b ; c d]^-1 = [d -b; -c a] * 1/det
no?
6. Apr 9, 2012
1*0 = 0
7. Apr 9, 2012
### erezb84
daaaam! right, thanks!
|
2017-08-23 19:38:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.452556848526001, "perplexity": 6202.999859451903}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886123359.11/warc/CC-MAIN-20170823190745-20170823210745-00628.warc.gz"}
|
https://www.inference.vc/sock_gnomes_versus_laws_of_large_numbers/
|
March 6, 2012
# Sock Gnomes versus Laws of Large Numbers
Whilst finishing one of my previous posts on causality and influence, I was pairing up socks that just came out of the laundry. I was outraged to observe that, yet again, I was hardly able to pair up over half of the socks. "This cannot happen by chance, we must have sock gnomes, who deliberately mess up your socks" - I was thinking to myself. But then I realised I may be wrong, maybe it can happen by chance. Maybe it's in fact expected to happen. Perhaps I should be more surprised if I could perfectly pair up all socks. So I started looking into this problem:
Let's say we have $n$ perfectly distinguishable pairs of socks in our laundry bag. We decide to do our laundry, but because our washing machine has finite capacity, we can only wash $k \leq 2n$ pieces of socks. We select the particular socks to be washed by drawing from the laundry bag uniformly at random, without replacement. After this procedure in our sample of $k$ socks there will be a number $Y_{n,k}$ of them that won't have their pairs in the sample. What is the expected value of $Y_{n,k}$
First I wanted to give an analytic answer, but I had to many other things to use my brainpower for. So this blog post is based on random simulations and Monte Carlo estimates, that I could quickly hack together. (For those of you who doesn't know, Monte Carlo is a fancy name for simulating a bunch of random outcomes and then taking averages)
I came up with a visualisation to explain what's going on. Here's a simple (and ugly) version of plot if we have only $n=2$ pairs of socks, with explanation below.
2 pairs of socks means a total of 4 pieces in our laundry bag that we may select to wash. The x axis shows the number of socks washed , $k$, which therefore ranges between $0$ and $4$. The y axis shows the number of socks left unpaired. The gray rectangles show the distribution of unpaired socks with white meaning probability $0$, black meaning probability $1$ (see colorbar on the right). Let me walk you through this figure.
• If we wash 0 socks, with probability 1 we have 0 unpaired socks, hence the black rectangle at (0,0), meaning if we wash 0 socks, with probability 1 we have 0 unpaired socks.
• If we wash a single sock, we obviously can't pair it up with anything, thus with probability one, we have 1 unpaired sock, hence the black rectangle at (1,1).
• If we wash 2 pieces, than two things can happen. With probability $\approx0.65$ those two won't be pairs of each other, so we're left with two unpaired socks, hence the dark gray rectangle at (2,2); with probability $\approx0.35$ we can actually pair them up and we're left with 0 unpaired socks, hence the lighter gray rectangle at (2,0). It's impossible to have a single unpaired sock, hence (2,1) is white.
I think by now you probably got it or left the blog. It is simple to see that graph like this will always be symmetric.
The blue dotted line shows the maximum of the number of unpaired socks, it's easy to show you can never have more than $min(k,2n-k)$ unpaired socks. So everything above the blue dotted line should be white. The red line shows the expected number of unpaired socks as a function of $k$, the number of pieces we have washed (remember, $n$ is now fixed)
Let's look at a bit more complicated case. Below is the same kind of graph for the case we have $n=10$ pairs of socks. Notice the checkerboard-like pattern which due to the fact that the parity of the number of unpaired socks always corresponds to the parity of the number of socks washed. Pretty pleasing to the eye, isn't it?
The expected number of socks starts to be a smoother function of $k$, and it becomes even smoother as we further increase this number to 50, as the following figure shows:
The shape of the red curve looks the same as beore, the variance around this mean however has decreased substantially. If we further increase the number of socks to 200 pairs, a law of large numbers kicks in and the variance practically shrinks to 0:
Here is a little gif putting together for growing values of $n$:
.
So what's the answer to my frustration? Well, based on Monte Carlo simulations I conjecture that the following law of large numbers holds for sock pairing:
As $n \rightarrow \infty$ and $k \rightarrow \infty$ but such that $\lambda = \frac{k}{n2}$ is fixed, the fraction of unpaired socks converges to a deterministic value with probability one:
$$\frac{Y_{n,k}}{2n} \stackrel{1}{\rightarrow} \lambda\left(1-\lambda\right)$$
So, the expected fraction of socks that you can't pair up can be as high as half of the total number of socks you have washed, so my analysis allowed me to rule out the alternative hypothesis of sock gnomes messing up my socks.
Take home message #1: Before you start believing in sock gnomes, satan, god, and other supernatural powers, check if events that seem unlikely really cannot be explained by more parsimonious models of probability.
**Take home message #2:**If you want to be able to pair up your socks, buy loads of the same colour and you won't have such problems.
|
2020-09-24 01:41:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7776491641998291, "perplexity": 516.2618955206839}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400213006.47/warc/CC-MAIN-20200924002749-20200924032749-00381.warc.gz"}
|
https://socratic.org/questions/how-do-you-find-the-slope-and-y-intercept-for-x-2y-4
|
How do you find the slope and y intercept for x+2y=4 ?
May 27, 2015
The slope intercept form for a linear equation is
$y = m x + b$
where $m$ is the slope and $b$ is the y-intercept.
Re-writing $x + 2 y = 4$
$2 y = - x + 4$
$y = \left(- \frac{1}{2}\right) x + 2$
which is slope-intercept form
with slope $\left(- \frac{1}{2}\right)$ and y-intercept $2$
|
2020-01-21 16:46:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5870056748390198, "perplexity": 647.9008656216964}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250604849.31/warc/CC-MAIN-20200121162615-20200121191615-00238.warc.gz"}
|
http://czytanki.net/channels/referaty-z-computer-science-na-arxivorg?page=193
|
Instrukcja korzystania z Biblioteki
### Serwisy:
Ukryty Internet | Wyszukiwarki specjalistyczne tekstów i źródeł naukowych | Translatory online | Encyklopedie i słowniki online
### Translator:
Kosmos
Astronomia Astrofizyka
Inne
Kultura
Sztuka dawna i współczesna, muzea i kolekcje
Metoda
Metodologia nauk, Matematyka, Filozofia, Miary i wagi, Pomiary
Materia
Substancje, reakcje, energia
Fizyka, chemia i inżynieria materiałowa
Człowiek
Antropologia kulturowa Socjologia Psychologia Zdrowie i medycyna
Wizje
Przewidywania Kosmologia Religie Ideologia Polityka
Ziemia
Geologia, geofizyka, geochemia, środowisko przyrodnicze
Życie
Biologia, biologia molekularna i genetyka
Cyberprzestrzeń
Technologia cyberprzestrzeni, cyberkultura, media i komunikacja
Działalność
Wiadomości | Gospodarka, biznes, zarządzanie, ekonomia
Technologie
Budownictwo, energetyka, transport, wytwarzanie, technologie informacyjne
# Referaty z Computer Science na arXiv.org
## High dimensional Sparse Gaussian Graphical Mixture Model. (arXiv:1308.3381v1 [stat.ML])
This paper considers the problem of networks reconstruction from
heterogeneous data using a Gaussian Graphical Mixture Model (GGMM). It is well
known that parameter estimation in this context is challenging due to large
numbers of variables coupled with the degeneracy of the likelihood. We propose
as a solution a penalized maximum likelihood technique by imposing an $l_{1}$
penalty on the precision matrix. Our approach shrinks the parameters thereby
resulting in better identifiability and variable selection. We use the
Expectation Maximization (EM) algorithm which involves the graphical LASSO to
estimate the mixing coefficients and the precision matrices. We show that under
certain regularity conditions the Penalized Maximum Likelihood (PML) estimates
are consistent. We demonstrate the performance of the PML estimator through
simulations and we show the utility of our method for high dimensional data
analysis in a genomic application.
2013/08/16 - 21:07
## An axiomatic study of objective functions for graph clustering. (arXiv:1308.3383v1 [cs.CV])
We investigate axioms that intuitively ought to be satisfied by graph
clustering objective functions. Two tailored for graph clustering objectives
are introduced, and the four axioms introduced in previous work on distance
based clustering are reformulated and generalized for the graph setting. We
show that modularity, a standard objective for graph clustering, does not
satisfy all these axioms. This leads us to consider adaptive scale modularity,
a variant of modularity, that does satisfy the axioms. Adaptive scale
modularity has two parameters, which give greater control over the clustering.
Standard graph clustering objectives, such as normalized cut and unnormalized
cut, are obtained as special cases of adaptive scale modularity. We furthermore
show that adaptive scale modularity does not have a resolution limit. In
general, the results of our investigation indicate that the considered axioms
cover existing good' objective functions for graph clustering, and can be used
to derive an interesting new family of objectives.
2013/08/16 - 21:07
## Consistency Analysis of Sensor Data Distribution. (arXiv:1308.3384v1 [cs.NI])
In this paper we analyze the probability of consistency of sensor data
distribution systems (SDDS), and determine suitable evaluation models. This
problem is typically difficult, since a reliable model taking into account all
parameters and processes which affect the system consistency is unavoidably
very complex. The simplest candidate approach consists of modeling the state
sojourn time, or holding time, as memoryless, and resorting to the well known
solutions of Markovian processes. Nevertheless, it may happen that this
approach does not fit with some working conditions. In particular, the correct
modeling of the SDDS dynamics requires the introduction of a number of
parameters, such as the packet transfer time or the packet loss probability,
the value of which may determine the suitability of unsuitability of the
Markovian model. Candidate alternative solutions include the Erlang phase-type
approximation of nearly constant state holding time and a more refined model to
account for overlapping events in semi-Markov processes.
2013/08/16 - 21:07
## Models of on-line social networks. (arXiv:1308.3388v1 [cs.SI])
We present a deterministic model for on-line social networks (OSNs) based on
transitivity and local knowledge in social interactions. In the Iterated Local
Transitivity (ILT) model, at each time-step and for every existing node $x$, a
new node appears which joins to the closed neighbour set of $x.$ The ILT model
provably satisfies a number of both local and global properties that were
observed in OSNs and other real-world complex networks, such as a densification
power law, decreasing average distance, and higher clustering than in random
graphs with the same average degree. Experimental studies of social networks
demonstrate poor expansion properties as a consequence of the existence of
communities with low number of inter-community edges. Bounds on the spectral
gap for both the adjacency and normalized Laplacian matrices are proved for
graphs arising from the ILT model, indicating such bad expansion properties.
The cop and domination number are shown to remain the same as the graph from
the initial time-step $G_0$, and the automorphism group of $G_0$ is a subgroup
of the automorphism group of graphs generated at all later time-steps. A
randomized version of the ILT model is presented, which exhibits a tuneable
densification power law exponent, and maintains several properties of the
deterministic model.
2013/08/16 - 21:07
## Guiding Designs of Self-Organizing Swarms: Interactive and Automated Approaches. (arXiv:1308.3400v1 [cs.NE])
Self-organization of heterogeneous particle swarms is rich in its dynamics
of kinetically distinct particles are involved. In this chapter, we discuss how
we have been addressing this problem by (1) utilizing and enhancing interactive
evolutionary design methods and (2) realizing spontaneous evolution of self
organizing swarms within an artificial ecosystem.
2013/08/16 - 21:07
## On Some Recent MAX SAT Approximation Algorithms. (arXiv:1308.3405v1 [cs.DS])
Recently a number of randomized 3/4-approximation algorithms for MAX SAT have
been proposed that all work in the same way: given a fixed ordering of the
variables, the algorithm makes a random assignment to each variable in
sequence, in which the probability of assigning each variable true or false
depends on the current set of satisfied (or unsatisfied) clauses. To our
knowledge, the first such algorithm was proposed by Poloczek and Schnitger; Van
Zuylen subsequently gave an algorithm that set the probabilities differently
and had a simpler analysis. She also set up a framework for deriving such
algorithms. Buchbinder, Feldman, Naor, and Schwartz, as a special case of their
work on maximizing submodular functions, also give a randomized
3/4-approximation algorithm for MAX SAT with the same structure as these
previous algorithms. In this note we give a gloss on the Buchbinder et al.
algorithm that makes it even simpler, and show that in fact it is equivalent to
the previous algorithm of Van Zuylen. We also show how it extends to a
deterministic LP rounding algorithm; such an algorithm was also given by Van
Zuylen.
2013/08/16 - 21:07
## 3D Printing for Math Professors and Their Students. (arXiv:1308.3420v1 [math.HO])
In this primer, we will describe a number of projects that can be completed
with a 3D printer, particularly by mathematics professors and their students.
For many of the projects, we will utilize Mathematica to design objects that
mathematicians may be interested in printing. Included in the projects that are
described is a method to acquire data from an XBox Kinect.
2013/08/16 - 21:07
## Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation. (arXiv:1308.3432v1 [cs.LG])
Stochastic neurons and hard non-linearities can be useful for a number of
reasons in deep learning models, but in many cases they pose a challenging
problem: how to estimate the gradient of a loss function with respect to the
input of such stochastic or non-smooth neurons? I.e., can we "back-propagate"
through these stochastic neurons? We examine this question, existing
approaches, and compare four families of solutions, applicable in different
settings. One of them is the minimum variance unbiased gradient estimator for
stochatic binary neurons (a special case of the REINFORCE algorithm). A second
approach, introduced here, decomposes the operation of a binary stochastic
neuron into a stochastic binary part and a smooth differentiable part, which
approximates the expected effect of the pure stochatic binary neuron to first
order. A third approach involves the injection of additive or multiplicative
noise in a computational graph that is otherwise differentiable. A fourth
approach heuristically copies the gradient with respect to the stochastic
output directly as an estimator of the gradient with respect to the sigmoid
argument (we call this the straight-through estimator). To explore a context
where these estimators are useful, we consider a small-scale version of {\em
conditional computation}, where sparse stochastic units form a distributed
representation of gaters that can turn off in combinatorially many ways large
chunks of the computation performed in the rest of the neural network. In this
case, it is important that the gating units produce an actual 0 most of the
time. The resulting sparsity can be potentially be exploited to greatly reduce
the computational cost of large deep networks for which conditional computation
would be useful.
2013/08/16 - 21:07
## Identification of hybrid node and link communities in complex networks. (arXiv:1308.3438v1 [cs.SI])
Identification of communities in complex networks has become an effective
means to analysis of complex systems. It has broad applications in diverse
areas such as social science, engineering, biology and medicine. Finding
communities of nodes and finding communities of links are two popular schemes
for network structure analysis. These schemes, however, have inherent drawbacks
and are often inadequate to properly capture complex organizational structures
in real networks. We introduce a new scheme and effective approach for
identifying complex network structures using a mixture of node and link
communities, called hybrid node-link communities. A central piece of our
approach is a probabilistic model that accommodates node, link and hybrid
node-link communities. Our extensive experiments on various real-world
networks, including a large protein-protein interaction network and a large
semantic association network of commonly used words, illustrated that the
scheme for hybrid communities is superior in revealing network characteristics.
Moreover, the new approach outperformed the existing methods for finding node
2013/08/16 - 21:07
## Palindrome Recognition In The Streaming Model. (arXiv:1308.3466v1 [cs.DS])
In the Palindrome Problem one tries to find all palindromes (palindromic
substrings) in a given string. A palindrome is defined as a string which reads
forwards the same as backwards, e.g., the string "racecar". A related problem
is the Longest Palindromic Substring Problem in which finding an arbitrary one
of the longest palindromes in the given string suffices. We regard the
streaming version of both problems. In the streaming model the input arrives
over time and at every point in time we are only allowed to use sublinear
space. The main algorithms in this paper are the following: The first one is a
one-pass randomized algorithm that solves the Palindrome Problem. It has an
additive error and uses $O(\sqrt n$) space. The second algorithm is a two-pass
algorithm which determines the exact locations of all longest palindromes. It
uses the first algorithm as the first pass. The third algorithm is again a
one-pass randomized algorithm, which solves the Longest Palindromic Substring
Problem. It has a multiplicative error using only $O(\log(n))$ space. We also
give two variants of the first algorithm which solve other related practical
problems.
2013/08/16 - 21:07
## Security Type Systems as Recursive Predicates. (arXiv:1308.3472v1 [cs.CR])
We show how security type systems from the literature of language-based
noninterference can be represented more directly as predicates defined by
structural recursion on the programs. In this context, we show how our uniform
syntactic criteria from previous work cover several previous type-system
soundness results.
2013/08/16 - 21:07
## Application Behavior Enforcement Based On Network Characteristics. (arXiv:1308.3481v1 [cs.NI])
Every device defines the behavior of various applications running on it with
the help of user profiles or settings. What if a user wants different different
applications to run in different different networks. Then he cannot do this
because in todays existing operating systems the application profile, user
specific tool settings and any such system usage settings do not consider the
current network characteristics of the user or system and are therefore static
in nature.Therefore there is a need for an intelligent system which will
dynamically change or apply changes to the settings of various applications
running on the system considering the current network characteristics based on
user specifications. This paper presents an idea such that a user can set
different different applications in different different networks.This paper
presents how the user will get pop up messages when he visits to safe web
sites. And according to the users current network status he will get a pop up
message when he will go for download or stream audio or video files.
2013/08/16 - 21:07
## Privatizing user credential information of Web services in a shared user environment. (arXiv:1308.3482v1 [cs.CR])
User credentials security is one of the most important tasks in Web World.
Most Web sites on the Internet that support user accounts store the users
credentials in a database. Now a days, most of the web browsers offer auto
login feature for the favorite web sites such as yahoo, google, gmail etc.
using these credential information. This facilitates the misuse of user
credentials. Privatizing user credential information of web services in a
shared user environment provides a feature enhancement where the root user will
be able to privatize his stored credentials by enforcing some masking
techniques such that even a user logs on to the system with root user
credentials, he will not be able to access privatized data. In case of web
browsers auto login feature, a root user can disable the feature manually by
deleting entries from web browsers' saved password list. But this involves
spending a considerable amount of time and the biggest problem is that he has
to insert those credentials once again when he next visits these websites. This
mode. The application includes two parts: Masked Application Mode and Disabling
the Masked Application Mode. When the system goes for masked application mode,
the other user will not be able to use the credentials of the root user.If the
other user tries to access any of the web pages which have been masked, the
other user will have to authenticate with his own credentials. Disabling the
masked mode requires authentication from the root user. As long as this
credential is not shared, masked mode can be disabled only by the root user.
2013/08/16 - 21:07
## Information sharing promotes prosocial behaviour. (arXiv:1308.3485v1 [physics.soc-ph])
More often than not, bad decisions are bad regardless of where and when they
are made. Information sharing might thus be utilized to mitigate them. Here we
show that sharing the information about strategy choice between players
residing on two different networks reinforces the evolution of cooperation. In
evolutionary games the strategy reflects the action of each individual that
warrants the highest utility in a competitive setting. We therefore assume that
identical strategies on the two networks reinforce themselves by lessening
their propensity to change. Besides network reciprocity working in favour of
cooperation on each individual network, we observe the spontaneous emerge of
correlated behaviour between the two networks, which further deters defection.
If information is shared not just between individuals but also between groups,
the positive effect is even stronger, and this despite the fact that
information sharing is implemented without any assumptions with regards to
content.
2013/08/16 - 21:07
## lp-Recovery of the Most Significant Subspace among Multiple Subspaces with Outliers. (arXiv:1012.4116v3 [stat.ML] UPDATED)
We assume data sampled from a mixture of d-dimensional linear subspaces with
spherically symmetric distributions within each subspace and an additional
outlier component with spherically symmetric distribution within the ambient
space (for simplicity we may assume that all distributions are uniform on their
corresponding spheres). We also assume mixture weights for the different
components. We say that one of the underlying subspaces of the model is most
significant if its mixture weight is higher than the sum of the mixture weights
of all other subspaces. We study the recovery of the most significant subspace
by minimizing the lp-averaged distances of data points from d-dimensional
subspaces, where p>0. Unlike other lp minimization problems, this minimization
is non-convex for all p>0 and thus requires different methods for its analysis.
We show that if 0<p<=1, then for any fraction of outliers the most significant
subspace can be recovered by lp minimization with overwhelming probability
(which depends on the generating distribution and its parameters). We show that
when adding small noise around the underlying subspaces the most significant
subspace can be nearly recovered by lp minimization for any 0<p<=1 with an
error proportional to the noise level. On the other hand, if p>1 and there is
more than one underlying subspace, then with overwhelming probability the most
significant subspace cannot be recovered or nearly recovered. This last result
does not require spherically symmetric outliers.
2013/08/16 - 21:07
## Scalar Reconciliation for Gaussian Modulation of Two-Way Continuous-Variable Quantum Key Distribution. (arXiv:1308.1391v2 [quant-ph] UPDATED)
The two-way continuous-variable quantum key distribution (CVQKD) systems
allow higher key rates and improved transmission distances over standard
telecommunication networks in comparison to the one-way CVQKD protocols. To
exploit the real potential of two-way CVQKD systems a robust reconciliation
technique is needed. It is currently unavailable, which makes it impossible to
reach the real performance of a two-way CVQKD system. The reconciliation
process of correlated Gaussian variables is a complex problem that requires
either tomography in the physical layer that is intractable in a practical
scenario, or high-cost calculations in the multidimensional spherical space
with strict dimensional limitations. To avoid these issues, we propose an
efficient logical layer-based reconciliation method for two-way CVQKD to
extract binary information from correlated Gaussian variables. We demonstrate
that by operating on the raw-data level, the noise of the quantum channel can
be corrected in the scalar space and the reconciliation can be extended to
arbitrary high dimensions. We prove that the error probability of scalar
reconciliation is zero in any practical CVQKD scenario, and provides
unconditional security. The results allow to significantly improve the
currently available key rates and transmission distances of two-way CVQKD. The
proposed scalar reconciliation can also be applied in one-way systems as well,
to replace the existing reconciliation schemes.
2013/08/15 - 11:24
## Stability Results for Simple Traffic Models Under PI-Regulator Control. (arXiv:1308.2505v1 [math.OC])
This paper provides necessary conditions and sufficient conditions for the
(global) Input-to-State Stability property of simple uncertain
vehicular-traffic network models under the effect of a PI-regulator. Local
stability properties for vehicular-traffic networks under the effect of
PI-regulator control are studied as well: the region of attraction of a locally
exponentially stable equilibrium point is estimated by means of Lyapunov
functions. All obtained results are illustrated by means of simple examples.
2013/08/14 - 10:07
## Linearizability with Ownership Transfer. (arXiv:1308.2507v1 [cs.LO])
Linearizability is a commonly accepted notion of correctness for libraries of
concurrent algorithms. Unfortunately, it assumes a complete isolation between a
library and its client, with interactions limited to passing values of a given
data type. This is inappropriate for common programming languages, where
libraries and their clients can communicate via the heap, transferring the
ownership of data structures, and can even run in a shared address space
without any memory protection.
In this paper, we present the first definition of linearizability that lifts
this limitation and establish an Abstraction Theorem: while proving a property
of a client of a concurrent library, we can soundly replace the library by its
abstract implementation related to the original one by our generalisation of
linearizability. This allows abstracting from the details of the library
implementation while reasoning about the client. We also prove that
linearizability with ownership transfer can be derived from the classical one
if the library does not access some of data structures transferred to it by the
client.
2013/08/14 - 10:07
## Coding and Compression of Three Dimensional Meshes by Planes. (arXiv:1308.2509v1 [cs.CG])
The present paper suggests a new approach for geometric representation of 3D
spatial models and provides a new compression algorithm for 3D meshes, which is
based on mathematical theory of convex geometry. In our approach we represent a
3D convex polyhedron by means of planes, containing only its faces. This allows
not to consider topological aspects of the problem (connectivity information
among vertices and edges) since by means of the planes we construct the
polyhedron uniquely. Due to the fact that the topological data is ignored this
representation provides high degree of compression. Also planes based
representation provides a compression of geometrical data because most of the
faces of the polyhedron are not triangles but polygons with more than three
vertices.
2013/08/14 - 10:07
## Fluctuation in e-mail sizes weakens power-law correlations in e-mail flow. (arXiv:1308.2516v1 [physics.soc-ph])
Power-law correlations have been observed in packet flow over the Internet.
The possible origin of these correlations includes demand for Internet
services. We observe the demand for e-mail services in an organization, and
analyze correlations in the flow and the sequence of send requests using a
Detrended Fluctuation Analysis (DFA). The correlation in the flow is found to
be weaker than that in the send requests. Four types of artificial flow are
constructed to investigate the effects of fluctuations in e-mail sizes. As a
result, we find that the correlation in the flow originates from that in the
sequence of send requests. The strength of the power-law correlation decreases
as a function of the ratio of the standard deviation of e-mail sizes to their
average.
2013/08/14 - 10:07
## Interval colorings of complete bipartite graphs and trees. (arXiv:1308.2541v1 [cs.DM])
A translation from Russian of the work of R.R. Kamalian "Interval colorings
of complete bipartite graphs and trees", Preprint of the Computing Centre of
decision of the Academic Council of the Computing Centre of the Academy of
Sciences of Armenian SSR and Yerevan State University from 7.09.1989).
2013/08/14 - 10:07
## A place-focused model for social networks in cities. (arXiv:1308.2565v1 [cs.SI])
The focused organization theory of social ties proposes that the structure of
human social networks can be arranged around extra-network foci, which can
include shared physical spaces such as homes, workplaces, restaurants, and so
on. Until now, this has been difficult to investigate on a large scale, but the
huge volume of data available from online location-based social services now
makes it possible to examine the friendships and mobility of many thousands of
people, and to investigate the relationship between meetings at places and the
structure of the social network. In this paper, we analyze a large dataset from
Foursquare, the most popular online location-based social network. We examine
the properties of city-based social networks, finding that they have common
structural properties, and that the category of place where two people meet has
very strong influence on the likelihood of their being friends. Inspired by
these observations in combination with the focused organization theory, we then
present a model to generate city-level social networks, and show that it
produces networks with the structural properties seen in empirical data.
2013/08/14 - 10:07
## Achieving Speedup in Aggregate Risk Analysis using Multiple GPUs. (arXiv:1308.2572v1 [cs.DC])
Stochastic simulation techniques employed for the analysis of portfolios of
insurance/reinsurance risk, often referred to as Aggregate Risk Analysis', can
benefit from exploiting state-of-the-art high-performance computing platforms.
In this paper, parallel methods to speed-up aggregate risk analysis for
supporting real-time pricing are explored. An algorithm for analysing aggregate
risk is proposed and implemented for multi-core CPUs and for many-core GPUs.
Experimental studies indicate that GPUs offer a feasible alternative solution
over traditional high-performance computing systems. A simulation of 1,000,000
trials with 1,000 catastrophic events per trial on a typical exposure set and
contract structure is performed in less than 5 seconds on a multiple GPU
platform. The key result is that the multiple GPU implementation can be used in
real-time pricing scenarios as it is approximately 77x times faster than the
sequential counterpart implemented on a CPU.
2013/08/14 - 10:07
## Line-of-Sight Obstruction Analysis for Vehicle-to-Vehicle Network Simulations in a Two-Lane Highway Scenario. (arXiv:1308.2574v1 [cs.NI])
In vehicular ad-hoc networks (VANETs) the impact of vehicles as obstacles has
largely been neglected in the past. Recent studies have reported that the
vehicles that obstruct the line-of-sight (LOS) path may introduce 10-20,dB
additional loss, and as a result reduce the communication range. Most of the
traffic mobility models (TMM) today do not treat other vehicles as obstacles
and thus can not model the impact of LOS obstruction in VANET simulations. In
this paper the LOS obstruction caused by other vehicles is studied in a highway
scenario. First a car-following model is used to characterize the motion of the
vehicles driving in the same direction on a two-lane highway. Vehicles are
allowed to change lanes when necessary. The position of each vehicle is updated
by using car-following rules together with the lane-changing rules for the
forward motion. Based on the simulated traffic a simple TMM is proposed for
VANET simulations, which is capable to identify the vehicles that are in the
shadow region of other vehicles. The presented traffic mobility model together
with the shadow fading path loss model can take in to account the impact of LOS
obstruction on the total received power in the two-lane highway scenarios.
2013/08/14 - 10:07
## Evolutionary Extortion and Mischief: Zero Determinant strategies in iterated 2x2 games. (arXiv:1308.2576v1 [cs.GT])
This paper studies the mechanisms, implications, and potential applications
of the recently discovered class of Zero Determinant (ZD) strategies in
iterated 2x2 games. These strategies were reported to successfully extort pure
economic maximizers, and to mischievously determine the set of feasible
long-term payoffs in iterated Prisoners' Dilemma by enforcing linear
constraints on both players' expected average scores.
These results are generalized for all symmetric 2x2 games and a general
Battle of the Sexes, exemplified by four common games. Additionally, a
comparison to conventional strategies is made and typical ZD gameplay
simulations are analyzed along with convergence speeds. Several response
strategies are discussed, including a glance on how time preferences change
previous results. Furthermore, a possibility of retaliation is presented: when
maximin scores exceed the minimum symmetric payoff, it is possible to extort
the extortioner.
Finally, a summary of findings from evolutionary game theory shows that
mischief is limited by its own malice. Nevertheless, this does not challenge
the result that mindless economic maximization is subject to extortion: the
study of ZD strategies reveals exciting new perspectives and opportunities in
game theory, both evolutionary and classic.
2013/08/14 - 10:07
## A Simple Circle Discretization Algorithm With Applications. (arXiv:1308.2581v1 [cs.DS])
In CNC manufacturing,there often arises the need to create G-Code programs
which require the calculation of discrete x-y coordinate pairs(2D).An example
of this situation is when the programmer needs to create a program to machine a
helix(or thread).The required toolpath will be a set of points on a helix
curve.The problem now entails calculating the number of points along this
curve.Too few points and the toolpath will not be smooth.Too many points and
the program becomes too big.This article will serve to provide a simple way to
divide a circle into discrete points,with a notion of "dimensional tolerance"
built into the algorithm.
2013/08/14 - 10:07
## Alpha current flow betweenness centrality. (arXiv:1308.2591v1 [cs.SI])
A class of centrality measures called betweenness centralities reflects
degree of participation of edges or nodes in communication between different
parts of the network. The original shortest-path betweenness centrality is
based on counting shortest paths which go through a node or an edge. One of
shortcomings of the shortest-path betweenness centrality is that it ignores the
paths that might be one or two steps longer than the shortest paths, while the
edges on such paths can be important for communication processes in the
network. To rectify this shortcoming a current flow betweenness centrality has
been proposed. Similarly to the shortest path betwe has prohibitive complexity
for large size networks. In the present work we propose two regularizations of
the current flow betweenness centrality, \alpha-current flow betweenness and
truncated \alpha-current flow betweenness, which can be computed fast and
correlate well with the original current flow betweenness.
2013/08/14 - 10:07
## Sparse Command Generator for Remote Control. (arXiv:1308.2592v1 [cs.SY])
generator and the controlled object are connected with a bandwidth-limited
communication link. In the remote-controlled systems, efficient representation
of control commands is one of the crucial issues because of the bandwidth
limitations of the link. We propose a new representation method for control
commands based on compressed sensing. In the proposed method, compressed
sensing reduces the number of bits in each control signal by representing it as
a sparse vector. The compressed sensing problem is solved by an L1-L2
optimization, which can be effectively implemented with an iterative shrinkage
algorithm. A design example also shows the effectiveness of the proposed
method.
2013/08/14 - 10:07
## A Framework for Systematic Analysis of Open Access Journals and its Application in Software Engineering and Information Systems. (arXiv:1308.2597v1 [cs.DL])
Research opportunities may be lost as the needed knowledge is often hidden
behind a paywall and may not be accessible for many researchers due to the
rising subscription costs of journals. Arguably, a solution for this issue
would be to make all research results barrier-free, available on the Internet.
Open Access (OA) is such an initiative. In Software Engineering (SE) and
Information Systems (IS) fields, however, OA journals are recently born,
limited in number, and largely unknown by researchers. [..] This paper is a
contribution towards an understanding of OA publishing in the fields of SE and
IS. The study proposes an analysis framework of 18 core attributes, divided
into the areas of Identification, Publication Metrics, Economics,
Accessibility, and Trustworthiness of OA journals. The framework is employed in
a systematic analysis of 30 OA journals in SE and IS, which were selected among
386 OA journals in Computer Science, listed in the Directory of OA Journals
detailed overview of OA definitions and publishing models. (2) raises the need
to study OA in the fields of SE and IS while informing the reader of the
existence of 30 journals. (3) offers a detailed comparison of the journals and
analyzes the issues of OA publishing. Most significantly, it (4) provides an
analysis framework born from the observation of data and the literature. (5)
provides useful recommendations to readers, authors, and publishers of OA
articles and journals. It demonstrates that high publication charges are not
sufficiently justified by the publishers, which lack transparency and may
prevent authors from adopting OA. OA is threatened by numerous issues. This
paper aims to highlight such issues with the hope that in the near future, OA
can be studied as a real alternative to traditional publishing systems for SE
and IS fields.
2013/08/14 - 10:07
## Parameterized Rural Postman and Conjoining Bipartite Matching Problems. (arXiv:1308.2599v1 [cs.DS])
The Directed Rural Postman Problem (DRPP) can be formulated as follows: given
a connected directed multigraph $G=(V,A)$ with nonnegative weights on the arcs,
a subset $R$ of $A$ and a number $\ell$, decide whether $G$ has a closed
directed walk containing every arc of $R$ and of total weight at most $\ell$.
Let $c$ be the number of components in the underlying undirected graph of
$G[R]$, where $G[R]$ is the subgraph of $G$ induced by $R$. Sorge et al. (2012)
ask whether the DRPP is fixed-parameter tractable (FPT) when parameterized by
$c$, i.e., whether there is an algorithm (called a fixed-parameter algorithm)
of running time $f(c)p^{O(1)},$ where $f$ is a function of $c$ only and $p$ is
the number of vertices in $G$. Sorge et al. (2012) note that this question is
of significant practical relevance and has been open for more than thirty
years.
Sorge et al. (2012) showed that DRPP is FPT-equivalent to the following
problem called the Conjoining Bipartite Matching (CBM) problem: given a
bipartite graph $B$ with nonnegative weights on its edges, a partition $V_1\cup ... \cup V_t$ of vertices of $B$ and a graph $(\{1,..., t\},F)$, and the
parameter $|F|$, decide whether $B$ contains a perfect matching $M$ such that
for each $ij\in F$ there is an edge $uv\in M$ such that $u\in V_i$ and $v\in V_j$. We may assume that both partite sets of $B$ have the same number $n$ of
vertices. We prove that there is a randomized algorithm for CBM of running time
$2^{|F|}n^{O(1)}$, provided the total weight of $B$ is bounded by a polynomial
in $n$. By our result for CBM and FPT-reductions of Sorge et al. (2012) and
Dorn et al. (2013), DRPP has a randomized fixed-parameter algorithm, provided
the total weight of $B$ is bounded by a polynomial in $p$.
2013/08/14 - 10:07
## An Enhanced Time Space Priority Scheme to Manage QoS for Multimedia Flows transmitted to an end user in HSDPA Network. (arXiv:1308.2600v1 [cs.NI])
When different type of packets with different needs of Quality of Service
(QoS) requirements share the same network resources, it became important to use
queue management and scheduling schemes in order to maintain perceived quality
at the end users at an acceptable level. Many schemes have been studied in the
literature, these schemes use time priority (to maintain QoS for Real Time (RT)
packets) and/or space priority (to maintain QoS for Non Real Time (NRT)
packets). In this paper, we study and show the drawback of a combined time and
space priority (TSP) scheme used to manage QoS for RT and NRT packets intended
for an end user in High Speed Downlink Packet Access (HSDPA) cell, and we
propose an enhanced scheme (Enhanced Basic-TSP scheme) to improve QoS
relatively to the RT packets, and to exploit efficiently the network resources.
A mathematical model for the EB-TSP scheme is done, and numerical results show
the positive impact of this scheme.
2013/08/14 - 10:07
## Independent Set, Induced Matching, and Pricing: Connections and Tight (Subexponential Time) Approximation Hardnesses. (arXiv:1308.2617v1 [cs.CC])
We present a series of almost settled inapproximability results for three
fundamental problems. The first in our series is the subexponential-time
inapproximability of the maximum independent set problem, a question studied in
the area of parameterized complexity. The second is the hardness of
approximating the maximum induced matching problem on bounded-degree bipartite
graphs. The last in our series is the tight hardness of approximating the
k-hypergraph pricing problem, a fundamental problem arising from the area of
algorithmic game theory. In particular, assuming the Exponential Time
Hypothesis, our two main results are:
- For any r larger than some constant, any r-approximation algorithm for the
maximum independent set problem must run in at least
2^{n^{1-\epsilon}/r^{1+\epsilon}} time. This nearly matches the upper bound of
2^{n/r} (Cygan et al., 2008). It also improves some hardness results in the
domain of parameterized complexity (e.g., Escoffier et al., 2012 and Chitnis et
al., 2013)
- For any k larger than some constant, there is no polynomial time min
(k^{1-\epsilon}, n^{1/2-\epsilon})-approximation algorithm for the k-hypergraph
pricing problem, where n is the number of vertices in an input graph. This
almost matches the upper bound of min (O(k), \tilde O(\sqrt{n})) (by Balcan and
Blum, 2007 and an algorithm in this paper).
We note an interesting fact that, in contrast to n^{1/2-\epsilon} hardness
for polynomial-time algorithms, the k-hypergraph pricing problem admits
n^{\delta} approximation for any \delta >0 in quasi-polynomial time. This puts
this problem in a rare approximability class in which approximability
thresholds can be improved significantly by allowing algorithms to run in
quasi-polynomial time.
2013/08/14 - 10:07
## Novel Virtual Moving Sound-based Spatial Auditory Brain-Computer Interface Paradigm. (arXiv:1308.2630v1 [q-bio.NC])
This paper reports on a study in which a novel virtual moving sound-based
spatial auditory brain-computer interface (BCI) paradigm is developed. Classic
auditory BCIs rely on spatially static stimuli, which are often boring and
difficult to perceive when subjects have non-uniform spatial hearing perception
characteristics. The concept of moving sound proposed and tested in the paper
allows for the creation of a P300 oddball paradigm of necessary target and
non-target auditory stimuli, which are more interesting and easier to
distinguish. We present a report of our study of seven healthy subjects, which
proves the concept of moving sound stimuli usability for a novel BCI. We
compare online BCI classification results in static and moving sound paradigms
yielding similar accuracy results. The subject preference reports suggest that
the proposed moving sound protocol is more comfortable and easier to
discriminate with the online BCI.
2013/08/14 - 10:07
## Local image registration a comparison for bilateral registration mammography. (arXiv:1308.2654v1 [cs.CV])
Early tumor detection is key in reducing the number of breast cancer death
and screening mammography is one of the most widely available and reliable
method for early detection. However, it is difficult for the radiologist to
process with the same attention each case, due the large amount of images to be
the current efficiency of these systems is not yet adequate and the correct
interpretation of CADe outputs requires expert human intervention. Computer
aided diagnosis systems (CADx) are being designed to improve cancer diagnosis
accuracy, but they have not been efficiently applied in breast cancer. CADx
efficiency can be enhanced by considering the natural mirror symmetry between
the right and left breast. The objective of this work is to evaluate
co-registration algorithms for the accurate alignment of the left to right
breast for CADx enhancement. A set of mammograms were artificially altered to
create a ground truth set to evaluate the registration efficiency of DEMONs,
and SPLINE deformable registration algorithms. The registration accuracy was
evaluated using mean square errors, mutual information and correlation. The
results on the 132 images proved that the SPLINE deformable registration
over-perform the DEMONS on mammography images.
2013/08/14 - 10:07
## KL-based Control of the Learning Schedule for Surrogate Black-Box Optimization. (arXiv:1308.2655v1 [cs.LG])
This paper investigates the control of an ML component within the Covariance
Matrix Adaptation Evolution Strategy (CMA-ES) devoted to black-box
optimization. The known CMA-ES weakness is its sample complexity, the number of
evaluations of the objective function needed to approximate the global optimum.
This weakness is commonly addressed through surrogate optimization, learning an
estimate of the objective function a.k.a. surrogate model, and replacing most
evaluations of the true objective function with the (inexpensive) evaluation of
the surrogate model. This paper presents a principled control of the learning
schedule (when to relearn the surrogate model), based on the Kullback-Leibler
divergence of the current search distribution and the training distribution of
the former surrogate model. The experimental validation of the proposed
approach shows significant performance gains on a comprehensive set of
ill-conditioned benchmark problems, compared to the best state of the art
including the quasi-Newton high-precision BFGS method.
2013/08/14 - 10:07
## Combinatorially interpreting generalized Stirling numbers. (arXiv:1308.2666v1 [math.CO])
Let $w$ be a word in alphabet $\{x,D\}$. Interpreting "$x$" as multiplication
by $x$, and "$D$" as differentiation with respect to $x$, the identity $$wf(x) = x^{(#({x's in w})-#({D's in w}))}\sum_k S_w(k) x^k D^k f(x),$$ valid
for any smooth function $f(x)$, defines a sequence $(S_w(k))_k$, the terms of
which we refer to as the {\em Stirling numbers (of the second kind)} of $w$.
The nomenclature comes from the fact that when $w=(xD)^n$, we have $S_w(k)={n \brace k}$, the ordinary Stirling number of the second kind.
Explicit expressions for, and identities satisfied by, the $S_w(k)$ have been
obtained by numerous authors, and combinatorial interpretations have been
presented. Here we provide a new combinatorial interpretation that retains the
spirit of the familiar interpretation of ${n \brace k}$ as a count of
partitions. Specifically, we associate to each $w$ a graph $G_w$, and we show
that $S_w(k)$ enumerates partitions of the vertex set of $G_w$ into classes
that do not span an edge of $G_w$. We also discuss some relatives of, and
consequences of, our interpretation.
2013/08/14 - 10:07
## On affine rigidity. (arXiv:1011.5553v2 [cs.CG] UPDATED)
We study the properties of affine rigidity of a hypergraph and prove a
variety of fundamental results. First, we show that affine rigidity is a
generic property (i.e., depends only on the hypergraph, not the particular
embedding). Then we prove that a graph is generically neighborhood affinely
rigid in d-dimensional space if it is (d+1)-vertex-connected. We also show
neighborhood affine rigidity of a graph implies universal rigidity of its
squared graph. Our results, and affine rigidity more generally, have natural
applications in point registration and localization, as well as connections to
manifold learning.
2013/08/14 - 10:07
## The geometry of low-rank Kalman filters. (arXiv:1203.4049v2 [math.OC] UPDATED)
An important property of the Kalman filter is that the underlying Riccati
flow is a contraction for the natural metric of the cone of symmetric positive
definite matrices. The present paper studies the geometry of a low-rank version
of the Kalman filter. The underlying Riccati flow evolves on the manifold of
fixed rank symmetric positive semidefinite matrices. Contraction properties of
the low-rank flow are studied by means of a suitable metric recently introduced
by the authors.
2013/08/14 - 10:07
## Convex and Scalable Weakly Labeled SVMs. (arXiv:1303.1271v4 [cs.LG] UPDATED)
In this paper, we study the problem of learning from weakly labeled data,
where labels of the training examples are incomplete. This includes, for
example, (i) semi-supervised learning where labels are partially known; (ii)
multi-instance learning where labels are implicitly known; and (iii) clustering
where labels are completely unknown. Unlike supervised learning, learning with
weak labels involves a difficult Mixed-Integer Programming (MIP) problem.
Therefore, it can suffer from poor scalability and may also get stuck in local
minimum. In this paper, we focus on SVMs and propose the WellSVM via a novel
label generation strategy. This leads to a convex relaxation of the original
MIP, which is at least as tight as existing convex Semi-Definite Programming
(SDP) relaxations. Moreover, the WellSVM can be solved via a sequence of SVM
subproblems that are much more scalable than previous convex SDP relaxations.
Experiments on three weakly labeled learning tasks, namely, (i) semi-supervised
learning; (ii) multi-instance learning for locating regions of interest in
content-based information retrieval; and (iii) clustering, clearly demonstrate
improved performance, and WellSVM is also readily applicable on large data
sets.
2013/08/14 - 10:07
## Integrated Pre-Processing for Bayesian Nonlinear System Identification with Gaussian Processes. (arXiv:1303.2912v2 [cs.AI] UPDATED)
We introduce GP-FNARX: a new model for nonlinear system identification based
on a nonlinear autoregressive exogenous model (NARX) with filtered regressors
(F) where the nonlinear regression problem is tackled using sparse Gaussian
processes (GP). We integrate data pre-processing with system identification
into a fully automated procedure that goes from raw data to an identified
model. Both pre-processing parameters and GP hyper-parameters are tuned by
maximizing the marginal likelihood of the probabilistic model. We obtain a
Bayesian model of the system's dynamics which is able to report its uncertainty
in regions where the data is scarce. The automated approach, the modeling of
uncertainty and its relatively low computational cost make of GP-FNARX a good
candidate for applications in robotics and adaptive control.
2013/08/14 - 10:07
|
2014-08-31 08:14:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.57545405626297, "perplexity": 2584.0160752394772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500837094.14/warc/CC-MAIN-20140820021357-00064-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/find-the-antiderivative-of-the-following-vector.837769/
|
# Find the antiderivative of the following vector:
## Homework Statement
Calculate the position vector of a particle moving with velocity given by:
v = (32 m/s - (5 m/s^2 )t i) + (0 j)
## Homework Equations
(x^(n+1) / (n+1) ) + C = antiderivative of function
## The Attempt at a Solution
r = (32t m - (5/2)t^2 m/s + C m i) + (C j)
Honestly, I'm just confused with the units more than anything. I don't know why the problem has m/s^2 if it's a velocity vector...
SteamKing
Staff Emeritus
Homework Helper
## Homework Statement
Calculate the position vector of a particle moving with velocity given by:
v = (32 m/s - (5 m/s^2 )t i) + (0 j)
## Homework Equations
(x^(n+1) / (n+1) ) + C = antiderivative of function
## The Attempt at a Solution
r = (32t m - (5/2)t^2 m/s + C m i) + (C j)
Honestly, I'm just confused with the units more than anything. I don't know why the problem has m/s^2 if it's a velocity vector...
A quantity of 5 m/s2 indicates that accelerated motion is taking place, i.e., the velocity is changing w.r.t. time. Acceleration × time = change in velocity.
A quantity of 5 m/s2 indicates that accelerated motion is taking place, i.e., the velocity is changing w.r.t. time. Acceleration × time = change in velocity.
oooooh okay that makes more sense. thank you!
and the antiderivative is right, right?
SteamKing
Staff Emeritus
Homework Helper
oooooh okay that makes more sense. thank you!
and the antiderivative is right, right?
I would say that since the velocity vector had no j-component, the position vector will not either.
I would say that since the velocity vector had no j-component, the position vector will not either.
but isn't the antiderivative of 0 C (or in this case, D to differentiate)?
SteamKing
Staff Emeritus
Homework Helper
but isn't the antiderivative of 0 C (or in this case, D to differentiate)?
Yeah, but D = 0 would be an acceptable value for the constant of integration, in the absence of any other initial condition information.
Yeah, but D = 0 would be an acceptable value for the constant of integration, in the absence of any other initial condition information.
oh okay. since this is on a take home test, do you think I should just put both answers (one for a definite integral 0 and one for an indefinite integral D)?
SteamKing
Staff Emeritus
|
2021-07-29 13:22:32
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8008174300193787, "perplexity": 1194.473466980279}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153857.70/warc/CC-MAIN-20210729105515-20210729135515-00400.warc.gz"}
|
http://physics.stackexchange.com/questions/46571/how-does-zener-diode-maintain-potential-across-its-terminals
|
# how does zener diode maintain potential across its terminals?
My physics book has a topic about zener diodes being used as voltage regulators in the reverse bias.
Well, I'm curious to know how does the zener maintain the potential across its terminals after it has undergone avalanche breakdown? Does it start conducting in full offering almost zero resistance? If so, how can there be a potential gradient across it?
The principle is that for high current change, there is a minimal and negligible change in potential across the zener? But, in avalanche breakdown doesn't it behave as a pure conductor? If so, then how is it possible for there to be a drop in potential? After all it allows large amounts of current through it and can you keep it somewhat simple?
-
en.wikipedia.org/wiki/Zener_effect Note that zener effect and avalanche are totally different! – Georg Apr 16 at 12:38
In avalanche breakdown the zener diode does not behave like a pure conductor. It behaves like a "something that consumes N volts" followed by a perfect conductor. An intuitive way to think of it is: it costs you N volts worth of energy to keep the diode in breakdown. If you apply less than N volts breakdown stops and it barely conducts at all (it becomes a very good resistor.)
The way avalanche breakdown works is: there are some charge carriers (e.g. electrons) that are being accelerated by the voltage. When the electron hits a bond between two other atoms if the energy is low enough it just bounces off. But if the voltage is large enough then a loose electron will get accelerated (by the voltage) so that it will hit with sufficient energy to break a molecular bond and release another electron. Now there are two electrons being accelerated to fast enough speeds to break bonds. The instant you reduce the voltage below the breakdown limit, the electrons are no longer accelerated enough to break any more bonds, so the free electrons "settle back" into the bonds that are missing electrons and the current stops almost immediately. All that energy from the acceleration is released as heat.
So in a voltage regulator circuit like this:
Kirchoff's voltage law says that the voltage around any closed loop is 0. So you get +10 volts from the input, and you know you are going to drop -6 volts across the diode. Thus there must be 4 volts across the $40\Omega$ resistor and 6 volts across the $60\Omega$ resistor. So you can figure out the currents across the resistors. Now Kirchoff's current law says that the current going through the diode is the current through the $40\Omega$ resistor minus the current through the $60\Omega$ resistor.
For an input voltage >8.4 Volts (8.4 = 6.0 * 140/100) there will be 6 Volts across the load. Any remaining current gets shunted across the diode (which is now in breakdown.) At an input voltage <8.4 Volts there will be <6 Volts across the diode so there will be almost no current across the diode. The current through the resistors will be (approximately) the input voltage divided by $140\Omega$.
-
A Zener diode is a diode which allows current to flow in the forward direction in the same manner as an ideal diode, but will also permit it to flow in the reverse direction when the voltage is above a certain value known as the breakdown voltage, "zener knee voltage" or "zener voltage" or "Avalanche point". Zener Diode as Voltage Regulators The function of a regulator is to provide a constant output voltage to a load connected in parallel with it in spite of the ripples in the supply voltage or the variation in the load current and the zener diode will continue to regulate the voltage until the diodes current falls below the minimum IZ(min) value in the reverse breakdown region. It permits current to flow in the forward direction as normal, but will also allow it to flow in the reverse direction when the voltage is above a certain value - the breakdown voltage known as the Zener voltage. The Zener diode specially made to have a reverse voltage breakdown at a specific voltage. Its characteristics are otherwise very similar to common diodes. In breakdown the voltage across the Zener diode is close to constant over a wide range of currents thus making it useful as a shunt voltage regulator. The purpose of a voltage regulator is to maintain a constant voltage across a load regardless of variations in the applied input voltage and variations in the load current. The resistor is selected so that when the input voltage is at VIN(min) and the load current is at IL(max) that the current through the Zener diode is at least Iz(min). Then for all other combinations of input voltage and load current the Zener diode conducts the excess current thus maintaining a constant voltage across the load. The Zener conducts the least current when the load current is the highest and it conducts the most current when the load current is the lowest.
-
-1 for not knowing that there are two types of "Zener" with totally different working principles. – Georg Apr 16 at 12:00
|
2013-05-22 21:05:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5650012493133545, "perplexity": 433.7031098881749}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702447607/warc/CC-MAIN-20130516110727-00084-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://homework.cpm.org/category/CON_FOUND/textbook/mc2/chapter/5/lesson/5.1.1/problem/5-12
|
### Home > MC2 > Chapter 5 > Lesson 5.1.1 > Problem5-12
5-12.
Since the spinner must land on any of the portions, all of the portions must add up to 1 because each of the portions is relative to its probability.
Try finding the difference between 1 and the given probability portions.
Since they have different denominators, you must find a common denominator for all of them and convert them to an equivalent fraction.
$\frac{2}{36} \text{ or } \frac{1}{18}$
|
2021-05-17 17:03:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6328945755958557, "perplexity": 690.6733954582239}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991258.68/warc/CC-MAIN-20210517150020-20210517180020-00448.warc.gz"}
|
http://imaginetattooing.com/473nc/function-of-galvanometer-08eedd
|
Sir Edward Schafer of the University of Edinburgh was the first to buy a string galvanometer electrograph for clinical use in 1908. How an educator uses Prezi Video to approach adult learning theory A galvanometer works as an actuator, by producing a rotary deflection of a pointer, in response to electric current flowing through a coil in a constant magnetic field. We attach a concave mirror to the wire at the top of the coil so that deflection can be measured using a lamp and scale. It also determines the null point of the circuit. STUDY. Now, we will use a soft iron core (it is a strong ferromagnetic material) in place of a loop and cylindrical magnets in place of horseshoe magnets. A torque acts on the coil which rotates the coil. The main function of the galvanometer is to decide the existence, direction, as well as electric current strength in a conductor. This instrument is a kind of ammeter, used to detect and measure electric current. When a current-carrying coil is suspended in a uniform magnetic field it is acted upon by a torque. If galvanometer shows a high deflection in a small voltage, then it is a voltage sensitivity given by. The galvanometer has permanent … In 1901, he successfully developed a new string galvanometer with very high sensitivity, which he used in his electrocardiograph. The angle is measured by the movement of the needle or by the deflection of a beam of light reflected from the mirror. Here, Φ and Sin Ө is a variable, and I α Φ/ Sin Ө. Under the action of this torque, the coil rotates and the deflection in the coil in a moving coil galvanometer is directly proportional to the current flowing through the coil. However, the torque on the coil remains the same. The soft iron core attracts the magnetic lines of force and hence the strength of the magnetic field increases if we use soft iron core. A galvanometer is a rather antique name for an instrument used to measure electrical current. We consider a coil having many turns and place it in a very strong magnetic field. Apparatus required: A Battery (0-6V), high resistance box (0-10,000 ohms), low resistance box (0-100 ohms), rheostat, two one way keys, galvanometer (30-0-30), voltmeter (0-3V). In galvanometer. It can be converted into ammeter to measure the currents in the order of an ampere or millimetre or in the range of milliamperes or microammeter to measure microampere current. Make sure the coil is tilted because torque isn’t generated when the coil is parallel to the magnetic field. When an electric current passes through the […] Vedantu academic counsellor will be calling you shortly for your Online Counselling session. As the coil rotates, it rotates smoothly and the spring twists. Q4: How Do You Increase the Sensitivity of a Galvanometer? The galvanometer at left is wired as a voltmeter placed across the capacitor, and the galvanometer at right is wired as an ammeter, placed in series with the resistor and capacitor. Anthropology We can express the torque produced as: Here, Ө is the angle between the area vector and the magnetic field. A field in which the magnetic field lines pass from N to South pole such that the area vector A➝is always perpendicular (radial) to the magnetic field B. Solution for What is the function of galvanometer in a circuit? It can be converted into ammeter to measure the currents in the order of an ampere or millimetre or in the range of milliamperes or microammeter to measure microampere current. Galvanometer. It has a coil pivoted (or suspended) between concave pole faces of a strong laminated horse shoe magnet. https://www.britannica.com/technology/galvanometer, Association for Psychological Science - The History Corner: The Galvanometer, The University of Texas at Austin - Galvanometer, National High Magnetic Field Laboratory - Magnetic Academy - Galvanometer, galvanometer - Student Encyclopedia (Ages 11 and up). Let us know if you have suggestions to improve this article (requires login). If the galvanometer deflects full scale for a current of 145 mA, what Ans: The sensitive galvanometer shows a huge deflection in a small current. There are various types of galvanometer. The reason is the more the number of turns, the more is current and more is the torque produced. Moving coil galvanometer: A galvanometer is used to detect current in a circuit. Any conventional galvanometer … In this article, we will study the moving coil type galvanometer. 3. 2. The null point means the situation in which no current flows through the circuit. Now, to remove this Sin Ө, we use the radial field. The most common type is the D’Arsonval galvanometer, in which the indicating system consists of a light coil of wire suspended from a metallic ribbon between the poles of a permanent magnet. One end of coil is attached to suspension … is perpendicular to the plane of the loop. Construction of Suspended Type Moving Coil Galvanometer: What is the function of (i) uniform radial magnetic field, (ii) soft iron core?Define the terms (i) current sensitivity and (ii) voltage sensitivity of a galvanometer. This is how we can convert galvanometer to ammeter and voltmeter to get the value of current sensitivity and voltage sensitivity respectively. Pro Lite, Vedantu Here, we use a pointer and a scale to get the deflection of the coil. This broad category encompasses such simple devices as the inclined plane, lever, wedge, wheel and axle, pulley, and screw (the so-called simple machines) as well as…. The small current that needs to be detected/measured is sent to the coil. The magnetic field produced by a current passing through the coil reacts with the magnetic field of the permanent magnet, producing a torque, or twisting force. Pro Lite, Vedantu Therefore, our purpose is solved, i.e., I = $\frac{C}{N ABSin \theta}$Φ = $\frac{C}{N AB}$Φ or I α Φ. Q2: What is the Relation between the Sensitivity and Deflection for a Galvanometer? Initially, this pointer points to 0. function. A galvanometer is a device that is used to detect small electric current or measure its magnitude. Once the current supplies in a magnetic field, a magnetic torque can be experienced. So, more is the twist; more is the restoring force. The moving coil of a moving coil galvanometer, moves in a magnetic field produced by a permanent magnet. Now, we will use a soft iron core (it is a strong ferromagnetic material) in place of a loop and cylindrical magnets in place of horseshoe magnets. Attach one end of a spring to the bottom of the coil and another end to the terminal T2. Sorry!, This page is not available for now to bookmark. When a current passes through the coil, its sides which are perpendicular to the magnetic field, experience equal and opposite force. This works on the rule of converting energy from electrical to mechanical. This means Ө between B and A➝will always be 90°, i.e., Sin Ө becomes 1. Ans: You can increase the sensitivity by: Decrease C by using phosphor bronze or Quartz wire because their Torsional constant is very low. Now, how to measure this Torsional strain? Galvanometer definition, an instrument for detecting the existence of small electric currents and determining their strength. Now, this coil keeps on rotating. The most common type is the D’Arsonval galvanometer, in which the indicating system consists of a light coil of wire suspended from a metallic ribbon between the poles of a permanent magnet. Aim: To convert a galvanometer (30-0-30) into a voltmeter of a given range (1.5 V) and to calibrate it. b) Moving coil galvanometer: Principle: The underlying principle of moving coil galvanometer is that a current carrying coil, placed in a uniform magnetic field, experiences torque. Ans: The current sensitivity of a moving coil galvanometer is given as: This means in the fractional flow of current, there is a high deflection. As the circuit completes, i.e., on making the connection between the wire, torque starts generating. Thus the sensitivity of galvanometer increases. A soft iron core is used in a moving coil galvanometer. Construction: It consists of a rectangular coil wound on a non-conducting metallic frame and is suspended by phosphor bronze strip between the pole-pieces (N and S) of a strong perm an ent magnet.A soft iron core in cylindrical form is placed between the coil. Not only the galvanometer mirror, but also the body of the galvanometer mirror, moves if not fixed in place using a custom-made metal jig with a circular hole for the galvanometer mirror. So, the use of a galvanometer is to detect whether there is a current in the circuit or not. On joining both the terminals, the current I starts flowing. Click hereto get an answer to your question ️ Explain, using a labelled diagram, the principle and working of a moving coil galvanometer. Fix the galvanometer mirror such that it is stabilized to protect it from damage while oscillating. You may have thought what this device is and what is the use of it? The galvanometer is the current measuring instrument which is mostly used in bridges and potentiometer for showing the zero current. Updates? We analyze the three most common profiles of scanning functions for galvanometer-based scanners (GSs): the sawtooth, triangular and sinusoidal functions. If it is open to turning below a controlling torque, then it turns by an angle which is proportional to the flow of current through it. We study the difference … It is used for detecting the direction of current flows in the circuit.
|
2022-10-06 05:08:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6037800908088684, "perplexity": 854.1424540848346}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00723.warc.gz"}
|
https://precice.discourse.group/t/scaled-consistent-mapping-constraint/1038
|
# Scaled consistent mapping constraint
Hi,
I was preparing my contribution for preCICE-minisymposium during the ECCOMAS conference and came accross the possibility of the scaled-consistent mapping constraint in the preCICE version 2.3, which I overlooked until now but can be interesting for me. I was wondering how the integral value is calculated (with which quadrature rule) and how the mapping matrix is affected by this constraint (for consistent mapping: each row sum equals one; for conservative mapping: each column sum equals one). Can be integral value also be set manually (in the config file)?
Thanks in advance for the information!
Kind regards,
I was not involved in the development of this feature, but here are a few starting points:
1 Like
I was one of the developers of the feature but it has been a long time. Let me try to answer some of your questions.
• Integral is calculated by simple area-weighted averaging. A fancier quadrature rule might be better indeed but currently it is a straightforward calculation. In case there are no connectivity elements associated with the interface, the integral is simply the sum of nodal values.
• Mapping matrix is a little bit trickier. So idea behind is that: Apply consistent mapping → Calculate surface integral → Scale the values. Therefore, from practical point of view, the mapping matrix is same as the Consistent mapping one. I am not entirely sure its consequences from mathematical point of view. This publication might give more insight about the mathematical indications (we did not follow this paper one-to-one)
• No, it is not possible to set this integral value manually. Especially not in the config file since the integral value would probably depend on the current data values in the interface. In order to manipulate the data manually, you can use actions in preCICE.
I hope this is somewhat helpful. Please let me know if I can do more.
Cheers,
Oguz
2 Likes
Mathematically, it should be like this:
We start with a consistent mapping from mesh B to mesh A:
v_A = M \cdot v_B
As the mapping is consistent, each row of M sums up to 1.
Now, we do a scaling as a post-processing. Mathematically this means multiplication with a diagonal matrix D.
v_A = D \cdot M \cdot v_B
And the scaling is constructed in such a way that the overall mapping becomes conservative, meaning each column of D \cdot M sums up to 1.
Makes sense?
@JurgenKersschot Have you seen the (experimental) direct mesh access option already? IIRC, you were interested in such a feature at some point for your DG code.
Yes, Thank you all for the information! Yes, I saw the direct access feature, in the meantime I came up with a work-around, but it is on my to-do list to try the direct access feature out as well!
Kind regards,
|
2022-07-06 10:55:34
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9383360743522644, "perplexity": 645.1870563343234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104669950.91/warc/CC-MAIN-20220706090857-20220706120857-00460.warc.gz"}
|
http://mathoverflow.net/questions/80625/is-there-a-spectral-theory-approach-to-non-explicit-plancherel-type-theorems
|
Is there a spectral theory approach to non-explicit Plancherel-type theorems?
Teaching graduate analysis has inspired me to think about the completeness theorem for Fourier series and the more difficult Plancherel theorem for the Fourier transform on $\mathbb{R}$. There are several ways to prove that the Fourier basis is complete for $L^2(S^1)$. The approach that I find the most interesting, because it uses general tools with more general consequences, is to use apply the spectral theorem to the Laplace operator on a circle. It is not difficult to show that the Laplace operator is a self-adjoint affiliated operator, i.e., the healthy type of unbounded operator for which the spectral theorem applies. It's easy to explicitly solve for the point eigenstates of the Laplace operator. Then you can use a Fredholm argument, or ultimately the Arzela-Ascoli theorem, to show that the Laplace operator is reciprocal to a compact operator, and therefore has no continuous spectrum. The argument is to integrate by parts. Suppose that $$\langle -\Delta \psi, \psi\rangle = \langle \vec{\nabla} \psi, \vec{\nabla \psi} \rangle \le E$$ for some energy $E$, whether or not $\psi$ is an eigenstate and even whether or not it has unit norm. Then $\psi$ is microscopically controlled and there is only a compact space of such $\psi$ except for adding a constant. The payoff of this abstract proof is the harmonic completeness theorem for the Laplace operator on any compact manifold $M$ with or without boundary. It also works when $\psi$ is a section of a vector bundle with a connection.
My question is whether there is a nice generalization of this approach to obtain a structure theorem for the Laplace operator, or the Schrödinger equation, in non-compact cases. Suppose that $M$ is an infinite complete Riemannian manifold with some kind of controlled geometry. For instance, say that $M$ is quasiisometric to $\mathbb{R}^n$ and has pinched curvature. (Or say that $M$ is amenable and has pinched curvature.) Maybe we also have the Laplace operator plus some sort of controlled potential --- say a smooth, bounded potential with bounded derivatives. Then can you say that the spectrum of the Laplace or Schrödinger operator is completely described by controlled solutions to the PDE, which can be interpreted as "almost normalizable" states?
There is one case of this that is important but too straightforward. If $M$ is the universal cover of a torus $T$, and if its optional potential is likewise periodic, then you can use "Bloch's theorem". In other words you can solve the problem for flat line bundles on $T$, where you always just have a point spectrum, and then lift this to a mixed continuous and point spectrum upstairs. So you can derive the existence of a fancy spectrum that is not really explicit, but the non-compactness is handled using an explicit method. I think that this method yields a cute proof of the Plancherel theorem for $\mathbb{R}$ (and $\mathbb{R}^n$ of course): Parseval's theorem as described above gives you Fourier completeness for both $S^1$ and $\mathbb{Z}$, and you can splice them together using the Bloch picture to get completeness for $\mathbb{R}$.
-
Only a simple remark. In the non-compact case, the paradigmatic example is the harmonic oscillator $$-\Delta_{\mathbb R^d}+\frac{\vert x\vert^2}{4}$$ with spectrum $\frac{d}{2}+\mathbb N$. The eigenvectors are the Hermite functions with an explicit expression from the so-called Maxwellian $\psi_0=(2\pi)^{-d/4}\exp{-\frac{\vert x\vert^2}{4}}$ and the creation operators $(\alpha!)^{-1/2}(\frac{x}{2}-\frac{d}{dx})^\alpha \psi_0$. In one dimension the operator $-\frac{d^2}{dx^2}+x^4$ (quartic oscillator) has also a compact resolvent, but nothing explicit is known about the eigenfunctions. – Bazin May 2 '12 at 13:53
More subtle is the compactness of the resolvent of the 2D $$-\Delta_{\mathbb R^2}+x^2y^2.$$ – Bazin May 2 '12 at 13:54
I just saw this playing around on meta.... Are you asking a question beyond that spectrally almost every solution is polynomially bounded? – Helge Aug 15 '12 at 18:53
@Helge - That's part of the story, but in the ordinary Plancherel theorem, not the hardest part to state or prove. You would also want some statement about the spectral measure (that is, the projection-valued measure produced by the spectral theorem) associated to the Laplace or Schrodinger operator. Again, if you have a Laplace operator on a closed manifold, there is an algorithm to diagonalize it completely. The completeness theorem is considered very important, and not just the fact that you can find eigenfunctions. – Greg Kuperberg Aug 18 '12 at 3:16
2 Answers
Since this has not been mentioned, let me point to the Weyl-Stone-Titchmarsh-Kodaira theorem which gives the generalized Fourier transform and Plancherel formula of a selfadjoint Sturm-Liouville operator. The ODE section in Dunford-Schwartz II presents this. See also the nice original paper Kodaira (1949). The (one-dimensional) Schrödinger operator with periodic potential (Hill's operator) is also treated in Kodaira's paper.
In several variables, scattering theory provides Plancherel theorems. For the Dirichlet Laplacian in the exterior of a compact obstacle, one can find a result of this kind in chapter 9 of M.E. Taylor's book PDE II. Formula (2.15) in that chapter is the Plancherel theorem of the Fourier transform $\Phi$ defined in (2.8).
Stone's formula represents the (projection-valued) spectral measure of a selfadjoint operator as the limit of the resolvent at the real axis. It is a key ingredient in proofs of these results.
-
Too big to fit well as comment: There is a seeming-technicality which is important to not overlook, the question of whether a symmetric operator is "essentially self-adjoint" or not. As I discovered only embarrasingly belatedly, this "essential self-adjointness" has a very precise meaning, namely, that the given symmetric operator has a unique self-adjoint extension, which then is necessarily given by its (graph-) closure. In many natural situations, Laplacians and such are essentially self-adjoint. But with any boundary conditions, this tends not to be the case, exactly as in the simplest Sturm-Liouville problems on finite intervals, not even getting to the Weyl-Kodaira-Titchmarsh complications.
Gerd Grubb's relatively recent book on "Distributions and operators" discusses such stuff.
The broader notion of Friedrichs' canonical self-adjoint extension of a symmetric (edit! :) semi-bounded operator is very useful here. At the same time, for symmetric operators that are not essentially self-adjoint, the case of $\Delta$ on $[a,b]$ with varying boundary conditions (to ensure symmetric-ness) shows that there is a continuum of mutually incomparable self-adjoint extensions.
Thus, on $[0,2\pi]$, the Dirichlet boundary conditions give $\sin nx/2$ for integer $n$ as orthonormal basis, while the boundary conditions that values and first derivatives match at endpoints give the "usual" Fourier series, in effect on a circle, by connecting the endpoints.
This most-trivial example already shows that the spectrum, even in the happy-simple discrete case, is different depending on boundary conditions.
-
|
2015-07-07 02:32:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8981298208236694, "perplexity": 303.59550900483583}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098987.83/warc/CC-MAIN-20150627031818-00153-ip-10-179-60-89.ec2.internal.warc.gz"}
|
http://lpsa.swarthmore.edu/Transient/TransMethSS.html
|
# Transient Response from State Space Representation
Contents
## Solution via State Space
Before starting this section make sure you understand how to create a state space representation of a system.
Zero input and zero state solutions of a system can be found if a state space representation of the system is known. Before solving an example, we first develop a generalized technique for finding the zero input and zero state solutions of a problem. This is followed by several examples.
Recall that a state space system is defined by the equations
where q is the state vector, A is the state matrix, B is the input matrix, u is the input, C is the output matrix, D is the direct transition (or feedthrough) matrix, and y is the output. In general we will have a single input and single output so u(t), y(t) and D defined as scalars. The techniques generalize in obvious ways to systems with multiple inputs and multiple outputs.
## The State Transition Matrix
Before we consider the solution of a problem, we will first introduce the state transition matrix and discuss some of its properties. The state transition matrix is an important part of both the zero input and the zero state solutions of systems represented in state space. The state transition matrix in the Laplace Domain, Φ(s), is defined as:
where I is the identity matrix. The time domain state transition matrix, φ(t), is simply the inverse Laplace Transform of Φ(s).
##### Example: Find State Transition Matrix of a 2nd Order System
Find Φ(s) and φ(t) if
Solution:
The inverse of a 2×2 matrix is given here.
To find φ(t) we must take the inverse Laplace Transform of every term in the matrix
We now must perform a partial fraction expansion of each term, and solve
Solution via MatLab
MatLab can be used to find the zero input response of a state space system:
## Zero Input
Let us now develop a method for finding the zero input solution to a system defined in state space. The system is defined as
The zero input problem is given by:
with a known set of initial conditions, q(0-).
We solve for q(t) by first taking the Laplace Transform and solving for Q(s)
But, (sI-A)-1=Φ(s), i.e., the state transition matrix. So
Since q(0-) is a constant multiplier the inverse Laplace Transform is simply
The solution for y(t) is found in a straightforward way from the output equation
##### Example: Zero Input Response from State Space (2x2)
Find the response for the system defined by:
with
and
Solution:
The zero input problem was solved previously
with the state transition matrix given by
For the given A matrix, Φ(s) and φ(t) were calculated previously (above)
So
and
Solution via Matlab
This problem can also be solve with MatLab
A=[0 1; -2 -3]; %Define Matrices
B=[0; 1;]; C=[1 -1]; D=0;
mySys=ss(A,B,C,D); %Define State Space system
q0=[1; 2;]; %Define initial conditions
initial(mySys,q0); %Plot zero input solution
##### Key Concept: Zero Input Response from State Space Representation
Given a state space system:
The zero input response is given by
where Φ(s) is the state transition matrix:
### Alternate Derivation of the State Transition Matrix
There is an alternate, more intuitive, derivation of the state transition matrix. This derivation is made in analogy with that of a scalar first order differential equation. The scalar and matrix equations are shown below, side-by-side.
Description Scalar Equation Matrix Equation Define the problem(a 1st order differential equation) Write solution in terms of initialconditions and (Taylor expansion of exponential)
Examining the third row of the table we see that we have introduced a matrix exponential that is exactly analogous to the scalar exponential, and we have used this matrix exponential in the solution of our first order matrix differential equation:
Comparing this to our solution in terms of the state transition matrix
we see that
##### Example: Evaluation of the Matrix Exponential
Write a closed form expression for eAt if .
Solution:
Since
and we know (from above) that for the A matrix specified that
then
It is perhaps surprising that the series form of the matrix exponential
yields such a compact closed form solution, but this makes it possible to evaluate eAt (and φ(t)) precisely and efficiently.
### Properties of the State Transition Matrix
From the matrix exponential definition of the state transition matrix we can derive several properties.
## Zero State
Finding the zero state response of a system given a state space representation is a bit more complicated. In the Laplace Domain the response is found by first finding the transfer function of the system. (A description of the transformation from state space representation to transfer function is given elsewhere).
In the time domain, this last equation (multiplication in the Laplace domain), is just a convolution (the asterisk (*) denotes convolution):
Note: this last equation assumes a single input system. For multi-input systems the u(t) term must stay to the right of B.
##### Example: Zero State Solution from State Space (2x2)
Find the zero input solution (qzi(t) and yzi(t)) for the system defined by:
with
and
Solution:
First we need to find the transfer function from the state space representation
We found Φ(s); earlier.
so
(we can check this with MatLab)
>> mySys=ss([0 1; -2 -3], [0; 1], [1 -1], 0); % Define system in state space
>> [n,d]=tfdata(mySys,'v') % Get numerator and denominator
n =
0 -1.0000 1.0000
d =
1 3 2
We also know that
so
The partial fraction expansion can be done by hand or with Matlab. The Matlab solution is shown.
>> [r,p,k]=residue([1 -1],[1 3 2 0 0]) % Perform partial fraction expansion
r =
0.7500
-2.0000
-1.2500
0.5000
p =
-2
-1
0
0
k = []
## Complete Response
##### Example: Complete Response from State Space (2x2)
Find the response for the system defined by:
with
and
Solution:
The zero input problem was solved previously
The zero state problem was also solved previously
The complete response is simply the sum of the two
Solution via Matlab
A numerical solution can be found with Matlab
t=linspace(0,10); %Define time vector
A=[0 1; -2 -3]; %Define Matrices
B=[0; 1;]; C=[1 -1]; D=0;
mySys=ss(A,B,C,D); %Define State Space system
u=t; %Define input
q0=[1; 2;]; %Define initial conditions
yzi=initial(mySys,q0,t); %Find zero input response
yzs=lsim(mySys,u,t); %Find zero state response
yc=yzi+yzs; %Find complete response
Note that the complete response converges to the zero state response at long times as the zero input response decays to zero.
##### Example: Another transient response of a state space system
The system shown is a simplified model of a part of a suspension system of a wheel on a car or motorcyle. The mass, m, represents the weight of the vehicle supported by the wheel, and the spring and dashpot represent the suspension system. For our purposes let m=500 kg, k=3000 N/m, b=2500 N-m/s.
Find the output if the system starts at rest (the velocity is zero) but xout(0-)=0.05 and xin(t)=0.1·γ(t).
Solution:
We must first develop a state space model. Techniques for doing so are discussed elsewhere. We will start from the system transfer function (derived on the previous page):
We can transform this to Observable Canonic form as
From the output equation we see that
From the top row of the state variable equation we see that:
or
Zero State Solution:
We start by finding the state transition matrix. This could be done by hand, we'll use Matlab's symbolic toolbox:
>> syms s
>> A=[-5 1; -6 0];
>> Phi=inv(s*eye(2)-A)
Phi =
[ s/(s^2 + 5*s + 6), 1/(s^2 + 5*s + 6)]
[ -6/(s^2 + 5*s + 6), (s + 5)/(s^2 + 5*s + 6)]
The zero input solution is
with
We again turn to Matlab to find Yzi(s)
>> C=[1 0];
>> q0=[0.05; 0.25];
>> Yzi=C*Phi*q0
Yzi =
s/(20*(s^2 + 5*s + 6)) + 1/(4*(s^2 + 5*s + 6))
>> pretty(simple(Yzi))
s + 5
-----------------
2
20 (s + 5 s + 6)
At this point we could perform a partial fraction expansion, but we will let Matlab do the work
>> [r,p,k]=residue([1 5],20*[1 5 6])
r =
-0.1000
0.1500
p =
-3.0000
-2.0000
k = []
So we get
As expected this agrees with the solution obtained using the transfer function (done on previous page).
Zero State Solution:
We start by finding the transfer function, which was derived at the start of the problem.
(Note, if we didn't already know the transfer function we could always use the relationship H(s)=(s)B+D.)
Rather than solving this again, we refer to the solution on the previous page.
Complete Solution:
The complete response is simply the sum of the zero input and zero stat response.
References
|
2016-10-25 03:01:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8691723346710205, "perplexity": 1664.4205398159052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719877.27/warc/CC-MAIN-20161020183839-00524-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://brilliant.org/discussions/thread/infinite-universe/
|
×
# infinite universe
universe is created through infinity,by infinity and in infinify which starts from zero and ends with infinite which is nearer to zero
Note by Deepak Patil
4 years, 5 months ago
Sort by:
no it is actually a 4-dimensional space which can only be limited if we master the 4th dimension,i.e. time...
- 4 years, 5 months ago
|
2017-10-20 05:29:48
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8944823145866394, "perplexity": 5260.602526113975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823731.36/warc/CC-MAIN-20171020044747-20171020064747-00720.warc.gz"}
|
https://www.answers.com/Q/What_are_the_differences_between_expressed_and_implied_powers_of_congress
|
US Constitution
US Congress
The Difference Between
# What are the differences between expressed and implied powers of congress?
234
###### Wiki User
Expressed powers are those written in the U.S. Constitution, implied powers are those that are not written in the U.S. Constitution but are used to better the united states as needed.
🙏
0
🤨
0
😮
0
😂
0
## Related Questions
Expressed: Powers given to Congress Implied: Not spelled out but given or "implied" Reserved: Not expressed in the Constitution and are granted to the states
Congress has expressed and implied powers. Expressed are strictly stated powers in the Constitution. Implied powers are derived from the elastic clause of the Constitution.
In the Constitution, delegated (expressed) powers are powers that are explicitly given to Congress. Implied Powers are powers that are not written in the Constitution, but are implied by the Elastic Clause.
The United States Constitution is the document that contains the expressed and implied powers that are given to Congress. These powers are outlined in Article I, Section 8.
Expressed powers are powers that are specifically listed in the Constitution. Implied powers are powers not listed in the Constitution but according to the "necessary and proper" clause, these powers may used to carry out expressed powers.
Expressed powers are powers spcificaly given to Congress in the Constitution and Implied powers are given to the Congress in Article 1 section 8 of the Constitution at least that is what my history book says.
These type of powers are called "expressed powers", as they're clearly expressed in the constitution. The other powers are called "implied powers", as they're not expressed but implied from other sections.
i would tell you but hunny it is too too much to write so get your hands ready
A power that must be deemed to exist in order for a particular responsibility to be carried out.
An expressed contract is one that is actually in writing. Implied is one that can be inferred from the actions of the parties.
impliedif something is not expressed, it is implied.
Congress' powers are listed in Article one of the Constitution. Specific powers are enumerated in section eight. Congress has expressed powers that are written in the Constitution and implied powers that are not expressed.
Implied powers allow Congress to execute anything they see as necessary and proper, and it doesn't need to fall under the expressed powers of the Constitution.
Implied powers are the authorities that although are not specifically delegated in the constitution are still a power. A good example for an implied power in congress is that the constitution gives Congress the expressed power of providing for a Navy and an Army. But, they also provide for the Air Force. Though this is not listed in the constitution because there were no airplanes during this time, it was implied that Congress should provide for all of the military. justapebbleinthesea.blogspot.com
The five expressed powers of Congress are to collect taxes, borrow money on the credit of the United States, regulate commerce, coin money, and lastly to declare war:)
Expressed powers are powers that are stated in the constitution while implied are vaguely relevant and can be assumed to be stated. The elastic clause grants congress a set of implied powers that are not explicitly named in the constitution, but are assumed to exist because they are necessary to implement the expressed powers named in article 1.
These powers are referred to as implied powers, powers that are not explicitly granted to Congress in the U.S. Constitution. The opposite would be expressed powers.
Congress legislating on issues of national health care is a good example of the use of what is known as an implied power. The opposite would be an expressed power.
The implied powers doctrine upheld Mcculloh vs Maryland and gives Congress the power to do anything reasonably related to carrying out the expressed powers.
It's simple the reader infers details that are implied by the text,Explicit means clearly expressed or readily observable where as Implicit means implied or expressed indirectly.
###### US ConstitutionUS CongressLaw & Legal IssuesEnglish LanguagePolitics and GovernmentHealthHistory of the United StatesGovernmentHobbies & Collectibles
Copyright © 2020 Multiply Media, LLC. All Rights Reserved. The material on this site can not be reproduced, distributed, transmitted, cached or otherwise used, except with prior written permission of Multiply.
|
2020-12-03 11:45:24
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8198758363723755, "perplexity": 2269.573620633768}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141727627.70/warc/CC-MAIN-20201203094119-20201203124119-00710.warc.gz"}
|
http://oknoprof.pl/arygzgr0/how-many-unpaired-electrons-does-vanadium-have-6beaff
|
That leaves the (n â 1)d orbitals to be involved in some portion of the bonding and in the process also describes the metal complex's valence electrons. The electron configuration for transition metals predicted by the simple Aufbau principle and Madelung's rule has serious conflicts with experimental observations for transition metal centers under most ambient conditions. 3.37. Periodic Table: commons.wikimedia.org/wiki/File:Periodic_table.svg, Ionic Compounds: lac.smccme.edu/New%20PDF%20No.../Ionrules2.pdf (Page 6 is useful), List of Inorganic Compounds: en.Wikipedia.org/wiki/List_of_inorganic_compounds, en.Wikipedia.org/wiki/Metal_Oxidation_States#Variable_oxidation_states. Munoz-Paez, Adela. Alkali metals have one electron in their valence s-orbital and therefore their oxidation state is almost always +1 (from losing it) and alkaline earth metals have two electrons in their valences-orbital, resulting with an oxidation state of +2 (from losing both). What two transition metals have only one oxidation state. It is far more common for metal centers to have bonds to other atoms through metallic bonds or covalent bonds. Why does the number of oxidation states for transition metals increase in the middle of the group? According to the model present by ligand field theory, the ns orbital is involved in bonding to the ligands and forms a strongly bonding orbital which has predominantly ligand character and the correspondingly strong anti-bonding orbital which is unfilled and usually well above the lowest unoccupied molecular orbital (LUMO). See table in this module for more information about the most common oxidation states. J. Chem. Which transition metal has the most number of oxidation states? For more information contact us at info@libretexts.org or check out our status page at https://status.libretexts.org. The oxidation state determines if the element or compound is diamagnetic or paramagnetic. Electron configuration was first conceived under the Bohr model of the atom, and it is still common to speak of shells and subshells despite the advances in understanding of the quantum-mechanical nature of electrons.. An electron shell is the set of allowed states that share the same principal quantum number, n (the number before the letter in the orbital label), that electrons … so 2×Cr+3×O=0 Oxygen O almost always has a charge of -2 so 2×Cr+3×(−2)=0 2×Cr+−6=0 add + 6 to both sides 2×Cr+−6+6=0+6 so 2×Cr=+6 divide both side by 2 22×Cr=+62 equals Cr=+3 This gives us Ag, Electron Configuration of Transition Metals, General Trends among the Transition Metals, Oxidation State of Transition Metals in Compounds, http://www.chemicalelements.com/groups/transition.html, http://chemed.chem.purdue.edu/genchem/topicreview/bp/ch12/trans.php, information contact us at info@libretexts.org, status page at https://status.libretexts.org. When given an ionic compound such as AgCl, you can easily determine the oxidation state of the transition metal. For example, in group 6, (chromium) Cr is most stable at a +3 oxidation state, meaning that you will not find many stable forms of Cr in the +4 and +5 oxidation states. The oxidation state of an element is related to the number of electrons that an atom loses, gains, or appears to use when joining with another atom in compounds. These substances are non-magnetic, such as wood, water, and some plastics. Like other heavier lanthanides, dysprosium has a lot of unpaired electrons, giving both the metal and its ions a high magnetic susceptibility. Since there are 3 Cl atoms the negative charge is -3. Chlorine is a chemical element with the symbol Cl and atomic number 17. Legal. 1s^2 2s^2 2p^3 Nitrogen (7 electrons) Three unpaired electron in the 2p sublevel. If an atom is reduced, it has a higher number of valence shell electrons, and therefore a higher oxidation state, and is a strong oxidant. This example also shows that manganese atoms can have an oxidation state of +7, which is the highest possible oxidation state for the fourth period transition metals. The more recent ligand field theory offers an easy to understand explanation that models phenomena relatively well. Since the orbitals resulting from the ns orbital are either buried in bonding or elevated well above the valence, the ns orbitals are not relevant to describing the valence. As the number of unpaired valence electrons increases, the d-orbital increases, the highest oxidation state increases. Magnets are used in electric motors and generators that allow us to have computers, light, telephones, televisions, and electric heat. Reduction results in a decrease in the oxidation state. Standard electron configuration perspective, "A new approach to the formal classification of covalent compounds of the elements", MLX Plots (Ged Parkin group website, Columbia University), oxidative addition / reductive elimination, https://en.wikipedia.org/w/index.php?title=D_electron_count&oldid=944513533, Creative Commons Attribution-ShareAlike License, This page was last edited on 8 March 2020, at 08:49. Determine the oxidation states of the transition metals found in these neutral compounds. In this module, we will precisely go over the oxidation states of transition metals. In this situation the complex geometry is octahedral, which means two of the d orbitals have the proper geometry to be involved in bonding. Knowing that CO3has an oxidation state of -2 and knowing that the overall charge of this compound is neutral, we can conclude that zinc (Zn) has an oxidation state of +2. With this said, we get Co2+ and 2Br-, which would result as CoBr2. Paramagnetic substances have at least one unpaired electron. Since we know that chlorine (Cl) is in the halogen group of the periodic table, we then know that it has a charge of -1, or simply Cl-. Print. Nitrogen gained 3 electrons to form N3; it has 7 protons and 10 electrons. What is the oxidation state of zinc (Zn) in ZnCO3. General Chemistry: Principles and Modern Applications. Petrucci, Ralph H., William S. Harwood, F. G. Herring, and Jeffry D. Madura. Chromium and molybdenum possess maximum number (6) of unpaired electrons and magnetic moment. What follows is a short description of common geometries and characteristics of each possible d electron count and representative examples. Magnetism is a function of chemistry that relates to the oxidation state. Vanadium(IV) has one unpaired 3d electron that, coupled with the nuclear spin, is exquisitely diagnostic in EPR spectroscopy - the vanadyl ion (VO 2+) is a sensitive spectroscopic probe that has been used to elucidate enzyme active site structure, as well as catalytic activity. See File Attachment for Solutions. (2003). Angew Chem Int Ed Engl 42(9): 1038-41. Similarly copper is [Ar]4s13d10 with a full d subshell, and not [Ar]4s23d9.[3]:38. Experimentally it has been observed that not only are the ns electrons removed first, even for unionized complexes all of the valence electrons are located in the (n â 1)d orbitals. For higher d-series, the actual magnetic moment includes components from the orbital moment in addition to the spin moment. This results in two filled bonding orbitals and two orbitals which are usually the lowest unoccupied molecular orbitals (LUMO) or the highest partially filled molecular orbitals â a variation on the highest occupied molecular orbitals (HOMO). Note: The transition metal is underlined in the following compounds. Similar to chlorine, bromine (Br) is also in the halogen group, so we know that it has a charge of -1 (Br-). Diamagnetic substances have only paired electrons, and repel magnetic fields weakly. c. vanadium d. calcium. Often it is difficult or impossible to assign electrons and charge to the metal center or a ligand. Depending on the geometry of the final complex, either all three of the np orbitals or portions of them are involved in bonding, similar to the ns orbitals. Determine the oxidation state of cobalt (Co) in CoBr2. The two orbitals that are involved in bonding form a linear combination with two ligand orbitals with the proper symmetry. How many electrons in an atom can have each of the following quantum number or sublevel designation An equilibrium mixture of PCl_5g PCl_3g and Cl_2g has partial pressures of 217.0 Torr. Academia.edu is a platform for academics to share research papers. Unpaired Electrons of d-orbitals. The cation is first in the formula; therefore the formula should be Na2S. Answer: Cl has an oxidation state of -1. This gives us Ag+ and Cl-, in which the positive and negative charge cancels each other out, resulting with an overall neutral charge; therefore +1 is verified as the oxidation state of silver (Ag). These are the type of magnets found on your refrigerator. But referring to the formal oxidation state and d electron count can still be useful when trying to understand the chemistry. "Stabilization of low-oxidation-state early transition-metal complexes bearing 1,2,4-triphosphacyclopentadienyl ligands: structure of [Sc(P3C2tBu2)2]2; Sc(II) or mixed oxidation state?" Since there are two bromines, the anion (bromine) gives us a charge of -2. Since FeCl3 has no overall charge, the compound have a neutral charge, and therefore the oxidation state of Fe is +3. Chromium and copper have 4s1 instead of 4s2. 3.39. A large variety of ligands can bind themselves to these elements. General Chemistry Principles and Modern Applications. Answer: +3 Explanation: A compound has a zero net charge. This gives us Mn7+ and 4 O2-, which will result as $$MnO_4^-$$. Other possible oxidation states for iron includes: +5, +4, +3, and +2. Similarly, for copper, it is 1 d-electron short for having a fully-filled d-orbital and takes one from the s-orbital, so the electron configuration for copper would simply be: [Ar] 4s13d10. Due to this, a wide variety of stable complexes are formed by transition elements. We present a thoroughgoing electron paramagnetic resonance investigation of polydopamine (PDA) radicals using multiple electron paramagnetic resonance techniques at the W-band (94 GHz), electron nuclear double resonance at the Q-band (34 GHz), spin relaxation, and continuous wave measurements at the X-band (9 GHz). The ground state electronic configuration of neutral oxygen is [He].2s 2.2p 4 and the term symbol of oxygen is 3 P 2.. There are many examples of every possible d electron configuration. For example: Scandium has one unpaired electron in the d-orbital. Clentsmith, G. K., F. G. Cloke, et al. An example is chromium whose electron configuration is [Ar]4s13d5 with a half-filled d subshell, although Madelung's rule would predict [Ar]4s23d4. Here is a chart which shows the most common oxidation states for first row transition metals. The d electron count is a chemistry formalism used to describe the electron configuration of the valence electrons of a transition metal center in a coordination complex. Question 17. It is these dâd transitions, ligand to metal charge transfers (LMCT), or metal to ligand charge transfers (MLCT) that generally give metals complexes their vibrant colors. The s-orbital also contributes to determining the oxidation states. What is the maximum number of electrons that can be found in any orbital of an atom? Its unit is Bohr Magneton (BM). The second-lightest of the halogens, it appears between fluorine and bromine in the periodic table and its properties are mostly intermediate between them. Adopted a LibreTexts for your class? Consider the manganese (Mn) atom in the permanganate ($$MnO_4^-$$) ion. We know that the full p orbitals will add up to 6. It is added to the 2 electrons of the s-orbital and therefore the oxidation state is +3. This poor explanation avoids the basic problems with the standard electron configuration model. We also acknowledge previous National Science Foundation support under grant numbers 1246120, 1525057, and 1413739. For example, the 4s fills before the 3d in period 4. There are five orbitals in the d subshell manifold. To fully understand the phenomena of oxidation states of transition metals, we have to understand how the unpaired d-orbital electrons bond. [1][2] The d electron count is an effective way to understand the geometry and reactivity of transition metal complexes. To help remember the stability of higher oxidation states for transition metals it is important to know the trend: the stability of the higher oxidation states progressively increases down a group. Chlorine is a yellow-green gas at room temperature. Since copper is just 1 electron short of having a completely full d-orbital, it steals an electron from the s-orbital, allowing it to have 10 d-electrons. By contrast, there are many stable forms of molybdenum (Mo) and tungsten (W) at +4 and +5 oxidation states. For example, in the MO diagram provided for the [Ti(H2O)6]3+ the ns orbital â which is placed above (n â 1)d in the representation of atomic orbitals (AOs) â is used in a linear combination with the ligand orbitals, forming a very stable bonding orbital with significant ligand character as well as an unoccupied high energy antibonding orbital which is not shown. However, paramagnetic substances become magnetic in the presence of a magnetic field. On the other hand, lithium (Li) and sodium (Na) are incredibly strong reducing agents (likes to be oxidized), meaning that they easily lose electrons. The final description of the valence is highly dependent on the complex's geometry, in turn highly dependent on the d electron count and character of the associated ligands. Many paramagnetic compounds are formed by these elements, because of the unpaired electrons in the d orbital. alkali metals and alkaline earth metals)? The standard electron configuration model assumes a hydrogen-like atom removed from all other atoms. Scandium is one of the two elements in the first transition metal period which has only one oxidation state (zinc is the other, with an oxidation state of +2). All the other elements have at least two different oxidation states. This is because unpaired valence electrons are unstable and eager to bond with other chemical species. This is not the case for transition metals since transition metals have 5 d-orbitals. This is because copper has 9 d-electrons, which would produce 4 paired d-electrons and 1 unpaired d-electron. Negative. These have applications including the film industry; the lamps have a high luminous efficiency whilst they can be dimmed appreciably whilst still maintaining the same "colour temperature". Have questions or comments? The valence of a transition metal center can be described by standard quantum numbers. For example, oxygen (O) and fluorine (F) are very strong oxidants. 9th ed. The radical anion, DHAQ3–•, formed as a reaction intermediate during the reduction of DHAQ2–, was detected and its concentration quantified during … The d electron count is an effective way to understand the geometry and reactivity of transition metal complexes. In addition, this compound has an overall charge of -1; therefore the overall charge is not neutral in this example. Likewise, chromium has 4 d-electrons, only 1 short of having a half-filled d-orbital, so it steals an electron from the s-orbital, allowing chromium to have 5 d-electrons. This means that the oxidation states would be the highest in the very middle of the transition metal periods due to the presence of the highest number of unpaired valence electrons. Rb forms a +1 cation (Rb+) and Cl forms a 1 anion (Cl), so the formula should be RbCl. The usual explanation is that "half-filled or completely filled subshells are particularly stable arrangements of electrons". To fully understand the phenomena of oxidation states of transition metals, we have to understand how the unpaired d-orbital electrons bond. We see that iodine has 5 electrons in the p orbitals. In other words, it is: Fe3+ and 3Cl-, which makes up FeCl3 with a neutral charge. This is because chromium is 1 d-electron short for having a half-filled d-orbital, therefore it takes one from the s-orbital, so the electron configuration for chromium would just be: [Ar] 4s13d5. We report the development of in situ (online) EPR and coupled EPR/NMR methods to study redox flow batteries, which are applied here to investigate the redox-active electrolyte, 2,6-dihydroxyanthraquinone (DHAQ). 1s^2 2s^2 2p^6 3s^2 3p^6 3d^10 4s^2 4p^4 ... How many unpaired electrons does an atom of this element have? b) How many unpaired electrons does iodine have? The formalism has been incorporated into the two major models used to describe coordination … The TanabeâSugano diagram with a small amount of information accurately predicts absorptions in the UV and visible electromagnetic spectrum resulting from d to d orbital electron transitions. The np orbitals if any that remain non-bonding still exceed the valence of the complex. To find one of its oxidation states, we can use the formula: Indeed, +6 is one of the oxidation states of iron, but it is very rare. where ‘S’ is the total spin and ‘n’ is the number of unpaired electrons. For ions, the oxidation state is equal to the charge of the ion, e.g., the ion Fe, The oxidation state of a neutral compound is zero, e.g., What is the oxidation state of Fe in FeCl. 3.38. In addition, we know that CoBr2 has an overall neutral charge, therefore we can conclude that the cation (cobalt), Co must have an oxidation state of +2 in order to neutralize the -2 charge from the two bromines. 8th ed. The other three d orbitals in the basic model do not have significant interactions with the ligands and remain as three degenerate non-bonding orbitals. It was mentioned previously that both copper and chromium do not follow the general formula for transition metal oxidation states. The formula for determining oxidation states would be (with the exception of copper and chromium): Highest Oxidation State for a Transition metal = Number of Unpaired d-electrons + Two s-orbital electrons. To determine the oxidation state, unpaired d-orbital electrons are added to the 2s orbital electrons since the 3d orbital is located before the 4s orbital in the periodic table. Matters are further complicated when metal centers are oxidized. Since the (n â 1)d shell is predicted to have higher energy than the ns shell, it might be expected that electrons would be removed from the (n â 1)d shell first. N.J.: Pearson/Prentice Hall, 2002. In this case, you would be asked to determine the oxidation state of silver (Ag). Thus, since the oxygen atoms in the ion contribute a total oxidaiton state of -8, and since the overall charge of the ion is -1, the sole manganese atom (Mn) must have an oxidation state of +7. Crystal field theory describes a number of physical phenomena well but does not describe bonding nor offer an explanation for why ns electrons are ionized before (n â 1)d electrons. The d-orbital has a variety of oxidation states. Titanium lost four electrons to form Ti4+; it has 22 protons and 18 electrons. (You will probably need Adobe Reader to open the PDF file.). Oxidation results in an increase in the oxidation state. Each of the ten possible d electron counts has an associated TanabeâSugano diagram describing gradations of possible ligand field environments a metal center could experience in an octahedral geometry. These bonds drastically change the energies of the orbitals for which electron configurations are predicted. Using the Hund's rule and Pauli exclusion principals we can make a diagram like the following: The answer is one. Oxygen atoms have 8 electrons and the shell structure is 2.6. Since oxygen has an oxidation state of -2 and we know there are four oxygen atoms. Under most conditions all of the valence electrons of a transition metal center are located in d orbitals while the standard model of electron configuration would predict some of them to be in the pertinent s orbital. To find the answer we refer to part a) and look at the valence electrons. 3.40 There are various hand waving arguments for this phenomenon including that "the ns electrons are farther away from the nuclei and thus ionized first" while ignoring results based on neutral complexes. These are much stronger and do not require the presence of a magnetic field to display magnetic properties. Iron has 4 unpaired electrons and 2 paired electrons. This gives us Zn2+ and CO32-, in which the positive and negative charges from zinc and carbonate will cancel with each other, resulting in an overall neutral charge, giving us ZnCO3. Petrucci, Ralph H., William S. Harwood, and F. G. Herring. Since there are many exceptions to the formula, it would be better just to memorize the oxidation states for the fourth period transition metals, since they are more commonly used. Manganese, which is in the middle of the period, has the highest number of oxidation states, and indeed the highest oxidation state in the whole period since it has five unpaired electrons (see table below). Free elements (elements that are not combined with other elements) have an oxidation state of zero, e.g., the oxidation state of Cr (chromium) is 0. A. Click here to let us know! There are five orbitals in the d subshell manifold. Almost all of the transition metals have multiple potential oxidation states. 13.2 A quantity of 2.00 x 10^2 mL of 0.779 M HCl is mixed with 2.00 x 10^2 mL of 0.390 M BaOH2 in a con For a high-oxidation-state metal center with a +4 charge or greater it is understood that the true charge separation is much smaller. Calculate the magnetic moment and the number of unpaired electrons in Cu 2+. The d electron count is a chemistry formalism used to describe the electron configuration of the valence electrons of a transition metal center in a coordination complex. The LibreTexts libraries are Powered by MindTouch® and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. "Transition Metal Oxides: Geometric and Electronic Stuctures: Introducing Solid State Topics in Inorganic Chemistry Courses." This assumption is only truly relevant for esoteric situations. The number of d-electrons range from 1 (in Sc) to 10 (in Cu and Zn). The number of unpaired electrons are 4 as follows: Their magnetic moment is µ = $$\sqrt { 4(4+2) }$$ = $$\sqrt { 24 }$$ = 4.89 µ B. (Note: CO3 in this example has an oxidation state of -2, CO32-). It is important to remember that the d electron count is a formalism and describes some complexes better than others. 3. As the number of unpaired valence electrons increases, the d-orbital increases, the highest oxidation state increases. Unless otherwise noted, LibreTexts content is licensed by CC BY-NC-SA 3.0. In general chemistry textbooks, a few exceptions are acknowledged with only one electron in the ns orbital in favor of completing a half or whole d shell. It also determines the ability of an atom to oxidize (to lose electrons) or to reduce (to gain electrons) other atoms or species. It is an extremely reactive element and a strong oxidising agent: among the elements, it has the … Educ.1994, 71, 381. Print. See Periodic Table below: In the image above, the blue-boxed area is the d block, or also known as transition metals. Oxygen: description Your user agent does not support the HTML5 Audio element. The analysis proves the existence of two distinct … Upper Saddle River, N.J.: Pearson/Prentice Hall, 2007. Another stronger magnetic force is a permanent magnet called a ferromagnet. Why do transition metals have a greater number of oxidation states than main group metals (i.e. In addition, by seeing that there is no overall charge for AgCl, (which is determined by looking at the top right of the compound, i.e., AgCl#, where # represents the overall charge of the compound) we can conclude that silver (Ag) has an oxidation state of +1. So that would mathematically look like: 1s electron + 1s electron + 1d electron = 3 total electrons = oxidation state of +3. [ "article:topic", "Unpaired Electrons", "oxidation state", "orbitals", "transition metals", "showtoc:no", "oxidation states", "Multiple Oxidation States", "Polyatomic Transition Metal Ions" ], https://chem.libretexts.org/@app/auth/3/login?returnto=https%3A%2F%2Fchem.libretexts.org%2FBookshelves%2FInorganic_Chemistry%2FModules_and_Websites_(Inorganic_Chemistry)%2FDescriptive_Chemistry%2FElements_Organized_by_Block%2F3_d-Block_Elements%2F1b_Properties_of_Transition_Metals%2FElectron_Configuration_of_Transition_Metals%2FOxidation_States_of_Transition_Metals, The formula for determining oxidation states would be, we can conclude that silver (Ag) has an oxidation state of +1. The Aufbau principle and Madelung's rule would predict for period n that the ns orbitals fill prior to the (n â 1)d orbitals. Thus for coordination complexes the standard electron configuration formalism is meaningless and the d electron count formalism is a suitable substitute. The formalism has been incorporated into the two major models used to describe coordination complexes; crystal field theory and ligand field theory, which is a more advanced version based on molecular orbital theory.[3]. These elements have a large ratio of charge to the radius. As stated above, most transition metals have multiple oxidation states, since it is relatively easy to lose electron(s) for transition metals compared to the alkali metals and alkaline earth metals. Not neutral in this module for more information contact us at info libretexts.org... Two bromines, the 4s fills before the 3d in period 4 metal is underlined in the should. Make a diagram like the following compounds Herring, and electric heat Jeffry Madura. The number of oxidation states of the transition metal has been incorporated into the major. As three degenerate non-bonding orbitals transition elements includes components from the orbital in... Chromium and molybdenum possess maximum number of unpaired electrons does an atom further complicated when metal centers to bonds... Electrons that can be found in these neutral compounds used to describe coordination … b ) How many unpaired does... Arrangements of electrons that can be described by standard quantum numbers two bromines the... Module, we will precisely go over the oxidation state of zinc Zn... Was mentioned previously that both copper and chromium do not follow the how many unpaired electrons does vanadium have formula for transition metals have only oxidation. Different oxidation states compound such as AgCl, you would be asked to determine oxidation! ]:38 moment in addition, this compound has an oxidation state and d electron is! Form Ti4+ ; it has 7 protons and 18 electrons proper symmetry components! Assumes a hydrogen-like atom removed from all other atoms ( O ) and fluorine ( ). Hydrogen-Like atom removed from all other atoms through metallic bonds or covalent bonds bind to! Arrangements of electrons that can be found in how many unpaired electrons does vanadium have neutral compounds half-filled or completely filled subshells are stable... Is difficult or impossible to assign electrons and magnetic moment and the number d-electrons. Consider the manganese ( Mn ) atom in the 2p sublevel a diagram like the following compounds 1d electron 3... There are 3 Cl atoms the negative charge is -3 ( Note: CO3 in this module we... Co3 in this example has an oxidation state is +3 remain non-bonding still exceed the how many unpaired electrons does vanadium have of the?... Academics to share research papers rule and Pauli exclusion principals we can make a diagram like the following: transition. Transition metals have multiple potential oxidation states given an ionic compound such as AgCl, you be. At info @ libretexts.org or check out our status page at https: //status.libretexts.org are 3 Cl atoms the charge. ) in ZnCO3 Inorganic chemistry Courses. previously that both copper and chromium do not have significant interactions with symbol! Substances are non-magnetic, such as wood, water, and therefore the overall charge is -3 eager to with... Far more common for metal centers are oxidized are non-magnetic, such AgCl! Form a linear combination with two ligand orbitals with the standard electron configuration is! Is added to the oxidation state are five orbitals in the d-orbital also contributes to determining oxidation... Can easily determine the oxidation state increases electric motors and generators that allow to... Mostly intermediate between them are further complicated when metal centers are oxidized up 6... Used to describe coordination … b ) How many unpaired electrons center or ligand... Other heavier lanthanides, dysprosium has a zero net charge every possible electron. Atom of this element have chromium and molybdenum possess maximum number ( 6 ) of unpaired valence electrons increases the... Has 7 protons and 18 electrons we know there are many examples of every possible d electron count an. Chromium do not have significant interactions with the symbol Cl and atomic number 17 since has. Has an oxidation state of -2, CO32- ) state and d electron count is an effective way to How. Assign electrons and 2 paired electrons, giving both the metal and its ions a high susceptibility! A linear combination with two ligand orbitals with the proper symmetry explanation that models relatively. Hydrogen-Like atom removed from all other atoms through metallic bonds or covalent bonds and we know that the electron! Permanganate ( \ ( MnO_4^-\ ) case, you can easily determine the state. This, a wide variety of stable complexes are formed by transition elements other elements a! A formalism and describes some complexes better than others precisely go over the oxidation.... And Zn ) in ZnCO3 greater it is important to remember that the true separation! ( Cl ), so the formula ; therefore the overall charge is -3 PDF.. ) are very strong oxidants at info @ libretexts.org or check out status. ] 4s23d9. [ 3 ]:38 the unpaired d-orbital electrons bond overall charge, the actual moment. Since oxygen has an oxidation state of silver ( Ag ) -2, CO32- ) not [ Ar ].! Magnet called a ferromagnet count formalism is meaningless and the d electron count and representative examples oxidized! Interactions with the standard electron configuration model assumes a hydrogen-like atom removed from all other atoms through metallic bonds covalent! And 10 electrons that would mathematically look like: 1s electron + 1d electron = total. Four oxygen atoms common geometries and characteristics of each possible d electron count and representative examples River,:. S-Orbital also contributes to determining the oxidation state of -1 ; therefore formula. Nitrogen ( 7 electrons ) three unpaired electron in the oxidation state d! Protons and 10 electrons of +3 increases, the highest oxidation state and d electron formalism... Copper has 9 d-electrons, which will result as \ ( MnO_4^-\ )!
|
2022-08-17 06:57:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.622870147228241, "perplexity": 1721.631250364504}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572870.85/warc/CC-MAIN-20220817062258-20220817092258-00016.warc.gz"}
|
http://www.chegg.com/homework-help/questions-and-answers/starting-with-the-equation-for-absorbance-at-576-and-555-nm-derive-an-equation-for-the-rat-q3463504
|
calculate ratio for Co/CD
starting with the equation for absorbance at 576 and 555 nm derive an equation for the ration of oxygenated to deoxygenated hemoglobin $$\frac{C_{o}}{C_{D}}$$
AA576=%u03F5O576bCo+%u03F5D576bCD
$$A_{555}= \epsilon _{O555}bC_{o}+ \epsilon_{D555}bC_{D}$$
• Anonymous commented
sorry the first one is A576= eO576bCo eD576bCD
• low patterns of HbV/RBC mixtures in narrow tube. Figure 2 shows a microscopic view of the mixtures of RBCs and HbVHSAs flowing in the narrow tubes. The thickness of the RBC-free marginal layer increased with increasing mixing ratio of HbVHSA, and the layer seemed to be slightly turbid and dark colored due to the presence of HbV particles with a 250-nm diameter (29). The thickness of the RBC-free layer was 2.7 ± 1.7 μm for RBCs alone (100RBC), 3.5 ± 1.8 μm for 10HbVHSA/90RBC, 4.8 ± 2.2 μm for 50HbVHSA/50RBC, and 7.0 ± 1.6 μm for 90HbVHSA/10RBC. On the other hand, the mixture of RBCs and the Hb solution produced a transparent layer, but the distribution of RBCs was not changed significantly compared with the HbV/RBC mixtures
Flow patterns of RBCs mixed with HbVs suspended in recombinant human serum albumin (HbVHSA/RBC) in a narrow tube. HbV particles were homogeneously dispersed in a suspension medium. They tended to distribute in the marginal zone of the flow. Thickness of the RBC-free layer increased with increasing amount of HbVHSA. RBC-free phase became darker and more semitransparent, which indicates the presence of HbVs. Tube diameter, 28 μm; Hb concentration ([Hb]), 10 g/dl; centerline flow velocity, 1 mm/s.
Perfusion pressure of narrow tube. The perfusion pressure of the RBC suspension mixed with HbVHSA, HbVHES, or Hb29 in various ratios was examined at the constant centerline flow velocity of 1 mm/s (Fig. 3). The addition of the Hb29 solution to the RBC suspension in HSA (3.9 cP; Table 1) decreased the perfusion pressure from 10 kPa (100RBC) to 6 kPa (90Hb29/10RBC) due to the lower viscosity of the Hb solution (1.3 cP at 71 s–1). On the other hand, the addition of HbVs increased the perfusion pressure in proportion to the HbV/RBC mixing ratio. Especially, the perfusion pressure (41 kPa) of the mixture of 90% HbVHESs (5.7 cP; Table 1) and 10% RBC (3.9 cP) was more than four times higher than that of RBCs alone (10 kPa).
http://ajpheart.physiology.org/content/285/6/H2543.full
|
2013-05-24 12:25:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5391534566879272, "perplexity": 5147.983506775164}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704655626/warc/CC-MAIN-20130516114415-00041-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://physics.stackexchange.com/questions/339294/what-is-the-mass-of-an-electron-in-the-sense-of-its-wave-nature
|
# What is the mass of an electron in the sense of its wave nature?
I just completed a course in Mechanics and I'm currently doing electromagnetism. I haven't rigorously started QM; or Modern Physics. I read up a few articles on the Wave-Particle duality. So, how is the mass defined for an electron while it exhibiting its wave nature?
Mass is mostly a notion which shows up in dynamical interactions, but you can see it in principle in the behavior of the wave. For instance, the dispersion relation for the quantum wave corresponding to a non-relativistic electron (the relation between angular frequency $\omega$ and wave-vector $k$) will be $$\omega = \frac{\hbar k^2}{2m}$$ This is just a "quantized" version of $E = p^2/(2m)$ with $E = \hbar \omega$ and $p = \hbar k$.
Another example: if you have a photon with energy $E_p = \hbar \omega_p$ radiated by the electron, its energy drops to $E' = E - E_p$ and you can compute that its wave-vector will now be $$k' = \sqrt{\frac{2 m (\omega - \omega_p)}{\hbar}}$$ All these relations have $m$ figuring in them and for a different $m$ you would get completely different scalings.
For more digging, you can see e.g. the wiki of De Broglie waves.
The effective mass of an electron,$M$ can be found from its De Broglie wavelength, $\lambda$ which is the wavelength of the electron from its wave nature by the following equations:
$$\lambda=\frac{h}{p}=\frac{h}{Mv}$$
Where $v$ is the speed of the electron wave- particle and $p$ is its momentum.
The by relativistic effective mass of the electron, M is given by: $$M=\frac{M_0}{\sqrt{1-v^2/c^2}}$$ Where $M_0$ is the rest mass of electron.
So thus the effective mass in terms of De Broglie wavelength can be found from the first and second equation by:
$$M=\frac{M_0\sqrt{1-v^2/c^2}}{\lambda*v}$$
|
2019-05-26 21:48:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8075177669525146, "perplexity": 156.02748253799243}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232259757.86/warc/CC-MAIN-20190526205447-20190526231447-00548.warc.gz"}
|
http://mlwiki.org/index.php/Battle_of_the_Sexes
|
# ML Wiki
## Cooperation Games
Unlike Pure Competition Games where players have opposite interests, here players have the same interests
• $\forall a \in A, \forall i, j: u_i(a) = u_j(a)$
## Coordination Game
Which side of the road you drive on?
• suppose two players meet at a passage
• they want to get through
• both need to choose either to go left or right
• win-win situation only when they both pick the same side
• otherwise both lose
Left Right
Left (1, 1) (0, 0)
Right (0, 0) (1, 1)
## Battle of the Sexes
This is not only a cooperation game, but also a Pure Competition Game
Description:
• 2 players - a husband and a wife
• 2 options - ballet and football
• they want to go together
• but Husband prefers to go to football, and wife wants to see the ballet
wife $\to$
husband $\downarrow$
$B$ $F$
$B$ (2, 1) (0, 0)
$F$ (0, 0) (1, 2)
There are two Nash Equilibria:
• $(B, B)$ and $(F, F)$
• in both these cases nobody wants to deviate as they would get worse payoff
• consider $(B, F)$ - in this case both want to deviate
## Sources
Machine Learning Bookcamp: Learn machine learning by doing projects. Get 40% off with code "grigorevpc".
|
2021-04-19 03:29:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49526363611221313, "perplexity": 4089.451027906977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038863420.65/warc/CC-MAIN-20210419015157-20210419045157-00185.warc.gz"}
|
https://sysmath.com/jssc/EN/10.1007/s11424-021-0209-y
|
### Regression Analysis of Interval-Censored Data with Informative Observation Times Under the Accelerated Failure Time Model
ZHAO Shishun1, DONG Lijian1, SUN Jianguo2
1. 1. Center for Applied Statistical Research, School of Mathematics, Jilin University, Changchun 130012, China;
2. Department of Statistics, University of Missouri, Columbia, MO 65211, USA
• Received:2020-09-03 Revised:2020-12-31 Online:2022-08-25 Published:2022-08-02
• Supported by:
This research was supported by the National Natural Science Foundation of China under Grant No. 11671168 and the Science and Technology Developing Plan of Jilin Province under Grant No. 20200201258JC.
ZHAO Shishun, DONG Lijian, SUN Jianguo. Regression Analysis of Interval-Censored Data with Informative Observation Times Under the Accelerated Failure Time Model[J]. Journal of Systems Science and Complexity, 2022, 35(4): 1520-1534.
This paper discusses regression analysis of interval-censored failure time data arising from the accelerated failure time model in the presence of informative censoring. For the problem, a sieve maximum likelihood estimation approach is proposed and in the method, the copula model is employed to describe the relationship between the failure time of interest and the censoring or observation process. Also I-spline functions are used to approximate the unknown functions in the model, and a simulation study is carried out to assess the finite sample performance of the proposed approach and suggests that it works well in practical situations. In addition, an illustrative example is provided.
[1] Chen C, Lu T, Chen M, et al., Semiparametric transformation models for current status data with informative censoring, Biometrical Journal, 2012, 54(5): 641-656.[2] Finkelstein D, A proportional hazards model for interval-censored failure time data, Biometrics, 1986, 42: 845-854.[3] Huang J, Efficient estimation for the proportional hazards model with interval censoring, The Annals of Statistics, 1996, 24(2): 540-568.[4] Sun J, The Statistical Analysis of Interval-Censored Failure Time Data, Springer Science & Business Media, New York, 2006.[5] Finkelstein D and Wolfe R, A semiparametric model for regression analysis of interval-censored failure time data, Biometrics, 1985, 41: 933-945.[6] Zeng D, Cai J, and Shen Y, Semiparametric additive risks model for interval-censored data, Statistica Sinica, 2006, 16: 287-302.[7] Wang L, McMahan C, Hudgens M, et al., A flexible, computationally efficient method for fitting the proportional hazards model to interval-censored data, Biometrics, 2016, 72(1): 222-231.[8] Zeng D, Mao L, and Lin D, Maximum likelihood estimation for semiparametric transformation models with interval-censored data, Biometrika, 2016, 103(2): 253-271.[9] Zhou Q, Hu T, and Sun J, A sieve semiparametric maximum likelihood approach for regression analysis of bivariate interval-censored failure time data, Journal of the American Statistical Association, 2017, 112(518): 664-672.[10] Zeng D, Gao F, and Lin D, Maximum likelihood estimation for semiparametric regression models with multivariate interval-censored data, Biometrika, 2017, 104(3): 505-525.[11] Huang X and Wolfe R, A frailty model for informative censoring, Biometrics, 2002, 58: 510-520.[12] Zhang Z, Sun L, Sun J, et al., Regression analysis of failure time data with informative interval censoring, Statistics in Medicine, 2007, 26: 2533-2546.[13] Wang P, Zhao H, and Sun J, Regression analysis of case K interval-censored failure time data in the presence of informative censoring, Biometrics, 2016, 72(4): 1103-1112.[14] Ma L, Hu T, and Sun J, Cox regression analysis of dependent interval-censored failure time data, Computational Statistics and Data Analysis, 2016, 103: 79-90.[15] Zhao S, Hu T, Ma L, et al., Regression analysis of interval-censored failure time data with the additive hazards model in the presence of informative censoring, Statistics and Its Interface, 2015, 8(3): 367-377.[16] Xu D, Zhao S, Hu T, et al., Regression analysis of informatively interval-censored failure time data with semiparametric linear transformation model, Journal of Nonparametric Statistics, 2019, 31(3): 663-679.[17] Lin D, Wei L, and Ying Z, Accelerated failure time models for counting processes, Biometrika, 1998, 85(3): 605-618.[18] Jin Z, Lin D, Wei L, et al., Rank-based inference for the accelerated failure time model, Biometrika, 2003, 90(2): 341-353.[19] Zeng D and Lin D, Efficient estimation for the accelerated failure time model, Journal of the American Statistical Association, 2007, 102(480): 1387-1396.[20] Rabinowitz D, Tsiatis A, and Aragon J, Regression with interval-censored data, Biometrika, 1995, 82(3): 501-513.[21] Betensky R, Rabinowitz D, and Tsiatis A, Computationally simple accelerated failure time regression for interval censored data, Biometrika, 2001, 88: 703-711.[22] Nelsen R, An Introduction to Copulas, Springer Science and Business Media, New York, 2006.[23] Lu M, Zhang Y, and Huang J, Estimation of the mean function with panel count data using monotone polynomial splines, Biometrika, 2007, 94(3): 705-718.[24] Schumaker L, Spline Functions: Basic Theory, Cambridge University Press, Cambridge, 2007.[25] Murphy S and Vaart A, On profile likelihood, Journal of the American Statistical Association, 2000, 95(450): 449-465.
[1] Ximing CHENG , Li LUO. VARIABLE SELECTION FOR RECURRENT EVENT DATA WITH INFORMATIVE CENSORING [J]. Journal of Systems Science and Complexity, 2012, 25(5): 987-997. [2] SHI Peide;ZHENG Zhongguo. MUCLTIVARIATE RESISTANT REGRESSION SPLINES FOR ESTIMATING MULTIVARIATE FUNCTIONS FROM NOISY DATA [J]. Journal of Systems Science and Complexity, 1997, 10(3): 217-224.
Viewed
Full text
Abstract
|
2022-08-11 17:20:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4618770182132721, "perplexity": 5583.502681190913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571483.70/warc/CC-MAIN-20220811164257-20220811194257-00574.warc.gz"}
|
https://brilliant.org/problems/determinant-of-2x2-matrix/
|
# Determinant of 2x2 Matrix
Algebra Level 1
Given the 2 by 2 matrix $A = \begin{pmatrix} 5 & 1 \\ 1 & 5 \end{pmatrix} ,$ what is the value of $$\det(A)$$?
×
Problem Loading...
Note Loading...
Set Loading...
|
2019-01-23 21:22:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6062057018280029, "perplexity": 3468.594278772309}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584350539.86/warc/CC-MAIN-20190123193004-20190123215004-00184.warc.gz"}
|
http://en.wikipedia.org/wiki/Venn_diagrams
|
# Venn diagram
(Redirected from Venn diagrams)
Venn diagram showing which uppercase letter glyphs are shared by the Greek, Latin and Russian alphabets
A Venn diagram or set diagram is a diagram that shows all possible logical relations between a finite collection of sets. Venn diagrams were conceived around 1880 by John Venn. They are used to teach elementary set theory, as well as illustrate simple set relationships in probability, logic, statistics, linguistics and computer science.
## Example
Sets A (creatures with two legs) and B (creatures that can fly)
This example involves two sets, A and B, represented here as coloured circles. The orange circle, set A, represents all living creatures that are two-legged. The blue circle, set B, represents the living creatures that can fly. Each separate type of creature can be imagined as a point somewhere in the diagram. Living creatures that both can fly and have two legs—for example, parrots—are then in both sets, so they correspond to points in the area where the blue and orange circles overlap. That area contains all such and only such living creatures.
Humans and penguins are bipedal, and so are then in the orange circle, but since they cannot fly they appear in the left part of the orange circle, where it does not overlap with the blue circle. Mosquitoes have six legs, and fly, so the point for mosquitoes is in the part of the blue circle that does not overlap with the orange one. Creatures that are not two-legged and cannot fly (for example, whales and spiders) would all be represented by points outside both circles.
The combined area of sets A and B is called the union of A and B, denoted by A ∪ B. The union in this case contains all living creatures that are either two-legged or that can fly (or both).
The area in both A and B, where the two sets overlap, is called the intersection of A and B, denoted by A ∩ B. For example, the intersection of the two sets is not empty, because there are points that represent creatures that are in both the orange and blue circles.
## History
Venn diagrams were introduced in 1880 by John Venn (1834–1923) in a paper entitled On the Diagrammatic and Mechanical Representation of Propositions and Reasonings in the "Philosophical Magazine and Journal of Science", about the different ways to represent propositions by diagrams.[1] The use of these types of diagrams in formal logic, according to Ruskey and M. Weston, is "not an easy history to trace, but it is certain that the diagrams that are popularly associated with Venn, in fact, originated much earlier. They are rightly associated with Venn, however, because he comprehensively surveyed and formalized their usage, and was the first to generalize them".[2]
Venn himself did not use the term "Venn diagram" and referred to his invention as "Eulerian Circles."[1] For example, in the opening sentence of his 1880 article Venn writes, "Schemes of diagrammatic representation have been so familiarly introduced into logical treatises during the last century or so, that many readers, even those who have made no professional study of logic, may be supposed to be acquainted with the general nature and object of such devices. Of these schemes one only, viz. that commonly called 'Eulerian circles,' has met with any general acceptance..."[3] The first to use the term "Venn diagram" was Clarence Irving Lewis in 1918, in his book "A Survey of Symbolic Logic".[2]
Venn diagrams are very similar to Euler diagrams, which were invented by Leonhard Euler (1708–1783) in the 18th century.[note 1] M. E. Baron has noted that Leibniz (1646–1716) in the 17th century produced similar diagrams before Euler, but much of it was unpublished. She also observes even earlier Euler-like diagrams by Ramon Lull in the 13th Century.[4]
In the 20th century, Venn diagrams were further developed. D.W. Henderson showed in 1963 that the existence of an n-Venn diagram with n-fold rotational symmetry implied that n was a prime number.[5] He also showed that such symmetric Venn diagrams exist when n is 5 or 7. In 2002 Peter Hamburger found symmetric Venn diagrams for n = 11 and in 2003, Griggs, Killian, and Savage showed that symmetric Venn diagrams exist for all other primes. Thus rotationally symmetric Venn diagrams exist if and only if n is a prime number.[6]
Venn diagrams and Euler diagrams were incorporated as part of instruction in set theory as part of the new math movement in the 1960s. Since then, they have also been adopted by other curriculum fields such as reading.[7]
## Overview
Intersection of two sets $~A \cap B$
Union of two sets $~A \cup B$
Symmetric difference of two sets$A~\Delta~B$
Relative complement of A (left) in B (right) $A^c \cap B~=~B \setminus A$
Absolute complement of A in U $A^c~=~U \setminus A$
A Venn diagram is constructed with a collection of simple closed curves drawn in a plane. According to Lewis,[8] the "principle of these diagrams is that classes [or sets] be represented by regions in such relation to one another that all the possible logical relations of these classes can be indicated in the same diagram. That is, the diagram initially leaves room for any possible relation of the classes, and the actual or given relation, can then be specified by indicating that some particular region is null or is not-null".[8]:157
Venn diagrams normally comprise overlapping circles. The interior of the circle symbolically represents the elements of the set, while the exterior represents elements that are not members of the set. For instance, in a two-set Venn diagram, one circle may represent the group of all wooden objects, while another circle may represent the set of all tables. The overlapping area or intersection would then represent the set of all wooden tables. Shapes other than circles can be employed as shown below by Venn's own higher set diagrams. Venn diagrams do not generally contain information on the relative or absolute sizes (cardinality) of sets; i.e. they are schematic diagrams.
Venn diagrams are similar to Euler diagrams. However, a Venn diagram for n component sets must contain all 2n hypothetically possible zones that correspond to some combination of inclusion or exclusion in each of the component sets. Euler diagrams contain only the actually possible zones in a given context. In Venn diagrams, a shaded zone may represent an empty zone, whereas in an Euler diagram the corresponding zone is missing from the diagram. For example, if one set represents dairy products and another cheeses, the Venn diagram contains a zone for cheeses that are not dairy products. Assuming that in the context cheese means some type of dairy product, the Euler diagram has the cheese zone entirely contained within the dairy-product zone—there is no zone for (non-existent) non-dairy cheese. This means that as the number of contours increases, Euler diagrams are typically less visually complex than the equivalent Venn diagram, particularly if the number of non-empty intersections is small.[9]
## Extensions to higher numbers of sets
Venn diagrams typically represent two or three sets, but there are forms that allow for higher numbers. Shown below, four intersecting spheres form the highest order Venn diagram that has the symmetry of a simplex and can be visually represented. The 16 intersections correspond to the vertices of a tesseract (or the cells of a 16-cell respectively).
For higher numbers of sets, some loss of symmetry in the diagrams is unavoidable. Venn was keen to find "symmetrical figures...elegant in themselves,"[10] that represented higher numbers of sets, and he devised a four-set diagram using ellipses (see below). He also gave a construction for Venn diagrams for any number of sets, where each successive curve that delimits a set interleaves with previous curves, starting with the three-circle diagram.
### Edwards' Venn diagrams
A. W. F. Edwards constructed a series of Venn diagrams for higher numbers of sets by segmenting the surface of a sphere. For example, three sets can be easily represented by taking three hemispheres of the sphere at right angles (x = 0, y = 0 and z = 0). A fourth set can be added to the representation by taking a curve similar to the seam on a tennis ball, which winds up and down around the equator, and so on. The resulting sets can then be projected back to a plane to give cogwheel diagrams with increasing numbers of teeth, as shown on the right. These diagrams were devised while designing a stained-glass window in memory of Venn.
### Other diagrams
Edwards' Venn diagrams are topologically equivalent to diagrams devised by Branko Grünbaum, which were based around intersecting polygons with increasing numbers of sides. They are also 2-dimensional representations of hypercubes.
Henry John Stephen Smith devised similar n-set diagrams using sine curves[11] with the series of equations
$y_i = \frac {\sin(2^{i }x)}{2 i} \text{ where } 0 \leq i \leq n-2 \text{ and } i \in \mathbb{N}.$
Charles Lutwidge Dodgson devised a five-set diagram.
## Related concepts
Venn diagram as a truth table
Venn diagrams correspond to truth tables for the propositions $x\in A$, $x\in B$, etc., in the sense that each region of Venn diagram corresponds to one row of the truth table.[12][13] Another way of representing sets is with R-Diagrams.
## Notes
1. ^ In Euler's Letters to a German Princess. In Venn's article, however, he suggests that the diagrammatic idea predates Euler, and is attributable to C. Weise or J. C. Lange.
## References
1. ^ a b Sandifer, Ed (2003). "How Euler Did It" (pdf). The Mathematical Association of America: MAA Online. Retrieved 26 October 2009.
2. ^ a b Ruskey, F.; Weston, M. (June 2005). "Venn Diagram Survey". The electronic journal of combinatorics.
3. ^ Venn, J. (July 1880). "On the Diagrammatic and Mechanical Representation of Propositions and Reasonings". Philosophical Magazine and Journal of Science. 5 10 (59).
4. ^ Baron, M.E. (May 1969). "A Note on The Historical Development of Logic Diagrams". The Mathematical Gazette 53 (384): 113–125. doi:10.2307/3614533. JSTOR 3614533.
5. ^ Henderson, D.W. (April 1963). "Venn diagrams for more than four classes". American Mathematical Monthly 70 (4): 424–6. doi:10.2307/2311865. JSTOR 2311865.
6. ^ Ruskey, Frank; Savage, Carla D.; Wagon, Stan (December 2006). "The Search for Simple Symmetric Venn Diagrams" (PDF). Notices of the AMS 53 (11): 1304–11.
7. ^ Strategies for Reading Comprehension Venn Diagrams
8. ^ a b Lewis, Clarence Irving (1918). A Survey of Symbolic Logic. Berkeley: University of California Press.
9. ^ "Euler Diagrams 2004: Brighton, UK: September 22–23". Reasoning with Diagrams project, University of Kent. 2004. Retrieved 13 August 2008.
10. ^ Jo Venn (1881). Symbolic logic. Macmillan. p. 108. Retrieved 9 April 2013.
11. ^ Edwards, A. W. F. (2004), Cogwheels of the Mind: The Story of Venn Diagrams, JHU Press, p. 65, ISBN 9780801874345.
12. ^ Grimaldi, Ralph P. (2004). Discrete and combinatorial mathematics. Boston: Addison-Wesley. p. 143. ISBN 0-201-72634-3.
13. ^ Johnson, D. L. (2001). "3.3 Laws". Elements of logic via numbers and sets. Springer Undergraduate Mathematics Series. Berlin: Springer-Verlag. p. 62. ISBN 3-540-76123-3.
|
2014-04-24 00:27:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 8, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8191367983818054, "perplexity": 1314.754543366821}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://www.semanticscholar.org/paper/Symmetry-in-optics-and-photonics%3A-a-group-theory-Rodr'iguez-Lara-El-Ganainy/7050cce7010d3b4467a83234f97f8c1917f0f147
|
# Symmetry in optics and photonics: a group theory approach
@article{RodriguezLara2017SymmetryIO,
title={Symmetry in optics and photonics: a group theory approach},
author={B. M. Rodr'iguez-Lara and Ramy El-Ganainy and Julio Guerrero},
journal={arXiv: Optics},
year={2017}
}
• Published 23 December 2017
• Physics
• arXiv: Optics
21 Citations
## Figures from this paper
Parity-time Symmetry in Non-Hermitian Complex Media
• Physics
• 2018
The explorations of the quantum-inspired symmetries in optical and photonic settings have witnessed immense research interests both in the realms of fundamental physics as well as novel technological
Parity‐Time Symmetry in Non‐Hermitian Complex Optical Media
• Physics
• 2019
It can be anticipated that this trendy field of interest will be indispensable in providing new perspectives in maneuvering the flow of light in the diverse physical platforms in optics, photonics, condensed matter, optoelectronics, and beyond, and will offer distinctive application prospects in novel functional materials.
Observation of parity-time symmetry in microwave photonics
• Physics
Light, science & applications
• 2018
The experimental use of PT symmetry in an optoelectronic oscillator (OEO), a key microwave photonics system that can generate single-frequency sinusoidal signals with high spectral purity, suggests that PT symmetry may find rich applications in microwavePhotonics.
Optical non-Hermitian para-Fermi oscillators
• Physics
Physical Review A
• 2020
We present a proposal for the optical simulation of para-Fermi oscillators in arrays of coupled waveguides. We use a representation that arises as a deformation of the su(2) algebra. This provides us
Symmetries, Conserved Properties, Tensor Representations, and Irreducible Forms in Molecular Quantum Electrodynamics
In the wide realm of applications of quantum electrodynamics, a non-covariant formulation of theory is particularly well suited to describing the interactions of light with molecular matter, and a variety of symmetry principles are drawn out with reference to applications.
• Physics
• 2022
Space-time light structuring has emerged as a very powerful tool for controlling the propagation dynamics of pulsed beam. The ability to manipulate and generate space-time distributions of light has
Symmetric supermodes in cyclic multicore fibers
• Physics
OSA Continuum
• 2019
Nearest-neighbor coupled-mode theory is a powerful framework to describe electromagnetic-wave propagation in multicore fibers, but it lacks precision as the separation between cores decreases. We use
Rigorous Quantum Formulation of Parity-Time Symmetric Coupled Resonators
• Physics
ArXiv
• 2020
The quantum formulation of the coupled coil resonators can provide better guideline to design a better PT-symmetric system.
## References
SHOWING 1-10 OF 117 REFERENCES
Revisiting the Optical PT-Symmetric Dimer
• Physics
Symmetry
• 2016
This work focuses on the optical PT -symmetric dimer, a two-waveguide coupler where the materials show symmetric effective gain and loss, and provides a review of the linear and nonlinear optical realizations from a symmetry-based point of view.
Observation of parity–time symmetry in optics
• Physics
• 2010
One of the fundamental axioms of quantum mechanics is associated with the Hermiticity of physical observables 1 . In the case of the Hamiltonian operator, this requirement not only implies real
Parity–time synthetic photonic lattices
• Physics
Nature
• 2012
The experimental observation of light transport in large-scale temporal lattices that are parity–time symmetric is reported and it is demonstrated that periodic structures respecting this symmetry can act as unidirectional invisible media when operated near their exceptional points.
Supersymmetric laser arrays
• Physics
Science
• 2019
The results not only pave the way toward devising new schemes for scaling up radiance in integrated lasers, but also could shed light on the intriguing synergy between non-Hermiticity and supersymmetry.
Light transport in PT-invariant photonic structures with hidden symmetries
• Physics
• 2014
We introduce a recursive bosonic quantization technique for generating classical parity-time ($\mathcal{PT}$) photonic structures that possess hidden symmetries and higher-order exceptional points.
Observation of supersymmetric scattering in photonic lattices.
• Physics
Optics letters
• 2014
This work experimentally investigates the scattering characteristics of SUSY photonic lattices and demonstrates that discrete settings constitute a promising testbed for studying the different facets of optical supersymmetry.
Jacobi photonic lattices and their SUSY partners.
• Physics
Optics express
• 2014
This work presents a classical analog of quantum optical deformed oscillators in arrays of waveguides and shows that it is possible to attack the problem via factorization by exploiting the corresponding quantum optical model, allowing an unbroken supersymmetric partner of the proposed Jacobi lattices.
SUSY-inspired one-dimensional transformation optics
• Physics
• 2014
A new class of one-dimensional optical transformations that exploits the mathematical framework of supersymmetry (SUSY) is introduced that can be utilized to synthesize photonic configurations with identical reflection and transmission characteristics, down to the phase, for all incident angles, thus rendering them perfectly indistinguishable to an external observer.
Quantum optics as a tool for photonic lattice design
• Physics
• 2015
We present the theoretical basis needed to work in the field of photonic lattices. We start by studying the field modes inside and outside a single waveguide. Then we use perturbation theory to deal
Exceptional points and lasing self-termination in photonic molecules
• Physics
• 2014
We investigate the rich physics of photonic molecule lasers using a non-Hermitian dimer model. We show that several interesting features, predicted recently using a rigorous steady-state ab initio
|
2022-05-18 19:47:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2899750769138336, "perplexity": 4635.170758535048}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522309.14/warc/CC-MAIN-20220518183254-20220518213254-00413.warc.gz"}
|
https://math.stackexchange.com/questions/1945310/topology-of-the-space-of-conformal-classes-of-metrics
|
# Topology of the space of conformal classes of metrics
Definitions: Consider on a fixed smooth manifold $M$ the space $\text{Met}(M)$ of Riemannian metrics on $M.$ This lives inside an infinite dimensional topological vector space (in fact, in is a Frechet space).
Two metrics $h,g \in \text{Met}(M)$ are said to be conformally equivalent if there exists a nonvanishing (a fortiori positive) smooth function $f$ such that $g(-,-)=fh(-,-).$ This defines an equivalence relation on $Met(M).$ We define the quotient space $\text{Conf}(M):= \text{Met}(M)/\{\text{conformal equivalence}\}$ and endow it with the quotient topology.
My question: It's clear that $\text{Met}(M)$ is contractible since any two metrics can be joined by a straight-line homotopy. Is it true that $\text{Conf}(M)$ is also contractible?
Note that we have the fiber sequence $\{\text{positive functions on M}\} \to \text{Met}(M) \to \text{Conf}(M).$ Since the first two spaces are contractible by straight-line homotopies, it follows from the long exact sequence on homotopy groups that $\text{Conf}(M)$ has vanishing homotopy groups. If this space had the homotopy type of a CW complex, it would be contractible by Whitehead's theorem. However I don't see why it would have the homotopy type of a CW complex...
• It suffices to show that this is a metrizable manifold by a theorem of Palais. But I think this is straightforward, as it's locally modeled on a Frechet space modded out by a closed subspace.
– user98602
Oct 3 '16 at 3:28
• You should state which topology you are using on $Met(M)$ for the question to make sense. Also, $Met(M)$ is not a vector space; the natural structure is the one of an open convex subset of a metric space. Oct 5 '16 at 22:55
Since you did not specify the topology on the space of conformal classes, I will make up my own. Namely, the set of conformal structures on $M^n$ can be identified with the set of reductions of the frame bundle to the bundle whose structure group is the conformal group $CO(n)\cong R_+\times O(n)$. In other words, this is the set of sections of the bundle $E$ over $M$ whose fibers are copies of $F=GL(n,R)/CO(n)$, which is a contractible manifold. I will therefore equip $Conf(M)$ with the $C^\infty$-compact-open topology on the space of sections of $E\to M$. Now my answer to this question proves that $Conf(M)$ is contractible.
• Thanks for your answer. Do you mind explaining why $F$ is contractible? Oct 10 '16 at 14:24
• @MichaelAlbanese: There are two ways to see this. One is to observe that this space is diffeomorphic to $SL(n,R)/SO(n)$, which is the symmetric space for the group $SL(n,R)$ and, hence, has nonpositive curvature and hence is contractible by Cartan-Hadamard theorem. The second is to not that $GL(n,R)/O(n)$ is the convex cone of positive definite bilinear forms and hence, contractible. The multiplicative group $R_+$ acts on this space by scaling making it a principal fiber bundle which has to be trivial since it admits a section, coming from symmetric matrices with unit determinant. Oct 10 '16 at 15:07
|
2021-09-28 13:33:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9289050698280334, "perplexity": 62.809080446841634}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060803.2/warc/CC-MAIN-20210928122846-20210928152846-00588.warc.gz"}
|
https://papers.nips.cc/paper/2016/hash/6c9882bbac1c7093bd25041881277658-Abstract.html
|
#### Authors
Han Zhao, Pascal Poupart, Geoffrey J. Gordon
#### Abstract
<p>We present a unified approach for learning the parameters of Sum-Product networks (SPNs). We prove that any complete and decomposable SPN is equivalent to a mixture of trees where each tree corresponds to a product of univariate distributions. Based on the mixture model perspective, we characterize the objective function when learning SPNs based on the maximum likelihood estimation (MLE) principle and show that the optimization problem can be formulated as a signomial program. We construct two parameter learning algorithms for SPNs by using sequential monomial approximations (SMA) and the concave-convex procedure (CCCP), respectively. The two proposed methods naturally admit multiplicative updates, hence effectively avoiding the projection operation. With the help of the unified framework, we also show that, in the case of SPNs, CCCP leads to the same algorithm as Expectation Maximization (EM) despite the fact that they are different in general.</p>
|
2021-02-27 04:45:11
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8133701086044312, "perplexity": 676.7011771634075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358064.34/warc/CC-MAIN-20210227024823-20210227054823-00245.warc.gz"}
|
http://reducekeystrokes.com/2018/06/11/how-to-get-eclipse-oxygen-working-with-java-7/
|
To use Eclipse (4.7 Oxygen, 28 June 2017) you will need to have Java 8 installed, JAVA_HOME pointed to its main folder (e.g. C:\Program Files\Java\jdk1.8.0_45) and the PATH environment variable pointed to its bin sub folder (e.g. C:\Program Files\Java\jdk1.8.0_45\bin).
Your path may also contain “C:\ProgramData\Oracle\Java\javapath”, you may need to remove this \ delete the java executables in that \ add the bin folder before this in the path.
Once Eclipse is opened and your workspace initialised take the following steps to switch back to Java 7. If you do not do them all you may see an error like “Version 1.7.0_80 of the JVM is not suitable for this product. Version: 1.8 or greater is required.”
1. Change JAVA_HOME back to the Java 7 folder (e.g. C:\Program Files\Java\jdk1.7.0_80)
2. Change the Java path bin to Java 7 (e.g. C:\Program Files\Java\jdk1.7.0_80\bin)
3. In Eclipse go to Window -> Preferences -> Java -> installed JREs and select the Java 7 jdk folder
4. In Eclipse go to Window -> Preferences -> Java -> Compiler & select Java 7
5. With Eclipse closed open the eclipse.ini & add the following line for your appropriate Java 8 directory. Without this final step I still got the incompatible message:
• -vm C:\Program Files\Java\jdk1.8.0_45\bin
|
2019-12-05 14:39:00
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8881849646568298, "perplexity": 7964.621854575937}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540481076.11/warc/CC-MAIN-20191205141605-20191205165605-00346.warc.gz"}
|
https://math.stackexchange.com/questions/2477962/strange-attractors-what-is-the-difference-between-a-map-and-differential-equati
|
# Strange attractors: what is the difference between a map and differential equation system?
As far as I have been able to understand, there are two main ways of generating (or finding) a strange attractor:
1. Using a map. E.g. the Hénon map (for a given $a,b$):
$$x_{n+1} = 1 - a x_n^2 + y_n \ \ \ , \ \ \ \ y_{n+1} = b x_n$$
1. Using differential equations. E.g. the Lorentz system (for a given $\delta,\rho,\beta$):
$$\dot x = \delta (y-x) \ \ \ , \ \ \ \ \dot y = x( \rho -z)-y \ \ \ , \ \ \ \ \dot z = x y - \beta z$$
What I do not understand very well is what differences are between a map and differential equations in this respect.
This is what I guess:
1. A map is always a discrete-time dynamical system, so no differential equations are required to generate the strange attractor.
2. In the other hand, a differential equation system is per se a continuous-time dynamical system (due to the fact that it is based indeed on differential equations).
Are the above assumptions correct, or are the differences between a map and a differential-equations-based dynamical system more than that? Can a differential equations system be converted into a map (probably adding some restrictions), or likewise, a map into a differential equations system and be able to reproduce the same strange attractor (or a restricted version of the same)?
• As a side note. There is a notion of Poincaré map which in some cases reduces the studying of dynamics of the ODE to the dynamics of this map. In some cases people say that every diffeomorphism (a differentiable bijective map whose inverse is also differentiable) can be realized as a Poincaré map of some system of ODEs (see suspension flow). Oct 18 '17 at 7:36
• @Evgeny true! As far as I understand (as per the Wikipedia) is a kind of cut of a more than two dimensions dynamical system into a two dimensional plane. Thank you for the reference to suspension flow. Oct 18 '17 at 8:38
• @Evgeny by the way, your question regarding dynamical systems is very useful: math.stackexchange.com/questions/510291/… Oct 18 '17 at 8:41
• Oh, thank you :) I still don't know any way to get the list of such books and articles on these topics. I'm glad that this list is useful for someone else besides me. Oct 18 '17 at 9:31
are the differences between a map and a differential-equations-based dynamical system more than that?
Well, first of all, there is the practical difference that maps are usually easier to analyse while differential equations are closer to reality.
Of course, some theoretical statements need to be translated. For example you need three dimensions in a differential equation to obtain chaos, while one dimension suffices for maps.
Besides there are some phenomena (like weak ergodicity breaking) that have only been observed in carefully constructed maps, as far as I know.
Can a differential equations system be converted into a map […]?
Sure, whenever we solve a differential equation numerically, we are essentially turning it into a (complicated) map. IIRC, some prominent chaotic maps have been obtained this way, though I cannot name one.
A more sophisticated approach is making use of Poincaré sections, i.e., you consider intersections of the trajectory with some plane or manifold in phase space and the mapping of one intersection to the next. In fact, this is how the Hénon map was obtained from the Lorenz system.
Can […] a map [be converted] into a differential equations system […]?
I can see how for many maps, you could carefully construct differential equations that have the map as a Poincaré section, but what would you gain?
be able to reproduce the same strange attractor (or a restricted version of the same)?
Either way, you would gain or lose trivial properties, e.g., the connectedness of the attractor. Note for example that if you connect temporally adjacent points of the Hénon attractor with lines, they would be all over the place, while this does not apply to the Lorenz attractor.
|
2022-01-16 11:53:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8723835945129395, "perplexity": 243.62904040665575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320299852.23/warc/CC-MAIN-20220116093137-20220116123137-00357.warc.gz"}
|
https://groupprops.subwiki.org/wiki/Special:MobileDiff/18938
|
# Changes
The maximal abelian quotient of any group is termed its [[abelianization]], and this is the quotient by the [[commutator subgroup]]. A subgroup is an [[abelian-quotient subgroup]] (i.e., normal with abelian quotient group ) if and only if the subgroup contains the commutator subgroup. ==Formalisms== {{obtainedbyapplyingthe|diagonal-in-square operator|normal subgroup}} A group $G$ is an abelian group if and only if, in the [[external direct product]] $G \times G$, the diagonal subgroup $\{ (g,g) \mid g \in G \}$ is a [[normal subgroup]].
|
2020-04-08 19:55:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7617506384849548, "perplexity": 733.881748356304}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371821680.80/warc/CC-MAIN-20200408170717-20200408201217-00537.warc.gz"}
|
https://zenodo.org/record/5508750/export/csl
|
Journal article Open Access
# Pattern Recognition using Support Vector Machines as a Solution for Non-Technical Losses in Electricity Distribution Industry
Azubuike N. Aniedu; Hyacinth C. Inyiama; Augustine C. O. Azubogu; Sandra C. Nwokoye
### Citation Style Language JSON Export
{
"DOI": "10.35940/ijisme.B1280.037221",
"container_title": "International Journal of Innovative Science and Modern Engineering (IJISME)",
"language": "eng",
"title": "Pattern Recognition using Support Vector Machines as a Solution for Non-Technical Losses in Electricity Distribution Industry",
"issued": {
"date-parts": [
[
2021,
3,
30
]
]
},
"abstract": "<p>Contending with Non-Technical Losses (NTL) is a major problem for electricity utility companies. Hence providing a lasting solution to this menace motivates this and many more research work in the electricity sector in recent times. Non-technical losses are classed under losses incurred by the electricity utility companies in terms of energy used but not billed due to activities of users or malfunction of metering equipment. This paper therefore is aimed at proffering a solution to this problem by first detecting such loopholes via the analysis of consumers’ consumption pattern leveraging Machine learning (ML) techniques. Support vector machine classifier was chosen and used for classifying the customers’ energy consumption data, training the system and also for performing predictive analysis for the given dataset after a careful survey of a number of machine learning classifiers. A classification accuracy (and subsequently, class prediction) of 79.46% % was achieved using this technique. It has been shown, through this research work, that fraud detection in Electricity monitoring, and hence a solution to non-technical losses can be achieved using the right combinations of Machine Learning techniques in conjunction with AMI technology.</p>",
"author": [
{
"family": "Azubuike N. Aniedu"
},
{
"family": "Hyacinth C. Inyiama"
},
{
"family": "Augustine C. O. Azubogu"
},
{
"family": "Sandra C. Nwokoye"
}
],
"page": "1-8",
"volume": "7",
"type": "article-journal",
"issue": "2",
"id": "5508750"
}
9
7
views
|
2022-01-27 15:57:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5911770462989807, "perplexity": 5005.912286936557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305266.34/warc/CC-MAIN-20220127133107-20220127163107-00242.warc.gz"}
|
https://chemistry.tutorvista.com/organic-chemistry/no2-molecular-geometry.html
|
To get the best deal on Tutoring, call 1-855-666-7440 (Toll Free)
Top
# NO$_2$ Molecular Geometry
In chemistry we make use of valence shell electronic pair repulsion theory in order to give or predict the shapes of every possible molecules we have around us. The idea of VSEPR theory is very much necessary in order to describe the bonding of any molecule as well as detailing of chemical properties and possible reactions of the molecules.
The entire molecular reactivity series as well the set of properties depend upon how the overall molecular structure shows with and the structure or the geometry of molecules. Nitrogen dioxide is no exception and the molecular geometry is very important to understand the nature, and chemical characteristics of the molecule.
Related Calculators Calculate Geometry How to Calculate Molecular Mass Molecular Formula Calculator
## NO$_2$ Molecular Geometry Drawing
Back to Top
The valence shell electronic pair repulsion involves only covalent bonding in molecules which shares electron pairs between the electrons of the participating atoms. When the electrons are assigned to the participating atoms bonding as well as non-bonding electron pairs we can give us an insight into the possible spatial bonds arrangement by just simple arrangement and spacing of electrons as far as possible because the proximity of electrons causes bending.
The Nitrogen dioxide molecule drawing needs to be understood from the Lewis structure first. Like every molecules and ions where more than one satisfactory lewis structure can be drawn but the octet rule has to be satisfied. Neither of the structures can satisfy nor can be considered as correct as the alternate presence of double and single bond between Nitrogen and Oxygen are distinguishable.
The reason the structures are never considered as stable or satisfactory because the double bond has more density of electrons as well as the length of the bond is shorter than single bonds. Due to these anomalies the structure of nitrogen di oxide molecule has no single stable structure but more than one resonance structures which satisfy the partial stability these individual structures show.
This structure implies that NO2 is a symmetrical having partial double bond character in each of the nitrogen oxygen bonds. The tracking of electrons in such a structure requires specific special notation and we need to write more than one lewis structure which are resonating to each other and connect them with a symbol implying that superimposing these structures and finally arrive at a reasonable representation of the molecule.
## NO$_2^{-}$ Lewis Structure Molecular Geometry
Back to Top
The molecular shape or basic geometry is determined by the electron group geometry and the overall electron cloud along with the ligand number. NO2- is a symmetrical ion having partial double bond character in each of the nitrogen oxygen bonds.
The six electrons in valence shell of oxygen, along with five valence shell electrons of Nitrogen are the ones which are responsible for the structure that we get to see at the end. The defect of oxygen atoms sharing two pairs of electrons not taking into account the geometry that is involved. There is a presence of double bond in between two oxygen atoms when oxygen molecule is formed. A triple bond must be assumed in case of N2 to give each nitrogen atom a noble gas configuration. When these atoms combine together to form the nitrite ion $NO_{2}^{-}$ the alternate presence of double bond between nitrogen and oxygen on one side and single bond between nitrogen and oxygen on the arm.
As these bonds in between nitrogen and oxygen is not that stable we create two resonating structures instead of one structure and hence both of these nitrite ion structures suffix each other as well as compensate the absence of stability in one structure.
• In order to understand these better we refer VSEPR as the lewis structures help us find the steric number of the given molecule’s central atom or atoms.
• The steric number provides the hybridization as well as electron group geometry. The number of ligands basically is the number of atoms which are bonded to central atom.
• Once the electron pair repulsion is visualised or described the theory helps us to draw geometric representation of each which we designate as orbital overlap pattern.
• The sketch helps in displaying nitrogen atom represented with a letter or text and are surrounded by cloud of electrons including the lone pair as well as bonded pairs.
• The bonded pairs of electrons can either be sigma (σ) or pi (π) bonds.
• Each of these help in getting the right orbital overlap sketch which in turn help in getting the exact bend and overall angle of the molecule, in this case nitrite $NO_{2}^{-}$ ion.
## NO$_2^{-}$ Molecular Geometry Bond Angle
Back to Top
In nitrite ion, N atom is $sp_{2}$ hybridised with the presence of one lone pair of electron and hence the bond angle is 115o and 120 degree. The bond angles of $NO_{2}$, $NO_{2}^{+}$ and $NO_{2}^{-}$ are in the order $NO_{2}^{+} > NO2 > NO_{2}^{-}$.
This is mainly because $NO_{2}^{+}$ has no unshared electron and it is linear. $NO_{2}$ has one unshared electron while $NO_{2}^{-}$ has one unshared electron pair.
The consideration of NO2- and many other molecules and ions shows that simple layout for overall number of electrons and assisting them to the atoms valence shells with either bonded or unshared pairs is not entirely satisfactory.
Fortunately the simple model may be altered easily to fit in many of the awkward cases. The problem with $NO_{2}^{-}$ is that the ion is actually more symmetrical than either one of the lewis electronic structures that was observed before. The two structures when superimposed show a new structure having same symmetry as the molecule. These structures also imply that NO2- is a hybrid of two resonance structures and when two or more resonance structures are drawn for a molecule or ion the electronic formula for the species is considered to be a resonance hybrid structures.
## NO$_2^{-}$ Ion Molecular Geometry
Back to Top
To understand better the nitrite ion molecular geometry we need to understand the NO2 structure better. The trigonal planar structure of $NO_{2}^{-}$ has lot to do with the main $NO_{2}$ structure geometry. $NO_{2}$ structure is usually considered as unusual and although its bond length of 119.7 pm lies exactly in between $NO_{2}^{-}$ of 123.6 pm and NO2+ of 115 pm, its bond angle lies outside the normal ranges of bond angles.
$NO_{2}$ has 17 valence shell electrons and these are split into 8 of one spin, and 9 of the other. The 9 member spin set has 3 electrons in bonding regions which are same as the number of bonding electron pairs in 9 electron pair. The unshared electron pairs on the central N atom explains the overall the molecular geometry of all three forms of $NO_{2}$.
In $NO_{2}^{+}$, the N- atom has no unshared electrons and so the molecule is linear having bond angle of 180 degree. In NO2, the N atom has one unshared electron causing less repulsion than in $NO_{2}^{-}$ in which the N atom has unshared electron pair causing more repulsion. This results the bond pairs in $NO_{2}^{-}$ being more close than $NO_{2}$.
The $NO_{2}’s$ 134 degree bond angle is not optimal for either its 8 or 9 member spin set. Both spin sets of $NO_{2}$ are strained and this strain energy for the 9 member spin set on its departure from its optimal bent geometry rises faster than the strain energy of 8 member spin on its departure from its optimal linear geometry and not only due to the 9 member spin set having greater number of electrons.
The total bond order is found to be $\left(\frac{1}{2}\right )$ $\frac{( 3 + 4)}{2}$ = 1.75 which matches the bond order inferred from bond lengths, which is average of 1.5 of $NO_{2}^{-}$ and 2 of $NO_{2}^{+}$.
Related Topics Chemistry Help Chemistry Tutor
*AP and SAT are registered trademarks of the College Board.
|
2019-05-22 01:02:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5099155902862549, "perplexity": 1096.0179081037218}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256600.32/warc/CC-MAIN-20190522002845-20190522024845-00262.warc.gz"}
|
https://www.nature.com/articles/s41467-020-20354-2?error=cookies_not_supported&code=ab42f4d8-d560-4190-9116-93743d2359b6
|
## Introduction
Sunset Crater, a small, monogenetic scoria cone composed of alkali basalt in northern Arizona, was the source of multiple sub-Plinian basaltic events (plumes >20 km high1) ~1000 years ago (1085 AD2). It has recently been documented as the most explosive scoria cone eruption identified on Earth to date1, but the driving mechanism of such highly explosive basaltic eruptions is unclear3. Some recent studies4,5,6 have focused on rapid microlite crystallization and corresponding rheology changes to explain explosive behavior in basaltic magmas. In viscous, silicic systems, large explosive eruptions can be triggered when the magmatic system becomes overpressurized by processes such as crystallization-induced volatile exsolution7, new magma injection8, or external changes in the pressure regime9. These processes, however, may be less applicable to basaltic magmas of much lower viscosity. Nevertheless, explosive volcanic eruptions can inject ash and volatiles into the stratosphere with the potential to impact global climate10. Aside from the notable exceptions of basaltic eruptions of significant volume such as Laki (Iceland)11 or large igneous provinces12, the impact of basaltic volcanism on the global atmospheric system is largely unknown.
Magmatic volatile content is an integral part of the interpretation of an eruption but is difficult to measure because volatiles exsolve and escape as magma ascends and depressurizes. The best-preserved pre-eruptive concentration of magmatic volatiles is found within melt inclusions (MIs) that are trapped inside of growing crystals at depth within the magma plumbing system. Because MIs are isolated from the surrounding magma by their crystal host, they theoretically serve as a record of pre-eruptive magma composition and volatile content at the time and location of their entrapment13. MIs, however, are susceptible to modification from post-entrapment crystallization and shrinkage14 during ascent, eruption, and quench at the surface13, which often results in significant CO2 loss to a bubble within the MI (Fig. 1a)15,16,17,18,19,20,21. MI bubbles may not develop solely post-entrapment, however, as they can also originate as a co-entrapped exsolved phase22,23.
One approach to determine MI bubble contents is in situ measurement by Raman spectroscopy17,18,20,24. A number of recent studies using Raman and other methods have found up to 90% of the total MI CO2 sequestered in the bubble, demonstrating the importance of MI bubbles in calculating magmatic volatile budgets17,18,19,20,21. Notably, these previous studies of MI bubbles examined samples with relatively low CO2 content in the MI glass, generally ~100–200 ppm but up to 1500 ppm in rare samples17,18,19,20.
Here we present an investigation of the total volatile budget of the basaltic sub-Plinian eruption of Sunset Crater by MI analysis, including Raman spectroscopic measurements of MI bubbles. We model the size of MI bubbles that can develop post-entrapment and demonstrate that an exsolved CO2 phase existed in the magma at ~15 km depth. We compare magmatic volatiles at Sunset Crater to those in explosive caldera-forming silicic eruptions such as the Bishop Tuff to highlight differences in their abundance and composition. This comparison suggests that the exsolved CO2 phase is a critical pre-eruptive condition that drives highly explosive basaltic eruptions. Furthermore, we constrain the total stratospheric injection of multiple volatile species by the Sunset Crater eruption and propose that basaltic eruptions, including small scoria cones, may be an overlooked source of atmospheric aerosol loading.
## Results and discussion
### MI and bubble compositions
Analysis of MIs reveals pre-eruptive properties of the Sunset Crater magma. MIs are hosted in minimally normally zoned Fo ~82–85 olivine phenocrysts sampled from tephra from the first of several sub-Plinian phases of the eruption (phase 3). The MIs are largely homogeneous in major element composition corrected for 4–11% post-entrapment crystallization (see Supplementary Material). Bubbles are ubiquitous in MIs from all phases of the Sunset Crater eruption, with most MIs containing a single bubble that, in the samples analyzed here, ranges in size from 0.82 vol% to 3.26 vol% of the host MI (Fig. 1b, c). Throughout this section, we present data demonstrating that MIs should be classified into two groups based on bubble vol%. MIs with bubbles <2.5 vol% are hereafter referred to as “Group I” (black filled symbols in Figs. 24; example MI in Fig. 1b) and those with bubbles >2.5 vol% as “Group II” (open symbols in Figs. 2 and 3; cyan symbols in Fig. 4a, b; example MI in Fig. 1c). Total CO2 contents, accounting for both the MI bubble (from Raman spectroscopy) and MI glass (from Fourier transform infrared spectroscopy (FTIR)), encompass a wide range from 2664 to 5591 ppm, with an average value of 4268 ppm (Fig. 2a and Supplementary Material). Group II MIs (bubbles >2.5 vol%) have the highest total CO2 contents (>4000 ppm; open symbols in Fig. 2a). S and Cl contents measured for a different subset of phase 3 MIs are ~2000 and ~425 ppm, respectively, and show minimal variability across all samples (see Supplementary Material).
There are two different mechanisms that could produce a set of MIs with a wide range of volatile contents as observed in the Sunset Crater samples. One possible explanation is that the magma is volatile undersaturated, and so as crystallization proceeds, volatile elements that are incompatible in phenocryst phases will concentrate in the magma. The alternative explanation is that the MI volatiles record a degassing path as a volatile-saturated magma ascends and depressurizes. The data presented here show that the total CO2 content generally decreases with decreasing host Fo content in these samples (Fig. 2b). This relationship implies that the magma is volatile saturated when olivine is crystallizing because CO2 exsolves from the magma as crystallization proceeds and Fo content decreases. However, these olivine data also show two roughly parallel but distinct trends: Group II MIs (>2.5 vol% bubbles) show the same decrease in CO2 with lower Fo, but offset to higher total CO2 than Group I samples (Fig. 2b). So, while the Sunset Crater magma was likely volatile saturated and degassed as it ascended, this mechanism does not explain why there are two trends in Fig. 2b nor provide justification for the highest CO2 contents being restricted to MIs with larger (>2.5 vol%) bubbles.
Further observations also support the division of MIs into two distinct groups by bubble vol%. Bubbles in Group II samples are proportionally larger by vol% than those in Group I, and they also typically have larger diameters, with average bubble diameters of 26 μm for Group II vs. 20 μm for Group I (see Supplementary Material). Additionally, Group I MIs define a relatively linear trend between CO2 concentration in the bubble and bubble vol% (Fig. 2c), while some Group II MIs deviate from this relationship. The two groups are also distinguished by the percentage of the total MI CO2 that is contained in the bubble (Fig. 2d); most Group I MIs contain <40% of their total CO2 content in their bubbles, whereas all Group II MI bubbles contain >40% of the total MI CO2. These results suggest that the bubbles in MIs from each of the two groups may have formed by different processes.
### Bubble growth modeling
The two primary mechanisms of MI bubble formation include differential shrinkage of the MI and crystal host as well as crystallization at the MI–crystal interface (Fig. 1a). Shrinkage occurs because the host crystal is relatively incompressible compared to the MI and thus the MI shrinks more than the crystal during cooling14,25, resulting in pressure loss within the MI. Post-entrapment crystallization involves diffusion of elements from the MI into a denser crystal phase, decreasing the MI volume within its cavity and thus decreasing pressure in the MI16,26.
Bubbles in MIs form and grow in two stages: in the subsurface due to small degrees of pre-eruptive cooling (early stage) and during rapid cooling upon eruption into the atmosphere until quench (late stage). The cooling rate of the magma during early-stage bubble growth is typically slow enough such that both post-entrapment crystallization and shrinkage occur. Additionally, because CO2 solubility is very strongly pressure dependent27, the decrease in pressure associated with these post-entrapment modifications will cause CO2 to exsolve into the bubble that forms. On the other hand, in late-stage (syn-eruptive) growth, especially in explosive eruptions, cooling is extremely rapid and CO2 does not have time to diffuse from the MI into the bubble15,28. In fact, cooling during late-stage growth is rapid enough that post-entrapment crystallization is also kinetically inhibited20, but the bubble volume does continue to increase syn-eruption due to the shrinkage process.
The size of MI bubbles that can be generated due to post-entrapment modification processes during early- (pre-eruption) and late-stage (syn-eruption) cooling can be modeled from properties of the melt and host phenocryst at different temperatures20. Bubble formation during early-stage shrinkage is a function of the difference between the temperature of the magma when the MI is trapped and its temperature just prior to eruption (ΔT). The additional early-stage bubble volume generated during post-entrapment crystallization is determined by the amount of crystallization that occurs, which also depends on ΔT. Modeling of late-stage shrinkage requires an estimation of the glass transition temperature (Tg), which varies based on total H2O content and quench rate29.
The lines in Fig. 3a show the results of modeling the size of MI bubbles generated by both stages of post-entrapment cooling for the Sunset Crater magma. The value ΔT on the x-axis represents cooling prior to eruption and accounts for early-stage differential shrinkage and post-entrapment crystallization. The value of Tg represents the post-eruptive quench temperature, while the lines represent the final bubble vol% resulting from combined early- and late-stage bubble growth for different values of ΔT and Tg. The results are strongly dependent on the value of Tg, which has been investigated experimentally in basaltic melts at cooling rates between 5 and 20 °C/min29. However, for a highly explosive eruption such as at Sunset Crater, the cooling of small clasts, and especially the free crystals analyzed here, is likely to occur much faster than 20 °C/min. Based on Tg data29 and the shape of the relaxation curve of a silicate liquid30, we estimate Tg for these MIs to be 675 °C (Fig. 3a, solid line). In order to illustrate the sensitivity of these estimates to Tg, we also plot modeled bubble volumes for values of Tg at +/−100 °C from our preferred value (Fig. 3a, dashed lines). Based on these calculations, Group II MIs are too large to have formed from post-entrapment cooling alone.
However, an alternate explanation for the different bubble trends in Group I and Group II MIs is that they experienced different cooling histories either pre- or syn-eruption. We reject this hypothesis on the basis of our interpretation of eruption dynamics and associated cooling during subsurface ascent and syn-eruptive quench. First, the data suggest that the pre-eruptive (subsurface) cooling was not significantly different between the two groups of MIs. The MI compositions (see Supplementary Material) indicate that they originated from a batch of homogeneous magma at depth that ascended rapidly without any pause at shallower depths prior to eruption. Additionally, there is no correlation between the amount of post-entrapment crystallization experienced by the MIs and the bubble size (Fig. 3b) nor is there any difference in the amount of post-entrapment crystallization between the two groups of MIs. Second, syn-eruptive quench would have been rapid in a sub-Plinian eruption (>20 km high plume). However, if the rates of syn-eruptive quench differed among crystals, the bubble CO2 densities would also show differences, because during quench the bubble grows without CO2 diffusion into the bubble. In other words, MIs experiencing slower syn-eruptive cooling should have larger bubbles (i.e., Group II MIs) with lower bubble CO2 densities, but this is not recorded in the bubble density data (Fig. 3c).
One additional factor that can affect the size of MI bubbles is H+ diffusion out of the MI during pre-eruptive (subsurface) cooling. This process results in a lower partial molar volume of the MI, which can lead to contraction of the MI and formation of a bubble31. This process cannot be solely responsible for the differences in bubble sizes between Group I and Group II MIs given the similar H2O contents of all MIs (Fig. 2a) and that samples with nearly identical H2O contents have different bubble sizes. Two samples show relatively low H2O contents that could indicate some minor H+ diffusion, but all other Group II MIs share similar H2O contents to Group I MIs and should not have been affected by H+ diffusion.
Given the lack of evidence for different cooling histories or significant H+ diffusion, Group II MIs do have bubbles that are too large to have formed solely from post-entrapment cooling. Therefore, our favored explanation is that Group II MIs were trapped heterogeneously as two phases: liquid melt and exsolved CO2. The Group II MIs have greater total CO2 contents (Fig. 2a), as well as a greater percentage of the total MI CO2 content in their bubbles (Fig. 2d), supporting the presence of an initial CO2-rich exsolved phase.
We can estimate the original dissolved CO2 content at entrapment for MIs with a co-entrapped exsolved phase by approximating the proportion of dissolved MI CO2 that sequesters into Group II bubbles during post-entrapment processes. According to models31, co-entrapped bubbles suppress exsolution of CO2 from the melt because they counter some of the pressure loss due to shrinkage of the MI. The degree of suppression of CO2 exsolution not only depends very strongly on the initial pressure but also on the initial bubble volume fraction and magma composition31. Using this model, we estimate that two MIs of alkali basalt composition at 450 MPa, one with no initial bubble and one with a 1 vol% bubble, should lose nearly the same proportion of their original dissolved CO2 to a bubble after 10% post-entrapment crystallization (~30 vs. 26%). Thus the proportion of originally dissolved CO2 that exsolves into Group II bubbles should be similar to that of Group I bubbles (~35% on average) as the two groups experienced similar amounts of post-entrapment crystallization (Fig. 3b).
If we reconstruct the dissolved volatile contents at MI entrapment, assuming Group II MIs lost 35% of the originally dissolved CO2 to a bubble, the two groups of MIs overlap in dissolved volatile content at entrapment (Fig. 4a). Note that the two Group II samples with the lowest H2O contents could be affected by H+ loss via diffusion as discussed above. H+ loss would facilitate additional CO2 exsolution during pre-eruptive cooling, thus the CO2 contents at entrapment shown in Fig. 4a may be slightly underestimated for these two samples. The reconstructed CO2 at MI entrapment also clarifies the crystallization process, yielding one primary trend with Fo that suggests simple exsolution of CO2 from the magma as crystallization proceeds (Fig. 4b).
Based on these estimates of the original dissolved volatile content, Group II MIs were trapped with 487 to 2226 ppm of exsolved CO2 in a bubble. Assuming that these heterogeneously entrapped bubbles formed as a result of oversaturation in a magma at 450 MPa and 1200 °C where the density of CO2 is ~0.74 g/cm3, the bubbles would be ~9–18 μm in diameter, corresponding to 0.18–0.80 vol% of their host MIs. Modeling indicates that these very small (<1 vol%) initial bubble sizes would not significantly suppress further CO2 exsolution at these storage pressures31, which validates our conclusion that both groups of MIs likely do lose a similar proportion of originally dissolved CO2 to a bubble during early-stage (pre-eruptive) cooling.
### Implications for eruption style and scale
The MI results show that the Sunset Crater magma was volatile saturated and included a CO2-rich exsolved phase. Fluid-saturated isobars for Sunset Crater32 indicate MI entrapment pressures between 300 and 500 MPa, corresponding to depths between ~12 and 17 km (Fig. 4a). The MIs with a co-entrapped exsolved phase (Group II; cyan symbols in Fig. 4a) show entrapment pressures consistent with this entire magma storage region, suggesting a bubbly magma deep in the system. Figure 4c presents the interpretation of MI entrapment and bubble growth in MIs in the Sunset Crater magma. Group I and II MIs are both trapped at storage depths between 300 and 500 MPa, but Group II MIs are trapped with a pre-existing bubble. The MIs next undergo early (pre-eruptive) bubble growth with CO2 exsolution as a result of a slight decrease in temperature, causing both post-entrapment crystallization and shrinkage. Finally, the MIs are erupted and quenched, which yields late (syn-eruptive) bubble growth due to shrinkage only, and without further CO2 exsolution. The modifications shown in Fig. 4c are cumulative from the entrapment stage. In summary, the MIs undergo the same two stages of bubble growth, but the bubbles in Group II samples remain proportionally larger due to their initial volumes.
Even if the Group II samples could be explained without heterogeneous entrapment, we would still expect CO2 exsolution in this deep magma storage region on the basis of the total CO2 results (Fig. 2a). As explained previously, the olivine data (Fig. 2b) show that the magma was volatile saturated throughout the entire depth range over which the MIs were trapped because samples follow a trend of decreasing CO2 with decreasing Fo content (i.e., CO2 exsolves as crystallization proceeds). If the total CO2 in Group II MIs did represent the dissolved CO2 at MI entrapment, the MIs would be trapped over a very wide pressure range from almost 600 to 300 MPa (i.e., Fig. 2a). A volatile-saturated magma of the Sunset Crater composition ascending from 600 to 300 MPa would have to exsolve nearly 3000 ppm of CO2 into bubbles based on volatile solubility data32.
Experimental work confirms that bubbles with similar dimensions to our estimates for the heterogeneously entrapped bubbles (i.e., 10s of micrometers in diameter) can be generated in alkali basalt magma at these pressures33. Bubbles with these properties (0.74 g/cm3 and ~20 μm diameter) would be coupled with the magma: bubble rise velocities would be on the order of 1011 m/s based on calculations of the buoyancy force balanced against gravity and viscous forces, assuming Stokes drag and a range of reasonable magma viscosities, temperatures, and densities. This pre-eruptive system at Sunset Crater with a bubbly basaltic magma could be analogous to silicic magmas that produce highly explosive eruptions such as the Bishop Tuff rhyolite34,35.
While Sunset Crater and silicic caldera-forming systems, exemplified by the middle-erupted Bishop Tuff34,35, both may be characterized by eruption from bubbly magma storage zones, their storage depths and volatile compositions are notably different. Magma storage depths, calculated from MI saturation pressures, were 12–17 km at Sunset Crater but just 7 km for the Bishop Tuff magma34,35. The total dissolved H2O and CO2 content of the Sunset Crater magma was 4.6 mol% at the time of MI entrapment, of which 89 mol% was H2O and 11 mol% was CO2. In contrast, the Bishop Tuff magma contained significantly greater dissolved volatile content at MI entrapment (17 mol%), which was essentially 100 mol% H2O34,35. The composition of the exsolved phase in equilibrium with each magma, as calculated from fluid isopleths (ref. 36 for rhyolite and ref. 32 for basalt), is also distinct. The exsolved phase in the Sunset Crater magma had only trace H2O (3 mol%) and was predominantly CO2 (97 mol%). The Bishop Tuff, on the other hand, had an exsolved phase that was primarily H2O (92 mol% H2O) with a small amount of CO2 (8 mol%)34,35.
Exsolved H2O is expected to play a large role in explosive eruptions, but an exsolved CO2 phase at the greater depths of magma storage in basaltic eruptions may also produce the conditions necessary for large explosive eruptions. The significant overpressure from exsolved CO2 can fracture wall rock33,37, leading to rapid magma ascent and an explosive eruption. Given that storage zones of basaltic systems are expected to be deeper than those of silicic magmas based on neutral buoyancy considerations, and the greater solubility of H2O compared to CO2, high CO2 concentrations are required for an exsolved phase to exist at expected storage depths for basalt. This exsolved CO2 phase may be necessary to initiate a pathway to the surface via overpressure and fracturing, and to drive rapid ascent from the deeper part of the system to the shallow region, where H2O exsolution is expected to take over the driving role. We thus propose that the exsolved CO2 phase present at depth, as indicated by co-entrapped MI bubbles, was a necessary or critical condition that drove the sub-Plinian eruption of basalt at Sunset Crater, analogous to exsolution of H2O-rich fluids driving caldera-forming silicic eruptions.
Rapid ascent due to an exsolved CO2 phase at depth may be a common mechanism for mafic explosive eruptions. For example, Mt. Etna (Italy) has experienced some sub-Plinian and Plinian events and is similar in composition to Sunset Crater38,39,40. While some Etna MIs do contain significant dissolved CO241, the CO2 in Etna MI bubbles has not yet been quantified. Stromboli (Italy) is another mafic volcano that has produced explosive paroxysms, and exsolved CO2 at depths up to 10 km beneath the crater has been proposed as the trigger for these events based on measured crater plume emissions42. Additionally, fluid inclusions in phenocrysts from Piton de la Fournaise (Réunion Island) suggest that CO2 exsolution begins deep in the magma plumbing system of this volcano (500 MPa)43. Based on findings presented here, we would expect that the MI bubbles from the most explosive eruptions at these volcanoes might also contain significant CO2 and evidence of a co-entrapped exsolved CO2 phase, suggesting bubbly magma deep in the plumbing system. MI analysis following the procedures described here to assess MI bubble contents and formation would provide the data necessary to test this theory.
There are other processes, such as rapid shallow microlite crystallization and corresponding rheology changes, that have been proposed to explain explosive behavior in mafic magmas4,5. CO2 exsolution was proposed as the trigger for the ~456 ka Pozzolane Rosse explosive mafic eruption at Colli Albani (Italy), but the CO2 was not magmatic in origin, and the explosive nature was controlled by increased magma viscosity as a result of extensive decompression-induced leucite crystallization44. Extensive crystallization has also recently been proposed as the cause of the Masaya Triple Layer mafic Plinian eruption (2.1 ka; Nicaragua), which is relatively volatile poor6. While there is evidence in the Sunset Crater eruptive products for variable microlite crystallization in a portion of the groundmass, a significant fraction of the tephra in the sub-Plinian phases has a glassy and vesicular texture1, suggesting that rapid microlite crystallization cannot be the only important control. We further emphasize that rapid microlite crystallization is generally driven by exsolution of H2O from the magma in the shallow subsurface, and we therefore favor the idea that a deeper mechanism, such as exsolution of CO2, is required to explain eruption initiation and rapid magma ascent in basaltic systems whose magmas ascend from greater depths to feed explosive eruptions.
Sunset Crater erupted a significant volume of volatile-rich magma that reached the stratosphere during its most explosive phases. The eruption produced 0.52 km3 dense-rock equivalent (DRE) (2.8 g/cm3 dense rock density) of volcanic material, of which 0.22 km3 was erupted during the sub-Plinian phases1. As a result of the analysis presented here, along with these previously published volcanological characteristics, we estimate that the Sunset Crater eruption released ~0.6 Mt Cl, ~6 Mt SO2, and ~11 Mt CO2. SO2 released during the sub-Plinian phases of the eruption (~2.45 Mt) could reach the stratosphere and generate H2SO4 aerosols. Assuming that 75% of the aerosol mass consists of H2SO445, the mass of stratospheric aerosols released was ~5 Mt. The remainder of the SO2 was emitted during eruptive phases that reached the tropopause or upper troposphere, at which levels it may also have induced atmospheric forcing46. These SO2 values probably represent minima because they do not account for sulfur exsolved from un-erupted magma emitted during the eruption.
The atmospheric impact of explosive mafic eruptions, largely due to their high sulfur contents, may be comparable to their silicic counterparts. Previous interest in atmospheric effects of mafic eruptions has focused on large volume fissure eruptions, such as the Laki (Iceland) eruption of 1783–1784 (122 Mt SO2)11, which is a different eruptive mechanism entirely. Explosive silicic eruptions, although still much larger in terms of erupted volume, are better analogies to the dynamics of the Sunset Crater eruption. Two such historical eruptions, the 1991 eruption of dacite at Pinatubo (Philippines) and the 1815 eruption of trachyandesite at Tambora (Indonesia), resulted in profound atmospheric impacts. The Pinatubo eruption, which had significant impact on global climate for 3 years post-eruption, erupted 10 times the mass of magma (5 km3 DRE) as Sunset Crater (0.5 km3 DRE), but released just ~3 times the mass of SO2 (17 Mt)47. The Tambora eruption was responsible for the “year without a summer,” and while it erupted ~60 times the mass of magma (30 km3 DRE) as Sunset Crater, it released only ~9 times the mass of SO2 (~55 Mt)48. While there is no evidence that Sunset Crater produced atmospheric impact similar to one of these large silicic explosive eruption located near the equator, its volatile output was significant.
Mafic scoria cones are the most common volcanic landform on Earth49, but they are often overlooked because of their small stature. They are not well documented in the literature, partly because of poor preservation and burial by neighboring vents. But because of their high CO2 and SO2 contents, as well as their potential for sub-Plinian or larger eruptions, they can be important contributors of volcanic gases in the atmosphere. It is therefore possible that some unassigned events in the ice core record were derived from highly explosive mafic eruptions from scoria cone volcanoes.
## Methods
### Samples
Free olivine crystals, generally coated in glass, were picked from a fraction of Sunset Crater tephra between 0.5 to 2 mm in diameter in size, mounted in epoxy, and examined for high-quality MIs for this study. The MIs were required to be glassy, ~>50 μm in diameter (often larger, depending on the placement of the bubble), away from any cracks or irregularities in the crystal, and tens of micrometers from the crystal rim. Approximately 20% of olivine crystals from Sunset Crater tephra contained viable MIs. MIs from Sunset Crater are commonly faceted, ranging from ellipsoidal to negative crystal shapes. All MIs contain one bubble ranging in size from ~1 to 5 vol% in typical MIs. Rarely, multiple bubbles are present in a single MI, and in these cases, there is one primary bubble with the other bubble(s) being much smaller in size. We carefully selected primary MIs that did not exhibit evidence of extensive H2O loss or decrepitation. The only secondary modification physically apparent in the analyzed MIs was the bubble.
### CO2 calibration
To correct for instrument variability, any CO2 densimeter50 must be adjusted using CO2 standards analyzed using a specific Raman instrument51. In this study, pure CO2 gas was sealed in capillary tubes using a vacuum line to create a set of synthetic inclusions for use as standards52. These CO2 gas standards have densities between 0.008 to 0.133 g/cm3, which is consistent with the lower range of CO2 densities measured in MI bubbles.
### Raman analysis
MI bubbles were analyzed using Raman spectroscopic techniques. The Raman data were collected using a custom built Raman spectrometer in a 180° geometry at ASU in the LeRoy Eyring Center for Solid State Science (LE-CSSS). The sample was excited using a 150-mW Coherent Sapphire SF laser with a 532-nm laser wavelength. The laser power was controlled using a neutral density filter wheel and an initial laser power of 100 mW. The laser was focused onto the sample using a 50x super long working distance plan APO Mitutoyo objective with a numerical aperture of 0.42. The signal was discriminated from the laser excitation using a Kaiser laser band pass filter followed by an Ondax® SureBlock™ ultranarrow-band notch filter and a Semrock edge filter. The data were collected using a Shamrock 750 spectrometer from Andor® on an Andor iDUS back thinned Silicon CCD cooled to −95 °C, and a 1200 mm−1 grating was utilized to achieve optimal spectral resolution while preserving signal strength.
High-quality MIs were polished to <30 μm from the MI bubble and imaged with a petrographic microscope in preparation for Raman analysis. The MIs were specifically chosen to provide a representative sample of textural features observed in the Sunset Crater eruption (e.g., varying bubble volumes, MI sizes, MI shapes). In addition to these MIs and the four CO2 standards, cyclohexane, naphthalene, and 1,4 bis (2-methylstyryl) benzene were also analyzed as Raman shift axis calibration standards during each Raman session. For the MI bubbles, the laser power at the sample was set to 6 mW (in isolated cases where the signal was low, the power was increased to 12 mW), and five 30-s scans were accumulated.
Raman spectra were first calibrated along the Raman shift axis using known values of the peaks of the three calibration standards (corresponding to 17 peaks in the measured range). Next, peak fitting was applied to the Fermi diad peaks for the CO2 standards and MIs using a Gaussian–Lorentzian peak-fitting program, and preliminary CO2 densities were calculated using the ref. 50 densimeter. A linear fit was obtained to adjust the Raman-calculated CO2 densities of the capillary tube standards to their true densities, and all of the MI bubble densities were translated according to this fit.
The total contribution of CO2 from the MI bubble was calculated from the Raman data using a mass balance approach16,18,24. In addition to the bubble CO2 density, which is determined by the Raman analysis, this calculation requires density of the MI glass and the CO2 concentration of the glass, as well as the volumes of the MI glass and bubble. The CO2 contents of the MI glasses were determined by FTIR, which is described in the “Dissolved volatile analysis” section below. The densities of the MI glasses were determined from major element composition and H2O content, and this calculation is also described in the “Dissolved volatile analysis” section below. Volumes were determined from photomicrographs of the MIs, first using ImageJ to trace the area of the MI. This area was fit to an ellipse using the software, and volumes were calculated assuming the MI is an ellipsoid with a third axis intermediate between the long and short axes of the fit ellipse. The volumes of MI bubbles were calculated assuming spherical geometry. The uncertainty in the volume of the MI is primarily due to the estimate of the third axis. The average difference in length between the long and short MI axes is ~17 μm, and so we assume that the error in the length of the third axis is ±10 μm. This yields ~5% error in the total CO2 content (i.e., Fig. 2a), and ~15% error in the bubble vol% (i.e., ±0.3 vol% for a 2 vol% bubble).
The mass balance calculations to calculate the total CO2 abundance of the MI were completed following these steps:
1. 1.
Calculate the mass of CO2 in the MI glass: mass of the MI glass (glass density × glass volume) × CO2 concentration of the glass;
2. 2.
Calculate the mass of CO2 in the MI bubble: bubble density × bubble volume;
3. 3.
Calculate the mass fraction of CO2 in the bubble: divide the mass of CO2 in the bubble (#2) by the total mass of CO2 in the MI (glass + bubble; #1 + #2);
4. 4.
Calculate the reconstructed (glass + bubble) CO2 concentration: divide the CO2 concentration in the MI glass by the value of one minus the mass fraction of CO2 in the bubble (#3).
Raman results are listed in Supplementary Table 1.
### Major element analysis
After Raman analysis, MI glasses were analyzed for major elements using a Cameca SX100 Ultra electron microprobe at the University of Arizona. Each element was counted for 20 s (10 s for Na) using 15 keV accelerating voltage, 20 nA beam current, and a 15-μm spot size. Olivine compositions were measured at the same conditions using a focused beam. Compositions of naturally quenched inclusions studied by Raman were corrected for post-entrapment crystallization and Fe loss using Petrolog353. These corrections were calculated using an oxidation state equal to the NNO buffer54 and the olivine-melt equilibria model of ref. 55, which yields a Fe–Mg partitioning coefficient of ~0.3 at 1200 °C. The FeOT value of 11 wt% was selected from bulk rock data1. Corrected MI and olivine compositions are listed in Supplementary Table 1.
A second set of MIs from the same tephra sample was analyzed for sulfur and chlorine on the same electron microprobe instrument. These elements were counted for 180 s each using 15 keV accelerating voltage, 20 nA beam current, and a 15-μm spot size. S and Cl results are listed in Supplementary Table 2.
### Dissolved volatile analysis
H2O and CO2 contents of the MI glasses were determined by FTIR. MIs were doubly intersected and polished in preparation for transmission FTIR analysis. FTIR analyses were performed using a Nicolet iN10 MX instrument at the United States Geological Survey in Menlo Park. Spectra were collected between 5500 and 1000 cm−1 wavenumber for 45 s (128 scans) with high spectral resolution, and a background was collected before analyzing each sample. The aperture was carefully maximized for each inclusion according to the size of the doubly exposed spot on the inclusion to obtain an optimal spectrum.
H2O and CO2 contents were calculated using the Beer–Lambert Law:
$$C = \frac{{{\mathrm{MW}}{\kern 1pt} * {\kern 1pt} A}}{{\rho {\kern 1pt} * {\kern 1pt} \varepsilon {\kern 1pt} * {\kern 1pt} d}}$$
(1)
where C is concentration in wt%, MW is the molecular weight of the absorbing species, A is the peak height (absorbance) of interest, d is sample thickness in cm, ρ is density of the sample in g/L, and ε is a molar absorption coefficient in L/mol-cm. Absorbances (A) were measured after subtraction of French-curve baselines drawn beneath each peak to reproduce the spectra of volatile-free samples. Thicknesses were determined using a Zygo ZeScope optical profilometer in the LE-CSSS at ASU, which allowed for precise thickness across the FTIR aperture to be determined. Density was calculated for each MI using the method detailed in ref. 56 wherein molecular partial molar volume contributions are totaled for a dry glass and density is adjusted iteratively based on water content. For total water absorption at ~3500 cm−1, the absorption coefficient used was 63 L/mol-cm from ref. 57. In rare cases where a near-IR peak for OH at ~4500 cm1 was visible, the coefficient of 0.67 L/mol-cm from ref. 58 was used. The near-IR peak for molecular water at ~5200 cm−1 was not able to be resolved in any of these spectra. In these basaltic glasses, CO2 is stored in the melt as CO3, and the absorption coefficient was calculated for the MI composition according to ref. 59, with an average value of ~320 L/mol-cm. The ref. 59 relationship was specifically calibrated for alkali-rich mafic magmas like Sunset Crater, and so we estimate ~5% uncertainty for CO2 content, while the uncertainty in H2O content is ~10%.
Volatile contents were also corrected for olivine growth at the rim of the MI (post-entrapment crystallization) using the results of the major element corrections. H2O, CO2, and K2O display incompatible behavior with olivine crystallization. After calculation of volatile contents from FTIR spectra using Eq. 1, the ratio of analyzed and corrected K2O contents was used to adjust the volatile components for post-entrapment modification. FTIR results are listed in Supplementary Table 1.
### Bubble growth model
We model post-entrapment MI bubble formation and growth following the method of ref. 20, employing differences in density to determine maximum bubble volumes. The pre-eruptive bubble volume is controlled by the difference in the temperature of the melt at the time of MI entrapment compared to its temperature just before the onset of eruption. This temperature difference is assessed during calculations of post-entrapment crystallization using Petrolog353. The final bubble volume after the crystal and MI are quenched during eruption is determined from the density of the crystal host and melt at the glass transition temperature, which was discussed above and estimated to be ~675 °C.
First, post-entrapment crystallization of olivine is calculated in 35 °C temperature steps from the liquidus temperature to the minimum pre-eruptive temperature using rhyolite-MELTS version 1.2.060. The liquidus temperature was calculated to be 1166 °C using rhyolite-MELTS. For all calculations in rhyolite-MELTS, we use a primitive (high MgO, low SiO2) MI composition (sample a-06) to represent the initial melt composition. Volatile content, pressure, and oxidation state are also required inputs in rhyolite-MELTS. To calculate pressure, we use the highest dissolved CO2 concentration measured by FTIR (~3100 ppm), and for H2O, we take the highest H2O/K2O ratio of the MIs multiplied by the K2O content of the primitive composition, yielding 1.51 wt% H2O. We calculate the pressure for this volatile content to be ~387.5 MPa using a solubility model for the Sunset Crater composition from ref. 32. An oxidation state equivalent to the NNO buffer was used for these calculations. Rhyolite-MELTS outputs the new melt composition, masses of liquid (melt) and olivine, and the olivine density at each temperature step.
Next, we determine the pre-eruptive bubble size based on differences in melt density. We use the results from rhyolite-MELTS and partial molar volume coefficients from refs. 61,62 to calculate the size of the cavity that forms as a result of crystallization of a higher-density olivine phase. This cavity volume must then be adjusted for shrinkage of the olivine host. The change in volume of the olivine host is calculated using the volume at ambient temperature of 43.95 cm3/mol from ref. 63 and adjusted for temperature by thermal expansion coefficients from ref. 64.
Finally, the maximum bubble size that can form during eruption (quench) is calculated. The bubble grows only from the shrinkage process during quench, and so the final melt volume is determined solely from the melt density at the glass transition temperature. The shrinkage of the olivine host is again accounted for at this stage as described above. The bubble sizes calculated for both the pre-eruption and quench stages are added together, and a linear fit is determined for the data to display the relationship between bubble size fraction and ΔT (Fig. 3a). An example calculation of the model is shown in Supplementary Table 3.
|
2023-01-28 05:03:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5088403224945068, "perplexity": 4641.743583620893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499470.19/warc/CC-MAIN-20230128023233-20230128053233-00172.warc.gz"}
|
https://search.r-project.org/CRAN/refmans/bioseq/html/seaview.html
|
seaview {bioseq} R Documentation
## SeaView: DNA sequences and phylogenetic tree viewer
### Description
This function opens SeaView (Gouy, Guindon & Gascuel, 2010) to visualize biological sequences and phylogenetic trees. The software must be installed on the computer.
### Usage
seaview(
x,
seaview_exec = getOption("bioseq.seaview.exec", default = "seawiew")
)
### Arguments
x a DNA, RNA or AA vector. Alternatively a DNAbin or AAbin object or a phylogenetic tree (class phylo). seaview_exec a character string giving the path of the program.
### Details
By default, the function assumes that the executable is installed in a directory located on the PATH. Alternatively the user can provide an absolute path to the executable (i.e. the location where the software was installed/uncompressed). This can be stored in the global options settings using options(bioseq.seaview.exec = "my_path_to_seaview").
### References
Gouy M., Guindon S. & Gascuel O. (2010) SeaView version 4 : a multiplatform graphical user interface for sequence alignment and phylogenetic tree building. Molecular Biology and Evolution 27(2):221-224.
Other GUI wrappers: aliview()
|
2023-03-24 12:28:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24306732416152954, "perplexity": 11880.335356262145}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945282.33/warc/CC-MAIN-20230324113500-20230324143500-00438.warc.gz"}
|
https://math.libretexts.org/Bookshelves/Linear_Algebra/Map%3A_Linear_Algebra_(Waldron%2C_Cherney%2C_and_Denton)/11%3A_Basis_and_Dimension/11.1%3A_Bases_in_%5C(%5CRe%5E%7Bn%7D%5C)
|
# 11.1: Bases in $$\Re^{n}$$
$$\newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} }$$
$$\newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}}$$
In review question 2, chapter 10 you checked that
$\Re^{n} = span \left\{ \begin{pmatrix}1\\0\\ \vdots \\ 0\end{pmatrix}, \begin{pmatrix}0\\1\\ \vdots \\ 0\end{pmatrix}, \ldots, \begin{pmatrix}0\\0\\ \vdots \\ 1\end{pmatrix}\right\},$
and that this set of vectors is linearly independent. (If you didn't do that problem, check this before reading any further!) So this set of vectors is a basis for $$\Re^{n}$$, and $$\dim \Re^{n}=n$$. This basis is often called the $$\textit{standard}$$ or $$\textit{canonical basis}$$ for $$\Re^{n}$$. The vector with a one in the $$i$$th position and zeros everywhere else is written $$e_{i}$$. (You could also view it as the function $$\{1,2,\ldots,n\}\to \mathbb{R}$$ where $$e_{i}(j)=1$$ if $$i=j$$ and $$0$$ if $$i\neq j$$.) It points in the direction of the $$i^{th}$$ coordinate axis, and has unit length. In multivariable calculus classes, this basis is often written $$\{ i, j, k \}$$ for $$\Re^{3}$$.
Note that it is often convenient to order basis elements, so rather than writing a set of vectors, we would write a list. This is called an ordered basis. For example, the canonical ordered basis for $$\mathbb{R^{n}}$$ is $$(e_{1},e_{2},\ldots,e_{n})$$. The possibility to reorder basis vectors is not the only way in which bases are non-unique:
Remark (Bases are not unique)
While there exists a unique way to express a vector in terms of any particular basis, bases themselves are far from unique.
For example, both of the sets:
$\left\{ \begin{pmatrix}1\\0\end{pmatrix}, \begin{pmatrix}0\\1\end{pmatrix} \right\} \textit{ and } \left\{ \begin{pmatrix}1\\1\end{pmatrix}, \begin{pmatrix}1\\-1\end{pmatrix} \right\}$
are bases for $$\Re^{2}$$. Rescaling any vector in one of these sets is already enough to show that $$\Re^{2}$$ has infinitely many bases. But even if we require that all of the basis vectors have unit length, it turns out that there are still infinitely many bases for $$\Re^{2}$$ (see review question 3).
To see whether a collection of vectors $$S=\{v_{1}, \ldots, v_{m} \}$$ is a basis for $$\Re^{n}$$, we have to check that they are linearly independent and that they span $$\Re^{n}$$. From the previous discussion, we also know that $$m$$ must equal $$n$$, so lets assume $$S$$ has $$n$$ vectors. If $$S$$ is linearly independent, then there is no non-trivial solution of the equation
$0 = x^{1}v_{1}+\cdots + x^{n}v_{n}.$
Let $$M$$ be a matrix whose columns are the vectors $$v_{i}$$ and $$X$$ the column vector with entries $$x^{i}$$. Then the above equation is equivalent to requiring that there is a unique solution to
$MX=0\, .$
To see if $$S$$ spans $$\Re^{n}$$, we take an arbitrary vector $$w$$ and solve the linear system
$w=x^{1}v_{1}+\cdots + x^{n}v_{n}$
in the unknowns $$x^{i}$$. For this, we need to find a unique solution for the linear system $$MX=w$$.
Thus, we need to show that $$M^{-1}$$ exists, so that
$X=M^{-1}w$
is the unique solution we desire. Then we see that $$S$$ is a basis for $$V$$ if and only if $$\det M\neq 0$$.
Theorem
Let $$S=\{v_{1}, \ldots, v_{m} \}$$ be a collection of vectors in $$\Re^{n}$$. Let $$M$$ be the matrix whose columns are the vectors in $$S$$. Then $$S$$ is a basis for $$V$$ if and only if $$m$$ is the dimension of $$V$$ and
$\det M \neq 0.$
Remark
Also observe that $$S$$ is a basis if and only if $${\rm RREF}(M)=I$$.
Example 113
Let
$S=\left\{ \begin{pmatrix}1\\0\end{pmatrix}, \begin{pmatrix}0\\1\end{pmatrix} \right\} \textit{ and } T=\left\{ \begin{pmatrix}1\\1\end{pmatrix}, \begin{pmatrix}1\\-1\end{pmatrix} \right\}.$
Then set $$M_{S}=\begin{pmatrix} 1 & 0\\ 0 & 1\\ \end{pmatrix}$$. Since $$\det M_{S}=1\neq 0$$, then $$S$$ is a basis for $$\Re^{2}$$.\\
Likewise, set $$M_{T}=\begin{pmatrix} 1 & 1\\ 1 & -1\\ \end{pmatrix}$$. Since $$\det M_{T}=-2\neq 0$$, then $$T$$ is a basis for $$\Re^{2}$$.
|
2019-02-15 22:12:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9915409088134766, "perplexity": 136.26579533655206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247479159.2/warc/CC-MAIN-20190215204316-20190215230316-00142.warc.gz"}
|
https://proxies-free.com/real-analysis-prove-that-left-int_x-mathbf-f-d-mu-right-_p-leq-int_x-left-mathbf-f-right-_p-d-mu-for-mathbf-f-f_1-f_n/
|
# real analysis – Prove that \$ left | int_X mathbf {f} d mu right | _p leq int_X left | mathbf {f} right | _p d mu \$ for \$ mathbf {f} = (f_1, …, f_n) \$
I have problems with the following proof.
Define $$mathbf {f} (x) = (f_1 (x), …, f_n (x))$$ Where $$f_i: X to mathbb {R}$$ for every positive integer $$i leq n$$, and each one $$f_i$$ is integrable.
Prove that
$$left | int_X mathbf {f} \ mathrm {d} mu right | _p leq int_X left | mathbf {f} right | _p \ mathrm {d} mu$$
to the $$1 leq p leq infty$$
Now we can explicitly write the left side as
$$left | int_X mathbf {f} \ mathrm {d} mu right | _p = left ( sum_ {j = 1} ^ n left | int_X f_j \ mathrm {d} mu right | ^ p right) ^ {1 / p}$$
Now the exponent annoys me. If we had the case $$p = 1$$, then it's just there for every integrable function $$g: X to mathbb {R}$$, we have
$$left | int_X g \ mathrm {d} mu right | leq int_X | g | \ mathrm {d} mu$$
Consequently,
$$left | int_X mathbf {f} \ mathrm {d} mu right | = sum_ {j = 1} ^ n left | int_X f_j mathrm {d} mu right | leq sum_ {j = 1} ^ n int_X | f_j | \ mathrm {d} mu = int_X left ( sum_ {j = 1} ^ n | f_j | right) \ mathrm {d} mu = int_X | mathbf {f} | \ mathrm {d} mu$$
But I can not handle the exponential $$1
Thanks!
|
2019-11-20 17:27:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 12, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9416569471359253, "perplexity": 2242.8546702768735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670597.74/warc/CC-MAIN-20191120162215-20191120190215-00326.warc.gz"}
|
https://motls.blogspot.com/2007/08/public-against-global-warming-ideology.html?m=1
|
Monday, August 27, 2007
Public mostly against global warming ideology
Andrew Revkin wrote an article in the New York Times about Steve McIntyre's 1/4-degree Fahrenheit reduction of the recent U.S. temperature record.
AOL reprinted the story and added two polls. 180,000+ people voted whether the threat of AGW is being exaggerated. The result?
• 52% think it is exaggerated
• 26% think it is understated
• 22% think it is fair
Recently, Sharon Begley complained that 42% thought that AGW was being exaggerated. Now it's a more sensible figure, namely 52%. ;-)
There are also 11 pictures with moderately catastrophic predictions by the IPCC. Among 150,000+ voters,
• 55% think that the predictions won't prove fairly accurate
• 45% think that they will prove fairy accurate
Thanks to Rae!
Related: Prof Christopher Lingle criticizes the "strikingly one-sided" reporting on the climate change in the Japan Times.
Related: Two key BBC news bosses attacked global warming jihadists' plans to dedicate a whole day to environmentalist propaganda, saying it was not the broadcaster's job to preach to viewers. The program titled Planet Relief was classified as "consciousness raising" and contradicts the corporation's guidelines. Peter Barron realizes that many people think that the BBC's job is to save the planet but this thinking must be stopped, he insists. Peter Horrocks also realizes that it's not their job to "proselytize" about the AGW religion.
Related: Two months ago, a British poll showed that 3/4 of Britons think that global warming is a natural occurrence and not a result of carbon emissions.
Related: In another British poll two months ago, 56% of respondents agreed that scientists are still questioning climate change. Most people thought that the problem was exaggerated to make money.
Related: One more British poll whose results were published two weeks ago showed, among many other things, that the number of people who consider environment to be among the most important issues for the government dropped from 25 percent in 2001 to 19 percent in 2007. 32 percent refused to reduce their flying and 24 percent refused to reduce their use of cars.
1 comment:
1. Pages of many web site contains global warming pictures. But that pictures not give enough information of global warming. Global Warming myth is very deep ozone has doubled since the mid-19th century due to chemical emissions from vehicles, industrial processes and the burning of forests, the British climate researchers wrote.
|
2019-10-19 10:16:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27217620611190796, "perplexity": 4648.668680166565}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986692723.54/warc/CC-MAIN-20191019090937-20191019114437-00199.warc.gz"}
|
http://www.purplemath.com/learning/viewtopic.php?f=7&t=649&p=2021
|
## Complete the Square solving for x
Simplificatation, evaluation, linear equations, linear graphs, linear inequalities, basic word problems, etc.
### Complete the Square solving for x
Who can walk me thru the steps please
x^2-11x+24=0 solve for x and 8x^2+2x-15=0 solve for x Help
prnygboy1
Posts: 1
Joined: Wed Jun 24, 2009 6:39 am
prnygboy1 wrote:Who can walk me thru the steps please
This lesson will walk you through the steps.
. . . . .$x^2\, -\, 11x\, +\, 12\, =\, 0$
. . . . .$\mbox{move the constant: }x^2\, -\, 11x\, =\, -12$
. . . . .$\mbox{square half of linear coeff; add: }x^2\, -\, 11x\, +\, \frac{121}{4}\, =\, -12\, +\, \frac{121}{4}$
. . . . .$\mbox{restate as square: }\left(x\, -\, \frac{11}{2}\right)^2\, =\, \frac{121\, -\, 48}{4}$
...and so forth.
Once you've learned those steps, please attempt the exercises. If you get stuck, you can then reply with a clear listing of your steps and reasoning so far. Thank you!
stapel_eliz
Posts: 1705
Joined: Mon Dec 08, 2008 4:22 pm
|
2013-05-22 13:17:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 4, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8746370077133179, "perplexity": 3903.3169819467544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701760529/warc/CC-MAIN-20130516105600-00056-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=JJSHBB_2016_v22n2_115
|
ESD(Exponential Standard Deviation) Band centered at Exponential Moving Average
Title & Authors
ESD(Exponential Standard Deviation) Band centered at Exponential Moving Average
Lee, Jungyoun; Hwang, Sunmyung;
Abstract
The Bollinger Band indicating the current price position in the recent price action range is obtained by adding/substracting the simple standard deviation (SSD) to/from the simple moving average (SMA). In this paper, we first compare the characteristics of the SMA and the exponential moving average (EMA) in the operator`s point of view. A basic equation is obtained between the interval length N of the SMA operator and the weighting factor $\small{{\rho}}$ of the EMA operator, that makes the centers of the 1st order momentums of each operator impulse respoinse identical. For equivalent N and $\small{{\rho}}$, frequency response examples are obtained and compared by using the discrete time Fourier transform. Based on observation that the SMA operator reacts more excessively than the EMA operator, we propose a novel exponential standard deviation (ESD) band centered at the EMA and derive an auto recursive formula for the proposed ESD band. Practical examples for the ESD band show that it has a smoother bound on the price action range than the Bollinger Band. Comparisons are also made for the gap corrected chart to show the advantageous feature of the ESD band even in the case of gap occurrence. Trading techniques developed for the Bollinger Band can be straight forwardly applied to those for the ESD band.
Keywords
Bollinger Band;ESD(Exponential Standard Deviation);Simple standard deviation;Discrete time fourier transform;Exponential moving average;
Language
Korean
Cited by
References
1.
A.Papoulis and S.U.Pillai, Probability, "Random Variables and Stochastic Processes," 4th edition, McGraw Hill, 2002.
2.
A. V. Oppenheim . R. W. Schafer and J. R. Buck, "Discrete-time signal processing," Prentice Hall, 1999.
3.
4.
Kim Hyun-Ji., U-Jin Jang, "Trading strategies using an exponential moving average line," Joint Conference on Industrial Engineering Journal in Spring 2010, 2010.06, 1124-1130.
5.
Kwon Sang-Joo., "Robust Kalman Filtering with Perturbation Estimation Process-for Uncertain Systems, Journal of Institute of Control," Robotics and Systems, Vol.12, Iss. 3, 2006, 201-207.
6.
Kwon Seong-Ki, Dong-Myung Lee, "Performance Analysis of Compensation Algorithm for Localization Using the Equivalent Distance Rate and the Kalman Filter," The Journal of Korean Institute of Communications and Information Sciences, Vol.37, Iss. 5B, 2012, 370-376.
7.
Lee Jae-Won, "Astock Trading System based on Supervised Learning of Highly Volatile Stock Price Patterns," Korean Institute of Information Scientists And Engineers, Computing Practices and Letters Article 19, No.1(2013), 23-29.
8.
Lee Jung-Youn, Hwang Sun-Myung, "Efficient Utilization Condition of MACD and Nontrend Status Detecting Index," Korean Institute of Information Scientists And Engineers, Korea Computer Conference in 2015, 2015, 630-632
9.
Oh Won-Seok, "Systematic future trading with a composition strategy," Korea Academic Society of Businee Administration, Conference and Annual General Meeting, 2008, 510-513.
10.
Rhee, Jung-Soo, "A note for hybrid Bollinger bands," Journal of the Korean Data and Information Science Society, Vol.21, Iss. 4, 2010, 777-782.
11.
S.J.Orfanidis., "Introduction to Signal Processing," Prentice Hall Inc, 2010.
|
2018-11-15 08:58:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 2, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.535289466381073, "perplexity": 6401.847602296099}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742569.45/warc/CC-MAIN-20181115075207-20181115101207-00526.warc.gz"}
|
http://2007.runescape.wikia.com/wiki/Pest_Control
|
# Pest Control
15,370pages on
this wiki
This is a safe minigame.If you die here, you will not lose any of your items.
The official world for Pest Control is world 344 (P2P).
Location on World Map Void Knights' Outpost ↑ Unknown ← Pest Control → Unknown ↓ Pest Control Island
Pest Control (often known simply as PC) is a co-operative members-only combat-based activity. Players must defend an NPC (non-player character) known as the Void Knight from an onslaught of monsters, while at the same time destroying the portals from which the monsters spawn.
The activity is played in the name of Guthix to retain balance in the world, which may be disrupted by an influx of monsters invading islands in the south of the world. It is run by an order of Guthix known as the Void Knights. Players board landers, which transport them to islands under invasion. The activity is divided into three landers; access to each lander is determined by combat level.
This is a 'safe' activity. Players who die keep their items, respawn on the lander, and can rejoin combat immediately. In addition, Hitpoints, Prayer points, the Special Attack bar, and run Energy are fully restored at the end of each game. All stats are restored to their normal levels at the end of each game. This means that stats boosts of various potions like Super sets do not carry over from game to game. This make using these potions somewhat expensive, as each game will require a new dose from each potion.
## RequirementsEdit
### SkillsEdit
The only requirement to participate in a game of Pest Control is to have a combat level of 40 or above.
• A minimum combat level of 40 is required to use the novice lander.
• To use the intermediate lander, a player must have a combat level of 70 or higher
• To use the veteran lander, a player must have a combat level of 100 or higher.
Some players attempt to recruit Pest Control players into their clans, in the hopes that high level players will join the clan and help to win a higher percentage of games in a short amount of time. These players state the clan name to join. Although any level players can join such a clan, the clan may kick out players who are lower level than the clan wants or who do not perform well in the Pest Control games.
Other players try to get high level players to switch to a world where a clan of high level players plays Pest Control, again in the hopes that high level players will help to win a higher percentage of games in a short amount of time. These players announce their intentions by saying things like, 'Trade for a 100 plus world'. By opening the trading interface (no items need be exchanged), the advertising player can verify the trading player's combat level and, if it is high enough, will then disclose the world that the clan is playing Pest Control on. The world is disclosed by the number of items (usually coins) the advertising player places in the trading interface. For example, if the advertising player places 365 coins in the interface, then the clan is on World 365. This procedure keeps the world number private and thus prevents its disclosure to anyone below the clan's desired combat levels.
### ParticipationEdit
In order to receive commendation points, a player must inflict 50 points of damage before the end of the game. Repairing a barricade or gate on the island acts as 5 points of damage on a monster, allowing players who may not be able to get the required points in the time allotted or players who do not want to train a combat-related skill to receive points. Therefore, if a player did no combat at all during a game of Pest Control, they would have to repair 10 barricades or gates. You can get 50 points of damage from repairing gates on any level boat.
As of 18 December 2014, Pest Control now grants full experience when dealing damage in the minigame. Prior to this, players would earn 1/2 the experience they would normally receive for damage dealt. (e.g. 4 damage = 8xp in attack rather than 16xp). This experience penalty did not apply when doing damage to portals.
### PlayersEdit
A game of Pest Control can be played by 5 to 25 players. Multiple games may be played at once by many groups of 25 players. Anyone who wishes to play must board the lander. If the lander fills with 25 players, the game will begin automatically. Otherwise, players must wait 5 minutes for the game to begin.
In busy Pest Control worlds, the landers, particularly the level 40 novice one, can fill up quickly. A lander leaves soon after it has 25 players on board. Sometimes, far more than 25 players attempt to get on a lander. A player not taken in the first load of 25 players is assigned a priority number for the lander, starting at 1 and rising by 1 with each load that leaves without the player. The higher the priority number, the more likely the player will be taken in the next load. Priority numbers as high as 3 have occasionally been seen and it is rumoured that higher ones have occurred.
On slow Pest Control worlds, a lander with a minimum of 5 but fewer than 25 players will leave after a 5 minute wait. Some players bring items to cast the High Level Alchemy or Humidify spells while they wait for the landers to fill up.
### RestrictionsEdit
Pets are not allowed on the island. Dwarven Multicannons may not be used during the activity.
Alchemy spells are also forbidden on the island.
### LocationEdit
Pest Control is located on the Void Knights' Outpost, which is on one of the southernmost islands in the game. The fastest way to get to the outpost is through use of the Minigame Group Finder, by selecting Pest Control and teleporting directly there. However, teleport has a 20 minute cooldown and cannot be cast from PvP Worlds, The Wilderness or the Duel Arena.
Otherwise, the minigame can be accessed from the docks of Port Sarim south of the Lady Lumbridge (ship). Speak with the Squire and she will ask if you want to go to the Void Knights' Outpost. Alternatively, you can right-click the squire and select 'Travel'.
To get to Port Sarim you can use use fairy ring code A•I•Q to Mudskipper Point, use the Amulet of Glory's teleport to Draynor Village then go around the fence or walk from Falador or Lumbridge.
Another quick way to get to Port Sarim is to teleport to Ardougne, take the boat to Brimhaven and then take the Charter Ship to Port Sarim. The Charter Ship is located right next to the Squire's ship.
Also, you can buy teles to pest control that are given as clue scroll rewards from G.E.
## The OutpostEdit
The Outpost is an island, with various utilities, including:
Harboured at the docks in the south-west corner of the island are three, LCVP-style ships, called "landers", which a player must board to participate in the activity. The landers carry a combat level requirement of 40, 70 or 100 to board. The lander must have 5 users on board before the activity may begin.
## WinningEdit
The rules for pest control are simple. There are two ways to win the game:
1. Keep the Void Knight alive for 20 minutes.
2. Destroy all 4 portals before the Void Knight is killed. This is much more commonly done, as games can be won in as little as 2 minutes with this strategy. However, the Void Knight can easily be killed even in two minutes if left undefended, so a small number of players should defend him. Since players rarely organise themselves, it can be worth checking the Void Knight's status after each portal kill and switch to defence for a while if the knight is being swarmed or has less than half health. Each portal that is killed will give the void knight 50 hp.
## PestsEdit
Pests are the monsters that spawn out of portals to kill players and the Void Knight.
Monsters
Name Attack style Combat levels Notes
Brawler Melee 51, 76, 101, 111 or 129 Blocks the way due to its massive size, making it impossible to walk through. It is impossible for players or pests to shoot over it with ranged or magic attacks.
Defiler Ranged 33, 50, 66, 80, or 97 Can range over the walls.
Ravager Melee 6, 53, 71, 89, or 106 Destroys Gates and Barricades. Non-aggressive, but can hit hard with melee.
Shifter Melee 36, 57, 76, 90, or 104 Can teleport (also over the walls). Immune to poison.
Spinner Melee 37, 55, 74, 88, or 92 Can heal the portals. Will not retaliate when attacked.
Splatter Melee 22, 33, 44, 54, or 65 Explodes when killed, damaging nearby players and pests. Will explode instantly upon moving adjacent to gates and barricades.
Torcher Magic 33, 49, 67, 79, 91 or 92 Can cast spells over the walls.
### BrawlerEdit
Brawlers are the largest and most resilient creatures in the Pest Control activity, and they defend the portals. They resemble a gorilla or a small elephant with spikes sprouting from their backs and a pointed, very slightly transparent, snout. When you see one of these creatures, ignore them, for they are the lowest priority to kill unless they are in your way.
Normally they will not attack the fort, though they are still a match for anyone attempting to destroy the portal. Their combat levels can be 51, 76, 101, 111 or 129, and their colouring is based on their level. Brawlers are one of only five creatures that you cannot run through (the others being Monkey Guards on Ape Atoll, and the monsters fought in the quest Dream Mentor) - they block your path in a similar manner to the barricades in the Castle Wars activity.
Brawlers block other monsters too. This can be used to the team's advantage by 'luring' a brawler to the steps where the Void Knight is. Defilers and Torchers cannot shoot over brawlers, and Shifters are unable to teleport through them. Brawlers will never attack the Void Knight. Thus, 'luring' can be an effective tactic to protect the Void Knight from potential damage by monsters behind the lured brawler. Again, however, players rarely organise themselves and many players will attack the brawler anyway. If there are hordes of monsters behind the brawler when it is killed, the result can be catastrophic and the Void Knight can take substantial amounts of damage as a result. Thus, it is wise to never kill brawlers when they can be used to keep the void knight from sustaining additional damage.
### DefilerEdit
"Defiler" redirects here. For the set effect of Verac the Defiled, see here.
Defilers are fast, agile creatures in the Pest Control activity. They have the appearance of the lower half of a snake, a humanoid top half and a face resembling that of a cat. They can throw flying spikes over long distances, which can inflict a large amount of damage to the Void Knight. They can even launch their barbs over walls, though if they are in the spaces right in front of one of the three gates, they cannot shoot over it, so keeping them closed will block those directly in front of it. Their combat levels vary from 33, 50, 66, 80, or 97, and their colouring is based on these levels.
### RavagerEdit
Ravagers are short, humanoid creatures with large claws in the Pest Control activity. Their appearance is closely related to that of a mole with over sized claws and red eyes. They are capable of tearing down the gates and so they must be killed to protect the void knight from the torchers and defilers. However, they are relatively weak when it comes to direct combat. Their combat levels can be 36, 53, 71, 89, or 106 and their colouring is based on these levels. When attacked, a ravager will often continue destroying its target (if any) before engaging in combat with the attacking player, and may also destroy anything nearby that gets repaired, though after the barricades have been destroyed, they are not a threat.
### ShifterEdit
Shifters are creatures that excel in melee combat and can teleport across the island and even past walls. For this reason, they are very dangerous for those on defence since they can teleport right next to the Void Knight and attack him. They have the bottom half of a spider with the scythes of a praying mantis (similar to the Abyssal demon). Their combat levels can be 36, 57, 76, 90, or 104 and their colouring is based on these levels. Although it has the ability to teleport other monsters, such as ravagers and torchers (and up on to towers), they can only teleport others a very short distance.
For some reason shifters are some of the few monsters that can attack at a diagonal, other than ranged monsters, most single squared beasts will align with the player to attack. They also seem to hit the Void Knight from a distance while teleporting around him/her.
### SpinnerEdit
Spinners are creatures that appear as spinning tops or jellyfish, and float above the ground. They repair portals on the island and it is unlikely the damage players do will be greater than the amount the spinners heal, especially if there is more than one. However, if the players manage to destroy the portal before any spinners healing it are killed, the spinners will spin around violently and then explode, hitting all players within a few squares with poison that deals 5 hit points of damage instantaneously and then poisons for 1 hit point afterwards. Their combat levels can be 37, 55, 74, 88, or 92. Since they often prevent players from destroying the portals quickly (and thus extend the length of the mini-game) they are the first priority to kill, even if the portal they surround is still protected.
### SplatterEdit
Splatters are creatures that appear like a giant ball with a single eye in the middle and liquid inside them. They will go towards the nearest standing barricades or fort doors and detonate, causing substantial damage to all players, monsters, and objects that are in the immediate vicinity. This will also happen if they are killed, which is easy since they are often low level and weak defensively. When a splatter "detonates" near another splatter, if the secondary splatters' life points are low enough, the damage may cause a chain reaction, increasing the overall damage. Some players find amusement in exploiting the splatter's detonation. By making several or many splatters follow a player with auto-retaliate turned off, leading them into a group of enemies or players, and then killing one, they can start this chain reaction of detonations, likely killing everything/everyone surrounding them. Their combat levels can be 22, 33, 44, 54, or 65 and their colouring is based on these levels. If the opportunity arises, you can use the splatters sort of like a Void Seal by detonating them near large groups of monsters. This does not work on portals. Currently no prayer protection can defend against their "detonation". Players wearing Dharok's equipment may wish to kill Splatters to lower their health and activate the set effect to do extra damage. Killing them is a higher priority the closer they get to the gates, as their explosions can damage the gates. Splatters will never attack the Void Knight, but if killed near it, it can cause damage.
### TorcherEdit
Torchers are creatures that look like winged snakes with bat wings and will actively attack the Void Knight. They have a long distance magical attack which can harm both players and the Void Knight. They can even launch this attack over walls, though if they are in the spaces right in front of one of the three gates, they cannot shoot over it, so keeping them closed will block those directly in front of it. They have a relatively low hit points and defence, so they are dispatched easily. Their combat levels can be 33, 49, 67, 79, 91 or 92 and their colouring is based on these levels.
## PortalsEdit
Portals are the key mechanic in Pest Control. A portal will continually spawn pests until it is destroyed.
Colour Location Attack weakness
W
E
SE
SW
At the start of a game, all four portals are given a shield, which makes the portal impenetrable. Players will have to wait for the Void knight to disable the shield before attacking. The first shield will go down 15 seconds into the game, and then the following shields will go down in 30 second intervals thereafter. Portals will be announced as the shields drop. All shields drop at 1 minute, 45 seconds.
Once the shield is down, players are free to attack and destroy the portals. Each portal begins with 200 hitpoints in the novice lander, or 250 in the intermediate and veteran landers. Apart from their specific weaknesses, the portals have relatively strong defence. Players attacking a portal should either be exploiting its weakness, or have a high accuracy bonus. The yellow portal is weak against stab and slash attacks, the red portal is weak against crush, the blue portal is weak against magic, and the purple portal is weak against ranged.
In general, Portal 1 and Portal 2 will predict which portal shields will drop as Portal 3 and Portal 4.
This is the portal shield drop sequence, which shows the order in which portals shields will drop. Only the red portal shield won't drop first, though.
Portal 1 Portal 2 Portal 3 Portal 4
The best strategy is for the players to follow the portals as they open, killing or luring all spinners first (1-2 spinners are ok with a very strong team- i.e. Dharokers or equivalent), then killing the portal. Players should ALWAYS close the gates as they run through or past them, this will also help keep the Void Knight's HP up longer.
A player should remain alert in the event that the portal spawns spinners. A Spinner will repair damage done to the portal, and the effect of multiple spinners will stack, making the portal nearly invincible. Unless your team is specifically trying to gain more experience by allowing the spinner to heal the portal, it is better to either kill or lure the spinners to prevent the portal from healing. A common misconception is that using special attack or prayer on the portals will cause Spinners to appear, but this is nothing more than a myth.
Once a portal is destroyed, it will stop spawning pests, and the Void Knight will regain 50 health. The game will end as soon as the four portals are successfully destroyed.
## StrategiesEdit
The best strategy is to make sure the gates stay closed as the players run through or past them. This helps keep the Void Knight safe. It is ok for 2-3 players to be at the Void Knight to help keep pests from attacking him, but the majority of the players should be following the portals as they open to destroy them as quickly as possible. Rangers, Halberdiers, and Magers can easily lure spinners away from the portals while the meleers are attacking and killing the portals.
Never stand in the middle of the front of the portal because this is where the pests spawn and you will trap spinners from being lured from the portal and also trap yourself. A great place for Dharokers to stand is at the back of the portal in the middle. When the portal dies, if any spinners are still alive, they will poison all those nearby, unless they are standing at the back of the portal nowhere near the spinner(s).
When at the portals, Spinners will spawn and begin repairing the portal. A Spinner makes a distinct sound when it starts healing the portal, notifying players of its presence. Generally, if there are a number of people attacking the portal, a single Spinner can be ignored, and the portal can still be easily destroyed. Once multiple Spinners spawn, however, the portal will be repaired quicker than players can damage it. Players will need to either kill or lure the Spinners away in order to be able to destroy the portal.
## RewardsEdit
Winning players are rewarded with coins and commendation points.
You have successfully defended the island! (link)
Alas, the Void Knight has died. (link)
Depending on your boat, you will gain a different number of commendation points upon winning a game:
• Novice Lander: 2 points per game won.
• Intermediate Lander: 3 points per game won.
• Veteran Lander: 4 points per game won.
If the player's team destroys all of the portals, they will receive coins equal to 10 times their combat level as well as the commendation points.
Many players save up their points to maximize their rewards. If a player trades in 100 points in one go, they get an extra 10% experience per point. If they trade in 10 points, they get an extra 1% experience.
Note: The most commendation points a player can have at any time is 4000 (changed in April 16 2015). If you board a boat while holding 4000 points, a warning will be given to the player to trade the points in. If a player continues to play with 4000 points, subsequent games will not award any commendation points. You will also get a warning if winning the next game would result in wasting points, for example if you had 3998 points and you were playing in the Veteran boat (4 points per game). In this event, winning the game would only take you to the maximum 4000 points, wasting the other two.
Commendation points can be traded in for:
### ExperienceEdit
• Experience in any combat skill that is equal to or greater than level 25. The formula is the following: $N \times Floor(\frac{l^2}{600})$, where $l$ equals level; $N$ equals 18 for prayer, 32 for magic/range, and 35 for all others. Floor means round down to the nearest integer (whole number). The experience this generates is summarised below. The amounts below are awarded per commendation point traded in.
• It should be noted that when redeeming 100 points, 110 points worth of experience is awarded.
Points
Level Attack Strength
Defence Hitpoints
Ranged
Magic
Prayer
1 25-34 35xp 32xp 18xp
1 35-42 70xp 64xp 36xp
1 43-48 105xp 96xp 54xp
1 49-54 140xp 128xp 72xp
1 55-59 175xp 160xp 90xp
1 60-64 210xp 192xp 108xp
1 65-69 245xp 224xp 126xp
1 70-73 280xp 256xp 144xp
1 74-77 315xp 288xp 162xp
1 78-81 350xp 320xp 180xp
1 82-84 385xp 352xp 198xp
1 85-88 420xp 384xp 216xp
1 89-91 455xp 416xp 234xp
1 92-94 490xp 448xp 252xp
1 95-97 525xp 480xp 270xp
1 98-99 560xp 512xp 288xp
### Void Knight equipmentEdit
Item Cost Description
Void Knight mace 250
Able to cast Claws of Guthix in place of a Guthix staff.
Not required for the Void set effect.
Void Knight top 250 Required for the Void set effect.
Void Knight robe 250 Required for the Void set effect.
Void Knight gloves 150 Required for the Void set effect.
Void melee helm 200 +10% Melee accuracy and damage.
Void magic helm 200 +30% Magic accuracy.
Void ranger helm 200 +10% Ranged accuracy and +20% Ranged damage.
Void Knight seal 10
Operate in a Pest Control game to inflict 100 points of damage to surrounding pests.*
Worn in the amulet slot.
• The Void Knight Seal ability cannot be used outside of Pest Control.
• Damage inflicted by a seal does not count toward the 50 damage requirement in Pest Control games.
*This amulet has 8 charges and will crumble to dust when depleted.
• There is a requirement of 42 Attack, Strength, Defence, Hitpoints, Ranged, and Magic, as well as 22 Prayer to use any Void item.
• A set of Void robes and one special helm will cost 850 points.
• A complete set of Void items (including a mace) will cost 1500 points.
### Herb packEdit
Pack contains an assortment of herbs.-30 Points
For example, a pack may contain: 2 Harralander, 3 Ranarr, 1 Toadflax, 3 Irit, 4 Avantoe and 2 Kwuarm. Also note the amounts are randomized. This is just an average.
### Mineral packEdit
Pack commonly contains 25 Coal, 18 Iron ore -15 points
### Seed packEdit
Pack commonly contains 3 sweetcorn seeds, 6 tomato seeds and 2 limpwurt seeds -15 points
## WorldEdit
The official world to play Pest Control on is:
• World 344.
As with any mini-game, it is not required to use the official world, though landers in most other worlds are usually empty if a Pest Control clan is not present. However, if a player does not have a specific team to join, simply using the Novice lander on a crowded world is plenty sufficient. Occasionally, a game will be lost on that lander, but games are won fairly consistently, so there's not much of a loss of points overall.
However, in response to the prevalence of inefficient players and Pest Control macros that resulted in slower games and a higher amount of losses, many players began to seldom use the official world for playing the Pest Control mini-game. Instead, many players generally go to the Quest/Diary/Mini-game tab and access the Pest Control mini-game chat to see where others are playing Pest Control. It should noted that players in these clans migrate to different worlds often, almost daily, presumably to mitigate the possible presence of macros.
|
2016-07-29 15:58:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2647028863430023, "perplexity": 3572.2541448787174}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257831769.86/warc/CC-MAIN-20160723071031-00196-ip-10-185-27-174.ec2.internal.warc.gz"}
|
https://blog.dopex.io/articles/bull-theses/dpx-bull-thesis?source=post_internal_links---------1-------------------------------
|
## May 25, 2021
# $DPX Bull Thesis We have all read our fair share of Tokenomics in the cryptocurrency space — DeFi connoisseurs and complete newbies alike. “Worthless governance” seems to be the core concept behind protocol tokens in the DeFi space that are more often than not used as incentives for liquidity providers. Although it may not be a necessity to be distributed as incentives in the long run — a la UNI and YFI, it’s definitely a powerful bootstrapping tool for tokens to build and scale an early community of strong believers. ### “Please, Not Another Governance Token Ser” Why do we call them “worthless governance” tokens? Simple — they’re mainly designed to be used for plain old governance votes. Boring right. Some may be usable today for earning incentives via staking or liquidity provision on your favorite AMMs, but in the long term they usually seem to converge on the singular governance use case. There are tokens that aim to differentiate from the rest by offering alternate use cases such as protocol fee collection for token stakers like in Yearn/SushiSwap — which creates more reason for these tokens to hold value and create objective ways to value them, even with naive methods such as price correlation to fees collected. The more features a coin offers coupled with its governance functionality, the more value it could be perceived to have. Of course, the extent to which governance decisions influence the protocol plays a big role in value accrual for these tokens as well. For example, MKR being used to vote on adjusting parameters for a multi billion dollar stablecoin (DAI) holds a lot greater value than XYZSwap passing a vote to adjust weights for scarcely used liquidity pools on a cloned vanilla AMM protocol. ###$DPX: The “Productive Governance” Token
Taking these points into consideration, DPX, the Dopex Governance Token, was designed with a good mix of objective and subjective valuation properties accounted for. On the objective side, DPX has governance staking, delegate staking, use in margin collateral and liquidity provision incentives as features with potential quantifiable data points that could be mapped to correlate protocol usage to token price.
From a subjective view, DPX governance decisions play a major role from a valuation perspective considering the governance focused model required for Dopex to work optimally as a liquid and efficient options protocol.
But how important of a role do these governance functions play with Dopex?
#### Governance decides:
• Weights of DPX rewards for each pool
• Rebate amounts in rDPX for each pool
• Strike thresholds for option chains in pools
• Fallback for price multipliers in case of delegate failure
• Removal and slashing of delegates
This incentivizes the largest option liquidity providers on Dopex as well as delegates to control these parameters in a way to fairly price options which instill confidence in the protocol and subsequently result in higher protocol usage, TVL and market capitalization.
Dopex delegates who are responsible for quoting option prices and arguably one of the biggest stakeholders in the protocol are required to stake 2500 DPX in the delegate staking contract to be able to quote multipliers within the pricing formula, which would be applied on realized volatility/RV, for each asset to account for option writing risk and short/long term price forecasts.
The pricing formula is used to derive volatility smiles that are used to generate fair priced option chains for each available asset on Dopex.
Delegates are incentivized to buy and stake DPX, quote price multipliers that result in fair pricing for purchasers since it would result in capture of option flow from users of competing platforms considering the increase in price and liquidity efficiency.
Users looking to be writers or just looking for passive yield would in turn be incentivized to deposit their assets in Dopex for passive, perpetual option writing considering their risk would be minimized in comparison to other platforms, while also being incentivized with DPX rewards and receiving rebates in the form of rDPX, which would minimize potential losses in case of unprofitable epochs.
### The Bull Case for Productivity
Higher TVL, volume and overall usage would rationally result in a pricier token. However, based on what we’ve seen in the past these metrics don’t always translate to a higher market cap for the token, unless the actual token is productive in nature.
The question is — how does Dopex aim to solve this?
#### DPX is made to be a productive asset in the following ways:
1. Protocol Fee Collection
Staked DPX earns a portion of protocol fees every epoch. rDPX is used to boost a user’s proportion of these rewards along with DPX tokens.
2. Staking Rewards
DPX stakers also initially get an allocation of DPX rewards as an incentive for locking up their tokens.
3. Liquidity Provision Rewards
DPX/WETH and rDPX/WETH liquidity providers initially earn rDPX & DPX rewards as an incentive. For the first few weeks, this would be the only way to obtain rDPX
4. Margin Collateral
DPX, rDPX and option tokens are usable as collateral for opening cross margin option positions within each pool
5. Synthetic Collateral
DPX in the future would be usable as collateral to create synthetic assets reflecting real world assets such as stocks, indices, commodities etc.
### Conclusion
To recap, DPX is a governance token that is used to govern the Dopex protocol in terms of pricing, reward distribution, fee collection and rebate token allocation. While also being a productive asset that is usable as collateral, able to earn liquidity provision rewards for LPs and collects protocol fees along with DPX incentives for stakers.
Dopex is a decentralized options protocol that aims to maximize liquidity, minimize losses for option writers and maximize gains for option buyers — all in a passive manner.
Dopex uses option pools to allow anyone to earn a yield passively. Offering value to both option sellers and buyers by ensuring fair and optimized option prices across all strike prices and expiries. This is thanks to our own innovative and state-of-the-art option pricing model that replicates volatility smiles.
# 📱Stay Connected
Follow our official social media accounts and visit our website to stay up to date with everything Dopex.
# 🚨IMPORTANT
Be careful of fake Telegram groups, Discord servers and Twitter accounts trying to impersonate Dopex.
← Back to Blog
|
2022-01-28 06:48:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3066224157810211, "perplexity": 7991.403415377124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305420.54/warc/CC-MAIN-20220128043801-20220128073801-00635.warc.gz"}
|
http://tasks.illustrativemathematics.org/content-standards/5/NF/B/7
|
# 5.NF.B.7
Apply and extend previous understandings of division to divide unit fractions by whole numbers and whole numbers by unit fractions.Students able to multiply fractions in general can develop strategies to divide fractions in general, by reasoning about the relationship between multiplication and division. But division of a fraction by a fraction is not a requirement at this grade.
a Interpret division of a unit fraction by a non-zero whole number, and compute such quotients. For example, create a story context for $(1/3) \div 4$, and use a visual fraction model to show the quotient. Use the relationship between multiplication and division to explain that $(1/3) \div 4 = 1/12~$ because $(1/12) \times 4 = 1/3$.
b Interpret division of a whole number by a unit fraction, and compute such quotients. For example, create a story context for $4 \div (1/5)$, and use a visual fraction model to show the quotient. Use the relationship between multiplication and division to explain that $4 \div (1/5) = 20~$ because $20 \times (1/5) = 4$.
c Solve real world problems involving division of unit fractions by non-zero whole numbers and division of whole numbers by unit fractions, e.g., by using visual fraction models and equations to represent the problem. For example, how much chocolate will each person get if 3 people share 1/2 lb of chocolate equally? How many 1/3-cup servings are in 2 cups of raisins?
|
2021-01-17 02:31:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7485750913619995, "perplexity": 732.3435366348683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703509104.12/warc/CC-MAIN-20210117020341-20210117050341-00110.warc.gz"}
|
https://www.physicsforums.com/threads/trig-and-calculus.49559/
|
# Trig And calculus
1. Oct 25, 2004
### alamin
Need help in integration and trig
How do you integrate
1. 1/(1-x^5)
2. 1/(1+x^4)
and the trig question.
Show that
(a^2 - b^2)/c^2 = sin(A-B)/sin(A+B)
2. Oct 25, 2004
### HallsofIvy
Staff Emeritus
Every polynomial, such as 1- x5 and 1+ x4 can be factored, using real numbers, into a product of linear or quadratic terms.
To factor 1- x5, find all complex roots to x5= 1. One is, of course 1 the others are complex conjugates which can be paired to give two quadratic factors. Then use "partial fractions".
Same for 1/(1+x4).
In the trig question, are we to assum that a, b, and c are lengths of sides opposite angles A, B, C? In a right triangle or general triangle?
3. Oct 26, 2004
### alamin
general triangle!
a,b,c are length's
A,B,C are opposite angles
Sorry but i made a mistake in the first integration question
its suppose to be 1/sqrt(1-x^5)
When i used this integration in mathematica 5 : i got something like hypergeometric2f1....
Can u help me out!
4. Oct 26, 2004
### HallsofIvy
Staff Emeritus
The 5 "fifth roots of unity" lie on a circle, in the complex plane of radius 1, equally spaced around the circle. The angle between them is 360/5= 72 degrees so they are;
1, cos(72)+ i sin(72), cos(144)+ i sin(144), cos(216)+ i sin(216), cos(288)+ i sin(288).
Since cos(72)= cos(288), sin(72)= -sin(288), cos(144)= cos(216), and sin(144)= sin(216), these are in pairs of complex conjugates (as they have to be in order to satisfy and equation with real coefficients.
The solutions to x5= 1 are: 1, cos(72)+ i sin(72), cos(72)- i sin(72), cos(144)+ i sin(144), cos(144)- i sin(144) and so
1- x= -(x-1)(x- cos(72)+ i sin(72))(x- 72- i sin(72))(x- cos(144)+ isin(144))(x- cos(144)- i sin(144))= -(x-1)((x-cos(72))2+ sin2(72))((x-cos(144)2+ sin2(144))
= -(x-1)(x2- 2cos(72)+ 1)(x2-2cos(144)+ 1).
Once you have that factorization you can expand 1/(1- x5) in partial fractions.
|
2017-03-24 02:27:10
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8420491814613342, "perplexity": 7472.546237390504}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187519.8/warc/CC-MAIN-20170322212947-00181-ip-10-233-31-227.ec2.internal.warc.gz"}
|
http://www.proofwiki.org/wiki/Bernoulli_Process_as_Geometric_Distribution
|
# Bernoulli Process as Geometric Distribution
## Theorem
Let $\left \langle{X_i}\right \rangle$ be a Bernoulli process with parameter $p$.
Let $\mathcal E$ be the experiment which consists of performing the Bernoulli trial $X_i$ until a failure occurs, and then stop.
Let $k$ be the number of successes before a failure is encountered.
Then $k$ is modelled by a geometric distribution with parameter $p$.
### Shifted Geometric Distribution
Let $\left \langle{Y_i}\right \rangle$ be a Bernoulli process with parameter $p$.
Let $\mathcal E$ be the experiment which consists of performing the Bernoulli trial $Y_i$ as many times as it takes to achieve a success, and then stop.
Let $k$ be the number of Bernoulli trials to achieve a success.
Then $k$ is modelled by a shifted geometric distribution with parameter $p$.
## Proof
Follows directly from the definition of geometric distribution.
Let $X$ be the discrete random variable defined as the number of successes before a failure is encountered.
Thus the last trial (and the last trial only) will be a failure, and the others will be successes.
The probability that $k$ successes are followed by a failure is:
$\Pr \left({X = k}\right) = p^k \left({1 - p}\right)$
Hence the result.
$\blacksquare$
|
2013-12-11 18:31:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8962945938110352, "perplexity": 222.80471067540353}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164043130/warc/CC-MAIN-20131204133403-00086-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://codereview.stackexchange.com/questions/52183/series-of-repetitive-content-loading-functions
|
A friend helped me put the following code together, which is made up of a series of repetitive loadContent() functions. I've just shown the first two functions but they're all the same.
The code works fine but I'm wondering if anyone could give me a tip as to how I could reduce the bloat in this code and condense the functions somehow. I read somewhere that repetitive code usually means that the code can be simplified or shortened somehow. I'm no expert at jQuery or coding so I'm not sure where to start. I'm assuming that the code could be written a lot better but I might be wrong.
function loadContent1()
{
$('#content1').empty(); if(typeof(counterArray[1]) != 'undefined' && counterArray[1] != null)clearTimeout(counterArray[1]); jQuery.ajax({ type: "GET", url: "//cgi-bin/mggsn_ssr21.cgi", beforeSend:function(){ jQuery('#content1').append('<img class="loading" src="img/loading.gif" alt="Loading..." />'); jQuery('#refresh1text').html("<p>Loading...</p>"); } }) .done(function(data){ if(typeof(counterArray[1]) != 'undefined' && counterArray[1] != null)clearTimeout(counterArray[1]);$('#content1').empty();
$('#content1').html(data); //alert(data.slice(0.100)); countDownTimeArray[1]=countDownIntervalArray[1]; countDown(1); }) .fail(function(){$('#content1').empty();
jQuery('#content1').html('<h4>failed to load content</h4>' + this.url);
});
}
{
$('#content2').empty(); if(typeof(counterArray[2]) != 'undefined' && counterArray[2] != null)clearTimeout(counterArray[2]); jQuery.ajax({ type: "GET", url: "/cgi-bin/mtest2.cgi", beforeSend:function(){ jQuery('#content2').append('<img class="loading" src="img/loading.gif" alt="Loading..." />'); jQuery('#refresh2text').html("<p>Loading...</p>"); } }) .done(function(data){ if(typeof(counterArray[2]) != 'undefined' && counterArray[2] != null)clearTimeout(counterArray[2]);$('#content2').empty();
$('#content2').html(data); //alert(data.slice(0.100)); countDownTimeArray[1]=countDownIntervalArray[1]; countDown(2); }) .fail(function(){$('#content2').empty();
jQuery('#content2').html('<h4>failed to load content</h4>' + this.url);
});
}
You could simplify like so:
For the HTML, you could embed specifics as data-* attributes, which you can fetch in JS later on.
<div class="content" data-source="//cgi-bin/mggsn_ssr21.cgi" data-refreshtext="#refresh1text'>
...
</div>
<div class="content" data-source="/cgi-bin/mtest2.cgi" data-refreshtext="#refresh2text'>
...
</div>
As for the loading gif, I'm not a fan of <img>. I'm more into making the loading gif a background that's coupled with a class, so that it's easy to add using addClass and remove using removeClass.
.content.loading{
background-repeat: no-repeat;
background-position: center center;
min-height: 100px;
}
As for the function, we grab the data-* attributes, and execute the ajax.
function loadContent(selector){
var element = $(selector); var url = element.data('source'); var refreshSelector = element.data('refreshtext');$.ajax({
type: "GET",
url: url,
beforeSend: function () {
$(refreshSelector).html("<p>Loading...</p>"); } }).done(function(data){ element.html(data); ... }).fail(function(){ element.html('<h4>failed to load content</h4>' + this.url); ... }).complete(function(){ // for all cases, fail or success, remove the gif background element.removeClass('loading'); }); } // use like: loadContent('.content'); You may supply any selector as your parameter here, like an ID. Just make sure it has the right data-* attributes to go with it. Joseph cleaned up the code a lot so let me point out a few things that you'll be able to use in your future code. • Consistent formatting greatly improves code readability and maintenance. There are many out there that are good starts, and I'm a fan of Google's language styles for the most part. • Since you're already depending on the definition of $, it's odd to mix it with jQuery in the same code. Pick one and stick with it.
• To check that a value is neither null or undefined you can simply compare it to null using ==/!=. When you know the value cannot be some other falsy value (e.g. false, 0, or ''), you can simply use ! or the boolean context.
if (counterArray[1]) { ... }
• DOM queries can be expensive on large pages, so it pays to save the results of a query if you're going to use it multiple times, and it cleans up the code. I like prefixing the name of a variable holding the results of a selector with a $ to make it clear that it's not a raw DOM element. var$content = $('#content1'); • Most jQuery methods return the selector itself, allowing you to chain multiple calls together. $content.empty().html(data);
• html completely replaces the contents of the element so there's no reason to call empty on it first. DOM manipulation causes a reflow; combine your manipulations when possible.
|
2019-12-14 08:55:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19322629272937775, "perplexity": 1559.7053025540617}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540585566.60/warc/CC-MAIN-20191214070158-20191214094158-00022.warc.gz"}
|
https://1library.co/document/yd291leq-on-weighted-average-interpolation-with-cardinal-splines.html
|
# On weighted average interpolation with cardinal splines
(1)
## Splines
J. López-Salazar • G. Pérez-Villalón
Abstract Given a sequence of data {y„}„ez with polynomial growth and an odd number d, Schoenberg proved that there exists a unique cardinal spline / of degree d with polynomial growth such that f(n) = yn for all n e Z. In this work, we show that this result also holds if we consider weighted average data f * h(n) = yn, whenever the average function h satisfles some light conditions. In particular, the interpolation result is valid if we consider cell-average data f_ " f(x)dx = y„ with 0 < a < 1/2. The case of even degree d is also studied.
Keywords spline • cardinal interpolation • average sampling
### 1 Introduction
During the last forty years, the spaces of splines have become one of the most useful func-tion spaces in applied mathematics. Within that área, this paper is devoted to the topic of interpolation with spline functions. Let fid be the central B-spline of degree ¿ G N given by
fid = AÍ-i/2,1/2] * • • • * #[-i/2,i/2] (d + 1 terms),
where #[-1/2,1/2] denotes the characteristic function of the interval [—1/2, 1/2] and the sym-bol * denotes the integral convolution. In this work we consider the space Sd generated by the integer shifts of the B-spline fid. That is, a function / belongs to Sd if and only if there is a unique sequence {a^ltez in C such that
(2)
As is well-known, if d is odd, then Sd coincides with the space of all functions / e Cd~l (R) such that f\[k,k+i] is a polynomial of degree not exceeding d for each k e Z. If d is even, then Sd coincides with the space of all functions / e Cd_1(]R) such that f\[k-i/2,k+i/2\ is a
polynomial of degree not exceeding d for every k e Z.
Given a sequence of real or complex numbers {y„}„ez, there is a unique linear spline
f £ Si such that / ( « ) = y„ for every n e Z , which is the function obtained by linear in-terpolation between every pair of consecutive data. On the contrary, for d > 2, there are inflnitely many splines / e Sd such that / ( « ) = y„ for n e Z. However, Schoenberg proved in the seventies that if the function / is required to have some growth conditions, then the interpolation problem has a unique solution, as the following theorem shows.
Theorem 1 (Shoenberg [9]) Let a > 0. If{y„}„ez is a sequence in C such that y„ = O (\n \a)
as n -> ±oo, then there is a unique function f e Sd such that f(n) = yn for all n e Z and
f(x) = O(\x\a) as x^±oo.
Because of physical reasons, the available data often are not the valúes of a function / at n, but weighted averages near n. That is,
### /
1/2 j-n+1/2
f(n — x)h(x)dx = l f(x)h(n — x)dx,
-1/2 Jn-1/2
where the average function h, with support in [—1/2, 1/2], reflects the characteristic of the acquisition device. Note that [—1/2, 1/2] is the máximum possible support of h without overlap between the samples. The average interpolation problem / * h(n) = y„ has been studied for band-limited functions and for shift invariant spaces in [1-8, 11, 12] and [13].
The aim of this paper is to prove the following theorem.
Theorem 2 Let h : R -> R be a measurable function that satisfles the following properties: (a) h(x)>OforallxeR.
(b) The support of h is contained in [—1/2, 1/2]. (c) O < f_1/2h(x)dx < oo andO < fQ h(x)dx < oo.
Let ¿ G N and a > 0. If {y„}„ez is a sequence in C such that y„ = 0(\n\a) as n —»• ± o o ,
then there is a unique function f e Sd such that f * h(n) = y„ for all n e Z and f(x) = 0 ( | x |a) asx ^ ± o o .
Theorem 2 was proven in [5] for degree d = 1, 2, 3, 4. In [6] and [7], it was proved with-out limitation on the degree, but with more restrictive conditions on the average function h.
Our approach to Theorem 2 is based on the following result.
Theorem 3 (Pérez and Portal [5]) Let d e N and let h be a function with the properties given in Theorem 2. Let us assume that all the zeros of the function
## G(.t) = J2[Pd*Hk)]r
k keZ
are simple and none ofthem is on the unit árele [z e C : |z| = 1}. If a > O and {y„}„ez is a
sequence in C such that y„ = 0(\n\a) as n —»• ± o o , then there is a unique function f e Sd
(3)
In order to apply Theorem 3, all the results presen ted below will be used to pro ve that the zeros of G satisfy the required conditions.
After this paper was flnished, the authors knew that the same result had been obtained independently and at the same time by Ponnaian and Shanmugam in [8]. Although both papers use Theorem 3, the way those authors follow to study the zeros of the G function is different from ours. Ponnaian and Shanmugam base their arguments on the study that they carry out about the roots of the exponential Euler splines. In particular, they need to obtain a recursion relation for those exponential splines. Moreover, they consider four different cases on the degree d. On the contrary, our arguments may be more direct, since we apply already known properties of the Euler-Frobenius polynomials and only have to consider even and odd degree cases separately.
### 2 The Roots of G when d Is Even
The foliowing lemma gives the degree of the Laurent polynomial G.
Lemma 1 Let h be a function with the properties given in Theorem 2. For d eN and t e Z , it holds
### (i) p
d*h(k) = 0if\k\>d-f.
### (ii) p
d*h(k)>0if\k\<d-^.
Proof Let us recall that pd is a continuous function such that pd{x) > 0 if
•1+1 d±
2 ' 2
d+1 ^ „ ^ d+1
and pd(x) = 0 if x i ( - ^ á^k). The support of h is contained in [ - 1 / 2 , 1/2], so
,-1/2
### pd*h(k)= I pd(k — x)h(x)dx.
7-1/2
If \k\ > ^ and x e [ - 1 / 2 , 1/2], then pd(k - x) = 0, so pd *h(k) = 0.
Since fQ h(x)dx > 0, there is e e (0, 1/2) such that fs h(x)dx > 0. Let
í 1 d+1 1 M = mini pd(x) : — < x < e i > 0.
I f 0 < i t < ^±i a n d x e [ e , 1/2], then/fc-x e [-±, ^ - e]. Henee
,.1/2 ^.1/2
Pd*h(k)> pd(k - x)h{x)dx > M • / h(x)dx>0.
Similarly, using that f°1/2 h(x)dx > 0, we obtain that pd * h(k) > 0 if - ?±± < k < 0. D
Let us now define the splines Tí d that will be extensively used throughout the paper. For
each t e C\{0} and each d eN, the symbol Tí d denotes the function
### rtJ{x) = YJt-kPÁx-k).
keZ
It is easy to check that if x e R and n e Z, then
(4)
Moreover, if d > 2, then
M
M
1
### (x + i/2). (2)
Property (2) can be deduced from the following known fact:
P'd(x) = fa-iix + 1/2) - ^ _ i ( x - 1/2).
Lemma 2 For any f < 0 and any d e N, thefunction Tíjd has a unique root in [—1/2, 1/2).
Proof By (1), Tí j d(l/2) = f_ 1Tí d(—1/2). Since f < 0, there are two possibilities:
M
M
### (i/2) = o
or
Tíjd (—1/2) and Tíjd (1/2) have different sign.
The case d = 1 can be easily deduced in any of the above possibilities, having in mind that T(j i (x) is a piecewise linear spline with knots in Z and Tt, i (0) = 1.
We now assume that the result holds for some d G N; that is, there is a unique aíjd e
[ - 1 / 2 , 1/2) such that Tí>d(aí>d) = 0. The spline Tí>d+1 is a Cd-function on R. By (2),
rM + i M = (i - f)Tí>d(x + 1/2) = (1 - O f - ' r ^ í x - 1/2).
If Tí j d + 1 has a local extremum at a point x0 € (—1/2, 1/2), then T /d + 1 (x0) = 0, so Tíjd(x0 +
1/2) = Tí d( x0 — 1/2) = 0. Consequently, Tí d + 1 has at most one local extremum x0 on the
i n t e r v a l ( - l / 2 , 1/2):
xo = at¡d- 1/2 if at4 e (0, 1/2)
or
_xo = at,d+ l / 2 if «í,d G [ - 1 / 2 , 0).
That implies the following consequences:
1. If Tí > d + 1( - l / 2 ) = Tí > d + 1(l/2) = 0, then Tí>d+1(x) > 0 for every x G ( - 1 / 2 , 1/2) or
Tí d + 1( x ) < 0 for every x G (—1/2, 1/2). In this case, —1/2 is the unique root of Tí d + 1
on [ - 1 / 2 , 1/2).
2. If Tíjd+1(—1/2) and Tí j d + 1(l/2) have different sign, then there is aí j d +i G (—1/2, 1/2)
such that Tí d + 1( aí d + 1) = 0. As Tí d + 1 has at most one local extremum on (—1/2, 1/2),
it follows that Tí j d + 1(x) ^ 0 for every x G [—1/2, 1/2), x ^ «í>d+i.
This concludes the proof for d + 1, so the result holds for every d e N. D
In the proof of the following lemmas we use the Euler-Frobenius polynomials, which are deflned in terms of the forward B-splines
/ d+l\ Qd+i(x) = pd\x — I.
Here Qd+i (x) ^ 0 if and only if x G (0, d + 1). The Euler-Frobenius polynomial of degree d — 1 is the function
d - l
### nd(t) = d\J2Qd+i(k + i)tk.
(5)
Then 77d is a monic polynomial of degree d — 1 whose roots are all simple and negative. If
A.1 < • • • < Xd_! are the roots of 77d and /¿i < • • • < n¿i-2 a r e the roots of /7d_!, then
A.1 < ¡A-i < X2 < H2 < A.3 < • • • < Xd_2 < Md-2 < A.d_i.
Moreover, X1Xd_1 = X2Xd_2 = • • • = 1, so if <¿ is even, then
^-d/2 — —
1-For the proof of these facts, see Schoenberg [10, pp. 391-392].
Lemma 3 Let d e N be even and let k\ < • • • < kd_\ be the roots of nd. Then sign(r,j > d(x)) = (-l)>+f
for every x e (—1/2, 1/2) and every j e { 1 , . . . , d — 1}.
Proof Let X be any root of 77d. Then
(3)
¿fceZ ¿fceZ
+ 1
d + 1
X1- !
¿fceZ di
d
### (X) = o.
By Lemma 2, T^jd only has one root on [—1/2, 1/2), so the sign of T^jd is constant on
(—1/2, 1/2). Since 7ij d(—1/2) = 0, we obtain the following alternatives:
TX4{x) > 0 f o r a l l x e ( - 1 / 2 , 1/2) i f T ¿d( - l / 2 ) > 0
or
YKd{x) < 0 f o r a l l x e ( - 1 / 2 , 1/2) if T £d( - l / 2 ) < 0.
We now study the sign of T[d{-1/2). By (2),
d
á
k
### P*-Ák)
¿fceZ ^ ' ¿fceZ
( I - A . ) ; .1- !
### ^A.*e¿(* + i):
V ~ ? -77^!(X). ( á - 1 ) !
Since X < 0 and x e (—1/2, 1/2), we have
sign(T,,d(x)) = s i g n ( T £d( - l / 2 ) ) = ( - I )1" ! • sign(/7d_1(X))
(6)
Let us flrst assume that d = 2. Since Yl\ (f) = 1 for every t e R, we have
si gn(T,,2(x)) = ( - 1 )1- ! . sign(77i(A.)) = 1
for all x e (—1/2, 1/2). That pro ves the statement of the Lemma when d = 2.
If d is an even integer bigger than 2, A.i < • • • < kd_i are the roots of 77d and /¿i < • • • <
/¿d-2 are the roots of 77d_!, then
A.1 < ni < X2 < n2 < k3 < • • • < Xd_2 < Md-2 < A.d_i.
As 77d_! is monic and its roots are simple, it can be written as
d
d
## -2).
Then 77¿_i(A.i) > 0, nd^(k2) < 0, ... 77¿_i(A.¿_i) > 0. That is,
sign(77d_1(^)) = ( - l )j + 1
for every j = l,... ,d — 1. Therefore, if x e (—1/2, 1/2), then
signfa.,„(*)) = ( - I )1" ! • sign(77d_1(^))
= ( - i )1- i . ( - i y ' +1 = ( - i y ' + l
This completes the proof. D
Theorem 4 Let h be afunction with the properties given in Theorem 2. If d e N is even, then the function
## G(.t) = J2[Pd*Hk)]r
k
¿fceZ
has d roots which are simple, negative and differentfrom — 1.
Proof We will study the roots of the function
### F(t) = tál2G(t) = YX^*h(k)]ti-k.
¿fceZ
By Lemma 1, F is a polynomial of degree d:
### F(t) = pd *h(~^\td + • • • + pd *h(^ - \X + pd *h{^\.
Since the coefflcients of F are all positive, we have F(0) > 0 and limí^_c o F(t) = +oo.
Thus, F and G have exactly the same roots. The function fid is even, so if t ^ 0, then
,-1/2 ,-1/2 T r * pd(k - x)h(x)dx = td/2Tt¡d(x)h(x)dx.
(7)
Let A.1 < • • • < Ad_i be the roots of 77d, which are all negative. By Lemma 3,
úgn(kf2rXj,d(x)) = ( - l )d / 2 • (-l)j+di = (-l)j (5)
for every x e (—1/2, 1/2) and every j e ¡ l , . . . , ¿ - l ¡ . By (4) and (5) and having in mind that h is non-negative, we have
F ( A . i ) < 0 , F ( A2) > 0 , ... F ( Ad_ i ) < 0 .
Since limí^_c o F(t) = +oo and F(0) > 0, there are
such that F(si) = 0 , . . . , F(sd) = 0. As F is a polynomial of degree d, it cannot have other roots. By (3), it is known that Ad/2 = — 1, so F(— 1) ^ 0. Therefore, si ^ —l,...,s¿ ^
- 1 . D
Remark 1 Theorem 2 with d even is now a direct consequence of Theorems 3 and 4.
### 3 The Roots of G when d Is Odd
In order to study the roots of G when d is odd, we use the midpoint Euler-Frobenius poly-nomials introducedby Schoenberg [10, pp. 393-394]. Given d e N, the function
### Pd{t) = 2dd\YjQdJk+^)t
is a monic polynomial of degree d whose roots are all simple and negative. If Ai < • • • < Ad
are the roots of Pd and /¿i < • • • < /zd_i are the roots of P¿-i, then
Ai < /¿i < A2 < M2 < A3 < • • • < Ad_i < /zd_i < Ad.
Moreover, if d is odd, then
^(d+l)/2= — 1- (6)
Lemma 4 Let d e N be odd and letk\ < • • • <kd be the roots of Pd. Then
úgn(rXj,d(x)) = (-iy+ái1
for every x e (—1/2, 1/2) and every j e { 1 , . . . , d}.
Proof Let A be any root of Pd. Then
^ ' keZ ^ ' keZ ^ '
\-d
(8)
By Lemma 2, T>L d only has one root on [—1/2, 1/2), so the sign of T>L d is constant on ( - 1 / 2 , 1 / 2 ) .
líd= l,then
A
ti
### 8i(-*) = l.
¿fceZ
Therefore, sign(Tx,i(x)) = 1 for every x e (—1/2, 1/2). That proves the statement of the Lemma when d = 1.
Let us now assume that d is any odd integer bigger than or equal to 3. Since 7i,d(—1/2) = 0 and the sign of T4 is constant on (—1/2, 1/2), we obtain the following alternatives:
TKd(x) > 0 f o r a l l x G ( - 1 / 2 , 1/2) if T £d( - l / 2 ) > 0
or
T.4{x) < 0 f o r a l l x e ( - 1 / 2 , 1/2) i f T ¿d( - l / 2 ) < 0.
We now study the sign of T[ d( - 1 / 2 ) . By (2),
Ti
á
kPd-i(k)
¿fceZ
k
d
k
d
d
## (k + i )
(l-k)k^ykkQd[k + - ) = -r-í —
v > ^ wy -r j 2d-1(d- 1)!
(1 - k) X-d
d
### -i{k).
Since X < 0 and x G (—1/2, 1/2), we have
sign(TM(*)) = s i g n ( T , 'd( - l / 2 ) ) = ( - l ) Y . sign(Pd_1(X)).
This study holds when X is any root of Pd.
If A.1 < • • • < kd are the roots of Pd and /¿i < • • • < H¿Í-I a r e the roots of P¿-i, then
A.1 < ni < k2 < n2 < k3 < • • • < kd-i < Hd-i < kd.
As Pd-i is monic and its roots are simple, it can be written as
Pd-x(t) = (t ~ Mi)(f - M2) • • • (f -
(¿d-l)-Then Pd-i(k{) > 0, Pd-i(k2) < 0,... Pd-i(kd) > 0. That is,
sign(Pd_1(^)) = ( - i y '+ 1
for every j = 1 , . . . , d. Therefore, if x G (—1/2, 1/2), then
sign(T,j>d(x)) = ( - l )1^ • sign(P¿_i(A.,-))
= ( _1) ¥ . ( - i y + i = ( - i y + ^ .
(9)
Theorem 5 Let h be afunction with the properties given in Theorem 2. Ifd e N is odd, then the function
### G(t) = YJ[Pa*h(k)]t-k
¿fceZ
has d + 1 roots which are simple, negative and different from — 1.
Proof We will study the roots of the function
### H(t) = td-^G(t) =
YJ[^*h(k)Y^k-¿fceZ
By Lemma 1, H is a polynomial of degree d + 1:
Again the coefflcients of H ave all positive, which yields H(Q) > 0 and limí^_c o H(t) =
+oo. Henee, all zeros of H and G coincide. As in (4), if t ¿ O, then
td+Tt4{x)h{x)dx. (7)
•1/2
Let A.1 < • • • < k,¡ be the roots of the P¿, which are all negative. By Lemma 4,
g
1
(
(
### 8)
for every x e (—1/2, 1/2) and every j e { 1 , . . . , d}. By (7) and (8),
//(>.!)< O, H ( X2) > 0 , . . . H{Xd)<0.
Since //(O) > O and limí^_c o //(f) = +oo, there are
Si G ( - 0 0 , ki),S2 G (A.i,A.2), . . . , í d G (>-d-l,>-d),'S'd+l G (A.d,0)
such that H(si) = O , . . . , H(sd+{) = 0. Since / / is a polynomial of degree <¿ + 1, it does not have other roots. By (6), it is known that X(d+i)/2 = — 1, so H(—l) ^ O and therefore,
si=é-l,...,sd+i=é-l. D
Remark 2 Theorem 2 with d odd can be now deduced from Theorems 3 and 5.
### References
1. Aldroubi, A., Sun, Q., Tang, W-S.: Convolution, average sampling, and a Calderón resolution of the identity for shift-invariant spaces. J. Fourier Anal. Appl. 11, 215-244 (2005)
2. Ericsson, S.: Generalized sampling in shift invariant spaces with frames. Acta Math. Sin. 28, 1823-1844 (2012)
(10)
4. Kang, S., Kwon, K.H.: Generalized average sampling in shift invariant spaces. J. Math. Anal. Appl. 377, 70-78(2011)
5. Pérez Villalón, G., Portal, A.: Reconstruction of splines from local average samples. Appl. Math. Lett. 25, 1315-1319 (2012)
6. Pérez Villalón, G., Portal, A.: Sampling in shift-invariant spaces of functions with polynomial growth. Appl. Anal. 92, 2536-2546 (2013)
7. Ponnaian, D., Shanmugam, Y: Existence and uniqueness of spline reconstruction from local weighted average samples. Rend. Circ. Mat. Palermo 63, 97-108 (2014)
8. Ponnaian, D., Shanmugam, Y: On the zeros of the generalized Euler-Frobenius Laurent polynomial and reconstruction of cardinal splines of polynomial growth from local average samples. J. Math. Anal. Appl.
432, 983-993 (2015)
9. Schoenberg, I.J.: Cardinal interpolation and spline functions, II: interpolation of data of power growth. I. Approx. Theory 6, 404-420 (1972)
10. Schoenberg, I.J.: Cardinal interpolation and spline functions, IV: the exponential Euler splines. Int. Ser. Numer. Math. 20, 382^104 (1972)
11. Sun, W., Zhou, X.: Average sampling in spline subspaces. Appl. Math. Lett. 15, 233-237 (2002) 12. Sun, W., Zhou, X.: Average sampling in shift invariant subspaces with symmetric averaging functions.
I. Math. Anal. Appl. 287, 279-295 (2003)
Actualización...
Actualización...
|
2022-07-07 14:00:30
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8256105184555054, "perplexity": 2014.7580506873162}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104692018.96/warc/CC-MAIN-20220707124050-20220707154050-00397.warc.gz"}
|
https://www.controlbooth.com/threads/which-would-you-get.4227/
|
# Which would you get?
#### CowboyDan
##### Member
I am looking into buying 16 new headworn microphones. I have used the Countrymen E6 isomax earsets and I have had problems with the cables but overall they are good mics. I was wondering if anyone has used the Audio-tech MicroSet or the Isomax B6 Lapel.
I am using these in an active theater department. We also work with the local community theater. So whatever I do get needs to be rather tough.
The E6 is good but as I mentioned before it can have a problem with the cables going bad. The Audio-Tech is also okay but I am worried that it will not hold up to the abuse. The B6 is more of a lapel but it is small enough that I could put it on most actors without a problem.
Anyone that give some input on one or any of these would be very helpful.
Thanks
Dan
#### BNBSound
##### Active Member
If I had to pick, I'd probably say the Countryman. But, those spidery thin cables are going to be an issue on any mic when used frequently. Personally, I'd say stick with the tried and true Shure W93. Pinned to the hair or taped to the chin with a band aid it's usually a lot less noticeable than any boom style mic. And at less than a hundred bucks a piece, it's not a tragedy when one goes all scratchy and has to be ditched.
#### fosstech
##### Active Member
We used the B6 elements for our last musical. We had 20 of them going into Shure UHF and UHF-R transmitters. Great sounding, and really tiny! We only had one unexpectedly die on us due to a bad cable. The other one that died, the actor "killed" by snagging the cable on a piece of the set during a rehearsal without costumes. The cable ripped right out of the plug. Not too difficult to fix with a new TA4 connector, a magnifying glass, and a very fine tipped soldering iron. Be careful with the protective caps on the elements, they come off too easily and are impossible to find if they fall off at some point during a show. Make sure you have some spares. Also be sure you order some spares of whatever model you buy. You will need them at some point.
They're really great sounding mics. We did an audio recording, and the vocal quality was as good as the CD of the original cast.
The WL93's are big and they don't sound as good as the Countrymans. But you can get three WL93's for the price of one B6. We had a couple WL93's on characters that weren't as critical, and you could tell the difference.
We used 3M Blenderm tape to attach the elements to the cheek bone of the actors' faces. In places that weren't exposed, like behind the ear and the neck, we used 3M Transpore tape, which holds better but is more visible. For those who had problems with the tape sticking, we used the brown band-aid type tape, which held like skin colored gaff
#### CURLS
##### Member
Ok, so without hesitation of those who are going to bash me fore saying this and disregard BUDGET. I have to say hand's down MKE2 either red dot or gold, you cannot ask a professional theatre audio guys oppinion w/o mke being said. Ok, so yes it is pricey but, on the other hand it is the industry standard for broadway, and for broadcasting talent, and even inside parabolic dishes.
Other than that rant i have had the pleasure of using mke platinums with sennheiser recievers. OH yea, I think it's impossible to not say the word BadA.. when talking about those series of lavs. And when i used those it was in a high school, and ya dont wanna know how many!
#### krhodus
##### Member
We just bought 6 Countyman B6's. From just using them for the one show, they sound amazing. While we previously haven't had B6's we have had B3's since who knows when. The only problem I've ran in with them is that some of the ones that are 10-12 years old are starting to get shorts right near the connector, which doesn't surprise me considering how much they are used in the high school. Just watch out on the B6's, the cord is super thin, atleast compared to B3's or Shure lav's.
#### soundlight
##### Well-Known Member
We use E6 earsets, and they sound great, and work very well. We use them with Shure TX/RX stuff, not sure what model.
#### PhantomD
##### ♂
I've never heard of any of this CountryMan equipment.
Should that be a shock?
#### jkowtko
##### Well-Known Member
... we have had B3's since who knows when. The only problem I've ran in with them is that some of the ones that are 10-12 years old are starting to get shorts right near the connector, which doesn't surprise me considering how much they are used in the high school.
10-12 years? You have gotten some great use out of your B3s.
I know this is an old post, but ...
Fyi, Countryman will replace B3 connectors fairly cheaply ... from $25 to 42 depending on the connector. Give them a call 650-364-9988 and they'll tell you where to mail them in for testing and repair. They will test the mic elements for sonic quality, tell you if there are any issues, and if not, cut and resolder the connector, and then ship them back to you. And if the element is bad they will replace the entire cord for$99. It's a great deal.
#### FMEng
##### Well-Known Member
Fight Leukemia
I rather like the Countryman E6. They make cables for them in two grades, standard 1mm, and rugged 2mm. You might want to try the heavier cables.
I also try to always store them with the cables in a relaxed position. If you store them wound tightly around a belt pack, that stresses them and they break faster.
#### andrewharper
##### Member
I'd go with the B6 in the hairline. It is generally my favorite sounding option out of the 3 mics you mention.
The Audio Technica's are good, but I don't like them as much as the Countrymen(mans?). I like the earmount on the E6 more and I personally like the sound of the E6 better. Just my opinion.
When I use E6's, I use as many 2mm cables as I can. It is a good idea to always have spare cables during tech and run. I've paid way to much to have those cables over-nighted in the past.
If you have the budget, check out Danish Pro Audio. The DPA 4060 is similar to the B6 and the DPA 4088 (card) and 4066 (omni) is similar to the E6, but the DPA comes standard with a two ear mount. The sound is beautiful and the 4060s are my first choice. The DPA headsets aren't terribly durable, though they do sound great.
+1 for the Sennheiser MKE's. I owned a pair for a while and I still regret selling them.
Andrew Harper
#### The_Guest
##### Senior Team Emeritus
Avoid the WL93 if you can. They get the job done, but they're huge, ugly, and overpriced for what else is out there. Shure makes great wireless systems, just terrible elements. Everytime I've ran their wireless I always opt to go with Sennheiser mke elements because they sound pretty good (not the best, but great for the money) and are not distractions when actors where them.
Obviously countrymen are the lavalier kings. They're hard to beat when money isn't an option. DPA also makes a **** fine element.
Sony makes some great wireless, but it's extremely expensive and difficult to repair. A lot of shops don't stock sony parts, so when you bring them in, it can take a little longer to repair. Sony are great for installs. I love their elements and they sound very natural, but their very fragile. I don't often opt to use them because not enough people use it or support it or it's not readily available.
In conclusion, I choose my wireless based on my money....
Sennheiser mke's: When I need lots of reliable elements for cheap. Small and sound great.
Countryman: I splurge for it anytime I can. Fabulous products. I kind of play brand favoritism here.
DPA: When Countryman are sold out or out of stock or I can get the DPA cheaper. They're very similar in quality to countryman. DPA are always over shadowed by the various options. Give them a good look...I know I should more.
Sony: When you want to try something completely different. Not much advice here. They make good stuff, but it's just very different. It has a unique sound to it.
#### mbenonis
##### Wireless Guy
I used B3's this summer and they worked great. They're bigger than the B6, but work fine on most actors depending on how picky your director is (and they work fine under wigs, of course). BTW, the E6's can be ordered with 1 or 2 mm cables - we got the 2mm cables,a nd while we haven't used the E6's yet, the cables seemed quite rugged.
I agree with what has been said about Shure 93's - avoid them. They just sound terrible next to a B3.
#### averyfrix
##### Member
Audio technica is the worst brand
spend the money it will help in the future get SHURE
wireless lavaliere around ear
bhphotovideo.com or musiciansfriend.com or fullcompass.com
hope this helps
#### soundlight
##### Well-Known Member
Audio technica is the worst brand
spend the money it will help in the future get SHURE
wireless lavaliere around ear
bhphotovideo.com or musiciansfriend.com or fullcompass.com
hope this helps
A bit more of an explanation of the issues that you've had and with what products would be very helpful. I know many people who trust Audio-Technica stuff and recommend it regularly, and I've personally used quite a number of their products and have yet to have any issues. Then again, I've never used their real cheap stuff, which may be what you're referring to. High-end AT wireless is pretty dern good, and the AT4041 is an excellent mic when it comes to SDCs in it's price range. And for the money, the AT PRO37 is a decent low-budget SDC. I've used the PRO37 for overheads on a number of occasions.
Put quite simply, give us the story, don't just product bash. And of those sites you've listed, I'd only buy from Fullcompass, and there are better dealers out there for most of the products in question here.
|
2020-02-23 07:09:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2557373046875, "perplexity": 2752.9463112257895}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145747.6/warc/CC-MAIN-20200223062700-20200223092700-00348.warc.gz"}
|
http://math.eretrandre.org/tetrationforum/showthread.php?mode=threaded&tid=487&pid=5070
|
• 0 Vote(s) - 0 Average
• 1
• 2
• 3
• 4
• 5
Equations for Kneser sexp algorithm sheldonison Long Time Fellow Posts: 639 Threads: 22 Joined: Oct 2008 08/08/2010, 07:14 PM (This post was last modified: 08/12/2010, 04:36 AM by sheldonison.) This is a continuation of the thread in the computation forum, http://math.eretrandre.org/tetrationforu...hp?tid=486 This thead contains some of the mathematical equations I used for the fast Kneser algorithm. In this post, B is the base for the sexp function, and L is the fixed point for base B. This is the complex valued superfunction, developed from the fixed point L for base B, where B>$\eta$ $c=L\times\ln(B)$ for base e, c=L $\operatorname{superf}_{B}(z) = \lim_{n \to \infty} B^{[n](L + c^{z-n})} = \lim_{n \to \infty} \exp^{[n]} ((L+c^{z-n})/\ln(B))$ This is the complex valued inverse superfunction, developed from the fixed point, which is the inverse of the equation above. The inverse superfunction has the property, that isuperf(B^z)=isuperf(B)+1. This particular equation is normalized, so that it converges to the same value as the limit of n approaches infinity. Both of these two functions are implemented in the pari-GP program I wrote. $\operatorname{isuperf}_{B}(z) = \lim_{n \to \infty} \log_{c} (c^n \times ((\log_{B}^{[n]}(z))-L))$ If we started with a pefect sexp(z) function, then this is the 1-cyclic theta function linking the sexp with the superf/isuperf. $\theta(z)=\operatorname{isuperf}(\operatorname{sexp}(z))-z$ $\operatorname{sexp}(z)=\operatorname{superf}(z+\theta(z))$ Theta(z) has a singularity at all integer values of n. Theta(z) is represented by an infinite sequence of fourier terms. The fourier series for theta(z) can be developed from any arbitrary unit length on the real axis of sexp(z), where z>-2. Only terms with positive values of n are included, and all terms a_n for negative values of n are zero. $\theta(z)=\sum_{n=0}^{\infty}a_n\times \e^{(2\pi n i z)}$ Theta(z) is intimately connected to the Riemann unit circle mapping, used by Kneser's construction. The Taylor series for the Riemann unit circle function (I'm not sure of the correct notation here) uses the exactly the same a_n coeffecients as the 1-cyclic theta function! This is something that connects the complex fourier analysis of theta(z) to the theory of complex analytic functions, which is really neat! The RiemannCircle has a singularity at z=1, which corresponds to the singularities at the integer values of theta(z). $\operatorname{RiemannCircle}(z) = \theta(\ln(z)/2\pi i)$ $\theta(z) = \operatorname{RiemannCircle}(e^{2\pi i z}) =\operatorname{isuperf}(\operatorname{sexp}(z))-z$ $\operatorname{RiemannCircle(z)=\sum_{n=0}^{\infty}a_n\times z^n$ If we had a perfect sexp(z) Taylor series, then we have a function for the values of the Riemann unit circle function, which is generated from the theta function, using the equation above, from the inverse superfunction. Now, we can use Cauchy's integral formula to calculate the Taylor series for the Riemann unit circle function. And this also gives us the coefficients of the 1-cyclic theta(z) function. Of course, there is still the problem of the singularity on the unit circle, which causes problems due to slow convergence. In later posts, I will try to go into some detail, showing values for the Taylor series results for the Riemann circle function, and how the coefficients slowly decay, with poor convergence on the unit circle. The program I wrote iterates, calculating approximate values the RiemannCircle Taylor series based on an approximation function for sexp(z). And then uses the Taylor series for the RiemannCircle function to calculate another better approximation for the sexp(z) function. Many, many, many more details to follow! Be patient. This may take a few days.... - Enjoy, Sheldon « Next Oldest | Next Newest »
Messages In This Thread Equations for Kneser sexp algorithm - by sheldonison - 08/08/2010, 07:14 PM RE: Equations for Kneser sexp algorithm - by sheldonison - 08/08/2010, 08:29 PM RE: Equations for Kneser sexp algorithm - by Gottfried - 08/08/2010, 10:01 PM RE: Equations for Kneser sexp algorithm - by sheldonison - 08/09/2010, 06:27 PM RE: Equations for Kneser sexp algorithm - by sheldonison - 08/11/2010, 06:11 PM RE: Equations for Kneser sexp algorithm - by sheldonison - 08/12/2010, 04:43 AM Riemann mapping, for some scenarios - by sheldonison - 08/12/2010, 03:35 PM RE: Equations for Kneser sexp algorithm - by sheldonison - 08/13/2010, 07:39 PM RE: Equations for Kneser sexp algorithm - by tommy1729 - 08/12/2010, 08:37 PM RE: Equations for Kneser sexp algorithm - by bo198214 - 08/15/2010, 06:34 AM RE: Equations for Kneser sexp algorithm - by sheldonison - 08/15/2010, 10:05 AM RE: Equations for Kneser sexp algorithm - by bo198214 - 06/10/2011, 08:48 AM RE: Equations for Kneser sexp algorithm - by sheldonison - 06/10/2011, 01:43 PM RE: Equations for Kneser sexp algorithm - by bo198214 - 06/13/2011, 01:12 PM RE: Equations for Kneser sexp algorithm - by sheldonison - 06/14/2011, 03:00 PM RE: Equations for Kneser sexp algorithm - by bo198214 - 06/14/2011, 05:07 PM RE: Equations for Kneser sexp algorithm - by sheldonison - 06/19/2011, 03:14 AM RE: Equations for Kneser sexp algorithm - by bo198214 - 06/20/2011, 09:22 PM RE: Equations for Kneser sexp algorithm - by sheldonison - 06/21/2011, 01:48 AM
Possibly Related Threads... Thread Author Replies Views Last Post Kneser method question tommy1729 9 332 02/11/2020, 01:26 AM Last Post: sheldonison Moving between Abel's and Schroeder's Functional Equations Daniel 1 190 01/16/2020, 10:08 PM Last Post: sheldonison Sexp redefined ? Exp^[a]( - 00 ). + question ( TPID 19 ??) tommy1729 0 1,640 09/06/2016, 04:23 PM Last Post: tommy1729 Taylor polynomial. System of equations for the coefficients. marraco 17 16,899 08/23/2016, 11:25 AM Last Post: Gottfried [split] Understanding Kneser Riemann method andydude 7 8,128 01/13/2016, 10:58 PM Last Post: sheldonison Totient equations tommy1729 0 1,808 05/08/2015, 11:20 PM Last Post: tommy1729 Bundle equations for bases > 2 tommy1729 0 1,839 04/18/2015, 12:24 PM Last Post: tommy1729 Can sexp(z) be periodic ?? tommy1729 2 3,928 01/14/2015, 01:19 PM Last Post: tommy1729 Grzegorczyk hierarchy vs Iterated differential equations? MphLee 0 2,103 01/03/2015, 11:02 PM Last Post: MphLee A system of functional equations for slog(x) ? tommy1729 3 4,526 07/28/2014, 09:16 PM Last Post: tommy1729
Users browsing this thread: 1 Guest(s)
|
2020-02-23 21:03:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 10, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8645973205566406, "perplexity": 3668.9988961780646}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145839.51/warc/CC-MAIN-20200223185153-20200223215153-00148.warc.gz"}
|
https://ankit2788.github.io/2019/05/22/Design-my-own-Fractal/
|
Design your own Fractal
My last article in the Fractals series was devoted to the understanding of Fractals from the most basic perspective. Being extensively used in Computer graphics, this article will present a way to create and design your own fractals.
2 different fractal images are presented here. With a slight difference in the inputs, a whole new image is generated. Isn’t that fascinating! So, let’s make our hands dirty and get on board with the how of designing.
Iterated Functions System (IFS)
Before we design our first Fractal, we need to understand the self similarity attribute. In computer science and mathematics, an iterative (or even a recursive) function can achieve self similarity.
f({X_t}_1) = k * f(X_t) + c$f({X_t}_1) = k * f(X_t) + c$, where \lvert k\rvert \lt 1$\lvert k\rvert \lt 1$.
Such a recursive function when applied to graphics, generate a pattern like below
This is just a dotted line, with relative spacing increasing with number of iterations. Starting at point (0,0), the same linear transformation and translation is applied iteratively. This Transformation matrix is given by: \begin{bmatrix} .81 & .44 \\ .50 & -.48 \end{bmatrix}$% $ (generated randomly)
Note - If you want to brush up a little bit on Linear Algebra, I would recommend 3B1B youtube series
If recursively drawing a single function generates a brush paint pattern, now you can imagine what would it like be if we combine multiple transformations and draw them recursively. This idea is referred to as Iterated Functions System
Sierpinski Triangle
A very famous fractal image is also the most simplistic one. Lets break it down piece by piece
The Iterated Function System(IFS) for this image can be presented as:
1. Begin with a starting object
2. Reduce the object into half of the size and make 3 copies (3 function systems )
3. Align them in the shape of an equilateral triangle. (represented by the translation vector)
4. Go to step #2.
f_1(x) =
\begin{bmatrix}
.5 & 0 \\
0 & .5
\end{bmatrix}x
f_2(x) =
\begin{bmatrix}
.5 & 0 \\
0 & .5
\end{bmatrix}x + \begin{bmatrix}
.5 \\
0
\end{bmatrix}
f_3(x) =
\begin{bmatrix}
.5 & 0 \\
0 & .5
\end{bmatrix}x + \begin{bmatrix}
.25 \\
.433
\end{bmatrix}
Koch Curve
Another great example is Koch Curve. Let’s look at the system of functions required to bring this fractal into life.
f_1(x) =
\begin{bmatrix}
1/3 & 0 \\
0 & 1/3
\end{bmatrix}x
f_2(x) =
\begin{bmatrix}
1/6 & -\sqrt{3}/6 \\
\sqrt{3}/6 & 1/6
\end{bmatrix}x + \begin{bmatrix}
1/3 \\
0
\end{bmatrix}
f_3(x) =
\begin{bmatrix}
1/6 & \sqrt{3}/6 \\
-\sqrt{3}/6 & 1/6
\end{bmatrix}x + \begin{bmatrix}
1/2 \\
\sqrt{3}/6
\end{bmatrix}
f_4(x) =
\begin{bmatrix}
1/3 & 0 \\
0 & 1/3
\end{bmatrix}x + \begin{bmatrix}
2/3 \\
0
\end{bmatrix}
This system of equations represent the following iteration methodology:
1. Begin with a starting object
2. Reduce the object into 1/3 of the size and make 4 copies (4 function systems )
3. Rotate 1 copy by 60°
4. Rotate another copy by -60°.
5. The last copy is just the replica of original object but 1/3 of size
6. Repeat steps 2-5.
I hope these 2 examples must have given a brief flavour on how Fractals are generated. Applying certain Transformations repeatedly can lead to a whole other object altogether.
What’s next now?
This is just the beginning of what fractals are capable of. Art is just a way to visualise mathematics. Here, I present to you few more Fractal images. Try to guess the Transformations that could have led to these fractals.
So far, we have only covered the Linear Transformations that can be applied to any object. Can there be more such transformations? Can we add a randomness to a randomness? Stay tuned for the next article to find out more.
Note - In case anyone is interested for the source code (written in Python) to generate these image, please contact me directly!
|
2020-01-28 03:15:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 3, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6390343308448792, "perplexity": 1379.8205403633228}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251773463.72/warc/CC-MAIN-20200128030221-20200128060221-00103.warc.gz"}
|
https://experts.mcmaster.ca/display/publication223223
|
# The Uncountable Spectra of Countable Theories Academic Article
•
• Overview
•
• Research
•
• Identity
•
•
• View All
•
### abstract
• Let T be a complete, first-order theory in a finite or countable language having infinite models. Let I(T,kappa) be the number of isomorphism types of models of T of cardinality \kappa. We denote by \mu (respectively \hat\mu) the number of cardinals (respectively infinite cardinals) less than or equal to \kappa. We prove that I(T,\kappa), as a function of \kappa > \aleph_0, is the minimum of 2^{\kappa} and one of the following functions: 1. 2^{\kappa}; 2. the constant function 1; 3. |\hat\mu^n/{\sim_G}|-|(\hat\mu - 1)^n/{\sim_G}| if \hat\mu<\omega for some 1= \omega some group G <= Sym(n); 4. the constant function \beth_2; 5. \beth_{d+1}(\mu) for some infinite, countable ordinal d; 6. \sum_{i=1}^d \Gamma(i) where d is an integer greater than 0 (the depth of T) and \Gamma(i) is either \beth_{d-i-1}(\mu^{\hat\mu}) or \beth_{d-i}(\mu^{\sigma(i)} + \alpha(i)), where \sigma(i) is either 1, \aleph_0 or \beth_1, and \alpha(i) is 0 or \beth_2; the first possibility for \Gamma(i) can occur only when d-i > 0.
• July 2000
|
2020-07-07 16:56:32
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.962386429309845, "perplexity": 4426.812771723423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655893487.8/warc/CC-MAIN-20200707142557-20200707172557-00239.warc.gz"}
|
http://physics.stackexchange.com/questions/1741/isotope-properties-plotting-tool
|
# Isotope properties plotting tool?
I'm looking for something that will generate scatter plots comparing different properties of isotopes. Ideally I'd like some web page that lets me select axis and click go but a CSV file with lost of properties would work.
-
This seems like the kind of question to ask on the webapps SE. But I guess it could go here too... (admittedly there probably aren't that many people on webapps.SE who are familiar with nuclear physics) – David Z Dec 8 '10 at 5:46
@David: First, I also thought that this is off-topic but now (after exchanging few comments with Frédéric and looking at some other software questions) I tend to think that it is fine (especially considering the software tag). Also, there is the reason you stated: this probably wouldn't get answered anywhere else. – Marek Dec 8 '10 at 11:06
I chose to post it here because primarily the question is about the data set and secondarily about the tooling. – BCS Dec 8 '10 at 15:32
Or like this
-
Exploring the Table of Isotopes is my go-to site.
There are a few others out there, but when I moved from my last computer those were some of the bookmarks I didn't bring, 'cause I never used them...
You haven't specified what you want in your CVS file. I think this site only offers transition energies and branching fractions by radiation type.
For searching a whole database of transistions for lines of [energy range]$\times$[A or Z range]$\times$[halflife range] use The Lund/LBNL Nuclear Data Search. Very useful if you are thinking of building a novel calibration source or some thing like that. This site also has stopping powers and other good stuff (go up a level from the link...). Another go-to resource.
-
What I was hoping to do was plot N vs λ. The spesific problem I'm interested in is what's with the bite mark in this chart at N~=130?: en.wikipedia.org/wiki/File:Isotopes_and_half-life.svg – BCS Dec 8 '10 at 2:33
|
2015-03-04 20:31:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.539859414100647, "perplexity": 1649.9202557260053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463658.66/warc/CC-MAIN-20150226074103-00137-ip-10-28-5-156.ec2.internal.warc.gz"}
|
http://www.physicspages.com/2016/12/10/
|
# Poisson brackets are invariant under a canonical transformation
References: Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Section 2.7; Exercise 2.7.9.
The Poisson bracket of two functions is defined as
$\displaystyle \left\{ \omega,\sigma\right\} =\sum_{i}\left(\frac{\partial\omega}{\partial q_{i}}\frac{\partial\sigma}{\partial p_{i}}-\frac{\partial\omega}{\partial p_{i}}\frac{\partial\sigma}{\partial q_{i}}\right) \ \ \ \ \ (1)$
Calculating the Poisson bracket requires knowing ${\omega}$ and ${\sigma}$ as functions of the coordinates ${q_{i}}$ and momenta ${p_{i}}$ in the particular coordinate system we’re using. However, we’ve seen that the Euler-Lagrange and Hamilton’s equations are invariant under a canonical transformation and since the Poisson bracket is a fundamental quantity in classical mechanics, in particular because the time derivative of a function ${\omega}$ is the Poisson bracket ${\left\{ \omega,H\right\} }$ with the Hamiltonian, it’s natural to ask how the Poisson bracket of two functions transforms under a canonical transformation.
The simplest way of finding out (although not the most elegant) is to write the canonical transformation as
$\displaystyle \bar{q}_{i}$ $\displaystyle =$ $\displaystyle \bar{q}_{i}\left(q,p\right)\ \ \ \ \ (2)$ $\displaystyle \bar{p}_{i}$ $\displaystyle =$ $\displaystyle \bar{p}\left(q,p\right) \ \ \ \ \ (3)$
We can then write the Poisson bracket in the new coordinates as
$\displaystyle \left\{ \omega,\sigma\right\} _{\bar{q},\bar{p}}=\sum_{j}\left(\frac{\partial\omega}{\partial\bar{q}_{j}}\frac{\partial\sigma}{\partial\bar{p}_{j}}-\frac{\partial\omega}{\partial\bar{p}_{j}}\frac{\partial\sigma}{\partial\bar{q}_{j}}\right) \ \ \ \ \ (4)$
Assuming the transformation is invertible, we can use the chain rule to calculate the derivatives with respect to the barred coordinates. This gives the following (we’ve used the summation convention in which any index repeated twice in a product is summed; thus in the following, there are implied sums over ${i,j}$ and ${k}$):
$\displaystyle \left\{ \omega,\sigma\right\} _{\bar{q},\bar{p}}$ $\displaystyle =$ $\displaystyle \left(\frac{\partial\omega}{\partial q_{i}}\frac{\partial q_{i}}{\partial\bar{q}_{j}}+\frac{\partial\omega}{\partial p_{i}}\frac{\partial p_{i}}{\partial\bar{q}_{j}}\right)\left(\frac{\partial\sigma}{\partial q_{k}}\frac{\partial q_{k}}{\partial\bar{p}_{j}}+\frac{\partial\sigma}{\partial p_{k}}\frac{\partial p_{k}}{\partial\bar{p}_{j}}\right)-\nonumber$ $\displaystyle$ $\displaystyle$ $\displaystyle \left(\frac{\partial\omega}{\partial q_{i}}\frac{\partial q_{i}}{\partial\bar{p}_{j}}+\frac{\partial\omega}{\partial p_{i}}\frac{\partial p_{i}}{\partial\bar{p}_{j}}\right)\left(\frac{\partial\sigma}{\partial q_{k}}\frac{\partial q_{k}}{\partial\bar{q}_{j}}+\frac{\partial\sigma}{\partial p_{k}}\frac{\partial p_{k}}{\partial\bar{q}_{j}}\right)\ \ \ \ \ (5)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{\partial\omega}{\partial q_{i}}\frac{\partial\sigma}{\partial p_{k}}\left(\frac{\partial q_{i}}{\partial\bar{q}_{j}}\frac{\partial p_{k}}{\partial\bar{p}_{j}}-\frac{\partial q_{i}}{\partial\bar{p}_{j}}\frac{\partial p_{k}}{\partial\bar{q}_{j}}\right)+\nonumber$ $\displaystyle$ $\displaystyle$ $\displaystyle \frac{\partial\omega}{\partial p_{i}}\frac{\partial\sigma}{\partial q_{k}}\left(\frac{\partial p_{i}}{\partial\bar{q}_{j}}\frac{\partial q_{k}}{\partial\bar{p}_{j}}-\frac{\partial p_{i}}{\partial\bar{p}_{j}}\frac{\partial q_{k}}{\partial\bar{q}_{j}}\right)+\nonumber$ $\displaystyle$ $\displaystyle$ $\displaystyle \frac{\partial\omega}{\partial q_{i}}\frac{\partial\sigma}{\partial q_{k}}\left(\frac{\partial q_{i}}{\partial\bar{q}_{j}}\frac{\partial q_{k}}{\partial\bar{p}_{j}}-\frac{\partial q_{i}}{\partial\bar{p}_{j}}\frac{\partial q_{k}}{\partial\bar{q}_{j}}\right)+\nonumber$ $\displaystyle$ $\displaystyle$ $\displaystyle \frac{\partial\omega}{\partial p_{i}}\frac{\partial\sigma}{\partial p_{k}}\left(\frac{\partial p_{i}}{\partial\bar{q}_{j}}\frac{\partial p_{k}}{\partial\bar{p}_{j}}-\frac{\partial p_{i}}{\partial\bar{p}_{j}}\frac{\partial p_{k}}{\partial\bar{q}_{j}}\right)\ \ \ \ \ (6)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{\partial\omega}{\partial q_{i}}\frac{\partial\sigma}{\partial p_{k}}\left\{ q_{i},p_{k}\right\} +\frac{\partial\omega}{\partial p_{i}}\frac{\partial\sigma}{\partial q_{k}}\left\{ p_{i},q_{k}\right\} +\nonumber$ $\displaystyle$ $\displaystyle$ $\displaystyle \frac{\partial\omega}{\partial q_{i}}\frac{\partial\sigma}{\partial q_{k}}\left\{ q_{i},q_{k}\right\} +\frac{\partial\omega}{\partial p_{i}}\frac{\partial\sigma}{\partial p_{k}}\left\{ p_{i},p_{k}\right\} \ \ \ \ \ (7)$
For a canonical transformation, the Poisson brackets in the last equation satisfy
$\displaystyle \left\{ q_{i},p_{k}\right\}$ $\displaystyle =$ $\displaystyle -\left\{ p_{i},q_{k}\right\} =\delta_{ik}\ \ \ \ \ (8)$ $\displaystyle \left\{ q_{i},q_{k}\right\}$ $\displaystyle =$ $\displaystyle \left\{ p_{i},p_{k}\right\} =0 \ \ \ \ \ (9)$
[Actually, we had worked out these conditions for the barred coordinates in terms of the original coordinates, but since the transformation is invertible and both sets of coordinates are canonical, the Poisson brackets work either way.] Applying these conditions to the above, we find
$\displaystyle \left\{ \omega,\sigma\right\} _{\bar{q},\bar{p}}$ $\displaystyle =$ $\displaystyle \left(\frac{\partial\omega}{\partial q_{i}}\frac{\partial\sigma}{\partial p_{k}}-\frac{\partial\omega}{\partial p_{i}}\frac{\partial\sigma}{\partial p_{k}}\right)\delta_{ik}\ \ \ \ \ (10)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{\partial\omega}{\partial q_{i}}\frac{\partial\sigma}{\partial p_{i}}-\frac{\partial\omega}{\partial p_{i}}\frac{\partial\sigma}{\partial p_{i}}\ \ \ \ \ (11)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \left\{ \omega,\sigma\right\} _{q,p} \ \ \ \ \ (12)$
Thus the Poisson bracket is invariant under a canonical transformation.
|
2017-02-27 22:56:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 55, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9914721250534058, "perplexity": 112.09023987850645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173866.98/warc/CC-MAIN-20170219104613-00515-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://motls.blogspot.com/2012/10/sheldon-glashow-does-science-evolve.html?m=1
|
## Friday, October 05, 2012
### Sheldon Glashow: Does science evolve through blind chance or Intelligent Design?
Five months ago or so, Honeywell organized a series of lectures by the Nobel laureate Sheldon Glashow at the Czech Technical University (ČVUT) in Prague.
The lecture you can watch now asked the question whether science evolves by chance or by design.
It's a sort of a fun, light, philosophically and historically loaded talk.
Maybe the number of the historical episodes will be boring for you: he could be a professional historian of science right away.
Typical Czech engineering students are listening to Glashow. ;-)
But if you like the first part, continue with Part 2 and Part 3. If you make it to the third part, there will be some examples of his point from modern physics. Around 18:00, he also talks about Gell-Mann and quarks' and string theorists' delight when they deduced that string theory predicted gravity. Glashow doesn't count it as a prediction because he had known about gravity before string theory was born. Of course, from the viewpoint of the history of science, it wasn't a (new) prediction: the chronology guarantees that. However, from the viewpoint of science and the strength and validity of its hypotheses, the fact that string theory implies general relativity is exactly as important and consequential as a prediction! The chronology is just a part of the history, social science, it was accidental, and a scientist simply can't pay attention to such things.
On the other hand, I agree that both accidental discoveries as well as "planned research" have been important and will be important.
Some other not-too-demanding physics news: Australia opened the world's fastest radio telescope.
Robert Christy, a physicist who worked on the Manhattan Project and the first one who became hostile against Edward Teller after he identified Oppenheimer as a communist, died.
An 11-year-old maľchik (=Russian boy) discovered the mammoth of the century (the best preserved one in 100 years).
|
2021-06-14 13:01:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4273546636104584, "perplexity": 2147.8783750500147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487612154.24/warc/CC-MAIN-20210614105241-20210614135241-00462.warc.gz"}
|
http://www.ijpe-online.com/EN/10.23940/ijpe.20.09.p5.13621373
|
Int J Performability Eng ›› 2020, Vol. 16 ›› Issue (9): 1362-1373.
### DDoS Attack Real-Time Defense Mechanism using Deep Q-Learning Network
Wei Feng and Yuqin Wu*
1. College of Information and Mechanical and Electrical Engineering, Ningde Normal University, Ningde, 352100, China
• Submitted on ; Revised on ; Accepted on
• Contact: * E-mail address: wuyuqinlw@163.com
Abstract: The system distributed denial of service (DDoS) contains high covert attack characteristics and requires real-time defense. In order to solve such two problems for system DDoS, this paper proposes a novel DDoS attack real-time defense mechanism based on deep Q-learning network (DQN). This mechanism regards the terminal adaptive control system as the protection object, periodically extracts the network attack characteristic parameters, and takes such parameters as the input parameters of the deep Q-learning network. Our defense measures are based on dynamic service resource allocation, which dynamically adjusts the service resource according to the current operating state of the system. The current operating state will ensure the response rate of normal service requests. Finally, the attack and defense processes are modeled and simulated using colored Petri network (CPN) combined with DQN. Experimental results show that the proposed mechanism has real-time and high sensitivity defense for the response to DDoS attacks. The proposed mechanism significantly improves the automation degree of system defense. By using such a mechanism in the real-time defense of DDoS attacks, the system will be safer than the state-of-the-art mechanisms.
|
2021-01-19 05:53:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3007151782512665, "perplexity": 3351.2634295363887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703517966.39/warc/CC-MAIN-20210119042046-20210119072046-00438.warc.gz"}
|
https://math.stackexchange.com/questions/1904219/adder-delay-model-in-a-ripple-carry-adder
|
In section 5.3. of the following book an analysis of carry propagation in the ripple carry adder is performed. However the statistical analysis doesn't particularly convince me. Specifically at the beginning of page 81 it is stated that "The probability that a carry generated at position i will propagate up to and including position $j - 1$ and sto at position $j(j > i)$ is $2^{-(j-1-i)}\times1/2$".
However my modeling gives me a different result, to be honest the result given by the book is not explained very well, IMHO.
Here is my derivation,
I call $g_i$ the event that a carry is generated at position $i$, $p_i$ the event that a carry is propagated through the position $i$ and $a_i$ the event that it is annihileted. All of these events depend only from $x_i, y_i$ (bits of the input). I have to compute the probability
$$Pr\left(g_i,p_{i+1},...,p_{j-1},a_{j}\right) = Pr(g_i) \prod_{k=i}^{j-1}Pr(p_{k}) Pr(a_j)$$
However when I substitute the probabilities given by the book at the beginning of page 81 the result that comes out from my expression is not the same, so probably both I'm missing something or the book's result is wrong.
Can you help to understand where is the problem?
• Based on the example given in the book, it seems to me a chain can stop either when the carry is annihilated or when a new carry is generated. If you only count annihilation you will get half the correct probability. What probability did your modeling give you? – David K Aug 26 '16 at 19:11
• basically the same, but instead of the factor 1/2 it gives me 1/2^4 – user8469759 Aug 26 '16 at 19:16
• I think that should be $\prod_{k=i+1}^{j-1}Pr(p_{k})$ rather than $\prod_{k=i}^{j-1}Pr(p_{k})$, because you cannot have both $g_i$ and $p_i$ occur simultaneously. – David K Aug 26 '16 at 20:24
The probability that a carry generated at position $i$ will propagate up to and including position $j - 1$ and stop at position $j$ $(j > i)$
You seem to have interpreted this as "the probability that a carry is generated at position $i$ and propagates up to and including position $j - 1$ and is annihilated at position $j$". The only way for a chain to be "annihilated" is if it reaches a position where both addends have bit value $0$.
The book is interpreting this phrase to mean "the probability that a carry propagates up to and including position $j - 1$ and the chain stops for any reason at position $j$, given that a carry is generated at position $i$". Reasons for the chain to stop are because position $j$ annihilates the carry, or because position $j$ generates a new carry (which stops the previous chain), or because "position $j$" is actually the "carry out" bit (which by assumption no chain can continue beyond).
That is, the probability that the book computes as $2^{-(j-i)}$ is meant to be $$\left(\prod_{k=i+1}^{j-1}Pr(p_{k})\right) Pr(a_j \lor g_j)$$ which is a factor of $8$ greater than the result you reported.
|
2020-01-26 18:45:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8139791488647461, "perplexity": 149.59129643068667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251690095.81/warc/CC-MAIN-20200126165718-20200126195718-00188.warc.gz"}
|
https://bayesiancomputationbook.com/markdown/chp_03.html
|
# 3. Linear Models and Probabilistic Programming Languages¶
With the advent of Probabilistic Programming Languages, modern Bayesian modeling can be as simple as coding a model and “pressing a button”. However, effective model building and analysis usually takes more work. As we progress through this book we will be building many different types of models but in this chapter we will start with the humble linear model. Linear models are a broad class of models where the expected value of a given observation is the linear combination of the associated predictors. A strong understanding of how to fit and interpret linear models is a strong foundation for the models that will follow. This will also help us to consolidate the fundamentals of Bayesian inference (Chapter 1) and exploratory analysis of Bayesian models (Chapter 2) and apply them with different PPLs. This chapter introduces the two PPLs we will use for the majority of this book, PyMC3, which you have briefly seen, as well as TensorFlow Probability (TFP). While we are building models in these two PPLs, focusing on how the same underlying statistical ideas are mapped to implementation in each PPL. We will first fit an intercept only model, that is a model with no covariates, and then we will add extra complexity by adding one or more covariates, and extend to generalized linear models. By the end of this chapter you will be more comfortable with linear models, more familiar with many of the steps in a Bayesian workflow, and more comfortable conducting Bayesian workflows with PyMC3, TFP and ArviZ.
## 3.1. Comparing Two (or More) Groups¶
If you are looking for something to compare it is hard to beat penguins. After all, what is not to like about these cute flightless birds? Our first question may be “What is the average mass of each penguin species?”, or may be “How different are those averages?”, or in statistics parlance “What is the dispersion of the average?” Luckily Kristen Gorman also likes studying penguins, so much so that she visited 3 Antarctic islands and collected data about Adelie, Gentoo and Chinstrap species, which is compiled into the Palmer Penguins dataset [28]. The observations consist of physical characteristics of the penguin mass, flipper length, and sex, as well as geographic characteristics such as the island they reside on.
We start by loading the data and filtering out any rows where data is missing in Code Block penguin_load. This is called a complete case analysis where, as the name suggests, we only use the rows where all observations are present. While it is possible to handle the missing values in another way, either through data imputation, or imputation during modeling, we will opt to take the simplest approach for this chapter.
penguins = pd.read_csv("../data/penguins.csv")
# Subset to the columns needed
missing_data = penguins.isnull()[
["bill_length_mm", "flipper_length_mm", "sex", "body_mass_g"]
].any(axis=1)
# Drop rows with any missing data
penguins = penguins.loc[~missing_data]
We can then calculate the empirical mean of the mass body_mass_g in Code Block penguin_mass_empirical with just a little bit of code, the results of which are in Table 3.1
Listing 3.2 penguin_mass_empirical
summary_stats = (penguins.loc[:, ["species", "body_mass_g"]]
.groupby("species")
.agg(["mean", "std", "count"]))
species mean (grams) std (grams) count Adelie 3706 459 146 Chinstrap 3733 384 68 Gentoo 5092 501 119
Now we have point estimates for both the mean and the dispersion, but we do not know the uncertainty of those statistics. One way to get estimates of uncertainty is by using Bayesian methods. In order to do so we need to conjecture a relationship of observations to parameters as example:
(3.1)$\overbrace{p(\mu, \sigma \mid Y)}^{Posterior} \propto \overbrace{\mathcal{N}(Y \mid \mu, \sigma)}^{Likelihood}\; \overbrace{\underbrace{\mathcal{N}(4000, 3000)}_{\mu} \underbrace{\mathcal{H}\text{T}(100, 2000)}_{\sigma}}^{Prior}$
Equation (3.1) is a restatement of Equation (1.3) where each parameter is explicitly listed. Since we have no specific reason to choose an informative prior, we will use wide priors for both $$\mu$$ and $$\sigma$$. In this case, the priors are chosen based on the empirical mean and standard deviation of the observed data. And lastly instead of estimating the mass of all species we will first start with the mass of the Adelie penguin species. A Gaussian is a reasonable choice of likelihood for penguin mass and biological mass in general, so we will go with it. Let us translate Equation (3.1) into a computational model.
Listing 3.3 penguin_mass
adelie_mask = (penguins["species"] == "Adelie")
with pm.Model() as model_adelie_penguin_mass:
σ = pm.HalfStudentT("σ", 100, 2000)
μ = pm.Normal("μ", 4000, 3000)
mass = pm.Normal("mass", mu=μ, sigma=σ, observed=adelie_mass_obs)
prior = pm.sample_prior_predictive(samples=5000)
trace = pm.sample(chains=4)
inf_data_adelie_penguin_mass = az.from_pymc3(prior=prior, trace=trace)
Before computing the posterior we are going to check the prior. In particular we are first checking that sampling from our model is computationally feasible and that our choice of priors is reasonable based on our domain knowledge. We plot the samples from the prior in Fig. 3.1. Since we can get a plot at all we know our model has no “obvious” computational issues, such as shape problems or mispecified random variables or likelihoods. From the prior samples themselves it is evident we are not overly constraining the possible penguin masses, we may in fact be under constraining the prior as the prior for the mean of the mass includes negative values. However, since this is a simple model and we have a decent number of observations we will just note this aberration and move onto estimating the posterior distribution.
After sampling from our model, we can create Fig. 3.2 which includes 4 subplots, the two on the right are the rank plots and the left the KDE of each parameter, one line for each chain. We also can reference the numerical diagnostics in Table 3.2 to confirm our belief that the chains converged. Using the intuition we built in Chapter 2 we can judge that these fits are acceptable and we will continue with our analysis.
mean sd hdi_3% hdi_97% mcse_mean mcse_sd ess_bulk ess_tail r_hat $$\mu$$ 3707 38 3632 3772 0.6 0.4 3677.0 2754.0 1.0 $$\sigma$$ 463 27 401 511 0.5 0.3 3553.0 2226.0 1.0
Comfortable with the fit we plot a posterior plot in Fig. 3.3 that combines all the chains. Compare the point estimates from Table 3.1 of the mean and standard deviation with our Bayesian estimates as shown in Fig. 3.3.
With the Bayesian estimate however, we also get the distribution of plausible parameters. Using the tabular summary in Table Table 3.2 from the same posterior distribution in Fig. 3.2 values of the mean from 3632 to 3772 grams are quite plausible. Note that the standard deviation of the marginal posterior distribution varies quite a bit as well. And remember the posterior distribution is not the distribution of an individual penguin mass but rather possible parameters of a Gaussian distribution that we assume describes penguin mass. If we wanted the estimated distribution of individual penguin mass we would need to generate a posterior predictive distribution. In this case it will be the same Gaussian distribution conditioned on the posterior of $$\mu$$ and $$\sigma$$.
Now that we have characterized the Adelie penguin’s mass, we can do the same for the other species. We could do so by writing two more models but instead let us just run one model with 3 separated groups, one per species.
Listing 3.4 nocovariate_mass
# pd.categorical makes it easy to index species below
all_species = pd.Categorical(penguins["species"])
with pm.Model() as model_penguin_mass_all_species:
# Note the addition of the shape parameter
σ = pm.HalfStudentT("σ", 100, 2000, shape=3)
μ = pm.Normal("μ", 4000, 3000, shape=3)
mass = pm.Normal("mass",
mu=μ[all_species.codes],
sigma=σ[all_species.codes],
observed=penguins["body_mass_g"])
trace = pm.sample()
inf_data_model_penguin_mass_all_species = az.from_pymc3(
trace=trace,
coords={"μ_dim_0": all_species.categories,
"σ_dim_0": all_species.categories})
We use the optional shape argument in each parameter and add an index in our likelihood indicating to PyMC3 that we want to condition the posterior estimate for each species individually. In programming language design small tricks that make expressing ideas more seamless are called syntactic sugar, and probabilistic programming developers include these as well. Probabilistic Programming Languages strive to allow expressing models with ease and with less errors.
After we run the model we once again inspect the KDE and rank plots, see Fig. 3.4. Compared to Fig. 3.2 you will see 4 additional plots, 2 each for the additional parameters added. Take a moment to compare the estimate of the mean with the summary mean shown for each species in Table 3.1. To better visualize the differences between the distributions for each species, we plot the posterior again in a forest plot using Code Block mass_forest_plot. Fig. 3.5 makes it easier to compare our estimates across species and note that the Gentoo penguins seem to have more mass than Adelie or Chinstrap penguins.
Listing 3.5 mass_forest_plot
az.plot_forest(inf_data_model_penguin_mass_all_species, var_names=["μ"])
Fig. 3.5 makes it easier to compare our estimates and easily note that the Gentoo penguins have more mass than Adelie or Chinstrap penguins. Let us also look at the standard deviation in Fig. 3.6. The 94% highest density interval of the posterior is reporting uncertainty in the order of 100 grams.
az.plot_forest(inf_data_model_penguin_mass_all_species, var_names=["σ"])
### 3.1.1. Comparing Two PPLs¶
Before expanding on the statistical and modeling ideas further, we will take a moment to talk about the probabilistic programming languages and introduce another PPL we will be using in this book, TensorFlow Probability (TFP). We will do so by translating the PyMC3 intercept only model in Code Block nocovariate_mass into TFP.
It may seem unnecessary to learn different PPLs. However, there are specific reasons we chose to use two PPLs instead of one in this book. Seeing the same workflow in different PPLs will give you a more thorough understanding of computational Bayesian modeling, help you separate computational details from statistical ideas, and make you a stronger modeler overall. Moreover, different PPLs have different strength and focus. PyMC3 is a higher level PPL that makes it easier to express models with less code, whereas TFP provides a lower level PPL for composable modeling and inference. Another is that not all PPLs are able to express all models as easily as each other. For instance Time Series models (Chapter 6) are more easily defined in TFP whereas Bayesian Additive Regression Trees are more easily expressed in PyMC3 (Chapter 7). Through this exposure to multiple languages you will come out with a stronger understanding of both the fundamental elements of Bayesian modeling and how they are implemented computationally.
Probabilistic Programming Languages (emphasis on language) are composed of primitives. The primitives in a programming language are the simplest elements available to construct more complex programs. You can think of primitives are like words in natural languages which can form more complex structures, like sentences. And as different languages use different words, different PPLs use different primitives. These primitives are mainly used to express models, perform inference, or express other parts of the workflow. In PyMC3, model building related primitives are contained under the namespace pm. For example, in Code Block penguin_mass we see pm.HalfStudentT(.), and pm.Normal(.), which represent a random variable. The with pm.Model() as . statement evokes a Python context manager, which PyMC3 uses to build the model model_adelie_penguin_mass by collecting the random variables within the context manager. We then use pm.sample_prior_predictive(.) and pm.sample(.) to obtain samples from the prior predictive distribution and from the posterior distribution, respectively.
Similarly, TFP provides primitives for user to specify distributions and model in tfp.distributions, running MCMC (tfp.mcmc), and more. For example, to construct a Bayesian model, TensorFlow provides multiple primitives under the name tfd.JointDistribution [29] API. In this chapter and the remaining of the book we mostly use tfd.JointDistributionCoroutine, but there are other variants of tfd.JointDistribution which may better suit your use case [1]. Since basic data import and summary statistics stays the same as Code Block penguin_load and penguin_mass_empirical we can focus on the model building and inference. model_penguin_mass_all_species expressed in TFP which is shown in Code Block penguin_mass_tfp below
Listing 3.6 penguin_mass_tfp
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
root = tfd.JointDistributionCoroutine.Root
species_idx = tf.constant(all_species.codes, tf.int32)
body_mass_g = tf.constant(penguins["body_mass_g"], tf.float32)
@tfd.JointDistributionCoroutine
def jd_penguin_mass_all_species():
σ = yield root(tfd.Sample(
tfd.HalfStudentT(df=100, loc=0, scale=2000),
sample_shape=3,
name="sigma"))
μ = yield root(tfd.Sample(
tfd.Normal(loc=4000, scale=3000),
sample_shape=3,
name="mu"))
mass = yield tfd.Independent(
tfd.Normal(loc=tf.gather(μ, species_idx, axis=-1),
scale=tf.gather(σ, species_idx, axis=-1)),
reinterpreted_batch_ndims=1,
name="mass")
Since this is our first encounter with a Bayesian model written in TFP, let us spend a few paragraphs to detail the API. The primitives are distribution classes in tfp.distributions, which we assign a shorter alias tfd = tfp.distributions. tfd contains commonly used distributions like tfd.Normal(.). We also used tfd.Sample, which returns multiple independent copies of the base distribution (conceptually we achieve the similar goal as using the syntactic sugar shape=(.) in PyMC3). tfd.Independent is used to indicate that the distribution contains multiple copies that we would like to sum over some axis when computing the log-likelihood, which specified by the reinterpreted_batch_ndims function argument. Usually we wrap the distributions associated with the observation with tfd.Independent [2]. You can read a bit more about shape handling in TFP and PPL in Section Shape Handling in PPLs.
An interesting signature of a tfd.JointDistributionCoroutine model is, as the name suggests, the usage of Coroutine in Python. Without getting into too much detail about Generators and Coroutines, here a yield statement of a distribution gives you some random variable inside of your model function. You can view y = yield Normal(.) as the way to express $$y \sim \text{Normal(.)}$$. Also, we need to identify the random variables without dependencies as root nodes by wrapping them with tfd.JointDistributionCoroutine.Root. The model is written as a Python function with no input argument and no return value. Lastly, it is convenient to put @tfd.JointDistributionCoroutine on top of the Python function as a decorator to get the model (i.e., a tfd.JointDistribution) directly.
The resulting jd_penguin_mass_all_species is the intercept only regression model in Code Block nocovariate_mass restated in TFP. It has similar methods like other tfd.Distribution, which we can utilize in our Bayesian workflow. For example, to draw prior and prior predictive samples, we can call the .sample(.) method, which returns a custom nested Python structure similar to a namedtuple. In Code Block penguin_mass_tfp_prior_predictive we draw 1000 prior and prior predictive samples.
Listing 3.7 penguin_mass_tfp_prior_predictive
prior_predictive_samples = jd_penguin_mass_all_species.sample(1000)
The .sample(.) method of a tfd.JointDistribution can also draw conditional samples, which is the mechanism we will make use of to draw posterior predictive samples. You can run Code Block penguin_mass_tfp_prior_predictive2 and inspect the output to see how random samples change if you condition some random variables in the model to some specific values. Overall, we invoke the forward generative process when calling .sample(.).
Listing 3.8 penguin_mass_tfp_prior_predictive2
jd_penguin_mass_all_species.sample(sigma=tf.constant([.1, .2, .3]))
jd_penguin_mass_all_species.sample(mu=tf.constant([.1, .2, .3]))
Once we condition the generative model jd_penguin_mass_all_species to the observed penguin body mass, we can get the posterior distribution. From the computational perspective, we want to generate a function that returns the posterior log-probability (up to some constant) evaluated at the input. This could be done by creating a Python function closure or using the .experimental_pin method, as shown in Code Block tfp_posterior_generation:
Listing 3.9 tfp_posterior_generation
target_density_function = lambda *x: jd_penguin_mass_all_species.log_prob(
*x, mass=body_mass_g)
jd_penguin_mass_observed = jd_penguin_mass_all_species.experimental_pin(
mass=body_mass_g)
target_density_function = jd_penguin_mass_observed.unnormalized_log_prob
Inference is done using target_density_function, for example, we can find the maximum of the function, which gives the maximum a posteriori probability (MAP) estimate. We can also use methods in tfp.mcmc [30] to sample from the posterior. Or more conveniently, using a standard sampling routine similar to what is currently used in PyMC3 [3] as shown in Code Block tfp_posterior_inference:
Listing 3.10 tfp_posterior_inference
run_mcmc = tf.function(
autograph=False, jit_compile=True)
mcmc_samples, sampler_stats = run_mcmc(
1000, jd_penguin_mass_all_species, n_chains=4, num_adaptation_steps=1000,
mass=body_mass_g)
inf_data_model_penguin_mass_all_species2 = az.from_dict(
posterior={
# TFP mcmc returns (num_samples, num_chains, ...), we swap
# the first and second axis below for each RV so the shape
# is what ArviZ expected.
k:np.swapaxes(v, 1, 0)
for k, v in mcmc_samples._asdict().items()},
sample_stats={
k:np.swapaxes(sampler_stats[k], 1, 0)
for k in ["target_log_prob", "diverging", "accept_ratio", "n_steps"]}
)
In Code Block tfp_posterior_inference we ran 4 MCMC chains, each with 1000 posterior samples after 1000 adaptation steps. Internally it invokes the experimental_pin method by conditioning the model (pass into the function as an argument) with the observed (additional keyword argument mass=body_mass_g at the end). Lines 8-18 parse the sampling result into an ArviZ InferenceData, which we can now run diagnostics and exploratory analysis of Bayesian models in ArviZ. We can additionally add prior and posterior predictive samples and data log-likelihood to inf_data_model_penguin_mass_all_species2 in a transparent way in Code Block tfp_idata_additional below. Note that we make use of the sample_distributions method of a tfd.JointDistribution that draws samples and generates a distribution conditioned on the posterior samples.
prior_predictive_samples = jd_penguin_mass_all_species.sample([1, 1000])
dist, samples = jd_penguin_mass_all_species.sample_distributions(
value=mcmc_samples)
ppc_samples = samples[-1]
ppc_distribution = dist[-1].distribution
data_log_likelihood = ppc_distribution.log_prob(body_mass_g)
# Be careful not to run this code twice during REPL workflow.
prior=prior_predictive_samples[:-1]._asdict(),
prior_predictive={"mass": prior_predictive_samples[-1]},
posterior_predictive={"mass": np.swapaxes(ppc_samples, 1, 0)},
log_likelihood={"mass": np.swapaxes(data_log_likelihood, 1, 0)},
observed_data={"mass": body_mass_g}
)
This concludes our whirlwind tour of TensorFlow Probability. Like any language you likely will not gain fluency in your initial exposure. But by comparing the two models you should now have a better sense of what concepts are Bayesian centric and what concepts are PPL centric. For the remainder of this chapter and the next we will switch between PyMC3 and TFP to continue helping you identify this difference and see more worked examples. We include exercises to translate Code Block examples from one to the other to aid your practice journey in becoming a PPL polyglot.
## 3.2. Linear Regression¶
In the previous section we modeled the distribution of penguin mass by setting prior distributions over the mean and standard deviation of a Gaussian distribution. Importantly we assumed that the mass did not vary with other features in the data. However, we would expect that other observed data points could provide information about expected penguins mass. Intuitively if we see two penguins, one with long flippers and one with short flippers, we would expect the larger penguin, the one with long flippers, to have more mass even if we did not have a scale on hand to measure their mass precisely. One of the simplest ways to estimate this relationship of observed flipper length on estimated mass is to fit a linear regression model, where the mean is conditionally modeled as a linear combination of other variables
(3.2)$\begin{split}\begin{split} \mu =& \beta_0 + \beta_1 X_1 + \dots + \beta_m X_m \\ Y \sim& \mathcal{N}(\mu, \sigma) \end{split}\end{split}$
where the coefficients, also referred as parameters, are represented with $$\beta_i$$. For example, $$\beta_0$$ is the intercept of the linear model. $$X_i$$ is referred to predictors or independent variables, and $$Y$$ is usually referred to as target, output, response, or dependent variable. It is important to notice that both $$\boldsymbol{X}$$ and $$Y$$ are observed data and that they are paired $$\{y_j, x_j\}$$. That is, if we change the order of $$Y$$ without changing $$X$$ we will destroy some of the information in our data.
We call this a linear regression because the parameters (not the covariates) enter the model in a linear fashion. Also for models with a single covariate, we can think of this model as fitting a line to the $$(X, y)$$ data, and for higher dimensions a plane or more generally a hyperplane.
Alternatively we can express Equation (3.2) using matrix notation:
(3.3)$\mu = \mathbf{X}\boldsymbol{\beta}$
where we are taking the matrix-vector product between the coefficient column vector $$\beta$$ and the matrix of covariates $$\mathbf{X}$$.
An alternative expression you might have seen in other (non-Bayesian) occasions is to rewrite Equation (3.2) as noisy observation of some linear prediction:
(3.4)$Y = \mathbf{X}\boldsymbol{\beta} + \epsilon,\; \epsilon \sim \mathcal{N}(0, \sigma)$
The formulation in Equation (3.4) separates the deterministic part (linear prediction) and the stochastic part (noise) of linear regression. However, we prefer Equation (3.2) as it shows the generative process more clearly.
Design Matrix
The matrix $$\mathbf{X}$$ in Equation (3.3) is known as design matrix and is a matrix of values of explanatory variables of a given set of objects, plus an additional column of ones to represent the intercept. Each row represents an unique observation (e.g., a penguin), with the successive columns corresponding to the variables (like flipper length) and their specific values for that object.
A design matrix is not limited to continuous covariates. For discrete covariates that represent categorical predictors (i.e., there are only a few categories), a common way to turn those into a design matrix is called dummy coding or one-hot coding. For example, in our intercept per penguin model (Code Block nocovariate_mass), instead of mu = μ[species.codes] we can use pandas.get_dummies to parse the categorical information into a design matrix, and then write mu = pd.get_dummies(penguins["species"]) @ μ, where @ is a Python operator for performing matrix multiplication. There are also few other functions to perform one hot encoding in Python, for example, sklearn.preprocessing.OneHotEncoder, as this is a very common data manipulation technique.
Alternatively, categorical predictors could be encoded such that the resulting column and associated coefficient representing linear contrast. For example, different design matrix encoding of two categorical predictors are associated with Type I, II and III sums of squares in null-hypothesis testing setting for ANOVA.
If we plot Equation (3.2) in “three dimensions” we get Fig. 3.7, which shows how the estimated parameters of the likelihood distribution can change based on other observed data $$x$$. While in this one illustration, and in this chapter, we are using a linear relationship to model the relationship between $$x$$ and $$Y$$, and a Gaussian distribution as a likelihood, in other model architectures, we may opt for different choices as we will see in Chapter 4.
### 3.2.1. Linear Penguins¶
If we recall our penguins we were interested using additional data to better estimate the mean mass of a group of penguins. Using linear regression we write the model in Code Block non_centered_regression, which includes two new parameters $$\beta_0$$ and $$\beta_1$$ typically called the intercept and slope. For this example we set wide priors of $$\mathcal{N}(0, 4000)$$ to focus on the model, which also is the same as saying we assume no domain expertise. We subsequently run our sampler, which has now estimated three parameters $$\sigma$$, $$\beta_1$$ and $$\beta_0$$.
Listing 3.12 non_centered_regression
adelie_flipper_length_obs = penguins.loc[adelie_mask, "flipper_length_mm"]
with pm.Model() as model_adelie_flipper_regression:
# pm.Data allows us to change the underlying value in a later code block
σ = pm.HalfStudentT("σ", 100, 2000)
β_0 = pm.Normal("β_0", 0, 4000)
β_1 = pm.Normal("β_1", 0, 4000)
μ = pm.Deterministic("μ", β_0 + β_1 * adelie_flipper_length)
mass = pm.Normal("mass", mu=μ, sigma=σ, observed = adelie_mass_obs)
To save space in the book we are not going to show the diagnostics each time but you should neither trust us or your sampler blindly. Instead you should run the diagnostics to verify you have a reliable posterior approximation.
After our sampler finishes running we can plot Fig. 3.8 which shows a full posterior plot we can use to inspect $$\beta_0$$ and $$\beta_1$$. The coefficient $$\beta_1$$ expresses that for every millimeter change of Adelie flipper length we can nominally expect a change of 32 grams of mass, although anywhere between 22 grams to 41 grams could reasonably occur as well. Additionally, from Fig. 3.8 we can note how the 94% highest density interval does not cross 0 grams. This supports our assumption that there is a relationship between mass and flipper length. This observation is quite useful for interpreting how flipper length and mass correlate. However, we should be careful about not over-interpreting the coefficients or thinking a linear model necessarily implies a causal link. For example, if we perform a flipper extension surgery to a penguin this will not necessarily translate into a gain in mass, it could actually be the opposite due to stress or impediments of this penguin to get food. The opposite relation is not necessarily true either, providing more food to a penguin could help her to have a larger flipper, but it could also make it just a fatter penguin. Now focusing on $$\beta_0$$ however, what does it represent? From our posterior estimate we can state that if we saw an Adelie penguin with a 0 mm flipper length we would expect the mass of this impossible penguin to be somewhere between -4151 and -510 grams. According to our model this statement is true, but negative mass does not make sense. This is not necessarily an issue, there is no rule that every parameter in a model needs to be interpretable, nor that the model provide reasonable prediction at every parameter value. At this point in our journey the purpose of this particular model was to estimate the relationship between flipper length and penguin mass and with our posterior estimates, we have succeeded with that goal.
Models: A balance between math and reality
In our penguin example it would not make sense if penguin mass was below 0 (or even close to it), even though the model allowed it. Because we fit the model using values for the masses that are far from 0, we should not be surprised that the model fails if we want to extrapolate conclusions for values close to 0 or below it. A model does not necessarily have to provide sensible predictions for all possible values, it just needs to provide sensible predictions for the purposes that we are building it for.
We started on this section surmising that incorporating a covariate would lead to better predictions of penguin mass. We can verify this is the case by comparing the posterior estimates of $$\sigma$$ from our fixed mean model and with our linearly varying mean model in Fig. 3.9, our estimate of the likelihood’s standard deviation has dropped from a mean of around $$\approx 460$$ grams to $$\approx 380$$ grams.
### 3.2.2. Predictions¶
In the Linear Penguins we estimated a linear relationship between flipper length and mass. Another use of regression is to leverage that relationship in order to make predictions. In our case given the flipper length of a penguin, can we predict its mass? In fact we can. We will use our results from model_adelie_flipper_regression to do so. Because in Bayesian statistics we are dealing with distributions we do not end up with a single predicted value but instead a distribution of possible values. That is the posterior predictive distribution as defined in Equation (1.8). In practice, more often than not, we will not compute our predictions analytically but we will use a PPL to estimate them using our posterior samples. For example, if we had a penguin of average flipper length and wanted to predict the likely mass using PyMC3 we would write Code Block penguins_ppd:
Listing 3.13 penguins_ppd
with model_adelie_flipper_regression:
# Change the underlying value to the mean observed flipper length
# for our posterior predictive samples
posterior_predictions = pm.sample_posterior_predictive(
In the first line of Code Block penguins_ppd we fix the value of our flipper length to the average observed flipper length. Then using the regression model model_adelie_flipper_regression, we can generate posterior predictive samples of the mass at that fixed value. In Fig. 3.11 we plot the posterior predictive distribution of the mass for penguins of average flipper length, along the posterior of the mean.
In short not only can we use our model in Code Block non_centered_regression to estimate the relationship between flipper length and mass, we also can obtain an estimate of the penguin mass at any arbitrary flipper length. In other words we can use the estimated $$\beta_1$$ and $$\beta_0$$ coefficients to make predictions of the mass of unseen penguins of any flipper length using posterior predictive distributions.
As such, the posterior predictive distribution is an especially powerful tool in a Bayesian context as it let us predict not just the most likely value, but a distribution of plausible values incorporating the uncertainty about our estimates, as seen from Equation (1.8).
### 3.2.3. Centering¶
Our model in Code Block non_centered_regression worked well for estimating the correlation between flipper length and penguin mass, and in predicting the mass of penguins at a given flipper length. Unfortunately with the data and the model provided our estimate of $$\beta_0$$ was not particularly useful. However, we can use a transformation to make $$\beta_0$$ more interpretable. In this case we will opt for a centering transformation, which takes a set of values and centers its mean value at zero as shown in Code Block flipper_centering.
Listing 3.14 flipper_centering
adelie_flipper_length_c = (adelie_flipper_length_obs -
With our now centered covariate let us fit our model again, this time using TFP.
Listing 3.15 tfp_penguins_centered_predictor
def gen_adelie_flipper_model(adelie_flipper_length):
@tfd.JointDistributionCoroutine
σ = yield root(
tfd.HalfStudentT(df=100, loc=0, scale=2000, name="sigma"))
β_1 = yield root(tfd.Normal(loc=0, scale=4000, name="beta_1"))
β_0 = yield root(tfd.Normal(loc=0, scale=4000, name="beta_0"))
μ = β_0[..., None] + β_1[..., None] * adelie_flipper_length
mass = yield tfd.Independent(
tfd.Normal(loc=μ, scale=σ[..., None]),
reinterpreted_batch_ndims=1,
name="mass")
# If use non-centered predictor, this will give the same model as
mcmc_samples, sampler_stats = run_mcmc(
posterior={
k:np.swapaxes(v, 1, 0)
for k, v in mcmc_samples._asdict().items()},
sample_stats={
k:np.swapaxes(sampler_stats[k], 1, 0)
for k in ["target_log_prob", "diverging", "accept_ratio", "n_steps"]}
)
The mathematical model we defined in Code Block tfp_penguins_centered_predictor is identical to the PyMC3 model model_adelie_flipper_regression from Code Block non_centered_regression, with sole difference being the centering of the predictor. PPL wise however, the structure of TFP necessitates the addition of tensor_x[..., None] in various lines to extend a batch of scalars so that they are broadcastable with a batch of vectors. Specifically None appends a new axis, which could also be done using np.newaxis or tf.newaxis. We also wrap the model in a function so we can easily condition on different predictors. In this case we use the centered flipper length, but could also use the non-centered predictor which will yield similar results to our previous model.
When we plot our coefficients again, $$\beta_1$$ is the same as our PyMC3 model but the distribution of $$\beta_0$$ has changed. Since we have centered our input data on its mean, the distribution of $$\beta_0$$ is the same as our prediction for the group mean with the non-centered dataset. By centering the data we now can directly interpret $$\beta_0$$ as the distribution of mean masses for Adelie penguins with a mean flipper length. The idea of transforming the input variables can also be performed at arbitrary values of choice. For example, we could subtract out the minimum flipper length and fit our model. In this transformation this would change the interpretation $$\beta_0$$ to the distribution of means for the smallest observed flipper length. For a greater discussion of transformations in linear regression we recommend Applied Regression Analysis and Generalized Linear Models [31].
## 3.3. Multiple Linear Regression¶
In many species there is a dimorphism, or difference, between different sexes. The study of sexual dimorphism in penguins actually was the motivating factor for collecting the Palmer Penguin dataset [32]. To study penguin dimorphism more closely let us add a second covariate, this time sex, encoding it as a categorical variable and seeing if we can estimate a penguins mass more precisely.
Listing 3.16 penguin_mass_multi
# Binary encoding of the categorical predictor
with pm.Model() as model_penguin_mass_categorical:
σ = pm.HalfStudentT("σ", 100, 2000)
β_0 = pm.Normal("β_0", 0, 3000)
β_1 = pm.Normal("β_1", 0, 3000)
β_2 = pm.Normal("β_2", 0, 3000)
μ = pm.Deterministic(
"μ", β_0 + β_1 * adelie_flipper_length_obs + β_2 * sex_obs)
mass = pm.Normal("mass", mu=μ, sigma=σ, observed=adelie_mass_obs)
inf_data_penguin_mass_categorical = pm.sample(
target_accept=.9, return_inferencedata=True)
You will notice a new parameter, $$\beta_{2}$$ contributing to the value of $$\mu$$. As sex is a categorical predictor (in this example just female or male), we encode it as 1 and 0, respectively. For the model this means that the value of $$\mu$$, for females, is a sum over 3 terms while for males is a sum of two terms (as the $$\beta_2$$ term will zero out).
Syntactic Linear Sugar
Linear models are so widely used that specialized syntax, methods, and libraries have been written just for regression. One such library is Bambi (BAyesian Model-Building Interface[33]). Bambi is a Python package for fitting generalized linear hierarchical models using a formula-based syntax, similar to what one might find in R packages, like lme4 [34], nlme [35], rstanarm [36] or brms [37]). Bambi uses PyMC3 underneath and provides a higher level API. To write the same model, if disregarding the priors[4] as the one in Code Block penguin_mass_multi in Bambi we would write:
Listing 3.17 bambi_categorical
import bambi as bmb
model = bmb.Model("body_mass_g ~ flipper_length_mm + sex",
trace = model.fit()
The priors are automatically assigned if not provided, as is the case in the code example above. Internally, Bambi stores virtually all objects generated by PyMC3, making it easy for users to retrieve, inspect, and modify those objects. Additionally Bambi returns an az.InferenceData object which can be directly used with ArviZ.
Since we have encoded male as 0 this posterior from model_penguin_mass_categorical estimates the difference in mass compared to a female Adelie penguin with the same flipper length. This last part is quite important, by adding a second covariate we now have a multiple linear regression and we must use more caution when interpreting the coefficients. In this case, the coefficients provides the relationship of a covariate into the response variable, if all other covariates are held constant [5].
We again can compare the standard deviations across our three models in Fig. 3.15 to see if we have reduced uncertainty in our estimate and once again the additional information has helped to improve the estimate. In this case our estimate of $$\sigma$$ has dropped a mean of 462 grams in our no covariate model defined in Code Block penguin_mass to a mean value 298 grams from the linear model defined in Code Block penguin_mass_multi that includes flipper length and sex as a covariates. This reduction in uncertainty suggests that sex does indeed provide information for estimating a penguin’s mass.
Listing 3.18 forest_multiple_models
az.plot_forest([inf_data_adelie_penguin_mass,
inf_data_penguin_mass_categorical],
var_names=["σ"], combined=True)
More covariates is not always better
All model fitting algorithms will find a signal, even if it is random noise. This phenomenon is called overfitting and it describes a condition where the algorithm can quite handily map covariates to outcomes in seen cases, but fails to generalize to new observations. In linear regressions we can show this by generating 100 random covariates, and fitting them to a random simulated dataset [10]. Even though there is no relation, we would be led to believe our linear model is doing quite well.
### 3.3.1. Counterfactuals¶
In Code Block penguins_ppd we made a prediction using parameters fitted in a model with a single covariate and our target, and changing that covariate, flipper length, to get an estimate of mass at that fixed flipper length. In multiple regression, we can do something similar, where we take our regression, hold all covariates constant except one, and see how that change to that one covariate changes our expected outcome. This analysis is called a counterfactual analysis. Let us extend the multiple regression from the previous section (Code Block penguin_mass_multi), this time including bill length, and run a counterfactual analysis in TFP. The model building and inference is shown in Code Block tfp_flipper_bill_sex.
Listing 3.19 tfp_flipper_bill_sex
def gen_jd_flipper_bill_sex(flipper_length, sex, bill_length, dtype=tf.float32):
flipper_length, sex, bill_length = tf.nest.map_structure(
lambda x: tf.constant(x, dtype),
(flipper_length, sex, bill_length)
)
@tfd.JointDistributionCoroutine
def jd_flipper_bill_sex():
σ = yield root(
tfd.HalfStudentT(df=100, loc=0, scale=2000, name="sigma"))
β_0 = yield root(tfd.Normal(loc=0, scale=3000, name="beta_0"))
β_1 = yield root(tfd.Normal(loc=0, scale=3000, name="beta_1"))
β_2 = yield root(tfd.Normal(loc=0, scale=3000, name="beta_2"))
β_3 = yield root(tfd.Normal(loc=0, scale=3000, name="beta_3"))
μ = (β_0[..., None]
+ β_1[..., None] * flipper_length
+ β_2[..., None] * sex
+ β_3[..., None] * bill_length
)
mass = yield tfd.Independent(
tfd.Normal(loc=μ, scale=σ[..., None]),
reinterpreted_batch_ndims=1,
name="mass")
return jd_flipper_bill_sex
jd_flipper_bill_sex = gen_jd_flipper_bill_sex(
mcmc_samples, sampler_stats = run_mcmc(
1000, jd_flipper_bill_sex, n_chains=4, num_adaptation_steps=1000,
In this model you will note the addition of another coefficient beta_3 to correspond to the addition of bill length as a covariate. After inference, we can simulate the mass of penguins with different fictional flipper lengths, while holding the sex constant at male, and the bill length at the observed mean of the dataset. This is done in Code Block tfp_flipper_bill_sex_counterfactuals with the result shown in Fig. 3.16. Again since we wrap the model generation in a Python function (a functional programming style approach), it is easy to condition on new predictors, which is useful for counterfactual analyses.
Listing 3.20 tfp_flipper_bill_sex_counterfactuals
mean_flipper_length = penguins.loc[adelie_mask, "flipper_length_mm"].mean()
# Counterfactual dimensions is set to 21 to allow us to get the mean exactly
counterfactual_flipper_lengths = np.linspace(
mean_flipper_length-20, mean_flipper_length+20, 21)
sex_male_indicator = np.zeros_like(counterfactual_flipper_lengths)
mean_bill_length = np.ones_like(
counterfactual_flipper_lengths) * bill_length_obs.mean()
jd_flipper_bill_sex_counterfactual = gen_jd_flipper_bill_sex(
counterfactual_flipper_lengths, sex_male_indicator, mean_bill_length)
ppc_samples = jd_flipper_bill_sex_counterfactual.sample(value=mcmc_samples)
estimated_mass = ppc_samples[-1].numpy().reshape(-1, 21)
Following McElreath[10] Fig. 3.16 is called a counterfactual plot. As the word counterfactual implies, we are evaluating a situation counter to the observed data, or facts. In other words, we are evaluating situations that have not happened. The simplest use of a counterfactual plot is to adjust a covariate and explore the result, exactly like we just did. This is great, as it enables us to explore what-if scenarios, that could be beyond our reach otherwise [6]. However, we must be cautious when interpreting this trickery. The first trap is that counterfactual values may be impossible, for example, no penguin may ever exist with a flipper length larger than 1500mm but the model will happily give us estimates for this fictional penguin. The second is more insidious, we assumed that we could vary each covariate independently, but in reality this may not be possible. For example, as a penguin’s flipper length increases, its bill length may as well. Counterfactuals are powerful in that they allow us to explore outcomes that have not happened, or that we at least did not observe happen. But they can easily generate estimates for situations that will never happen. It is the model that will not discern between the two, so you as a modeler must.
Correlation vs Causality
When interpreting linear regressions it is tempting to say “An increase in $$X$$ causes and increase in $$Y$$”. This is not necessarily the case, in fact causal statements can not be made from a (linear) regression alone. Mathematically a linear model links two (or more variables) together but this link does not need to be causal. For example, increasing the amount of water we provide to a plant can certainly (and causally) increase the plant’s growth (at least within some range), but nothing prevents us from inverting this relationship in a model and use the growth of plants to estimate the amount of rain, even when plant growth do not cause rain [7]. The statistical sub-field of Causal Inference is concerned with the tools and procedures necessary to make causal statements either in the context of randomized experiments or observational studies (see Chapter 7 for a brief discussion)
## 3.4. Generalized Linear Models¶
All linear models discussed so far assumed the distribution of observations are conditionally Gaussian which works well in many scenarios. However, we may want to use other distributions. For example, to model things that are restricted to some interval, a number in the interval $$[0, 1]$$ like probabilities, or natural numbers $$\{1, 2, 3, \dots \}$$ like counting events. To do this we will take our linear function, $$\mathbf{X} \mathit{\beta}$$, and modify it using an inverse link function [8] $$\phi$$ as shown in Equation (3.5).
(3.5)\begin{align}\begin{aligned}\begin{split}\begin{split} \mu =& \phi(\mathbf{X} \beta) \\ Y \sim& \Psi (\mu, \theta)\end{split}\\\end{split}\end{aligned}\end{align}
where $$\Psi$$ is some distribution parameterized by $$\mu$$ and $$\theta$$ indicating the data likelihood.
The specific purpose of the inverse link function is to map outputs from the range of real numbers $$(-\infty, \infty)$$ to a parameter range of the restricted interval. In other words the inverse link function is the specific “trick” we need to take our linear models and generalize them to many more model architectures. We are still dealing with a linear model here in the sense that the expectation of the distribution that generates the observation still follows a linear function of the parameter and the covariates but now we can generalize the use and application of these models to many more scenarios [9].
### 3.4.1. Logistic Regression¶
One of the most common generalized linear model is the logistic regression. It is particularly useful in modeling data where there are only two possible outcomes, we observed either one thing or another thing. The probability of a head or tails outcome in a coin flip is the usual textbook example. More “real world” examples includes the chance of a defect in manufacturing, a negative or positive cancer test, or the failure of a rocket launch [38]. In a logistic regression the inverse link function is called, unsurprisingly, the logistic function, which maps $$(-\infty, \infty)$$ to the $$(0,1)$$ interval. This is handy because now we can map linear functions to the range we would expect for a parameter that estimates probability values, that must be in the range 0 and 1 by definition.
(3.6)$p = \frac{1}{1+e^{-\mathbf{X}\beta}}$
With logistic regression we are able to use linear models to estimate probabilities of an event. Sometimes, instead we want to classify, or to predict, a specific class given some data. In order to do so we want to turn the continuous prediction in the interval $$(-\infty, \infty)$$ to one between 0 and 1. We can do this with a decision boundary to make a prediction in the set $${0,1}$$. Let us assume we want our decision boundary set at a probability of 0.5. For a model with an intercept and one covariate we have:
(3.7)$\begin{split}\begin{split} 0.5 &= logistic(\beta_{0} + \beta_{1}*x) \\ logit(0.5) &= \beta_{0} + \beta_{1}*x \\ 0 &= \beta_{0} + \beta_{1}*x \\ x &= -\frac{\beta_{0}}{\beta_{1}} \\ \end{split}\end{split}$
Note that $$logit$$ is the inverse of $$logistic$$. That is, once a logistic model is fitted we can use the coefficients $$\beta_0$$ and $$\beta_1$$ to easily compute the value of $$x$$ for which the probability of the class is greater than 0.5.
### 3.4.2. Classifying Penguins¶
In the previous sections we used the sex, and bill length of a penguin to estimate the mass of a penguin. Lets now alter the question, if we were given the mass, sex, and bill length of a penguin can we predict the species? Let us use two species Adelie and Chinstrap to make this a binary task. Like last time we use a simple model first with just one covariate, bill length. We write this logistic model in Code Block model_logistic_penguins_bill_length
Listing 3.21 model_logistic_penguins_bill_length
species_filter = penguins["species"].isin(["Adelie", "Chinstrap"])
bill_length_obs = penguins.loc[species_filter, "bill_length_mm"].values
species = pd.Categorical(penguins.loc[species_filter, "species"])
with pm.Model() as model_logistic_penguins_bill_length:
β_0 = pm.Normal("β_0", mu=0, sigma=10)
β_1 = pm.Normal("β_1", mu=0, sigma=10)
μ = β_0 + pm.math.dot(bill_length_obs, β_1)
# Application of our sigmoid link function
θ = pm.Deterministic("θ", pm.math.sigmoid(μ))
# Useful for plotting the decision boundary later
bd = pm.Deterministic("bd", -β_0/β_1)
# Note the change in likelihood
yl = pm.Bernoulli("yl", p=θ, observed=species.codes)
prior_predictive_logistic_penguins_bill_length = pm.sample_prior_predictive()
trace_logistic_penguins_bill_length = pm.sample(5000, chains=2)
inf_data_logistic_penguins_bill_length = az.from_pymc3(
prior=prior_predictive_logistic_penguins_bill_length,
trace=trace_logistic_penguins_bill_length)
In generalized linear models, the mapping from parameter prior to response can sometimes be more challenging to understand. We can utilize prior predictive samples to help us visualize the expected observations. In our classifying penguins example we find it reasonable to equally expect a Chinstrap penguin, as we would an Adelie penguin, at all bill lengths, prior to seeing any data. We can double-check our modeling intention has been represented correctly by our priors and model using the prior predictive distribution. The classes are roughly even in Fig. 3.18 prior to seeing data which is what we would expect.
After fitting the parameters in our model we can inspect the coefficients using az.summary(.) function (see Table 3.3). While we can read the coefficients they are not as directly interpretable as in a linear regression. We can tell there is some relationship with bill length and species given the positive $$\beta_1$$ coefficient whose HDI does not cross zero. We can interpret the decision boundary fairly directly seeing that around 44 mm in bill length is the nominal cutoff for one species to another. Plotting the regression output in Fig. 3.19 is much more intuitive. Here we see the now familiar logistic curve move from 0 on the left to 1 on the right as the classes change, and a decision boundary where one would expect it given the data.
mean sd hdi_3% hdi_97% $$\beta_0$$ -46.052 7.073 -58.932 -34.123 $$\beta_1$$ 1.045 0.162 0.776 1.347
Let us try something different, we still want to classify penguins but this time using mass as a covariate. Code Block model_logistic_penguins_mass shows a model for that purpose.
Listing 3.22 model_logistic_penguins_mass
mass_obs = penguins.loc[species_filter, "body_mass_g"].values
with pm.Model() as model_logistic_penguins_mass:
β_0 = pm.Normal("β_0", mu=0, sigma=10)
β_1 = pm.Normal("β_1", mu=0, sigma=10)
μ = β_0 + pm.math.dot(mass_obs, β_1)
θ = pm.Deterministic("θ", pm.math.sigmoid(μ))
bd = pm.Deterministic("bd", -β_0/β_1)
yl = pm.Bernoulli("yl", p=θ, observed=species.codes)
inf_data_logistic_penguins_mass = pm.sample(
5000, target_accept=.9, return_inferencedata=True)
mean sd hdi_3% hdi_97% $$\beta_0$$ -1.131 1.317 -3.654 1.268 $$\beta_1$$ 0.000 0.000 0.000 0.001
Our tabular summary in Table 3.4 shows that $$\beta_1$$ is estimated to be 0 indicating there is not enough information in the mass covariate to separate the two classes. This is not necessarily a bad thing, just the model indicating to us that it does not find discernible difference in mass between these two species. This becomes quite evident once we plot the data and logistic regression fit in Fig. 3.20.
We should not let this lack of relationship discourage us, effective modeling includes a dose of trial an error. This does not mean try random things and hope they work, it instead means that it is ok to use the computational tools to provide you clues to the next step.
Let us now try using both bill length and mass to create a multiple logistic regression in Code Block model_logistic_penguins_bill_length_mass and plot the decision boundary again in Fig. 3.21. This time the axes of the figure are a little bit different. Instead of the probability of class on the Y-axis, we instead have mass. This way we can see the decision boundary between the dependent variables. All these visual checks have been helpful but subjective. We can quantify our fits numerically as well using diagnostics.
Listing 3.23 model_logistic_penguins_bill_length_mass
X = penguins.loc[species_filter, ["bill_length_mm", "body_mass_g"]]
# Add a column of 1s for the intercept
X.insert(0,"Intercept", value=1)
X = X.values
with pm.Model() as model_logistic_penguins_bill_length_mass:
β = pm.Normal("β", mu=0, sigma=20, shape=3)
μ = pm.math.dot(X, β)
θ = pm.Deterministic("θ", pm.math.sigmoid(μ))
bd = pm.Deterministic("bd", -β[0]/β[2] - β[1]/β[2] * X[:,1])
yl = pm.Bernoulli("yl", p=θ, observed=species.codes)
inf_data_logistic_penguins_bill_length_mass = pm.sample(
1000,
return_inferencedata=True)
To evaluate the model fit for logistic regressions we can use a separation plot [39], as shown in Code Block separability_plot and Fig. 3.22. A separation plot is a way to assess the calibration of a model with binary observed data. It shows the sorted predictions per class, the idea being that with perfect separation there would be two distinct rectangles. In our case we see that none of our models did a perfect job separating the two species, but the models that included bill length performed much better than the model that included mass only. In general, perfect calibration is not the goal of a Bayesian analysis, nevertheless separation plots (and other calibration assessment methods like LOO-PIT) can help us to compare models and reveal opportunities to improve them.
Listing 3.24 separability_plot
models = {"bill": inf_data_logistic_penguins_bill_length,
"mass": inf_data_logistic_penguins_mass,
"mass bill": inf_data_logistic_penguins_bill_length_mass}
_, axes = plt.subplots(3, 1, figsize=(12, 4), sharey=True)
for (label, model), ax in zip(models.items(), axes):
az.plot_separation(model, "yl", ax=ax, color="C4")
ax.set_title(label)
We can also use LOO to compare the three models we have just created, the one for the mass, the one for the bill length and the one including both covariates in Code Block penguin_model_loo and Table 3.5. According to LOO the mass only model is the worst at separating the species, the bill length only is the middle candidate model, and the mass and bill length model performed the best. This is unsurprising given what we have seen from the plots, and now we have a numerical confirmation as well.
Listing 3.25 penguin_model_loo
az.compare({"mass":inf_data_logistic_penguins_mass,
"bill": inf_data_logistic_penguins_bill_length,
"mass_bill":inf_data_logistic_penguins_bill_length_mass})
rank loo p_loo d_loo weight se dse warning loo_scale mass_bill 0 -11.3 1.6 0.0 1.0 3.1 0.0 True log bill 1 -27.0 1.7 15.6 0.0 6.2 4.9 True log mass 2 -135.8 2.1 124.5 0.0 5.3 5.8 True log
### 3.4.3. Interpreting Log Odds¶
In a logistic regression the slope is telling you the increase in log odds units when x is incremented one unit. Odds most simply are the ratio between the probability of occurrence and probability of no occurrence. For example, in our penguin example if we were to pick a random penguin from Adelie or Chinstrap penguins the probability that we pick an Adelie penguin would be 0.68 as seen in Code Block adelie_prob
# Class counts of each penguin species
counts = penguins["species"].value_counts()
chinstrap_count = counts["Chinstrap"]
array([0.68224299])
And for the same event the odds would be
adelie_count / chinstrap_count
array([2.14705882])
Odds are made up of the same components as probability but are transformed in a manner that makes interpreting the ratio of one event occurring from another more straightforward. Stated in odds, if we were to randomly sample from Adelie and Chinstrap penguins we would expect to end up with a ratio of 2.14 more Adelie penguins than Chinstrap penguins as calculated by Code Block adelie_odds.
Using our knowledge of odds we can define the logit. The logit is the natural log of the odds which is the fraction shown in Equation (3.8). We can rewrite the logistic regression in Equation (3.6) in an alternative form of using the logit.
(3.8)$\log \left(\frac{p}{1-p} \right) = \boldsymbol{X} \beta$
This alternative formulation lets us interpret the coefficients of logistic regression as the change in log odds. Using this knowledge we can calculate the probability of observing Adelie to Chinstrap penguins given a change in the observed bill length as shown in Code Block logistic_interpretation. Transformations like these are both interesting mathematically, but also very practically useful when discussing statistical results, a topic we will discuss more deeply in Sharing the Results With a Particular Audience).
Listing 3.28 logistic_interpretation
x = 45
β_0 = inf_data_logistic_penguins_bill_length.posterior["β_0"].mean().values
β_1 = inf_data_logistic_penguins_bill_length.posterior["β_1"].mean().values
bill_length = 45
val_1 = β_0 + β_1*bill_length
val_2 = β_0 + β_1*(bill_length+1)
f"""(Class Probability change from 45mm Bill Length to 46mm:
{(special.expit(val_2) - special.expit(val_1))*100:.0f}%)"""
'Class Probability change from 45mm Bill Length to 46mm: 15%'
## 3.5. Picking Priors in Regression Models¶
Now that we are familiar with generalized linear models let us focus on the prior and its effect on posterior estimation. We will be borrowing an example from Regression and Other Stories [40], in particular a study[13] where the relationship between the attractiveness of parents and the percentage of girl births of those parents is explored. In this study researchers estimated the attractiveness of American teenagers on a five-point scale. Eventually many of these subjects had children, of which the ratio of gender per each attractiveness category was calculated, the resulting data points of which are shown in Code Block uninformative_prior_sex_ratio and plotted in Fig. 3.23. In the same code block we also write a model for single variable regression. This time however, focus specifically on how priors and likelihoods should be assessed together and not independently.
Listing 3.29 uninformative_prior_sex_ratio
x = np.arange(-2, 3, 1)
y = np.asarray([50, 44, 50, 47, 56])
with pm.Model() as model_uninformative_prior_sex_ratio:
σ = pm.Exponential("σ", .5)
β_1 = pm.Normal("β_1", 0, 20)
β_0 = pm.Normal("β_0", 50, 20)
μ = pm.Deterministic("μ", β_0 + β_1 * x)
ratio = pm.Normal("ratio", mu=μ, sigma=σ, observed=y)
prior_predictive_uninformative_prior_sex_ratio = pm.sample_prior_predictive(
samples=10000
)
trace_uninformative_prior_sex_ratio = pm.sample()
inf_data_uninformative_prior_sex_ratio = az.from_pymc3(
trace=trace_uninformative_prior_sex_ratio,
prior=prior_predictive_uninformative_prior_sex_ratio
)
Nominally we will assume births are equally split between males and females, and that attractiveness has no effect on sex ratio. This translates to setting the mean of the prior for intercept $$\beta_0$$ to be 50 and the prior mean for the coefficient $$\beta_1$$ to be 0. We also set a wide dispersion to express our lack of knowledge about both the intercept and the effect of attractiveness on sex ratio. This is not a fully uninformative prior, of which we covered in Section A Few Options to Quantify Your Prior Information, however, a very wide prior. Given these choices we can write our model in Code Block uninformative_prior_sex_ratio, run inference, and generate samples to estimate posterior distribution. From the data and model we estimate that the mean of $$\beta_1$$ to be 1.4, meaning the least attractive group when compared to the most attractive group the birth ratio will differ by 7.4% on average. In Fig. 3.24 if we include the uncertainty, the ratio can vary by over 20% per unit of attractiveness [10] from a random sample of 50 possible “lines of fit” prior to conditioning the parameters to data.
From a mathematical lens this result is valid. But from the lens of our general knowledge and our understanding of birth sex ratio outside of this studies, these results are suspect. The “natural” sex ratio at birth has been measured to be around 105 boys per 100 girls (ranging from around 103 to 107 boys), which means the sex ratio at birth is 48.5% female, with a standard deviation of 0.5. Moreover, even factors that are more intrinsically tied to human biology do not affect birth ratios to this magnitude, weakening the notion that attractiveness, which is subjective, should have this magnitude of effect. Given this information a change of 8% between two groups would require extraordinary observations.
Let us run our model again but this time set more informative priors shown in Code Block informative_prior_sex_ratio that are consistent with this general knowledge. Plotting our posterior samples the concentration of coefficients is smaller and the plotted posterior lines fall into bounds that are more reasonable when considering possible ratios.
Listing 3.30 informative_prior_sex_ratio
with pm.Model() as model_informative_prior_sex_ratio:
σ = pm.Exponential("σ", .5)
# Note the now more informative priors
β_1 = pm.Normal("β_1", 0, .5)
β_0 = pm.Normal("β_0", 48.5, .5)
μ = pm.Deterministic("μ", β_0 + β_1 * x)
ratio = pm.Normal("ratio", mu=μ, sigma=σ, observed=y)
prior_predictive_informative_prior_sex_ratio = pm.sample_prior_predictive(
samples=10000
)
trace_informative_prior_sex_ratio = pm.sample()
inf_data_informative_prior_sex_ratio = az.from_pymc3(
trace=trace_informative_prior_sex_ratio,
prior=prior_predictive_informative_prior_sex_ratio)
This time we see that estimated effect of attractiveness on gender is negligible, there simply was not enough information to affect the posterior. As we mentioned in Section A Few Options to Quantify Your Prior Information choosing a prior is both a burden and a blessing. Regardless of which you believe it is, it is important to use this statistical tool with an explainable and principled choice.
## 3.6. Exercises¶
E1. Comparisons are part of everyday life. What is something you compare on a daily basis and answer the following question:
• What is the numerical quantification you use for comparison?
• How do you decide on the logical groupings for observations? For example in the penguin model we use species or sex
• What point estimate would you use to compare them?
E2. Referring to Model penguin_mass complete the following tasks.
1. Compute the values of Monte Carlo Standard Error Mean using az.summary. Given the computed values which of the following reported values of $$\mu$$ would not be well supported as a point estimate? 3707.235, 3707.2, or 3707.
2. Plot the ESS and MCSE per quantiles and describe the results.
3. Resample the model using a low number of draws until you get bad values of $$\hat R$$, and ESS
4. Report the HDI 50% numerically and using az.plot_posterior
E3. In your own words explain how regression can be used to do the following:
1. Covariate estimation
2. Prediction
3. Counterfactual analysis
Explain how they are different, the steps to perform each, and situations where they would be useful. Use the penguin example or come up with your own.
E4. In Code Block flipper_centering and Code Block tfp_penguins_centered_predictor we centered the flipper length covariate. Refit the model, but instead of centering, subtract the minimum observed flipped length. Compare the posterior estimates of the slope and intercept parameters of the centered model. What is different, what is the same. How does the interpretation of this model change when compared to the centered model?
E5. Translate the following primitives from PyMC3 to TFP. Assume the model name is pymc_model
1. pm.StudentT("x", 0, 10, 20)
2. pm.sample(chains=2)
Hint: write the model and inference first in PyMC3, and find the similar primitives in TFP using the code shown in this chapter.
E6. PyMC3 and TFP use different argument names for their distribution parameterizations. For example in PyMC3 the Uniform Distribution is parameterized as pm.Uniform.dist(lower=, upper=) whereas in TFP it is tfd.Uniform(low=, high=). Use the online documentation to identify the difference in argument names for the following distributions.
1. Normal
2. Poisson
3. Beta
4. Binomial
5. Gumbel
E7. A common modeling technique for parameterizing Bayesian multiple regressions is to assign a wide prior to the intercept, and assign more informative prior to the slope coefficients. Try modifying the model_logistic_penguins_bill_length_mass model in Code Block model_logistic_penguins_bill_length_mass. Do you get better inference results? Note that there are divergences with the original parameterization.
E8. In linear regression models we have two terms. The mean linear function and the noise term. Write down these two terms in mathematical notation, referring to the equations in this chapter for guidance. Explain in your own words what the purpose of these two parts of regression are. In particular why are they useful when there is random noise in any part of the data generating or data collection process.
E9. Simulate the data using the formula y = 10 + 2x + $$\mathcal{N}(0, 5)$$ with integer covariate x generated np.linspace(-10, 20, 100). Fit a linear model of the form $$b_0 + b_1*X + \sigma$$. Use a Normal distribution for the likelihood and covariate priors and a Half Student’s T prior for the noise term as needed. Recover the parameters verifying your results using both a posterior plot and a forest plot.
E10. Generate diagnostics for the model in Code Block non_centered_regression to verify the results shown in the chapter can be trusted. Use a combination of visual and numerical diagnostics.
E11. Refit the model in Code Block non_centered_regression on Gentoo penguins and Chinstrap penguins. How are the posteriors different from each other? How are they different from the Adelie posterior estimation? What inferences can you make about the relationship between flipper length and mass for these other species of penguins? What does the change in $$\sigma$$ tell you about the ability of flipper length to estimate mass?
M12. Using the model in Code Block tfp_flipper_bill_sex_counterfactuals run a counterfactual analysis for female penguin flipper length with mean flipper length and a bill length of 20mm. Plot a kernel density estimate of the posterior predictive samples.
M13. Duplicate the flipper length covariate in Code Block non_centered_regression by adding a $$\beta_2$$ coefficient and rerun the model. What do diagnostics such as ESS and rhat indicate about this model with a duplicated coefficient?
M14. Translate the PyMC3 model in Code Block non_centered_regression into Tensorflow Probability. List three of the syntax differences.
M15. Translate the TFP model in Code Block tfp_penguins_centered_predictor into PyMC3. List three of the syntax differences.
M16. Use a logistic regression with increasing number of covariates to reproduce the prior predictive distributions in Fig. 2.3. Explain why its the case that a logistic regression with many covariates generate a prior response with extreme values.
H17. Translate the PyMC3 model in Code Block model_logistic_penguins_bill_length_mass into TFP to classify Adelie and Chinstrap penguins. Reuse the same model to classify Chinstrap and Gentoo penguins. Compare the coefficients, how do they differ?
H18. In Code Block penguin_mass our model allowed for negative values mass. Change the model so negative values are no longer possible. Run a prior predictive check to verify that your change was effective. Perform MCMC sampling and plot the posterior. Has the posterior changed from the original model? Given the results why would you choose one model over the other and why?
H19. The Palmer Penguin dataset includes additional data for the observed penguins such as island and bill depth. Include these covariates into the linear regression model defined in Code Block non_centered_regression in two parts, first adding bill depth, and then adding the island covariates. Do these covariates help estimate Adelie mass more precisely? Justify your answer using the parameter estimates and model comparison tools.
H20. Similar the exercise 2H19, see if adding bill depth or island covariates to the penguin logistic regression help classify Adelie and Gentoo penguins more precisely. Justify if the additional covariates helped using the numerical and visual tools shown in this chapter.
|
2022-10-07 21:21:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5873777866363525, "perplexity": 1310.6572583644374}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00236.warc.gz"}
|
https://publications.iitm.ac.in/publication/reply-to-comments-on-an-asymmetric-24-ghz-directional-coupler
|
X
Reply to Comments on An Asymmetric 2.4 GHz Directional Coupler Using Electrical Balance''
Abhishek Kumar,
Published in Institute of Electrical and Electronics Engineers Inc.
2019
Abstract
The theory presented by Kumar et al. is based on the assumption that edge-coupled transmission lines (TLs) have infinite even-mode characteristic impedance and is clearly mentioned. Simulation is done to show that better than 10 dB matching can be achieved with a practical TL without using additional TL sections as long as impedance transformed by the even-mode of TL is much lower than the port impedance. The addition of extra quarter-wave TL sections can further improve the matching. IEEE
|
2023-03-22 04:07:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.846367359161377, "perplexity": 1859.7974041768343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943749.68/warc/CC-MAIN-20230322020215-20230322050215-00076.warc.gz"}
|
https://robotwealth.com/machine-learning-for-financial-prediction-experimentation-with-david-aronsons-latest-work-part-2/
|
# Machine learning for Trading: Part 2
Posted on May 10, 2016 by
8,578 Views
## Introduction
My first post on using machine learning for financial prediction took an in-depth look at various feature selection methods as a data pre-processing step in the quest to mine financial data for profitable patterns. I looked at various methods to identify predictive features including Maximal Information Coefficient (MIC), Recursive Feature Elimination (RFE), algorithms with built-in feature selection, selection via exhaustive search of possible generalized linear models, and the Boruta feature selection algorithm. I personally found the Boruta algorithm to be the most intuitive and elegant approach, but regardless of the method chosen, the same features seemed to keep on turning up in the results.
In this post, I will take this analysis further and use these features to build predictive models that could form the basis of autonomous trading systems. Firstly, I’ll provide an overview of the algorithms that I have found to generally perform well on this type of machine learning problem as well as those algorithms recommended by David Aronson (2013) in Statistically Sound Machine Learning for Algorithmic Trading of Financial Instruments (SSML). I’ll also discuss a framework for measuring the performance of various models to facilitate robust comparison and model selection. Finally, I will discuss methods for combining predictions to produce ensembles that perform better than any of the constituent models alone.
Without further ado, let’s dive in and discuss some machine learning algorithms.
## Algorithm selection
Anyone familiar with machine learning can tell you that the quantity of algorithms available to the practitioner these days is staggering. With the rise of open source packages like R, used widely by industry, academics and hobbyists to collaborate on and share machine learning research, some of the most cutting edge statistical learning frameworks are literally at our fingertips.
Its an exciting time, but also somewhat daunting.
At last count there were 81 machine learning packages listed on CRAN’s Machine Learning Task View, and many of those packages provide access to numerous individual algorithms! It’s impractical to perform an exhaustive search for the ‘best’ algorithm for a particular task, but it is certainly possible to arrive at some guidelines around what tends to work better for particular use cases.
Aronson and Masters (2013) prefer linear and quadratic regression, boosted trees, and general regression neural networks. They state that “a single decision tree’s utility is debatable for financial data”, as are bagged ensembles of trees such as random forests, however boosted trees may be more appropriate. Personally, I found that neural networks initialized using stacked autoencoders were more promising than the general regression networks preferred by Aronson and Masters. I also found particularly useful models based on Friedman’s gradient boosting machine (Friedman, 2001). In addition, I was able to obtain surprisingly decent results from a simple k-Nearest Neighbors algorithm, however I had less success with bagging methods like random forests. Like Aronson and Masters, I avoided using single decision trees – for the the number of variables used in the investigation (see next section), there seems little point. The table below shows the algorithms that I investigated and highlights those that showed the most promise for this particular use case.
Even with the small selection listed here, we clearly have some decisions to make. I’ll present some results from these models below.
## Choosing combinations of variables
For the purpose of this exercise, I chose the six features from the previous post that I feel are most likely to convey predictive information about the target variable. There are numerous combinations of features that we could use to build individual models, for example, various combinations of 2, 3, etc variables. Assuming we only build models based on at least two variables, we have 57 possible unique combinations from a pool of six features. Aronson and Masters (2013) caution against using too many variables due to the risk of overfitting the data, stating that two or three variables is usually the practical limit. In this analysis, I’ll take their advice and explore models with two or three features only. This limits the number of unique combinations to 35.
The following features were chosen (refer to the previous post for calculation details):
1. 3-period momentum
2. Delta bollinger width
3. 10-period velocity
4. 10:100 period ATR ratio
5. 10:20 period ATR ratio
6. 7-period ATR
Each feature was normalized to the range $+1 : -1$ using a rolling 50-period window. I generally find that this method gives superior model performance to normalizing over the entire data set.
Download the raw data I used in this study via this link: EUvarsD1. In addition, the code below uses Zorro to calculate the features and target variables listed above and outputs them to a CSV file like the one in the download link. Using this code, you can generate your own feature sets for use in the modelling framework described in this post. You can also use it to change any other parameters, such as the normalization period and the amount of historical data. Using Zorro, this can easily be adapted to other markets as well. Click the box below to expand the code.
/* MACHINE LEARNING FEATURE CREATION
robotwealth.com
Create a CSV file with features and target variable
for use in machine learning framework in R
*/
function run()
{
StartDate = 20090827;
EndDate = 20151231;
LookBack = 141;
BarPeriod = 1440;
asset("EUR/USD");
int period = 50; //lookback for calculating normalized/standardized values
vars Price = series(price());
vars High = series(priceHigh());
vars Low = series(priceLow());
vars Close = series(priceClose());
vars logPrice = series(log(Price[0]));
vars logOpen = series(log(priceOpen()));
vars logHigh = series(log(High[0]));
vars logLow = series(log(Low[0]));
vars logClose = series(log(Close[0]));
vars logDeltaClose = series(logClose[0] - logClose[1]);
//bollinger width
vars bWdith3 = series(log(sqrt(Moment(Price, 3, 2))/Moment(Price, 3, 1)));
// delta bollinger width
vars deltabWdith3 = series(bWdith3[0] - bWdith3[3]);
vars deltabWidth3N = series(Normalize(deltabWdith3, period));
//Price velocity - linear regression of log price
vars velocity10 = series(TSF(logPrice, 10)/ATR(logOpen, logHigh, logLow, logClose, 7));
vars velocity10N = series(Normalize(velocity10, period));
//Price momentum
vars mom3 = series((logDeltaClose[0] - logDeltaClose[3])/sqrt(Moment(logDeltaClose, 100, 2)));
vars mom3N = series(Normalize(mom3, period));
//ATR ratio
vars atrRatFast = series(ATR(10)/ATR(20));
vars atrRatFastN = series(Normalize(atrRatFast, period));
vars atrRatSlow = series(ATR(10)/ATR(100));
vars atrRatSlowN = series(Normalize(atrRatSlow, period));
//ATR
vars ATR7 = series(ATR(7));
vars ATR7N = series(Normalize(ATR7, period));
//objective
vars objective = series( (Close[0]-Close[1])/ATR(100) );
//print variables to CSV
char line[150];
sprintf(line, "%02d/%02d/%04d, %.5f, %.5f, %.5f, %.5f, %.5f, %.5f, %.5f\n", day(1), month(1), year(1), deltabWidth3N[1], velocity10N[1], mom3N[1], atrRatFastN[1], atrRatSlowN[1], ATR7N[1], objective[0]);
if(!is(LOOKBACK)) file_append("Log\\EUvarsD1.csv", line);
}
## A framework for measuring and comparing performance
We have 35 possible variable combinations and 7 algorithms with which to construct predictive models. The subset of variables was constrained based on the feature selection process discussed in the last post. I’ve constrained the list of algorithms by attempting to maximize their diversity. For example, I’ve chosen a simple nearest neighbor algorithm, a bagging algorithm, boosting algorithms, tree-based models, neural networks and so on. Clearly, I’ve constrained my universe of models to only a fraction of what is possible. Still, there is a lot of choice. We could randomly choose various models in the hope of landing on something profitable, however since with today’s computing power we very much have the means to implement it, I much prefer the idea of a systematic, comprehensive assessment. I’ll describe my framework for such an assessment below.
In this example, I will train each model to maximize the return of trading the model’s predictions normalized to the recent volatility measured by the 100-period ATR. Simple enough, but how would we objectively assess the performance of each model against this metric? And more importantly, how do we account for overfitting? Ideally, we would measure the out-of-sample performance of each model, but of course we have a finite amount of data and we need to maximize its utility. This is where cross validation comes in. There are plenty of great sources on the internet for detailed descriptions of cross validation, so I will only describe the procedure briefly.
Cross validation involves dividing the training data into k portions, training a model on k-1 portions and testing it on the portion that was held out. The performance on the hold out set is a first estimate of the true out of sample performance. The procedure is repeated using each subsample as the hold out test set and the results finally aggregated. This is known as k-fold cross validation. The estimate can be made more robust (usually) by randomly resampling the data into new portions and repeating the procedure. This is known as repeated k-fold cross validation. Other variations include bootstrapped cross validation (where resampling is undertaken with replacement – that is, individual observations can appear more than once in any subsample) and leave-one-out cross validation (where one observation at a time is held out, the remainder of the data used to train a model, and the performance of the model estimated by predicting the held out observation, repeated for every observation in the training set).
Cross validation is a very useful procedure for estimating the true out of sample performance while maximizing the utility of the training data. This sounds fantastic, and it is for most data sets, but time series data presents some unique challenges. Consider a data set with no temporal dimension and in which the observations are independent and identically distributed. In any predictive modelling task for such a data set, we can never have too much data (within the practical constraints of computing resources of course). This is not necessarily the case with time series data.
If we use too much of the time series to train our model, we risk including irrelevant and outdated patterns which of course by definition are unlikely to show up in the next instances of the time series. If we use too little data, we run the risk of under-fitting the model and missing the predictive information we hope to capture. How much data do we need then? I don’t know, but I intend to find out.
Rob Hyndman describes a method for cross-validating time series data which is extremely useful for algorithmic system development as it actually mirrors a procedure that can be used for live trading. Also known as “rolling origin forecast evaluation” and “forward chaining”, it is the best method I have found for quickly estimating the future performance of a model. The procedure is implemented thus:
1. Fit the model to a window of sequential data of length $t$: $x_1, x_2, …, x_t$
2. Predict the next value in the sequence, $x_{t+1}$, and compute the forecast error by comparing the prediction with the observed value.
3. Shift the origin of the window forward by one and repeat steps 1 and 2.
4. Repeat the process until $x_1=x_{n-t}$ where $n$ is the length of the series (ie until we run out of data for creating a window of length $t$)
5. Aggregate the forecast error for an estimate of the out of sample performance of the model
For readers familiar with walk forward analysis, time series cross validation is equivalent to walk forward analysis with the test set being a single period.
Before you continue....
Want to see how we trade for a living with algos — so you can too?
Learn where to start and see how systematic retail traders generate profit long-term:
## Is there an optimal window length?
As mentioned above, I want to investigate the existence or otherwise of an optimal amount of data to include in the rolling window of training data. How much data is too much? Too little? At what point do we incorporate old and outdated information into a model to its detriment? At what point do we underfit the model due to lack of data?
For this experiment, I’ll model the EUR/USD exchange rate using a gradient boosting machine, a neural network and a k-nearest neighbors algorithm using various window lengths in the cross-validation procedure. My hypothesis is that there exists an optimal amount of data that maximizes the performance of a model for this particular time series. I am choosing three different algorithms in order to test the sensitivity of the optimal window length to the choice of algorithm. I expect that the length of the window itself will be an optimization parameter that is different for each market and that may itself change with time.
The figures below show the Sharpe ratios and directional accuracy by window length for the cross-validated model predictions. Returns of the analogous trading system were calculated as follows:
for all $y > 0$, go long at the close and hold the position for 1 period
for all $y < 0$, go short at the close and hold the position for 1 period
where $y$ is the prediction of the next period’s return normalized to the 100-period ATR. Results are exclusive of trading costs.
The Sharpe ratio shows an obvious increase with increasing window length. This came as a surprise to me, as I suspected that there would be a point where data was too far removed from current market conditions to be of use in a predictive trading model. If such a point exists, it is clearly greater than 1,000 days. In hindsight, normalizing each feature using a rolling 50-period normalization window very likely ensures that the model dynamically adapts to changing conditions, but I must admit that I stumbled upon this more by accident than by design.
Setting aside the obvious trend in the Sharpe ratio results, the other thing that jumps out at me is that the cross-validated performance of the k-Nearest Neighbors algorithm is at least on par with, and at times significantly better than the much more complex gradient boosting machines and neural networks. It is interesting that such a simple algorithm can hold its own against the more complex models.
In my last post, I talked about the concept of a “prediction threshold”, being a method of filtering trades by entering the market only when the magnitude of the predicted next-period return exceeds a certain value. My hypothesis for doing so is that there is merit in filtering the small moves, which would be affected by randomness, in favor of the larger moves. Said differently, it makes sense (depending on the objectives of the trader) to target those returns in the tails of the distribution. Testing this hypothesis, I plotted a heatmap of Sharpe ratios by window length and prediction threshold for the neural network model:
In general, there appears to be merit to this hypothesis with Sharpe ratios generally increasing for increasing prediction threshold.
Here’s the code I used for the neural network models (the models based on k-nearest neighbors and gradient boosting machines used a similar procedure):
###############################################
#### ROLLING FORECAST ORIGIN FRAMEWORK ########
###############################################
library(caret)
library(nnet)
library(neuralnet)
library(deepnet)
library(foreach)
library(doParallel)
eu <- read.csv("EUvarsD1.csv", header = F, stringsAsFactors = F)
colnames(eu) <- c("date", "deltabWdith3", "velocity10", "mom3", "atrRatFast", "atrRatSlow", "ATR7", "objective")
# summary function for training caret models on maximum profit
absretSummary <- function (data, lev = NULL, model = NULL) { # for training on a next-period return
positions <- ifelse(data[ , "pred"] > 0.0, 1, ifelse(data[, "pred"] < -0.0, -1, 0))
trades <- positions*data[, "obs"]
names(profit) <- 'profit'
return(profit)
}
############################################################################
# nnet models --------------------------------------------------------------
############################################################################
modellist.nnet <- list()
lengths <- c(50, 100, 150, 200, 250, 300, 350, 400, 450, 500, 600, 700, 800, 900, 1000)
j <- 1
for (i in lengths) {
nnetGrid <- expand.grid(.layer1 = c(2:3), .layer2 = c(0:2), .layer3=0, .hidden_dropout=c(0, 0.1, 0.2), .visible_dropout=c(0, 0.1, 0.2))
timecontrol <- trainControl(method = 'timeslice', initialWindow = i, horizon = 1, summaryFunction=absretSummary, selectionFunction = "best",
returnResamp = 'final', fixedWindow = TRUE, savePredictions = 'final')
cl <- makeCluster(4)
registerDoParallel(cl)
set.seed(503)
modellist.nnet[[j]] <- train(eu[, 2:4], eu[, 8], method = "dnn",
trControl = timecontrol, tuneGrid = nnetGrid, preProcess = c('center', 'scale'))
stopCluster(cl)
j <- j+1
cat("Window: ", i)
}
equity.nnet <- data.frame()
sharpe.nnet <- data.frame()
acc.nnet <- data.frame()
for (j in c(1:15)) {
k <- 1
for (t in c(0, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8)) {
threshold <- t
trades <- ifelse(modellist.nnet[[j]]$pred$pred > threshold, modellist.nnet[[j]]$pred$obs,
ifelse(modellist.nnet[[j]]$pred$pred < -threshold, -modellist.nnet[[j]]$pred$obs, 0))
plot(cumulative, type = 'l', col = 'blue', main = paste0('Model: ', j, 'Thresh: ', threshold))
equity.nnet[j, k] <- cumulative[length(cumulative)]
acc.nnet[j, k] <- (length(y)/length(x))*100
if (length(x) > 100) { # exclude any sharpes with less than 100 trades
b <- sd(x) #std dev of trades
sharpe.nnet[j, k] <- sqrt(252) * a/b
}
else sharpe.nnet[j, k] <- 0
k <- k+1
}
}
rownames(sharpe.nnet) <- lengths
colnames(sharpe.nnet) <- c('0', '0.05', '0.1', '0.15', '0.2', '0.25', '0.3', '0.35', '0.4', '0.45', '0.5', '0.55', '0.6', '0.65',
'0.7', '0.75', '0.8')
# sharpe ratio
sharpe.nnet[ "WINDOW" ] <- rownames(sharpe.nnet)
s.molten <- melt( sharpe.nnet, id.vars="WINDOW", value.name="SHARPE", variable.name="THRESHOLD" )
s.molten <- na.omit(s.molten)
#Factorize 'WINDOW' for plotting
s.molten$WINDOW <- as.character(s.molten$WINDOW)
s.molten$WINDOW <- factor(s.molten$WINDOW, levels=unique(s.molten$WINDOW)) s <- ggplot(s.molten, aes(x=WINDOW, y=THRESHOLD, fill = SHARPE)) + geom_tile(colour = "white") + scale_fill_gradient(low = "white", high = "blue4") + ggtitle("Sharpe Ratio by Window Length and Prediction Threshold") ## Comprehensive model comparison Now that I know that, in general, a longer rolling window helps a predictive model achieve a larger Sharpe ratio, my next experiment will be a comprehensive comparison of the 245 possible models from our 7 algorithms and 35 variable combinations. The results below show the cross-validated Sharpe ratios for each combination of algorithm and variable subset investigated in this study using a rolling window of 1,000 days of history as the training data. In each case, the optimal model was found by tuning the individual algorithm’s hyperparameters across a sensible subset of possible values. No prediction threshold was used to filter trades (that is, each model took a long or short position every day). Once again, transaction costs are excluded. The neural network initialized using a stacked autoencoder is the clear out-performer of this group of algorithms. This model seems to be able to learn the underlying predictive patterns better than any other algorithm used in this study, and it does so consistently regardless of the combination of variables used. Initialization with a stacked autoencoder essentially forces the network to recreate the input data in the pre-training phase, which results in a network that tends to learn the features that form a good representation of its input, reducing the noise component of the input data. With this in mind, it may be possible to use this technique with a larger number of features, potentially foregoing an extensive feature selection process. This is an interesting idea, and something to pursue at another time. We can also see that the boosting algorithms performed relatively well, the multi-adaptive regressive spline models were consistent, and that k-nearest neighbors performed well for such a simple model. The Cubist models were less consistent and the random forest models performed worst of all. The best Sharpe ratio of all the models was 1.72. The following R code shows how I implemented this framework for efficiently testing and comparing the algorithms used in this study: ######################################### # SYTEMATIC MODEL COMPARISON ############ ######################################### library(caret) library(nnet) library(neuralnet) library(deepnet) library(randomForest) library(earth) library(bst) library(Cubist) library(foreach) library(doParallel) library(reshape) library(ggplot2) ###### Import and process data eu <- read.csv("EUvarsD1.csv", header = F, stringsAsFactors = F) colnames(eu) <- c("date", "deltabWdith3", "velocity10", "mom3", "atrRatFast", "atrRatSlow", "ATR7", "objective") ###### Caret support functions absretSummary <- function (data, lev = NULL, model = NULL) { # for training on a next-period return positions <- ifelse(data[ , "pred"] > 0.0, 1, ifelse(data[, "pred"] < -0.0, -1, 0)) trades <- positions*data[, "obs"] profit <- sum(trades) names(profit) <- 'profit' return(profit) } window.length <- 1000 timecontrol <- trainControl(method = 'timeslice', initialWindow = window.length, horizon = 1, summaryFunction=absretSummary, selectionFunction = "best", returnResamp = 'final', fixedWindow = TRUE, savePredictions = 'final') gbmGrid <- expand.grid(shrinkage=0.1, n.trees = c(500, 600, 700), n.minobsinnode = c(1, window.length/50, window.length/25, window.length/10, window.length/5, window.length/2), interaction.depth = c(2, 3)) knnGrid <- expand.grid(k = c(1, 2, 5, 10, window.length/50, window.length/25, window.length/10)) dnnGrid <- expand.grid(.layer1 = c(1:3), .layer2 = c(0:1), .layer3=0, .hidden_dropout=c(0, 0.1, 0.2), .visible_dropout=c(0, 0.1, 0.2)) rfGrid <- expand.grid(.mtry = c(1,2)) earthGrid <- expand.grid(.nprune = c(1:5), .degree = c(1:2)) bstTreeGrid <- expand.grid(.mstop = c(3, 5, 7, 10, 12, 15), .maxdepth = c(1:3), .nu=c(0.1, 0.2, 0.3, 0.4, 0.5)) cubistGrid <- expand.grid(.committees = c(50, 100, 150), .neighbors = c(9)) ###### Model training and tuning models <- c("knn", "gbm", "dnn", "rf", "earth", "bstTree", "cubist") # 2-var models m.list.2var <- list() l <- 1 for (i in models) { for (j in c(2:7)) { for (k in c(2:7)) { if (j >= k) next cl <- makeCluster(4) registerDoParallel(cl) set.seed(503) m.list.2var[[l]] <- train( eu[, c(j,k)], eu[, 8], method = i, trControl = timecontrol, tuneGrid = get(paste0(i, "Grid")) ) stopCluster(cl) l <- l + 1 cat(l, ".", i, "[", j, k, "]\n") } } } # 3-var models m.list.3var <- list() l <- 1 for (i in models) { for (j in c(2:7)) { for (k in c(2:7)) { if (j >= k) next for (m in c(2:7)) { if (k >= m) next cl <- makeCluster(8) registerDoParallel(cl) set.seed(503) m.list.3var[[l]] <- train( eu[, c(j,k,m)], eu[, 8], method = i, trControl = timecontrol, tuneGrid = get(paste0(i, "Grid")) ) stopCluster(cl) l <- l + 1 cat(l, ".", i, "[", j, k, m,"]\n") } } } } # compare performance and generate heat map all.models <- c(m.list.2var, m.list.3var) Sharpe <- vector() vars <- list() method <- vector() for (i in c(1:length(all.models))) { trades <- all.models[[i]]$resample$profit #trades Sharpe[i] <- sqrt(252) * mean(trades)/sd(trades) vars[[i]] <- colnames(all.models[[i]]$trainingData[, 1:(ncol(all.models[[i]]$trainingData)-1)]) method[i] <- all.models[[i]]$method
}
cat("Best Sharpe Ratio: ", max(Sharpe))
perf <- data.frame(method, as.character(vars), Sharpe)
colnames(perf) <- c("Algorithm", "Variables", "Sharpe")
perf$Algorithm <- as.factor(perf$Algorithm)
perf$Variables <- as.factor(perf$Variables)
levels(perf$Algorithm) <- c("Bst.Trees", "Cubist", "NNet", "MARS", "GBM", "KNN", "RF") levels(perf$Variables) <- as.character(c(1:35))
perf.molten <- melt( perf, id.vars=c("Algorithm", "Variables"), value.name="Sharpe" )
perf.molten <- perf.molten[, c(1,2,4)]
colnames(perf.molten) <- c("ALGORITHM", "FEATURE_COMBINATION", "SHARPE")
s.heat <- ggplot(perf.molten, aes(x=ALGORITHM, y=FEATURE_COMBINATION, fill = SHARPE)) +
geom_tile(colour = "white") +
scale_fill_gradient2(low="darkorange", high="blue4", guide="colorbar") +
ggtitle("Sharpe Ratio by Model") +
theme_classic()
## Accounting for data mining bias
Data mining bias refers to the unfortunate selection of a trading model based on randomly good performance. For instance, a system with no basis in economic or financial reality has a profit expectancy of exactly zero, excluding transaction costs. However, due to the finite sample size of a backtest, sometimes such a system will show a backtested performance that can lead us to believe it is better (or worse) than random. As the number of samples grow in live trading, the worthlessness of such a system becomes apparent.
Data mining bias shows up often, but it is of particular concern when we harness computing power to systematically test numerous potential trading models, exactly as I have done in this post. White (2000) describes a method for accounting for this data mining bias, referred to as White’s Reality Check or the Bootstrap Reality Check. David Aronson covers it in detail in “Evidence Based Technical Anlysis.” I will only describe the procedure briefly here, but here is the link to the original paper.
White’s Reality Check requires that we keep a record of all variants of the trading model that were tested during the development process and produce a zero-mean returns series for each. We then randomize these returns series using bootstrap resampling and note the total return of the best performer. This process is repeated several thousand times, and the median best return corresponds to the data mining bias introduced by the development process. We can then observe where the originally selected best model fits into the distribution of bootstrapped results to obtain a confidence level relating to its possession or otherwise of an actual positive expectancy. Saddest of all, and I think this is why White used the term “reality check”, is that the expected performance of the selected strategy in real trading is only the performance measured in its backtest MINUS the median performance of the bootstrapped returns series.
Here is the histogram of Sharpe ratios from implementing White’s Reality Check on the 245 models under comparison for 5,000 bootstrap iterations:
The best Sharpe ratio I obtained is approximately equivalent to the median bootstrapped best Sharpe ratio, implying that its expectancy is actually close to zero. However, I have clearly mis-estimated the data mining bias since I excluded the models discarded during the hyperparameter tuning phase of model construction. In addition, this method is known to have a bias towards Type II errors. In other words, this method tends to reject systems that do have an edge. In any event, following is the R code that implements the reality check. I leave it to the interested reader to implement the reality check across the full set of models including those rejected during hyperparameter tuning.
############ APPLY WHITE's REALITY CHECK
#get pest performance from raw returns of each model
Sharpe <- vector()
for (i in c(1:length(all.models))) {
trades <- all.models[[i]]$resample$profit
}
cat("Best Sharpe Ratio: ", max(Sharpe))
Sharpe.hist <- ggplot(data.frame(Sharpe), aes(x=Sharpe)) + geom_histogram(binwidth = 0.1, fill = "darkorange2", color = "black") +
ggtitle("Histogram of Sharpe Ratios")
# White's reality check
# loop through list of detrended trades, randomize each by bootstrap
# select best performer, note Sharpe
# repeat
boot.Sharpe <- vector()
for (i in c(1:5000)) {
best.Sharpe <- 0
for (j in c(1:length(detrend.trades))) {
rand.ind <- sample(c(1:length(detrend.trades[[j]])), length(detrend.trades[[j]]), replace = T) #indices of bootstrapped data
sharpe.ratio <- sqrt(252) * mean(rand)/sd(rand)
if (sharpe.ratio > best.Sharpe) best.Sharpe <- sharpe.ratio
}
boot.Sharpe[i] <- best.Sharpe
cat("Iteration: ", i, "\n")
}
# histogram of bootstrapped Sharpes:
boot.Sharpe.hist2 <- ggplot(data.frame(boot.Sharpe), aes(x=boot.Sharpe)) + geom_histogram(binwidth = 0.1, fill = "darkorange2", color = "black") +
ggtitle("Histogram of Bootstrapped Sharpe Ratios")
## Ensembles and hybrid methods
Ensembling is the practice of aggregating the predictions of multiple models in order to achieve a prediction accuracy that exceeds any individual model. It is analogous to using a committee of experts to reach a consensus. There are numerous ways to create an ensemble, including:
• Bagging: aggregate models based on bootstrapped training data, for example the random forest algorithm.
• Boosting: models developed sequentially where each additional model aims to improve performance on the least accurate part of the feature space of the previous model. An example is the boosted trees used previously in this study.
• Stacking: model predictions are combined using a “meta-model”. In my experience tends to result in over-fit models.
• Aggregating models based on random subsets of the input features.
• Aggregating models based on random subsets of the training data, which is equivalent to bagging without replacement as opposed to bootstrapping.
• Aggregating models based on different algorithms.
We have seen above that the models that incorporate boosting fared better than the random forest models, which use the bagging technique. Aronson and Masters (2013) advocate the latter three methods listed above as a protection against overfiitting. They argue that even if the component models are over-fit, if they are trained on different training and/or feature sets, they will be exposed to different noise patterns while real patterns will tend to be represented across the various training sets. The idea is that the noise patterns cancel out, while the real patterns are reinforced. Ensembles appear to be most effective when the component models each have a positive expectancy, but whose predictions are uncorrelated.
In this study, I will focus on combining different algorithms and different subsets of the feature set and I will once again advocate a systematic approach. Following is my approach for selecting component models for an ensemble:
1. Build a model for each variable and algorithm combination (already done, above)
2. Exclude any model with a cross-validated Sharpe ratio < $x$, say 1.25
3. Examine a correlation matrix of the predictions of the remaining models. Retain a subset of models such that all correlations are < 0.75
4. Combine the remaining models by either averaging the predictions or forming a majority vote on the direction
The figure below shows the equity curves of the best model and the two ensembles formed by averaging and majority vote described above (trading position sizes on EUR/USD relative to the 100-period ATR and excluding trading costs).
In this case, there is little overall difference between the best model and either of the ensembles. There may be other possible ensembles that perform better; I haven’t performed an exhaustive search. Note that in the absence of an out of sample test following the aggregation of component models into an ensemble, this step has introduced a measure of selection bias into the results. Here’s the code to reproduce these results:
#########################################
# ENSEMBLE CONSTRUCTION #################
#########################################
# best model
best.mod.trades <- all.models[[which.max(Sharpe)]]$resample$profit
plot(cumsum(best.mod.trades), type = 'l', col = 'darkred')
cat('Best Sharpe: ', max(Sharpe))
# retain models with Sharpe > x
x <- 1.25
Sharpe.retained.ind <- which(Sharpe > x)
retained.models <- all.models[Sharpe.retained.ind]
resamps <- resamples(retained.models) # collect results of cross-validation in a list
# bwplot(resamps)
retained.preds <- lapply(retained.models, "[[", "pred")
retained.preds <- lapply(retained.preds, "[[", "pred")
retained.preds <- lapply(retained.preds, as.numeric)
retained.preds.df <- as.data.frame(retained.preds)
colnames(retained.preds.df) <- as.factor(c(Sharpe.retained.ind))
# remove highly correlated predictions
library(corrplot)
correlations <- cor(retained.preds.df)
high.corrs <- findCorrelation(correlations, cutoff = 0.75)
reduced.corrs <- correlations[-c(high.corrs), -c(high.corrs)]
corrplot(correlations)
corrplot(reduced.corrs)
# create ensemble
ens <- retained.preds.df[, -c(high.corrs)]
colnames(ens) <- paste0("Mod", colnames(ens))
obs <- all.models[[1]]$pred$obs
# take average predictions of compoenents
ave <- rowMeans(ens)
trades.ave <- ifelse(ave>0, obs, -obs)
plot(cumsum(trades.ave), type = 'l', col = "steelblue")
cat(profit.factor)
cat(sharpe.ratio)
# majority vote
pos <- rowSums(ens>0)
neg <- rowSums(ens<0)
trades.maj <- ifelse(pos>neg, obs, ifelse(pos<neg, -obs, 0))
plot(cumsum(trades.maj), type = 'l', col = "deeppink2")
cat(profit.factor)
cat(sharpe.ratio)
# plot
colnames(ens.methods) <- c("INDEX", "AVE", "MAJ", "BEST")
ens.molten <- melt(ens.methods, id.vars = "INDEX", variable_name = "METHOD", value.name = "RETURN")
colnames(ens.molten) <- c("INDEX", "METHOD", "RETURN")
ens.plot <- ggplot(ens.molten, aes(x=INDEX, y=RETURN, color = METHOD)) +
geom_line()
Finally, I’ll try combining the predictions using a linear regression model as a “meta-model”. This approach has the potential to curve-fit the results, so I will test it using the time series cross validation approach used above:
# linear combination
window.length <- 250
timecontrol <- trainControl(method = 'timeslice', initialWindow = window.length, horizon = 1, summaryFunction=absretSummary, selectionFunction = "best",
returnResamp = 'final', fixedWindow = TRUE, savePredictions = 'final')
cl <- makeCluster(4)
registerDoParallel(cl)
set.seed(503)
linear.ens <- train(ens, obs, method = "lm", trControl = timecontrol)
stopCluster(cl)
plot(cumsum(linear.ens$resample$profit), type = 'l', col = 'blue4')
linear.ens.sharpe <- sqrt(252)*mean(linear.ens$resample$profit)/sd(linear.ens$resample$profit)
cat(linear.ens.sharpe)
This gives a Sharpe ratio of only 0.94, significantly less than the simpler ensembles and indeed the best of the component models.
## Next Steps
This post and the last provide a framework for systematically comparing predictive models based on machine learning algorithms as the basis of trading systems. Despite the length of the posts, they have barely scratched the surface of what is possible. The following are ideas for future research that I intend to pursue. I would love to hear from people interested in collaborating on any of these ideas and I offer my work thus far as a basis from which to proceed:
• Regime-based split linear models. The idea is that the training data is split based on different market regimes on the basis of a particular variable or condition for determining where the split occurs. Regimes characterized by extreme volatility are inherently difficult to predict, and it may be useful to simply avoid trading during these periods. A regime-based approach could potentially verify this hypothesis and provide clues as to its practical application. I have briefly investigated this approach and did not find a measurable increase in performance, but I haven’t investigated closely enough to rule out this idea completely.
• Another regime-based approach is to dynamically adjust the relative weights on individual component models based on each model’s performance in the current market conditions. This is appealing in that it does not require classification of the market regime. The weight adjustment is essentially regime-agnostic in the sense that it would not care about terms such as “trending” or “range bound”, whose identification of course carries substantial lag. By taking its cues from real-time performance, a dynamically adaptive weight allocation approach would minimize this lag to the extent possible.
• So far I have simply aggregated models into ensembles through averaging predictions, combining directional votes and using simple linear regression as a “meta-model”. My feeling is that combining predictions using non-linear stacking methods will lead to over-fitting. However, it would be useful to verify this feeling on hard data.
• I have also read about ‘sequential prediction” which, like boosting, involves building an ensemble from a series of models. If the first model is able to learn a dominant pattern, its residuals are then used as input to a second model under the assumptions that these residuals contain the noise terms as well as more subtle patterns. The first model’s predictions are then considered a measure of the dominant pattern and the second model’s predictions an estimate of the deviation of the first model’s prediction from the target.
• A related approach is to attempt to capture linear relationships in the data using classical time series methods such as ARIMA/GARCH and then use the residuals of these models as input to an algorithm capable of capturing more complex non-linear relationships, such as a neural network.
• Finally, the work presented here has only considered the most recent values of individual features in model construction. There may be benefit in incorporating recent past values as features, for example the 1-period return for each of the last 3 periods.
## A Note on the Practicalities of Trading Systems Research
If I can offer one piece of advice based on my experiences, I would caution systems researchers (especially those with a penchant for the theoretical) against trying to build the “perfect” predictive model, if such a thing even exists. Admittedly it is an interesting exercise, but an approach much more grounded in the realities of extracting profits from the markets is to focus on building a model that meets one’s needs. If I could turn back the clock to the tune of about twelve months, I would go live with my machine learning models much earlier (as soon as they were fit for purpose) and then refined them as time went on. I made the mistake of repeatedly putting off the go-live date until I had improved the models performance just a bit more. In doing so, I missed some perfectly achievable profits. In addition, I would have uncovered implementation and execution issues much more efficiently.
## Conclusions
You have seen in this post and the last that mining financial data for profitable patterns in a statistically sound manner is incredibly time consuming and effort-intensive. What you’ve read in these last two posts is simply a distillation of a significant research effort and one that I hope provides a framework for others interested in this field to take it further.
We have seen that neural networks initialized with stacked autoencoders have a great deal of potential in predictive models for trading systems, but that incredibly simple models like the k-nearest neighbors algorithm can also perform adequately. I have also touched on the specter of data mining bias and explored one possible method for accounting for it. Finally, we explored ensembles of component models, but didn’t get a significant boost to model performance in this case.
I intend to pursue the ideas listed in the Next Steps section, likely beginning with the sequential prediction approach. If people are interested in collaborating on any of these ideas, I would very much like to hear from you.
Before you continue....
Want to see how we trade for a living with algos — so you can too?
Learn where to start and see how systematic retail traders generate profit long-term:
May 10, 2016 at 9:46 pm
Thanks,Very interesting
Can you share the data set files?
May 11, 2016 at 8:10 am
Hi Lev
Apologies for that oversight. You can now access the raw data via a download link in the post (under the heading “Choosing combinations of variables”). I’ve also added the source code (written in Lite-C, compatible with Zorro) that I used to generate this data. This code can be modified to generate custom feature sets for any market or time period for which you have price data. The code will output a CSV file compatible with the framework in the post.
Kris
May 13, 2016 at 6:38 pm
Hi Kris
Thanks for the source and csv.
I tried to reproduce your “heatmap of Sharpe ratios by window length and prediction threshold” and I get different results , All my predictions was in range (-0.3 – 0.3)
Lev
May 13, 2016 at 9:03 pm
That result has really got me scratching my head. I suspect something is not quite right with the implementation on your end because with so many models being compared, you would expect to see much more variation in the Sharpes beyond that range through random variation alone.
If anyone else gets this result, please let me know in the comments.
May 13, 2016 at 10:33 pm
I mean predictions not shapes.
The problem is only in “heatmap of Sharpe ratios by window length and prediction threshold” sample
modellist.nnet[[j]]$pred$pred in range of (-0.3 – 0.3)
May 14, 2016 at 2:46 pm
OK, I understand now. That’s actually not an unexpected result and indicates that in this case the model is having difficulty predicting the observations in the tails of the returns/atr distribution. Here’s one of my results, for comparison:
summary(as.numeric(modellist.nnet[[9]]$pred$pred))
Min. 1st Qu. Median Mean 3rd Qu. Max.
-0.51530 -0.10380 -0.01300 -0.01491 0.07690 0.40590
Check out what happens with the directional accuracy of these predictions though. They are actually good enough to give the model a decent cross-validated equity curve. They also hint at some potential improvements that can be made. For example, would a classification approach be better than the regression approach presented here? What about intelligent subsampling of the input to each iteration of the time series cross-validation procedure? I actually don’t know off the top of my head if such approaches would make a difference, but the results you see hint that there are improvements that could be made.
May 11, 2016 at 4:53 am
Fantastic tutorial series! Had never come across a neural network initialised with stacked autoencoders before. Looking forward to playing about with this technique.
May 11, 2016 at 8:17 am
Thanks Gekko! They are a relatively recent addition to my toolkit as well. The unsupervised reconstruction of the input appears to help the network effectively clarify a pattern if it does exist. The Zorro guys have done some work in this area too, using a wider feature set and a deeper architecture than the ones I’ve used here. They achieved a directional accuracy in the order of ~60%. My own work corroborates this. SAEs have enormous potential.
May 12, 2016 at 12:13 am
This grabbed my attention as well. I have not had much success with NNs of various architectures for regression problems. Classification of course is quite good, although my lack of patience for fiddling with feature scaling and data shape often just lead me to use a GBM. I wonder if there is any literature out there explaining why it would help with regression specifically. There is quite a bit that explains why boosted trees and NNs don’t fare well in regression. If unsupervised pre-training really changes the game for regression that’s very encouraging, as there are alot of those techniques already out there and more on the horizon. I am going to start testing toute suite.
May 12, 2016 at 9:36 am
Hi Shane. In my experience, neural nets fare extremely well on complex regression tasks. They have been referred to as “universal approximators” for good reason, having the ability to approximate any linear or non-linear function (compare this with for example a linear time series model like ARIMA). Being powerful learners, they also have the ability to very easily over-fit the training data. Therefore it makes sense to apply appropriate guards against over-fitting: regularization techniques (L1, L2), random dropouts, unsupervised pre-training and early stopping can be effective ways to get the most out of a neural network while preventing its power from getting out of control.
Also, if you are more comfortable with classification, you can simply re-frame the regression problem into a classification one using something like:
if (next.day.return > 0) target.variable = 1
else target.variable = 0
May 15, 2016 at 1:59 am
Thanks for the reply (and the great blog material in general). I actually have no discomfort with regression, and wrt finance/trading, continuous or countable targets are usually my choice. I simply just have not been able to surpass a MARS model or a custom nonparametric approach using a probabilistic programming library with any NN (and still haven’t after a few tests with deepnet BTW). However, this may have a great deal to do with my feature space and my choice of target. One observation I have made is that these R NN libraries have done a great job of setting some defaults on the myriad hyperparameters and providing some quick automated ways to iterate through them.
One particular architecture that I note is glaringly lacking is Recurrent Neural Networks. LSTMs, GRUs and other time series/sequence aware NN architectures have achieved the best performance for me wrt finance/trading. Obviously, there ability to model non-stationary data is desirable. So hopefully someone is generous enough to write some libraries for those in R that become part of the caret ecosystem. In the meantime, I’ll keep at it with Torch, Keras, Blocks etc.
Again, you have motivated me to spend some more time with these models and the caret ecosystem in particular. Look forward to future posts.
May 17, 2016 at 11:48 am
Hi Shane
Investigating recurrent networks is a priority for me too. These guys use recurrent architectures in many of the topics discussed in this publication and report good results in the FX markets. I know of no library in R that incorporates RNNs just yet. Time to make a foray into the world of R library creation perhaps?
[…] Machine learning for financial prediction: experimentation with Aronson s latest work – part 2… My first post on using machine learning for financial prediction took an in-depth look at various feature selection methods as a data pre-processing step in the quest to mine financial data for profitable patterns. I looked at various methods to identify predictive features including Maximal Information Coefficient (MIC), Recursive Feature Elimination (RFE), algorithms with built-in feature […]
May 11, 2016 at 9:49 pm
HI Kris
Great Article and thanks for posting!
I am interested in the Stacked Auto encoder approach you are using.
If you are using this to compress the data, can you not use more than 2/3 features to create inputs to your NN. Can i also ask how you go about choosing the optimal number of layers and the number of nodes in the Auto-encoder.
Tom
May 12, 2016 at 9:17 am
Hi Thomas
Indeed it does make sense to use more features with the stacked autoencoder approach. The unsupervised reconstruction of the input assists the network to detect any predictive patterns that may be present. This means that redundant or noisy features will have less of an impact on the output. I would however caution against a brute force approach where you throw everything you’ve got at such a network. You will quickly run into practical computation issues, particularly if you are simultaneously exploring various network architectures. Some intelligent feature engineering goes a long way.
In terms of choosing an optimal architecture, unfortunately there isn’t a simple formula that I’m aware of. Yoshua Bengio at the University of Montreal published some very useful guidelines for practical training of deep architectures which I use to guide my starting point. From there, I’ll train and cross-validate several architectures and hyperparameter sets. Max Kuhn’s caret package in R is my go-to toolkit for doing this as efficiently as possible. Here’s a link to Bengio’s paper: https://arxiv.org/pdf/1206.5533v2.pdf and Max’s website: https://topepo.github.io/caret/index.html
One word of caution, if I may: It’s easy to get bogged down in the search for an “optimal” model even having discovered several models that will serve the intended purpose very well. The search for an “optimal” model could go on indefinitely if we allowed it. Focus on a defined objective for your model and let that guide your efforts.
Thanks for commenting!
June 4, 2016 at 1:20 am
Thanks for the response Kris. Much appreciated.
May 12, 2016 at 4:56 am
Kris, that’s another great post !
I love that “Comprehensive model comparison” heatmap.
I like using the K-Nearest Neighbors algorithm for my “exploration” (i.e. exploring my feature space) because it is MUCH faster than most other types of algorithms.
I find that KNN usually gives reasonable results, thus it allows me to find interesting features much more quickly ….
Let’s say I have 1000 features;
choose(1000, 2) + choose(1000, 3) = 166,666,500
choose(1000, 2) + choose(1000, 3) + choose(1000, 4) = 41,583,791,250
That’s a lot of combinations !
With KNN I can explore ~100,000 random combinations over a 24 hour period. That’s perhaps a 100x speedup compared with say SVM.
Best, Nick
May 12, 2016 at 9:23 am
Thanks Nick!
I agree, the heatmaps reveal a lot of useful information quickly. Plus they brighten up the blog a little!
That’s a lot of features! An exhaustive search of even the 2- and 3-variable combinations at a rate of 100,000/day would require approximately 4.5 years to complete! But of course you are doing random search of the feature space which is obviously a lot more efficient. Still, without seeing your data set, I would speculate that out of the 1,000 features, there would be many highly correlated, and therefore redundant features which could be removed with little to no detriment of the final model. Again, without seeing your data, I suspect that an intelligent feature selection phase would assist greatly.
Your approach got me thinking and I started googling to see whether others were using also this approach. Turns out there is some activity in this area. Have you seen this paper: https://papers.nips.cc/paper/2848-nearest-neighbor-based-feature-selection-for-regression-and-its-application-to-neural-activity.pdf?
Have you experimented with such a feature-weighted version of k-NN?
May 12, 2016 at 9:48 am
Yes, a lot of them will be correlated to some degree and an intelligent feature selection process does help.
It’s easy to have 1000 features these days. Even starting with the standard technical indicators in TTR (~50 I think), you could add Super Smoother filters, take differences, apply PCA …. Then we have macro, fundamental and sentiment data….
.
A lot of people won’t like the data-driven approach – whatever works for them I suppose ….
Cheers Kris.
May 12, 2016 at 9:54 am
Indeed! There is certainly an air of dismissiveness to the data-driven approach in the quant finance community. Personally, I see it as just another weapon in the arsenal and a useful source of portfolio diversification in addition to the classical approaches.
Cheers
May 16, 2016 at 5:08 pm
Hi Kris,
Do you choose the best network structure for “live” trading after the optimization
nnetGrid <- expand.grid(.layer1 = c(2:3), .layer2 = c(0:2), .layer3=0, .hidden_dropout=c(0, 0.1, 0.2), .visible_dropout=c(0, 0.1, 0.2))
Because I understand that for each model you choose the result with best porfit with summery function?
May 17, 2016 at 12:13 pm
Hi Lev
That line simply specifies the grid of hyperparameters that are tested. The caret train function works as follows: for each hyperparameter combination, build a model on each iteration of the cross-validation procedure and calculate the average performance across each cross-validation iteration. Once this is done, the ‘best’ combination of hyperparameters is selected and a model fitted on the entire training data set. The specific case of time series cross validation is very applicable to trading since it mimics the process we would go through in order to trade the markets using these methods, with one important caveat: if at any point in time we picked the ‘best’ model based on this hyperparameter tuning process for live trading, we introduce selection bias and therefore the cross-validated performance is an over-estimate of actual performance. An unbiased estimate would only be possible with another out of sample validation data set. White’s Reality Check is another method of dealing with this.
In practice, I find that a further out of sample validation set is generally a more accurate predictor of future performance, but the utility of this is milted by the finite nature of our data set. I am happy enough to go live with a model selected in this way, but I temper my expectations of its future performance using White’s Reality Check.
May 16, 2016 at 6:16 pm
[…] Machine learning for financial prediction: experimentation with Aronson’s latest work – part… […]
June 11, 2016 at 5:03 pm
Interesting post – many thanks. Re your second point under “Next Steps” were you thinking along the lines of a variant of Zorro’s “Equity Curve Trading”, only rather than turning models on/off adjusting their relative position sizes (though some might be reduced to zero)?
Was also wondering if you had considered using equity curve plus other performance stats (e.g. rolling Sharpe/Sortino/K-Ratio) as predictors for a standalone overlay model that would automatically control individual model position size (weighting)?
June 12, 2016 at 5:20 pm
Hi Andy. I actually had in mind something like your second point: using a rolling performance measure as a means of controlling individual model weighting. However there are of course many approaches and I don’t favour one over the other. What I am really trying to achieve is to reduce the selection bias in my system by including as many models as possible. The downside of my approach is that there is always a lag in calculating the “optimal” portfolio weights.
June 13, 2016 at 5:52 pm
Thanks. A long while back I had a similar lag issue with calculating the weights of a portfolio of non-ML models. One technique that seemed to work (occasionally quite well) and minimised the lag was to use a Kalman filter on the equity curves and calculate various difference measures based on that. I tried using all Kalman values, the filtered values, the smooths and the predictions. As I recall, the predictions worked best, which I didn’t expect.
July 22, 2016 at 3:37 am
Hi – thanks for the fascinating and informative post!
Did you try holding some data out and seeing how one of your deep neural nets performed on it? When I follow along your code and attempt doing this, the predictions on out of sample data have very little variation, suggesting an input scaling problem or that the network has been overfit…
Thanks!
August 2, 2016 at 6:57 pm
Hey Jim
Yes I did try holding out some data and testing the performance of the various neural nets. My out of sample predictions do show a sensible amount of variation. I get a greater variation between the neural nets trained on smaller training windows and those trained on larger windows, as one would expect. Further, while directional out of sample predictions tend to converge as the training window increases, I still see variation in the absolute out of sample predictions.
I doubt the problem you are having is related to input scaling if you are using the same data that I used, since the inputs are all scaled to the range -1 to +1 (you can verify this with the R command “summary(eu)”. Of course, if you are using your own data, you would need to ensure that you pre-processed your input features appropriately.
Thanks for checking out my blog!
December 22, 2016 at 4:48 am
Hi Kris,
Great posts! Many thanks for sharing.
Have you ever put real money to trade using your machine learning systems? Some people say these systems always mysteriously failed, while others say GS, JP Morgan and DE Shaw etc all use machine learning system to trade and made big bucks, what’s your comment on this?
December 22, 2016 at 10:55 pm
Hi Jeff
No problem, glad you found it useful. Yes, I am currently allocating to my ML strategies. I don’t think there is any great mystery about trading systems failing – it happens all the time. I have little idea what those guys you mentioned do to make money, but if they aren’t using machine learning, they are foregoing a very useful tool.
December 28, 2016 at 4:30 am
Hi Kris,
Thank you so much for your comment. What do your average profit factors look like for your machine learning trading systems ? Which factor you think leads to a successful trading system, indicators or algorithms, if you’re asked to name only one?
Happy Holidays
Jeff
January 3, 2017 at 11:31 pm
Hi Jeff
If I had to pick one, I’d pick execution as the most important determinant of algo trading success. That’s probably a very boring response, but one that I think is justified!
Happy holidays to you too.
February 15, 2017 at 12:14 am
Kris,
Thanks so much for this really insightful tutorial, I think this is one of the best examples of nnet application to risk and return data in the public domain.
I have a basic question on the absretSummary function you’re using to assess the learner’s performance. Are you using the observed risk-adjusted returns in the line rather than geometric or logn scaled returns?
trades <- positions*data[, "obs"]
Thanks & Regards,
February 15, 2017 at 1:43 am
Matt, thanks for the kind words, great to hear that the article was useful.
You’re correct – I’m using the observed risk adjusted returns in my objective function: the last close minus the prior close divided by the 100-period average true range.
Cheers
March 23, 2017 at 1:33 pm
May 3, 2017 at 2:16 am
Someone pointed out that there is no sense to do a variable analysis because with different data-window or different time at all, the variables would be different.
I see that if you choose a specific time ( as the years of the strategy ) and data-window ( as the period to generate an indicator ) everything will be different. Then a SMA cross over strategy will be winning using 20-period in 2009 and 15-period over 2015. The fact that the period changes does not mean that the analysis is wrong, as the fact that the variable importance may change does not mean that the analysis is wrong.
I see two solutions for the variable importance change depending on data-window etc:
1- We use different data-windows and years to do the analysis and we pick up only the variables that remain as importance in all the different scenarios. Then the discussion is closed because the variables choosen are the best in each scenario.
2- We perform a WFO where the variables used are changed to train the model as the parameters of the model are tuned in every WFO cycle. In that way “variable tunning” should be added to the analysis meaning that if we choose that the model is retrainned the first trading day of each month, then the variable analysis should be as well done and be part of that training. does it make sense?
May 3, 2017 at 10:04 pm
For sure, you raise really good points. Both your solutions are sensible; however in my experience you’ll have more success with the second one. Including the variable selection step in the model re-training process is not uncommon in machine learning applications to time series data. Another option is to create models with the best subset of variables over different lookback periods and ensemble their predictions.
August 8, 2017 at 8:53 pm
Great content on your site!
I’m trying to get my head around stacked autoencoders and dropout.
Quora: “Though the fundamental principle is same, making sure that large no of the parameters do not over-fit the data. But on a closer look they work differently. While denoising work on input layer only, dropout work on all layers (but output). So I guess when u have a deep network and you also want the hidden units of some higher layer to avoid over-fitting you might want to choose dropout over denoising. For shallow models both should work the same.”
The following quote is from the following article: https://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf
The idea of adding noise to the states of units has previously been used in the context of Denoising Autoencoders (DAEs) by Vincent et al. (2008, 2010) where noise is added to the input units of an autoencoder and the network is trained to reconstruct the noise-free input. Our work extends this idea by showing that dropout can be effectively applied in the hidden layers as well and that it can be interpreted as a form of model averaging
Seems like autoencoders and dropout are two ways of achieving the same thing, although dropout can be used on more layers. Any thoughts on this would be most welcome!
August 25, 2017 at 10:31 am
I guess you could say that, although I’m not sure about the mathematical equivalence. In my experience, dropout has been the most effective way to control overfitting in my applications. As I understand denoising autoencoders, we apply noise only on the input layer, while of course dropout extends through the network. I was heavily into autoencoders a few years ago, but have moved away from them in the deep learning research I’m currently doing.
September 2, 2017 at 11:30 pm
Hi Kris and thank you so much, all your work are fantastic!
I have a question
why modellist.nnet[[j]]$pred$pred value are different from predict( modellist.nnet[[j]]) ?
Thanks & Regards
December 5, 2017 at 4:30 pm
Hi Bob, you’re most welcome. It’s admittedly been a while since I used caret, but from memory the caret model object includes predictions from the training/cross-validation procedure whereas predict is used to present new observations to your model. Check out the caret documentation, which is a great resource in general, for more info.
December 1, 2017 at 7:20 am
Hi, I’m trying to execute the Rolling Forecast Origin Framework but it is giving to me an error (well, it is giving lots of errors, but I guess that the other ones are because of this first one):
+ modellist.nnet[[j]] <- train(eu[, 2:4], eu[, 8], method = "dnn",
+ trControl = timecontrol, tuneGrid = nnetGrid, preProcess = c('center', 'scale'))
+ stopCluster(cl)
+ j <- j+1
+ cat("Window: ", i)
+ }
Error in e$fun(obj, substitute(ex), parent.frame(), e$data) :
unable to find variable "optimismBoot"
In addition: There were 21 warnings (use warnings() to see them)
>
[...]
> warnings()
Warning messages:
1: In new.env(parent = emptyenv()) :
closing unused connection 22 (<-WIN-BDG5DH3A9DM:11032)
2: In new.env(parent = emptyenv()) :
closing unused connection 21 (<-WIN-BDG5DH3A9DM:11032)
3: In new.env(parent = emptyenv()) :
closing unused connection 20 (<-WIN-BDG5DH3A9DM:11032)
4: In new.env(parent = emptyenv()) :
closing unused connection 19 (<-WIN-BDG5DH3A9DM:11032)
5: In new.env(parent = emptyenv()) :
closing unused connection 18 (<-WIN-BDG5DH3A9DM:11032)
6: In new.env(parent = emptyenv()) :
closing unused connection 17 (<-WIN-BDG5DH3A9DM:11032)
7: In new.env(parent = emptyenv()) :
closing unused connection 16 (<-WIN-BDG5DH3A9DM:11032)
8: In new.env(parent = emptyenv()) :
closing unused connection 15 (<-WIN-BDG5DH3A9DM:11032)
9: In new.env(parent = emptyenv()) :
closing unused connection 14 (<-WIN-BDG5DH3A9DM:11032)
10: In new.env(parent = emptyenv()) :
closing unused connection 13 (<-WIN-BDG5DH3A9DM:11032)
11: In new.env(parent = emptyenv()) :
closing unused connection 12 (<-WIN-BDG5DH3A9DM:11032)
12: In new.env(parent = emptyenv()) :
closing unused connection 11 (<-WIN-BDG5DH3A9DM:11032)
13: In new.env(parent = emptyenv()) :
closing unused connection 10 (<-WIN-BDG5DH3A9DM:11032)
14: In new.env(parent = emptyenv()) :
closing unused connection 9 (<-WIN-BDG5DH3A9DM:11032)
15: In new.env(parent = emptyenv()) :
closing unused connection 8 (<-WIN-BDG5DH3A9DM:11032)
16: In new.env(parent = emptyenv()) :
closing unused connection 7 (<-WIN-BDG5DH3A9DM:11032)
17: In new.env(parent = emptyenv()) :
closing unused connection 6 (<-WIN-BDG5DH3A9DM:11032)
18: In new.env(parent = emptyenv()) :
closing unused connection 5 (<-WIN-BDG5DH3A9DM:11032)
19: In new.env(parent = emptyenv()) :
closing unused connection 4 (<-WIN-BDG5DH3A9DM:11032)
20: In new.env(parent = emptyenv()) :
closing unused connection 3 (<-WIN-BDG5DH3A9DM:11032)
21: In train.default(eu[, 2:4], eu[, 8], method = "dnn", trControl = timecontrol, ... :
The metric "RMSE" was not in the result set. profit will be used instead.
December 5, 2017 at 4:37 pm
Yep, I’ve seen this one too. It is a recent phenomenon arising due to some changes to the way caret interfaces with the parallel backend. I know that Max (caret’s author) is aware of the problem and has written a fix. At the time of writing, this fix existed in the dev version on GitHub, but it hadn’t yet been pushed to CRAN. My advice: try updating to the latest version of caret, and if the problem persists, then you’ll need the dev version from GitHub. In that case, first, do install.packages('devtools') then install from GitHub via devtools::install_github('topepo/caret/pkg/caret')
[…] never that simple. We have features to engineer and transform (no trivial task – see here and here for an exploration with applications for finance), not to mention the vagaries of dealing with data […]
|
2020-10-28 19:43:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43418756127357483, "perplexity": 1763.9309844589347}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107900860.51/warc/CC-MAIN-20201028191655-20201028221655-00606.warc.gz"}
|
https://www.physicsforums.com/threads/thermal-conduction-and-newtons-law-of-cooling.772837/
|
# Thermal Conduction and Newton's Law of Cooling
Fourier's law of thermal conduction states that $$\mathbf{j}=-k\nabla T,$$ where $\mathbf{j}$ is the heat flux. Integrating both sides of this equation over a closed surface gives the equation $$\frac{dQ}{dt}=-k\int \nabla T \cdot d\mathbf A.$$
If there is a temperature discontinuity across this surface, then $\frac{dQ}{dt}$ diverges, in contradiction with Newton's law of cooling. Are Fourier's law of conduction and Newton's law of cooling mutually incompatible?
Chestermiller
Mentor
Fourier's law of thermal conduction states that $$\mathbf{j}=-k\nabla T,$$ where $\mathbf{j}$ is the heat flux. Integrating both sides of this equation over a closed surface gives the equation $$\frac{dQ}{dt}=-k\int \nabla T \cdot d\mathbf A.$$
If there is a temperature discontinuity across this surface, then $\frac{dQ}{dt}$ diverges, in contradiction with Newton's law of cooling. Are Fourier's law of conduction and Newton's law of cooling mutually incompatible?
What makes you think there can be a temperature discontinuity at the surface? There, of course, can be a discontinuity of the temperature gradient at the surface, but this equation applies inside the region bounded by the surface.
Chet
What makes you think there can be a temperature discontinuity at the surface? There, of course, can be a discontinuity of the temperature gradient at the surface, but this equation applies inside the region bounded by the surface.
Chet
Well, let's imagine that you put a warm bottle of beer in a refrigerator to cool it down. At the surface of the bottle there is (at least initially) a temperature discontinuity, because the beer and the air in the fridge are at different temperatures. Newton's law of cooling has no trouble handling this, but Fourier predicts (at least initially) an infinite rate of cooling.
Newton's law of cooling has no trouble handling this, but Fourier predicts (at least initially) an infinite rate of cooling.
It predicts an infinite rate of cooling of the infinitesimally thin layer of the can that is in contact with the cold air, which is probably approximately right.
Chestermiller
Mentor
Well, let's imagine that you put a warm bottle of beer in a refrigerator to cool it down. At the surface of the bottle there is (at least initially) a temperature discontinuity, because the beer and the air in the fridge are at different temperatures. Newton's law of cooling has no trouble handling this, but Fourier predicts (at least initially) an infinite rate of cooling.
Yes, this is true, but it only lasts an instant. And the cumulative amount of heat transferred at short times will be proportional to time to the 1/2 power. One can determine this by solving the transient heat conduction equation in the region near the boundary using a similarity solution (i.e. Boundary layer solution).
Chet
|
2021-10-26 17:52:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.943545937538147, "perplexity": 246.53738545596823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587915.41/warc/CC-MAIN-20211026165817-20211026195817-00283.warc.gz"}
|
http://blogs.msdn.com/b/visualstudio/archive/2010/05/05/msbuild-property-functions-2.aspx
|
# MSBuild Property Functions (2)
### MSBuild Property Functions (2)
#### Built-in MSBuild functions
The full list of built-in [MSBuild] functions, like the one above, are in the MSDN topic here. They include arithmetic (useful, for example, for modifying version numbers), functions to convert to and from the MSBuild escaping format (on rare occasions, that is useful). Here's another example
$([MSBuild]::Add($(VersionNumber), 1))
And here's one other property function that will be useful to some people:
$([MSBuild]::GetDirectoryNameOfFileAbove(directory, filename) Looks in the designated directory, then progressively in the parent directories until it finds the file provided or hits the root. Then it returns the path to that root. What would you need such an odd function for? It's very useful if you have a tree of projects in source control, and want them all to share a single imported file. You can check it in at the root, but how do they find it to import it? They could all specify the relative path, but that's cumbersome as it's different depending on where they are. Or, you could set an environment variable pointing to the root, but you might not want to use environment variables. That's where this function comes in handy – you can write something like this, and all projects will be able to find and import it: <Import Project="$([MSBuild]::GetDirectoryNameOfFileAbove($(MSBuildThisFileDirectory), EnlistmentInfo.props))\EnlistmentInfo.props" Condition=" '$([MSBuild]::GetDirectoryNameOfFileAbove($(MSBuildThisFileDirectory), EnlistmentInfo.props))' != '' " /> #### Error handling The functions parser is pretty robust but not necessarily that helpful when it doesn't wokr. Errors you can get include (1) It doesn't evaluate but just comes out as a string. Your syntax isn't recognized as an attempt at a function, most likely you've missed a closing parenthesis somewhere. That's easy to do when there's lots of nesting. (2) error MSB4184: The expression "…" cannot be evaluated. It treated it as a function, but probably it couldn't parse it. (3) error MSB4184: The expression "…" cannot be evaluated. Method '…' not found. It could parse it, but not find a member it could coerce to, or it was considered ambiguous by the binder. Verify you weren't calling a static member using instance member syntax. Try to make the call less ambiguous between overloads, either by picking another overload (that perhaps has a unique number of parameters) or using the Convert class to force one of the parameters explicitly to the type the method wants. One common case where this happens is where one overload takes an integer, and the other an enumeration. (4) error MSB4184: The expression "[System.Text.RegularExpressions.Regex]::Replace(d:\bar\libs;;c:\Foo\libs;, \lib\x86, '')" cannot be evaluated. parsing "\lib\x86" - Unrecognized escape sequence \l. Here's an example where it bound the method, but the method threw an exception ("unrecognized escape sequence") because the parameter values weren't valid. (5) error MSB4186: Invalid static method invocation syntax: "....". Method 'System.Text.RegularExpressions.Regex.Replace' not found. Static method invocation should be of the form:$([FullTypeName]::Method()), e.g. $([System.IO.Path]::Combine(a, b)).. Hopefully this is self explanatory, but more often than a syntax mistake, you called an instance member using static member syntax. #### Arrays Arrays are tricky as the C# style syntax "new Foo[]" does not work, and Array.CreateInstance needs a Type object. To get an array, you either need a method or property that returns one, or you use a special case where we can force a string into an array. Here's an example of the latter case:$(LibraryPath.Split(;))
In this case, the string.Split overload wants a string array, and we're converting the string into an array with one element.
#### Regex Example
Here I'm replacing a string in the property "LibraryPath", case insensitively.
<LibraryPath>$([System.Text.RegularExpressions.Regex]::Replace($(LibraryPath), $(DXSDK_DIR)\\lib\\x86, , System.Text.RegularExpressions.RegexOptions.IgnoreCase))</LibraryPath> Here's how to do the same with string manipulation, less pretty. <LibraryPath>$(LibraryPath.Remove($(LibraryPath.IndexOf($(DXSDK_DIR)\lib\x86, 0, $(IncludePath.Length), System.StringComparison.OrdinalIgnoreCase)),$([MSBuild]::Add($(DXSDK_DIR.Length), 8))))</LibraryPath> #### Future Thoughts So far in my own work I've found this feature really useful, and far, far, better than creating a task. It can make some simple tasks that were impossible possible, and often, easy. But as you can see from the examples above, it often has rough edges and sometimes it can be horrible to read and write. Here's some ways we can make it better in future: 1. A "language service" would make writing these expressions much easier to get right. What that means is a better XML editing experience inside Visual Studio for MSBuild format files, that understands this syntax, gives you intellisense, and squiggles errors. (Especially missed closing parentheses!) 2. A smarter binder. Right now we're using the regular CLR binder, with some customizations. Powershell has a much more heavily customized binder, and I believe there is now one for the DLR. If we switch to that, it would be much easier to get the method you want, with appropriate type conversion done for you. 3. Some more methods in the [MSBuild] namespace for common tasks. For example, a method like$([MSBuild]::ReplaceInsensitive(\$(DXSDK_DIR)\\lib\\x86, )) would be easier than the long regular expression example above.
4. Enable more types and members in the .NET Framework that are safe, and useful.
5. Make it possible to expose your own functions, that you can use with this syntax, but write in inline code like MSBuild 4.0 allows you to do for tasks. You'd write once, and use many.
6. Offer some similar powers for items and metadata.
What do you think?
• Post
• I've found that GetDirectoryNameOfFileAbove will sometimes result in a path with a backslash at the end, and sometimes it will not. I think it is there when the file is in the same directory, and not when it has to go up a directory to find it. I haven't debugged this enough to file a Connect bug on it. Have you run into this?
• @GregM, no, but I can believe it - please do open a bug for us. Fortunately, generally extra slashes within a path don't cause problems on Windows. (Unless you're manipulating the path as a string, I guess.) They're just not pretty.
If we're sometimes adding a slash, realistically we'll have to fix it to always add a slash so nobody breaks.
Dan
• Yip, you're adding a trailing slash if the file is at the same location as the starting directory.
• <a href="http://www.solidhomedubai.com">Dubai Properties</a>
Easily, the publish is really the greatest on this laudable topic. I concur with your conclusions and will thirstily look forward to your future updates. Saying thanks will not just be sufficient, for the fantastic lucidity in your writing. I will instantly grab your rss feed to stay privy of any updates. Solid work and much success in your business enterprise!
Page 1 of 1 (5 items)
|
2014-08-20 20:52:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2741490602493286, "perplexity": 3653.5174927607304}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500812662.69/warc/CC-MAIN-20140820021332-00222-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://www.nature.com/articles/s41598-019-53904-w
|
Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.
# Demography, heritability and genetic correlation of feline hip dysplasia and response to selection in a health screening programme
## Abstract
Feline hip dysplasia (FHD) is a debilitating condition affecting the hip joints of millions of domestic cats worldwide. Despite this, little is known about FHD except that it is relatively common in the large breed Maine Coon. We used 20 years of data from 5038 pedigree-registered Maine Coon cats in a radiographic health screening programme for FHD to determine, for the first time, its heritability, genetic correlation to body mass and response to selection. FHD prevalence was 37.4%, with no sex predilection; however, FHD severity increased with age and body mass. Heritability of the radiographic categories used to classify FHD severity was 0.36 (95%CI: 0.30–0.43). The severity of FHD symptoms was also genetically correlated with body mass (0.285), suggesting that selection for a large body type in this breed concurrently selects for FHD. Support for this was found by following generational responses to selective breeding against FHD. Not only did selective breeding successfully reduce the severity of FHD symptoms in descendants, but these cats were also smaller than their ancestors (−33g per generation). This study highlights the value of breeding programmes against FHD and cautions against breed standards that actively encourage large bodied cats.
## Introduction
Hip dysplasia is a degenerative condition affecting the coxofemoral joint, where abnormal anatomical development and weight-bearing forces combine to create ongoing degenerative joint disease1,2. Because of its debilitating impacts on individual animals, considerable research effort has focussed on the disease’s genetic and environmental risk factors in order to mitigate these risks. Much of this work has been done in dogs because hip dysplasia is relatively common in larger dog breeds2,3,4,5; thus in dogs we know much about its breed predisposition, heritability, genetic regulation, environmental risk factors, symptoms and radiographic presentation, welfare implications and response to selection in health screening programmes1,2,3,4,6,7,8,9. Selective breeding against canine hip dysplasia through breed-specific programmes has become the mainstay of managing the prevalence of the disease for over 50 years2,3,6,10. However unlike dogs, hip dysplasia in cats has been largely overlooked and underreported11,12,13, with very little known about its demography (but see14), and almost nothing known about risk factors, heritability or potential response to selection11,13,14.
Current knowledge about feline hip dysplasia (FHD) comes largely from individual case reports in the veterinary literature15,16,17, and four studies reporting its breed-specific prevalence11,14,18,19. While there is evidence that FHD may be more common in some breeds (e.g. Devon Rex, Himalayan & Persian11,19), these differences are often difficult to quantify because of the small sample sizes reported (e.g. the 78 cats examined in19 are divided among 9 breeds). Despite this, there are clear indications that domestic shorthairs have a lower prevalence of FHD (10.4% from 899 cats11,18,19) compared to Maine Coons (24.9% from 2548 cats14). Because of these likely breed associations, FHD is expected to have a genetic basis similar to the polygenic condition in dogs11,13,20,21. Thus it is assumed that FHD can be similarly managed, with selective breeding and culling being used to limit its impact in specific populations. However, while these assumptions are reasonable, the heritability of FHD and its subsequent response to selection are still unknown; issues that have significant implications for our understanding and management of FHD6,11,21.
Heritability is the proportion of phenotypic variation arising from additive genetic effects, and is critical for estimating the expected response to selection22. To determine heritability of a phenotypic trait, in this case FHD, we need many individuals of known genetic relationship to one another whose FHD symptoms can be compared. Currently in cats, such an analysis is only feasible in the Maine Coon. Two international registries collect data on FHD in the Maine Coon: the Orthopedic Foundation for Animals (OFA) whose demographics are described in Loder & Todhunter14 and the Swedish-based PawPeds programme (https://pawpeds.com). In both programmes, FHD is classified into ranked scores based on the severity of radiographic signs (Fig. 1). The PawPeds programme also bases breeding recommendations on these FHD scores, with breeders strongly encouraged to selectively breed for a lower incidence of FHD. Organised selective breeding against disease traits is much less common in cats compared to dogs; thus, PawPeds data offer an unusual opportunity to study the success of breeding programmes in domestic cats. These factors, combined with the Maine Coon pedigree registry also managed by PawPeds, allows not only an assessment of FHD demographics (e.g.14), but also an estimate of its heritability and response to selective breeding.
We analysed data collected by the PawPeds FHD Maine Coon health programme and linked these data to the Maine Coon pedigree registry. This allowed us to specifically examine: (1) demographics of hip dysplasia from 5038 Maine Coon cats and its relationship to probable risk factors (i.e. sex, age, body mass and year of the health programme), (2) response to selective breeding, by looking at the relationship between FHD scores and the number of generations within the selective breeding programme, and (3) the heritability of FHD, by using the linked pedigree to calculate the ratio of additive genetic variance to phenotypic variance. The Maine Coon is a large cat breed, and this in combination with evidence that hip dysplasia in dogs is associated with large dog breeds3,5, suggests some linkage between genes for body size and hip dysplasia risk. Thus, we also examined: (4) the genetic correlation between FHD scores and body mass to assess how selection for size in this large breed may concurrently select for FHD. By extension, we also examined whether selection against FHD in the health programme concurrently selected for a smaller body mass in subsequent generations [because of this genetic correlation]. These questions are not only invaluable for greatly expanding our currently limited knowledge of FHD, but also for validating the effectiveness of current and future breeding programmes. Information regarding genetic correlations between FHD and body mass will have significant implications for the breed standard and the sorts of traits breeders should be selecting for if they want to limit FHD in this (and other) cat breed(s).
## Results
### Prevalence and factors related to HD expression
More than one third of all Maine Coon cats surveyed showed some radiographic signs of hip dysplasia (37.4%), with the proportion of cats in each HD category being approximately the same for males and females (i.e. hip score 1 = 22%, score 2 = 12% and score 3 = 4%; Table 1). For cats with radiographic signs of FHD, 36.9% had lesions in only one hip and 63.1% in both hips; males and females were similar in proportion and range of severity (Supplementary Table S1; Fig. S1). During the 20 years of health monitoring, the proportion of cats without radiographic signs of FHD declined marginally (Fig. 2), with the proportion of cats in the most severe FHD categories (2 & 3) increasing (Fig. 2; Supplementary Fig. S2; Table S2). Residual body mass and age were related to the severity of FHD expression, with heavier and older cats having higher than average hip scores compared to lighter or younger cats (Fig. 3; Supplementary Table S3).
### Response to selective breeding
Despite the cross-sectional data showing no reduction in the prevalence or severity of FHD in Maine Coons over time (Fig. 2), there was strong evidence from the multi-generational longitudinal data that selective breeding reduced FHD scores. Here, the predicted average hip score approximately halved after five generations of selective breeding (Fig. 4a); after eight generations the predicted maximum hip score per individual had declined to 0.268 (95% CI: 0.21–0.33) for females and 0.286 (0.21–0.39) for males (from a starting value at generation 1 of 0.85 (0.75–0.96) & 0.86 (0.73–1.03) respectively; Fig. 4a). This effect of culling animals with high FHD scores and only breeding from animals with normal hips or low FHD scores, thus resulted in a 68% (95%CI: 62–73%) reduction in the mean value of the hip score for females and a 66% (60–72%) reduction for males (assuming hip scores follow an approximately linear ordinal scale) between generations 0 and 8 (see Supplementary Table S3 for model coefficient estimates).
### Heritability and genetic correlations of HD in Maine Coons
The response to selective breeding was further supported by the quantitative genetic analyses demonstrating moderate heritability of HD scores (h2 = 0.36 at the observed data scale; Table 2). Genetic correlation analyses using the bivariate response traits of residual body mass and radiographic HD scores showed that these factors were moderately positively correlated (rG = 0.285; Table 2). Thus, individuals with a genetic predisposition to be larger than average also had higher than average FHD scores because of some genetic linkage between these traits.
To confirm this finding and to examine a potential consequence of this genetic correlation, we considered whether selection against FHD within the health screening programme had an incidental effect on body mass. Here we expected that residual body mass should decline as a consequence of HD being selected against. We regressed residual body mass against the number of generations of selective breeding and found clear evidence for a reduction in body size (mean ~0.25 kg after 8 generations) when breeders select against FHD (Fig. 4b; Supplementary Table S3).
## Discussion
Our study confirms the high prevalence of FHD in the Maine Coon (37.4%; cf. 24.9% in14) and provides additional justification for the screening and selective breeding programmes that have developed around this cat breed. However it is naive to think that FHD is a condition largely restricted to the Maine Coon. The Maine Coon was the focus of this study simply because it is the only breed with data routinely collected on FHD. Previous studies show that other cat breeds likely suffer from this condition (e.g. Devon Rex [40%], Abyssinian [30%], Himalayan [25%], Persian [15%]11,19); however these are based on very small sample sizes (between 5 and 25 cats) because organisations associated with these breeds have not made specific recommendations to their members to screen for FHD. The only other type of cat with sufficient sampling to provide a good estimate of FHD prevalence is the domestic shorthair, where a combination of studies provides an estimate of 10.4% from a total of 899 cats11,18,19. This clearly suggests that FHD has a similar prevalence to canine HD (overall prevalence = 15.6%; breed ranges 0–77%)5. These findings are in stark contrast to Hayes et al.20 who estimated FHD prevalence to be extremely low (at 1/180th the prevalence of canine HD, based on 14 cases in 270,000 hospital visits), but neglected to consider that many of these patients would have had FHD but were not evaluated for it because cats do not show the same obvious clinical symptoms as dogs12,13. Thus, it is clear there has been a severe underestimation of FHD prevalence in the general cat population.
As with previous studies we found no association between gender and FHD prevalence or severity (Table 1). Laterality of FHD (when present) was relatively common with 37% of females and males presenting with an FHD score of >0 in only one hip (c.f. 45% and 43% in14). As has been previously noted, when bilateral HD occurs the severity of the condition is generally higher than when only one hip is involved (Supplementary Table S1)5,14. The explanation for this relationship between laterality and HD severity can be likened to sampling error from a probability distribution; cats with a genetic predisposition for mild FHD are more likely to have a normal hip by chance (and hence unilateral HD) than those with a genetic predisposition for severe FHD (hence they get bilateral HD more often; see Fig. S3 for an expanded explanation). Perhaps not surprisingly with a degenerative joint condition, our results show that FHD severity was related to age and the residual body mass of the individual (i.e. the deviation from the expected body size given the animal’s sex and age). Thus, older and heavier cats tended to have more severe FHD than younger and lighter cats. These relationships are important to consider when examining general FHD patterns in data, as non-uniform distributions of age or body mass within a dataset could lead to spurious conclusions if not accounted for.
Any effectiveness of the PawPeds health programme was not apparent when the raw cross-sectional data were initially examined. Over the course of the programme we expected the proportion of cats without FHD symptoms to increase because FHD was being selected against. Contrary to this, the proportion of cats in the moderate to severe FHD categories increased at the expense of those in the mild to no symptom categories (Fig. 2). At first glance this seemed to indicate the programme was not working. However, interpreting the cross-sectional data in this way depended on some very strong assumptions about the state of new cats entering the programme in later years being the same as those in the early years (age, body mass, severity of symptoms). By analysing longitudinal data that followed generations through the programme, we could see that selective breeding had a dramatic effect on the severity of hip scores. Thus, the lack of patterns in the expected direction in the cross sectional data likely arose from changes in the types of animals being submitted to the programme, with there being some evidence in the data that residual body mass increased during the life of the programme.
Our estimates of heritability (h2) of FHD were within the range commonly reported for HD in dogs (0.1–0.7)1,2,3,4,5, suggesting a similar polygenic aetiology and an expected response to selective breeding. But it is important to consider exactly what is meant by h2 of FHD and why h2 estimates will likely vary between this study and future studies. Heritability is the proportion of phenotypic variance in our measure of FHD that is explained by additive genetic factors. Thus, one needs to consider the amount of variation in the population and the relative contribution of genetic versus non-genetic effects. This means that variation arising from different gene pools or differences in the general rearing conditions or nutrition of the animals being evaluated may significantly impact the h2 estimates, with these potentially differing because of spatio-temporal or sampling issues. But potentially more importantly is the way FHD is measured or categorised in the first place. Heritability is calculated from the phenotypic variance, with the phenotype in this case not being FHD itself, but how we measure FHD. PawPeds uses a 4-category ordinal ranking to define FHD severity and grades the cat based on the highest (worst) hip score, while the Orthopedic Foundation for Animals (OFA) registry uses a 7-category grading system and grades the cat based on an average of the two hip scores14. While these ways of categorising FHD are likely to be very closely aligned and thus should reveal similar h2 estimates, any differences will probably say more about the method used to categorise FHD rather than the underlying gene pool. Thus, where future work focuses on the relative advantages of different methods for diagnosing or categorising FHD (e.g. rank scores versus Norberg angles, laxity scores or subluxation indices)19,23, it should be kept in mind that these diagnostic methods and their phenotypic variance may influence h2 estimates, with this, in turn influencing the effectiveness of selective breeding programmes based on these FHD scores6. In addition, h2 estimates could be influenced through observational error depending on the grading system used in the selection programme. Here we used a single observer when categorising FHD, whereas other programmes use multiple observers14. Single observer programmes potentially limit between-observer variation3,24 and hence produce higher and more accurate h2 estimates; however, they run the risk of introducing systematic bias during the course of the programme if the observer subtly changes the magnitude of their assessments over time. The degree to which these observer errors impact on the effectiveness of FHD screening programmes is unknown, but could potentially be investigated using the PawPeds and Orthopedic Foundation for Animals data. Currently, our study shows that the method used by PawPeds provides a measure with a moderate level of heritability for selection to act on and with direct evidence of declining FHD scores in response to selection. Thus there is no reason to change the way FHD is classified in this programme, unless compelling evidence shows that other ways of scoring FHD are more effective at reducing its severity within a selective breeding context.
In dogs, larger breeds are most commonly associated with the highest prevalence of HD. The genetic correlation analyses in our study clearly demonstrate that some of the genes responsible for the larger body type seen in the Maine Coon also increase FHD risk, either because of genetic pleiotropy or inheritance of a linked gene complex. Thus breeding for a large body type carries with it the increased risk of FHD. This finding was further supported when we looked at the average body size of cats within the selective breeding programme against FHD; here as the number of generations of selection against FHD increased, body size decreased. This has serious implications for how the breed standard should be interpreted by breeders and show judges. The Maine Coon breed standard describes the breed as large in type (e.g. “…the optimum being a large, typey cat…the Maine Coon is large framed…medium to large in size”; mcbfa.org; cfa.org; fifeweb.org). This in itself is not a bad thing; however, there is likely to be temptation for some judges and breeders to favour the larger-than-average cats within the breed, to accentuate one of its defining features. This temptation is hinted at in the same Maine Coon breed standards when they specifically warn against this (“Males may be larger, females are usually smaller. Females should not be penalized because of this size difference… Quality should never be sacrificed for size…Type must not be sacrificed for size”). Our study shows that these warnings need to be heeded, as selecting for a larger-than-average body type is likely to carry with it the unwanted genetic consequences of higher FHD risk. This raises ethical questions about exactly what traits should be promoted in the breed, and the possible trend that these cats are getting larger over time.
Our study is the first to estimate heritability of FHD, examine its genetic correlation with a breed-standard trait and measure response to selective breeding against FHD in a monitored health programme. The data also add considerable weight to our knowledge about the demography of FHD in the Maine Coon14 and have significant implications for how we should view the prevalence and management of FHD. Thus, there are associated welfare and ethical issues to consider, not only from a veterinary perspective, but also from the perspective of cat societies and their members. It is important to keep in mind that our focus on the Maine Coon is not just because of its risk profile for FHD, but also because it is the only breed with the data to support such an investigation. Feline hip dysplasia is a significant clinical problem affecting millions of cats worldwide, and it is time it received similar consideration as its canine counterpart.
## Methods
### Data
From the PawPeds health database, we extracted data for Maine Coons from January 2000 to June 2019 that included 5038 records (female = 3287 & male = 1751) from individuals with information on FHD scores, sex, age and parentage. Of these records, 2156 also had data on the individuals’ body weight (female = 1402 & male = 754) allowing body-weight-related analyses on this subset. We also had access to the Maine Coon pedigree database (https://pawpeds.com) that allowed us to derive information on the genetic relationships between these individuals (for a summary of the pedigree database see Supplementary Table S4).
### Analyses
Our initial approach was to examine the prevalence of different FHD scores based on sex to get a general demographic overview of the data for comparison to14 (Table 1). These data were collected during a 20-year period, thus we also wanted to examine whether there was any general trends in the relative proportion of FHD scores during this time that might reflect the influence of the breeding programme. Because the hip-score rankings (0–3) were ordinal, but the proportional odds assumption for an ordered logistic regression was violated (chi-square P = 0.01), we used the more general multinomial logistic regression using the ‘mlogit’ package26 in R27 for this analysis.
Symptoms of FHD are likely related to an interaction between the age and weight of an individual and its genetics. Thus we were interested in how age and weight influenced the phenotypic expression of FHD in male and female cats. Because body mass is age and sex dependent, we converted each individual’s body mass data to a sex- and age-adjusted residual body mass; thus, negative residuals were for individuals smaller than expected, and positive residuals for individuals larger (heavier) than expected for their age (for details see Supplementary Fig. S4). FHD scores displayed a classic Poisson distribution and thus we could use a log-link Poisson regression (truncated to a maximum of 3) to estimate the effect of age and residual body mass on FHD expression (see Supplementary Appendix S1a for full model details). To examine the effect of selection on FHD expression within the PawPeds health programme, for each individual we used information from the pedigree to determine how many generations of their ancestors had been previously assessed for FHD (and thus been selected for within the programme). Thus each individual received a measure of how many generations of selective breeding preceded them, on both their maternal and paternal sides. These were then averaged to give a mean ancestor score that allowed us to estimate changes in the mean FHD score relative to the number of generations of selective breeding. As with the preceding analysis, we used a truncated Poisson regression to estimate this relationship and included age and year to control for age and year effects (see Supplementary Appendix S1b for full model details).
To estimate heritability we needed to statistically partition the phenotypic variance (VP) of FHD into its additive genetic (VA) and the residual or additional variance components. To do this we fitted a generalised linear mixed-effects model known as an ‘animal model’ in quantitative genetics28. The animal model uses information about the genetic relationships between individuals from the pedigree (Supplementary Table S4) to estimate VA; from this the heritability (h2) is then calculated using the formula h2 = VA/VP. Models were implemented using the R package ‘MCMCglmm’29 following pedigree checking using the ‘pedantics’ package30. These animal models were formulated as ‘threshold’ models for ordinal data, with the unit variance fixed to 1 and individual ID included to account for repeated measures on the same individual (see Supplementary Appendix S1c for full model details). Ordinal threshold models use a (probit) link function to translate between the underlying latent statistical variable (which is continuous) and the observed state of the phenotype (in this case, discrete states from 0–3); thus, h2 estimates from these models are based on the underlying latent variable rather than the observed state of the phenotype. This means that h2 estimates from non-gaussian models need to be transformed to account for the variance associated with the link function if h2 is to be interpreted in the same scale as the observed phenotypic data31. Thus for h2 estimates of threshold models we present them at both the latent scale and observed data scale (after transformation using the ‘QGglmm’ package)31.
We then estimated the h2 of residual body mass and its genetic correlation to the FHD scores by using an extension of the animal model which permits variance to be partitioned between multiple traits28. This allowed us to ask whether the phenotypic covariance we observe between residual body mass and FHD scores was due to additive genetic effects (COVA). Genetic covariance between traits may occur because they share the same genes or linked gene complexes and is expressed as the genetic correlation (rG) using the formula: rG = COVA/$$\sqrt{{V}_{A1}}\times {V}_{A2}$$ (see Supplementary Appendix S1d for full model details). As a further confirmation that genetic covariances might exist between body size and FHD, we examined how residual body mass changed relative to the number of generations of selective breeding within the PawPeds programme. Here we expected residual body mass to decline as the number of generations increased, if there was the expected positive genetic correlation between residual body mass and FHD (i.e. the direct selection against FHD would indirectly select against larger body mass; see Supplementary Appendix S1e for full model details).
For the FHD response variable used in these analyses we used each individual’s maximum hip score (e.g. if a cat had a left hip score = 1 and a right hip score = 2, its maximum hip score = 2) because: (1) it seemed reasonable to classify a cat based on its worst HD score, since both hips were not always affected equally (Table 1 & Supplementary Table S2; Fig. S1), (2) this variable has been used in canine HD studies1,3, and (3) the genetic correlation between maximum hip score and the left or right hip scores was extremely high (0.996; see Table 2) supporting the idea that it captured the genetic variation we were generally seeking to explain. With the exception of the multinomial modelling of the cross-sectional data, all models were implemented in a Bayesian statistical framework with minimally-informative priors. For the regression models, these were run in JAGS32 called from R27, and for the quantitative genetic models these were run in MCMCglmm29. In all cases models were run until MCMC chains converged and were checked for model fit using posterior predictive checks33 (see Supplementary Appendix S1). Results are generally presented as means and the 95% credible intervals from the posterior distributions (i.e. the range where the value of the estimate occurs with a 95% probability) unless otherwise stated.
## Data availability
All data used in this study are available from PawPeds (https://pawpeds.com/healthprogrammes/): the Maine coon data on FHD were retrieved from the Hip Dysplasia database, and the weight data used to create the growth models were retrieved from the Hypertrophic Cardiomyopathy database.
## References
1. Wilson, B. J. et al. Heritability and phenotypic variation of canine hip dysplasia radiographic traits in a cohort of Australian German shepherd dogs. PLoS ONE 7, e39620, https://doi.org/10.1371/journal.pone.0039620 (2012).
2. Oberbauer, A. M., Keller, G. G. & Famula, T. R. Long-term genetic selection reduced prevalence of hip and elbow dysplasia in 60 dog breeds. PLoS ONE 12, e0172918, https://doi.org/10.1371/journal.pone.0172918 (2017).
3. Swenson, L., Audell, L. & Hedhammar, Å. Prevalence and inheritance of and selection for hip dysplasia in seven breeds of dogs in Sweden and benefit:cost analysis of a screening and control program. J. Am. Vet. Med. Assoc. 210, 207–214 (1997).
4. Todhunter, R. J. et al. Genetic structure of susceptibility traits for hip dysplasia and microsatellite informativeness of an outcrossed canine pedigree. J. Hered. 94, 39–48 (2003).
5. Loder, R. T. & Todhunter, R. J. The demographics of canine hip dysplasia in the United States and Canada. J. Vet. Med. 2017, 5723476, https://doi.org/10.1155/2017/5723476 (2017).
6. Wilson, B. J., Nicholas, F. W. & Thomson, P. C. Selection against canine hip dysplasia: success or failure? Vet. J. 189, 160–168 (2011).
7. Wilson, B. J. et al. Genetic correlations among canine hip dysplasia radiographic traits in a cohort of Australian German shepherd dogs, and implications for the design of a more effective genetic control program. PLoS One 8, e78929, https://doi.org/10.1371/journal.pone.0078929 (2013).
8. Zhou, Z. et al. Differential genetic regulation of canine hip dysplasia and osteoarthritis. PLoS One 5, e13219, https://doi.org/10.1371/journal.pone.0013219 (2010).
9. Lust, G. An overview of the pathogenesis of canine hip dysplasia. J. Am. Vet. Med. Assoc. 210, 1443–1445 (1997).
10. Keller, G. G., Dzuik, E. & Bell, J. S. How the Orthopedic Foundation for Animals (OFA) is tackling inherited disorders in the USA: using hip and elbow dysplasia as examples. Vet. J. 189, 197–202 (2011).
11. Keller, G. G., Reed, A. L., Lattimer, J. C. & Corley, E. A. Hip dysplasia: a feline population study. Vet. Radiol. Ultrasound 40(460), 464 (1999).
12. Lascelles, B. D. X. Feline degenerative joint disease. Vet. Surg. 39, 2–13 (2010).
13. Perry, K. Feline hip dysplasia: a challenge to recognise and treat. J. Feline Med. Surg. 18, 203–218 (2016).
14. Loder, R. T. & Todhunter, R. J. Demographics of hip dysplasia in the Maine Coon cat. J. Feline Med. Surg. 20, 302–307 (2018).
15. Peiffer, R. L. & Blevins, W. E. Hip dysplasia and pectinous resection in the cat. Feline Pract. 4, 40–43 (1974).
16. Holt, P. E. Hip dysplasia in a cat. J. Small Anim. Pract. 19, 273–276 (1978).
17. Patsikas, M. N., Papazoglou, L. G., Komninou, A., Dessiris, A. K. & Tsimopoulos, G. Hip dysplasia in the cat: a report of three cases. J. Small Anim. Pract. 39, 290–294 (1998).
18. Köppel, E. & Ebner., J. Hip dysplasia in the cat. Klientierpraxis 35, 281–298 (1989).
19. Lagenbach., A. et al. Relationship between degenerative joint disease and hip joint laxity by use of distraction index and Norberg angle measurements in a group of cats. J. Am. Anim. Hosp. Assoc. 213, 1439–1443 (1998).
20. Hayes, H. M., Wilson, G. P. & Burt, J. K. Feline hip dysplasia. J. Am. Anim. Hosp. Assoc. 5, 447–448 (1979).
21. Schnabl-Feichter, E., Tichy, A., Gumpenberger, M. & Bockstahler, B. Comparison of ground reaction force measurements in a population of domestic shorthair and Maine Coon cats. PLoS One 13, e0208085, https://doi.org/10.1371/journal.pone.0208085 (2018).
22. Wilson, A. J. Why h2 does not always equal VA/VP? J. Evol. Biol. 21, 647-650 (2008)
23. Valastro, C. et al. The CT dorsolateral subluxation index is a feasible method for quantifying laxity in the feline hip joint. Vet. Radiol. Ultrasound 60, 1–6 (2019).
24. Leppänen, M., Mäki, K., Juga, J. & Saloniemi, H. Factors affecting hip dysplasia in German shepherd dogs in Finland: efficacy of the current improvement programme. J Small. Anim. Pract. 41, 19–23 (2000).
25. Root C. R. et al. A disease of Maine Coon cats resembling congenital canine hip dysplasia. American Collage of Veterinary Radiologists Annual Meeting, Dec. (1987)
26. Croissant, Y. mlogit: Multinomial Logit Models. R package version 0.4-2, https://CRAN.R-project.org/package=mlogit(2019)
27. R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria, https://www.R-project.org/ (2018)
28. Wilson, A. J. et al. An ecologist’s guide to the animal model. J Anim. Ecol. 79, 13–26 (2009).
29. Hadfield, J. D. MCMC methods for multi-response generalized linear mixed models: the MCMCglmm R Package. J. Stat. Software 33, 1–22 (2010).
30. Morrissey, M. Pedantics: functions to facilitate power and sensitivity analyses for genetic studies of natural populations. R package version 1.7, https://CRAN.R-project.org/package=pedantics (2018).
31. de Villemereuil, P., Schielzeth, H., Nakagwa, S. & Morrissey, M. General methods for evolutionary quantitative genetic inference from generalised mixed models. Genetics 204, 1281–1294 (2016).
32. Plummer, M. JAGS: a program for analysis of Bayesian graphical models using Gibbs sampling. Proceedings of the 3rd international workshop on distributed statistical computing, Vienna (2003)
33. Hooten, M. B. & Hobbs, N. T. A guide to Bayesian model selection for ecologists. Ecol. Monogr. 85, 3–28 (2015).
## Acknowledgements
ML was funded by the Swedish Research Council (VR 2012-03634) and ÅO was funded by FORMAS (2017-00559). Open access funding provided by Swedish University of Agricultural Sciences.
## Author information
Authors
### Contributions
Study conception and design: Å.O., M.L., P.E. and K.H. Data collection and retrieval: U.O., L.A., P.E., K.H. and Å.O. Data analysis by M.L. M.L. led the writing of the manuscript with Å.O. All authors contributed critically to the drafts and gave final approval for submission.
### Corresponding author
Correspondence to Matthew Low.
## Ethics declarations
### Competing interests
M.L. declares no competing interests. P.E., K.H., U.O., L.A. & Å.O. declare they currently or have previously worked with PawPeds; however, their involvement in this paper did not relate to the analysis or presentation of the results.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Low, M., Eksell, P., Högström, K. et al. Demography, heritability and genetic correlation of feline hip dysplasia and response to selection in a health screening programme. Sci Rep 9, 17164 (2019). https://doi.org/10.1038/s41598-019-53904-w
• Accepted:
• Published:
• DOI: https://doi.org/10.1038/s41598-019-53904-w
|
2022-05-24 03:42:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.552876889705658, "perplexity": 5550.833590535871}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662562410.53/warc/CC-MAIN-20220524014636-20220524044636-00047.warc.gz"}
|
https://rpg.stackexchange.com/questions/101933/can-skeletons-be-put-to-sleep
|
# Can Skeletons be put to sleep?
There seems to be some debate over whether or not Skeletons can be put to sleep by way of the Sleep spell in 5e.
Can they or not?
• -1 since the debate was about sleeping in general not just by means of the sleep spell. The sleep spell explicit say that undead are excluded. That doesn't give implicit immunity of the effect inflicted by sleep spell for undeads by every means. – Zaibis Jun 21 '17 at 11:02
• Also your question is ambiguous since the answer to the title is yes while to OP had to be answered with no. – Zaibis Jun 21 '17 at 11:03
# No
This limitation is defined in the spell description:
Undead and creatures immune to being charmed aren’t affected by this spell.
Skeletons are undead, therefore they cannot be put to sleep using the sleep spell.
• Sleep is also not a Condition and woudn't be listed as such for condition immunities. Always check your spell limitations for language that may be vitally important! – NautArch Jun 20 '17 at 14:20
• @NautArch Unconscious is a condition though, and immunity to it would protect against being put to sleep. – Doval Jun 20 '17 at 15:41
• @Doval yes, as pyrotechnical has in their answer. Skeletons are not immune to Unconscious. – NautArch Jun 20 '17 at 16:13
• @Doval sleep is not unconscious; I think you are mixing a misunderstanding on the real world with the game mechanics. A creature immune to unconscious would still sleep. Regarding condition immunities, a creature must be immune to charm to avoid the spell. – Mindwin Jun 21 '17 at 0:35
• I found this: ref Normal sleep is an altered state of consciousness. You are aware and responding to stimuli, such as rolling over when uncomfortable or pulling up covers when cold, but in a diminished way. Unless the sleeper is given enough stimulation, they continue to sleep through it. - while it is not from scientifical research, corroborates the fact that sleep is not completely unconscious. – Mindwin Jun 21 '17 at 0:38
They can't be put to sleep using the Sleep spell because of the inherent limitations of that spell. However, if you had another means of inflicting the unconscious condition upon them, for example a brass dragon's sleep breath, then they would be susceptible.
|
2019-07-21 11:26:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3573112189769745, "perplexity": 2906.536304739867}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526948.55/warc/CC-MAIN-20190721102738-20190721124738-00281.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/elementary-algebra/chapter-9-roots-and-radicals-9-5-solving-radical-equations-problem-set-9-5-page-424/24
|
Elementary Algebra
$y=34/9$
Recall, in order to cancel out a square root, we square both sides of the equation. Thus, we square both sides of the equation to obtain that 9(y-2)=16. We now simplify: $9y-18=16 \\\\ 9y=34 \\\\ y=34/9$ We now must check our solution: $3\sqrt{34/9-2}=4 \\\\ 3\sqrt{16/9} =4 \\\\ 3(4/3) =4 \\\\ 4=4$ This is true, so our solution is valid.
|
2018-09-20 02:19:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8644272089004517, "perplexity": 147.61127007692104}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156376.8/warc/CC-MAIN-20180920020606-20180920040606-00227.warc.gz"}
|
https://undergroundmathematics.org/polynomials/r6431/solution
|
Review question
# Can we solve $\sqrt{4x+13}-\sqrt{x+1}=\sqrt{12-x}?$ Add to your resource collection Remove from your resource collection Add notes to this resource View your notes for this resource
Ref: R6431
## Solution
1. Solve $\sqrt{4x+13}-\sqrt{x+1}=\sqrt{12-x}.$
Squaring both sides of the equation gives us \begin{align*} &(4x+13)+(x+1)-2\sqrt{(4x+13)(x+1)} = 12-x\\ \Longrightarrow\quad& 5x+14-12+x = 2\sqrt{(4x+13)(x+1)}\\ \Longrightarrow\quad& 3x+1 = \sqrt{(4x+13)(x+1)}. \end{align*} Squaring again implies that \begin{align*} &9x^2+6x+1 = (4x+13)(x+1)=4x^2+17x+13 \\ \Longrightarrow\quad& 5x^2-11x-12 = 0 \\ \Longrightarrow\quad& (5x+4)(x-3) = 0 \end{align*}
and so $x = -\dfrac{4}{5}$ or $x = 3$.
We know now that these are the only two possibilities for a solution. There is a problem, however; squaring may have intrduced false roots.
For example, if we have $x=\sqrt{x+2}$, then squaring gives $x^2 = x+2$, and so $(x-2)(x+1)=0$, and $x = 2$ or $-1$.
But $2$ works in our original equation, while $-1$ does not. So $-1$ is a false root, and $2$ is the only true solution.
So we need to check whether or not the possible solutions above satisfy our original equation.
If $x = -\dfrac{4}{5}$, then the left-hand side is $\sqrt{4\left(-\dfrac{4}{5}\right)+13}-\sqrt{-\dfrac{4}{5} + 1} = \sqrt{\dfrac{49}{5}}-\sqrt{\dfrac{1}{5}}=\dfrac{1}{\sqrt{5}}(7-1)=\dfrac{6}{\sqrt{5}},$ while the right-hand side is $\sqrt{12+\dfrac{4}{5}}=\sqrt{\dfrac{16}{5}}=\dfrac{4}{\sqrt{5}},$ which are not equal. Thus $x=-\dfrac{4}{5}$ is not a solution.
If $x=3$, then the left-hand side is $\sqrt{4\times 3+13}-\sqrt{3 + 1}=5-2=3$ while the right-hand side is $\sqrt{12-3}=\sqrt{9}=3.$ Thus $x=3$ is the only solution to the original equation.
1. One root of the equation $3x^3+14x^2+2x-4=0$ is rational. Obtain this root and complete the solution of the equation.
Let $f(x)=3x^3+14x^2+2x-4$. Assume the given rational root is $\dfrac{a}{b}$, where the highest common factor of $a$ and $b$ is $1$ and, without loss of generality, $b>0$.
Substituting this solution into $f(x)=0$ and multiplying through by $b^3$ we find $3a^3+14a^2b+2ab^2-4b^3=0.$ Note that this means $b$ divides $3a^3$ (since it divides every other term) and so $b$ is $1$ or $3$ (since it cannot share a factor with $a$).
Now $2$ divides $14a^2b+2ab^2-4b^3$, and so $2$ divides $3a^3$, and thus $2$ divides $a$, say $a = 2c$. This gives us $6c^3 + 14c^2b + cb^3 - b^3 = 0,$ and so $c$ divides $b^3$. But $c$ and $b$ have no common factor, and so $c$ must be $\pm 1$, and $a$ must be $\pm 2$.
So the only possible rational roots for the equation are $\dfrac{2}{3}, -\dfrac{2}{3}, 2$ or $-2$.
It is easy to check that $-\dfrac{2}{3}$ is the only one to satisfy the equation. Thus $x=-\dfrac{2}{3}$ is the rational root. Factorising the cubic we find $3x^3 + 14x^2 + 2x - 4 = (3x+2)(x^2+4x-2)=0.$ So either $x = -\dfrac{2}{3}$, or $x^2+4x-2=0$ and so $x=\dfrac{-4\pm\sqrt{16-4\times(-2)}}{2}=-2\pm\sqrt{6}.$
|
2018-01-21 18:36:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9882164597511292, "perplexity": 139.46144809239638}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890823.81/warc/CC-MAIN-20180121175418-20180121195418-00348.warc.gz"}
|
https://math.stackexchange.com/questions/2733318/prove-that-v-is-not-a-free-module
|
# Prove that $V$ is not a free module.
I am attempting to solve this:
Let $R=\mathbb{Z}[\sqrt{-5}]$, and let $V$ be the R-module presented by the matrix $\begin{bmatrix} 2 \\ 1+δ \end{bmatrix}$ where $δ=\sqrt{-5}$. Prove that $V$ is not a free module.
Note on definitions
I want to note that my professor defines ($V$ is a free R-module) $\iff \exists k\in\{1,2,...\}[V\cong R^k]$. He also considers the empty set to NOT be a valid basis. So there is no basis for the trivial vector space. I know that these definitions are controversial, but bear with me for now.
My attempted solution
Let $T=\begin{bmatrix} 2 \\ 1+δ \end{bmatrix}R$. We know already $V\cong R^2/T$.
Assume $V$ is a free module. We want to obtain a contradiction.
Now, I see that the rank of $\begin{bmatrix} 2+P \\ 1+δ+P \end{bmatrix}$ (when $P$ is a prime ideal of $R$) can be either $0$ or $1$, depending on $P$. My professor says that the rank being non-constant contradicts the fact that $V$ is free, but alas, I do not see the contradiction.
• What is $\delta$? – user14972 Apr 12 '18 at 3:13
• $δ=\sqrt{-5}$. I have now added that info to the question. – Pascal's Wager Apr 12 '18 at 3:14
• @Pascal'sWager : Well, yes. I imagined it to be a formal variable, possibly lingering from some external context. – Eric Towers Apr 12 '18 at 3:17
• Hm, how does one read "an $R$ module presented by a matrix"? I can't say I've heard the expression. From the context it looks like $R^2/(2,1+\sqrt{-5})$? – rschwieb Apr 12 '18 at 10:54
• @Pascal'sWager OK, that's a good start for explaining to me. The issue I have is that "$AR^n$ (the product of a matrix with direct product of copies of $R$) doesn't have any meaning to me. I think you mean to say it's the submodule of $R^m$ generated by the $n$ columns of $A$. Right? – rschwieb Apr 12 '18 at 14:18
For any maximal ideal $P\subset R$, consider $V/PV$, which is an $R/P$-vector space. If $V$ were isomorphic to $R^k$, then $V/PV$ would be isomorphic to $R^k/PR^k\cong (R/P)^k$, so it would have dimension $k$ as an $R/P$-vector space.
But now note that $V/PV$ is presented as an $R/P$-vector space by the matrix $\begin{bmatrix} 2+P \\ 1+\delta+P \end{bmatrix}$. So if $V/PV$ has dimension $k$, that matrix must have rank $2-k$. In particular, if $V$ is free, the rank of $\begin{bmatrix} 2+P \\ 1+\delta+P \end{bmatrix}$ would have to be the same for all $P$. Since this is not true, $V$ cannot be free.
To prove that $V/PV$ is presented as an $R/P$-vector space by the matrix $\begin{bmatrix} 2+P \\ 1+\delta+P \end{bmatrix}$, first consider the following general situation. We have a module $M$ with submodules $N,K,$ and $L$ with $K,L\subseteq N$. Note then that $$(M/K)/(N/K)\cong M/N\cong (M/L)/(N/L).$$ To apply this here, let $M=R^2$, $K=PM$, $L=\begin{bmatrix} 2 \\ 1+\delta \end{bmatrix}R$, and $N=K+L$. Then $(M/K)/(N/K)$ is the quotient of $(R/P)^2$ by the subspace generated by $\begin{bmatrix} 2 + P \\ 1+\delta + P \end{bmatrix}$; that is, it is the $R/P$-vector space presented by the matrix $\begin{bmatrix} 2 + P \\ 1+\delta + P \end{bmatrix}$. On the other hand, $(M/L)/(N/L)$ is $V/PV$.
• This is a very good solution. However, it is not obvious to me that $V/PV \cong (R/P)^2/ \begin{bmatrix} 2+P \\ 1+δ+P \end{bmatrix}(R/P)$. I tried defining a map $φ:V/PV \to (R/P)^2/ \begin{bmatrix} 2+P \\ 1+δ+P \end{bmatrix}(R/P)$ by $φ((\begin{bmatrix} a \\ b \end{bmatrix}+T)+PV)=\begin{bmatrix} a+P \\ b+P \end{bmatrix}$. But, alas, it seems like a very ugly function to work with and I have not been able to prove that it is well-defined, even though I conjecture it is. – Pascal's Wager Apr 20 '18 at 3:53
Hint: There is a standard example of failure of unique factorization in $\mathbb{Z}[\sqrt{-5}]$. Maybe you can adapt that?
Blunter Hint:
What's $(1+ \delta)(1- \delta)$. Is that also divisible by the other element of the presentation matrix?
Bazooka hint:
The quotient is generated by the images of the standard basis elements in $R^2$ under the projection onto the quotient. Free things don't have relations and $\begin{pmatrix}1 \\ 0\end{pmatrix} \cdot 2 + \begin{pmatrix}0 \\ 1 \end{pmatrix} \cdot (1 + \delta) =0$. So this is not a free module on two generators. Consequently, if it is a free module on one generator, it is generated by one of these. Call these $e_1$ and $e_2$. Keep adding copies of $e_1$ to the equation until you realize there's a multiple of $e_1$ that is a multiple of $e_2$ that is a multiple of $e_1$, i.e., $e_1$ satisfies a relation... (Prior hints might shorten this search a bit.)
• I figured that this would play a part in the solution, but I still don't see the contradiction. I have a feeling that I need to multiply the one-and-only element of $B$ by a nonzero element of $R$ to obtain zero, but I'm still stuck. If you don't mind, I could use an even blunter hint :-) – Pascal's Wager Apr 12 '18 at 3:29
• I understand that $\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix} + \begin{bmatrix} 2 \\ 1+δ \end{bmatrix}R, \begin{bmatrix} 0 \\ 1 \end{bmatrix} + \begin{bmatrix} 2 \\ 1+δ \end{bmatrix}R \}$ is not a valid basis for the quotient. However, I do not see how this implies that there exists no basis at all of size 2 for the quotient. – Pascal's Wager Apr 12 '18 at 14:32
• Actually, I think I now understand how to solve the problem. Basically the key is 6 times any basis vector is zero. – Pascal's Wager Apr 12 '18 at 15:03
• $\{e_1, e_2\}$ generates the quotient. So some subset of these is a minimal generating set. Use relations among them to figure out which ones are redundant. Are they all redundant? When you get to the last one, you are asking if it has a relation with itself that is not forced by the ring. For instance, $\mathbb{Z}/6\mathbb{Z}$ as a module over itself is freely generated by $[1] \pmod{6}$, because its relation is inherited from the ring, but $2 \pmod{6}$ is not, because $4 \cdot [2] \cong 1 \cdot [2] \pmod{6}$ is not forced by a "$4 = 1$" relation in the ring. – Eric Towers Apr 12 '18 at 15:03
• But there is no $r\in R-\{0\}$ such that $r$ times the coset I wrote is zero .... or is there? – Pascal's Wager Apr 12 '18 at 20:16
|
2019-12-12 19:49:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8376864194869995, "perplexity": 122.81111045191187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540545146.75/warc/CC-MAIN-20191212181310-20191212205310-00269.warc.gz"}
|
http://www.physicsforums.com/showthread.php?t=449854
|
# Finding the constants in a general solution
by rygza
Tags: constants, solution
P: 52 No, you don't have enough information to determine $$C_1$$.
P: 52 C1 is not a constant if it depends on 1/tan(16t). Besides, that comes from assuming that x(t) is zero everywhere, which you did not state in the problem. You have only presented one equation to extract information from: x(0)=1/6. It is not possible to determine both constants from one piece of information. Any value for $$C_1$$ is consistent with the information you have given us. Is there more?
|
2014-07-23 22:32:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5790668725967407, "perplexity": 372.6693934140534}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997883858.16/warc/CC-MAIN-20140722025803-00174-ip-10-33-131-23.ec2.internal.warc.gz"}
|
https://mocktime.com/newspaper-editorials-the-hindu-upsc-ias/the-indian-express-editorials/three-to-tango/
|
# Three to tango
Under Sharif, Pakistan’s equations with Saudi Arabia and Iran will change
Exactly one year ago, the Indian media speculated about the rapprochement between New Delhi and Riyadh after the Saudis handed over a key figure (carrying a Pakistani passport) to the Union government: Abu Jundal, the man who was allegedly on the phone with the LeT terrorists during the Mumbai attacks of November 2008. In 2010, Manmohan Singh had signed the “Riyadh Declaration” during a most successful visit: India and Saudi Arabia committed themselves to sharing information on terrorist activities and signed an extradition treaty indeed, in addition to Jundal, the Saudis have extradited two other alleged members of the Indian Mujahideen, A. Rayees and Fasih Mehmood, in October 2012. This year, in January, A.K. Antony paid his first visit to his opposite number in the Saudi government. Pakistani authorities were very nervous about these developments, which went on a par with an increasing mutual dependence in the domain of energy, since India has become the fourth-largest customer of the Saudis for oil (after Iran lost ground on the Indian list because of sanctions).
But that was before the comeback of Nawaz Sharif, at a time when Pakistan was ruled by a PPP government the Saudis disliked openly. As early as October 2008, a few weeks after the election of Asif Zardari as president, the deputy chief of mission of Pakistan in Riyadh told his opposite number of the American embassy that the Saudi government would not help Pakistan (which eventually got only $300 million of aid in 2008) and would be “waiting for the Zardari government to fall” (US embassy cable dated October 16, 2008 revealed by WikiLeaks). The Saudis had no confidence in Zardari, who, they suspected, was a Shia (ibid April 9, 2009). As a result, they cultivated their relationship with the army (thanks to which Islamabad got$700 million of aid in 2009 at the Pakistan donors’ conference in Japan) and, in parallel, they prepared for the comeback of Nawaz Sharif. Here, one needs to realise that, since the 1970s, as the Saudi ambassador to the US, Adel al-Jubeir, once said, Saudis “are not observers in Pakistan, we are participants”. Indeed, Riyadh keeps interceding and mediating not only between the US and Islamabad but also between the Pakistani army and the civilians, as evident from its role in
… contd.
|
2019-09-20 12:14:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23382680118083954, "perplexity": 10087.514678691316}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574018.53/warc/CC-MAIN-20190920113425-20190920135425-00074.warc.gz"}
|
http://www.continuousreflection.org/tag/inverse-functions/
|
In her blog post, My Backwards Approach to Inverse Functions, Emily Alman makes an observation that is, I think, vitally important for math teachers to understand: "One problem with algebra is that there is often a disconnect between the meaning/understanding and the computations/doing. We try our darndest to bridge the gap between the two, but I find that [...]
|
2017-12-12 13:57:55
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8264606595039368, "perplexity": 664.1819834057521}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948517181.32/warc/CC-MAIN-20171212134318-20171212154318-00381.warc.gz"}
|
http://devmaster.net/forums/topic/10012-hypot-like-function-but-for-3-components/
|
# hypot()-like function but for 3 components
8 replies to this topic
### #1kwhatmough
New Member
• Members
• 5 posts
Posted 01 August 2008 - 09:53 PM
I need a fast way to compute sqrt(x*x + y*y + z*z) that avoids intermediate overflow and underflow.
I know that most runtime libraries (C,C++,Java) provide a 2d hypot() function that does this for 2 variables, _AND_ I realize that I can achieve my result using 2 calls to it as follows:
hypot( hypot(x, y), z)
but that seems uneccessarily slow. Is there a faster way that still avoids intermediate overflow and underflow? I am _NOT_ looking for a fast sqrt() function here. I am well aware of the _NUMEROUS_ threads on fast sqrt() and fast reciprocal sqrt(). What I am looking for is a way to scale the 3 inputs before calling <any fast sqrt method here>, and then to un-scale afterwards to obtain the accurate result.
So I started looking at how to implement a 2d hypot() function:
Define hypot(x, y) as sqrt(x*x + y*y) but that avoids intermediate overflow and underflow:
Using the fact that sqrt(x^2+y^2) == k*sqrt((x/k)^2+(y/k)^2) for k>0
you can implement a fast hypot() by choosing k to be a power of 2 close to sqrt(x*y). Since you choose k to be a power of 2, computing k, x/k, and y/k can be done using bit twiddling and knowledge of IEEE floating point representation. I can go into that further, but my problem is how to generalize this for 3 components x,y,z without simply nesting the call as hypot(hypot(x,y),z). Any suggestions?
### #2Reedbeta
DevMaster Staff
• 5344 posts
• LocationSanta Clara, CA
Posted 01 August 2008 - 09:58 PM
Since sqrt(x*y) is the geometric mean of x and y, I'm guessing for 3D you want the geometric mean of x, y, and z, which would be the cube root of x*y*z. But I suspect that's not as easy to estimate by bit-twiddling as sqrt(x*y) is...
reedbeta.com - developer blog, OpenGL demos, and other projects
### #3kwhatmough
New Member
• Members
• 5 posts
Posted 01 August 2008 - 10:07 PM
Thanks for the reply; I see that my reference to hypot() causes confusion but actually all I want is the magnitude of a 3d vector that avoids overflow and underflow that one would normally get in the interim calculation of
sqrt(x*x + y*y + z*z).
{Square, not cube root in my case}.
### #4kwhatmough
New Member
• Members
• 5 posts
Posted 01 August 2008 - 10:08 PM
Sorry now I understand. You're saying cubeRoot(xyz) for the moderating factor. I see.
### #5Nils Pipenbrinck
Senior Member
• Members
• 597 posts
Posted 02 August 2008 - 09:05 AM
Hi.
Take a look at this paper:
http://torus.untergr...gorean_sums.pdf
It shows how to calculate the vector length in a non obvious but mathematical much more stable way. It's not *that* fast though.
Don't bookmark the pdf please. I'll take it offline in a couple of days..
Cheers,
Nils
My music: http://myspace.com/planetarchh <-- my music
My stuff: torus.untergrund.net <-- some diy electronic stuff and more.
### #6Blaxill
Member
• Members
• 66 posts
Posted 08 August 2008 - 07:55 PM
Can you not extend your original idea?
sqrt(x^2+y^2+z^2) == k*sqrt((x/k)^2+(y/k)^2+(z/k)^2) for k>0
Assuming you want the geometric mean (as suggested by Reedbeta) you want k to approximate (xyz)^(1/3) as a power of 2 so
2^n = (xyz)^(1/3)
2^3n = xyz
n = (ln xyz) / 3(ln2)
Urgh maybe its not the way to go, but only one ln needs to be calculated. I don't know of any useful approximations to ln x, but it would only have to be loosely accurate
After that, x/k, y/k and z/k can be done again with bit shifts.
### #7kwhatmough
New Member
• Members
• 5 posts
Posted 09 August 2008 - 12:31 AM
(Thanks to Nils for the fascinating paper).
Yes I believe that is the idea. If you assume IEEE format for the floats you can do some bit-fiddling to approximate a log_2 function by extracting the exponent bits.
Something like this might work for positive x:
floor(log_2(float x)) is something like: (floatToIntBits(x) >> 23) - 127
where in C you might implement floatToIntBits(x) as: *(int*)&x
or (preferably?) use an anonymous union:
union {
int asInt;
float asFloat;
} u;
u.asFloat = x;
return (u.asInt >> 23) - 127;
### #8Blaxill
Member
• Members
• 66 posts
Posted 09 August 2008 - 01:52 AM
So I think you want this. The 2 float/integer conversions are probably the slowest bits.
float get_k(const float &x, const float &y, const float &z)
{
union
{
float mult;
int bits;
};
mult = x*y*z;
const int exp = (((*reinterpret_cast<int*>(&mult)) >> 23) & 255) - 128;
bits = (bits & ~(255 << 23)) + 127 << 23;
mult = (((-1.0f/3.0f) * mult + 2.0f) * mult - 2.0f/3.0f + static_cast<float>(exp))* 0.69314718f; // mult = ln(xyz)
mult *= 0.48089834; // mult = (ln xyz) / 3(ln2) = n
// k = 2^n = 2^mult
return (1<<static_cast<int>(mult));
}
### #9rranft
New Member
• Members
• 7 posts
Posted 09 August 2008 - 06:02 PM
for(long i = 0; i < 500000; i++){
ax = (float)rand() + 1 / 100.0f;
ay = (float)rand() + 1 / 100.0f;
az = (float)rand() + 1 / 100.0f;
bx = (float)rand() + 1 / 100.0f;
by = (float)rand() + 1 / 100.0f;
bz = (float)rand() + 1 / 100.0f;
dx = ax - bx;
dy = ay - by;
dz = az - bz;
dist = sqrt(dx + dy + dz);
}
takes 0.07 seconds on a core2 duo 2.14ghz - of course, that's all I was doing....
#### 1 user(s) are reading this topic
0 members, 1 guests, 0 anonymous users
|
2013-06-20 05:32:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6010487079620361, "perplexity": 5863.616783582697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710313659/warc/CC-MAIN-20130516131833-00017-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/e-m-question-maxwells-equation-in-matter-reduce-to-maxwells-equation-in-a-vacuum.988656/
|
# E&M Question -- Maxwell's Equation in matter reduce to Maxwell's equation in a vacuum....
ParticleGinger6
New user has been reminded to show their work on schoolwork questions
Homework Statement:
Maxwell's Equation in matter reduces to Maxwell's equation in vacuum if polarization and magnetization are zero?
Relevant Equations:
They can be found in the attached photo
I do not know where to start.
#### Attachments
• Chegg 5.6.20.JPG
33.5 KB · Views: 117
Homework Helper
Gold Member
How are polarization (i.e. the Polarization Vector Field) and magnetization (i.e. Magnetization Vector Field)\\ related to the vector fields $\vec D$ , $\vec E$, $\vec B$, and $\vec H$?
ParticleGinger6
@robphy so i found the equations D = epsilon*E and H = (1/mu)*H where epsilon = epsilon(not)*(1+Xe) and mu = mu(not)*(1+Xe). I think if I use that convert D into terms of P which would look like P = D - epsilon(not)*E and H = B/(mu(not)*(1+Xe)). From there you can get Magnetization from M = Xm*H
Am I on the right track
Yes... but use $\epsilon_0$ and $\mu_0$ instead of the $\chi$s.
|
2023-02-05 04:12:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8755452036857605, "perplexity": 1954.3245894134275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500215.91/warc/CC-MAIN-20230205032040-20230205062040-00212.warc.gz"}
|
https://codeforces.com/blog/entry/82284
|
### vovuh's blog
By vovuh, history, 3 weeks ago,
I'm so thankful to all testers, especially to Gassa and Rox for their invaluable help!
1409A - Yet Another Two Integers Problem
Idea: vovuh
Tutorial
Solution
1409B - Minimum Product
Idea: vovuh
Tutorial
Solution
1409C - Yet Another Array Restoration
Idea: vovuh
Tutorial
Solution (Gassa)
Solution (vovuh)
Solution (Rox)
1409D - Decrease the Sum of Digits
Idea: MikeMirzayanov
Tutorial
Solution
1409E - Two Platforms
Idea: vovuh
Tutorial
Solution
1409F - Subsequences of Length Two
Idea: vovuh
Tutorial
Solution
Solution (Gassa, greedy, O(n^4))
• +107
» 3 weeks ago, # | +11 fastest tutorial <3
• » » 3 weeks ago, # ^ | -6 That's @vovuh for you.
» 3 weeks ago, # | ← Rev. 2 → 0 Runtime Error C++ Exit code is -1073741571Can anyone explain what is going wrong in my code that is giving me runtime error?Any help would be appreciatedThanks :)
• » » 3 weeks ago, # ^ | +1 r[r.length() — 1]++; I guess this line is what causing error when r="" that is length = 0, r[-1] will give segmentation fault
• » » » 3 weeks ago, # ^ | 0 Thank you very much!!!
» 3 weeks ago, # | 0 nice problemset
» 3 weeks ago, # | +19 My screencast + live commentary, enjoy watching :)https://youtu.be/nloGFTpdTJo
• » » 3 weeks ago, # ^ | 0 +1 thanks
• » » » » 3 weeks ago, # ^ | 0 re_start Test Case2828374536691952768 75348970515787375312 29
• » » » » » 3 weeks ago, # ^ | 0 Thanks @mehdi for your suggestion, I got AC
• » » 3 weeks ago, # ^ | 0 nice bro
» 3 weeks ago, # | +11 Wonderful problems, tutorials and solutions!Thanks to writers!
» 3 weeks ago, # | ← Rev. 2 → -67 .
• » » 3 weeks ago, # ^ | +30 Don't spam the editorial blogs with these dead memes dude.
» 3 weeks ago, # | 0 Why does pow(10,18) return 10^18-1
• » » 3 weeks ago, # ^ | +2 pow() doesn't always give the right answer, because it was intended to work with doubles, not integers
• » » 3 weeks ago, # ^ | 0 if u want to use pow for high powers u can typecast with long long int , whichs works quite effectively.
• » » » 3 weeks ago, # ^ | 0 I do not know what that is could you show me an example?
• » » » » 3 weeks ago, # ^ | +1 I think he means if you want to do pow(10,x) and not have it mess up, do (long long)pow(10,x) and it will cast it to a long long and will give the right answer.
• » » 3 weeks ago, # ^ | 0 Yep the same thing happened to me so instead I made an array to divide the digits according to it's powers.
» 3 weeks ago, # | +7 Why B solution is optimal? I did it, but didn't understand why it worked
• » » 3 weeks ago, # ^ | ← Rev. 2 → +4 Suppose you have 2 numbers a and b and you can decrement them by 1, 2 times.Then you have 3 options - a-1,b-1 - a-2,b - a,b-2In first case product is ab+1-a-b. In second case product is ab-2b. In third case product is ab-2a.Now let b>aThen ab+1-a-b > ab-a-b > ab-2b (because a
• » » 3 weeks ago, # ^ | +6 If we decrease a 2 times, then you decreased the product by 2*b. If we decrease b 2 times, then we decreased the product by 2*a. If we decrease a once and b once, then we decreased the product by a+b.If a>b, 2*a>=a+b>=2*b, whereas, if b>a, 2*b>=a+b>=2*aI guess you can see why decreasing one as much as possible is optimal.
• » » » 3 weeks ago, # ^ | +1 thanks_wicardobeth_ now i understand why it is optimal;
• » » 3 weeks ago, # ^ | ← Rev. 5 → 0 Let p = a + b, f = f(a, b) = ab. b = p - a f(a, b) = ab = a(p - a) = ap - a^2 df/da = p - 2a Hence, the derivative of f equals 0 in case p = 2a. Then: a = p/2 => b = a = p/2 (considering p is even) So f reaches its maximum value when a = b (see [interior extremum theorem](https://en.wikipedia.org/wiki/Fermat%27s_theorem_(stationary_points)) for more details).For example, if you have p = 8 then in order to maximize f you have to take a = b = 4, and in order to minimize f you have to take a = 1, b = p - a = 7. Hope this helps!
• » » 3 weeks ago, # ^ | 0 Let the two numbers be a and b. Without loss of generality, let b > a. (The case when both are equal is trivial.) Then we can express b as b = a + k, where k is some positive integer. Consider that we have to make just one move. So we can have either (a, a+k-1) or (a-1, a+k) after one move. The product of two numbers will then be (a^2 + a*k - a) and (a^2 + a*k - a - k) respectively. Clearly, lowering the smaller number gives a smaller product. You can easily extend this argument to n moves, since as we lower the smaller number, it stays smaller and hence we go on reducing it until possible.
• » » » 2 weeks ago, # ^ | 0 thanks
» 3 weeks ago, # | 0 Can someone tell me why my submission for E doesn't work? It's coordinate compression + greedy. https://codeforces.com/contest/1409/submission/91881485
• » » 3 weeks ago, # ^ | +2 Dont know why you used coordinate compression but greedy wont work for this. Greedily choosing best for one plank might incur loss in overall max answer.
• » » » 3 weeks ago, # ^ | 0 Could you give me an example please? I still don't get why greedy won't work. I used coordinate compression beacause the values of the xs and ys could range from 1 to 10^9.
• » » » » 3 weeks ago, # ^ | 0 I got why. Thanks anyways!
• » » » » » 3 weeks ago, # ^ | 0 Can you explain why it wouldn't work?? I too had a similar approach.
• » » » » » » 3 weeks ago, # ^ | 0 Take this case: 6 1 1 2 2 3 3 4 1 1 2 1 2 1 Answer should be 6, but greedily it is 5.
» 3 weeks ago, # | +18 Well, with a little bit of maths, C can even be solved in O(n)
• » » 3 weeks ago, # ^ | 0 Can you please explain how?
• » » » 3 weeks ago, # ^ | 0 use arithmetic progression and give a try
» 3 weeks ago, # | 0 Can someone please tell me what's wrong with My Code Problem D
• » » 3 weeks ago, # ^ | ← Rev. 8 → 0 dp solution is too slow for this problem. See the constraints, you need to do it faster.
• » » » 3 weeks ago, # ^ | 0 this Digit dp solution has same time complexity and it got Accepted : https://codeforces.com/contest/1409/submission/91838952
» 3 weeks ago, # | +4 E can also be solved with a simple DP. First, sort the $x$ array. Then for each $i$ from $0$ up to $n - 1$: Find first such $j$ that either $j = n$ or $x_i + k < x_j$. Notice that $j$ never decreases so there is no additional complexity here. Let $dp^0_i = max(dp^0_i, dp^0_{i - 1})$ and $dp^1_i = max(dp^1_i, dp^1_{i - 1})$ for $i \ne 0$ and $dp^0_i = dp^1_i = 0$ for $i = 0$. Let $dp^0_j = max(dp^0_j, j - i)$ and $dp^1_j = max(dp^1_j, dp^0_i + j - i)$. $dp^0_i$ contains best answer up to $x = x_i$ with one platform, and $dp^1_i$ with two platforms. If you are lazy (like me), just take the maximum of all values. Note that $j$ can be $n$ in the above loop, so the DP array will have $n + 1$ items.
» 3 weeks ago, # | +1 Very nice problems!
» 3 weeks ago, # | +11 Thanks vovuh! Nice contest to learn the basics for noobs like me
» 3 weeks ago, # | 0 Amazing problems, enjoyed solving them!
» 3 weeks ago, # | ← Rev. 2 → +4 How to solve E if we have K platforms? $O(KNlogN)$ is fine but better than this??
• » » 3 weeks ago, # ^ | +27 $dp_{i, j}$ — we at the $x=i$ and placed $j$ segments. $dp_{i, j} = max(dp_{i - 1, j}, dp_{i - k - 1, j - 1} + cnt(i - k, i)$ where $cnt(l, r)$ is the number of points between $x=l$ and $x=r$. Can be improved by coordinates compression and two pointers (if the outer loop iterates over $k$ and inner loop iterates over the positions). Total complexity is $O(nk)$.
• » » » 3 weeks ago, # ^ | 0 Nice solution, so this works even if we have $q$ platforms, and for each of them, we are given $cost_i$ and $length_i$, and you have a total budget $B$. It's some knapsack then?
• » » » » 3 weeks ago, # ^ | +3 Yes, it's a knapsack, but you need an additional state in your dynamic programming to solve such a problem.
• » » 3 weeks ago, # ^ | ← Rev. 7 → 0 You can binary search to find the maximum index can reach from each index i and store data in two arrays X, Yfirst of all you need to sort x points "y points are useless"by using binary search you can search for ind "the maximum index you can reach from index i" and store in X and Y the number of nodes "which is ind-i+1"then you can maximize the elements of X from the back so you have the maximum number of nodes you can take starting from node i to the endthe answer will be max(Y[i]+X[i+Y[i]]) "make sure (i+Y[i]) doesn't go out of the array"O(nLog(n))https://codeforces.com/contest/1409/submission/91933328
• » » » 3 weeks ago, # ^ | ← Rev. 2 → 0 You should re-read the problem
• » » » » 3 weeks ago, # ^ | 0 It was misunderstand.Sorry.
• » » 2 weeks ago, # ^ | +1 I believe you can solve this version in $O(NlogN)$ using Alien's trick. Here's a good tutorial on it: http://serbanology.com/show_article.php?art=The%20Trick%20From%20Aliens
• » » » 2 weeks ago, # ^ | 0 Wow! This is what I was looking for $O(NlogN)$ solution. Thanks a lot!
» 3 weeks ago, # | 0 Yeah, F can be solved in O(n^3).
» 3 weeks ago, # | 0 Hey could someone please take a look. I did the same thing as the editorial for Problem E but still getting WA.[Submission here](https://codeforces.com/contest/1409/submission/91884363)any help is appreciated, thanks!
• » » 3 weeks ago, # ^ | +1 I think pts2 = upper_bound(all(xc),xc[next_ind]+k) — (xc.begin()+next_ind); should be replaced with a suffix MAX array
• » » » 3 weeks ago, # ^ | 0 I think both of them should yield the same result, is that incorrect? Anyway will try that!Thanks for your help!
» 3 weeks ago, # | ← Rev. 2 → 0 can someone help me the tutorial of Problem B ? .. I didnt get it ... or any better soln will be appreciated !
• » » 3 weeks ago, # ^ | 0 Well the tutorial says that if we decide decrease a by 1 in the step then it's better to continue to decrease a until we can and then start with b. Similarly if we start with b then keep on decreasing it until we can then start with a. consider both of the above cases and print the smallest product. Hope this helps:)
» 3 weeks ago, # | 0 Really Thank you so much secondthread for your live video editorials. I have upsolved F problem by watching your solution. Live Video Editorital : https://www.youtube.com/watch?v=qgJ4KiQR4QY&t=2635s
» 3 weeks ago, # | ← Rev. 2 → -11 Problem D — Can be solved in constant time (O(18)) ApproachFirst, find the sum of the digits as summMove from the first place digit (ones) to the higher one, check if summ — (sum from ones digit to current position) + 1 <= s(needed sum)if true: the answer n — (number if you make digit after ith 9 and add one to the result) ... Exampleif n = 500 and s = 4 all 3 digits should be 0 to make them zero the number should be increamented until 999 and adding +1 so 500 --> 1000 so the answer will be 1000-500 = 500 ... Python Implementationdef solve(n, s): n = str(n)[::-1] summ = 0 for i in n: summ += int(i) if summ <= s: return 0 else: summ1 = 0 for i in range(len(n)): summ1 += int(n[i]) if summ - summ1 + 1<= s: return int(n[::-1][:len(n)-i-1] + ("9"*(i+1))) + 1 - int(n[::-1]) for _ in range(int(input())): # Multicase n, s = list(map(int, input().split())) print(solve(n, s))
• » » 3 weeks ago, # ^ | +27 This is the same time complexity as the intended solution.
• » » » 3 weeks ago, # ^ | -9 I believe the editorial solution is actually O(18*18) as it goes through the array once to calculate the prefix/suffix and it also goes through the array to calculate the sum. To reduce the extra log(N) factor, an int could be used to keep track of the sum and just update it accordingly in between iterations. However, in the grand scheme of things, it doesn't really matter.
• » » » » 3 weeks ago, # ^ | ← Rev. 2 → -10 Yeah! I did the same solution mentioned above. It was really an easy problem for an D statement I guess.
• » » 3 weeks ago, # ^ | 0 It can also be solved with binary search.
• » » » 3 weeks ago, # ^ | ← Rev. 3 → 0 Can you give me the link for the binary search solution. I thought in that direction during the contest but couldn't come up with the solution??Update: I saw your submission.
• » » 3 weeks ago, # ^ | 0 Please help me I am getting wrong ans in D solution:https://codeforces.com/contest/1409/submission/91972053Instead of making 9 from the digit where we get sum greater than or equal to sum i am increasing the previos value by one and making all zero from that digit .
• » » » 3 weeks ago, # ^ | 0 After hours of debugging, I got it.Try this test case:11992 118 can be added to make it 2000In this test case, the previous value is 9 (the first from the left), so when u add 1 to 9 it becomes 10. Then the array will be [1, 10, 0, 0]. when u concatenate these values to string the number will be 11000.Then the answer will be 11000-1992 = 9008 which is incorrect. 8 increments are enough.Hope u find it helpful.
• » » » » 3 weeks ago, # ^ | 0 Thank you so much brother
» 3 weeks ago, # | ← Rev. 2 → 0 1409E - Two Platforms I do not get the point why the y[] are in the statement. If it is obvious that we do not need them, then they are obviously unnecessary. If it is not so obvious then it is a misleading statement. So, why?
• » » 3 weeks ago, # ^ | +19 spookywooky I do not get the point why this comment is under the editorial. If it is obvious that we do not need it, then it is obviously unnecessary. If it is not so obvious then it is a misleading comment. So, why?
• » » » 3 weeks ago, # ^ | +4 I mean, I don't think this makes sense. Clearly spookywooky thought we do need the comment, so your clever response isn't actually valid.
• » » » » 3 weeks ago, # ^ | -8 I thought this legend needs $y$-coordinates as well as he thought we need this comment.Partially, your answer below is right, this part teaches to notice some probably useless things in the problem and to make some transitions from one problem to another. And yeah, this is essentially not the worst way to allow repeated $x$-coordinates.
• » » » » » 3 weeks ago, # ^ | -19 https://codeforces.com/contest/1409/submission/91894057can somebody please help me with this problem why it is giving the wrong answer? any help would be highly appreciatedThanks
• » » 3 weeks ago, # ^ | +14 The y coordinates are of no use I agree but they are necessary as a part the author to chose to describe the original problem, sometimes some variables are just to help understand the actual statement clearly , but here these y coordinates are not misleading!
• » » » 3 weeks ago, # ^ | +1 I do not think they are necessary in any way. The statement would be much simpler if formulated in 1 dimensional space. "Here are some x[], find two segments of size k covering as most possible of those points on x-axis, how much?"
• » » » » 3 weeks ago, # ^ | +17 Unfortunately problems are not all supposed to be simple. You have to decipher for yourself what is important and what is not. Just because it was obvious to some does not mean that others don't make that connection, at least immediately.
• » » » » 3 weeks ago, # ^ | 0 This is not atcoder.
• » » » 3 weeks ago, # ^ | +3 I don't know whether they are misleading or not, but they are definitely unnecessary.
• » » » » 3 weeks ago, # ^ | +4 Unnecessary from readers point of view, think as a setter and you have this question in your mind but you have to plot a frame to fit it into an understandable statement. Now since everyone have different ways to describe the problem thus a statement in order to describe the original point of you may go in various directions. If you were a setter may be you would describe it in an easier fashion. But everyone cannot have same thoughts. Making a perfect problem statement that match thought of every participant is a tough kind of job. When we will jump into it we will realize the importance of clearity!
• » » 3 weeks ago, # ^ | +37 One benefit of $y$-coordinates is that it makes it clear and unambiguous that we can have repeated $x$-coordinates, as well as overlapping platforms. However, the statement said platforms could overlap anyway, so yeah I'm not sure.Also, maybe it was intentional that they're unnecessary because the 2D to 1D transformation is something worth testing/teaching, especially in Division 3. In general, part of the difficulty of Codeforces problems does stem from transforming the problem into an equivalent but simpler one (in a way, this is all problem-solving).
» 3 weeks ago, # | ← Rev. 2 → 0 C can be solved in O(n) (approximately) but calculating the factors of the difference in x and y (which are basically possible values for final difference) and then selecting the smallest value that satisfies the conditions. After selecting the optimal smallest difference, the array can be easily constructed.Solution : 91852634
• » » 3 weeks ago, # ^ | 0 Yes, I just traverse from 1 to 100 and check two conditions. no need for factors. 91844711
• » » » 3 weeks ago, # ^ | 0 Yes, that will definitely work for the given constraints. Cool!I tried to come up with a solution that will work for bigger constraints without any modifications. Also, I couldn't think of such a straightforward and short approach (considering the small constraints) during the contest.
• » » » » 3 weeks ago, # ^ | 0 I think this is one of the most intuitive solution in O(n). 91880180
» 3 weeks ago, # | ← Rev. 2 → 0 For Problem D: Can anyone tell me what's wrong in the following approach: For example 217871987498122 10, traverse from starting digits, 2,1,7,8 and add them, when sum become greater than or equal to 10, that is on 7. increment one digit before 7 from 1 to 2 and rest all digits after it to 0, the number will become 220000000000000. if it violates on first digit then make 1000... this number. please help?
• » » 3 weeks ago, # ^ | ← Rev. 2 → 0 you need to stop when the sum becomes greater than or equal to S. In this test case, you are actually already doing that.Otherwise, the approach looks good to me. I've also implemented an almost similar approach.
• » » » 3 weeks ago, # ^ | 0 i had stopped, can you tell me, what's wrong in the code? 91871968
• » » » » 3 weeks ago, # ^ | ← Rev. 3 → +10 I tried stress testing your submission against mine on random inputs and found that your code fails on some inputs. ExampleExample input: 1785120922 34Your Output: -1783335802Answer: 78Upon observing, it fails on all such inputs where the digit at which you stop is 9, so your code tries to increment it, but it fails, and ends up dropping the remaining digits altogether.
• » » » 3 weeks ago, # ^ | 0 I have the same kind of approach and with getting the position when the sum of digits from left when it surpasses s.next, I took three cases correspondingly possible to update the number, The solution worked fine in test case 1 & 2, but in 3 it is saying one of the output exceeded int64 format.Can you please help in telling what went wrong? please help? thanks submission:91894618(https://codeforces.com/contest/1409/submission/91894618)
• » » » » 3 weeks ago, # ^ | ← Rev. 3 → 0 Your code fails because of two very minor issues.Firstly you are not checking if the sum of digits is already less than S.By Changing if (sum_digits(n)==s) to if (sum_digits(n)<=s)Also, the inbuilt pow function can be inaccurate at large values since it uses floating-point values. By Changing nfinal+= b[k]*pow(10,18-k); to nfinal+= b[k]*( (ll)(pow(10,18-k) + 0.5));After making these changes, I submitted your code and it got accepted. --> 91898574
• » » » » » 3 weeks ago, # ^ | ← Rev. 2 → 0 Thank you for the help, I had been trying it for many times.:)
» 3 weeks ago, # | ← Rev. 2 → 0 O(n) solution for C: my submission: click here
» 3 weeks ago, # | +4 My solution to C was as follows. I "stuffed" as many elements as I could betweeen x and y. Then I calculated this minimum difference and added some elements less than x. I would add as many as I could until I was done or these values became negative. In the case they became negative, I would add the remaining values above y. I believe this should be O(N)
» 3 weeks ago, # | 0 https://codeforces.com/contest/1409/submission/91894057can somebody please help me with this problem why it is giving the wrong answer? any help would be highly appreciatedThanks
• » » 3 weeks ago, # ^ | 0 In the loop where you work with Sum and carry stuff, you have put the check and increment condition w.r.t j where it should be in respect to i.As a result i is not changing therefore affecting your answer.
• » » » 3 weeks ago, # ^ | 0 Brother, You are too good. thanks for showing your generosity towards me. Got AC
• » » » » 3 weeks ago, # ^ | +1 Always happy to help a fellow coder!!
» 3 weeks ago, # | 0 Can somebody explain me the greedy approach in problem F please....
• » » 3 weeks ago, # ^ | +29 Let the string $t$ be be (begin and end). Sure, there is also a case where the letters of $t$ are equal, but it can be solved separately, or the solution could be carefully implemented to account for that.Now, suppose that we are going to make exactly $k$ moves. We have two kinds of useful moves: put b somewhere and put e somewhere. Let the number of b-moves be $b$, and the number of e-moves be $e = k - b$.Of all the ways to put exactly $b$ letters b, the most impactful way is to put them into $b$ leftmost places in the string... unless we change some e into b in the process. Let the number of e -> b moves we are going to make be $x$ ($0 \le x \le b$). So, with our b-moves, we pick $x$ leftmost letters e and change them into b, then pick $b - x$ leftmost letters which are not b and not e and change them into b.Similarly, when we are going to put exactly $e$ letters e, the most impactful way to do it is to put them into $e$ rightmost places in the string... unless we change some b into e. Let the number of b -> e moves we are going to make be $y$ ($0 \le y \le e$). So, with our e-moves, we change $y$ leftmost letters b into e, and also change $e - y$ leftmost letters which are not b and not e into e.What remains is to look over all possible tuples of $(b, e, x, y)$. Remember that $e = k - b$, so there are only $O (n^3)$ such tuples. For each of them, simulate the above greedy process (in linear time) and then find the answer (in linear time too). The total complexity is $O (n^4)$. A possible speedup is to precompute the required stuff for prefixes and suffixes, bringing it down to $O (n^3)$ or $O (n^2)$ (didn't actually try).Why we can assume $e = k - b$? If we actually do less than $k$ moves in the optimal solution, it means all the letters become b or e. So we can cheat: for the optimal $e$, we make unnecessary moves from the left, and then overwrite their results by exactly $e$ moves from the right. All in all, this greedy solution requires more observations than the dynamic programming solution, but requires less proficiency to come up with it.
» 3 weeks ago, # | ← Rev. 2 → 0 I tried solving problem D with another approach, It passed test case 1 and 2, but in test case 3 it's giving an error for an output which is not in format int64;Can anyone explain what is going wrong in my code that is giving me this error?Any help would be appreciated Thanks :)
• » » 3 weeks ago, # ^ | ← Rev. 2 → 0 Got the issue fixed.91939751
» 3 weeks ago, # | ← Rev. 2 → +1 1409C - Yet Another Array Restoration can be solved greedily in $O(n)$ instead of $O(n+\sqrt{y})$. Hint Find the largest $k » 3 weeks ago, # | +1 In 1409D - Decrease the Sum of Digits, if we maintain the digit sum, time complexity will be reduced to$O(\log n)$. Code (Python 3)import sys def read_int(): return int(sys.stdin.readline()) def read_ints(): return map(int, sys.stdin.readline().split(' ')) t = read_int() for case_num in range(t): n, s = read_ints() a = [0] + [int(i) for i in str(n)] ds = sum(a) cost = 0 idx = len(a) - 1 radix = 1 while ds > s: if a[idx] > 0: cost += (10 - a[idx]) * radix ds -= a[idx] a[idx] = 0 ds += 1 a[idx - 1] += 1 i = idx - 1 while a[i] >= 10: a[i - 1] += 1 a[i] -= 10 ds -= 9 i -= 1 radix *= 10 idx -= 1 print(cost) • » » 3 weeks ago, # ^ | 0 couldn't you add radix *= 10 idx -= 1 to inside the while a[i] > 10 loops since you wouldn't need to consider those indices after you subtract them from them as they should equal 0? » 3 weeks ago, # | 0 Hi guys, I'm a python coder and I was wondering if someone could explain to me what's slowing my code down?It should work quite fast since each loop can only go 18 times, and it seems to work fast when I tested it. However, I keep running out of time on test 3. » 3 weeks ago, # | 0 Chinese tutorials updated. » 3 weeks ago, # | 0 Can someone tell me why my code is giving the wrong answer, please... Submission: https://codeforces.com/contest/1409/submission/91873431 Thank you.. big help • » » 3 weeks ago, # ^ | 0 For$10, 1$, your code if(sum == s && i!=v.size()-1) is triggered at$i=0$, so your shortcut to output "0" if(i == v.size()) fails. • » » » 3 weeks ago, # ^ | 0 got it.. thanks • » » » 3 weeks ago, # ^ | 0 still its giving wrong answer... after the correction https://codeforces.com/contest/1409/submission/91908966 • » » » 3 weeks ago, # ^ | 0 sorry my bad... i did a mistake » 3 weeks ago, # | +5 Similar problem to D.https://codeforces.com/problemset/problem/1143/B » 3 weeks ago, # | 0 I'm trying to understand the tutorial for Two Platforms, but I don't get what they mean by suffix maximum array and prefix maximum array. If anyone has the code for that in Java/Python that would help too. I'm trying to learn how to do dynamic programming but I still don't really get it. » 3 weeks ago, # | 0 Video Tutorial C:https://www.youtube.com/watch?v=FMu64hfwo3AVideo Tutorial B:https://www.youtube.com/watch?v=QJ0bHUT7qOc » 3 weeks ago, # | 0 My doubt is pretty naive. Still I want to know why wrapper class Integer/ Long array sorting is faster than primitive data types like int / long ? I know that primitive data type use quicksort internally in Arrays.sort(). Also, when to use primitive for sorting and when to use wrapper for the same.Also look at both of my submissions for E in this contest. Submission 1 getting TLE because of using Arrays.sort() in long[] array. My submission 91904739 Submission 2 passing system tests because of using Arrays.sort() in Long[] array. My submission: 91904938 • » » 3 weeks ago, # ^ | ← Rev. 2 → +1 In a nutshell, you can be hacked with an adversarial case that makes quicksort degrade to quadratic. Object sort uses merge sort instead which doesn't have this issue, but wrapper objects are slow and inconvenient.I think the best solution is to randomly shuffle before sorting (I have a prewritten safeSort method, you can see it here: 91830757). • » » » 3 weeks ago, # ^ | 0 Thanks for your reply. I got what the actual problem is. By the way, shuffling the array and then sorting it may (very rare) also cause TLE if the array after shuffling comes out to be the worst case for quicksort. » 3 weeks ago, # | 0 im new here, somebody knows why i haven't got rated, I got accepted in problem A but didn't get any rating yet, thanks, sorry if I miss anything, thankyou • » » 3 weeks ago, # ^ | 0 Wait for one more hour. » 3 weeks ago, # | +1 Then let's build suffix maximum array on r and prefix maximum array on l. For l, just iterate over all i from 2 to n and do li:=max(li,li−1). For r, just iterate over all i from n−1 to 1 and do ri:=max(ri,ri+1).In problem E — the editorialist tells to build suffix maximum array on r and prefix maximum array on l. But Why?? I understand the use of the prefix array and suffix array. Why to find the maximum prefix array and suffix array? • » » 3 weeks ago, # ^ | ← Rev. 2 → 0 Let's assume the answer is segL and segR (which do not overlap of course). Assume that there is a point midp which lies between two segments but do not overlap. We know segL stands on the left side of the midp and segR stands on the right side of the midp. Note that segR is the best platform possible on the right side of the midp otherwise the answer would contain the better platform (which means segR is not a valid solution). Greedly you can see the score of the best platform that is on the right side of the midp can be found using maximum suffix array. Also the the score of the longest platform that is on the left side of the midp can be found using maximum prefix array. This operation takes O(1) time if you have the prefix and suffix arrays. Now, because we don't know the midp, our goal is to iterate over all possible midps in O(n) time and store the best possible score. Please don't hesitate asking any questions I'm trying my best to make things clearer. • » » » 3 weeks ago, # ^ | +1 Thanks for your reply. I understood it. » 3 weeks ago, # | 0 Can someone share some thoughts about how$F$can be solved when length of$t$is arbitrary. During contest i didn't read the part that length of$t$is fixed and is 2. I kept trying to solve it for t of arbitrary length and when after the contest i read that length of t is fixed i realised it was trivial (: for length t = 2. » 3 weeks ago, # | 0 @vovuh I did array restoration in o(n2) and it is simple https://ide.codingblocks.com/s/328390 • » » 3 weeks ago, # ^ | 0 I think 91880180 is more simple. O(n) • » » » 3 weeks ago, # ^ | 0 can u explain • » » » » 3 weeks ago, # ^ | 0 civil_eng0003 This is more intuitive solution https://www.youtube.com/watch?v=j6r3NYWOrjs » 3 weeks ago, # | 0 Question about problem B. How can we proof that we need to decrease firstly one number and only then second one. Why greedy? Why we cannot decrease for example first — second — first -second -second — first ...? • » » 3 weeks ago, # ^ | 0 That is because let's say a>b, then consider two consecutive operation. If you decrease the value of b in two consecutive operations, the product value will change by 2*a( a*(b-2)=a*b-2*a --> difference of 2*a ). If you first decrease a and then b, then the product will be (a-1)*(b-1)=a*b-a-b+1. So the decrease in value is a+b-1. Since, a>b then difference is clearly greater in first case.(a+b-1<2*a). • » » » 3 weeks ago, # ^ | 0 Thanks! • » » 3 weeks ago, # ^ | ← Rev. 3 → 0 That's a shame that the editorial is missing the proof.We claim that in the optimal case all the operations have been applied to a single integer unless it has reached its minimum.Proof by contradiction. Assume that both integers have decreased and none has reached its minimum. Let$a \leq b$. Then if one of the operations is performed on$a$instead of$b$(so that the total number of operations does not increase) the product$ab$will decrease. Indeed$(a-1)(b+1)=ab + a - b - 1 < ab\$.
• » » » 3 weeks ago, # ^ | 0 Thank you very-very much!!!
» 3 weeks ago, # | 0 A slightly different and a bit easier to understand (maybe?) implementation of E. 91876338. :)
» 3 weeks ago, # | 0 Regarding problem E. Two Platforms, I used TreeMap to solve. Them time complexity is also O(N log N), but it is TLE. I got stuck on investigating the cause. Could anyone help me to have a look? thanks.https://codeforces.com/contest/1409/submission/91911507
» 3 weeks ago, # | 0 Problem B :Need to press enter for n to get the last test case(Otherwise it just waits for input) for which im getting tle. Code https://codeforces.com/contest/1409/submission/91910842 . Can some1 point out the problem please?
» 3 weeks ago, # | 0 Really need help trying to figure what went wrong on test case 3. Was unable to figure out throughout the contest.
» 3 weeks ago, # | 0 Solution Videos : problem A : https://www.youtube.com/watch?v=SC_x74jJ2Doproblem B : https://www.youtube.com/watch?v=euDkPcB7bdcproblem C : https://www.youtube.com/watch?v=7fYsv4gFA5QProblem D : https://www.youtube.com/watch?v=sduhfv70VJ0
» 3 weeks ago, # | +4 Why this greedy approache for F is wrong?Can you help me?Detail:Suppose we change i positions into t[0],and i2 into t[1].Let's greedy:The i positions which are changed into t[0] is as small as possible,and the i2 positions which are changed into t[1] is as big as possible. (If s[i]=t[0]/t[1] ,it doesn't need to be changed).Code: Spoiler/* {By GWj */ #pragma GCC optimize(2) #include #define rb(a,b,c) for(int a=b;a<=c;++a) #define rl(a,b,c) for(int a=b;a>=c;--a) #define LL long long #define IT iterator #define PB push_back #define II(a,b) make_pair(a,b) #define FIR first #define SEC second #define FREO freopen("check.out","w",stdout) #define rep(a,b) for(int a=0;a>a #define R2(a,b) cin>>a>>b #define check_min(a,b) a=min(a,b) #define check_max(a,b) a=max(a,b) using namespace std; const int INF=0x3f3f3f3f; typedef pair mp; /*} */ string s,t; int main(){ int n,k; R2(n,k); R2(s,t); LL res=-1; rb(i,0,k){ rb(i2,0,k){ if(i+i2>k) continue; string cpy=s; int cnt=i; rep(j,n){ if(!cnt) break; if(cpy[j]==t[0]) continue; cnt--; cpy[j]=t[0]; } cnt=i2; rl(j,n-1,0){ if(!cnt) break; if(cpy[j]==t[1]) continue; cnt--; cpy[j]=t[1]; } LL rest=0; int cc=0; rep(j,n) { if(cpy[j]==t[1]) rest+=cc; if(cpy[j]==t[0]) cc++; } check_max(res,rest); } } cout<
• » » 3 weeks ago, # ^ | ← Rev. 2 → +6 i did a greedy approach too and it was like you said the t[0] position should be the least and the t[1] position should be the last but there is a problem maybe we have some character t[0] in the end of the string not exactly the last character but lets say the second half of the string lets say t[0] = a and t[1] = bthere can be a situation like this : aaa .. b .... (a) .. bb in this case its better to change (a) to (b) to get more occurrences.in my solution i count if we change the i position to t[0] or t[1] how much we can get and how much we loss as we can see in the sample if we change (a) to (b) then we get 3 more occurrences and loss 2 occurrences and its good but some times its not however my solution got wrong answer in testcase 41 :))))
• » » » 3 weeks ago, # ^ | 0 Thank you very much.I have completely understand.
» 3 weeks ago, # | 0 For problem D, why do we need to have(10−dig)%10 moves instead of (10−dig) moves? As dig=n%10, is that %10 really necessary?
• » » 3 weeks ago, # ^ | 0 dig can be 0 as well so (10-dig) will give you pw*10 instead it should be pw*0
» 3 weeks ago, # | 0 For the third question i got tle in the contest...but i submitted the same solution it worked now please check this https://codeforces.com/contest/1409/submission/91922785
» 3 weeks ago, # | 0 Runtime error on test case #6 for Quesion D Could someone please explain the reason, I couldn't find . Thanks a lot! Here is the link to my solution : https://codeforces.com/contest/1409/submission/91924509
» 3 weeks ago, # | 0 Can someone help me with F. My dp[ind][k][left] states that we are at index 'ind', we can do atmost k changes, and we have left occurances of t[0] from [0:ind-1]. Then my answer will be dp[0][k][0]. My transition are, either change current character to t[0], or t[1] or don't change at all. I am getting wrong answer at testcase 20. My submission 91926014
» 3 weeks ago, # | 0 Binary Search solution for Dint T=1; int n, s; auto fun = [&] (int m) { string s1 = to_string(n); string s2 = to_string(m+n); if( s1.length() < s2.length() ) return true; int sum = 0, sm = 0; for(auto c:s1) sm+=(c-'0'); for(int i=0;i>T; while(T --> 0) { cin >> n >> s; int ans; int beg = 0, end = 1e18, mid; while(beg<=end) { mid = (beg+end)/2; if(fun(mid)) ans = mid, end = mid-1; else beg = mid +1; } cout << ans << '\n'; } Link
» 3 weeks ago, # | 0 Dear, MikeMirzayanov Please Check Why all my Correct Solutions got Skipped. I had submitted 3 Solutions in Yesterday Div 3. Contest and they were correct but saw today all were skipped which dipped my rating a lot. Please Check why Did it happen.
• » » 3 weeks ago, # ^ | +1 pranshukas Because you sumbit your code again in your alt account enigma_13 to hack them afterword.(Remember hacks in div3 give no points).
• » » » 3 weeks ago, # ^ | 0 I know that hacking any solution don't give any points. I was new to it so I thought to learn it. That's Why I just submitted my own solutions form some other ID and give it a try. I thought it was in no way harming any policy and it wasn't any mischief either because I haven't cheated, nor Plagrised. Then Why did my Submissions got Skipped?
• » » » » 3 weeks ago, # ^ | +5 Did you read the rules of the contest before participating ? If not then the last rule says: "I will not use multiple accounts and will take part in the contest using your personal and the single account".
» 3 weeks ago, # | 0 Nice Contest! Too much hard for noob like me, but tutorial is very helpfullll. HAPPY CODING.
» 3 weeks ago, # | 0 In the solution of Two platforms problem,why are we considering l[i]+r[i+1] rather than l[i]+r[i]?
• » » 3 weeks ago, # ^ | 0 Because the range is inclusive, and as we want to maximize the points we save, we never want to overlap the platforms, when going for l[i]+r[i], you're basically overlapping the right border of l and the left border of r, and thus, overcounting and not getting an optimal solution. I think once you read a couple times the last part of the editorial you'll understand.
» 3 weeks ago, # | 0 int the problem D we can do the recursive solution as well here is my solution link https://codeforces.com/contest/1409/submission/91941387
» 3 weeks ago, # | 0 I don't understand where the author says in problem E that li:=max(li,li−1) will help. Why? How? Can somebody explain it more clearly?
• » » 3 weeks ago, # ^ | +1 Don't forget that you have calculated l and r beforehand.where li is the number of points to the left from the point i (including i) that are not further than k from the i-th pointAfter that you convert the same array to a prefix maximum array (ith element contains best possible value of elements in range(0, i)). By induction you can iterate from left to right and you can say that ith element of maximum array is either i-1th element or the new added element's value. Because the author didn't separate the two arrays it might get a little confusing. They just converted the array to aa prefix maximum array without assigning it to a new variable.
• » » » 3 weeks ago, # ^ | 0 oh okay I got it thanks!!basically maximum contribution upto i from left side + maximum contribution from i+1 to end in right side for each i, while i is the fixed barrier between the two segments(worst case they will share a common end).
• » » » » 3 weeks ago, # ^ | 0 Exactly. But I don't think worst case is sharing a common end. The algorithm will always run n times on an array.
• » » » » » 3 weeks ago, # ^ | +1 yeah, poor choice of words there actually! I meant extreme case if they share common end not overlap
» 3 weeks ago, # | +1 I think the explanation of Problem E is a little hard to understand. I think the key point of this question was "Dividing the number line into two segments.". The answer is the best platform that lies in the left segment + the best platform that lies in the right segment. Simply iterate over all possible divisions and the best result is your answer. I couldn't understand the solution when I read this explanation but when I watched SecondThread's video I really understood the problem.I also really liked the difficulty levels of the first 4 questions.
» 3 weeks ago, # | 0 I was trying to solve problem F. I came up with a solution but don't know if it is correct. I'm having hard time implementing it.The idea is this:Input: n = 7 k = 3 s = "asddsaf" t = "sd" now for s, we calculate 2 arrays, left and right, where, left[i] means how many t[0] appear on left side of s[i] not including s[i] itseld, ans similarly right[i] means how many t[1] appear on right side of s[i] not including s[i].For above test case: a s d d s a f left --> 0 0 1 1 1 2 2 right--> 2 2 1 0 0 0 0now we start changing characters in following way: i -> left pointer, initially at 0 j -> right pointer, initially at n-1; when left[i] < right[i], it means that changing s[i] to t[1] will be good; when left[j] > right[i], it means that changing s[i] to t[0] will be good;In above 2 case, we go with whichever has higher absolute difference when both are equal we try to make counts of t[0] and t[1] equal. if anywhere k = 0 , we stopthen our answer is sum of number of t[1]'s to right side of t[0]'s in our modified s.Kindly tell me if it's correct or where it is wrong.
» 3 weeks ago, # | 0 What is the proof for the fact in problem B solution ?
• » » 3 weeks ago, # ^ | 0 Assume you are going step by step (decrease 1 at a time) and a > b (you can switch them very easily in code).if you decrease a, the final product will be (a-1)b = ab — b if you decrease b, the final product will be a(b-1) = ab — abecause a > b, decreasing a from ab will result in a smaller product then decreasing b from ab.The problem starts when a=x. After a=x, you start decreasing from b. I don't think you can have a solid proof of which is optimal without starting from a or starting from b after this point. Therefore try both possibilities.
» 3 weeks ago, # | 0 What is 1ll in editorialist solution? ans = min(ans, (a — da) * 1ll * (b — db));
• » » 3 weeks ago, # ^ | 0 LL stands for long long data type. It basically returns 1 as a long long instead of integer default.
• » » » 3 weeks ago, # ^ | 0 Thanks but why it is used here?
• » » » » 3 weeks ago, # ^ | +1 Because both a and b are integers that may be up to 1e9, so the multiplication would be up to 1e18, meaning it would overflow, that's the reason we want to convert it to long long.
» 3 weeks ago, # | 0 In the solution code of problem C (Rox); the condition diff/delta + 1 > n Can someone explain it for me?
• » » 3 weeks ago, # ^ | 0 i.e x=4, y=16, n=5 when diff=1 you can get 16 — 4 = 12 so you need to have 12 — 1 = 11 integers to fill that gap. Thats why you check if you have enough integers to fill that gap.
» 3 weeks ago, # | ← Rev. 2 → 0 I want to verify that the following solution for problem E is correct.First, If we can cover all points, then answer is n. this is easy.otherwise, Let V be a vector of pair such that V[i].first = a given x position. V[i].second = number of points at position V[i].first. This vector is sorted w.r.t. x positions The idea is that we should put the left border of the first platform at one of the positions V[0].first, V[1]. first, ... so we try all of themcur_best := 0left_border_pos = -1 right_border_pos = infFor all i we calculate the last j such that V[j].first <= V[i].first + k using binary searchcount the number of points at position range(V[i].first, V[j].first)if this number is bigger than cur_best then update cur_best and border position variables`end loop Now, we know the best way to put the first platform.I repeat the previous steps for the second platform but I do not allow overlap between the two platforms. If this solution is logically correct, then why this code is not accepted?I know it's very bad written but It's easy to understand I guess.
» 3 weeks ago, # | ← Rev. 2 → 0 Please help me I am getting wring ans in D solution:https://codeforces.com/contest/1409/submission/91972053
» 3 weeks ago, # | 0 How to solve F with bigger constraints? I was trying to solve for O(n^2) solution but couldn't make significant progress.
» 3 weeks ago, # | 0 I found this solution for F to be much simpler to understand than the one in the editorial.
» 3 weeks ago, # | 0 prob -D this is my code and please tell me how to remove TLE in my code https://codeforces.com/contest/1409/submission/92060096
» 3 weeks ago, # | 0 I felt soo dumb on myself when i did the correct thing but wasted 45 min and 5 WA submissions for erasing the vector for graph only upto n-1 and not n,on question D.
» 3 weeks ago, # | 0 In problem 1409E - Two Platforms I couldn't find why my logic is wrong!In the submission page, I can't see the test cases.Here is my submission 92089145. Can anybody give me any test cases for which my approach is wrong?
» 3 weeks ago, # | ← Rev. 2 → 0 vovuh Sir can you suggest few problems similar to E
• » » 3 weeks ago, # ^ | 0 yeah , i hope that
» 3 weeks ago, # | 0 Is there a way to solve Problem C in O(n) time?
• » » 3 weeks ago, # ^ | 0 refer my solution its O(n)https://codeforces.com/contest/1409/submission/92122521
» 3 weeks ago, # | 0 guys I am newbie..can anyone explain problem D in much simpler way with an example test case
» 3 weeks ago, # | 0 92209053 I am getting a time limit exceeded on test case 5, problem E. I am using Java and I am not sure what should I do to avoid this error. Any help please?
» 3 weeks ago, # | 0 can you explain why there is line int k = min((y — 1) / delta, n — 1); int a0 = y — k * delta;in question 1409C tutorial 3 example1409C][USER:VOVUH - Yet Another Array Restoration
» 2 weeks ago, # | 0 I am not able to understand how dp is working in this question 1409F — Subsequences of Length Two ? I have written the top down dp but bottom up is somehow not making sense to me . Is there any better way to understand it .
» 2 weeks ago, # | ← Rev. 2 → 0 del
» 36 hours ago, # | 0 https://codeforces.com/contest/1409/submission/93888937 why it is not working I tried to use the binary search to reach a convenient endpoint for the left and right arrays
|
2020-09-28 03:42:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3441638946533203, "perplexity": 1483.527250009482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401583556.73/warc/CC-MAIN-20200928010415-20200928040415-00641.warc.gz"}
|
https://greprepclub.com/forum/qotd-7-selected-data-for-greetin-card-sales-2424.html
|
It is currently 12 Dec 2018, 08:09
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# QOTD#7 Selected Data for Greetin Card Sales
Author Message
TAGS:
GMAT Club Legend
Joined: 07 Jun 2014
Posts: 4749
GRE 1: Q167 V156
WE: Business Development (Energy and Utilities)
Followers: 93
Kudos [?]: 1657 [2] , given: 396
QOTD#7 Selected Data for Greetin Card Sales [#permalink] 03 Aug 2016, 16:53
2
KUDOS
Expert's post
00:00
Question Stats:
75% (03:15) correct 25% (00:00) wrong based on 4 sessions
22. In 1993 the number of Valentine’s Day cards sold was approximately how many times the number of Thanksgiving cards sold?
A. 20
B. 30
C. 40
D. 50
E. 60
[Reveal] Spoiler: OA
A
23. In 1993 a card company that sold 40 percent of the Mother’s Day cards that year priced its cards for that occasion between $1.00 and$8.00 each. If the revenue from sales of the company’s Mother’s Day cards in 1993 was r million dollars, which of the following indicates all possible values of r ?
A. 155 < r < 1,240
B. 93 < r < 496
C. 93 < r < 326
D. 62 < r < 744
E. 62 < r < 496
[Reveal] Spoiler: OA
E
24. Approximately what was the percent increase in the annual revenue from all greeting card sales from 1990 to 1993 ?
A. 50%
B. 45%
C. 39%
D. 28%
E. 20%
[Reveal] Spoiler: OA
D
For Question 25, select all the answer choices that apply.
25. In 1993 the average (arithmetic mean) price per card for all greeting cards sold was $1.25. For which of the following occasions was the number of cards sold in 1993 less than the total number of cards sold that year for occasions other than the ten occasions shown? Indicate all such occasions. A. Christmas B. Valentine’s Day C. Easter D. Mother’s Day E. Father’s Day F. Graduation G. Thanksgiving H. Halloween [Reveal] Spoiler: OA C, D, E, F, G, H Practice Questions Question: 22 - 25 Page: 155 -156 [Reveal] Spoiler: Img Attachment: test.jpg [ 164.48 KiB | Viewed 6968 times ] _________________ Sandy If you found this post useful, please let me know by pressing the Kudos Button Try our free Online GRE Test GMAT Club Legend Joined: 07 Jun 2014 Posts: 4749 GRE 1: Q167 V156 WE: Business Development (Energy and Utilities) Followers: 93 Kudos [?]: 1657 [0], given: 396 Re: QOTD#7 Selected Data for Greetin Card Sales [#permalink] 03 Aug 2016, 17:01 Expert's post Explanation 22. According to the table, the number of Valentine’s Day cards sold in 1993 was 900 million, and the number of Thanksgiving cards sold was 42 million. Therefore the number of Valentine’s Day cards sold was $$\frac{900}{42}$$, or approximately 21.4 times the number of Thanksgiving cards sold. Of the answer choices, the closest is 20. The correct answer is Choice A. 23. According the table, 155 million Mother’s Day cards were sold in 1993. The card company that sold 40 percent of the Mother’s Day cards sold (0.4)(155) million, or 62 million cards. Since that company priced the cards between$1.00 and $8.00 each, the revenue, r million dollars, from selling the 62 million cards was between ($1.00)(62) million and ($8.00)(62) million, or between$62 million and $496 million; that is, 62 < r < 496. Thus the correct answer is Choice E. 24. According to the bar graph, the annual revenue from all greeting card sales in 1990 was approximately$4.5 billion, and the corresponding total in 1993 was approximately $5.75 billion. Therefore the percent increase from 1990 to 1993 was approximately $$\frac{(5.75-4.5)}{4.5}*100%$$, or approximately 28%. The correct answer is Choice D. 25. According to the bar graph, the total annual revenue in 1993 was approximately$5.75 billion. In the question, you are given that the average price per card for all greeting cards sold was \$1.25. Therefore the total number of cards sold for all occasions was $$\frac{5.75}{1.25}$$ billion, or 4.6 billion.
According to the table, the total number of cards sold in 1993 for the ten occasions shown was 3.9 billion. So the number of cards sold for occasions other than the ten occasions shown, in billions, was 4.6 – 3.9, or 0.7 billion. Note that 0.7 billion equals 700 million. From the table, you can see that less than 700 million cards were sold for each of six of the occasions in the answer choices: Easter, Mother’s Day, Father’s Day, Graduation, Thanksgiving, and Halloween.
Thus the correct answer consists of Choices C, D, E, F, G, and H.
_________________
Sandy
If you found this post useful, please let me know by pressing the Kudos Button
Try our free Online GRE Test
Display posts from previous: Sort by
|
2018-12-12 16:09:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26374512910842896, "perplexity": 4081.086281249994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824059.7/warc/CC-MAIN-20181212155747-20181212181247-00227.warc.gz"}
|
https://luerobertinho.com.br/blog/be0ea4-b2h6-ionic-or-covalent
|
The transfer and sharing of electrons among atoms govern the chemistry of the elements. Silicon tetrachloride is a molecular compound, consisting of a silicon centre covalently bound to four chlorine ligands. Write the symbol for each ion and name them.Note that there is a system for naming some polyatomic ions; The nature of the attractive forces that hold atoms or ions together within a compound is the basis for classifying chemical bonding. Determine if they are ionic or molecular compounds. Atoms of group 17 gain one electron and form anions with a 1− charge; atoms of group 16 gain two electrons and form ions with a 2− charge, and so on. PLEASE TELL YOUR TEACHER THIS. The symbol for the ion is MgNitrogen’s position in the periodic table (group 15) reveals that it is a nonmetal. B2H6 ( Diborane ) is Molecular bond. In fact, transition metals and some other metals often exhibit variable charges that are not predictable by their location in the table. so no rule is very good. Polar "In c...Question = Is AsH3 polar or nonpolar ? There is no neutral B2H4 molecule. BH is not stable, there is no compound by that formula. ... Any compound which forms a giant lattice from ionic bonds is known as an ionic compound. Get your answers by asking now. (credit: modification of work by Stanislav Doronenko)Because the ionic compound must be electrically neutral, it must have the same number of positive and negative charges. ... Is testosterone a covalent or ionic compound? Predict which forms an anion, which forms a cation, and the charges of each ion. These electron pairs are known as shared pairs or bonding pairs, and the stable balance of attractive and repulsive forces between atoms, when they share electrons, is known as covalent bonding. (credit: modification of work by Mark Blaser and Matt Evans)Watch this video to see a mixture of salts melt and conduct electricity.In every ionic compound, the total number of positive charges of the cations equals the total number of negative charges of the anions. For example, a neutral calcium atom, with 20 protons and 20 electrons, readily loses two electrons. Question = Is SCN- polar or nonpolar ?
This requires a ratio of one CaPredict the formula of the ionic compound formed between the lithium ion and the peroxide ion, ${\text{O}}_{2}{}^{2-}$ (Hint: Use the periodic table to predict the sign and the charge on the lithium ion. If you want to quickly find the word you want to search, use Ctrl + F, then type the word you want to search. Answer = TeCl4 ( Tellurium tetrachloride ) is Polar What is polar and non-polar? This results in a cation with 20 protons, 18 electrons, and a 2+ charge. Purpose • The type of bonding present in a substance greatly influences its properties. When electrons are transferred and ions form, When an element composed of atoms that readily lose electrons (a metal) reacts with an element composed of atoms that readily gain electrons (a nonmetal), a transfer of electrons usually occurs, producing ions. Positively charged ions are called cations, and negatively charge ions are called anions. Answer: H2SO3 ( Sulfurous acid ) is a Molecular bond What is chemical bond, ionic bo...A chemical bond is a lasting attraction between atoms, ions or molecules that enables the formation of chemical compounds. Nonmetals form negative ions (anions). I'll tell you the ionic or Molecular bond list below. Answer = CLO3- (Chlorate) is Polar What is polar and non-polar? (a) A sodium atom (Na) has equal numbers of protons and electrons (11) and is uncharged. Is B2H6 ( Diborane ) ionic or Molecular bond ? What I had found is that it is an Ionic compound List ionic or Molecular bond. (As a comparison, the molecular compound water melts at 0 °C and boils at 100 °C.) Atoms of many main-group metals lose enough electrons to leave them with the same number of electrons as an atom of the preceding noble gas.
This trend can be used as a guide in many cases, but its predictive value decreases when moving toward the center of the periodic table.
I'll tell you the ionic or Molecular bond list below.
|
2021-06-25 00:56:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.635071873664856, "perplexity": 2114.8518196875607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488560777.97/warc/CC-MAIN-20210624233218-20210625023218-00185.warc.gz"}
|
https://encyclopediaofmath.org/wiki/Simons_inequality
|
# Simons inequality
Jump to: navigation, search
2010 Mathematics Subject Classification: Primary: 53A10 [MSN][ZBL]
An inequality proved by Simons in his fundamental work [Si] on minimal varieties, which played a pivotal role in the solution of the Bernstein problem. The inequality bounds from below the Laplacian of the square norm of the second fundamental form of a minimal hypersurface $\Sigma$ in a general Riemannian manifold $N$ of dimension $n+1$. More precisely, if $A$ denotes the second fundamental form of $\Sigma$ and $|A|$ its Hilbert-Schmidt norm, the inequality states that, at every point $p\in \Sigma$, $\Delta_\Sigma |A|^2 (p) \geq - C (1 + |A|^2 (p))^2$ where $\Delta_\Sigma$ is the Laplace operator on $\Sigma$ and the constant $C$ depends upon $n$ and the Riemannian curvature of the ambient manifold $N$ at the point $p$. When $N$ is the Euclidean space, a more precise form of the inequality is $\Delta_\Sigma |A|^2 \geq - 2 |A|^4 + 2 \left(1+\frac{2}{n}\right) |\nabla_\Sigma |A||^2$ (see Lemma 2.1 of [CM] for a proof and [SSY] for the case of general ambient manifolds). Moreover, the inequality is an identity in the special case of $2$-dimensional minimal surfaces of $\mathbb R^3$ (cf. [CM]).
The inequality was used by Simon in [Si] to show, among other things, that stable minimal hypercones of $\mathbb R^{n+1}$ must be planar for $n\leq 6$ and it was subsequently used to infer curvature estimates for stable minimal hypersurfaces, generalizing the classical work of Heinz [He], cf. [SSY], [CS] and [SS]. Simons also pointed out that there is a nonplanar stable minimal hypercone in $\mathbb R^8$, cf. Simons cone.
#### References
[CS] H. I. Choi, R. Schoen, "The space of minimal embeddings of a surface into a three-dimensional manifold of positive Ricci curvature", Invent. Math., 81, (1985) pp. 387-394. [CM] T. H. Colding, W. P. Minicozzi III, "A course in minimal surfaces", Graduate Studies in Mathematics, AMS, (2011). [He] E. Heinz, "Ueber die Loesungen der Minimalflaechengleichung" Nachr. Akad. Wiss. Goettingen Math. Phys. K1 ii, (1952) pp. 51-56 [SS] R. Schoen, L. Simon, "Regularity of stable minimal hypersurfaces" Comm. Pure App. Math., 34, (1981), pp. 741-797. [SSY] R. Schoen, L. Simon, S. T. Yau, "Curvature estimates for minimal hypersurfaces" Acta Math., 132 (1975) pp. 275-288 [Si] J. Simons, "Minimal varieties in riemannian manifolds" Ann. of Math., 88 (1968) pp. 62-105 MR233295 Zbl 0181.49702
How to Cite This Entry:
Simons inequality. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Simons_inequality&oldid=33626
|
2020-08-12 12:35:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.894615650177002, "perplexity": 801.5297707278133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738892.21/warc/CC-MAIN-20200812112531-20200812142531-00580.warc.gz"}
|
https://zbmath.org/?q=an:0572.14007
|
On the normal bundle of space curves in $$\mathbb{P}^3$$. (Sur le fibré normal des courbes gauches.)(French)Zbl 0572.14007
Let $$D_s(g)$$ (resp. $$D^0_s(g)$$, resp. $$D_{ss}(g)$$, resp. $$D^0_{ss}(g)$$, resp. $$D_P(g))$$ be the first integer $$d$$ such that there is a smooth connected curve $$C\subset\mathbb{P}^3$$ of degree $$d$$, genus $$g$$, with normal bundle $$N_C$$ stable (resp. stable and with $$H^1(C,N_C)=0$$, resp. semi-stable, resp. semi-stable and with $$H^1(C,N_C)=0$$, resp. with $$H^1(C,N_C(-2))=0)$$. The authors announce the following result (which seems to be very useful in many problems about space curves):
(a) For all $$g\geq 2$$, $$D^0_s(g)$$, $$D_P(g)$$ are finite;
(b) if $$g\geq 1$$, $$d\geq D^0_s(g)$$ (resp. $$D^0_{ss}(g)$$, resp. $$D_P(g))$$ there is a smooth connected curve $$C\subset\mathbb{P}^3$$ with degree $$d$$, genus $$g$$, with stable normal bundle (resp. semi-stable, resp. with $$H^1(C,N_C(-2))=0)$$;
(c) $$D^0_s(g)\leq g+3$$;
(d) $$1\leq \lim \sup g^{-2/3}D_{ss}(g)\leq \lim \sup g^{-2/3}D_s(g)\leq \lim \sup g^{-2/3}D^0_{ss}(g)=\lim \sup g^{-2/3}D^0_s(g)\leq \lim \sup D_P(g)\leq (9/8)^{1/3}$$;
(f) if $$g\geq 2$$, $$d\geq 3g$$, there is in $$\mathbb{P}^3$$ a smooth curve of genus $$g$$, degree $$d$$ with normal bundle of degree of stability $$[g/2]$$.
MSC:
14F06 Sheaves in algebraic geometry 14H45 Special algebraic curves and curves of low genus 14H10 Families, moduli of curves (algebraic) 14N05 Projective techniques in algebraic geometry
|
2022-09-27 23:19:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9246382117271423, "perplexity": 316.3302100914061}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00609.warc.gz"}
|
http://www.vidyarthiacademy.in/vidyarthiacademy/ncertsolutions/ixmaths/ixmaths01.1ncertsolutions.php
|
NCERT Solutions Class 9 Mathematics
Exercise 1.1
Q.1. Is zero a rational number? Can you write it in the form $\frac{\mathrm{p}}{\mathrm{q}}$, where p and q are integers and q ≠ 0?
Q.2. Find six rational numbers between 3 and 4.
Q.3. Find five rational numbers between $\frac{3}{5}$ and $\frac{4}{5}$.
Q.4: State whether the following statements are true or false. Give reasons for your answers.
(i) Every natural number is a whole number.
(ii) Every integer is a whole number.
(iii) Every rational number is a whole number.
NCERT Solutions Class 9 Mathematics
Exercise 1.1
Q.1. Is zero a rational number? Can you write it in the form $\frac{\mathrm{p}}{\mathrm{q}}$, where p and q are integers and q ≠ 0?
Solution:
Yes. Zero is a rational number as it can be represented as $\frac{0}{1}$, $\frac{0}{2}$, etc..
Q.2. Find six rational numbers between 3 and 4.
Solution:
The given numbers can be represented as,
$3=\frac{3}{1}=\frac{3}{1}×\frac{7}{7}=\frac{21}{7}$
$4=\frac{4}{1}=\frac{4}{1}×\frac{7}{7}=\frac{28}{7}$
Therefore, six rational numbers between 3 and 4 are
$\frac{22}{7},\frac{23}{7},\frac{24}{7},\frac{25}{7},\frac{26}{7},\frac{27}{7}.$
Q.3. Find five rational numbers between $\frac{3}{5}$ and $\frac{4}{5}$.
Solution:
The given numbers can be represented as,
$\frac{3}{5}=\frac{3}{5}×\frac{6}{6}=\frac{18}{30}$
$\frac{4}{5}=\frac{4}{5}×\frac{6}{6}=\frac{24}{30}$
Therefore, five rational numbers between $\frac{3}{5}$ and $\frac{4}{5}$ are
$\frac{19}{30},\frac{20}{30},\frac{21}{30},\frac{22}{30},\frac{23}{30}.$
Q.4: State whether the following statements are true or false. Give reasons for your answers.
(i) Every natural number is a whole number.
(ii) Every integer is a whole number.
(iii) Every rational number is a whole number.
Solution:
(i) Every natural number is a whole number.
• True, since the collection of whole numbers contains all natural numbers.
(ii) Every integer is a whole number.
• False, as integers may be negative but whole numbers are always positive.
(iii) Every rational number is a whole number.
• False, as rational numbers may be fractional but so they include numbers which are not whole numbers.
|
2019-03-19 00:10:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 16, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37936732172966003, "perplexity": 605.4474424964098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201812.2/warc/CC-MAIN-20190318232014-20190319014014-00291.warc.gz"}
|
https://www.statistics-lab.com/%E7%89%A9%E7%90%86%E4%BB%A3%E5%86%99%E7%94%B5%E5%8A%A8%E5%8A%9B%E5%AD%A6%E4%BB%A3%E5%86%99electromagnetism%E4%BB%A3%E8%80%83phys2213/
|
### 物理代写|电动力学代写electromagnetism代考|PHYS2213
statistics-lab™ 为您的留学生涯保驾护航 在代写电动力学electrodynamics方面已经树立了自己的口碑, 保证靠谱, 高质且原创的统计Statistics代写服务。我们的专家在代写电动力学electrodynamics代写方面经验极为丰富,各种代写电动力学electrodynamics相关的作业也就用不着说。
• Statistical Inference 统计推断
• Statistical Computing 统计计算
• (Generalized) Linear Models 广义线性模型
• Statistical Machine Learning 统计机器学习
• Longitudinal Data Analysis 纵向数据分析
• Foundations of Data Science 数据科学基础
## 物理代写|电动力学代写electromagnetism代考|A Basic Stochastic Integral
The following is similar to Example $2 .$
Example 5 Suppose $s=1,2,3, \ldots$ is time, measured in days. Suppose a share, or unit of stock, has value $x(s)$ on day s; suppose $z(s)$ is the number of shares held on day $s$; and suppose $c(s)$ is the change in the value of the shareholding on day $s$ as a result of the change in share value from the previous day so $c(s)=$ $z(s-1)(x(s)-x(s-1))$. Let $w(s)$ be the cumulative change in shareholding value at end of day $s$, so $w(s)=w(s-1)+c(s)$. If share value $x(s)$ and stockholding $z(s)$ are subject to random variability, how is the gain (or loss) from the stockholding to be estimated?
Take initial value (at time $s=0)$ of the share to be $x(0)$ (or $\left.x_{0}\right)$, take the initial shareholding or number of shares owned to be $z(0)$ (or $\left.z_{0}\right)$. Then, at end of day $1(s=1)$,
$$c(1)=z(0) \times(x(1)-x(0)), \quad w(1)=w(0)+c(1)=c(1)$$
At end of day $s$,
$$c(s)=z(s-1) \times(x(s)-x(s-1)), \quad w(s)=w(s-1)+c(s)$$
After $t$ days,
$$w(t)=\sum_{s=1}^{t} z(s-1)(x(s)-x(s-1)) .$$
If the time increments are reduced to arbitrarily small size (so $s$ represents number of “time ticks” -fractions of a second, say), with the meaning of the other variables adjusted accordingly, then
$$w(t)=\sum_{j=1}^{n} z\left(s_{j-1}\right)\left(x\left(s_{j}\right)-x\left(s_{j-1}\right)\right), \quad \text { or } \quad w(t)=\sum z(s) \Delta x(s)$$
The latter expressions are Riemann sum estimates of $\int_{0}^{t} z(s) d x(s)$ (a Stieltjestype integral) whenever the latter exists.
Each of the expressions in (2.4) is sample value of a random variable
$$W(t)=\sum_{j=1}^{n} Z\left(s_{j-1}\right)\left(X\left(s_{j}\right)-X\left(s_{j-1}\right)\right) \text { or } \int_{0}^{t} Z(s) d X(s)$$constructed from the random variables $X, Z$, and $W$. These notations symbolize in a “naive” or “realistic” way-the stochastic integral of the process $Z$ with respect to the process $X$. In chapter 8 of [MTRV], symbols s, or $\mathbf{S}$, or $\mathcal{S}$ are used (in place of the symbol $\int$ ) for various kinds of stochastic integral. In the context described here, S would be the appropriate notation. (See (5.28) below.)
## 物理代写|电动力学代写electromagnetism代考|Choosing a Sample Space
It was mentioned earlier that there are many alternative ways of producing a sample space $\Omega$ (along with the linked probability measure $P$ and family $\mathcal{A}$ of measurable subsets of $\Omega$ ). The set of numbers
$${-5,-4,-3,-2,-1,0,1,2,3,4,5,10}$$
was used as sample space for the random variability in the preceding example of stochastic integration. The measurable space $\mathcal{A}$ was the family of all subsets of $\Omega$, and the example was illustrated by means of two distinct probability measures $P$, one of which was based on Up and Down transitions being equally likely, where for the other measure an Up transition was twice as likely as a Down.
An alternative sample space for this example of random variability is
$$\Omega=\Omega \Omega_{1} \times \Omega \Omega_{2} \times \Omega \Omega_{3} \times \Omega \Omega_{4}$$
where $\Omega_{j}={U, D}$ for $j=1,2,3,4$; so the elements $\omega$ of $\Omega$ consist of sixteen 4-tuples of the form
$$\omega=(\cdot, \cdot \cdot \cdot), \quad \text { such as } \omega=(U, D, D, U) \text { for example. }$$
Let the measurable space $\mathcal{A}$ be the family of all subsets $A$ of $\Omega$; so $\mathcal{A}$ contains $2^{16}$ members, one of which (for example) is
$$A={(D, U, U, D),(U, D, D, U),(D, U, D, U),(U, U, U, U),(D, D, D, D)}$$
with $A$ consisting of five individual four-tuples. Assume that Up transitions and Down transitions are equally likely, and that they are independent events. Then, as before,
$$P({\omega})=\frac{1}{16}$$
for each $\omega \in \Omega$. For $A$ above, $P(A)=\frac{5}{16}$.
To relate this probability structure to the shareholding example, let $\mathbf{R}^{4}=$ $\mathbf{R} \times \mathbf{R} \times \mathbf{R} \times \mathbf{R}$, and let
$$f: \Omega \mapsto \mathbf{R}^{4}, \quad f(\omega)=((x(1), x(2), x(3), x(4)),$$
using Table 2.4; so, for instance,
$$f(\omega)=f((U, D, D, U))=(11,10,9,10)=(x(1), x(2), x(3), x(4)),$$
and so on. Next, let $\mathbf{S}$ denote the stochastic integrals of the preceding section, so for $x=(x(1), x(2), x(3), x(4)) \in \mathbf{R}^{4}$,
$$\mathbf{S}(x)=\int_{0}^{4} z(s) d x(s)=\sum_{s=1}^{4} z(s-1)(x(s)-x(s-1)),$$
so $\mathbf{S}(x)$ gives the values $w(4)$ of Table 2.4. As described in Section $2.3$, the rationale for deducing the probabilities of outcomes $\mathbf{S}(x)$, = $w(4)$, from the probabilities on $\Omega$ is the relationship
$$P(w(4))=P\left(f^{-1}\left(\mathbf{S}^{-1}(w(4))\right)\right) .$$
## 物理代写|电动力学代写electromagnetism代考|More on Basic Stochastic Integral
The constructions in Sections $2.3$ and $2.4$ purported to be about stochastic integration. While a case can be made that (2.6) and (2.7) are actually stochastic integrals, such simple examples are not really what the standard or classical theory of Chapter 1 is all about. The examples and illustrations in Sections $2.3$ and $2.4$ may not really be much help in coming to grips with the standard theory of stochastic integrals outlined in Chapter $1 .$
This is because Chapter 1, on the definition and meaning of classical stochastic integration, involves subtle passages to a limit, whereas (2.6) and (2.7) involve only finite sums and some elementary probability calculations.
From the latter point of view, introducing probability measure spaces and random-variables-as-measurable-functions seems to be an unnecessary complication. So, from such a straightforward starting point, why does the theory become so challenging and “messy”, as portrayed in Chapter $1 ?$
As in Example 2, the illustration in Section $2.3$ involves dividing up the time period (4 days) into 4 sections; leading to sample space $\Omega=\mathbf{R}^{4}$ in (2.15). Why not simply continue in this vein, and subdivide the time into 40 , or 400 , or 4 million steps instead of just 4 ; using sample spaces $\mathbf{R}^{40}$, or $\mathbf{R}^{400}$, or $\mathbf{R}^{4000000}$, respectively? The computations may become lengthier, but no new principle is involved; each of the variables changes in discrete steps at discrete points in time. ${ }^{5}$
Other simplifications can be similarly adopted. For instance, only two kinds of changes are contemplated in Section 2.3: increase (Up) or decrease (Down).
## 物理代写|电动力学代写electromagnetism代考|A Basic Stochastic Integral
C(1)=和(0)×(X(1)−X(0)),在(1)=在(0)+C(1)=C(1)
C(s)=和(s−1)×(X(s)−X(s−1)),在(s)=在(s−1)+C(s)
(2.4)中的每个表达式都是随机变量的样本值
## 物理代写|电动力学代写electromagnetism代考|Choosing a Sample Space
−5,−4,−3,−2,−1,0,1,2,3,4,5,10
Ω=ΩΩ1×ΩΩ2×ΩΩ3×ΩΩ4
ω=(⋅,⋅⋅⋅), 如 ω=(在,D,D,在) 例如。
F:Ω↦R4,F(ω)=((X(1),X(2),X(3),X(4)),
F(ω)=F((在,D,D,在))=(11,10,9,10)=(X(1),X(2),X(3),X(4)),
## 有限元方法代写
tatistics-lab作为专业的留学生服务机构,多年来已为美国、英国、加拿大、澳洲等留学热门地的学生提供专业的学术服务,包括但不限于Essay代写,Assignment代写,Dissertation代写,Report代写,小组作业代写,Proposal代写,Paper代写,Presentation代写,计算机作业代写,论文修改和润色,网课代做,exam代考等等。写作范围涵盖高中,本科,研究生等海外留学全阶段,辐射金融,经济学,会计学,审计学,管理学等全球99%专业科目。写作团队既有专业英语母语作者,也有海外名校硕博留学生,每位写作老师都拥有过硬的语言能力,专业的学科背景和学术写作经验。我们承诺100%原创,100%专业,100%准时,100%满意。
## MATLAB代写
MATLAB 是一种用于技术计算的高性能语言。它将计算、可视化和编程集成在一个易于使用的环境中,其中问题和解决方案以熟悉的数学符号表示。典型用途包括:数学和计算算法开发建模、仿真和原型制作数据分析、探索和可视化科学和工程图形应用程序开发,包括图形用户界面构建MATLAB 是一个交互式系统,其基本数据元素是一个不需要维度的数组。这使您可以解决许多技术计算问题,尤其是那些具有矩阵和向量公式的问题,而只需用 C 或 Fortran 等标量非交互式语言编写程序所需的时间的一小部分。MATLAB 名称代表矩阵实验室。MATLAB 最初的编写目的是提供对由 LINPACK 和 EISPACK 项目开发的矩阵软件的轻松访问,这两个项目共同代表了矩阵计算软件的最新技术。MATLAB 经过多年的发展,得到了许多用户的投入。在大学环境中,它是数学、工程和科学入门和高级课程的标准教学工具。在工业领域,MATLAB 是高效研究、开发和分析的首选工具。MATLAB 具有一系列称为工具箱的特定于应用程序的解决方案。对于大多数 MATLAB 用户来说非常重要,工具箱允许您学习应用专业技术。工具箱是 MATLAB 函数(M 文件)的综合集合,可扩展 MATLAB 环境以解决特定类别的问题。可用工具箱的领域包括信号处理、控制系统、神经网络、模糊逻辑、小波、仿真等。
|
2023-02-04 02:07:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.873857855796814, "perplexity": 735.2471095451122}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500080.82/warc/CC-MAIN-20230204012622-20230204042622-00575.warc.gz"}
|
https://answers.yahoo.com/question/index?qid=20130113100204AAgm8CH
|
# Let A be an invertible nxn matrix and v an eigenvector of A with associated eigenvalue λ.?
a) Is v an eigenvector of A^3? if so, what is the eigenvalue?
b) Is v an eigenvector of A^-1? if so, what is the eigenvalue?
c) Is v an eigenvector of A + 2I? if so, what is the eigenvalue?
d) is v an eigenvector of 7A? if so, what is the eigenvalue?
e) Let A be an nxn matrix and let B=A-αI for some scalar α. How do the eigenvalues of A and B compare?
Relevance
• 7 years ago
for v to be an eigenvector with eigenvalue λ
A v = λ v
a) A v = λ v
multiply by A
A^2 v = A λ v = λ A v = λ λ v = λ^2 v
A^3 v = A λ^2 v =λ^2 A v = λ^3 v
so v is an eigenvector of A^3 and λ^3 is the associated eigenvalue
b) A v = λ v
left multiply by A^-2
A^-2 A v = A^-2 λ v
A^-1 v = λ A^-2 v = (λ A^-2) v
for v to be an eigenvector of A^-1 then A^-2 must = I the unit matrix
so v is not in general an eigenvector of A^-1
c) A v = λ v
add 2 I v to both sides
Av + 2 I v = λ v + 2 I v or noting that 2 I v = 2 v on the right hand side
(A+2 I ) v = (λ + 2 ) v
so v is an eigenvector of Av + 2 I v with associated eigenvalue of λ + 2
d) A v = λ v
7 A v = 7 λ v
(7A) v = (7 λ) v
so v is an eigenvector of 7A and has eigenvalue of 7 λ
e) A v = λ v and
B=A-αI
B v = A v - α v
B v = λ v - α v = (λ - α) v
so the eigenvalue of B called λb is
λb = λ - α
|
2019-11-12 03:31:01
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8156112432479858, "perplexity": 1892.347532344099}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664567.4/warc/CC-MAIN-20191112024224-20191112052224-00109.warc.gz"}
|
https://proofwiki.org/wiki/Definition:Subset_Product_Action
|
# Definition:Subset Product Action
## Definition
Let $\struct {G, \circ}$ be a group.
Let $\HH$ be the set of subgroups of $G$.
### Left Subset Product Action
The (left) subset product action of $G$ is the group action $*: G \times \powerset G \to \powerset G$:
$\forall g \in G, S \in \powerset G: g * S = g \circ S$
### Right Subset Product Action
The (right) subset product action of $G$ is the group action $*: G \times \powerset G \to \powerset G$:
$\forall g \in G, S \in \powerset G: g * S = S \circ g$
## Also see
• Results about the Subset Product action can be found here.
|
2023-03-30 14:30:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8448556065559387, "perplexity": 1370.3842925010315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949331.26/warc/CC-MAIN-20230330132508-20230330162508-00390.warc.gz"}
|
https://docs.wxpython.org/wx.WithImages.html
|
# wx.WithImages¶
A mixin class to be used with other classes that use a wx.ImageList.
This class is used by classes such as wx.Notebook and wx.TreeCtrl, that use image indices to specify the icons used for their items (page icons for the former or the items inside the control for the latter).
The icon index can either be a special value wx.NO_IMAGE to indicate that an item doesn’t use an image at all or a small positive integer to specify the index of the icon in the list of images maintained by this class. Note that for many controls, either none of the items should have an icon or all of them should have one, i.e. mixing the items with and without an icon doesn’t always work well and may result in less than ideal appearance.
To initialize the list of icons used, call SetImages method passing it a vector of wx.BitmapBundle objects which can, in the simplest case, be just wx.Bitmap or wx.Icon objects – however, as always with wx.BitmapBundle, either more than one bitmap or icon needs to be specified or the bitmap bundle needs to be created from SVG to obtain better appearance in high DPI.
Alternative, traditional API which was the only one available until wxWidgets 3.1.6, is based on the use of wx.ImageList class. To use it, you need to create an object of this class and then call either AssignImageList to set this image list and give the control its ownership or SetImageList to retain the ownership of the image list, which can be useful if the same image list is shared by multiple controls, but requires deleting the image list later.
Note
ImageList-based API is not formally deprecated, but its use is discouraged because it is more complicated than simply providing a vector of bitmaps and it doesn’t allow specifying multiple images or using SVG, which is required for good high DPI support. Please don’t use AssignImageList and SetImageList in the new code and use SetImages instead.
## Class Hierarchy¶
Inheritance diagram for class WithImages:
## Methods Summary¶
__init__ AssignImageList Sets the image list for the page control and takes ownership of the list. GetImageCount Return the number of images in this control. GetImageList Returns the associated image list, may be None. GetUpdatedImageListFor Returns the image list updated to reflect the DPI scaling used for the given window if possible. HasImages Return True if the control has any images associated with it. SetImageList Sets the image list to use. SetImages Set the images to use for the items in the control.
## Class API¶
class wx.WithImages(object)
Possible constructors:
WithImages()
A mixin class to be used with other classes that use a ImageList.
### Methods¶
__init__(self)
AssignImageList(self, imageList)
Sets the image list for the page control and takes ownership of the list.
This function exists for compatibility only, please use SetImages in the new code.
Parameters
imageList (wx.ImageList) –
See also
GetImageCount(self)
Return the number of images in this control.
The returned value may be 0 if there are no images associated with the control.
Return type
int
New in version 4.1/wxWidgets-3.1.6.
GetImageList(self)
Returns the associated image list, may be None.
Note that the new code should use GetUpdatedImageListFor instead.
Return type
wx.ImageList
See also
GetUpdatedImageListFor(self, win)
Returns the image list updated to reflect the DPI scaling used for the given window if possible.
If SetImages has been called, this function creates the image list containing the images using the DPI scaling in effect for the provided win, which must be valid.
Otherwise it behaves as GetImageList , i.e. returns the image list previously set using SetImageList or AssignImageList , and just returns None if none of them had been called.
Parameters
win (wx.Window) –
Return type
wx.ImageList
Returns
Possibly null pointer owned by this object, i.e. which must not be deleted by the caller.
New in version 4.1/wxWidgets-3.1.6.
HasImages(self)
Return True if the control has any images associated with it.
Return type
bool
New in version 4.1/wxWidgets-3.1.6.
SetImageList(self, imageList)
Sets the image list to use.
It does not take ownership of the image list, you must delete it yourself.
This function exists for compatibility only, please use SetImages in the new code.
Parameters
imageList (wx.ImageList) –
See also
SetImages(self, images)
Set the images to use for the items in the control.
This function allows to specify the images to use in multiple different resolutions, letting the control to select the appropriate one for its DPI scaling. For this reason, it should be preferred to using the functions taking wx.ImageList, which has a fixed size, in the new code.
Parameters
images (Vector) – Non empty vector of bitmap bundles. Valid image indexes for the items in this control are determined by the size of this vector.
New in version 4.1/wxWidgets-3.1.6.
### Properties¶
ImageCount
ImageList
|
2023-03-22 23:24:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1908199042081833, "perplexity": 2373.9667143068614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944452.97/warc/CC-MAIN-20230322211955-20230323001955-00327.warc.gz"}
|
https://www.gamedev.net/forums/topic/622947-igipf-developing-an-ios-game-in-2-weeks/
|
iGipf - Developing an iOS game in 2 weeks
Recommended Posts
We all remember those "How to learn C++ in 21 days" books:
[img]http://2.bp.blogspot.com/-uNRGnrU0KjE/T3DMp98Ru6I/AAAAAAAAASI/VHbGsuA_Uf4/s1600/Teach-yourself-C++-in-21-days-2.png[/img]
Many people are probably skeptical about this idea, but (as we’ve already been into the future) there is no reason to doubt it? Before we delve into the specifics of our programming venture, here is a brief introduction to our team. We are a couple of programmers with no claim (oh, really?) to fame like Angry Birds and, moreover, with very little experience in iOS or GameDev in general. Roughly speaking, for us, this project is being undertaken as both a challenge to ourselves and, ultimately, for the pure fun and experience even if little can be achieved (oh, really?) in the end.
Now that you know at least a little more about us than Google can tell you, let’s talk about the rules for this endeavor. “Two weeks are enough for every project” was a statement made by one of us (let’s call him Bill…No, Bond is better) in a Skype call late one night. “Okay, but maybe…” was the reply that was abruptly cut off by “No buts or maybes.” Well, OK, if you say so! Of course, we could spend these weeks investing time in our families, working, taking care of the fate of humanity, or any number of other noble pursuits. Yeah, right! If you’ll buy that, I have a bridge to sell you. If we weren’t doing this project, the honest truth is that we’d be playing Starcraft, wasting time on Facebook, or just drinking beer.
Anyway, back to the rules, which have been set:
Two weeks of deeply intensive development is enough if you think about it. After all, 2 * 7 * 24 = 336 hours, which translates, via the following sophisticated math (336/8 *7/5 = 58.8), into nearly 59 calendar days or 2 whole months! Clearly the math justifies the time limit (looks like there might be a trick somewhere… But millions of managers can be wrong!). And, quite seriously (this is a serious post; honestly), our programming practices prove that the best way to raise the priority of any given task is to decrease the time limits for completing it. So, exactly two weeks (“exactly” means two weeks plus an extra two weeks depending on the situation). Rule 1 is complete!
Work on other rules later.
OK, now that the time limit is set, we must decide what we should write. After an objective evaluation of our capabilities (who are we trying to fool?), we decide not to write a 4D-shooter with RPG (or BDSM) elements and multiplayer capabilities for 20,000,000 simultaneous players. However, after some discussion, we do think that having a simpler multiplayer option our game is a grand idea.
As a basis, we took the game called GIPF, as there is nothing in the AppStore (and not planned, as we see). We decided to change the rules of the original game a little bit to make it even simpler to play, to ensure that it is not an exact copy of the original game, and to make it even more fun. In short, it will be a logical game with rules a bit more complicated than Tic-tac-toe and with a bit less need for strategy than chess. This seems like a good way for roughly 67 to 68% of Americans to spend their time. This number of users is calculated using extremely strict formulas with basic expectations about the number of sales of the application, at the price of $149, that are needed in order for us to purchase an island in the Mediterranean Sea on which to live and carry on about our programming. These numbers can be slightly reduced (~1,000 fold) in case we decide that money is not the main goal of our lives. Our plans for the day: High-level architectural design of the application List of tasks and time limits Make decisions about the supported platforms and iOS version [size=3][b]Day 2.[/b][/size] Today there won't be any jokes because the serious development process has started. Let's begin on a positive note: we still have some development time left and our time milestone hasn’t passed yet. While being serious, we should look at what our current decisions are: We will use Cocos2d as the graphical engine ([url="http://www.cocos2d-iphone.org/"]http://www.cocos2d-iphone.org/[/url]) The minimum target iOS version will be 4.2 or 5.0 (there are some good reasons). This decision has been made after analysis of the iOS statistics for August 2011 ([url="http://www.marco.org/2011/08/13/instapaper-ios-device-and-version-stats-update)"]http://www.marco.org/2011/08/13/instapap...ts-update[/url]) Initial architecture thoughts: [img]http://img820.imageshack.us/img820/2698/gipfmodellayer.png[/img] Model: low level We thought that when it comes to AI and processing tons of data, any overhead would be multiplied by a large factor (maybe several thousand fold). As a result, we decided to write the low-level (everything that is connected with data representation and the board state) in pure C. The two base classes are: * BoardInfo – This contains data that does not change during the whole game: board parameters: width/height, field structure, etc. * BoardState – This contains data that changes during the game: current set of pieces on board, number of pieces in reserve for both players, etc. Our AI thoughts are based on evaluation of a large quantity of board states (for example, if we use Alpha-Beta pruning), so we absolutely need to use simple 'memcpy' to copy them and the fastest operations to operate with them. That's why the data organization above and pure C were chosen. Model: high level Here comes Objective-C and high-level wrappers of the two classes discussed above. AI uses low-level model only to make decisions about what is the “best move.” In all other cases, high level model objects are used (even to make the move, which was just evaluated using low-level structures only). HexBoardGame – This class contains inner variables of type BoardInfo that do not change during the game. It also contains an inner variable of type BoardState, which is changed every turn, as well as initialization functions. Everything that is needed to create a game skeleton from board parameters (BoardInfo) and changing board states (BoardState) is also included in this class. GipfBoardGame – This contains the specific implementation of HexBoardGame for iGipf rules. It expands HexBoardGame with such things as moves with pieces shifting (see the rules), row removal (see the rules), etc. Controller [url="http://igipf.com/img/blog/GIPFController.png"]Here[/url], we think everything is clear. Views Everything is even clearer [url="http://igipf.com/img/blog/GIPFView.png"]here[/url]. CCScene -- is a class from Cocos2d. Conclusion: Enough for today. These drafts helped us to understand what we are going to implement and how to start the basic coding routine. The next interesting step will be AI experiments for which we need a proof-of-concept; but we'll talk about that. Share this post Link to post Share on other sites [B][SIZE="3"]Today we’re writing about the AI part. Hopefully a bit of theory will be useful. [/SIZE][/B] [B]Minimax[/B] We’re developing a turn-based game with zero-sum play ( if someone wins n-points, second player loses n-points). The whole game can be represented as a tree with min-levels where one player tries to minimize possible loss and max-levels where second player tries to maximize gain. Let’s look at the picture: [IMG]http://2.bp.blogspot.com/-uqRQl6M_NiQ/T3XORxd1QLI/AAAAAAAAAS8/s2lx1vihNoQ/s1600/minimax.png[/IMG] Imagine that the max player (circle) is considering his next turn. If we expand all of his turns, then all of his opponent’s turns that come after his turn and continue to do so, we can get the tree as above. Look at level 4 - it’s min turn, we compare all possible outcomes and propagate minimum value to the level above (3). 10 is less than +infinity, so 10 goes to third level, 5 and -10 have only one possible turn in their sub-branches, they are propagated as is, 5 is less than 7 so it is propagated, and so forth. This brings us to level 3. It’s max turn. We compare each possible outcome and propagate max for every sub-branch, just as we did going from level 3 to level 4. We repeat this pattern again and again and again unless we reach top level. So, the value of game is -7. Absolute value doesn’t mean anything so far - it’s just a cost of game according to rules that we defined. If both players are rational than min player will lose 7 points and max player will win 7 points; so far so good. [B]Complexity[/B] It’s a pretty simple solution, isn’t it? Nevertheless, let’s look at our case: we have 24 edge points and 42 possible ways on each level (actually even more - some turns require an extra-step). Just by simple math we know that on the 4th level we’ll have 42*42*42*42 ~3M leaves, by the 5th level this count is already 130M. And don’t forget that we’re working with mobile phones, not super-computers. It seems as though we’re in trouble, but this is where alpha-beta pruning comes to help us. [B]Alpha-beta pruning[/B] The idea is pretty simple. In real trees we can skip some branches, because they don’t give us a better solution. [IMG]http://1.bp.blogspot.com/-kZAXkwih_DE/T3XO9lAjSDI/AAAAAAAAATE/rvKERvCWUO8/s640/ab_pruning.png[/IMG] Look at the right branch on level 1 (consider top level as 0) - it’s min turn and here we have options: 5 and 8 -- in reality we never need to expand the 8 sub-branch because we already know that it doesn’t provide any benefit to the min player when compared to branch 5. Hence, we can prune 8-branch. That’s the idea behind the very simple and very efficient algorithm called: [URL="http://en.wikipedia.org/wiki/Alpha-beta_pruning"]alpha-beta pruning[/URL]. [B]Simple prototype[/B] Because the move engine isn’t completed yet, we built a simple simulation by generating a tree with random numbers and trying to calculate cost of a game. We can then check how many nodes have been pruned. The results are amazing: Leaves per level: 42 Levels: 3 Number of leaves: 74088 Pruning per level: {level 1: [B]40[/B], level 2: [B]245[/B]} Evaluated: 6823 ([B]9%[/B]) Leaves per level: 42 Levels: 4 Number of leaves: 3111696 Prunings per level: {level 1: [B]41[/B], level 2: [B]509[/B], level 3: [B]10194[/B]} Evaluated: 150397 ([B]4%[/B]) [B]Note[/B]: 4% and 9% means that pruning saved us about ~ 96% and 91% of calculation times. Of course the real evaluation function doesn’t return pure random values, but we suppose that that result will be pretty similar. Our plans are to complete the move engine over the weekend. [B]Current status[/B] Lines of code and files:$ find . "(" -name "*.m" -or -name "*.c" -or -name "*.h" ")" -print | xargs wc -l
157 ./HexBoardStatic/BoardInfo.c
94 ./HexBoardStatic/BoardInfo.h
58 ./HexBoardStatic/BoardState.c
66 ./HexBoardStatic/BoardState.h
25 ./HexBoardStatic/Common.h
44 ./HexBoardStatic/Common.m
17 ./HexBoardStatic/CommonHexConst.c
85 ./HexBoardStatic/CommonHexConst.h
97 ./HexBoardStatic/CommonOperations.c
76 ./HexBoardStatic/CommonOperations.h
87 ./HexBoardStatic/GipfGame.h
346 ./HexBoardStatic/GipfGame.m
210 ./HexBoardStatic/GipfOperations.c
38 ./HexBoardStatic/GipfOperations.h
10 ./HexBoardStatic/GipfSpecificConstants.c
35 ./HexBoardStatic/GipfSpecificConstants.h
113 ./HexBoardStatic/HexBoardGame.h
216 ./HexBoardStatic/HexBoardGame.m
1774 total
[SIZE="3"][B]Confession [/B][/SIZE]
While weekend work (this post is a bit late, sorry for that) is in progress right now, we have to confess something. Even though we did not have any iOS code written before the project began, we did have small prototypes implemented in python. This prototype gave us confidence that the project could be implemented in 2 weeks.
Even though we lost a couple of days of work on prototypes in scripting language, it was really useful to see the work in action. Indeed, it raised important performance issues and led to our decision to implement part of the game in pure C. But everything is good in its season.
[B]The prototype[/B]
The Python prototype was first a console script of 300 lines; basically just game logic (with some restrictions such as remove selection if many rows are subject to simultaneous removal). Later we added some GUI elements (otherwise it was very frustrating and uncomfortable to play from command line) for which we used the kivy framework for those who are curious. It was our plan to try to run it under the Android OS and check performance on mobile hardware.
Just look at this picture:
[IMG]http://4.bp.blogspot.com/-NPsVJnj70LQ/T3gPccQspgI/AAAAAAAAATQ/Ir6sPmOIkUM/s400/Screenshot-Pong.png[/IMG]
This is already a good candidate for the AppStore, isn’t it? Just kidding!
[B]Performance Issue[/B]
The good news – The game worked and we can play, though it was not so convenient.
The bad news -- By looking 3 levels deep, the game needed about 1 minute for analysis. Through simple profiling we realized that most of the time was consumed by copying (cloning) boards for the next move and performing all operations with hexagon rows. We should mention that analysis to a depth of 2 levels is enough for newbie, but for an expert analysis needs to go at least 4 levels deep.
[B]Optimizations[/B]
Okay, as we see, performance is a serious issue if we deal with AI calculations on the object-level without special optimizations. Thus, the current goal is to create a layer of the highest possible performance to use 100% of the calculation capabilities of the iPhone/iPad.
Main points:
- Pure C, only structs
- Structures have strict grouping
--- Those that are frequently copied (BoardState)
--- Those that are rarely copied (BoardInfo)
- No heap allocations in repeated operations (only stack)
- Pre-allocated buffers in heap for all needed storage (arrays, temp vars, etc.)
- Copy any data using 'memcpy' only
- Optimized board operations
We believe that we could seriously beat our high-level Python implementation, because most performance in the Python prototype is lost on allocations and constructing objects (instead of memcpy to preallocated space). We expect something like a 10x improvement; but we'll see.
Share on other sites
[B][SIZE="3"]More AI. Unit-tests. [/SIZE][/B]
Last time we talked about AI and alpha-beta pruning and we can now happily report that the AI work is done! We haven’t written anything about evaluation function yet, so we’ll cover that today.
[B]Evaluation function[/B]
One of the requirements of AI for turn-based game is that every position must be evaluated; otherwise we don’t have the necessary information needed to choose a move. To build an evaluation function, we need to understand what parameters we are will be working with. Basically they are:
- the number of pieces on board that are ours;
- the number of the pieces on the board that belong to our opponent;
- the number of pieces in our reserve; and,
- the number of pieces in the opponent’s reserve
Now, in a more advanced version we can build a heuristic, such as evaluating the number of rows that can be removed in 1(2, ...) move, key position points occupied and so forth. Right now, however, we don’t use any of them and with good depth of search they are redundant. Even with 4 parameters, writing an evaluation function isn’t such an easy task. After all, what should be considered the stronger position: 10 pieces on board + 2 in reserve or 2 pieces on board + 10 in reserve? In which case does adding 1 piece give more power? Unfortunately, we aren’t gurus of gipf theory so we need other way to figure this all out.
[B]Practice: AI vs AI[/B]
The method we settled on consisted of us pairing one AI with another AI using a different evaluation function and then testing all combinations we could invent. Practice is the best criteria of truth! Actually, trial and error can be a very good way to check that an AI mechanism is correct: pair AI that considers n-turns deeps with AI that considers m-turns deep. Here are some cool stats from our work (We counted each player’s turn as separate, so sequence player 1 – player 2 – player 1 is 3 turns.):
5-levels AI beats 3-levels AI in ~60 turns
5-levels AI beats 4-levels AI in ~130 turns
5-levels AI vs 5-levels AI played over 200 turns!
[B]Pure C: Profit![/B]
This probably isn’t going to be one of the more disputed points in this process, but we stilled ask ourselves if pure C as it worth it? Certainly, yes!
Our python prototype started to lag at depth level 4. We had to wait roughly one minute or so for an AI move at that level. Basically, allocations\copies took too much time and processing power. In our C version, with no heap alloc, struct memcpy-only, and optimized calc code, a depth level 4 analysis took roughly a second (~0.6 s - 1.2 s).
In other words, we got a 60-fold increase performance improvement by using pure C, which made the game more friendly for users (User will wait a couple of seconds, not minutes). So the answer is a resounding yes; going with pure C was worth the effort.
[B]Unit tests[/B]
Another topic we want to cover here is unit-tests. Playing with an evaluation function and watching how 2 AI players interacted, we were able to discover some bugs. Yep, we aren’t perfect. Thankfully, we planned for our imperfections when we decided to add regression testing and spent a couple of hours adding the ability to set up any game scenario and execute AI based on that.
In just 2 days we already had about 20 tests. About half of them caught bugs and other half were related to restrictions we constructed to save us from another dozen of bugs. Of course, with AI it is almost impossible to expect accurate, perfect moves in the middle of the game, but in the end of the game some moves are 100% precise. As an example, consider that player has only 1 piece left, his optimal strategy is to remove his pieces lest he lose. This was not always how the AI played, which led to a few bugs in the end of the game.
To share one bug with you to give an idea of what we are talking about, consider this situation. Sometimes AI managed to find a very good turn on depth 4 and collected many pieces. The problem, however, was that he was out of pieces on depth 2 meaning he had already actually lost the game. Clearly, fixing this error in logic led to better game play. The bottom line is this. If you ask us “is unit-testing worth it?” the answer is “yes, absolutely!”
Before wrapping up, let’s look at our current status.
[B]Done:[/B]
- AI module
- Core mvc architecture basis according to the docs
- Model library (GipfGame) -> GameController
- Added scene controllers for all scenes
- Player model layer basic (PlayerBase, UserPlayer, CPUPlayer)
[B]LOC and files:[/B]
\$ find . "(" -name "*.m" -or -name "*.c" -or -name "*.h" ")" -print | xargs wc -l
20 ./AppDelegate.h
172 ./AppDelegate.m
16 ./CCSprite+Utility.h
23 ./CCSprite+Utility.m
10 ./Constants.c
35 ./Constants.h
21 ./ContentControllerDelegate.h
23 ./ControllerBase.h
32 ./ControllerBase.m
27 ./CPUPlayer.h
46 ./CPUPlayer.m
45 ./GameConfig.h
54 ./GameController.h
88 ./GameController.m
18 ./GameParameters.h
21 ./GameParameters.m
22 ./GameScene.h
29 ./GameScene.m
26 ./GameSceneController.h
44 ./GameSceneController.m
27 ./GameSceneDelegate.h
157 ./HexBoardStatic/BoardInfo.c
94 ./HexBoardStatic/BoardInfo.h
58 ./HexBoardStatic/BoardState.c
66 ./HexBoardStatic/BoardState.h
25 ./HexBoardStatic/Common.h
44 ./HexBoardStatic/Common.m
17 ./HexBoardStatic/CommonHexConst.c
85 ./HexBoardStatic/CommonHexConst.h
97 ./HexBoardStatic/CommonOperations.c
76 ./HexBoardStatic/CommonOperations.h
87 ./HexBoardStatic/GipfGame.h
346 ./HexBoardStatic/GipfGame.m
210 ./HexBoardStatic/GipfOperations.c
38 ./HexBoardStatic/GipfOperations.h
10 ./HexBoardStatic/GipfSpecificConstants.c
35 ./HexBoardStatic/GipfSpecificConstants.h
113 ./HexBoardStatic/HexBoardGame.h
216 ./HexBoardStatic/HexBoardGame.m
17 ./main.m
45 ./MainAppViewController.h
221 ./MainAppViewController.m
61 ./PlayerBase.h
62 ./PlayerBase.m
28 ./PlayerBaseSub.h
40 ./PlayerDelegate.h
16 ./SceneBase.h
18 ./SceneBase.m
16 ./SceneManager.h
37 ./SceneManager.m
60 ./SinglePlayerLobbyController.h
67 ./SinglePlayerLobbyController.m
16 ./SinglePlayerLobbyScene.h
13 ./SinglePlayerLobbyScene.m
37 ./SinglePlayerLobbySceneDelegate.h
26 ./UserPlayer.h
25 ./UserPlayer.m
3582 total
Share on other sites
Our next step is multiplayer mode. Imagine our delight when we read that, “in iOS 5, Game Center has a new API that makes it easy to create turn-based games”. Upon learning this, and then looking briefly at the API, we decided to change the minimum supported iOS version to 5.0 and use this functionality. This API is supported even on 3GS model iPhones and it really shouldn’t be a big deal that our game won’t work on models that are already almost 4 years old.
So, the turn-based games API sounded like exactly what we need! Unfortunately, problems began to pop up from the start. Initially we wanted to limit the time per turn to ensure the game was dynamic. As you might already guess, there are no such functionality in API. That was frustrating, but after a long discussion we agreed that because the game requires thinking and planning, maybe a time limit wasn’t most important feature. Eventually, the joy gained from clear victory won through mental prowess is much more fun than a victory gained by default through timeout. In addition to this benefit, we also realized that dropping timeout limits would allow a player to engage in many matches simultaneously. We decided that using the Game Center API for turn-based games was still worth it.
So resolved on issue, but the net reared its head in less a moment. When you create a new match, you get to play the first turn right away (even if the system hasn’t found an auto match partner yet!) When the system has found someone, he can see your turn. There is currently no other way to initiate a game and though it looks pretty strange, that’s how it’s implemented from Apple’s side. A player creates a game by sending his or her first turn, which acts as a flag that the game is started and tells auto-matcher to consider the player as candidate for playing. So, players must be aware that if they make their first turn in auto-match mode, that doesn’t mean they are already playing with someone. As an aside, it is worth noting that match maker takes about a minute to pair 2 players. Also, even if one game is waiting for a player and another player clicks “auto-match”, which you think would, ideally, pair that player with already created game, Game Center sometimes creates new game instead. To say the least, testing interaction with this black box is a very tricky task.
Ok, you probably think we have already mentioned most of the possible problems with Game Center and the answer to that thought is …”No, not really.” The next problem arose with notifications that occur in the background. Imagine, you are playing a game and switch to another application, spend some time there, and then get a notification that your opponent made his turn and so you switch back to our game. It seems relatively simple, but when we tested this scenario, lo and behold, nothing happened. So, the game is active, everything is ok, the game moves to the background and the notification arrives (at least at the system iOS level - and you can see it) but the game knows nothing about it. That’s it. It seems that 3-4 seconds after the game goes to the background, all listeners are disabled. That’s probably ok to prevent battery drain etc, but when you switch back to the application it should (why not?) deliver all notifications, but in fact it doesn't. This is a very frustrating problem and we now need to re-implement all game-initialization (recreate the whole game state) logic every time the game emerges from the background, just to proceed to the next turn.
Despite all of the complaints above, it’s still cool to test game. Even though there have been some minor issues, we’re already almost done and are very excited about delivering the application soon!
Share on other sites
[left]The two weeks are over and so is our story. You can see the final result in screenshots below[/left]
[center]
[color=#000000][font=Arial, Tahoma, Helvetica, FreeSans, sans-serif][url="http://2.bp.blogspot.com/-MJtxJEPuEUw/T4Hm1TyrviI/AAAAAAAAATc/SqtMjydjG2k/s1600/IMG_0007.PNG"][img]http://2.bp.blogspot.com/-MJtxJEPuEUw/T4Hm1TyrviI/AAAAAAAAATc/SqtMjydjG2k/s200/IMG_0007.PNG[/img][/url][url="http://3.bp.blogspot.com/-X8NowBja6ZI/T4Hm5KC9PEI/AAAAAAAAATk/-QETi7GLds8/s1600/IMG_0008.PNG"][img]http://3.bp.blogspot.com/-X8NowBja6ZI/T4Hm5KC9PEI/AAAAAAAAATk/-QETi7GLds8/s200/IMG_0008.PNG[/img][/url][url="http://1.bp.blogspot.com/-tMNZntpiBdY/T4HnIbVA0eI/AAAAAAAAAT8/4tVAj8q3ug8/s1600/IMG_0010.PNG"][img]http://1.bp.blogspot.com/-tMNZntpiBdY/T4HnIbVA0eI/AAAAAAAAAT8/4tVAj8q3ug8/s200/IMG_0010.PNG[/img][/url][url="http://4.bp.blogspot.com/-EZkXU5Hj4OU/T4Hm9do6vmI/AAAAAAAAATs/IaMrd1Wka28/s1600/IMG_0009.PNG"][img]http://4.bp.blogspot.com/-EZkXU5Hj4OU/T4Hm9do6vmI/AAAAAAAAATs/IaMrd1Wka28/s200/IMG_0009.PNG[/img][/url][url="http://3.bp.blogspot.com/-NbBA2aCz0qE/T4HnAHegblI/AAAAAAAAAT0/R8OJCb1ENFw/s1600/IMG_0011.PNG"][img]http://3.bp.blogspot.com/-NbBA2aCz0qE/T4HnAHegblI/AAAAAAAAAT0/R8OJCb1ENFw/s200/IMG_0011.PNG[/img][/url][/font][/color][/center]
[left][color=#000000][font=Arial, Tahoma, Helvetica, FreeSans, sans-serif]After looking at result of our work, we can say one thing. There is nothing scary about writing an iOS game. Yep, iOS requires coders to change some programming approaches. Sometimes you can’t understand why the heck it should be done a particular way, but that’s ok.[/font][/color][/left]
[left][color=#000000][font=Arial, Tahoma, Helvetica, FreeSans, sans-serif]The most difficult part of the whole process was our lack of experience in working with certain libraries (APIs). For example, we twice changed our version of cocos2d. The first time we selected a stable version, but it had a couple of issues connected with texture coordinates positioning. So, we switched to a beta version. The second change was our error in that we encountered a memory leak with sprites and thought, after a small investigation, that it could have been due to using the beta version of cocos2d. Thus, we updated the version again. However, the memory leak did not go away and we later found an error in our code.[/font][/color][/left]
[left][color=#000000][font=Arial, Tahoma, Helvetica, FreeSans, sans-serif]Another problem is working with scenes directly in code. When you have one resolution supported it’s probably ok, but with 4 resolutions it’s already a nightmare. Now, CocosBuilder beta release went to open source and we definitely will consider it in our next projects.[/font][/color][/left]
[left][color=#000000][font=Arial, Tahoma, Helvetica, FreeSans, sans-serif]The next consideration for developers is graphics and sounds. Don’t be afraid to outsource everything you can’t do as soon as possible. It’s very frustrating when you have to wait because you hesitated to outsource something and thus can’t speed up a process. So, just do it in the beginning of a project when you’re still pretty comfortable with your own deadlines. By the way, don’t be afraid of tough deadlines. Frankly, it motivates.[/font][/color][/left]
[left][color=#000000][font=Arial, Tahoma, Helvetica, FreeSans, sans-serif]What’s next? We completed everything we had planned for the first version. Of course, it’s not ideal and we have many ideas about what can be improved. Should there be interest in the game, we will integrate those improvements. The final version has 120 files and 10870 lines of code (with empty lines and comments). The application has been submitted for review, so we can rest a week or two. Thank you for all of your comments. If you have any questions, we’ll do our best to answer them. We hope to see you guys in multiplayer mode![/font][/color][/left]
[left][color=#000000][font=Arial, Tahoma, Helvetica, FreeSans, sans-serif]Check the result on the AppStrore: [url="http://itunes.apple.com/us/app/igipf/id510715421?ls=1&mt=8"]http://itunes.apple....15421?ls=1&mt=8[/url][/font][/color][/left]
Create an account
Register a new account
• Partner Spotlight
• Forum Statistics
• Total Topics
627655
• Total Posts
2978459
• 10
• 12
• 22
• 13
• 33
|
2017-10-18 09:31:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2381419539451599, "perplexity": 1972.71123869676}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822851.65/warc/CC-MAIN-20171018085500-20171018105500-00154.warc.gz"}
|
http://math.stackexchange.com/questions/26527/turning-a-closed-form-generating-function-back-to-ordinary-power-series
|
# Turning a closed-form generating function back to ordinary power series
If I know the formal power series, I know how to find the closed form:
$$\displaystyle F = \sum_{n=0}^{\infty} {X^n} = 1 + X^1 + X^2 + X^3 + ...$$
$$\displaystyle F \cdot X = X \cdot \sum_{n=0}^{\infty} {X^n} = X^1 + X^2 + X^3 + X^4 + ...$$
$$\displaystyle F - F \cdot X = 1$$ $$\displaystyle F = \frac 1 {1 - X}$$
But if I only know the closed form $\frac 1 {1 - X}$, how do I turn it back into the series $1 + X^1 + ...$? In other words, how do I do extract the coefficients if I only know the closed form and I do not know that $\frac 1 {1 - X}$ corresponds to $1 + X^1 + ...$.
My textbook and everywhere I looked at seems to avoid talking about this, and somehow magically transform things back and forth with a set of known formulas. Is there a better way to do this, or is formula matching the best we can do?
Edit: This is the type of questions I need to solve:
Find the coefficient of $X^8$ in the formal power series $(1 - 3X^4)^{-6}$
-
taylor series, start differentiating! – yoyo Mar 12 '11 at 16:17
@yoyo: can you elaborate on that? – Lie Ryan Mar 12 '11 at 16:43
For rational functions, the easiest way to find a power series is probably to use long division. Rewrite $a/b$ as $(a-cb)/b + c$, where $c$ is the quotient of the lowest-degree terms of $a$ and $b$. Repeat ad nauseam. – Tanner Swett Jun 3 '14 at 2:00
The quickest way to transform your generating function to a power series is to have a table of formulae handy. For a given rational function, you would use partial fraction expansion if necessary and switch back to the power series by looking at the appropriate entry in the table for each term. If you look at Herbert Wilf's book http://www.math.upenn.edu/~wilf/gfologyLinked2.pdf , in section 2.5, he has a list of such formulae.
You have
$$\frac{1}{(1-x)^{k+1}} = \sum_n \binom{n+k}{n} x^n$$
So, for your case, we have
$$\frac{1}{(1-3x^4)^6} = \sum_n \binom{n+5}{n} 3^n x^{4n}$$
The coefficient of $x^8$ in this expansion is $3^2 \times \binom{7}{2} = 189$.
It is actually not difficult to derive the identity. You start with
$$\frac{1}{1-x} = 1+x+x^2 +\ldots$$
Take derivative on both sides k times and you will get
$$\frac{k!}{(1-x)^{k+1}} = \sum_n [(n+k)(n+k-1) \ldots n]x^n$$
this simplifies to
$$\frac{1}{(1-x)^{k+1}} = \sum_n \binom{n+k}{n} x^n$$
-
Differentiation $n$ times, setting the variable to 0, multiplying by $n!$ gets you the $n$th coefficient of the ogf of a function $f$. That is, if $F(x)$ is the ogf of $f(n)$ then
$$f(n) = [(D^nf)(0)]\cdot n!$$.
But you want the closed form of $f$ from the ogf. For the most part, that is an ad hoc process (svenkatr's answer gives a particular method for $\frac{1}{(1-x)^k}$).
A general method is to use the above differentiation trick (essentially computing the Taylor series), but finding a pattern from it. For a simple example, if $F(x)= \log \frac{1}{1-x}$, differentiating numerous times, you'll see the pattern 1, 1/2, 1/3, 1/4,..., so you'll notice and be able to prove) that $f(n) = 1/n$.
-
|
2016-02-09 05:50:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 5, "x-ck12": 0, "texerror": 0, "math_score": 0.8958130478858948, "perplexity": 210.0480098949087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701156520.89/warc/CC-MAIN-20160205193916-00227-ip-10-236-182-209.ec2.internal.warc.gz"}
|
https://ml.cddddr.org/mop/msg00045.html
|
• To: MOP.pa@Xerox.COM
• From: Richard P. Gabriel <rpg@lucid.com>
• Date: Fri, 22 Jun 90 22:49:06 PDT
After a quick reading of the new stuff, here is what I came up
with:
Page 3-3
In this document, the term {\bit
metaobject} is used to refer precisely to an object which represent a
CLOS class, slot definition, generic function, method or method
combination.
represent => represents
Page 3-4
Any metaobject must be an instance of a subclass
of exactly one of these classes.
That is, there cannot be direct instances of these or such instances are not
metaobjects?
I take it that the term available as'' is used to indicate that
these things are observable and when observed have the representations
stated? Because this construction is odd, you should explain it.
Page 3-5
Certain kinds of information is associated with both direct and
effective slot definitions.
kinds ... is => kinds ... are
Certain other information is only associated with direct slot definition
metaobjects.
only associated => associated only
Page 3-9
\item{\bull} For a given set of arguments, a method $M\sub{2}$ {\bit
shadows} a method $M\sub{1}$ if and only if $M\sub{1}$ and $M\sub{2}$
are both associated with the same generic function; and either
$M\sub{1}$ and $M\sub{2}$ are both primary methods or $M\sub{1}$ and
$M\sub{2}$ are both {\bf :around} methods or $M\sub{2}$ is an {\bf
:around} method and $M\sub{1}$ is a primary method; and for that set of
arguments $M\sub{2}$ is more specific than $M\sub{1}$; and when
$M\sub{2}$ is invoked, {\bf call-next-method} is called from within its
body.
First, I think the word shadow' will fake people out, defined like
this. Second, I think all :around methods shadow primary methods
regardless of specificity of argument, so you need to word this
differently.
\item{\bull} For a given set of arguments, a method $M\sub{2}$ {\bit
overrides} a method $M\sub{1}$ if and only if $M\sub{1}$ and $M\sub{2}$
are both associated with the same generic function; and either
$M\sub{1}$ and $M\sub{2}$ are both primary methods or $M\sub{1}$ and
$M\sub{2}$ are both {\bf :around} methods or $M\sub{2}$ is an {\bf
:around} method and $M\sub{1}$ is a primary method; and for that set of
arguments $M\sub{2}$ is more specific than $M\sub{1}$; and when
$M\sub{2}$ is invoked, {\bf call-next-method} is not called from within
its body.
:around methods override primary ones regardless of specificity.
But, this description permits any implementation modifications provided
provided that for any portable class $C\sub{\hbox{p}}$ that is a
subclass of one or more specified classes $C\sub{0} \ldots C\sub{i}$,
the following are true:
provided provided => provided
What is a portable class?
\item{\bull} The method applicability of any specified generic function
is the same in terms of behavior as it would had no
Wow, do you mean to say that the same methods are applicable and do the
same things? This is a sentence the gov'mint coulda written.
Page 3-10
Typically, when a method is allowed to be overridden, a small number
of related will need to be overridden as well.
related will => related methods will
Page 3-18
An error is signalled if this value is not a proper list; or if it is
the empty list; or if {\bf validate-superclass} returns false for any
element of the list.
The construction ; or'' is over-punctuated. Also,
VALIDATE-SUPERCLASS takes two arguments. I would try this:
An error is signaled under the following conditions: if this value is
not a proper list, if it is the empty list, or if {\bf
validate-superclass} applied the class and any element of this
list returns false.
An error is signalled if this value is not a proper list or; if any
element of the list is not an instance of the class {\bf
direct-slot-definition} or one of its subclasses.
If the class is being initialized, this argument defaults to false.
If it's a proper list, the default must be nil or (), not false. I
list in this subsection.
Page 3-21
The generic function {\bf map-dependents} can be called to access the
set of dependents of a class or generic function. The generic function
{\bf remove-dependent} can be called to remove an object from the set of
dependents of a class or generic function. The effect of calling {\bf
add-dependent} or {\bf remove-depedent} while a call to {\bf
map-dependents} is in progress is unspecified.
remove-depedent => remove-dependent
Page 3-22
;;;
;;; Updaters are used encapsulate any metaobject which needs updating
;;; when a given class or generic function is modified. RECORD-UPDATER
;;; is called to both create an updater and add it to the dependents of
;;; the class or generic functions. Methods on the generic function
;;; UPDATE-DEPENDENT, specialized to the specific class of updater do
;;; the appropriate update work.
;;;
used encapsulate => used to encapsulate
(defun record-updater (class dependee dependent &rest initargs)
(let ((updater (apply #'make-instance class :dependee dependee
:dependent dependent
initargs)))
updater))
Why isn't this a generic function or doesn't it use check-type to make
sure CLASS is an updater?
Page 3-24
Why isn't add-direct-method just part of some dependent protocol between
classes and methods, instead of this more direct treatment, or is this
typical of how dependent protocols are handled?
Page 3-28
The generic function {\bf add-method} adds a method to the set of
methods associated with a generic function. After adding the method to
this set, {\bf compute-discriminating-function} is called and its result
is installed by calling {\bf set-funcallable-instance-function}. The
{\it generic-function} argument is destructively modified and returned
as the the result.
the the => the
Page 3-29
Chapter 1 section --- Agreement on Parameter Specializers and
Qualifiers''
section -- Agreement => section---Agreement
This same problem occurs elsewhere.
Page 3-30
\Defmeth {allocate-instance}
{({\it class\/} structure-class) {\rest} {\it initargs}}
The instance returned by this method has slots with undefined values.
I would very much prefer to leave out structures as much as possible
from this protocol. Structures are very nice as they are, and I fear
the possible slowdown of structures or incorrectness and complexity of
implementation in order to accomodate the MOP. The complexity of
implementation is not justified for the simple facility.
Page 3-32
This method can be overridden. Because of the consistency requirements
between this generic function and {\bf
compute-applicable-methods-using-classes}, doing so may require also
overidding
\method{compute-applicable-methods-using-classes}{standard-generic-function
t}.
This phrasing is better than similar phrasings you have used for this
situation. Replace the others with something like this.
Page 3-35
Under compute-class-precedence-list
If the specified class or any of its superclasses is a forward
referenced class an error is signalled.
There are two situations - when the user calls this and when it is
called by the system (at finalize-inheritance time?). In the first
case it should not signal an error, and in the second it should.
Page 3-36
Under compute-discriminating-function
\label Values:
The value returned by this generic function is a function.
and later on
The result of {\bf compute-discriminating-function}
cannot be called directly with {\bf apply} or {\bf funcall}.
This means it isn't a function. If you cannot add to X nor subtract
from it, it ain't a number, and if you cannot funcall or apply X, it
ain't a function.
Actually, I think it should not be a function at all, but an instance
of a class that one could call DISCRIMINATOR (with the usual
STANDARD-DISCRIMINATOR). This object is a thing such that when you
SET-FUNCALLABLE-INSTANCE-FUNCTION on a generic function and it, a
funcallable generic function is produced. I suppose this situation
indicates some kind of initialization or reinitialization.
Imagine this implementation: a generic function has a code sequence at
its head which uses a table to determine applicability. The result of
a call to compute-discriminating-function, then, produces some kind of
table object, which is then combined with the code sequence to produce
the funcallableness of the generic function.
Okay, so that shows that this shouldn't be a function, and that there
should be a discrimninator class. But the standard- version of this
part of the protocol could use the rest of the protocol that you
suggest (compute-applicable-methods etc) when subclasses of
discriminators are made and user methods applied, though I feel a
little uncomfortable about it. I prefer making this a whole lot more
abstract, possibly not going this deep in the protocol.
Determination of the the effective method is done by calling {\bf
compute-effective-method}.
the the => the
Page 3-38
This generic function returns two values. The first is an effective
method. The second is a list of effective method options.
You should specify effective method'' a bit, if even simply to state
that it is some abstract, first class thing.
Page 3-40
The class of the effective slot definition metaobject is determined by
calling {\bf effective-slot-definition-class}. The effective slot
definition is then created by calling {\bf make-instance}.
Why isn't this just CLASS-OF?
should mention that the list of superclass-slot-definitions are in
CPL order.
I would leave out the structure methods for this.
Page 3-42
The direct slot definitions are then collected into individual lists,
one list for each slot name associated with any of the direct slot
definitions. The slot names are compared with {\bf eql}. Each such
list is then sorted into class precedence list order. Direct slot
definitions coming from classes earlier in the class precedence list of
{\it class} appear before those coming from classes later in the class
precedence list.
I found this hard to understand because of the phrase associated
with any of the direct slot definitions.'' I presume these are the
direct slot definitions for the slot with that name in any of the
classes this class inherits from. It's better to be a little wordy,
but clear.
-rpg-
`
|
2022-05-25 10:45:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6429693102836609, "perplexity": 3617.2338010838266}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662584398.89/warc/CC-MAIN-20220525085552-20220525115552-00733.warc.gz"}
|
https://crypto.stackexchange.com/questions/52240/why-does-this-rsa-example-break-when-1-setting-p-or-q-as-non-prime-or-2-sett
|
# Why does this RSA example break when: 1) setting p or q as non-prime, or 2) setting p and q as the same number?
This answer talks about how non-prime numbers would make the algorithm easier to break, not that the algorithm doesn't work full-stop: Why does RSA need p and q to be prime numbers?
Question 1: this implementation uses the number 57, which is not prime as it divides by 3.
1. Choose p = 13, q = 57, e = 5 and message 'm' = 4
2. Therefore n = pq = 741 and (p - 1)(q - 1) = 672
3. encrypt message m as m^e mod n = 4^5 mod 741 = 283
4. calculate decryption key d as e^-1 mod (p–1)(q–1) = 5^-1 mod 812 = 269
5. decrypt '283' by 283^d mod n = 283^269 mod 741 = 199 ≠ 4
How does using a non-prime number cause the result to be wrong? I suspect the answer has something to do with the way that an inverse mod calculation looks for prime factors using the Euclidean algorithm, and if n is not the product of two prime numbers then it will give a different answer. But how is (p-1)(n-1) affected by p or q not being prime?
Question 2: p and q are the same prime number:
1. Choose p = 11, q = 11, e = 3 and message 'm' = 2
2. Therefore n = pq = 121 and (p - 1)(q - 1) = 100
3. encrypt message m as m^e mod n = 2^2 mod 121 = 8
4. calculate decryption key d as e^-1 mod (p–1)(q–1) = 5^-1 mod 100 = 67
5. decrypt '8' by 8^d mod n = 8^67 mod 121 = 24 ≠ 4
How does using two (prime) identical numbers cause the result to be wrong? For this one I don't know where to start.
A first step is to use the correct value for the totient function $\phi(n),$ which is $432$ and $110$ in your examples.
Even better use the Carmichael function $\lambda(n)$ as described in RSA key generation, which is $36$ and $110.$
He is correct in saying that your decryption results are wrong because you are using incorrect values for the totient function $\phi(n)$. If you look at the Euler's theorem, you will see that taking a wrong value for $\phi(n)$, will result in a residue ≠ 1 mod $n$ when you do decryption. In this case you will get residue as $m^{k}$ mod $n$, where $k = e.d (= a.\phi(n) + k)$ mod $\phi(n)$. Here $\phi(n)$ is the correct value of the totient function, which you are taking as incorrect in the example above.
It is also not at all secure to use either $p = q$ or one (or both) of $p$ or (and) $q$ to be composite. If you take $p = q$, then your implementation is not backed up by the hardness of the integer factorization problem, no matter how big you take you primes $p$ and $q$ to be. On the other hand, if you take composites instead of primes $p$ or (and) $q$ then you are just making it easier for someone to factorize the modulus value $n = p.q$.
|
2021-09-18 18:06:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7667016386985779, "perplexity": 292.20431690428126}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056548.77/warc/CC-MAIN-20210918154248-20210918184248-00167.warc.gz"}
|
http://physics.stackexchange.com/questions/30251/using-einsteins-relativity-who-is-younger/31588
|
# Using Einstein's Relativity: Who is younger?
Suppose we have a person A and a person B.
Person B travels very close to speed of light and never returns. He's constant in speed. Then, we can say two things:
1. B is younger than A.
2. A is younger than B (since we can consider B's reference as inertial).
Who is correct between the two?
-
possible duplicate of How is the classical twin paradox resolved? – Sklivvz Dec 24 '12 at 15:18
This is different from the classical twin paradox in that you don't have one of the twins turning around and returning to the starting point. – David Z Dec 24 '12 at 16:06
See the answers of this question: How is the classical twin paradox resolved?
The point is that both will never be able to compare their ages without experiencing acceleration. And, acceleration makes reference frame non-inertial for which physics isn't valid.
In case of negligible acceleration in orbit to compare ages, this paper addresses the issue:
The twin paradox in compact spaces
Authors: John D. Barrow, Janna Levin
Phys.Rev. A63 (2001) 044104
Abstract: Twins travelling at constant relative velocity will each see the other's time dilate leading to the apparent paradox that each twin believes the other ages more slowly. In a finite space, the twins can both be on inertial, periodic orbits so that they have the opportunity to compare their ages when their paths cross. As we show, they will agree on their respective ages and avoid the paradox. The resolution relies on the selection of a preferred frame singled out by the topology of the space.
-
Interesting reference, nice way to be able to compare clocks without leaving an inertial frame. – twistor59 Jun 17 '12 at 12:51
Will clocks diverge if a stationary massive object were present in one. Side of an orbit? – Argus Jul 8 '12 at 5:40
OK, let's restate the problem just a bit for the sake of clarity.
Two persons, a & b, observe that they are moving uniformly with respect to each other and that their relative speed is close to $c$.
Both a & b observe that the other ages relatively slowly.
Now, your question: which person is correct, i.e., which person is absolutely aging more slowly?
Answer: There is no absolute time in SR.
However, there is an invariant time (proper time) associated with each person and all observers agree on the elapsed proper time for each person.
-
|
2014-07-24 02:25:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6596943140029907, "perplexity": 790.3532286641649}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997884827.82/warc/CC-MAIN-20140722025804-00110-ip-10-33-131-23.ec2.internal.warc.gz"}
|
https://peeterjoot.wordpress.com/2009/07/
|
• 324,678
# Archive for July, 2009
## Transverse electric and magnetic fields
Posted by peeterjoot on July 31, 2009
# Motivation
In Eli’s Transverse Electric and Magnetic Fields in a Conducting Waveguide blog entry he works through the algebra calculating the transverse components, the perpendicular to the propagation direction components.
This should be possible using Geometric Algebra too, and trying this made for a good exercise.
# Setup
The starting point can be the same, the source free Maxwell’s equations. Writing $\partial_0 = (1/c) \partial/{\partial t}$, we have
\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} &= 0 \\ \boldsymbol{\nabla} \cdot \mathbf{B} &= 0 \\ \boldsymbol{\nabla} \times \mathbf{E} &= - \partial_0 \mathbf{B} \\ \boldsymbol{\nabla} \times \mathbf{B} &= \mu \epsilon \partial_0 \mathbf{E} \end{aligned} \quad\quad\quad(1)
Multiplication of the last two equations by the spatial pseudoscalar $I$, and using $I \mathbf{a} \times \mathbf{b} = \mathbf{a} \wedge \mathbf{b}$, the curl equations can be written in their dual bivector form
\begin{aligned}\boldsymbol{\nabla} \wedge \mathbf{E} &= - \partial_0 I \mathbf{B} \\ \boldsymbol{\nabla} \wedge \mathbf{B} &= \mu \epsilon \partial_0 I \mathbf{E} \end{aligned} \quad\quad\quad(5)
Now adding the dot and curl equations using $\mathbf{a} \mathbf{b} = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \wedge \mathbf{b}$ eliminates the cross products
\begin{aligned}\boldsymbol{\nabla} \mathbf{E} &= - \partial_0 I \mathbf{B} \\ \boldsymbol{\nabla} \mathbf{B} &= \mu \epsilon \partial_0 I \mathbf{E} \end{aligned} \quad\quad\quad(7)
These can be further merged without any loss, into the GA first order equation
\begin{aligned}\left(\boldsymbol{\nabla} + \frac{\sqrt{\mu\epsilon}}{c}\partial_t\right) \left(\mathbf{E} + \frac{I\mathbf{B}}{\sqrt{\mu\epsilon}} \right) = 0 \end{aligned} \quad\quad\quad(9)
We are really after solutions to the total multivector field $F = \mathbf{E} + I \mathbf{B}/\sqrt{\mu\epsilon}$. For this problem where separate electric and magnetic field components are desired, working from (7) is perhaps what we want?
Following Eli and Jackson, write $\boldsymbol{\nabla} = \boldsymbol{\nabla}_t + \hat{\mathbf{z}} \partial_z$, and
\begin{aligned}\mathbf{E}(x,y,z,t) &= \mathbf{E}(x,y) e^{\pm i k z - i \omega t} \\ \mathbf{B}(x,y,z,t) &= \mathbf{B}(x,y) e^{\pm i k z - i \omega t} \end{aligned} \quad\quad\quad(10)
Evaluating the $z$ and $t$ partials we have
\begin{aligned}(\boldsymbol{\nabla}_t \pm i k \hat{\mathbf{z}}) \mathbf{E}(x,y) &= \frac{i\omega}{c} I \mathbf{B}(x,y) \\ (\boldsymbol{\nabla}_t \pm i k \hat{\mathbf{z}}) \mathbf{B}(x,y) &= -\mu \epsilon \frac{i\omega}{c} I \mathbf{E}(x,y) \end{aligned} \quad\quad\quad(12)
For the remainder of these notes, the explicit $(x,y)$ dependence will be assumed for $\mathbf{E}$ and $\mathbf{B}$.
An obvious thing to try with these equations is just substitute one into the other. If that’s done we get the pair of second order harmonic equations
\begin{aligned}{\boldsymbol{\nabla}_t}^2\begin{pmatrix}\mathbf{E} \\ \mathbf{B} \end{pmatrix}= \left( k^2 - \mu \epsilon \frac{\omega^2}{c^2} \right)\begin{pmatrix}\mathbf{E} \\ \mathbf{B} \end{pmatrix} \end{aligned} \quad\quad\quad(14)
One could consider the problem solved here. Separately equating both sides of this equation to zero, we have the $k^2 = \mu\epsilon \omega^2/c^2$ constraint on the wave number and angular velocity, and the second order Laplacian on the left hand side is solved by the real or imaginary parts of any analytic function. Especially when one considers that we are after a multivector field that of intrinsic complex nature.
However, that is not really what we want as a solution. Doing the same on the unified Maxwell equation (9), we have
\begin{aligned}\left(\boldsymbol{\nabla}_t \pm i k \hat{\mathbf{z}} - \sqrt{\mu\epsilon}\frac{i\omega}{c}\right) \left(\mathbf{E} + \frac{I\mathbf{B}}{\sqrt{\mu\epsilon}} \right) = 0 \end{aligned} \quad\quad\quad(15)
Selecting scalar, vector, bivector and trivector grades of this equation produces the following respective relations between the various components
\begin{aligned}0 = \left\langle{{\cdots}}\right\rangle &= \boldsymbol{\nabla}_t \cdot \mathbf{E} \pm i k \hat{\mathbf{z}} \cdot \mathbf{E} \\ 0 = {\left\langle{{\cdots}}\right\rangle}_{1} &= I \boldsymbol{\nabla}_t \wedge \mathbf{B}/\sqrt{\mu\epsilon} \pm i I k \hat{\mathbf{z}} \wedge \mathbf{B}/\sqrt{\mu\epsilon} - i \sqrt{\mu\epsilon}\frac{\omega}{c} \mathbf{E} \\ 0 = {\left\langle{{\cdots}}\right\rangle}_{2} &= \boldsymbol{\nabla}_t \wedge \mathbf{E} \pm i k \hat{\mathbf{z}} \wedge \mathbf{E} - i \frac{\omega}{c} I \mathbf{B} \\ 0 = {\left\langle{{\cdots}}\right\rangle}_{3} &= I \boldsymbol{\nabla}_t \cdot \mathbf{B}/\sqrt{\mu\epsilon} \pm i I k \hat{\mathbf{z}} \cdot \mathbf{B}/\sqrt{\mu\epsilon} \end{aligned} \quad\quad\quad(16)
From the scalar and pseudoscalar grades we have the propagation components in terms of the transverse ones
\begin{aligned}E_z &= \frac{\pm i}{k} \boldsymbol{\nabla}_t \cdot \mathbf{E}_t \\ B_z &= \frac{\pm i}{k} \boldsymbol{\nabla}_t \cdot \mathbf{B}_t \end{aligned} \quad\quad\quad(20)
But this is the opposite of the relations that we are after. On the other hand from the vector and bivector grades we have
\begin{aligned}i \frac{\omega}{c} \mathbf{E} &= -\frac{1}{{\mu\epsilon}}\left(\boldsymbol{\nabla}_t \times \mathbf{B}_z \pm i k \hat{\mathbf{z}} \times \mathbf{B}_t\right) \\ i \frac{\omega}{c} \mathbf{B} &= \boldsymbol{\nabla}_t \times \mathbf{E}_z \pm i k \hat{\mathbf{z}} \times \mathbf{E}_t \end{aligned} \quad\quad\quad(22)
# A clue from the final result.
From (22) and a lot of messy algebra we should be able to get the transverse equations. Is there a slicker way? The end result that Eli obtained suggests a path. That result was
\begin{aligned}\mathbf{E}_t = \frac{i}{\mu\epsilon \frac{\omega^2}{c^2} - k^2} \left( \pm k \boldsymbol{\nabla}_t E_z - \frac{\omega}{c} \hat{\mathbf{z}} \times \boldsymbol{\nabla}_t B_z \right) \end{aligned} \quad\quad\quad(24)
The numerator looks like it can be factored, and after a bit of playing around a suitable factorization can be obtained:
\begin{aligned}{\left\langle{{ \left( \pm k + \frac{\omega}{c} \hat{\mathbf{z}} \right) \boldsymbol{\nabla}_t \hat{\mathbf{z}} \left( \mathbf{E}_z + I \mathbf{B}_z \right) }}\right\rangle}_{1}&={\left\langle{{ \left( \pm k + \frac{\omega}{c} \hat{\mathbf{z}} \right) \boldsymbol{\nabla}_t \left( E_z + I B_z \right) }}\right\rangle}_{1} \\ &=\pm k \boldsymbol{\nabla} E_z + \frac{\omega}{c} {\left\langle{{ I \hat{\mathbf{z}} \boldsymbol{\nabla}_t B_z }}\right\rangle}_{1} \\ &=\pm k \boldsymbol{\nabla} E_z + \frac{\omega}{c} I \hat{\mathbf{z}} \wedge \boldsymbol{\nabla}_t B_z \\ &=\pm k \boldsymbol{\nabla} E_z - \frac{\omega}{c} \hat{\mathbf{z}} \times \boldsymbol{\nabla}_t B_z \\ \end{aligned}
Observe that the propagation components of the field $\mathbf{E}_z + I\mathbf{E}_z$ can be written in terms of the symmetric product
\begin{aligned}\frac{1}{{2}} \left( \hat{\mathbf{z}} (\mathbf{E} + I\mathbf{B}) + (\mathbf{E} + I\mathbf{B}) \hat{\mathbf{z}} \right)&=\frac{1}{{2}} \left( \hat{\mathbf{z}} \mathbf{E} + \mathbf{E} \hat{\mathbf{z}} \right) + \frac{I}{2} \left( \hat{\mathbf{z}} \mathbf{B} + \mathbf{B} \hat{\mathbf{z}} + I \right) \\ &=\hat{\mathbf{z}} \cdot \mathbf{E} + I \hat{\mathbf{z}} \cdot \mathbf{B} \end{aligned}
Now the total field in CGS units was actually $F = \mathbf{E} + I \mathbf{B}/\sqrt{\mu\epsilon}$, not $F = \mathbf{E} + I \mathbf{B}$, so the factorization above isn’t exactly what we want. It does however, provide the required clue. We probably get the result we want by forming the symmetric product (a hybrid dot product selecting both the vector and bivector terms).
# Symmetric product of the field with the direction vector.
Rearranging Maxwell’s equation (15) in terms of the transverse gradient and the total field $F$ we have
\begin{aligned}\boldsymbol{\nabla}_t F = \left( \mp i k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{i\omega}{c}\right) F \end{aligned} \quad\quad\quad(25)
With this our symmetric product is
\begin{aligned}\boldsymbol{\nabla}_t ( F \hat{\mathbf{z}} + \hat{\mathbf{z}} F) &= (\boldsymbol{\nabla}_t F) \hat{\mathbf{z}} - \hat{\mathbf{z}} (\boldsymbol{\nabla}_t F) \\ &=\left( \mp i k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{i\omega}{c}\right) F \hat{\mathbf{z}}- \hat{\mathbf{z}} \left( \mp i k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{i\omega}{c}\right) F \\ &=i \left( \mp k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) (F \hat{\mathbf{z}} - \hat{\mathbf{z}} F) \\ \end{aligned}
The antisymmetric product on the right hand side should contain the desired transverse field components. To verify multiply it out
\begin{aligned}\frac{1}{{2}}(F \hat{\mathbf{z}} - \hat{\mathbf{z}} F) &=\frac{1}{{2}}\left( \left(\mathbf{E} + I \mathbf{B}/\sqrt{\mu\epsilon}\right) \hat{\mathbf{z}} - \hat{\mathbf{z}} \left(\mathbf{E} + I \mathbf{B}/\sqrt{\mu\epsilon}\right) \right) \\ &=\mathbf{E} \wedge \hat{\mathbf{z}} + I \mathbf{B}/\sqrt{\mu\epsilon} \wedge \hat{\mathbf{z}} \\ &=(\mathbf{E}_t + I \mathbf{B}_t/\sqrt{\mu\epsilon}) \hat{\mathbf{z}} \\ \end{aligned}
Now, with multiplication by the conjugate quantity $-i(\pm k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\omega/c)$, we can extract these transverse components.
\begin{aligned}\left( \pm k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) \left( \mp k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) (F \hat{\mathbf{z}} - \hat{\mathbf{z}} F) &=\left( -k^2 + {\mu\epsilon}\frac{\omega^2}{c^2}\right) (F \hat{\mathbf{z}} - \hat{\mathbf{z}} F) \end{aligned}
Rearranging, we have the transverse components of the field
\begin{aligned}(\mathbf{E}_t + I \mathbf{B}_t/\sqrt{\mu\epsilon}) \hat{\mathbf{z}} &=\frac{i}{k^2 - \mu\epsilon\frac{\omega^2}{c^2}} \left( \pm k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) \boldsymbol{\nabla}_t \frac{1}{{2}}( F \hat{\mathbf{z}} + \hat{\mathbf{z}} F) \end{aligned} \quad\quad\quad(26)
With left multiplication by $\hat{\mathbf{z}}$, and writing $F = F_t + F_z$ we have
\begin{aligned}F_t &= \frac{i}{k^2 - \mu\epsilon\frac{\omega^2}{c^2}} \left( \pm k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) \boldsymbol{\nabla}_t F_z \end{aligned} \quad\quad\quad(27)
While this is a complete solution, we can additionally extract the electric and magnetic fields to compare results with Eli’s calculation. We take
vector grades to do so with $\mathbf{E}_t = {\left\langle{{F_t}}\right\rangle}_{1}$, and $\mathbf{B}_t/\sqrt{\mu\epsilon} = {\left\langle{{-I F_t}}\right\rangle}_{1}$. For the transverse electric field
\begin{aligned}{\left\langle{{ \left( \pm k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) \boldsymbol{\nabla}_t (\mathbf{E}_z + I \mathbf{B}_z/\sqrt{/\mu\epsilon}) }}\right\rangle}_{1} &=\pm k \hat{\mathbf{z}} (-\hat{\mathbf{z}}) \boldsymbol{\nabla}_t E_z + \frac{\omega}{c} \underbrace{{\left\langle{{I \boldsymbol{\nabla}_t \hat{\mathbf{z}}}}\right\rangle}_{1}}_{-I^2 \hat{\mathbf{z}} \times \boldsymbol{\nabla}_t } B_z \\ &=\mp k \boldsymbol{\nabla}_t E_z + \frac{\omega}{c} \hat{\mathbf{z}} \times \boldsymbol{\nabla}_t B_z \\ \end{aligned}
and for the transverse magnetic field
\begin{aligned}{\left\langle{{ -I \left( \pm k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) \boldsymbol{\nabla}_t (\mathbf{E}_z + I \mathbf{B}_z/\sqrt{\mu\epsilon}) }}\right\rangle}_{1} &=-I \sqrt{\mu\epsilon}\frac{\omega}{c} \boldsymbol{\nabla}_t \mathbf{E}_z+{\left\langle{{ \left( \pm k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) \boldsymbol{\nabla}_t \mathbf{B}_z/\sqrt{\mu\epsilon} }}\right\rangle}_{1} \\ &=- \sqrt{\mu\epsilon}\frac{\omega}{c} \hat{\mathbf{z}} \times \boldsymbol{\nabla}_t E_z\mp k \boldsymbol{\nabla}_t B_z/\sqrt{\mu\epsilon} \\ \end{aligned}
Thus the split of transverse field into the electric and magnetic components yields
\begin{aligned}\mathbf{E}_t &= \frac{i}{k^2 - \mu\epsilon\frac{\omega^2}{c^2}} \left( \mp k \boldsymbol{\nabla}_t E_z + \frac{\omega}{c} \hat{\mathbf{z}} \times \boldsymbol{\nabla}_t B_z \right) \\ \mathbf{B}_t &= \frac{i}{k^2 - \mu\epsilon\frac{\omega^2}{c^2}} \left( - {\mu\epsilon}\frac{\omega}{c} \hat{\mathbf{z}} \times \boldsymbol{\nabla}_t E_z \mp k \boldsymbol{\nabla}_t B_z \right) \end{aligned} \quad\quad\quad(28)
Compared to Eli’s method using messy traditional vector algebra, this method also has a fair amount of messy tricky algebra, but of a different sort.
## perl -p … one of the handiest one liner commands
Posted by peeterjoot on July 29, 2009
Suppose you have a list of files:
$cat r /vbs/engn/include/sqlbgbc_inlines.h /vbs/engn/include/sqlekrcb.h /vbs/engn/sqb/sqlbenvi.C that you want to make a systematic simple change to, replacing all instances of some pattern with another. For illustration purposes, suppose that replacement is a straight replacement of the simplest sort changing all instances of the variable type: blah_t to something else: MyBlahType Such a search and replace can be done in many ways. For one file it wouldn’t be unreasonable to run: vim filename then: :%s/blah_t/MyBlahType/g :wq for multiple files, this isn’t the most convienent way (although you can do it with vim plus a command line for loop in a pinch using a vim command script if you really wanted to). An easier way is the following one liner: perl -p -i -e 's/blah_t/MyBlahType/g' cat r Let’s break this down. First is the cat r part. This presumes you are running in a unix shell where backquotes (not regular quotes like ”) mean “take the output of the back-quoted command and embed that in the new command”. This means the command above is equivalent to: perl -p -i -e 's/blah_t/MyBlahType/g;' /vbs/engn/include/sqlbgbc_inlines.h /vbs/engn/include/sqlekrcb.h /vbs/engn/sqb/sqlbenvi.C Next is the -i flag for the perl command. This specifies a suffix for a backup file. When no such file is specified (as here) then this means do an in-place modification of an existing file (something that’s particularily convienent if you are working with a version control system and if you have a recently checked out source file don’t have to worry as much about saving backups in case the search and replace goes wrong). If you wanted a backup, with suffix .bak, then replace -i with -i.bak (no spaces between -i and .bak). The -e option says to treat the parameter as the entire perl program. By example, if you had the following small perl command in a file (say ./mySearchAndReplace):$ cat ./mySearchAndReplace
s/blah_t/MyBlahType/g;
then the one liner above would also be equivalent to:
perl -p -i ./mySearchAndReplace /vbs/engn/include/sqlbgbc_inlines.h /vbs/engn/include/sqlekrcb.h /vbs/engn/sqb/sqlbenvi.C
The remaining worker option in the perl comand is the -p. This is really a convienence option and says to “wrap” the entire command (be that in a file or via -e) in a loop that processes standard input and outputs the results. You could do the same thing explicitly like so:
$cat ./mySearchAndReplaceFilter while (<>) # all lines from stdin { s/blah_t/MyBlahType/g; } A command or script, such as sed, that takes all input from stdin and provides an altered stdout, is called a filter. In perl while (<$filehandle>) is the syntax to process all lines in an opened file, and nothing means the current default file (usually stdin). So a final decoding of the one liner is a command like:
perl -i ./mySearchAndReplaceFilter /vbs/engn/include/sqlbgbc_inlines.h /vbs/engn/include/sqlekrcb.h /vbs/engn/sqb/sqlbenvi.C
## Bivector form of quantum angular momentum operator, the Coriolis term
Posted by peeterjoot on July 28, 2009
In the previous factorization of the Laplacian, a projection of the gradient along a constant direction vector $\mathbf{a}$ we found
\begin{aligned}\boldsymbol{\nabla}^2 &=(\hat{\mathbf{a}} \cdot \boldsymbol{\nabla})^2 - (\hat{\mathbf{a}} \wedge \boldsymbol{\nabla})^2 \\ \end{aligned}
The vector $\mathbf{a}$ was arbitrary, and just needed to be constant with respect to the factorization operations. The transition to non-constant vectors was largely guesswork and was in fact wrong. This guess was that we had
\begin{aligned}\boldsymbol{\nabla}^2 &= \frac{\partial^2 }{\partial r^2} - \frac{1}{{\mathbf{x}^2}} (\mathbf{x} \wedge \boldsymbol{\nabla})^2 \end{aligned} \quad\quad\quad(2)
The radial factorization of the gradient relied on the direction vector $\mathbf{a}$ being constant. If we evaluate (2), then there should be a non-zero remainder compared to the Laplacian. Evaluation by coordinate expansion is one way to verify this, and should produce the difference. Let’s do this in two parts, starting with $(x \wedge \nabla)^2$. Summation will be implied by mixed indexes, and for generality a general basis and associated reciprocal frame will be used.
\begin{aligned}(x \wedge \nabla)^2 f &=((x^\mu \gamma_\mu) \wedge (\gamma_\nu \partial^\nu)) \cdot ((x_\alpha \gamma^\alpha) \wedge (\gamma^\beta \partial_\beta)) \\ &=(\gamma_\mu \wedge \gamma_\nu) \cdot (\gamma^\alpha \wedge \gamma^\beta) x^\mu \partial^\nu (x_\alpha \partial_\beta) f \\ &=({\delta_\mu}^\beta {\delta_\nu}^\alpha -{\delta_\mu}^\alpha {\delta_\nu}^\beta) x^\mu \partial^\nu (x_\alpha \partial_\beta) f \\ &=x^\mu \partial^\nu ((x_\nu \partial_\mu) - x_\mu \partial_\nu) f \\ &=x^\mu (\partial^\nu x_\nu) \partial_\mu f - x^\mu (\partial^\nu x_\mu) \partial_\nu f \\ &+x^\mu x_\nu \partial^\nu \partial_\mu f - x^\mu x_\mu \partial^\nu \partial_\nu f \\ &=(n-1) x \cdot \nabla f +x^\mu x_\nu \partial^\nu \partial_\mu f - x^2 \nabla^2 f \\ \end{aligned}
For the dot product we have
\begin{aligned}(x \cdot \nabla)^2 f &=x^\mu \partial_\mu( x^\nu \partial_\nu ) f \\ &=x^\mu (\partial_\mu x^\nu) \partial_\nu f + x^\mu x^\nu \partial_\mu \partial_\nu f \\ &=x^\mu \partial_\mu f + x^\mu x_\nu \partial^\nu \partial_\mu f \\ &=x \cdot \nabla f + x^\mu x_\nu \partial^\nu \partial_\mu f \\ \end{aligned}
So, forming the difference we have
\begin{aligned}(x \cdot \nabla)^2 f - (x \wedge \nabla)^2 f &=-(n - 2) x \cdot \nabla f + x^2 \nabla^2 f \\ \end{aligned}
Or
\begin{aligned}\nabla^2 &= \frac{1}{{x^2}} (x \cdot \nabla)^2 - \frac{1}{{x^2}} (x \wedge \nabla)^2 + (n - 2) \frac{1}{{x}} \cdot \nabla \end{aligned}
Going back to the quantum Hamiltonian we do still have the angular momentum operator as one of the distinct factors of the Laplacian. As operators we have something akin to the projection of the gradient onto the radial direction, as well as terms that project the gradient onto the tangential plane to the sphere at the radial point
\begin{aligned}-\frac{\hbar^2}{2m} \boldsymbol{\nabla}^2 + V&=-\frac{\hbar^2}{2m} \left( \frac{1}{{\mathbf{x}^2}} (\mathbf{x} \cdot \boldsymbol{\nabla})^2 - \frac{1}{{\mathbf{x}^2}} (\mathbf{x} \wedge \boldsymbol{\nabla})^2 + \frac{1}{{\mathbf{x}}} \cdot \boldsymbol{\nabla} \right) + V \end{aligned}
## Bivector form of quantum angular momentum operator
Posted by peeterjoot on July 27, 2009
# Spatial bivector representation of the angular momentum operator.
Reading ([1]) on the angular momentum operator, the form of the operator is suggested by analogy where components of $\mathbf{x} \times \mathbf{p}$ with
the position representation $\mathbf{p} \sim -i \hbar \boldsymbol{\nabla}$ used to expand the coordinate representation of the operator.
The result is the following coordinate representation of the operator
\begin{aligned}L_1 &= -i \hbar( x_2 \partial_3 - x_3 \partial_2 ) \\ L_2 &= -i \hbar( x_3 \partial_1 - x_1 \partial_3 ) \\ L_3 &= -i \hbar( x_1 \partial_2 - x_2 \partial_1 ) \\ \end{aligned}
It is interesting to put these in vector form, and then employ the freedom to use for $i = \sigma_1 \sigma_2 \sigma_3$ the spatial pseudoscalar.
\begin{aligned}\mathbf{L} &= -\sigma_1 (\sigma_1 \sigma_2 \sigma_3) \hbar( x_2 \partial_3 - x_3 \partial_2 ) -\sigma_2 (\sigma_2 \sigma_3 \sigma_1) \hbar( x_3 \partial_1 - x_1 \partial_3 ) -\sigma_3 (\sigma_3 \sigma_1 \sigma_2) \hbar( x_1 \partial_2 - x_2 \partial_1 ) \\ &= -\sigma_2 \sigma_3 \hbar( x_2 \partial_3 - x_3 \partial_2 ) -\sigma_3 \sigma_1 \hbar( x_3 \partial_1 - x_1 \partial_3 ) -\sigma_1 \sigma_2 \hbar( x_1 \partial_2 - x_2 \partial_1 ) \\ &=-\hbar ( \sigma_1 x_1 +\sigma_2 x_2 +\sigma_3 x_3 ) \wedge ( \sigma_1 \partial_1 +\sigma_2 \partial_2 +\sigma_3 \partial_3 ) \\ \end{aligned}
The choice to use the pseudoscalar for this imaginary seems a logical one and the end result is a pure bivector representation of angular momentum operator
\begin{aligned}\mathbf{L} &= - \hbar \mathbf{x} \wedge \boldsymbol{\nabla} \end{aligned} \quad\quad\quad(1)
The choice to represent angular momentum as a bivector $\mathbf{x} \wedge \mathbf{p}$ is also natural in classical mechanics (encoding the orientation of the plane and the magnitude of the momentum in the bivector), although its dual form the axial vector $\mathbf{x} \times \mathbf{p}$ is more common, at least in introductory mechanics. Observe that there is no longer any explicit imaginary in (1), since the bivector itself has an implicit complex structure.
# Factoring the gradient and Laplacian.
The form of (1) suggests a more direct way to extract the angular momentum operator from the Hamiltonian (i.e. from the Laplacian). Bohm uses the spherical polar representation of the Laplacian as the starting point. Instead let’s project the gradient itself in a specific constant direction $\mathbf{a}$, much as we can do to find the polar form angular velocity and acceleration components.
Write
\begin{aligned}\boldsymbol{\nabla} &=\frac{1}{{\mathbf{a}}} \mathbf{a} \boldsymbol{\nabla} \\ &=\frac{1}{{\mathbf{a}}} (\mathbf{a} \cdot \boldsymbol{\nabla} + \mathbf{a} \wedge \boldsymbol{\nabla}) \\ \end{aligned}
Or
\begin{aligned}\boldsymbol{\nabla} &=\boldsymbol{\nabla} \mathbf{a} \frac{1}{{\mathbf{a}}} \\ &=(\boldsymbol{\nabla} \cdot \mathbf{a} + \boldsymbol{\nabla} \wedge \mathbf{a}) \frac{1}{{\mathbf{a}}} \\ &=(\mathbf{a} \cdot \boldsymbol{\nabla} - \mathbf{a} \wedge \boldsymbol{\nabla}) \frac{1}{{\mathbf{a}}} \\ \end{aligned}
The Laplacian is therefore
\begin{aligned}\boldsymbol{\nabla}^2 &=\left\langle{{ \boldsymbol{\nabla}^2 }}\right\rangle \\ &=\left\langle{{ (\mathbf{a} \cdot \boldsymbol{\nabla} - \mathbf{a} \wedge \boldsymbol{\nabla}) \frac{1}{{\mathbf{a}}} \frac{1}{{\mathbf{a}}} (\mathbf{a} \cdot \boldsymbol{\nabla} + \mathbf{a} \wedge \boldsymbol{\nabla}) }}\right\rangle \\ &=\frac{1}{{\mathbf{a}^2}} \left\langle{{ (\mathbf{a} \cdot \boldsymbol{\nabla} - \mathbf{a} \wedge \boldsymbol{\nabla}) (\mathbf{a} \cdot \boldsymbol{\nabla} + \mathbf{a} \wedge \boldsymbol{\nabla}) }}\right\rangle \\ &=\frac{1}{{\mathbf{a}^2}} ((\mathbf{a} \cdot \boldsymbol{\nabla})^2 - (\mathbf{a} \wedge \boldsymbol{\nabla})^2 ) \\ \end{aligned}
So we have for the Laplacian a representation in terms of projection and rejection components
$\boldsymbol{\nabla}^2 = (\hat{\mathbf{a}} \cdot \boldsymbol{\nabla})^2 - \frac{1}{\mathbf{a}^2} (\mathbf{a} \wedge \boldsymbol{\nabla})^2$
The vector $\mathbf{a}$ was arbitrary, and just needed to be constant with respect to the factorization operations. Setting $\mathbf{a} = \mathbf{x}$, the radial position from the origin, we have
\begin{aligned}\boldsymbol{\nabla}^2 &= \frac{\partial^2 }{\partial r^2} - \frac{1}{\mathbf{x}^2} (\mathbf{x} \wedge \boldsymbol{\nabla})^2 \end{aligned}
So in polar form the bivector form of the angular momentum operator is quite evident, just by application of projection of the gradient onto the radial direction and the tangential plane to the sphere at the radial point
\begin{aligned}-\frac{\hbar^2}{2m} \boldsymbol{\nabla}^2 + V&=-\frac{\hbar^2}{2m} \frac{\partial^2 }{\partial r^2} + \frac{\hbar^2}{2m \mathbf{x}^2} (\mathbf{x} \wedge \boldsymbol{\nabla})^2 + V \end{aligned}
# References
[1] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.
## free textbook on tensor calculus.
Posted by peeterjoot on July 24, 2009
Looks like a good one here to cover foundations for a study of GR http://www.math.odu.edu/~jhh/counter2.html. A free version is available to whet one’s appetite for a purchase, and even with only 80% of the content that free one concatonates to 373 pages!
## 4D divergence theorem, continued.
Posted by peeterjoot on July 23, 2009
# Obsolete with potential errors.
This post may be in error. I wrote this before understanding that the gradient used in Stokes Theorem must be projected onto the tangent space of the parameterized surface, as detailed in Alan MacDonald’s Vector and Geometric Calculus.
See the post ‘stokes theorem in geometric algebra‘ [PDF], where this topic has been revisited with this in mind.
# Original Post:
The basic idea of using duality to express the 4D divergence integral as a stokes boundary surface integral has been explored. Lets consider this in more detail picking a specific parametrization, namely rectangular four vector coordinates. For the volume element write
\begin{aligned}d^4 x &= ( \gamma_0 dx^0 ) \wedge ( \gamma_1 dx^1 ) \wedge ( \gamma_2 dx^2 ) \wedge ( \gamma_3 dx^3 ) \\ &= \gamma_0 \gamma_1 \gamma_2 \gamma_3 dx^0 dx^1 dx^2 dx^3 \\ &= i dx^0 dx^1 dx^2 dx^3 \\ \end{aligned}
As seen previously (but not separately), the divergence can be expressed as the dual of the curl
\begin{aligned}\nabla \cdot f&=\left\langle{{ \nabla f }}\right\rangle \\ &=-\left\langle{{ \nabla i (\underbrace{i f}_{\text{grade 3}}) }}\right\rangle \\ &=\left\langle{{ i \nabla (i f) }}\right\rangle \\ &=\left\langle{{ i ( \underbrace{\nabla \cdot (i f)}_{\text{grade 2}} + \underbrace{\nabla \wedge (i f)}_{\text{grade 4}} ) }}\right\rangle \\ &=i (\nabla \wedge (i f)) \\ \end{aligned}
So we have $\nabla \wedge (i f) = -i (\nabla \cdot f)$. Putting things together, and writing $i f = -f i$ we have
\begin{aligned}\int (\nabla \wedge (i f)) \cdot d^4 x&= \int (\nabla \cdot f) dx^0 dx^1 dx^2 dx^3 \\ &=\int dx^0 \partial_0 (f i) \cdot \gamma_{123} dx^1 dx^2 dx^3 \\ &-\int dx^1 \partial_1 (f i) \cdot \gamma_{023} dx^0 dx^2 dx^3 \\ &+\int dx^2 \partial_2 (f i) \cdot \gamma_{013} dx^0 dx^1 dx^3 \\ &-\int dx^3 \partial_3 (f i) \cdot \gamma_{012} dx^0 dx^1 dx^2 \\ \end{aligned}
It is straightforward to reduce each of these dot products. For example
\begin{aligned}\partial_2 (f i) \cdot \gamma_{013}&=\left\langle{{ \partial_2 f \gamma_{0123013} }}\right\rangle \\ &=-\left\langle{{ \partial_2 f \gamma_{2} }}\right\rangle \\ &=- \gamma_2 \partial_2 \cdot f \\ &=\gamma^2 \partial_2 \cdot f \end{aligned}
The rest proceed the same and rather anticlimactically we end up coming full circle
\begin{aligned}\int (\nabla \cdot f) dx^0 dx^1 dx^2 dx^3 &=\int dx^0 \gamma^0 \partial_0 \cdot f dx^1 dx^2 dx^3 \\ &+\int dx^1 \gamma^1 \partial_1 \cdot f dx^0 dx^2 dx^3 \\ &+\int dx^2 \gamma^2 \partial_2 \cdot f dx^0 dx^1 dx^3 \\ &+\int dx^3 \gamma^3 \partial_3 \cdot f dx^0 dx^1 dx^2 \\ \end{aligned}
This is however nothing more than the definition of the divergence itself and no need to resort to Stokes theorem is required. However, if we are integrating over a rectangle and perform each of the four integrals, we have (with c=1) from the dual Stokes equation the perhaps less obvious result
\begin{aligned}\int \partial_\mu f^\mu dt dx dy dz&=\int (f^0(t_1) - f^0(t_0)) dx dy dz \\ &+\int (f^1(x_1) - f^1(x_0)) dt dy dz \\ &+\int (f^2(y_1) - f^2(y_0)) dt dx dz \\ &+\int (f^3(z_1) - f^3(z_0)) dt dx dy \\ \end{aligned}
When stated this way one sees that this could have just as easily have followed directly from the left hand side. What’s the point then of the divergence theorem or Stokes theorem? I think that the value must really be the fact that the Stokes formulation naturally builds the volume element in a fashion independent of any specific parametrization. Here in rectangular coordinates the result seems obvious, but would the equivalent result seem obvious if non-rectangular spacetime coordinates were employed? Probably not.
## the most basic vi trick. exiting the damn thing.
Posted by peeterjoot on July 23, 2009
Although I now rate myself as a guru level vi user, there was a time that I totally hated the vi editor. Back in university if you tried to run the newsreader (trn in those days) if you hadn’t configured for a “usable” editor, vi would be invoked and then you were stuck.
So, … if you don’t want to use vi, and end up using it by mistake, how to exit? The following is promising looking, but doesn’t actually work:
~
~
~
:!@!@^! shit, how the hell do you exit this damn thing Here’s the real way. Step 1: If you are in edit mode (you have on purpose or accidentally hit the ‘i’ character) you’ll need to hit the Escape character on your keyboard. If you don’t know if you are in edit mode or not you can hit the Esc character. You can hit it five times if you like, once you are out of edit mode, you’ll stay there. Step 2: type ‘:’ (the colon character), then wq, or q, or q! or wq! Example output at the bottom of the vi screen will look something like: ~ ~ ~ :wq! This last one means write and quit, and a really mean it, even if I have multiple files being edited. Plain old “:w” is just write, “:q” is quit, and “:q!” is “quit damnit, yes I really mean it” While my vi hating days are gone, the transition from vi hating to loving requires one first baby step: being able to exit the damn editor. Eventually, if like me, you are forced to work on multi platform Unix development where the only editor you can depend on is vi, taking further steps away from vi hating may be possible. Posted in perl and general scripting hackery | Tagged: | Leave a Comment » ## Stokes theorem in Geometric Algebra formalism. Posted by peeterjoot on July 22, 2009 # Obsolete with potential errors. This post may be in error. I wrote this before understanding that the gradient used in Stokes Theorem must be projected onto the tangent space of the parameterized surface, as detailed in Alan MacDonald’s Vector and Geometric Calculus. See the post ‘stokes theorem in geometric algebra‘ [PDF], where this topic has been revisited with this in mind. # Original Post: [Click here for a PDF of this post with nicer formatting] # Motivation Relying on pictorial means and a brute force ugly comparison of left and right hand sides, a verification of Stokes theorem for the vector and bivector cases was performed ([1]). This was more of a confirmation than a derivation, and the technique fails the transition to the trivector case. The trivector case is of particular interest in electromagnetism since that and a duality transformation provides a four-vector divergence theorem. The fact that the pictorial means of defining the boundary surface doesn’t work well in four vector space is not the only unsatisfactory aspect of the previous treatment. The fact that a coordinate expansion of the hypervolume element and hypersurface element was performed in the LHS and RHS comparisons was required is particularly ugly. It is a lot of work and essentially has to be undone on the opposing side of the equation. Comparing to previous attempts to come to terms with Stokes theorem in ([2]) and ([3]) this more recent attempt at least avoids the requirement for a tensor expansion of the vector or bivector. It should be possible to build on this and minimize the amount of coordinate expansion required and go directly from the volume integral to the expression of the boundary surface. # Do it. ## Notation and Setup. The desire is to relate the curl hypervolume integral to a hypersurface integral on the boundary \begin{aligned}\int (\nabla \wedge F) \cdot d^k x = \int F \cdot d^{k-1} x\end{aligned} \hspace{\stretch{1}}(2.1) In order to put meaning to this statement the volume and surface elements need to be properly defined. In order that this be a scalar equation, the object $F$ in the integral is required to be of grade $k-1$, and $k \le n$ where $n$ is the dimension of the vector space that generates the object $F$. ## Reciprocal frames. As evident in equation (2.1) a metric is required to define the dot product. If an affine non-metric formulation of Stokes theorem is possible it will not be attempted here. A reciprocal basis pair will be utilized, defined by \begin{aligned}\gamma^\mu \cdot \gamma_\nu = {\delta^\mu}_\nu\end{aligned} \hspace{\stretch{1}}(2.2) Both of the sets $\{\gamma_\mu\}$ and $\{\gamma^\mu\}$ are taken to span the space, but are not required to be orthogonal. The notation is consistent with the Dirac reciprocal basis, and there will not be anything in this treatment that prohibits the Minkowski metric signature required for such a relativistic space. Vector decomposition in terms of coordinates follows by taking dot products. We write \begin{aligned}x = x^\mu \gamma_\mu = x_\nu \gamma^\nu\end{aligned} \hspace{\stretch{1}}(2.3) ## Gradient. When working with a non-orthonormal basis, use of the reciprocal frame can be utilized to express the gradient. \begin{aligned}\nabla \equiv \gamma^\mu \partial_\mu \equiv \sum_\mu \gamma^\mu \frac{\partial {}}{\partial {x^\mu}}\end{aligned} \hspace{\stretch{1}}(2.4) This contains what may perhaps seem like an odd seeming mix of upper and lower indexes in this definition. This is how the gradient is defined in [4]. Although it is possible to accept this definition and work with it, this form can be justified by require of the gradient consistency with the the definition of directional derivative. A definition of the directional derivative that works for single and multivector functions, in $\mathbb{R}^{3}$ and other more general spaces is \begin{aligned}a \cdot \nabla F \equiv \lim_{\lambda \rightarrow 0} \frac{F(x + a\lambda) - F(x)}{\lambda} = {\left.\frac{\partial {F(x + a\lambda)}}{\partial {\lambda}} \right\vert}_{\lambda=0}\end{aligned} \hspace{\stretch{1}}(2.5) Taylor expanding about $\lambda=0$ in terms of coordinates we have \begin{aligned}{\left.\frac{\partial {F(x + a\lambda)}}{\partial {\lambda}} \right\vert}_{\lambda=0}&= a^\mu \frac{\partial {F}}{\partial {x^\mu}} \\ &= (a^\nu \gamma_\nu) \cdot (\gamma^\mu \partial_\mu) F \\ &= a \cdot \nabla F \quad\quad\quad\square\end{aligned} The lower index representation of the vector coordinates could also have been used, so using the directional derivative to imply a definition of the gradient, we have an additional alternate representation of the gradient \begin{aligned}\nabla \equiv \gamma_\mu \partial^\mu \equiv \sum_\mu \gamma_\mu \frac{\partial {}}{\partial {x_\mu}}\end{aligned} \hspace{\stretch{1}}(2.6) ## Volume element We define the hypervolume in terms of parametrized vector displacements $x = x(a_1, a_2, ... a_k)$. For the vector x we can form a pseudoscalar for the subspace spanned by this parametrization by wedging the displacements in each of the directions defined by variation of the parameters. For $m \in [1,k]$ let \begin{aligned}dx_i = \frac{\partial {x}}{\partial {a_i}} da_i = \gamma_\mu \frac{\partial {x^\mu}}{\partial {a_i}} da_i,\end{aligned} \hspace{\stretch{1}}(2.7) so the hypervolume element for the subspace in question is \begin{aligned}d^k x \equiv dx_1 \wedge dx_2 \cdots dx_k\end{aligned} \hspace{\stretch{1}}(2.8) This can be expanded explicitly in coordinates \begin{aligned}d^k x &= da_1 da_2 \cdots da_k \left(\frac{\partial {x^{\mu_1}}}{\partial {a_1}} \frac{\partial {x^{\mu_2}}}{\partial {a_2}} \cdots\frac{\partial {x^{\mu_k}}}{\partial {a_k}} \right)( \gamma_{\mu_1} \wedge \gamma_{\mu_2} \wedge \cdots \wedge \gamma_{\mu_k} ) \\ \end{aligned} Observe that when $k$ is also the dimension of the space, we can employ a pseudoscalar $I = \gamma_0 \gamma_1 \cdots \gamma_k$ and can specify our volume element in terms of the Jacobian determinant. This is \begin{aligned}d^k x =I da_1 da_2 \cdots da_k {\left\lvert{\frac{\partial {(x^1, x^2, \cdots, x^k)}}{\partial {(a_1, a_2, \cdots, a_k)}}}\right\rvert}\end{aligned} \hspace{\stretch{1}}(2.9) However, we won’t have a requirement to express the Stokes result in terms of such Jacobians. ## Expansion of the curl and volume element product We are now prepared to go on to the meat of the issue. The first order of business is the expansion of the curl and volume element product \begin{aligned}( \nabla \wedge F ) \cdot d^k x&=( \gamma^\mu \wedge \partial_\mu F ) \cdot d^k x \\ &=\left\langle{{ ( \gamma^\mu \wedge \partial_\mu F ) d^k x }}\right\rangle \\ \end{aligned} The wedge product within the scalar grade selection operator can be expanded in symmetric or antisymmetric sums, but this is a grade dependent operation. For odd grade blades $A$ (vector, trivector, …), and vector $a$ we have for the dot and wedge product respectively \begin{aligned}a \wedge A = \frac{1}{{2}} (a A - A a) \\ a \cdot A = \frac{1}{{2}} (a A + A a)\end{aligned} Similarly for even grade blades we have \begin{aligned}a \wedge A = \frac{1}{{2}} (a A + A a) \\ a \cdot A = \frac{1}{{2}} (a A - A a)\end{aligned} First treating the odd grade case for $F$ we have \begin{aligned}( \nabla \wedge F ) \cdot d^k x&=\frac{1}{{2}} \left\langle{{ \gamma^\mu \partial_\mu F d^k x }}\right\rangle - \frac{1}{{2}} \left\langle{{ \partial_\mu F \gamma^\mu d^k x }}\right\rangle \\ \end{aligned} Employing cyclic scalar reordering within the scalar product for the first term \begin{aligned}\left\langle{{a b c}}\right\rangle = \left\langle{{b c a}}\right\rangle\end{aligned} \hspace{\stretch{1}}(2.10) we have \begin{aligned}( \nabla \wedge F ) \cdot d^k x&=\frac{1}{{2}} \left\langle{{ \partial_\mu F (d^k x \gamma^\mu - \gamma^\mu d^k x)}}\right\rangle \\ &=\frac{1}{{2}} \left\langle{{ \partial_\mu F (d^k x \cdot \gamma^\mu - \gamma^\mu d^k x)}}\right\rangle \\ &=\left\langle{{ \partial_\mu F (d^k x \cdot \gamma^\mu)}}\right\rangle \\ \end{aligned} The end result is \begin{aligned}( \nabla \wedge F ) \cdot d^k x &= \partial_\mu F \cdot (d^k x \cdot \gamma^\mu) \end{aligned} \hspace{\stretch{1}}(2.11) For even grade $F$ (and thus odd grade $d^k x$) it is straightforward to show that (2.11) also holds. ## Expanding the volume dot product We want to expand the volume integral dot product \begin{aligned}d^k x \cdot \gamma^\mu\end{aligned} \hspace{\stretch{1}}(2.12) Picking $k = 4$ will serve to illustrate the pattern, and the generalization (or degeneralization to lower grades) will be clear. We have \begin{aligned}d^4 x \cdot \gamma^\mu&=( dx_1 \wedge dx_2 \wedge dx_3 \wedge dx_4 ) \cdot \gamma^\mu \\ &= ( dx_1 \wedge dx_2 \wedge dx_3 ) dx_4 \cdot \gamma^\mu \\ &-( dx_1 \wedge dx_2 \wedge dx_4 ) dx_3 \cdot \gamma^\mu \\ &+( dx_1 \wedge dx_3 \wedge dx_4 ) dx_2 \cdot \gamma^\mu \\ &-( dx_2 \wedge dx_3 \wedge dx_4 ) dx_1 \cdot \gamma^\mu \\ \end{aligned} This avoids the requirement to do the entire Jacobian expansion of (2.9). The dot product of the differential displacement $dx_m$ with $\gamma^\mu$ can now be made explicit without as much mess. \begin{aligned}dx_m \cdot \gamma^\mu &=da_m \frac{\partial {x^\nu}}{\partial {a_m}} \gamma_\nu \cdot \gamma^\mu \\ &=da_m \frac{\partial {x^\mu}}{\partial {a_m}} \\ \end{aligned} We now have products of the form \begin{aligned}\partial_\mu F da_m \frac{\partial {x^\mu}}{\partial {a_m}} &=da_m \frac{\partial {x^\mu}}{\partial {a_m}} \frac{\partial {F}}{\partial {x^\mu}} \\ &=da_m \frac{\partial {F}}{\partial {a_m}} \\ \end{aligned} Now we see that the differential form of (2.11) for this $k=4$ example is reduced to \begin{aligned}( \nabla \wedge F ) \cdot d^4 x &= da_4 \frac{\partial {F}}{\partial {a_4}} \cdot ( dx_1 \wedge dx_2 \wedge dx_3 ) \\ &- da_3 \frac{\partial {F}}{\partial {a_3}} \cdot ( dx_1 \wedge dx_2 \wedge dx_4 ) \\ &+ da_2 \frac{\partial {F}}{\partial {a_2}} \cdot ( dx_1 \wedge dx_3 \wedge dx_4 ) \\ &- da_1 \frac{\partial {F}}{\partial {a_1}} \cdot ( dx_2 \wedge dx_3 \wedge dx_4 ) \\ \end{aligned} While 2.11 was a statement of Stokes theorem in this Geometric Algebra formulation, it was really incomplete without this explicit expansion of $(\partial_\mu F) \cdot (d^k x \cdot \gamma^\mu)$. This expansion for the $k=4$ case serves to illustrate that we would write Stokes theorem as \begin{aligned}\boxed{\int( \nabla \wedge F ) \cdot d^k x =\frac{1}{{(k-1)!}} \epsilon^{ r s \cdots t u } \int da_u \frac{\partial {F}}{\partial {a_{u}}} \cdot (dx_r \wedge dx_s \wedge \cdots \wedge dx_t)}\end{aligned} \hspace{\stretch{1}}(2.13) Here the indexes have the range $\{r, s, \cdots, t, u\} \in \{1, 2, \cdots k\}$. This with the definitions 2.7, and 2.8 is really Stokes theorem in its full glory. Observe that in this Geometric algebra form, the one forms $dx_i = da_i {\partial {x}}/{\partial {a_i}}, i \in [1,k]$ are nothing more abstract that plain old vector differential elements. In the formalism of differential forms, this would be vectors, and $(\nabla \wedge F) \cdot d^k x$ would be a $k$ form. In a context where we are working with vectors, or blades already, the Geometric Algebra statement of the theorem avoids a requirement to translate to the language of forms. With a statement of the general theorem complete, let’s return to our $k=4$ case where we can now integrate over each of the $a_1, a_2, \cdots, a_k$ parameters. That is \begin{aligned}\int ( \nabla \wedge F ) \cdot d^4 x &= \int (F(a_4(1)) - F(a_4(0))) \cdot ( dx_1 \wedge dx_2 \wedge dx_3 ) \\ &- \int (F(a_3(1)) - F(a_3(0))) \cdot ( dx_1 \wedge dx_2 \wedge dx_4 ) \\ &+ \int (F(a_2(1)) - F(a_2(0))) \cdot ( dx_1 \wedge dx_3 \wedge dx_4 ) \\ &- \int (F(a_1(1)) - F(a_1(0))) \cdot ( dx_2 \wedge dx_3 \wedge dx_4 ) \\ \end{aligned} This is precisely Stokes theorem for the trivector case and makes the enumeration of the boundary surfaces explicit. As derived there was no requirement for an orthonormal basis, nor a Euclidean metric, nor a parametrization along the basis directions. The only requirement of the parametrization is that the associated volume element is non-trivial (i.e. none of $dx_q \wedge dx_r = 0$). For completeness, note that our boundary surface and associated Stokes statement for the bivector and vector cases is, by inspection respectively \begin{aligned}\int ( \nabla \wedge F ) \cdot d^3 x &= \int (F(a_3(1)) - F(a_3(0))) \cdot ( dx_1 \wedge dx_2 ) \\ &- \int (F(a_2(1)) - F(a_2(0))) \cdot ( dx_1 \wedge dx_3 ) \\ &+ \int (F(a_1(1)) - F(a_1(0))) \cdot ( dx_2 \wedge dx_3 ) \\ \end{aligned} and \begin{aligned}\int ( \nabla \wedge F ) \cdot d^2 x &= \int (F(a_2(1)) - F(a_2(0))) \cdot dx_1 \\ &- \int (F(a_1(1)) - F(a_1(0))) \cdot dx_2 \\ \end{aligned} These three expansions can be summarized by the original single statement of (2.1), which repeating for reference, is \begin{aligned}\int ( \nabla \wedge F ) \cdot d^k x = \int F \cdot d^{k-1} x \end{aligned} Where it is implied that the blade $F$ is evaluated on the boundaries and dotted with the associated hypersurface boundary element. However, having expanded this we now have an explicit statement of exactly what that surface element is now for any desired parametrization. # Duality relations and special cases. Some special (and more recognizable) cases of (2.1) are possible considering specific grades of $F$, and in some cases employing duality relations. ## curl surface integral One important case is the $\mathbb{R}^{3}$ vector result, which can be expressed in terms of the cross product. Write $\hat{\mathbf{n}} d^2 x = -i dA$. Then we have \begin{aligned}( \boldsymbol{\nabla} \wedge \mathbf{f} ) \cdot d^2 x&=\left\langle{{ i (\boldsymbol{\nabla} \times \mathbf{f}) (- \hat{\mathbf{n}} i dA) }}\right\rangle \\ &=(\boldsymbol{\nabla} \times \mathbf{f}) \cdot \hat{\mathbf{n}} dA\end{aligned} This recovers the familiar cross product form of Stokes law. \begin{aligned}\int (\boldsymbol{\nabla} \times \mathbf{f}) \cdot \hat{\mathbf{n}} dA = \oint \mathbf{f} \cdot d\mathbf{x}\end{aligned} \hspace{\stretch{1}}(3.14) ## 3D divergence theorem Duality applied to the bivector Stokes result provides the divergence theorem in $\mathbb{R}^{3}$. For bivector $B$, let $iB = \mathbf{f}$, $d^3 x = i dV$, and $d^2 x = i \hat{\mathbf{n}} dA$. We then have \begin{aligned}( \boldsymbol{\nabla} \wedge B ) \cdot d^3 x&=\left\langle{{ ( \boldsymbol{\nabla} \wedge B ) \cdot d^3 x }}\right\rangle \\ &=\frac{1}{{2}} \left\langle{{ ( \boldsymbol{\nabla} B + B \boldsymbol{\nabla} ) i dV }}\right\rangle \\ &=\boldsymbol{\nabla} \cdot \mathbf{f} dV \\ \end{aligned} Similarly \begin{aligned}B \cdot d^2 x&=\left\langle{{ -i\mathbf{f} i \hat{\mathbf{n}} dA}}\right\rangle \\ &=(\mathbf{f} \cdot \hat{\mathbf{n}}) dA \\ \end{aligned} This recovers the $\mathbb{R}^{3}$ divergence equation \begin{aligned}\int \boldsymbol{\nabla} \cdot \mathbf{f} dV = \int (\mathbf{f} \cdot \hat{\mathbf{n}}) dA\end{aligned} \hspace{\stretch{1}}(3.15) ## 4D divergence theorem How about the four dimensional spacetime divergence? Write, express a trivector as a dual four-vector $T = if$, and the four volume element $d^4 x = i dQ$. This gives \begin{aligned}(\nabla \wedge T) \cdot d^4 x&=\frac{1}{{2}} \left\langle{{ (\nabla T - T \nabla) i }}\right\rangle dQ \\ &=\frac{1}{{2}} \left\langle{{ (\nabla i f - if \nabla) i }}\right\rangle dQ \\ &=\frac{1}{2} \left\langle{{ (\nabla f + f \nabla) }}\right\rangle dQ \\ &=(\nabla \cdot f) dQ\end{aligned} For the boundary volume integral write $d^3 x = n i dV$, for \begin{aligned}T \cdot d^3 x &= \left\langle{{ (if) ( n i ) }}\right\rangle dV \\ &= \left\langle{{ f n }}\right\rangle dV \\ &= (f \cdot n) dV\end{aligned} So we have \begin{aligned}\int \partial_\mu f^\mu dQ = \int f^\nu n_\nu dV\end{aligned} the orientation of the fourspace volume element and the boundary normal is defined in terms of the parametrization, the duality relations and our explicit expansion of the 4D stokes boundary integral above. ## 4D divergence theorem, continued. The basic idea of using duality to express the 4D divergence integral as a stokes boundary surface integral has been explored. Lets consider this in more detail picking a specific parametrization, namely rectangular four vector coordinates. For the volume element write \begin{aligned}d^4 x &= ( \gamma_0 dx^0 ) \wedge ( \gamma_1 dx^1 ) \wedge ( \gamma_2 dx^2 ) \wedge ( \gamma_3 dx^3 ) \\ &= \gamma_0 \gamma_1 \gamma_2 \gamma_3 dx^0 dx^1 dx^2 dx^3 \\ &= i dx^0 dx^1 dx^2 dx^3 \\ \end{aligned} As seen previously (but not separately), the divergence can be expressed as the dual of the curl \begin{aligned}\nabla \cdot f&=\left\langle{{ \nabla f }}\right\rangle \\ &=-\left\langle{{ \nabla i (\underbrace{i f}_{\text{grade 3}}) }}\right\rangle \\ &=\left\langle{{ i \nabla (i f) }}\right\rangle \\ &=\left\langle{{ i ( \underbrace{\nabla \cdot (i f)}_{\text{grade 2}} + \underbrace{\nabla \wedge (i f)}_{\text{grade 4}} ) }}\right\rangle \\ &=i (\nabla \wedge (i f)) \\ \end{aligned} So we have $\nabla \wedge (i f) = -i (\nabla \cdot f)$. Putting things together, and writing $i f = -f i$ we have \begin{aligned}\int (\nabla \wedge (i f)) \cdot d^4 x&= \int (\nabla \cdot f) dx^0 dx^1 dx^2 dx^3 \\ &=\int dx^0 \partial_0 (f i) \cdot \gamma_{123} dx^1 dx^2 dx^3 \\ &-\int dx^1 \partial_1 (f i) \cdot \gamma_{023} dx^0 dx^2 dx^3 \\ &+\int dx^2 \partial_2 (f i) \cdot \gamma_{013} dx^0 dx^1 dx^3 \\ &-\int dx^3 \partial_3 (f i) \cdot \gamma_{012} dx^0 dx^1 dx^2 \\ \end{aligned} It is straightforward to reduce each of these dot products. For example \begin{aligned}\partial_2 (f i) \cdot \gamma_{013}&=\left\langle{{ \partial_2 f \gamma_{0123013} }}\right\rangle \\ &=-\left\langle{{ \partial_2 f \gamma_{2} }}\right\rangle \\ &=- \gamma_2 \partial_2 \cdot f \\ &=\gamma^2 \partial_2 \cdot f \end{aligned} The rest proceed the same and rather anticlimactically we end up coming full circle \begin{aligned}\int (\nabla \cdot f) dx^0 dx^1 dx^2 dx^3 &=\int dx^0 \gamma^0 \partial_0 \cdot f dx^1 dx^2 dx^3 \\ &+\int dx^1 \gamma^1 \partial_1 \cdot f dx^0 dx^2 dx^3 \\ &+\int dx^2 \gamma^2 \partial_2 \cdot f dx^0 dx^1 dx^3 \\ &+\int dx^3 \gamma^3 \partial_3 \cdot f dx^0 dx^1 dx^2 \\ \end{aligned} This is however nothing more than the definition of the divergence itself and no need to resort to Stokes theorem is required. However, if we are integrating over a rectangle and perform each of the four integrals, we have (with $c=1$) from the dual Stokes equation the perhaps less obvious result \begin{aligned}\int \partial_\mu f^\mu dt dx dy dz&=\int (f^0(t_1) - f^0(t_0)) dx dy dz \\ &+\int (f^1(x_1) - f^1(x_0)) dt dy dz \\ &+\int (f^2(y_1) - f^2(y_0)) dt dx dz \\ &+\int (f^3(z_1) - f^3(z_0)) dt dx dy \\ \end{aligned} When stated this way one sees that this could have just as easily have followed directly from the left hand side. What’s the point then of the divergence theorem or Stokes theorem? I think that the value must really be the fact that the Stokes formulation naturally builds the volume element in a fashion independent of any specific parametrization. Here in rectangular coordinates the result seems obvious, but would the equivalent result seem obvious if non-rectangular spacetime coordinates were employed? Probably not. # References [1] Peeter Joot. Stokes theorem applied to vector and bivector fields [online]. http://sites.google.com/site/peeterjoot/math2009/stokesGradeTwo.pdf. [2] Peeter Joot. Stokes law in wedge product form [online]. http://sites.google.com/site/peeterjoot/geometric-algebra/vector_integral_relations.pdf. [3] Peeter Joot. Stokes Law revisited with algebraic enumeration of boundary [online]. http://sites.google.com/site/peeterjoot/geometric-algebra/stokes_revisited.pdf. [4] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003. ## vim: swap two patterns Posted by peeterjoot on July 22, 2009 Suppose you want to invert something like the following assignment pageName->name1 = fakeBPD->pageKey.pageID.pkPageNum; pageName->name2 = fakeBPD->pageKey.pageID.pkPoolID; pageName->name3 = fakeBPD->pageKey.pageID.pkObjectID; pageName->name4 = fakeBPD->pageKey.pageID.pkObjectType; to produce: fakeBPD->pageKey.pageID.pkPageNum = pageName->name1; fakeBPD->pageKey.pageID.pkPoolID = pageName->name2; fakeBPD->pageKey.pageID.pkObjectID = pageName->name3; fakeBPD->pageKey.pageID.pkObjectType = pageName->name4; positioning on the first line, this requires nothing more than: ,+3 s/$$.*$$ = $$.*$$;/\2 = \1;/ Lets break this down. The first part :,+3 specifies a range of lines. For example :1,3 s/something/else/ this would replace something with else on lines 1-3. If the first line number is left off, the current line is implied, and as above the end line number can be a computed expression relative to the current line (i.e. I did three additional lines on top of the current line). The line numbers themselves can be patterns. For example: :,/^}/ s/something/else/ would make the substuition something -> else from the current line to the line that starts with } (^ matches to the beginning of the line). Now, how about the original seach and replace. There a regular expression capture was used. The match pattern was essentially: /.* = .*;/ but we want to put the two interesting bits in regular expression variables (back references) that can be referred to in the replacement expression. In vim the back references go like \1, \2, …. Different regular expression engines do this differently, and in perl you’d use1, $2 for the back references, and to generate them just /(.*) = (.*);/ instead of $$.*$$. Posted in perl and general scripting hackery | Tagged: , | Leave a Comment » ## A vim -q Plus grep -n. An editor trick everybody should know. Posted by peeterjoot on July 20, 2009 If you use vi as your editor (and by vi I assume vi == vim), then you want to know about the vim -q option, and grep -n to go with it. This can be used to navigate through code (or other files) looking at matches to patterns of interest. Suppose you want to look at calls of strchr() that match some pattern. One way to do this is to find the subset of the files that are of interest. Say:$ grep strchr.*ode sqle*ca*C
sqlecatd.C: sres = strchr(SQLZ_IDENT, (Uint8)nodename[i]);
sqlecatn.C: if ((sres = strchr(SQLZ_DBNAME, (Uint8)mode[0])) != NULL)
sqlecatn.C: sres = strchr(SQLZ_IDENT_AID, (Uint8)mode[i]);
and edit all those files, searching again for the pattern of interest in each file. If there aren’t many such matches, your job is easy and can be done manually. Suppose however that there’s 20 such matches, and 3 or 4 are of interest for editing, but you won’t know till you’ve seen them with a bit more context. What’s an easy way to go from one to the next? The trick is grep -n plus vim. Example:
$grep -n strchr.*ode sqle*ca*C | tee grep.out sqlecatd.C:710: sres = strchr(SQLZ_IDENT, (Uint8)nodename[i]); sqlecatn.C:505: if ((sres = strchr(SQLZ_DBNAME, (Uint8)mode[0])) != NULL) sqlecatn.C:518: sres = strchr(SQLZ_IDENT_AID, (Uint8)mode[i]);$ vim -q grep.out
vim will bring you right to line 710 of sqlecatd.C in this case. To go to the next pattern, which will be in this case also in the next file, use the vim command
:cn
You can move backwards with :cN, and see where you are and the associated pattern with :cc
vim -q understands a lot of common filename/linenumber formats (and can probably be taught more but I haven’t tried that). Of particular utility is compile error output. Redirect your compilation error output (from gcc/g++ for example) to a file, and when that file is stripped down to just the error lines, you can navigate from error to error with ease (until you muck up the line numbers too much).
A small note. If you are grepping only one file, then the grep -n output won’t have the filename and vim -q will get confused. Example:
grep -n strchr.*ode sqlecatn.C
505: if ((sres = strchr(SQLZ_DBNAME, (Uint8)mode[0])) != NULL)
518: sres = strchr(SQLZ_IDENT_AID, (Uint8)mode[i]);
Here just include a filename that doesn’t exist in the grep command line string
grep -n strchr.*ode sqlecatn.C blah 2>/dev/null
sqlecatn.C:505: if ((sres = strchr(SQLZ_DBNAME, (Uint8)mode[0])) != NULL)
sqlecatn.C:518: sres = strchr(SQLZ_IDENT_AID, (Uint8)mode[i]);
I’m assuming here that you don’t have a file called blah in the current directory. The result is something that vim -q will still be happy about.
|
2018-06-19 20:06:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 169, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000089406967163, "perplexity": 3592.148942362787}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863119.34/warc/CC-MAIN-20180619193031-20180619213031-00102.warc.gz"}
|
http://conceptmap.cfapps.io/wikipage?lang=en&name=Cell_complex
|
# CW complex
(Redirected from Cell complex)
A CW complex is a kind of a topological space that is particularly important in algebraic topology.[1] It was introduced by J. H. C. Whitehead[2] to meet the needs of homotopy theory. This class of spaces is broader and has some better categorical properties than simplicial complexes, but still retains a combinatorial nature that allows for computation (often with a much smaller complex). The C stands for "closure-finite", and the W for "weak" topology.[clarification needed]
A CW complex can be defined inductively.[3]
• A 0-dimensional CW complex is just a set of zero or more discrete points (with the discrete topology).
• A 1-dimensional CW complex is constructed by taking the disjoint union of a 0-dimensional CW complex with one or more copies of the unit interval. For each copy, there is a map that "glues" its boundary (its two endpoints) to elements of the 0-dimensional complex (the points). The topology of the CW complex is the quotient space defined by these gluing maps.
• In general, an n-dimensional CW complex is constructed by taking the disjoint union of a k-dimensional CW complex (for some k<n) with one or more copies of the n-dimensional ball. For each copy, there is a map that "glues" its boundary (the n-1 dimensional sphere) to elements of the (n-1)-dimensional complex. The topology of the CW complex is the quotient space defined by these gluing maps.
• An infinite-dimensional CW complex can be constructed by repeating the above process countably many times.
In an n-dimensional CW complex, for every kn, a k-cell is the interior of a k-dimensional ball added at the k-th step. The k-skeleton of the complex is the union of all its k-cells.
## Examples
As mentioned above, every collection of discrete points is a CW complex (of dimension 0).
### 1-dimensional CW-complexes
Some examples or 1-dimensional CW complexes are:[4]
• An interval. It can be constructed from two points (x and y), and the 1-dimensional ball B (an interval), such that one endpoint of B is glued to x and the other is glued to y. The two points x and y are the 0-cells; the interior of B is the 1-cell. Alternatively, it can be constructed just from a single interval, with no 0-cells.
• A circle. It can be constructed from a single point x and the 1-dimensional ball B, such that both endpoints of B are glued to x. Alternatively, it can be constructed from two points x and y and two 1-dimensional balls A and B, such that the endpoints of A are glued to x and y, and the endpoints of B are glued to x and y too.
• A graph. It is a 1-dimensional CW complex in which the 0-cells are the vertices and the 1-cells are the edges. The endpoints of each edge are identified with the vertices adjacent to it.
• 3-regular graphs can be considered as generic 1-dimensional CW complexes. Specifically, if X is a 1-dimensional CW complex, the attaching map for a 1-cell is a map from a two-point space to X, ${\displaystyle f:\{0,1\}\to X}$ . This map can be perturbed to be disjoint from the 0-skeleton of X if and only if ${\displaystyle f(0)}$ and ${\displaystyle f(1)}$ are not 0-valence vertices of X.
• The standard CW structure on the real numbers has as 0-skeleton the integers ${\displaystyle \mathbb {Z} }$ and as 1-cells the intervals ${\displaystyle \{[n,n+1]:n\in \mathbb {Z} \}}$ . Similarly, the standard CW structure on ${\displaystyle \mathbb {R} ^{n}}$ has cubical cells that are products of the 0 and 1-cells from ${\displaystyle \mathbb {R} }$ . This is the standard cubic lattice cell structure on ${\displaystyle \mathbb {R} ^{n}}$ .
### Multi-dimensional CW-complexes
Some examples of multi-dimensional CW complexes are:[4]
• An n-dimensional sphere. It admits a CW structure with two cells, one 0-cell and one n-cell. Here the n-cell ${\displaystyle D^{n}}$ is attached by the constant mapping from its boundary ${\displaystyle S^{n-1}}$ to the single 0-cell. An alternative cell decomposition has one (n-1)-dimensional sphere (the "equator") and two n-cells that are attached to it (the "upper hemi-sphere" and the "lower hemi-sphere"). Inductively, this gives ${\displaystyle S^{n}}$ a CW decomposition with two cells in every dimension k such that ${\displaystyle 0\leq k\leq n}$ .
• The n-dimensional real projective space. It admits a CW structure with one cell in each dimension.
• The terminology for a generic 2-dimensional CW complex is a shadow.[5]
• A polyhedron is naturally a CW complex.
• Grassmannian manifolds admit a CW structure called Schubert cells.
• Differentiable manifolds, algebraic and projective varieties have the homotopy-type of CW complexes.
• The one-point compactification of a cusped hyperbolic manifold has a canonical CW decomposition with only one 0-cell (the compactification point) called the Epstein-Penner Decomposition. Such cell decompositions are frequently called ideal polyhedral decompositions and are used in popular computer software, such as SnapPea.
### Non CW-complexes
• An infinite-dimensional Hilbert space is not a CW complex: it is a Baire space and therefore cannot be written as a countable union of n-skeletons, each of which being a closed set with empty interior. This argument extends to many other infinite-dimensional spaces.
• The space ${\displaystyle \{re^{2\pi i\theta }:0\leq r\leq 1,\theta \in \mathbb {Q} \}\subset \mathbb {C} }$ has the homotopy-type of a CW complex (it is contractible) but it does not admit a CW decomposition, since it is not locally contractible.
• The Hawaiian earring is an example of a topological space that does not have the homotopy-type of a CW complex.
## Formulation
Roughly speaking, a CW complex is made of basic building blocks called cells. The precise definition prescribes how the cells may be topologically glued together.
An n-dimensional closed cell is the image of an n-dimensional closed ball under an attaching map. For example, a simplex is a closed cell, and more generally, a convex polytope is a closed cell. An n-dimensional open cell is a topological space that is homeomorphic to the n-dimensional open ball. A 0-dimensional open (and closed) cell is a singleton space. Closure-finite means that each closed cell is covered by a finite union of open cells (or meets only finitely many other cells[6]).
A CW complex is a Hausdorff space X together with a partition of X into open cells (of perhaps varying dimension) that satisfies two additional properties:
• For each n-dimensional open cell C in the partition of X, there exists a continuous map f from the n-dimensional closed ball to X such that
• the restriction of f to the interior of the closed ball is a homeomorphism onto the cell C, and
• the image of the boundary of the closed ball is contained in the union of a finite number of elements of the partition, each having cell dimension less than n.
• A subset of X is closed if and only if it meets the closure of each cell in a closed set.
### Regular CW complexes
A CW complex is called regular if for each n-dimensional open cell C in the partition of X, the continuous map f from the n-dimensional closed ball to X is a homeomorphism onto the closure of the cell C.
### Relative CW complexes
Roughly speaking, a relative CW complex differs from a CW complex in that we allow it to have one extra building block which does not necessarily possess a cellular structure. This extra-block can be treated as a (-1)-dimensional cell in the former definition.[7][8][9]
## Inductive construction of CW complexes
If the largest dimension of any of the cells is n, then the CW complex is said to have dimension n. If there is no bound to the cell dimensions then it is said to be infinite-dimensional. The n-skeleton of a CW complex is the union of the cells whose dimension is at most n. If the union of a set of cells is closed, then this union is itself a CW complex, called a subcomplex. Thus the n-skeleton is the largest subcomplex of dimension n or less.
A CW complex is often constructed by defining its skeleta inductively by 'attaching' cells of increasing dimension. By an 'attachment' of an n-cell to a topological space X one means an adjunction space ${\displaystyle B\cup _{f}X}$ where f is a continuous map from the boundary of a closed n-dimensional ball ${\displaystyle B\subset R^{n}}$ to X. To construct a CW complex, begin with a 0-dimensional CW complex, that is, a discrete space ${\displaystyle X_{0}}$ . Attach 1-cells to ${\displaystyle X_{0}}$ to obtain a 1-dimensional CW complex ${\displaystyle X_{1}}$ . Attach 2-cells to ${\displaystyle X_{1}}$ to obtain a 2-dimensional CW complex ${\displaystyle X_{2}}$ . Continuing in this way, we obtain a nested sequence of CW complexes ${\displaystyle X_{0}\subset X_{1}\subset \cdots X_{n}\subset \cdots }$ of increasing dimension such that if ${\displaystyle i\leq j}$ then ${\displaystyle X_{i}}$ is the i-skeleton of ${\displaystyle X_{j}}$ .
Up to isomorphism every n-dimensional CW complex can be obtained from its (n − 1)-skeleton via attaching n-cells, and thus every finite-dimensional CW complex can be built up by the process above. This is true even for infinite-dimensional complexes, with the understanding that the result of the infinite process is the direct limit of the skeleta: a set is closed in X if and only if it meets each skeleton in a closed set.
## Homology and cohomology of CW complexes
Singular homology and cohomology of CW complexes is readily computable via cellular homology. Moreover, in the category of CW complexes and cellular maps, cellular homology can be interpreted as a homology theory. To compute an extraordinary (co)homology theory for a CW complex, the Atiyah-Hirzebruch spectral sequence is the analogue of cellular homology.
Some examples:
• For the sphere, ${\displaystyle S^{n},}$ take the cell decomposition with two cells: a single 0-cell and a single n-cell. The cellular homology chain complex ${\displaystyle C_{*}}$ and homology are given by:
${\displaystyle C_{k}={\begin{cases}\mathbb {Z} &k\in \{0,n\}\\0&k\notin \{0,n\}\end{cases}}\quad H_{k}={\begin{cases}\mathbb {Z} &k\in \{0,n\}\\0&k\notin \{0,n\}\end{cases}}}$
since all the differentials are zero.
Alternatively, if we use the equatorial decomposition with two cells in every dimension
${\displaystyle C_{k}={\begin{cases}\mathbb {Z} ^{2}&0\leqslant k\leqslant n\\0&{\text{otherwise}}\end{cases}}}$
and the differentials are matrices of the form ${\displaystyle \left({\begin{smallmatrix}1&-1\\1&-1\end{smallmatrix}}\right).}$ This gives the same homology computation above, as the chain complex is exact at all terms except ${\displaystyle C_{0}}$ and ${\displaystyle C_{n}.}$
• For ${\displaystyle \mathbb {P} ^{n}(\mathbb {C} )}$ we get similarly
${\displaystyle H^{k}\left(\mathbb {P} ^{n}(\mathbb {C} )\right)={\begin{cases}\mathbb {Z} &0\leqslant k\leqslant 2n,{\text{ even}}\\0&{\text{otherwise}}\end{cases}}}$
Both of the above examples are particularly simple because the homology is determined by the number of cells—i.e.: the cellular attaching maps have no role in these computations. This is a very special phenomenon and is not indicative of the general case.
## Modification of CW structures
There is a technique, developed by Whitehead, for replacing a CW complex with a homotopy-equivalent CW complex which has a simpler CW decomposition.
Consider, for example, an arbitrary CW complex. Its 1-skeleton can be fairly complicated, being an arbitrary graph. Now consider a maximal forest F in this graph. Since it is a collection of trees, and trees are contractible, consider the space ${\displaystyle X/\sim }$ where the equivalence relation is generated by ${\displaystyle x\sim y}$ if they are contained in a common tree in the maximal forest F. The quotient map ${\displaystyle X\to X/\sim }$ is a homotopy equivalence. Moreover, ${\displaystyle X/\sim }$ naturally inherits a CW structure, with cells corresponding to the cells of ${\displaystyle X}$ which are not contained in F. In particular, the 1-skeleton of ${\displaystyle X/\sim }$ is a disjoint union of wedges of circles.
Another way of stating the above is that a connected CW complex can be replaced by a homotopy-equivalent CW complex whose 0-skeleton consists of a single point.
Consider climbing up the connectivity ladder—assume X is a simply-connected CW complex whose 0-skeleton consists of a point. Can we, through suitable modifications, replace X by a homotopy-equivalent CW complex where ${\displaystyle X^{1}}$ consists of a single point? The answer is yes. The first step is to observe that ${\displaystyle X^{1}}$ and the attaching maps to construct ${\displaystyle X^{2}}$ from ${\displaystyle X^{1}}$ form a group presentation. The Tietze theorem for group presentations states that there is a sequence of moves we can perform to reduce this group presentation to the trivial presentation of the trivial group. There are two Tietze moves:
1) Adding/removing a generator. Adding a generator, from the perspective of the CW decomposition consists of adding a 1-cell and a 2-cell whose attaching map consists of the new 1-cell and the remainder of the attaching map is in ${\displaystyle X^{1}}$ . If we let ${\displaystyle {\tilde {X}}}$ be the corresponding CW complex ${\displaystyle {\tilde {X}}=X\cup e^{1}\cup e^{2}}$ then there is a homotopy-equivalence ${\displaystyle {\tilde {X}}\to X}$ given by sliding the new 2-cell into X.
2) Adding/removing a relation. The act of adding a relation is similar, only one is replacing X by ${\displaystyle {\tilde {X}}=X\cup e^{2}\cup e^{3}}$ where the new 3-cell has an attaching map that consists of the new 2-cell and remainder mapping into ${\displaystyle X^{2}}$ . A similar slide gives a homotopy-equivalence ${\displaystyle {\tilde {X}}\to X}$ .
If a CW complex X is n-connected one can find a homotopy-equivalent CW complex ${\displaystyle {\tilde {X}}}$ whose n-skeleton ${\displaystyle X^{n}}$ consists of a single point. The argument for ${\displaystyle n\geq 2}$ is similar to the ${\displaystyle n=1}$ case, only one replaces Tietze moves for the fundamental group presentation by elementary matrix operations for the presentation matrices for ${\displaystyle H_{n}(X;\mathbb {Z} )}$ (using the presentation matrices coming from cellular homology. i.e.: one can similarly realize elementary matrix operations by a sequence of addition/removal of cells or suitable homotopies of the attaching maps.
## 'The' homotopy category
The homotopy category of CW complexes is, in the opinion of some experts, the best if not the only candidate for the homotopy category (for technical reasons the version for pointed spaces is actually used).[10] Auxiliary constructions that yield spaces that are not CW complexes must be used on occasion. One basic result is that the representable functors on the homotopy category have a simple characterisation (the Brown representability theorem).
## Properties
• CW complexes are locally contractible.
• CW complexes satisfy the Whitehead theorem: a map between CW complexes is a homotopy-equivalence if and only if it induces an isomorphism on all homotopy groups.
• The product of two CW complexes can be made into a CW complex. Specifically, if X and Y are CW complexes, then one can form a CW complex X × Y in which each cell is a product of a cell in X and a cell in Y, endowed with the weak topology. The underlying set of X × Y is then the Cartesian product of X and Y, as expected. In addition, the weak topology on this set often agrees with the more familiar product topology on X × Y, for example if either X or Y is finite. However, the weak topology can be finer than the product topology if neither X nor Y is locally compact. In this unfavorable case, the product X × Y in the product topology is not a CW complex. On the other hand, the product of X and Y in the category of compactly generated spaces agrees with the weak topology and therefore defines a CW complex.
• Let X and Y be CW complexes. Then the function spaces Hom(X,Y) (with the compact-open topology) are not CW complexes in general. If X is finite then Hom(X,Y) is homotopy equivalent to a CW complex by a theorem of John Milnor (1959).[11] Note that X and Y are compactly generated Hausdorff spaces, so Hom(X,Y) is often taken with the compactly generated variant of the compact-open topology; the above statements remain true.[12]
• A covering space of a CW complex is also a CW complex.
• CW complexes are paracompact. Finite CW complexes are compact. A compact subspace of a CW complex is always contained in a finite subcomplex.[13][14]
## References
### Notes
1. ^ Hatcher, Allen (2002). Algebraic topology. Cambridge University Press. ISBN 0-521-79540-0.CS1 maint: ref=harv (link) This textbook defines CW complexes in the first chapter and uses them throughout; includes an appendix on the topology of CW complexes. A free electronic version is available on the author's homepage.
2. ^ Whitehead, J. H. C. (1949a). "Combinatorial homotopy. I." Bull. Amer. Math. Soc. 55 (5): 213–245. doi:10.1090/S0002-9904-1949-09175-9. MR 0030759.CS1 maint: ref=harv (link) (open access)
3. ^ channel, Animated Math (2020). "1.2 Introduction to Algebraic Topology. CW Complexes". Youtube.
4. ^ a b channel, Animated Math (2020). "1.3 Introduction to Algebraic Topology. Examples of CW Complexes". Youtube.
5. ^ Turaev, V. G. (1994), "Quantum invariants of knots and 3-manifolds", De Gruyter Studies in Mathematics (Berlin: Walter de Gruyter & Co.) 18
6. ^ Hatcher, Allen, Algebraic topology, p.520, Cambridge University Press (2002). ISBN 0-521-79540-0.
7. ^ Davis, James F.; Kirk, Paul (2001). Lecture Notes in Algebraic Topology. Providence, R.I.: American Mathematical Society.
8. ^ https://ncatlab.org/nlab/show/CW+complex
9. ^ https://www.encyclopediaofmath.org/index.php/CW-complex
10. ^ For example, the opinion "The class of CW complexes (or the class of spaces of the same homotopy type as a CW complex) is the most suitable class of topological spaces in relation to homotopy theory" appears in Baladze, D.O. (2001) [1994], "CW-complex", Encyclopedia of Mathematics, EMS Press
11. ^ Milnor, John (1959). "On spaces having the homotopy type of a CW-complex". Trans. Amer. Math. Soc. 90: 272–280. doi:10.1090/s0002-9947-1959-0100267-4. JSTOR 1993204.
12. ^ "Compactly Generated Spaces" (PDF).
13. ^ Hatcher, Allen, Algebraic topology, Cambridge University Press (2002). ISBN 0-521-79540-0. A free electronic version is available on the author's homepage
14. ^ Hatcher, Allen, Vector bundles and K-theory, preliminary version available on the authors homepage
### General references
• Lundell, A. T.; Weingram, S. (1970). The topology of CW complexes. Van Nostrand University Series in Higher Mathematics. ISBN 0-442-04910-2.CS1 maint: ref=harv (link)
• Brown, R.; Higgins, P.J.; Sivera, R. (2011). Nonabelian Algebraic Topology:filtered spaces, crossed complexes, cubical homotopy groupoids. European Mathematical Society Tracts in Mathematics Vol 15. ISBN 978-3-03719-083-8.CS1 maint: ref=harv (link) More details on the [1] first author's home page]
|
2020-09-23 09:14:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 55, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8737594485282898, "perplexity": 460.21151318039927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400210616.36/warc/CC-MAIN-20200923081833-20200923111833-00656.warc.gz"}
|
https://www.yourdictionary.com/calorie
|
#### Sentence Examples
• The unit of heat assumed in the table is the calorie at 20° C., which is taken as equal to 4.180 joules, as explained in the article Calorimetry.
• Its Relation To The Calorie At Any Given Temperature, Such As 15° Or 20°, Cannot Be Determined With The Same Degree Of Accuracy As The Ratio Of The Specific Heat At 15° To That At 20°, If The Scale Of Temperature Is Given.
• Measure is equivalent to 4.177 joules per calorie at 16.5° C., on the scale of Joule's mercury thermometer.
• The method requires very delicate weighing, as one calorie corresponds to less than two milligrammes of steam condensed; but the successful application of the method to the very difficult problem of measuring the specific heat of a gas at constant volume, shows that these and other difficulties have been very skilfully overcome.
• There Can Be No Doubt, However, That The Final Result Is The Most Accurate Direct Determination Of The Value Of The Mean Calorie Between O° And Ioo° C. In Mechanical Units.
|
2018-12-11 11:29:22
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8707259297370911, "perplexity": 747.905899057297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823618.14/warc/CC-MAIN-20181211104429-20181211125929-00161.warc.gz"}
|
https://dsp.stackexchange.com/questions/23904/reflecting-negative-frequency-to-positive-frequency
|
# Reflecting negative frequency to positive frequency
I am trying to synthesize sinusoids by using window function in the frequency domain.
It involves:
1. In frequency domain, shift the window to center around the peak frequency
2. To generate a DFT frame, sample a few values of the window around the peak as the spectral motif
3. Inverse-Fourier transform the spectral motif, to generate the sinusoid in the desired frequency
This approach works great in general, except for synthesising low frequencies. Because when shifting the window to the low frequency, the left side of the window will sit in the negative frequency domain.
The figure in the middle demonstrates the issue (T(k) sits in the negative domain)
I found a solution here, it suggested to add the complex conjugate value of the left tail of the window (in the negative domain) to the DFT bin that’s on the right tail of the window (in the positive domain). Which I can’t make sense of, and following this solution creates even more distortion. So I wonder if anyone knows how to do it properly. Any suggestions would be very appreciated!
Some excerpt from the aforementioned solution:
The reflection about the k=0 axis is due to the specific embodiment described herein for synthesizing a sinusoid. For each real sinusoid, one peak exists in the positive frequency bins and another peak exists in the negative frequency bins. In the embodiment wherein only the peak in the positive frequency bins is synthesized, a peak centered about a low positive frequency bin spills into the negative frequencies (as shown by the plot for Ht(k−bc) in FIG. 3). Similarly, a peak centered about a low negative frequency bin spills into the positive frequencies. The portion of Ht(k−bc) in the negative frequencies that is reflected, or T*(−k), represents the portion of the peak centered about the negative frequency bin that spills into the positive frequencies.
PS. Some time ago I've raised this question on DSP-related forum. I've got very detailed suggestion from Robert, to ignore the bins lying on negative domain to specify the whole positive frequency, and then complex conjugate to reflect it to the negative frequency, which has improved the problem but it still can’t go down below 80 Hz. So I thought I’ll post again here.
• How are you dealing with setting the 0 Hz bin? – Olli Niemitalo Jun 4 '15 at 19:46
• @OlliNiemitalo based on the material I read, it should be 2 * real part of the associated value (the value sampled at 0 Hz) of the window function – Thinium Jun 4 '15 at 20:24
• sometimes i wonder if frequency-domain synthesis is worth the effort. crossing all of the i's and dotting all of the t's is a pain in the ass. – robert bristow-johnson Jun 4 '15 at 22:19
• Not an answer, but: When dealing with a related issue, I found the correct solution (for my issue) by realizing that the window is actually always reflected at 0Hz, and these two copies always overlap: Normally only the side-lobes do, which can be ignored if those are weak enough, but at 0 Hz they just happen to overlap completely. – Sebastian Reichelt Jun 5 '15 at 10:23
• So I think a good way to figure out the "correct" solution is to first consider a variant of the Fourier transform that doesn't drop negative frequencies, and then figure out how to map the solution to a Fourier transform that does. Especially, you need to consider how the omission of negative frequencies is dealt with at the 0 Hz bin, which usually involves a factor of 2. – Sebastian Reichelt Jun 5 '15 at 10:26
You could use another approach: allow negative frequencies, do a complex IFFT, discard the imaginary parts of the time domain samples, and multiply the result by 2. Let's try it in Octave (MATLAB clone) with a motif $[1 + 2i, 3 + 4i, 5 + 6i, 7 + 8i]$ shifted so that its leftmost bin ($1 + 2i$) lands on a negative frequency. FFT length is 8. (I rewrote the results for readability.)
> g = [3 + 4i, 5 + 6i, 7 + 8i, 0, 0, 0, 0, 1 + 2i]
g =
3 + 4i 5 + 6i 7 + 8i 0 + 0i 0 + 0i 0 + 0i 0 + 0i 1 + 2i
> y = real(ifft(g))*2
y =
4.00000 -0.89645 -2.00000 0.98223 1.00000 -1.60355 0.00000 4.51777
> fft(y)
ans =
6 + 0i 6 + 4i 7 + 8i 0 + 0i 0 + 0i 0 + 0i 7 - 8i 6 - 4i
The same time-domain signal $y$ can be generated by synthesizing the first 5 bins of $\text{fft}(y)$ and feeding them to real IFFT (which I assume you use). The 3 remaining negative-frequency bins the IFFT kind of generates internally. You can follow these steps for the synthesis:
1) For each bin of the shifted motif that lands on a positive-frequency bin, copy the value verbatim. (You are already doing this)
2) For a bin of the shifted motif that lands on the 0 Hz bin, double its real part and write that to the 0 Hz bin, discarding the imaginary part, like $3 + 4i \rightarrow 6 + 0i$ in the example. (You are already doing this)
3) For each bin of the shifted motif that lands on a negative-frequency bin, add its complex conjugate (same real part, sign of imaginary part flipped) to the positive-frequency bin that is at the same distance from the 0 Hz bin. This will be one of the bins already written to in step 1. In the second bin of the example $5+6i$ and $\text{conj}(1 + 2i) = 1 - 2i$ were summed to get $6 + 4i$. Don't expect the imaginary parts to cancel, because even if the motif might be symmetrical, those two motif bins are not located symmetrically on the opposite sides of the motif, unless you are synthesizing a 0 Hz sinusoid. It is perfectly fine if the imaginary part remains non-zero; frequency domain bins can have complex values and still yield real-valued time-domain data.
Oh, and if the shifted motif reaches the Nyquist bin (corresponding to half the sampling frequency) or beyond, do exactly the same things there, mirroring around the Nyquist bin.
• Could you please elaborate a bit about the last two sentences “don’t expect the imaginary parts to cancel…”? For example, if I complex conjugate a negative bin and add it to the positive side (as you said, the bin will be one that’s already written in step 1.), then this bin will have 2 times the real part of the original value, and the imaginary part gets cancelled out, right? It’s amazing that you already know what I am confused about without me mentioning it! Thanks in advance! – Thinium Jun 4 '15 at 22:20
• Those two bins are not located symmetrically on opposite sides of the shifted motif. They are located symmetrically around 0 Hz. For example, if the leftmost bin of the shifted motif lands on the first negative frequency bin, you would add its complex conjugate to the first positive frequency bin. That would be where the 3rd leftmost bin of the motif was written to. It is very unlikely that the leftmost and the 3rd leftmost bins of the motif have identical values. – Olli Niemitalo Jun 5 '15 at 5:28
• in your example it looks like a complex conjugate of the first half of the spectrum to the latter half is not necessary, right? As I am only dealing with frequency up to half of the sample rate. I assume only the first half of the spectrum will be filled (I've edited the original post with some psedo code to elaborate my attempt). Thanks! – Thinium Jun 5 '15 at 10:54
• That's right. In the example I just take the real part in time domain. This is equivalent to doing the complex conjugate mirroring in frequency domain. – Olli Niemitalo Jun 5 '15 at 11:50
so, if you know what the window shape will be in the time-domain, then you know what the "motif" (i haven't thought of using that word) of the window is in the frequency domain. the Fourier transform of the window is your motif centered at DC (bin 0). at some point you need to truncate the sidelobes of the motif to zero. with a Gaussian window, the motif shape will also be Gaussian and will decay to zero rapidly on both sides and when it gets to, say, $10^{-9}$ or so, then you can truncate the Gaussian motif to zer0. if you want your sinusoid frequencies to have fractional-bin precision, you will need to be able to interpolate the motif at different fractional-bin offsets and maybe store a variety of slightly different motif templates for different fractional-bin offsets.
so, first start out by clearing your entire spectrum to 0. for each frequency component, you plop down a motif shifted to that frequency and, where the motif is non-zero, you must add it to the existing spectrum (which is initialized to zero). that means some of the tails of motifs of different sinusoidal frequencies will overlap. make sure you overlap-add and not just write over existing non-zero values in your spectrum.
do this for both the positive and negative frequency components of each real sinusoid.
if the frequency is so low that some of the tail of the motif extends past DC (bin 0) then allow that and add it in. i.e. the motif for the positive frequency component will extend into the negative frequencies and the reflected motif for the negative frequency component will extend into the positive frequencies. just do it and allow the tails to overlap and add whether they cross over to the other sign of frequencies or not.
• I remember you mentioned in the DSP-related forum that this approach works well for chirp-like sound. (ps. I am using the concatenate approach from jean laroche, not overlap-add) I notice that if I modulate the frequency or amplitude rapidly, there is crackles in the signal. Is this an unavoidable artefact? – Thinium Sep 20 '15 at 16:25
Thanks so much olli! It works! To do it with real fft, just follow these 3 steps described by Olli, then complex conjugate the first half of the spectrum to the latter half.
Extending Olli's example:
With the spectral motif [1 + 2i , 3 + 4i, 5 + 6i, 7 + 8i] and fft size of 8, after doing what olli said you'll first get:
[6 + 0i, 6 + 4i, 7 + 8i, 0 + 0i, 0 + 0i, 0 + 0i, 0 + 0i, 0 + 0i]
Then perform a complex conjugate of the first 4 bins to the latter half:
[6 + 0i, 6 + 4i, 7 + 8i, 0 + 0i, 0 + 0i, 0 + 0i, 7 - 8i, 6 - 4i]
Then perform ifft, which should give you real-valued time-domain data.
• One correction: "Latter half" should exclude the Nyquist bin as it was dealt with separately by doubling the real part and zeroing the imaginary part. – Olli Niemitalo Jun 7 '15 at 12:38
|
2019-12-12 06:04:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6353906393051147, "perplexity": 832.6830381678885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540537212.96/warc/CC-MAIN-20191212051311-20191212075311-00422.warc.gz"}
|
https://www.physicsforums.com/threads/simplifying-and-expanding-expressions.1043539/
|
# Simplifying and expanding expressions
• MHB
litchris
I have a test on simplifying and expanding expressions, could someone help me with this. I don't understand the formula and the way you do it
skeeter
Do you have specific examples of expressions requiring expansion and/or simplification?
Post a few you have attempted, showing how you tried them yourself ... help us help you.
litchris
i tried 4x+7x-5x and 4x^2-2xy-3y^2+6xy+3y^2-x^2 totally confused.
skeeter
you can sum like terms ...
4x, 7x and -5x are all like terms $\implies 4x + 7x - 5x = 11x - 5x = 6x$
for the second, like terms have the same variables to the same power. like terms have the same color in the expression below ...
${\color{red}4x^2} {\color{blue}-2xy} {\color{green}-3y^2}{\color{blue}+6xy}{\color{green}+3y^2}{\color{red}-x^2}$
I assume you know how to sum terms with the same and/or different signs
Why don't you try and combine them ...
have a look at the link, too
https://www.mathsisfun.com/algebra/like-terms.html
Last edited by a moderator:
litchris
Thanks skeeter this helps
SDAlgebra
Would you use brackets in your test? In that case...
You multiply each number in the brackets by the other brackets for example: [ (x+y)(x+y) ] would equal [ x2 + xy + xy + y2 ]. Simplifying these expressions would equal x2 + 2xy + y2
Or...
As Skeeter said you combine the expressions from different sides to make a final answer.
4x+7y+2x+9y = 6x + 16y
These are purposely easier just for you to get the gist :)
|
2023-03-22 10:42:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7823577523231506, "perplexity": 1631.915780572209}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943809.22/warc/CC-MAIN-20230322082826-20230322112826-00036.warc.gz"}
|
https://batman.readthedocs.io/en/develop/uq.html
|
# Uncertainty Quantification¶
## What is Uncertainty¶
As it can be infered from the name, Uncertainty Quantification (UQ) aims at undestanding the impact of the uncertainties of a system. Uncertainties can be decomposed in two parts:
• Aleatoric: intrinsic variability of a system,
• Epistemic: lack of knowledge, models errors.
The aleatoric part is the one we seek to measure. For example, looking at an airfoil, if we change the angle of attack, some change are expected on the lift and drag. On the other hand, the epistemic part represent our bias. Using RANS models, the turbulence is entirelly modeled—as opposed to LES where we compute most of it—so we might miss some phenomena.
Then, there are three kind of uncrtainty study:
• Uncertainty Propagation: observe the response of the system to perturbed inputs (PDF, response surface),
• Sensitivity Analysis: measure the respective importance of the input parameters,
• Risk Assessment: get the probability to exceed a threshold.
In any case, from perturbed input we are looking at the output response of a quantity of interest .
See also
The Visualization module is used to output UQ.
## Sobol’ indices¶
There are several methods to estimate the contribution of different parameters on quantities of interest [iooss2015]. Among them, sensitivity methods based on the analysis of the variance allow to obtain the contribution of the parameters on the QoI’s variance [ferretti2016]. Here, classical Sobol’ [Sobol1993] method is used which gives not only a ranking but also quantifies the importance factor using the variance. This method only makes the hypothesis of the independence of the input variables. It uses a functional decomposition of the variance of the function to explore:
$\begin{split}\mathbb{V}(\mathcal{M}_{gp}) &= \sum_{i}^{p} \mathbb{V}_i (\mathcal{M}_{gp}) + \sum_{i<j}^{p}\mathbb{V}_{ij} + ... + \mathbb{V}_{1,2,...,p},\\ \mathbb{V}_i(\mathcal{M}_{gp}) &= \mathbb{\mathbb{V}}[\mathbb{E}(\mathcal{M}_{gp}|x_i)]\\ \mathbb{V}_{ij} &= \mathbb{\mathbb{V}}[\mathbb{E}(\mathcal{M}_{gp}|x_i x_j)] - \mathbb{V}_i - \mathbb{V}_j,\end{split}$
with $$p$$ the number of input parameters constituting $$\mathbf{x}$$. This way Sobol’ indices are expressed as
$S_i = \frac{\mathbb{V}[\mathbb{E}(\mathcal{M}_{gp}|x_i)]}{\mathbb{V}[\mathcal{M}_{gp}]}\qquad S_{ij} = \frac{\mathbb{V}[\mathbb{E}(\mathcal{M}_{gp}|x_i x_j)] - \mathbb{V}_i - \mathbb{V}_j}{\mathbb{V}[\mathcal{M}_{gp}]}.$
$$S_{i}$$ corresponds to the first order term which apprises the contribution of the i-th parameter, while $$S_{ij}$$ corresponds to the second order term which informs about the correlations between the i-th and the j-th parameters. These equations can be generalized to compute higher order terms. However, the computational effort to converge them is most often not at hand [iooss2010] and their analysis, interpretations, are not simple.
Total indices represents the global contribution of the parameters on the QoI and express as:
$S_{T_i} = S_i + \sum_j S_{ij} + \sum_{j,k} S_{ijk} + ... \simeq 1 - S_{i}.$
For a functional output, Sobol’ indices can be computed all along the output and retrieve a map or create composite indices. As described by Marrel [marrel2015], aggregated indices can also be computed as the mean of the indices weighted by the variance at each point or temporal step
$S_i = \frac{\displaystyle\sum_{l = 1}^{p} \mathbb{V} [\mathbf{f}_l] S_i^{l}}{\displaystyle\sum_{l = 1}^{p} \mathbb{V} [\mathbf{f}_l]}.$
The indices are estimated using Martinez’ formulation. In [baudin2016], they showed that this estimator is stable and provides asymptotic confidence intervals—approximated with Fisher’s transformation—for first order and total order indices.
## Uncertainty propagation¶
Instead of looking at the individual contributions of the input parameters, the easyest way to assess uncertainties is to perform simulations by perturbing the input distributions using particular distributions. The quantitie of interest can then be visualized. This is called a response surface. A complementary analysis can be drawn from here as ones can compute the Probability Density Function (PDF) of the output. In order for these statistical information to be relevant, a large number of simulations is required. Hence the need of a surrogate model (see Surrogate).
## References¶
iooss2015
Iooss B. and Saltelli A.: Introduction to Sensitivity Analysis. Handbook of UQ. 2015. DOI: 10.1007/978-3-319-11259-6_31-1
ferretti2016
Ferretti F. and Saltelli A. et al.: Trends in sensitivity analysis practice in the last decade. Science of the Total Environment. 2016. DOI: 10.1016/j.scitotenv.2016.02.133
Sobol1993
Sobol’ I.M. Sensitivity analysis for nonlinear mathematical models. Mathematical Modeling and Computational Experiment. 1993.
iooss2010
Iooss B. et al.: Numerical studies of the metamodel fitting and validation processes. International Journal on Advances in Systems and Measurements. 2010
marrel2015
Marrel A. et al.: Sensitivity Analysis of Spatial and/or Temporal Phenomena. Handbook of Uncertainty Quantification. 2015. DOI: 10.1007/978-3-319-11259-6_39-1
baudin2016
Baudin M. et al.: Numerical stability of Sobol’ indices estimation formula. 8th International Conference on Sensitivity Analysis of Model Output. 2016.
|
2021-06-14 15:16:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7480958104133606, "perplexity": 1025.6899088269583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487612537.23/warc/CC-MAIN-20210614135913-20210614165913-00093.warc.gz"}
|
https://noir-lang.org/modules_packages_crates/packages.html
|
# Packages
A Nargo Package is a collection of one of more crates. A Package must include a Nargo.toml file.
A Package must contain either a library or a binary crate.
## Creating a new package
A new package is created using the new command.
$nargo new my-project$ ls my-project
Nargo.toml
src
\$ ls my-project/src
main.nr
## Binary vs Library
Similar to Cargo, Nargo follows the convention that if there is a src/main.nr then the project is a binary. If it contains a src/lib.nr then it is a library.
However, note that dissimilar to Cargo, we cannot have both a binary and library in the same project.
|
2023-03-20 21:16:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43560564517974854, "perplexity": 3713.346539223055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943562.70/warc/CC-MAIN-20230320211022-20230321001022-00527.warc.gz"}
|
https://math.stackexchange.com/questions/3074662/is-there-a-way-to-evaluate-analytically-the-following-infinite-double-sum
|
# Is there a way to evaluate analytically the following infinite double sum?
Consider the following double sum $$S = \sum_{n=1}^\infty \sum_{m=1}^\infty \frac{1}{a (2n-1)^2 - b (2m-1)^2} \, ,$$ where $$a$$ and $$b$$ are both positive real numbers given by \begin{align} a &= \frac{1}{2} - \frac{\sqrt{2}}{32} \, , \\ b &= \frac{1}{4} - \frac{3\sqrt{2}}{32} \, . \end{align} It turns out that one of the two sums can readily be calculated and expressed in terms of the tangente function. Specifically, $$S = \frac{\pi}{4\sqrt{ab}} \sum_{m=1}^\infty \frac{\tan \left( \frac{\pi}{2} \sqrt{\frac{b}{a}} (2m-1) \right)}{2m-1} \, .$$
The latter result does not seem to be further simplified. i was wondering whether someone here could be of help and let me know in case there exists a method to evaluate the sum above. Hints and suggestions welcome.
Thank you
PS From numerical evaluation using computer algebra systems, it seems that the series is convergent. This apparently would not be the case if $$b<0$$.
• Maybe Poisson summation formula? – Angina Seng Jan 15 '19 at 17:28
• @LordSharktheUnknown Thanks for the comments. Could you please be a bit specific. An example would be highly appreciated – Daddy Jan 15 '19 at 17:31
• On second thoughts, I believe now that the double sum is divergent. – Angina Seng Jan 15 '19 at 18:07
• @LordSharktheUnknown i agree. The divergence looks logarithmic. Thanks for the hint – Daddy Jan 16 '19 at 7:25
• $$S=\frac{\pi}{4\sqrt{ab}}\sum_{m=1}^\infty\frac{\tan\left(\frac{\pi}{2}\sqrt{\color{red}{\frac{b}{a}}}\left(2m-1\right)\right)}{2m-1}$$ – Hazem Orabi Jan 24 '19 at 13:11
Answer: The series diverges when $$\sqrt{\frac{b}{a}}$$ is irrational, as in your case. This follows from your reduced form of the sum and a few observations:
1. $$\tan{\frac{\pi x}{2}}$$ has period 2 and diverges like $$\frac{1}{n - x}$$ near any odd integer $$n$$. More precisely, there exists a constant $$c_1$$ so that for sufficiently small $$x$$ $$\left|\tan{\frac{\pi (n - x)}{2}}\right| > \left|\frac{c_1}{x}\right|$$ for any odd $$n$$.
2. By the Dirichlet approximation theorem, for irrational $$\sqrt{\frac{b}{a}}$$ there are infinitely many integers $$m$$ such that $$\sqrt{\frac{b}{a}} (2m-1)$$ is within distance $$\frac{c_2}{2m-1}$$ of an odd integer, for some positive constant $$c_2$$.
3. Combining the above two remarks, we find that there are infinitely many $$m$$ such that $$\left| \frac{\tan{\frac{\pi}{2}\sqrt{\frac{b}{a}} (2m-1)}}{2m-1} \right|>\frac{c_1}{c_2}$$
Thus there are infinitely many terms in the sum which are bigger in magnitude than some positive constant, so the sum can't converge.
Remark: If $$\sqrt{\frac{b}{a}}$$ is some rational number $$\frac{p}{q}$$, then the convergence depends on the parity of $$p$$ and $$q$$. I'll sketch what happens in each case:
• If $$p$$ and $$q$$ are odd, then for some $$m$$ we find $$\sqrt{\frac{b}{a}} (2m-1)$$ will be an odd integer and the tangent in the sum will diverge. Thus the sum doesn't converge.
• If $$p$$ is odd and $$q$$ is even, then $$\sqrt{\frac{b}{a}} (2m-1)$$ is never an integer, and is symmetrically distributed across the period of the tangent. Note that the tangent is an odd function when reflected across any even integer, so for each negative value of the tangent in the sum there is a corresponding equal and opposite positive value. Within each period of the tangent there are a finite number of such pairs, and the contribution of each such pair to the sum goes like $$~\frac{1}{m^2}$$, which is convergent. (This is exactly analogous to how $$\sum \frac{(-1)^n}{n}$$ converges conditionally.) Thus the sum will (conditionally) converge.
• If $$p$$ is even and $$q$$ is odd, most points will pair up like in the previous case, but there will be a leftover collection of points $$\sqrt{\frac{b}{a}} (2m-1)$$ which land on even integers. Fortunately, the tangent is zero on these points, so the sum will again converge like before.
So summarily the sum converges only when $$\sqrt{\frac{b}{a}}$$ is a rational number with either the numerator or denominator even. When it does converge, you should be able to find an explicit formula for the sum by adding up a finite number of sums of the form $$\sum \frac{1}{a+n^2}$$ (arising from the pairing of points mentioned above), which can each be evaluated analytically.
(This is quite sketchy, but it doesn't apply to your particular value anyway, so I'm hoping I can get away with it...)
Let
$$q=\sqrt{\dfrac{b}{a}},$$ then attack via digamma-function leads to \begin{align} &S = \dfrac1a\sum\limits_{n=1}^\infty\sum\limits_{m=1}^\infty\dfrac1{(2n-1)^2-q^2(2m-1)^2} \\[4pt] &= \dfrac1a\sum\limits_{n=1}^\infty\sum\limits_{m=1}^\infty\dfrac1{2(2n-1)}\left(\dfrac1{2n-1-q(2m-1)}+\dfrac1{2n-1+q(2m-1)}\right)\\[4pt] &= \dfrac1{4aq}\sum\limits_{n=1}^\infty\dfrac1{2n-1}\sum\limits_{m=1}^\infty \left(-\dfrac1{m-\frac{2n-1+q}{2q}}+\dfrac1{m+\frac{2n-1-q}{2q}}\right)\\[4pt] &= \dfrac1{4aq}\sum\limits_{n=1}^\infty\dfrac1{2n-1}\left(\psi\left(1-\frac{2n-1+q}{2q}\right)-\psi\left(1+\frac{2n-1-q}{2q}\right)\right)\\[4pt] &= \dfrac1{4aq}\sum\limits_{n=1}^\infty\dfrac1{2n-1}\left(\psi\left(\frac{q-2n+1}{2q}\right)-\psi\left(\frac{q+2n-1}{2q}\right)\right)\\[4pt] &= \dfrac1{4aq}\sum\limits_{n=1}^\infty\dfrac1{2n-1}\cdot\pi\cot\left(\pi\frac{q-2n+1}{2q}\right)\\[4pt] &= \dfrac\pi{4\sqrt{ab}}\sum\limits_{n=1}^\infty\dfrac1{2n-1}\tan\left(\frac\pi 2 \sqrt{\frac ab}(2n-1)\right)\\[4pt] &= \dfrac\pi{4\sqrt{ab}}\sum\limits_{m=1}^\infty\dfrac1{2m-1}\tan\left(\frac\pi 2 \sqrt{\color{red}{\frac ab}}(2m-1)\right). \end{align}
• (+1) \begin{align}S&=\color{red}{+}\frac{\pi}{4\sqrt{ab}}\sum_{\color{red}{m=1}}^\infty\frac{\tan\left(\frac{\pi}{2}\sqrt{\color{red}{\frac{b}{a}}}\left(2m-1\right)\right)}{2m-1}\\&=\color{red}{-}\frac{\pi}{4\sqrt{ab}}\sum_{\color{red}{n=1}}^\infty\frac{\tan\left(\frac{\pi}{2}\sqrt{\color{red}{\frac{a}{b}}}\left(2n-1\right)\right)}{2n-1}\end{align} – Hazem Orabi Jan 29 '19 at 13:33
• @HazemOrabi I've elaborated my reasons in the answer. What are yours? – Yuri Negometyanov Jan 29 '19 at 17:14
|
2020-04-05 14:42:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 41, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9979366064071655, "perplexity": 227.21360271108688}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371604800.52/warc/CC-MAIN-20200405115129-20200405145629-00080.warc.gz"}
|
https://www.foxmetro.org/docs/page.php?185b1c=divergence-definition-math
|
Cannons Talk Talk Lyrics, Parfums De Marly Percival Clone, The Red Wheelbarrow Symbolism, Carlon Jeffery Daughter, Glucuronic Acid Abbreviation, De Dana Dan Movie Full Hd, Where Can I Stream Hope And Glory, Chlorogenic Acid Sources, Finding I Am - Session 4, Is Assassin's Creed Unity Good Now, Alpena Light Bar Wiring Harness, Newco Pressure Seal Valves, Cupcake Jemma Recipes, Year 2 Workbooks, Construction Manager Resume Objective, Personal Loans For Bad Credit, Absalom, Absalom Family Tree, Loose Ends Band, Boho Nursery Furniture, Beginner Ab Workout Male, Wasabi Ice Cream Japan, Dhs Medicaid Number, Hungry Shark World Mod Apk, Xbox 360 System Update 2020, Cammenga Compass Repair, William I, German Emperor, How To Eat Muesli For Weight Gain, Word Options Dialog Box Mac, Trinkets Moe's Dad, " />
## divergence definition math
If $$\vecs{F}$$ were magnetic, then its divergence would be zero. Determine whether the function is harmonic. Consider the vector fields in Figure $$\PageIndex{1}$$. https://mathworld.wolfram.com/Divergence.html. vector function A) (1) where the surface integral gives the value of integrated over a closed infinitesimal boundary surface surrounding a volume element , which is taken to size zero using a limiting process. a volume element , which is taken to size zero using a That is, imagine a vector field represents water flow. This article was most recently revised and updated by, https://www.britannica.com/science/divergence-mathematics, Oregon State University - College of Science and Mathematics - Divergence and Curl of Vector Fields, Math Insight - The Idea of Divergence of a Vector Field. Divergence (div) is “flux density”—the amount of flux entering or leaving a point. Example of a Vector Field with no Variation around a Point. Using curl, we can see the circulation form of Green’s theorem is a higher-dimensional analog of the Fundamental Theorem of Calculus. theorem, also known as Gauss's theorem. where is the matrix The vector to the left of P This operator is called the Laplace operator, and in this notation Laplace’s equation becomes $$\vecs \nabla^2 f = 0$$. Calculus, 4th ed. Corrections? The converse of Divergence of a Source-Free Vector Field is true on simply connected regions, but the proof is too technical to include here. If a vector function A is given by: Then the divergence of A is the sum of how fast the vector function is changing: The symbol is the partial derivative symbol, which means Practice online or make a printable study sheet. Our mission is to provide a free, world-class education to anyone, anywhere. Example of a Vector Field Surrounding a Point (negative divergence). the element, written symbolically as, where is the vector field of fluid velocity. The next two theorems say that, under certain conditions, source-free vector fields are precisely the vector fields with zero divergence. The bigger the flux density (positive or negative), the stronger the flux source or sink. This equation makes sense because the cross product of a vector with itself is always the zero vector. Thus, this matrix is a way to help remember the formula for curl. Is it possible for $$\vecs G(x,y,z) = \langle \sin x, \, \cos y, \, \sin (xyz)\rangle$$ to be the curl of a vector field? If we think of divergence as a derivative of sorts, then Green’s theorem says the “derivative” of $$\vecs{F}$$ on a region can be translated into a line integral of $$\vecs{F}$$ along the boundary of the region. integral, where the surface integral gives the value of integrated over §1.7 in Mathematical Unlimited random practice problems and answers with built-in Step-by-step solutions. Using divergence, we can see that Green’s theorem is a higher-dimensional analog of the Fundamental Theorem of Calculus. The definition of curl can be difficult to remember. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license. theorem. In this section, I'll give the definition with no math: Divergence at a point (x,y,z) is the measure of the vector flow out of a surface surrounding that point. Find the curl of $$\vecs{F} = \langle P,Q \rangle = \langle y,0\rangle$$. Note that $$f_{xx} = 2$$ and $$f_{yy} = 0$$, and so $$f_{xx} + f_{yy} \neq 0$$. Join the initiative for modernizing math education. Methods for Physicists, 3rd ed. Taking the curl of vector field $$\vecs{F}$$ eliminates whatever divergence was present in $$\vecs{F}$$. Therefore, $$f$$ is not harmonic and $$f$$ cannot represent an electrostatic potential. There are six sides to this box, and the net "content" Therefore, we can use the Divergence Test for Source-Free Vector Fields to analyze $$\vecs{F}$$. Therefore, we can take the divergence of a curl. Then if the divergence is a positive number, this means water is flowing out Reading, MA: Addison-Wesley, pp. Ist die Divergenz überall gleich null, so bezeichnet man das Feld als quellenfrei. Therefore, the circulation form of Green’s theorem is sometimes written as, $\oint_C \vecs{F} \cdot d\vecs{r} = \iint_D \text{curl}\, \vecs F \cdot \,\mathbf{\hat k}\,dA,$. Since conservative vector fields satisfy the cross-partials property, all the cross-partials of $$\vecs F$$ are equal. Divergence. what is the divergence in Figure 6? point acts as a source of fields (produces more fields than it takes in) or as a sink of fields (fields are diminished around the point). If there are no changes, then we’ll get 0 + 0 + 0, which means no net flux. Example of a Vector Field Surrounding a Point. At any particular point, the amount flowing in is the same as the amount flowing out, so at every point the “outflowing-ness” of the field is zero. The result is a function that describes a rate of change. divergence at any point in space because we knew the functions defining the vector A from Equation [5], and then Physicists use divergence in Gauss’s law for magnetism, which states that if $$\vecs{B}$$ is a magnetic field, then $$\vecs \nabla \cdot \vecs{B} = 0$$; in other words, the divergence of a magnetic field is zero. Interpretiert man das Vektorfeld als Strömungsfeld einer Größe, für die die Kontinuitätsgleichung gilt, dann ist die Divergenz die Quelldichte. We abbreviate this “double dot product” as $$\vecs \nabla^2$$. This is analogous to the Fundamental Theorem of Calculus, in which the derivative of a function $$f$$ on a line segment $$[a,b]$$ can be translated into a statement about $$f$$ on the boundary of $$[a,b]$$. In fact, defining, the divergence in arbitrary orthogonal curvilinear 1985. The circle would flow toward the origin, and as it did so the front of the circle would travel more slowly than the back, causing the circle to “scrunch” and lose area. a closed infinitesimal boundary surface surrounding surrounding the point P. Now imagine the vector A represents water flow. field B at P is negative. Locally, the divergence of a vector field $$\vecs{F}$$ in $$\mathbb{R}^2$$ or $$\mathbb{R}^3$$ at a particular point $$P$$ is a measure of the “outflowing-ness” of the vector field at $$P$$. That is, Since $$\text{div}(\text{curl}\,\vecs v) = 0$$, the net rate of flow in vector field $$\text{curl}\;\vecs v$$\) at any point is zero. Here's a couple more examples. For example, the. In Figure 4, we have a vector field D that wraps around the point P. In Figure 2, if we imagine the water flowing, we would see the point P acting like As the leaf moves along with the fluid flow, the curl measures the tendency of the leaf to rotate. Therefore, the divergence at $$(0,2,-1)$$ is $$e^0 - 1 + 4 = 4$$. at locations (1,0,0) and (-1,0,0) we have Ex=0. Vector Calculus: Understanding the Dot Product, Vector Calculus: Understanding the Cross Product, Vector Calculus: Understanding Circulation and Curl, Vector Calculus: Understanding the Gradient, Understanding Pythagorean Distance and the Gradient, The symbol for divergence is the upside down triangle for. There's plenty more to help you build a lasting, intuitive understanding of math. Similarly, $$\text{div}\, v(P) < 0$$ implies the more fluid is flowing in to $$P$$ than is flowing out, and $$\text{div}\, \vecs{v}(P) = 0$$ implies the same amount of fluid is flowing in as flowing out. They are important to the field of calculus for several reasons, including the use of curl and divergence to develop some higher-dimensional versions of the Fundamental Theorem of Calculus. If $$\vecs{F}$$ represents the velocity of a fluid, then the divergence of $$\vecs{F}$$ at $$P$$ measures the net rate of change with respect to time of the amount of fluid flowing away from $$P$$ (the tendency of the fluid to flow “out of” P). + \left(\dfrac{-3xz}{(x^2 + y^2 + z^2 )^{5/2}} - \left(\dfrac{-3xz}{(x^2 + y^2 + z^2 )^{5/2}} \right) \right) \mathbf{\hat j} \nonumber \4pt] If $$\vecs{F}(x,y,z) = e^x \hat{i} + yz \hat{j} - yz^2 \hat{k}$$, then find the divergence of $$\vecs{F}$$ at $$(0,2,-1)$$. The concept of divergence can be generalized to tensor fields, where it is a contraction of what is known as the covariant Divergence and curl are two important operations on a vector field. Unless otherwise noted, LibreTexts content is licensed by CC BY-NC-SA 3.0. If , then divergence definition: 1. the situation in which two things become different: 2. the situation in which two things become…. \[\begin{align*} \text{curl}\, f &= \vecs\nabla \times \vecs{F} \\ &= \begin{vmatrix} \mathbf{\hat i} & \mathbf{\hat j} & \mathbf{\hat k} \\ \partial/\partial x & \partial/\partial y & \partial / \partial z \\ P & Q & R \end{vmatrix} \\ &= (R_y - Q_z)\,\mathbf{\hat i} + (P_z - R_x)\,\mathbf{\hat j} + (Q_x - P_y)\,\mathbf{\hat k} \\ &= (xz - x)\,\mathbf{\hat i} + (x^2 - yz)\,\mathbf{\hat j} + z \,\mathbf{\hat k}. Determine divergence from the formula for a given vector field. P: Figure 2. The definition of the divergence therefore follows naturally by noting that, in the However, if you look at the rate of Therefore, we can apply the previous theorem to $$\vecs{F}$$. Since $$P_x + Q_y = \text{div}\,\vecs F$$, Green’s theorem is sometimes written as, \[\oint_C \vecs F \cdot \vecs N\; ds = \iint_D \text{div}\, \vecs F \;dA.. $\vecs{F}(x,y,z) = \langle xy, \, 5-z^2, \, x^2 + y^2 \rangle \nonumber.$. Find the curl of $$\vecs{F}(P,Q,R) = \langle x^2 z, e^y + xz, xyz \rangle$$. is positive. Therefore, the circulation form of Green’s theorem can be written in terms of the curl. outputs a scalar-valued function measuring the change in density of the fluid at each point Since a conservative vector field is the gradient of a scalar function, the previous theorem says that $$\text{curl}\, (\vecs \nabla f) = \vecs 0$$ for any scalar function $$f$$. In this section, we examine two important operations on a vector field: divergence and curl. 37-42, Walk through homework problems step-by-step from beginning to end. divergence - a variation that deviates from the standard or norm; "the deviation from the mean" deviation , difference , departure variation , fluctuation - an instance of change; the rate or magnitude of change "Diverge" means to move away from, which may help you remember that divergence is the rate of flux expansion (positive div) or contraction (negative div). Divergence is an operation on a vector field that tells us how the field behaves toward or away from a point. at (x,y,z)=(3,2,1) then we can use equation [6] to see that the divergence of A is 2+6*1 = 8. Join the newsletter for bonus content and the latest updates. Then, $$\text{div}\, \vecs{F} = 0$$ if and only if $$\vecs{F}$$ is source free. is a constant of proportionality known Both the divergence and curl are vector operators whose properties are revealed by viewing a vector field as the flow of a fluid or gas. Let $$\vecs{F} (x,y) = \langle -ay, bx \rangle$$ be a rotational field where $$a$$ and $$b$$ are positive constants. By the definitions of divergence and curl, and by Clairaut’s theorem, \begin{align*} \text{div}(\text{curl}\, \vecs{F}) = \text{div}[(R_y - Q_z)\,\mathbf{\hat i} + (P_z - R_x)\,\mathbf{\hat j} + (Q_x - P_y)\,\mathbf{\hat k}] \\ = R_{yx} - Q_{xz} + P_{yz} - R_{yx} + Q_{zx} - P_{zy}\\ = 0. What is an upside-down triangle (also known as the del operator) with a dot next to it not flowing into or out of the surface at each point. \[\vecs \nabla \times \vecs{F} = (R_y - Q_z)\,\mathbf{\hat i} + (P_z - R_x)\,\mathbf{\hat j} + (Q_x - P_y)\,\mathbf{\hat k} \nonumber, $\vecs \nabla \cdot \vecs{F} = P_x + Q_y + R_z\nonumber$, $\vecs \nabla \cdot (\vecs \nabla \times \vecs F) = 0\nonumber$, $\vecs \nabla \times (\vecs \nabla f) = 0 \nonumber$. "Divergence." with the original vector field The divergence of a vector v is given by in which v1, v2, and v3 are the vector components of v, typically a velocity field of fluid flow. A formula for the divergence of a vector field can immediately be written down in Cartesian coordinates by constructing a
#### Aurora, North Aurora, Boulder Hill, Montgomery, Oswego, Sugar Grove and portions of Yorkville and Batavia
© 2020 Fox Metro Water Reclamation District - Contact Us - Public Notices - Careers - FOIA - Accessibility
|
2021-06-13 09:06:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9507686495780945, "perplexity": 666.0315558895668}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487607143.30/warc/CC-MAIN-20210613071347-20210613101347-00326.warc.gz"}
|
http://comunidadwindows.org/margin-of/standard-error-and-margin-of-error.php
|
Home > Margin Of > Standard Error And Margin Of Error
# Standard Error And Margin Of Error
## Contents
Among survey participants, the mean grade-point average (GPA) was 2.7, and the standard deviation was 0.4. A margin of error tells you how many percentage points your results will differ from the real population value. Back to Top Second example: Click here to view a second video on YouTube showing calculations for a 95% and 99% Confidence Interval. Misleading Graphs 10. Check This Out
The number of standard errors you have to add or subtract to get the MOE depends on how confident you want to be in your results (this is called your confidence Effect of population size The formula above for the margin of error assume that there is an infinitely large population and thus do not depend on the size of the population Your cache administrator is webmaster. COSMOS - The SAO Encyclopedia of Astronomy. https://en.wikipedia.org/wiki/Margin_of_error
## Margin Of Error Calculator
Difference Between a Statistic and a Parameter 3. Another approach focuses on sample size. Sampling theory provides methods for calculating the probability that the poll results differ from reality by more than a certain amount, simply due to chance; for instance, that the poll reports Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply.
The true standard error of the statistic is the square root of the true sampling variance of the statistic. That means if the poll is repeated using the same techniques, 98% of the time the true population parameter (parameter vs. You now have the standard error, Multiply the result by the appropriate z*-value for the confidence level desired. Based On Sample Data, What Do We Call Our Best Guess Of A Population Parameter? It asserts a likelihood (not a certainty) that the result from a sample is close to the number one would get if the whole population had been queried.
More questions How to find Margin of error? Expand» Details Details Existing questions More Tell us some more Upload in Progress Upload failed. asked 5 years ago viewed 39708 times active 7 months ago Get the weekly newsletter! check my site Effect of population size The formula above for the margin of error assume that there is an infinitely large population and thus do not depend on the size of the population
It can be calculated as a multiple of the standard error, with the factor depending of the level of confidence desired; a margin of one standard error gives a 68% confidence Why Will An Interval Estimate Most Likely Fall Around The Population Mean? The general formula for the margin of error for a sample proportion (if certain conditions are met) is where is the sample proportion, n is the sample size, and z* is This maximum only applies when the observed percentage is 50%, and the margin of error shrinks as the percentage approaches the extremes of 0% or 100%. Otherwise, use a z-score.
## Margin Of Error Definition
Most surveys you come across are based on hundreds or even thousands of people, so meeting these two conditions is usually a piece of cake (unless the sample proportion is very Check This Out How to Calculate a Z Score 4. Margin Of Error Calculator Otherwise, calculate the standard error (see: What is the Standard Error?). Margin Of Error Excel A random sample of size 7004100000000000000♠10000 will give a margin of error at the 95% confidence level of 0.98/100, or 0.0098—just under1%.
Required fields are marked *Comment Name * Email * Website Find an article Search Feel like "cheating" at Statistics? his comment is here Warning: If the sample size is small and the population distribution is not normal, we cannot be confident that the sampling distribution of the statistic will be normal. For example, the area between z*=1.28 and z=-1.28 is approximately 0.80. and R.J. Margin Of Error Sample Size
Margin of error = Critical value x Standard deviation of the statistic Margin of error = Critical value x Standard error of the statistic If you know the standard deviation of The choice of t statistic versus z-score does not make much practical difference when the sample size is very large. FPC can be calculated using the formula:[8] FPC = N − n N − 1 . {\displaystyle \operatorname {FPC} ={\sqrt {\frac {N-n}{N-1}}}.} To adjust for a large sampling fraction, the fpc http://comunidadwindows.org/margin-of/standard-margin-of-error.php KellerList Price: $38.00Buy Used:$10.78Buy New: $14.19Texas Instruments TI-NSpire Math and Science Handheld Graphing CalculatorList Price:$179.99Buy Used: $35.35Buy New:$199.99Approved for AP Statistics and Calculus About Us Contact Us
The formula for the SE of the mean is standard deviation / √(sample size), so: 0.4 / √(900)=0.013. 1.645 * 0.013 = 0.021385 That's how to calculate margin of error! Sampling Error Formula Since we don't know the population standard deviation, we'll express the critical value as a t statistic. cme cme Learn any topic Create a ShowMe Pricing and features Login· Signup One-click Share Link To Share http://www.showme.com/sh/?h=P8hMqnI Embed Code Video: Standard Error vs.
## Click here for a short video on how to calculate the standard error.
One example is the percent of people who prefer product A versus product B. Contents 1 Explanation 2 Concept 2.1 Basic concept 2.2 Calculations assuming random sampling 2.3 Definition 2.4 Different confidence levels 2.5 Maximum and specific margins of error 2.6 Effect of population size Disproving Euler proposition by brute force in C more hot questions question feed about us tour help blog chat data legal privacy policy work here advertising info mobile contact us feedback Confidence Level Margin of Error Loading ShowMe...
Bush/Dick Cheney, and 2% would vote for Ralph Nader/Peter Camejo. Stokes, Lynne; Tom Belin (2004). "What is a Margin of Error?" (PDF). The more people that are sampled, the more confident pollsters can be that the "true" percentage is close to the observed percentage. navigate here This is very nicely explained here.
The larger the margin of error, the less confidence one should have that the poll's reported results are close to the true figures; that is, the figures for the whole population. Step 3: Multiply the critical value from Step 1 by the standard deviation or standard error from Step 2. Privacy policy About Wikipedia Disclaimers Contact Wikipedia Developers Cookie statement Mobile view current community blog chat Cross Validated Cross Validated Meta your communities Sign up or log in to customize your The likelihood of a result being "within the margin of error" is itself a probability, commonly 95%, though other values are sometimes used.
In cases where the sampling fraction exceeds 5%, analysts can adjust the margin of error using a finite population correction (FPC) to account for the added precision gained by sampling close Also, if the 95% margin of error is given, one can find the 99% margin of error by increasing the reported margin of error by about 30%.
|
2018-02-21 07:03:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6403077244758606, "perplexity": 1049.7284475159126}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813571.24/warc/CC-MAIN-20180221063956-20180221083956-00710.warc.gz"}
|
https://zodiguqalefuxot.hamaikastudio.com/generalized-euler-mascheroni-constants-book-10903lc.php
|
Last edited by Goshakar
Thursday, October 22, 2020 | History
2 edition of generalized Euler-Mascheroni constants found in the catalog.
generalized Euler-Mascheroni constants
O. R. Ainsworth
# generalized Euler-Mascheroni constants
## by O. R. Ainsworth
Subjects:
• Euler"s numbers.,
• Functions, Zeta.,
• Mathematical analysis.,
• Laurent series.
• Edition Notes
The Physical Object ID Numbers Statement O.R. Ainsworth and L.W. Howell. Series NASA technical paper -- 2264. Contributions Howell, Leonard W., United States. National Aeronautics and Space Administration. Scientific and Technical Information Office., George C. Marshall Space Flight Center. Pagination 12 p. ; Number of Pages 12 Open Library OL15306034M
FAST CONVERGENCES TOWARDS EULER-MASCHERONI CONSTANT More precisely, we mention the following results related to the speed of con-vergence of the sequence (γ n) ≥1: 1 2(n +1). Dec 07, · The Euler-Mascheroni constant $\gamma$ comes up when evaluating the harmonic numbers. It turns out, however, that $\gamma$ also appears in other (unexpected) cases, see below. The $n$'th harmonic number [math]H_n[/.
No quadratically converging algorithm for computing is known (Bailey ). 7,, digits of have been computed as of Feb. (Plouffe). See also Euler Product, Mertens Theorem, Stieltjes Constants. References. Bailey, D. H. Numerical Results on the Transcendence of Constants Involving,, and Euler's Constant.'' Math. Mar 17, · Among the many constants that appear in mathematics, π, e, and i are the most familiar. Following closely behind is y, or gamma, a constant that arises in many mathematical areas yet maintains a profound sense of mystery. In a tantalizing blend of history and mathematics, Julian Havil takes the reader on a journey through logarithms and the harmonic series, the two,/5.
Nov 01, · Read "A solution to an open problem on the Euler–Mascheroni constant, Applied Mathematics and Computation" on DeepDyve, the largest online rental service for scholarly research with thousands of academic publications available at your fingertips. THE EULER-MASCHERONI CONSTANT AND THE HARM ONIC SERIES During our discussion on Bessel functions of the second kind we encountered the Euler-Mascheroni constant defined as-This important constant represents the difference between the harmonic series which is known to be divergent and the logarithm of infinity.
You might also like
Buyers guide to industry in Derbyshire and Nottinghamshire.
Buyers guide to industry in Derbyshire and Nottinghamshire.
Towards Need-Specific Treatment of Schizophrenic Psychoses
Towards Need-Specific Treatment of Schizophrenic Psychoses
If a stranger approaches you
If a stranger approaches you
family in Brussels
family in Brussels
review of the citizen initiative method of proposing amendments to the Florida Constitution
review of the citizen initiative method of proposing amendments to the Florida Constitution
Anselm Kiefer - Heaven and earth. Modern Art Museum of Fort Worth, September 25, 2005 - January 8, 2006
Anselm Kiefer - Heaven and earth. Modern Art Museum of Fort Worth, September 25, 2005 - January 8, 2006
Needed
Needed
George Bernard Shaw
George Bernard Shaw
novels and romances of Edward Bulwer Lytton.
novels and romances of Edward Bulwer Lytton.
Statistical modelling and multivariate analysis
Statistical modelling and multivariate analysis
Dracula is a Pain in the Neck
Dracula is a Pain in the Neck
guys taking dick up the ass
guys taking dick up the ass
Middle school climate
Middle school climate
Areawide environmental assessment on the development of a NEPA compliance strategy for new source coal mining activity in the western Kentucky coal field
Areawide environmental assessment on the development of a NEPA compliance strategy for new source coal mining activity in the western Kentucky coal field
Profiles of Constitutional Court justices
Profiles of Constitutional Court justices
Glory
Glory
### Generalized Euler-Mascheroni constants by O. R. Ainsworth Download PDF EPUB FB2
THE GENERALIZED EULER-MASCHERONI CONSTANTS By 0. Ainsworth and L. Howell The information in this report has been reviewed for technical content.
Review of any information concerning Department of Defense or nuclear energy activities or programs has been made by the MSFC Security Classification Officer. Note: Citations are based on reference standards. However, formatting rules can vary widely between applications and fields of interest or study.
The specific requirements or preferences of your reviewing publisher, classroom teacher, institution or organization should be applied. The Euler–Mascheroni constant (also called Euler's constant) is a mathematical constant recurring in analysis and number theory, usually denoted by the lowercase Greek letter gamma (γ).
It is defined as the limiting difference between the harmonic series and the natural logarithm. arXivv1 [hamaikastudio.com] 23 Dec Sharp Estimates of the Generalized Euler-Mascheroni Constant Ti-Ren Huang1∗, Bo-Wen Han 1, You-Ling Liu2, Xiao-Yan Ma1 1 Department of Mathematics, Zhejiang Sci-Tech University, HangzhouChina.
A large number of series and integral representations for the Stieltjes constants (or generalized Euler-Mascheroni constants) {\gamma}_k and the generalized Stieltjes constants {\gamma}_k(a) have.
May 18, · In the article, we provide several sharp upper and lower bounds for the generalized Euler–Mascheroni constant $$\gamma (a)$$. As applications, we improve some previously results on the Euler–Mascheroni constant γ. The idea presented may stimulate further research in Cited by: We propose new simple sequences approximating the Euler–Mascheroni constant and its generalization, which converge faster towards their limits than those considered by DeTemple [D.W.
DeTemple, A quicker convergence to Euler’s constant, Am. Math. Monthly (5) () –], Sîntămărian [A. Sîntămărian, A generalization of Euler’s constant, hamaikastudio.com by: Generalized Euler constants and the Riemann hypothesis 46 The constant γ is often known as the Euler-Mascheroni constant, after later work the popular book of J.
Havil [] devoted extensively to mathematics around Euler’s constant, and Finch [, Sections and ].Cited by: Stack Exchange network consists of Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share.
Get this from a library. An integral representation of the generalized Euler-Mascheroni constants. [O R Ainsworth; Leonard W Howell; United States. National Aeronautics and Space Administration.
Scientific and Technical Information Branch.]. Transcendence of Generalized Euler Constants Author(s): M. Ram Murty, Anastasia Zaytseva suggest [4], which is a new book by Havil devoted to the Euler constant.
There are several generalizations of. InLehmer [5] introduced the class of analogues of the Euler constant described below. A GENERALIZATION OF EULER™S CONSTANT MARK HEDLEY 1. History of and the intent to study P n k=0 1 k+1. The EulerŒMascheroni constant, sometimes shortened to Author: Mark Hedley.
Then the generalized Euler-Mascheroni constant γ (a) [1] is given by γ (a) = lim Recently, the two bounds for γ and γ (a) have attracted the attention of many mathe- maticians. The Euler-Mascheroni constant, also known as Euler's constant or simply "gamma," is a constant that appears in many problems in analytic number theory and calculus.
It is denoted by γ, \gamma, γ, and the first few digits of this constant are as follows. Numbers, constants and computation 1 It is also known as the Euler-Mascheroni constant. According to Glaisher [4], the use of the symbol γ is probably due to the geometer Lorenzo Mascheroni () who used it in while Euler used the letter C.
The constant γ is deeply related to the Gamma function Γ(x) thanks to the. AN INTEGRAL REPRESENTATION OF THE GENERALIZED EU LER-MASCHERON I CONSTANTS The generalized Euler-Mascheroni constants are defined by and are coefficients of the Laurent expansion of the Riemann Zeta function {(z) about z = 1 1 O0 (-1)n yn (z-1)" S(Z) = - + c, Re(z) Z 0.
z- 1 n. n=O. Stieltjes Constants Generalized Gamma Functions Liouville–Roth Constants Diophantine Approximation Constants Self-Numbers Density Constant Cameron’s Sum-Free Set Constants Triple-Free Set Constants Erdos¨ –Lebensold Constant Finite Case Infinite.
The Euler–Mascheroni constant γ = is defined in mathematics as the limit of the sequence γ n = 1 + 1 2 + 1 3 + ⋯ + 1 n − ln n and it has numerous applications in many areas of pure and applied mathematics, such as analysis, theory of probability, special functions, applied statistics, or Cited by: The Euler–Mascheroni constant is a mathematical constant recurring in analysis and number theory, usually denoted by the lowercase Greek letter gamma.
"Expansions of generalized Euler's constants into the series of polynomials in Second edition of the book Binary numberBinary: EULER-MASCHERONI CONSTANT In studying the difference between the divergent area under the curve F(x)=1/x from x=1 to infinity and the area under the staircase function where we have– 1 1 () in n x n n S x, the Swiss mathematician Leonard Euler found back in that the area equals the constant value γ=.
Mascheroni has written only one paper about Euler's Constant (). He use only hamaikastudio.coming de:David Bierens de Haan using A, Glaisher should be corrected with "De Haan and Mascheroni A; De Morgan, Boole, &c., have written it γ".There are a lot of early papers about the hamaikastudio.com there was a famous discussion about the constant of the Logarithmic integral function (which (Rated B-class, Mid-importance): WikiProject Mathematics.Series representations (18 formulas) EulerGamma.
Constants EulerGamma.hamaikastudio.comlofInequalitiesandApplications Page3of9 α 4 (a+n–1)6≤γ(a)–μ n(a)0andn≥n 0 Cited by:
|
2021-04-18 17:11:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7876594066619873, "perplexity": 3558.6067803743827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038507477.62/warc/CC-MAIN-20210418163541-20210418193541-00350.warc.gz"}
|
https://socratic.org/questions/the-ratio-of-girls-to-boys-is-2-3-and-there-are-20-people-in-the-class-how-many-
|
# The ratio of girls to boys is 2:3 and there are 20 people in the class, how many are girls and boys?
Jun 2, 2015
Lets name $b$ the number of boys and $g$ the number of girls
$b + g = 20$
$\frac{g}{b} = \frac{2}{3}$ so $g = \frac{2 b}{3}$ ( we multiply by $b$ on each side )
We can thus replace g in the equation :
$b + \frac{2 b}{3} = 20$
We want to put on the same denominator :
$\frac{3 b}{3} + \frac{2 b}{3} = 20$
$\frac{5 b}{3} = 20$
$5 b = 60$ ( we multiply by $3$ on each side )
$b = 12$ ( we divide by $5$ on each side )
We can thus now find $g$ :
$b + g = 20$
$12 + g = 20$
$g = 20 - 12$ ( we substract $12$ on each side )
$g = 8$
There are thus $12$ boys and $8$ girls in the class
|
2020-09-27 10:41:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 21, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9287593364715576, "perplexity": 325.8668771616672}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400274441.60/warc/CC-MAIN-20200927085848-20200927115848-00328.warc.gz"}
|
https://encyclopediaofmath.org/index.php?title=Parabola
|
# Parabola
Jump to: navigation, search
A plane curve obtained as the intersection of a circular cone with a plane not passing through the vertex of the cone and parallel to one of its tangent planes. A parabola is a set of points $M$ in the plane for each of which the distance to a given point $F$( the focus of the parabola) is equal to the distance to a certain given line $d$( the directrix). Thus, a parabola is a conic with eccentricity one. The distance $p$ from the focus of the parabola to the directrix is called the parameter. A parabola is a symmetric curve; the point of intersection of a parabola with its axis of symmetry is called the vertex of the parabola, the axis of symmetry is called the axis of the parabola. A diameter of a parabola is any straight line parallel to its axis, and can be defined as the locus of the midpoints of a set of parallel chords.
Figure: p071150a
A parabola is a non-central second-order curve. Its canonical equation has the form
$$y ^ {2} = 2px .$$
The equation of the tangent to a parabola at the point $( x _ {0} , y _ {0} )$ is
$$yy _ {0} = p( x + x _ {0} ) .$$
The equation of a parabola in polar coordinates $( \rho , \phi )$ is
$$\rho = \frac{p}{1 - \cos \phi } ,\ \textrm{ where } 0 < \phi < 2 \pi .$$
A parabola has an optical property: Light rays emanating from the focus travel, after reflection in the parabola, parallel to the axis.
#### References
[a1] M. Berger, "Geometry" , II , Springer (1987) pp. Chapt. 17 [a2] J. Coolidge, "A history of the conic sections and quadric surfaces" , Dover, reprint (1968) [a3] H.S.M. Coxeter, "Introduction to geometry" , Wiley (1963)
How to Cite This Entry:
Parabola. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Parabola&oldid=48102
This article was adapted from an original article by A.B. Ivanov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
|
2021-09-17 15:59:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6792489886283875, "perplexity": 403.0560792757001}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055684.76/warc/CC-MAIN-20210917151054-20210917181054-00132.warc.gz"}
|
http://injuryprevention.bmj.com/content/14/4/228
|
Article Text
Ecological level analysis of the relationship between smoking and residential-fire mortality
1. S T Diekman1,
2. M F Ballesteros1,
3. L R Berger2,
4. R S Caraballo3,
5. S R Kegler4
1. 1
Division of Unintentional Injury Prevention, National Center for Injury Prevention and Control, Centers for Disease Control and Prevention, Atlanta, GA, USA
2. 2
Department of Pediatrics, University of New Mexico School of Medicine, Albuquerque, NM, USA
3. 3
Office on Smoking and Health, National Center for Chronic Disease Prevention and Health Promotion, Centers for Disease Control and Prevention, Atlanta, GA, USA
4. 4
Office of Statistics and Programming, National Center for Injury Prevention and Control, Centers for Disease Control and Prevention, Atlanta, GA, USA
1. Dr S Diekman, Division of Unintentional Injury Prevention, National Center for Injury Prevention and Control, Centers for Disease Control and Prevention, 4770 Buford Hwy, NE, MS F-62, Atlanta, GA 30341, USA; sdiekman{at}cdc.gov
## Abstract
Objectives: To examine the association between tobacco smoking and residential-fire mortality and to investigate whether this association is explained by the confounding effects of selected socioeconomic factors (ie, educational attainment and median household income).
Design: An ecological analysis relating state-level residential-fire mortality to state-level percentages of adults who smoke was conducted. Negative binomial rate regression was used to model this relationship, simultaneously controlling for the selected socioeconomic factors.
Results: After educational attainment and median household income had been controlled for, smoking percentages among adults correlated significantly with state-level, population-based residential-fire mortality (estimated relative rate for a 1% decrease in smoking = 0.93; 95% CI 0.89 to 0.97).
Conclusions: Mortality from residential fires is high in states with high smoking rates. This relationship cannot be explained solely by the socioeconomic factors examined in this study.
## Statistics from Altmetric.com
Residential fires remain a substantial public health problem in the USA, causing an estimated 3030 deaths, 13 300 injuries, and upwards of US$6.7 billion in property loss in 2005.1 Numerous studies have identified risk and protective factors associated with residential-fire injuries and deaths.2-7 Whereas many of these factors are not modifiable (eg, age, gender, race), some factors are amenable to change. At the environmental level, the presence of smoke alarms can decrease the risk of death in a home fire by up to 50%.8 At the individual level, cigarette smoking is a modifiable behavior with a documented association with residential-fire injuries and deaths. Smoking is the fourth leading cause of unintentional residential fires6 and the leading cause of residential-fire deaths, accounting for one-quarter of such fatalities.1 5 In 2003, smoking caused an estimated 15 500 residential fires, 690 related deaths, 1330 related injuries, and US$403 million in direct property damage.5 In the USA, approximately one in five adults smokes cigarettes.9 The presence of a household smoker is a risk factor for residential-fire injuries and deaths,10 and an estimated one-quarter of deaths from residential fires caused by smoking occur among persons other than those whose cigarettes start the fires.5 Many residential fires caused by tobacco products start in the bedroom or living room, and the most common items ignited are upholstered furniture, trash, mattresses, and pillows.4
Ecological study designs are appropriate for studying residential fires because they recognize the complex interactions between individuals and their environment. Several studies have shown the ecological association between smoke alarms and residential-fire injuries and death.11-13 To our knowledge, no ecological study has assessed the association between cigarette smoking and residential-fire deaths. The aim of this study is to quantify the association between state-level residential-fire mortality and percentages of adults who smoke, and to investigate whether this association could be explained by the potential confounding effects of selected socioeconomic factors.
## METHODS
### Data
All study data represent the year 2004. We obtained counts of residential-fire fatalities and population estimates by state from the Web-based Injury Statistics Query and Reporting System (WISQARS).14 This interactive injury mortality database system contains death certificate information filed in state vital statistics offices and includes fire-related deaths reported by attending physicians, medical examiners, and coroners. Residential-fire deaths were identified by International Classification of Diseases 10th Revision (ICD-10) codes X00–X09 and where the place of death was identified as “home.” These external cause-of-injury/death ICD-10 codes include unintentional exposure to smoke, fire, and flames (including toxic fumes). Population estimates were generated by the US Census Bureau collaboratively with the National Center for Health Statistics.
We obtained data indicating the percentage of adults who smoke (by state) from the Centers for Disease Control and Prevention’s (CDC) Behavioral Risk Factor Surveillance System (BRFSS).15 This ongoing surveillance process aggregates health information from state-based telephone surveys about modifiable risk factors related to chronic disease and other leading causes of morbidity and mortality. BRFSS is a random-digit-dial survey of the non-institutionalized US population aged 18 years or older. The survey design and random sampling procedures are described in detail elsewhere.16 17 In 2004, 49 states and the District of Columbia asked tobacco-use questions; Hawaii was the only state with incomplete BRFSS data for these questions. We analyzed the following survey item, “Do you now smoke cigarettes every day, some days, or not at all?”, and defined current smokers as respondents who indicated smoking every day or some days.
State-specific percentages of people with a high school degree or higher (population aged 25 years and older) and median household income (in 2004 inflation-adjusted dollars) were collected through the US Census Bureau’s American Community Survey.18 This is the largest US annual household survey, and its estimates are based on a nationwide sample of about 250 000 addresses per month, or ∼2.5% of the population each year. We chose these potential confounders on the basis of their association with both residential-fire deaths3 19 and smoking.20
### Statistical analysis
Data for Hawaii were excluded from the analysis because of incomplete BRFSS results for 2004. The analysis was conducted in two phases. In the first phase, we performed a descriptive analysis of national data. In the second phase, we conducted a negative binomial rate regression analysis to model residential-fire mortality, by state (and the District of Columbia). We used a negative binomial model as an alternative to a Poisson model to accommodate overdispersion in the count data. The rate regression model adjusts for the size of each state’s population by incorporating the logarithm of the population size as an offset term.21 This results in a model of residential-fire mortality rates rather than a model of residential-fire death counts. The main predictor was the estimated percentage of current adult smokers. The rate regression model also included estimated median household income and estimated percentage of adults with a high school education as covariates to control for the influence of socioeconomic status, thereby avoiding an overstatement of the effect of smoking behavior. Using the fitted model, we estimated the relative rate (and 95% CI) of residential-fire mortality corresponding to a 1% decrease in the smoking percentage, controlling for the sociodemographic characteristics. The regression model was fitted using PROC GENMOD in SAS V9.1.3.
Because we had only 50 data points, we were conservative in our inclusion of potential confounders in the regression model. In addition to educational attainment and median household income, we examined further potential confounding by starting with our existing model and including other candidate confounders one at a time. The additional candidate confounders were represented by the following state-level variables: percentage of the population living in rural areas (in 2000), per capita alcohol consumption, and cumulative heating degree days. The addition of these variables to the existing model did not change the main effect (data not shown), and they were consequently excluded from the reported model.
## RESULTS
### Descriptive analysis
In 2004, there were 2810 residential-fire-related fatalities in the USA, corresponding to a national unintentional residential-fire mortality rate of 0.96 per 100 000. State-specific rates ranged from 0.00 in Vermont to 2.72 in Mississippi.14 Among adults aged 18 and older, the range of current smokers was 10.5–27.5%.15 Among adults aged 25 and older, 83.9% had a high school or high school equivalent degree.18 The median household income was US\$44 684.18
Figure 1 displays the relationship between state-specific smoking percentages and residential-fire mortality. We found a significant, positive association between state-specific smoking and residential-fire mortality rates (r = 0.57).
Figure 1 Residential-fire mortality and smoking rate, by state, 2004.
### Regression analysis
Table 1 presents the estimated regression model coefficients. The percentage of current smokers remains significant in this model, after the effects of education and income are controlled for.
Table 1 Estimated effect of smoking status on residential-fire mortality in the USA, 2004, with selected socioeconomic factors controlled for
The statistically significant regression coefficient estimate of 0.075 for current smoking percentage observed in this study can be interpreted by recognizing the logarithmic transformation of rates inherent to the selected modeling process. Starting from a base fire mortality “R” for a given population, the value exp{0.075}≅1.08 suggests that a change in the smoking percentage of +1% (eg, from 20% to 21%) corresponds on average to a modeled rate of 1.08 × R. Similarly, the value exp{−0.075} ≅0.93 suggests that a change in the smoking percentage of −1% (eg, from 20% to 19%) corresponds on average to a modeled rate of 0.93×R. The latter is equivalent to an estimated relative rate associated with a 1% fall in the smoking percentage of (0.93×R)/R = 0.93. Similarly, the 95% CI end points for the relative rate associated with a 1% fall in the smoking percentage are calculated from the CI end points for the coefficient estimate, resulting in the 95% CI exp{−0.115} to exp{−0.034} or 0.89 to 0.97.
## DISCUSSION
This study suggests a strong ecological association between residential-fire deaths and smoking. To our knowledge, this is the first study that quantifies this association using an ecological study design. This relationship was robust and remained significant even after selected socioeconomic factors had been accounted for. We found that even a modest 1% decrease in the percentage of current smokers corresponds to a 7% decrease in the modeled residential-fire mortality rate, which is promising for prevention. This finding is consistent with studies describing individual-level associations between smoking and residential-fire deaths.1 5 7 10
#### Key points
• This ecological study quantified the association between smoking and residential-fire mortality, while controlling for selected socioeconomic factors.
• A 1% decrease in smoking corresponded to a modeled 7% decrease in residential-fire mortality rates.
• The ecological association found in this study is consistent with findings from studies examining individual-level associations between smoking and residential-fire deaths.
• Successful smoking cessation efforts and adherence to fire protection recommendations may reduce the risk of cigarette-related residential-fire injuries and deaths.
The results of this analysis are strengthened by research that suggests that real-world reductions in smoking-related fire deaths are achievable. Hall22 compared forecasted versus actual deaths due to fires involving smoking materials for the period 1984–1995. The forecast estimate, which was based on changes in cigarette consumption patterns, predicted a 30% net decline in smoking-related fire deaths. This estimate was identical with the actual decline in the percentage of smoking-related fire deaths during the same time period (ie, 30%). Similarly, Garbacz and Thompson23 concluded that a “…60% decline in (cigarette) consumption from 1963 to 2003 may have accounted for about a 60% decline in fire deaths over that period” (p 15). This was based on an elasticity estimate derived from a risk model that tested the impact of smoke detectors and other factors, such as cigarette consumption, on fire deaths. Results from the Garbacz and Thompson study also indicated that cigarette smoking had an estimated 10-fold impact on fire death rates when compared with smoke alarms. Although these findings point to the potential of efforts to reduce cigarette smoking, changes in smoking patterns may not be sufficient to produce continued declines in residential-fire deaths.22
Behavior change efforts that focus on individual behaviors, such as smoking, often have a limited effect at the population level. Social ecological theory suggests that there are environmental influences at the relationship, community, and societal levels that interact with, and influence, individual-level human behavior.24 Policy and legislation efforts may offer a broader public health impact.25 In recent years, legislative efforts have promoted the adoption of fire-safe cigarette laws at the state level. Fire-safe cigarette legislation is a population-based strategy for reducing cigarette-ignited fires and associated injuries. Fire-safe cigarettes reduce the risk of cigarette-ignited fires by using technology that makes the cigarette self-extinguish when left unattended.26 As of January 2008, 22 states across the USA have passed fire-safe cigarette legislation, and these states represent 52% of the national population.25 In 2005, Canada became the first country to require fire-safe cigarettes nation-wide. Fire-safe cigarette laws provide an excellent example of the interplay between environmental factors (state laws and technological advances) and individual factors (smoking behaviors).
This study had several limitations. The BRFSS and US census data were subject to potential errors associated with survey research, including exclusion from the sampling frame, non-response, and reporting errors (eg, recall bias and under-reporting). Although the model estimates for our predictors are also subject to modest sampling variability, and the estimated model coefficients may thus incorporate bias, such bias is typically toward the null.27 Further, the analysis relies on state-level data, and extending the findings to individuals is limited by the familiar ecological fallacy.28 In other words, it is not possible to know—solely on the basis of these aggregate data—whether the individuals who were smokers are also the individuals who were involved in a fatal residential fire. Nonetheless, we believe that this association is also plausible at the individual level because smoking is the leading cause of residential-fire deaths5 and the presence of smokers in a house is a risk factor for residential-fire injury.29 The impact of cigarette smoking on residential-fire death rates may be considerable in poor nations, which often have high rates of smoking, substandard housing conditions, and inadequate fire protection and response systems.30 31
Conversely, the ecological design used in this study can also be viewed as one of its primary strengths because it offers a convenient and low-cost approach to quantifying the relationship between group level factors. Ecological approaches are ideal for studying unintentional injuries19 and residential fires32 because these phenomena are both complex in nature and influenced by factors at the individual and environmental levels. According to Stevenson and McClure, 33 the use of this design “is also particularly valuable when an individual level association is evident and an ecological level association is assessed to determine its public health impact” (p 2).
## IMPLICATIONS FOR PREVENTION
Although smoking cessation campaigns aim to reduce the occurrence of chronic diseases such as lung cancer, emphysema, and cardiovascular disease, the statistical findings in this paper suggest that reductions in smoking that result from such programs may also contribute to a reduction in residential-fire deaths. Smoking cessation programs have shown promise,32 and a survey indicated that 70% of current US adult smokers report wanting to quit.34 In 2005, an estimated 19.2 million (42.5%) adult smokers who were trying to quit had stopped smoking for at least 1 day during the preceding 12 months.9
People who smoke should attempt to quit. The US Department of Health and Human Services, National Institutes of Health, and National Cancer Institute provide a free helpline (1-800-Quit Now) for smokers who want to quit and need help doing so.
Those who continue to smoke should do so outside the house. However, people who smoke indoors may reduce their risk of injury from residential fires by following several recommendations:
• Use deep, sturdy ashtrays that are set on something secure and hard to ignite, such as an end table
• Douse cigarette and cigar butts in water, or extinguish them with sand, before dumping them in the trash
• Do not allow smoking in a home where oxygen is used
• Never smoke in bed or leave burning cigarettes unattended
• Do not smoke if sleepy, drinking, or using medicine or other drugs
• Use fire-safe cigarettes, where available
Additional tips for residential-fire prevention are available at http://www.cdc.gov/ncipc/factsheets/fireprevention.htm.
## CONCLUSIONS
Residential-fire mortality rates are high in states with high smoking rates. This relationship cannot be explained by selected socioeconomic factors alone, which suggests that successful efforts to reduce smoking may translate into a reduction in residential-fire mortality rates.
View Abstract
## Footnotes
• The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention/the Agency for Toxic Substances and Disease Registry.
• Competing interests: None.
## Request permissions
If you wish to reuse any or all of this article please use the link below which will take you to the Copyright Clearance Center’s RightsLink service. You will be able to get a quick price and instant permission to reuse the content in many different ways.
|
2018-01-24 08:04:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3643576502799988, "perplexity": 4053.5226753528286}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084893530.89/warc/CC-MAIN-20180124070239-20180124090239-00058.warc.gz"}
|
https://www.qualitygurus.com/sample-vs-population/
|
# Sample vs. Population
• /
• Blog
• /
• Sample vs. Population
## Population
A population is the entire group of individuals or objects that we are interested in studying. For example, the population could be all the people living in a certain city, all the students in a school, or all the cars on a certain road.
## Sample
A sample is a subset of a population. It is a smaller group of individuals or objects selected from a larger population. The sample represents the population, and we use statistics to make inferences or predictions about the population based on the sample.
## Statistic (NOT Statistics)
A statistic is a measure that describes a characteristic of a sample. For example, a sample's mean, median, mode, and standard deviation are all statistic. It is used to summarize and describe the sample data.
The sample mean is represented by x-bar, and it is calculated by summing all the values in the sample and dividing by the number of values in the sample. For example, if the values in the sample are 2, 3, 4, 5, and 6, then the sample mean would be:
$$\Large{\overline{x} = \frac{2 + 3 + 4 + 5 + 6}{5} = 4}$$
The sample standard deviation is represented by "s", and it is calculated by taking the square root of the sum of the squared differences between each value in the sample and the sample mean, divided by the number of values in the sample minus one. For example, if the sample mean is 4, and the values in the sample are 2, 3, 4, 5, and 6, then the sample standard deviation would be:
$$\Large{s = \sqrt{\frac{(2 - 4)^2 + (3 - 4)^2 + (4 - 4)^2 + (5 - 4)^2 + (6 - 4)^2}{(5 - 1)}} = 1.58}$$
## Parameter
A parameter is a measure that describes a characteristic of a population. For example, the population mean, median, and standard deviation are all parameters. Parameters are used to describe the population.
The population mean is represented by mu, and it is calculated by summing all the values in the population and dividing by the number of values in the population. For example, if the values in the population are 2, 3, 4, 5, and 6, then the population mean would be:
$$\Large{\mu = \frac{(2 + 3 + 4 + 5 + 6)}{5} = 4}$$
The population standard deviation is represented by $$\sigma$$, and it is calculated by taking the square root of the sum of the squared differences between each value in the population and the population mean divided by the number of values in the population. For example, if the population mean is 4, and the values in the population are 2, 3, 4, 5, and 6, then the population standard deviation would be:
$$\Large{\sigma = \sqrt{\frac{(2 - 4)^2 + (3 - 4)^2 + (4 - 4)^2 + (5 - 4)^2 + (6 - 4)^2}{5}} = 1.414}$$
###### About the Author Quality Gurus
Customers served! 1
Quality Management Course
FREE! Subscribe to get 52 weekly lessons. Every week you get an email that explains a quality concept, provides you with the study resources, test quizzes, tips and special discounts on our other e-learning courses.
Similar Posts:
December 26, 2022
## F Distribution
F Distribution
November 28, 2021
## 8 Elements of the Six Sigma Project Charter
8 Elements of the Six Sigma Project Charter
December 26, 2021
## Seven Quality Tools – Control Charts
Seven Quality Tools – Control Charts
November 30, 2021
## Visual Factory
Visual Factory
December 18, 2022
## Calculating Standard Deviation and Variance: Sample vs. Population
Calculating Standard Deviation and Variance: Sample vs. Population
December 22, 2022
## Union and Intersection in Probability
Union and Intersection in Probability
November 25, 2021
## The Complete Guide to SMART Goals and How they are the Key to Achieving Success
The Complete Guide to SMART Goals and How they are the Key to Achieving Success
October 31, 2021
## QG Certificate for Udemy Students
QG Certificate for Udemy Students
December 24, 2021
## Seven Quality Tools – Scatter Diagram
Seven Quality Tools – Scatter Diagram
32 Courses on SALE!
|
2023-01-31 09:30:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6239094138145447, "perplexity": 604.9779626511736}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499857.57/warc/CC-MAIN-20230131091122-20230131121122-00659.warc.gz"}
|
http://unix.stackexchange.com/questions/69206/where-is-the-virtual-memory-stored-on-hard-drive?answertab=active
|
# Where is the virtual memory stored on hard drive?
If a process wants to access a memory address that is not in physical memory, the OS outsources a page frame from physical memory to the hard drive for later use. Where on the hard drive is this data / instruction stored?
Is it stored on the swap partition?
-
Yes. Note that however, some other things are stored there, like hibernation state. – BatchyX Mar 26 '13 at 14:11
But what if I do swapoff, where is it stored then? – Ian Mar 26 '13 at 14:34
Nowhere. Your OS will only run in memory, and if it gets full, future memory allocation will be denied and programs may get killed. – BatchyX Mar 26 '13 at 14:41
## 2 Answers
Pages of process memory may be displaced from the RAM to the disk. This is called swapping or paging (the terms are essentially synonymous). The data is moved to the swap space, and loaded back from the swap space when it is needed. Linux supports both partitions (and other block devices) and files as swap space.
If the page in question contains data that's been loaded from a file, then the data is not written to swap space if the page is to be reclaimed: it is simply erased from RAM. When the process needs the page again, the data is loaded back from that file.
-
You can run swapon -s to see what devices and files are being used for swap. For example, my scientific linux machine says:
[user@sl6.3 ~]\$ swapon -s
Filename Type Size Used Priority
/dev/sda3 partition 8388600 833408 -1
So I'm using /dev/sda3 for swap. Also note the priority field that can be used to adjust the order in which swap pages are allocated (see man 2 swapon).
As some folks have stated, if you run out of swap (or have zero swap) the OOM Killer may start killing processes when physical memory gets low.
-
|
2015-01-27 14:25:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34254544973373413, "perplexity": 3024.248506907867}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422121981339.16/warc/CC-MAIN-20150124175301-00059-ip-10-180-212-252.ec2.internal.warc.gz"}
|
https://mathoverflow.net/questions/429808/how-badly-can-the-lebesgue-differentiation-theorem-fail
|
# How badly can the Lebesgue differentiation theorem fail?
Suppose $$f:\mathbb{R}^n\to\mathbb{R}$$ is integrable. Is it true that $$\lim_{r\to 0}\frac{\displaystyle\int_{B_r(0)}f(y)~\mathrm dy}{r^{n-1}}=0 \quad ?$$ This is obvious if $$0$$ is a Lebesgue point of $$f$$ or if $$n=1$$, but I would like to know if it's true in general.
• $|x|^{-\alpha}$ with $1 \leq \alpha <n$ is an example when $n>1$. Sep 5, 2022 at 11:10
• Thanks, it was easier than I thought. You should post your comment as an answer so I can accept it. Sep 5, 2022 at 11:16
Metafune has given an example of the limit failing to be $$0$$ at a particular point - namely for $$n > 1$$, the function $$|x|^{-\alpha}$$, with $$1 \leq \alpha < n$$ has that limit equal to $$\infty$$ at $$0$$.
In general the limit in question is zero $$\mathcal H^{n-1}$$-a.e, where $$\mathcal H^{n-1}$$ denotes the $$n-1$$ dimensional Hausdorff measure.
|
2023-03-31 21:49:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9623540639877319, "perplexity": 139.10156127799024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949689.58/warc/CC-MAIN-20230331210803-20230401000803-00497.warc.gz"}
|